Science.gov

Sample records for acceptable error rates

  1. Post-manufacturing, 17-times acceptable raw bit error rate enhancement, dynamic codeword transition ECC scheme for highly reliable solid-state drives, SSDs

    NASA Astrophysics Data System (ADS)

    Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken

    2011-04-01

    A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the error count or the number of write/erase cycles, the ECC codeword is dynamically increased from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC, so the acceptable raw bit error rate (BER) before ECC is enhanced. Assuming a NAND Flash memory that requires 8-bit correction for a 512 Byte codeword, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital still camera, and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in every ECC codeword, no additional memory area is required, and the reliability of the SSD is improved after manufacturing without a cost penalty. Compared with a conventional ECC using a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation: during most of the SSD's lifetime, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte, or 2 KByte is used, giving 98% lower power consumption, while at the end of the SSD's life a strong ECC with a 32 KByte codeword provides highly reliable operation. The random read performance, estimated from the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.
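
    The selection logic sketched in the abstract can be illustrated briefly. The following is a minimal model, not the authors' implementation: it assumes independent bit errors, scales the correction capability in proportion to codeword size (8 bits per 512 Byte, as stated above), and picks the smallest codeword whose post-ECC failure probability meets an assumed target, mirroring the "best-effort" operation. The failure-rate target and the binomial channel model are illustrative assumptions.

    ```python
    from scipy.stats import binom

    # Hypothetical parameters: 8-bit correction per 512 Byte codeword, with the
    # correction capability scaled in proportion to codeword size so that the
    # user-data/parity ratio stays constant, as in the proposed scheme.
    CODEWORD_BYTES = [512, 1024, 2048, 4096, 8192, 16384, 32768]
    T_PER_512B = 8
    TARGET_FAILURE_RATE = 1e-15          # assumed post-ECC codeword failure target

    def codeword_failure_rate(raw_ber, n_bytes):
        n_bits = n_bytes * 8
        t = T_PER_512B * (n_bytes // 512)
        # the codeword fails if more than t bit errors occur (independent-error model)
        return binom.sf(t, n_bits, raw_ber)

    def select_codeword(raw_ber):
        """Best-effort choice: the smallest (lowest-power) codeword meeting the target."""
        for n_bytes in CODEWORD_BYTES:
            if codeword_failure_rate(raw_ber, n_bytes) <= TARGET_FAILURE_RATE:
                return n_bytes
        return CODEWORD_BYTES[-1]        # end of life: fall back to the strongest ECC

    for raw_ber in (1e-5, 1e-4, 5e-4, 1e-3):
        print(f"raw BER {raw_ber:.0e} -> {select_codeword(raw_ber)} Byte codeword")
    ```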

  2. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow, and sea urchin fertilization tests showed that when the dose x test interaction term was not included in the calculations, the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.
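
    The kind of analysis the abstract calls for can be sketched as a two-way layout in which dose and test are crossed factors and the dose effect is tested against the dose x test interaction mean square instead of the within-cell residual. The sketch below uses synthetic data; the factor levels, effect sizes, and replicate counts are purely illustrative.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from scipy import stats

    rng = np.random.default_rng(0)
    doses = [0, 6.25, 12.5, 25, 50]                    # percent effluent (hypothetical)
    rows = []
    for test in range(8):                              # replicate WET tests
        slope_shift = rng.normal(0, 1.0)               # source of the dose x test interaction
        for d_i, dose in enumerate(doses):
            cell_mean = 20 - 1.5 * d_i + slope_shift * d_i
            for _ in range(3):                         # replicates within each dose x test cell
                rows.append({"dose": dose, "test": test,
                             "response": cell_mean + rng.normal(0, 1.0)})
    df = pd.DataFrame(rows)

    model = smf.ols("response ~ C(dose) * C(test)", data=df).fit()
    anova = sm.stats.anova_lm(model, typ=2)

    # conventional test: dose MS over the within-cell residual MS (ignores the interaction)
    # proper test when tests are treated as random: dose MS over the dose x test interaction MS
    ms = anova["sum_sq"] / anova["df"]
    f_proper = ms["C(dose)"] / ms["C(dose):C(test)"]
    p_proper = stats.f.sf(f_proper, anova.loc["C(dose)", "df"],
                          anova.loc["C(dose):C(test)", "df"])
    print(anova)
    print(f"F(dose over interaction) = {f_proper:.2f}, p = {p_proper:.4f}")
    ```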

  3. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error generally yield rather weak results for small sample sizes unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that looks only at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two standard benchmark problems. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
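
    The first (averaging-based) method lends itself to a compact illustration. The sketch below is an illustrative reading of that idea rather than the authors' code: if each ensemble member's error is roughly the Bayes error plus an added error, and averaging N approximately uncorrelated members divides the added error by N, the Bayes error can be backed out from the mean individual error and the combiner's error. The dataset, base classifiers, and ensemble size are arbitrary choices.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    # flip_y injects label noise so the true Bayes error is nonzero (though not known exactly)
    X, y = make_classification(n_samples=4000, n_features=10, n_informative=5,
                               flip_y=0.1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    N = 15
    probs, errs = [], []
    for i in range(N):
        idx = rng.integers(0, len(X_tr), len(X_tr))                  # bootstrap resample
        clf = DecisionTreeClassifier(max_depth=6, random_state=i).fit(X_tr[idx], y_tr[idx])
        p = clf.predict_proba(X_te)
        probs.append(p)
        errs.append(np.mean(p.argmax(axis=1) != y_te))

    e_ind = np.mean(errs)                                            # mean individual error
    e_ave = np.mean(np.mean(probs, axis=0).argmax(axis=1) != y_te)   # averaging-combiner error
    # e_ind ~ E_bayes + E_add and e_ave ~ E_bayes + E_add / N (uncorrelated-error assumption;
    # bootstrapped trees are in fact correlated, so this estimate is only indicative)
    bayes_est = (N * e_ave - e_ind) / (N - 1)
    print(f"individual {e_ind:.3f}, combiner {e_ave:.3f}, Bayes estimate {bayes_est:.3f}")
    ```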

  4. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce errors and increase user acceptance in the human-machine relationship. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test, whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance in the RG compared to the CG, while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration.

  5. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  6. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    PubMed

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure; that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest using bias correction methods to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
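
    A rough sketch of the weighted-mean idea follows (it is not the authors' exact estimator): repeated subsampling yields an error for every tuning-parameter value, the naive estimate reports the per-split minimum, and a smoothed correction weights each tuning value's mean error by how often that value would be chosen as best. The classifier, grid, and data are illustrative.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import StratifiedShuffleSplit
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)
    Cs = [0.01, 0.1, 1.0, 10.0]                        # tuning-parameter grid
    n_splits = 50

    splitter = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.25, random_state=0)
    errors = np.zeros((n_splits, len(Cs)))             # error of each C on each subsample
    for b, (tr, te) in enumerate(splitter.split(X, y)):
        for j, C in enumerate(Cs):
            clf = LogisticRegression(C=C, max_iter=2000).fit(X[tr], y[tr])
            errors[b, j] = np.mean(clf.predict(X[te]) != y[te])

    naive = errors.min(axis=1).mean()                  # best-C error: optimistic (tuning bias)
    selected = errors.argmin(axis=1)                   # which C the tuning step would pick
    weights = np.bincount(selected, minlength=len(Cs)) / len(selected)
    corrected = float(weights @ errors.mean(axis=0))   # weighted mean over tuning values
    print(f"naive (tuned) error {naive:.3f}, corrected estimate {corrected:.3f}")
    ```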

  7. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  8. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  9. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  10. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  11. Monitoring Error Rates In Illumina Sequencing

    PubMed Central

    Manley, Leigh J.; Ma, Duanduan; Levine, Stuart S.

    2016-01-01

    Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted. PMID:27672352
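
    The PPR statistic itself is simple to compute once per-read mismatch positions are available (for example, from aligning reads against a spiked-in control such as PhiX). The function below is a hypothetical reimplementation of that calculation, not the published tool, and its input format is an assumption.

    ```python
    import numpy as np

    def percent_perfect_reads(mismatch_positions, read_length):
        """Cumulative percent of reads with no mismatches through each cycle.

        mismatch_positions: one list/array of 0-based mismatch cycles per read,
        e.g. extracted from alignments of reads to a known control sequence.
        """
        n_reads = len(mismatch_positions)
        first_err = np.full(n_reads, read_length)   # cycle of first mismatch (read_length if perfect)
        for i, pos in enumerate(mismatch_positions):
            if len(pos):
                first_err[i] = min(pos)
        cycles = np.arange(1, read_length + 1)
        # a read is "perfect through cycle c" if its first mismatch occurs at cycle c or later
        ppr = np.array([(first_err >= c).mean() * 100 for c in cycles])
        return cycles, ppr

    # toy example: three 5-cycle reads; the third read has a mismatch at cycle 4 (0-based index 3)
    cycles, ppr = percent_perfect_reads([[], [], [3]], read_length=5)
    print(list(zip(cycles, ppr)))
    ```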

  12. 105-KE Isolation Barrier Leak Rate Acceptance Test Report

    SciTech Connect

    McCracken, K.J.

    1995-06-14

    This Acceptance Test Report (ATR) contains the completed and signed Acceptance Test Procedure (ATP) for the 105-KE Isolation Barrier Leak Rate Test. The Test Engineer's log, the completed sections of the ATP in the Appendix for Repeat Testing (Appendix K), the approved WHC J-7s (Appendix H), the data logger files (Appendices T and U), and the post-test calibration checks (Appendix V) are included.

  13. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  14. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  15. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  16. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  17. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

    Defence Science and Technology Organisation report DSTO-TN-0761. This report investigates the estimation of bit error rates in digital communications, motivated by recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase...

  18. Technological Advancements and Error Rates in Radiation Therapy Delivery

    SciTech Connect

    Margalit, Danielle N.

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana-Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique.

  19. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  20. Rating of acceptable load in manual sorting of postal parcels.

    PubMed

    Stålhammar, H R; Louhevaara, V; Troup, J D

    1996-10-01

    The psychophysical test, the rating of acceptable load (RAL), was used to assess acceptable weights for dynamic lifting in postal workers engaged in sorting parcels. The standard test (RALSt) and a work-simulating test (RALW) were administered to 103 volunteers, all experienced male sorters. In the RALSt, subjects selected the weight that would be acceptable for lifting in a box with handles from table to floor and back to the table once every 5 min for the working day. For the RALW, the box was without handles and the weight was chosen to be acceptable for transfer 4-6 times/min from a table to the parcel container and back to the table. Both tests were made during normal working hours at postal sorting centres. The overall means for RALSt and RALW were 16.4 kg and 9.4 kg respectively (p < 0.001), both being substantially higher than the average parcel weight of 4 kg. The RALSt and RALW tests proved to be repeatable and sensitive for differentiating the effects of load and task variables in actual manual materials handling. Thus they appear to be applicable to the evaluation of manual materials handling problems.

  1. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
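
    Fidelity measurements of this kind are commonly normalized as errors per base per template doubling, so that enzymes assayed at different amplification levels remain comparable. The helper below is a generic sketch of that arithmetic with hypothetical numbers; it is not taken from the study.

    ```python
    import math

    def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
        """Errors per base per template doubling (a common fidelity metric)."""
        doublings = math.log2(fold_amplification)      # d = log2(total amplification)
        return mutations / (bases_sequenced * doublings)

    # hypothetical counts for illustration only
    rate = pcr_error_rate(mutations=25, bases_sequenced=1_000_000, fold_amplification=2**20)
    print(f"{rate:.2e} errors per base per doubling")
    ```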

  2. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  3. PVUSA procurement, acceptance, and rating practices for photovoltaic power plants

    SciTech Connect

    Dows, R.N.; Gough, E.J.

    1995-09-01

    This report is one in a series of PVUSA reports on PVUSA experiences and lessons learned at the demonstration sites in Davis and Kerman, California, and from participating utility host sites. During the course of approximately 7 years (1988--1994), 10 PV systems have been installed ranging from 20 kW to 500 kW. Six 20-kW emerging module technology arrays, five on universal project-provided structures and one turnkey concentrator, and four turnkey utility-scale systems (200 to 500 kW) were installed. PVUSA took a very proactive approach in the procurement of these systems. In the absence of established procurement documents, the project team developed a comprehensive set of technical and commercial documents. These have been updated with each successive procurement. Working closely with vendors after the award in a two-way exchange provided designs better suited for utility applications. This report discusses the PVUSA procurement process through testing and acceptance, and rating of PV turnkey systems. Special emphasis is placed on the acceptance testing and rating methodology which completes the procurement process by verifying that PV systems meet contract requirements. Lessons learned and recommendations are provided based on PVUSA experience.

  4. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
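
    A brief sketch of the calculations discussed above: an exact (Clopper-Pearson) interval for the CWER from a count of codeword errors, the length of error-free simulation needed to certify a CWER requirement, and a moment-based interval for the BER that does not assume independent bit errors. The numbers and the assumed codeword length are illustrative, and the moment-based interval shown here is a plain normal approximation rather than the paper's derivation.

    ```python
    import numpy as np
    from scipy.stats import beta

    def clopper_pearson(k, n, conf=0.95):
        """Exact two-sided interval for an error probability, from k errors in n trials."""
        a = 1 - conf
        lo = 0.0 if k == 0 else beta.ppf(a / 2, k, n - k + 1)
        hi = 1.0 if k == n else beta.ppf(1 - a / 2, k + 1, n - k)
        return lo, hi

    def error_free_trials_needed(p_req, conf=0.95):
        """Codewords that must be simulated error-free to certify CWER <= p_req."""
        return int(np.ceil(np.log(1 - conf) / np.log(1 - p_req)))   # roughly 3 / p_req at 95%

    print(clopper_pearson(k=3, n=100_000))
    print(error_free_trials_needed(1e-4))

    # BER interval from the first and second moments of bit errors per codeword
    # (bit errors within a codeword are correlated, so a per-bit binomial interval is too narrow)
    rng = np.random.default_rng(0)
    bit_errors = rng.poisson(0.2, size=5000)       # illustrative per-codeword error counts
    n_bits = 4096                                  # assumed bits per codeword
    m, se = bit_errors.mean(), bit_errors.std(ddof=1) / np.sqrt(bit_errors.size)
    print("BER ~", m / n_bits, "95% CI ~", ((m - 1.96 * se) / n_bits, (m + 1.96 * se) / n_bits))
    ```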

  5. Estimating the annotation error rate of curated GO database sequence annotations

    PubMed Central

    Jones, Craig E; Brown, Alfred L; Baumann, Ute

    2007-01-01

    Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information. PMID:17519041

  6. On quaternary DPSK error rates due to noise and interferences

    NASA Astrophysics Data System (ADS)

    Lye, K. M.; Tjhung, T. T.

    A method for computing the error rates of a quaternary, differentially encoded and detected, phase shift keyed (DPSK) system with Gaussian noise, intersymbol and adjacent channel interferences is presented. In the calculations, intersymbol effects due to the band-limiting IF filter were assumed to have come only from immediately adjacent symbols. Similarly, only immediately adjacent channels were assumed to have contributed toward interchannel interferences. Noise effects were handled by using a probability density formula for corrupted phase differences derived recently by Paula (1981). An experimental system was set up, and error rates measured to verify the analytical results. From the results, optimum receiver bandwidth and channel separation for quaternary DPSK systems can be determined.

  7. Calculate bit error rate for digital radio signal transmission

    NASA Astrophysics Data System (ADS)

    Sandberg, Jorgen

    1987-06-01

    A method for estimating the symbol error rate caused by imperfect transmission channels is proposed. The method relates the symbol error rate to peak-to-peak amplitude and phase ripple, maximum gain slope, and maximum group delay distortion. The performance degradation of QPSK, offset QPSK (OQPSK), and minimum shift keying (MSK) signals transmitted over a wideband channel exhibiting either sinusoidal amplitude or phase ripple is evaluated using the proposed method. The transmission channel model, a single filter whose transfer characteristics model the frequency response of the system, is described. Consideration is given to signal detection and system degradation. The calculations reveal that the QPSK modulated carrier degrades less than the OQPSK and MSK carriers for peak-to-peak amplitude ripple values less than 6 dB and peak-to-peak phase ripple values less than 45 deg.

  8. Theoretical Accuracy for ESTL Bit Error Rate Tests

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin

    1998-01-01

    "Bit error rate" [BER] for the purposes of this paper is the fraction of binary bits which are inverted by passage through a communication system. BER can be measured for a block of sample bits by comparing a received block with the transmitted block and counting the erroneous bits. Bit Error Rate [BER] tests are the most common type of test used by the ESTL for evaluating system-level performance. The resolution of the test is obvious: the measurement cannot be resolved more finely than 1/N, the number of bits tested. The tolerance is not. This paper examines the measurement accuracy of the bit error rate test. It is intended that this information will be useful in analyzing data taken in the ESTL. This paper is divided into four sections and follows a logically ordered presentation, with results developed before they are evaluated. However, first-time readers will derive the greatest benefit from this paper by skipping the lengthy section devoted to analysis, and treating it as reference material. The analysis performed in this paper is based on a Probability Density Function [PDF] which is developed with greater detail in a past paper, Theoretical Accuracy for ESTL Probability of Acquisition Tests, EV4-98-609.

  9. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and...

  10. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAM's (UPD4216) and 1 Kb SRAM's (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  11. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    There is a lack of correlation between conventional IMRT QA performance metrics (Gamma passing rates) and dose errors in anatomic regions-of-interest. The most common acceptance criteria and published action levels therefore have insufficient, or at least unproven, predictive power for per-patient IMRT QA.

  12. Acceptance test procedure for the 105-KW isolation barrier leak rate

    SciTech Connect

    McCracken, K.J.

    1995-05-19

    This acceptance test procedure shall be used to: first, establish a basin water loss rate prior to installation of the two isolation barriers between the main basin and the discharge chute in K-Basin West; second, perform an acceptance test to verify an acceptable leakage rate through the barrier seals. This Acceptance Test Procedure (ATP) has been prepared in accordance with CM-6-1 EP 4.2, Standard Engineering Practices.

  13. Approximate Minimum Bit Error Rate Equalization for Fading Channels

    NASA Astrophysics Data System (ADS)

    Kovacs, Lorant; Levendovszky, Janos; Olah, Andras; Treplan, Gergely

    2010-12-01

    A novel channel equalizer algorithm is introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithm is based on minimizing the bit error rate (BER) using a fast approximation of its gradient with respect to the equalizer coefficients. This approximation is obtained by estimating the exponential summation in the gradient with only some carefully chosen dominant terms. The paper derives an algorithm to calculate these dominant terms in real-time. Summing only these dominant terms provides a highly accurate approximation of the true gradient. Combined with a fast adaptive channel state estimator, the new equalization algorithm yields better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) equalizers. The performance of the new method is tested by simulations performed on standard wireless channels. From the performance analysis one can infer that the new equalizer is capable of efficient channel equalization and maintaining a relatively low bit error probability in the case of channels corrupted by frequency selectivity. Hence, the new algorithm can contribute to ensuring QoS communication over highly distorted channels.
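
    A hedged sketch of the general approach follows (it is not the authors' algorithm): start from a least-squares (MMSE-like) equalizer, then descend an approximation of the BER gradient in which only terms near the decision boundary are summed, standing in for the paper's carefully chosen dominant terms. The channel, modulation (BPSK rather than a fading channel), and all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # BPSK through a dispersive channel with AWGN (illustrative setup)
    n_sym = 30000
    h = np.array([0.407, 0.815, 0.407])                    # classic hard-to-equalize test channel
    s = rng.choice([-1.0, 1.0], size=n_sym)
    x = np.convolve(s, h)[:n_sym]
    snr_db = 11.0
    sigma = np.sqrt(np.mean(x ** 2) / 10 ** (snr_db / 10))
    r = x + sigma * rng.normal(size=n_sym)

    # linear equalizer: y_k = w . [r_k, r_{k-1}, ..., r_{k-L+1}], decision delay D
    L, D = 9, 4
    R = np.array([[r[k - i] if k - i >= 0 else 0.0 for i in range(L)] for k in range(n_sym)])
    d = np.roll(s, D)                                      # desired symbol at time k (edges ignored)

    w_mmse, *_ = np.linalg.lstsq(R, d, rcond=None)         # least-squares (MMSE-like) starting point
    ber_mmse = np.mean(np.sign(R @ w_mmse) != d)

    # refine w by descending an approximation of the BER gradient:
    # BER ~ mean_k Q(d_k * y_k / sig_s); only "dominant" terms near the decision boundary are summed
    w = w_mmse.copy()
    sig_s, step, thresh = 0.4, 0.1, 1.0
    for _ in range(30):
        m = d * (R @ w)
        dom = np.abs(m) < thresh
        coef = -np.exp(-m[dom] ** 2 / (2 * sig_s ** 2)) / (np.sqrt(2 * np.pi) * sig_s)
        grad = (coef[:, None] * d[dom, None] * R[dom]).mean(axis=0)
        w -= step * grad

    ber_refined = np.mean(np.sign(R @ w) != d)
    print(f"MMSE-start BER {ber_mmse:.4f} -> approximate-min-BER refined {ber_refined:.4f}")
    ```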

  14. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  15. Deconstructing the "reign of error": interpersonal warmth explains the self-fulfilling prophecy of anticipated acceptance.

    PubMed

    Stinson, Danu Anthony; Cameron, Jessica J; Wood, Joanne V; Gaucher, Danielle; Holmes, John G

    2009-09-01

    People's expectations of acceptance often come to create the acceptance or rejection they anticipate. The authors tested the hypothesis that interpersonal warmth is the behavioral key to this acceptance prophecy: If people expect acceptance, they will behave warmly, which in turn will lead other people to accept them; if they expect rejection, they will behave coldly, which will lead to less acceptance. A correlational study and an experiment supported this model. Study 1 confirmed that participants' warm and friendly behavior was a robust mediator of the acceptance prophecy compared to four plausible alternative explanations. Study 2 demonstrated that situational cues that reduced the risk of rejection also increased socially pessimistic participants' warmth and thus improved their social outcomes.

  16. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal to noise ratio E sub b/N sub 0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10(exp -8) and a BER of 1.4 x 10(exp -9). The (15, 1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the longer codes require higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  17. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  18. Error Rates and Channel Capacities in Multipulse PPM

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Moision, Bruce

    2007-01-01

    A method of computing channel capacities and error rates in multipulse pulse-position modulation (multipulse PPM) has been developed. The method makes it possible, when designing an optical PPM communication system, to determine whether and under what conditions a given multipulse PPM scheme would be more or less advantageous, relative to other candidate modulation schemes. In conventional M-ary PPM, each symbol is transmitted in a time frame that is divided into M time slots (where M is an integer >1), defining an M-symbol alphabet. A symbol is represented by transmitting a pulse (representing 1) during one of the time slots and no pulse (representing 0) during the other M - 1 time slots. Multipulse PPM is a generalization of PPM in which pulses are transmitted during two or more of the M time slots.
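
    The payoff of multipulse PPM is that placing p pulses in M slots yields an alphabet of C(M, p) symbols, so the number of bits carried per frame grows from log2(M) to log2(C(M, p)). A small sketch of that count (the slot and pulse counts chosen are arbitrary):

    ```python
    from math import comb, log2

    def multipulse_ppm_bits_per_symbol(M, p):
        """Bits encodable per frame when p pulses are placed in M time slots."""
        return log2(comb(M, p))

    for M in (16, 64):
        for p in (1, 2, 3):
            print(f"M={M:>2d}, pulses={p}: {multipulse_ppm_bits_per_symbol(M, p):5.2f} bits/frame")
    ```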

  19. Effect of Repeated Evaluation and Repeated Exposure on Acceptability Ratings of Sentences

    ERIC Educational Resources Information Center

    Zervakis, Jennifer; Mazuka, Reiko

    2013-01-01

    This study investigated the effect of repeated evaluation and repeated exposure on grammatical acceptability ratings for both acceptable and unacceptable sentence types. In Experiment 1, subjects in the Experimental group rated multiple examples of two ungrammatical sentence types (ungrammatical binding and double object with dative-only verb),…

  20. Bit error rate measurement above and below bit rate tracking threshold

    NASA Technical Reports Server (NTRS)

    Kobayaski, H. S.; Fowler, J.; Kurple, W. (Inventor)

    1978-01-01

    Bit error rate is measured by sending a pseudo-random noise (PRN) code test signal simulating digital data through digital equipment to be tested. An incoming signal representing the response of the equipment being tested, together with any added noise, is received and tracked by being compared with a locally generated PRN code. Once the locally generated PRN code matches the incoming signal a tracking lock is obtained. The incoming signal is then integrated and compared bit-by-bit against the locally generated PRN code and differences between bits being compared are counted as bit errors.
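
    A minimal sketch of the measurement principle (not the patented apparatus): generate a maximal-length PRN test sequence with a small linear feedback shift register, corrupt it to stand in for the received and tracked signal, and count bit-by-bit disagreements against the locally generated reference. The LFSR polynomial and the injected error rate are arbitrary choices.

    ```python
    import numpy as np

    def lfsr_prn(length, taps=(7, 6), nbits=7, seed=0b1111111):
        """PRN sequence from a small Fibonacci LFSR (polynomial x^7 + x^6 + 1, period 127)."""
        state = seed
        out = np.empty(length, dtype=np.uint8)
        for i in range(length):
            out[i] = state & 1
            fb = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
            state = (state >> 1) | (fb << (nbits - 1))
        return out

    tx = lfsr_prn(10_000)                                        # transmitted PRN test signal
    rng = np.random.default_rng(1)
    rx = tx ^ (rng.random(tx.size) < 1e-2).astype(np.uint8)      # inject a 1% raw error rate

    # once the receiver's local PRN generator is aligned (tracking lock), errors are counted
    # by a bit-by-bit comparison of the incoming signal against the locally generated code
    errors = int(np.count_nonzero(tx ^ rx))
    print("measured BER:", errors / tx.size)
    ```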

  1. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Johnson, Sarah J.; Lance, Andrew M.; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Ralph, T. C.; Symul, Thomas

    2017-02-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for the key rate if we constrain the forward error correction codes to operate at low word error rates.

  2. 10 CFR 217.33 - Acceptance and rejection of rated orders.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... directed by the Department of Energy for a rated order involving all forms of energy: (1) A person shall... directed by the Department of Energy for a rated order involving all forms of energy, rated orders may be... 10 Energy 3 2013-01-01 2013-01-01 false Acceptance and rejection of rated orders. 217.33...

  3. Rates of computational errors for scoring the SIRS primary scales.

    PubMed

    Tyner, Elizabeth A; Frederick, Richard I

    2013-12-01

    We entered item scores for the Structured Interview of Reported Symptoms (SIRS; Rogers, Bagby, & Dickens, 1991) into a spreadsheet and compared computed scores with those hand-tallied by examiners. We found that about 35% of the tests had at least 1 scoring error. Of SIRS scale scores tallied by examiners, about 8% were incorrectly summed. When the errors were corrected, only 1 SIRS classification was reclassified in the fourfold scheme used by the SIRS. We note that mistallied scores on psychological tests are common, and we review some strategies for reducing scale score errors on the SIRS.

  4. Acceptance Control Charts with Stipulated Error Probabilities Based on Poisson Count Data

    DTIC Science & Technology

    1973-01-01

    Department of Industrial and Systems Engineering, University of Florida, Gainesville (contract N00014-75-C-0783); Richard L. Scheaffer and Richard S. Leavenworth. ABSTRACT: An acceptance control charting...

  5. Effect of Electronic Editing on Error Rate of Newspaper.

    ERIC Educational Resources Information Center

    Randall, Starr D.

    1979-01-01

    A study of a North Carolina newspaper indicates that newspapers using fully integrated electronic editing systems have fewer errors in spelling, punctuation, sentence construction, hyphenation, and typography than newspapers not using electronic editing. (GT)

  6. Avoiding ambiguity with the Type I error rate in noninferiority trials.

    PubMed

    Kang, Seung-Ho

    2016-01-01

    This review article sets out to examine the Type I error rates used in noninferiority trials. Most papers regarding noninferiority trials state only a Type I error rate without mentioning clearly which Type I error rate is evaluated. Therefore, the Type I error rate in one paper is often different from the Type I error rate in another paper, which can confuse readers and make papers difficult to understand. Which Type I error rate should be evaluated is related directly to which paradigm is employed in the analysis of the noninferiority trial, and to how the historical data are treated. This article reviews the characteristics of the within-trial Type I error rate and the unconditional across-trial Type I error rate, which have frequently been examined in noninferiority trials. The conditional across-trial Type I error rate is also briefly discussed. In noninferiority trials comparing a new treatment with an active control without a placebo arm, it is argued that the within-trial Type I error rate should be controlled in order to obtain approval of the new treatment from the regulatory agencies. I hope that this article can help readers understand the difference between the two paradigms employed in noninferiority trials.

  7. Simultaneous control of error rates in fMRI data analysis.

    PubMed

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-12-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
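
    A toy illustration of the voxel-wise likelihood approach, under simplifying assumptions that are not from the paper (known noise variance, a fixed effect size under the alternative, independent voxels): each voxel's likelihood ratio for a nonzero mean versus a zero mean is computed and voxels exceeding a conventional strong-evidence benchmark are flagged, so that both error rates can be read off the simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels, n_scans = 10_000, 40
    sigma = 1.0
    mu1 = 0.5                          # assumed effect size under the alternative
    active = rng.random(n_voxels) < 0.05
    data = rng.normal(loc=np.where(active, mu1, 0.0)[:, None], scale=sigma,
                      size=(n_voxels, n_scans))

    # voxel-wise likelihood ratio for H1: mean = mu1 vs H0: mean = 0 (known sigma)
    s = data.sum(axis=1)
    log_lr = (mu1 * s - n_scans * mu1 ** 2 / 2) / sigma ** 2
    k = 32                             # a conventional "strong evidence" benchmark
    flagged = log_lr >= np.log(k)

    false_pos = np.mean(flagged[~active])
    true_pos = np.mean(flagged[active])
    print(f"flagged {flagged.sum()} voxels, false-positive rate {false_pos:.4f}, "
          f"true-positive rate {true_pos:.3f}")
    ```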

  8. An Examination of Negative Halo Error in Ratings.

    ERIC Educational Resources Information Center

    Lance, Charles E.; And Others

    1990-01-01

    A causal model of halo error (HE) is derived. Three hypotheses are formulated to explain findings of negative HE. It is suggested that apparent negative HE may have been misinferred from existing correlational measures of HE, and that positive HE is more prevalent than had previously been thought. (SLD)

  9. A long lifetime, low error rate RRAM design with self-repair module

    NASA Astrophysics Data System (ADS)

    Zhiqiang, You; Fei, Hu; Liming, Huang; Peng, Liu; Jishun, Kuang; Shiying, Li

    2016-11-01

    Resistive random access memory (RRAM) is one of the promising candidates for future universal memory. However, it suffers from serious error rate and endurance problems. Therefore, a technical solution that enhances endurance and reduces the error rate is greatly needed. In this paper, we propose a reliable RRAM architecture that includes two reliability modules: an error correction code (ECC) module and a self-repair module. The ECC module is used to detect errors and decrease the error rate. The self-repair module, which is proposed for the first time for RRAM, obtains the locations of error bits and repairs worn-out cells by applying a repair voltage. Simulation results show that the proposed architecture achieves the lowest error rate and longest lifetime compared to previous reliable designs. Project supported by the New Century Excellent Talents in University (No. NCET-12-0165) and the National Natural Science Foundation of China (Nos. 61472123, 61272396).

  10. Error Rates in Users of Automatic Face Recognition Software.

    PubMed

    White, David; Dunn, James D; Schmid, Alexandra C; Kemp, Richard I

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers-who use the system in their daily work-and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems-potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.

  11. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers–who use the system in their daily work–and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems–potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  12. Construct and Predictive Validity of Social Acceptability: Scores From High School Teacher Ratings on the School Intervention Rating Form

    ERIC Educational Resources Information Center

    Harrison, Judith R.; State, Talida M.; Evans, Steven W.; Schamberg, Terah

    2016-01-01

    The purpose of this study was to evaluate the construct and predictive validity of scores on a measure of social acceptability of class-wide and individual student intervention, the School Intervention Rating Form (SIRF), with high school teachers. Utilizing scores from 158 teachers, exploratory factor analysis revealed a three-factor (i.e.,…

  13. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  14. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  15. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  16. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of the average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors at the receiver side. The analysis presented here assumes a unified expression for the PDF of the channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using the Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.
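
    As a rough illustration of the Q-function baseline that the paper compares against (not the Meijer's G-function derivation itself), the sketch below evaluates the standard two-term Q-function approximation of square M-QAM SER over AWGN and averages it over samples of an instantaneous SNR; the exponential channel draw used here is only a placeholder, not the M-distributed turbulence with pointing errors assumed in the paper.

        import numpy as np
        from scipy.special import erfc

        def qfunc(x):
            # Gaussian Q-function
            return 0.5 * erfc(x / np.sqrt(2))

        def ser_mqam_awgn(snr, M=16):
            # Standard two-term Q-function approximation for square M-QAM over AWGN
            a = 4 * (1 - 1 / np.sqrt(M))
            q = qfunc(np.sqrt(3 * snr / (M - 1)))
            return a * q - (a ** 2 / 4) * q ** 2

        # Average SER: average the conditional SER over draws of the instantaneous SNR.
        # The exponential distribution below is a placeholder channel model.
        rng = np.random.default_rng(0)
        mean_snr = 10 ** (20 / 10)                         # 20 dB average SNR (illustrative)
        snr_samples = rng.exponential(mean_snr, size=100_000)
        print(f"Average 16-QAM SER (placeholder channel): {np.mean(ser_mqam_awgn(snr_samples)):.3e}")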

  17. An Examination of Three Texas High Schools' Restructuring Strategies that Resulted in an Academically Acceptable Rating

    ERIC Educational Resources Information Center

    Massey Fields, Chamara

    2011-01-01

    This study examined three high schools in a large urban school district in Texas that achieved an academically acceptable rating after being sanctioned to reconstitute by state agencies. Texas state accountability standards are a result of the No Child Left Behind Act of 2001 (NCLB). Texas state law requires schools to design a reconstitution plan…

  18. Reduction of LNG operator error and equipment failure rates. Topical report, 20 April 1990

    SciTech Connect

    Atallah, S.; Shah, J.N.; Betti, M.

    1990-04-01

    Tables summarizing human error rates and equipment failure frequencies applicable to the LNG industry are presented. Improved training, better supervision, emergency response drills and improved panel design were methods recommended for reducing human error rates. Outright scheduled replacement of critical components, regular inspection and maintenance, and the use of redundant components were reviewed as means for reducing equipment failure rates. The effect of reducing human error and equipment failure rates on the frequency of overfilling an LNG tank was examined. In addition, guidelines for estimating the costs and benefits of these mitigation measures were considered.

  19. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    PubMed

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, which makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.
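
    The published AVEC procedure combines visual inspection with an algorithm-supported outlier rule; the sketch below is only a hypothetical illustration of such a rule (the window size and threshold are invented, not values from the paper), flagging beats that deviate strongly from the local median and deleting them rather than replacing them with inserted means.

        import numpy as np

        def flag_hr_outliers(hr, window=5, threshold=0.30):
            # Flag HR values deviating from the local median of their neighbours by more
            # than `threshold` (fractional deviation). Parameters are hypothetical.
            hr = np.asarray(hr, dtype=float)
            flags = np.zeros(hr.size, dtype=bool)
            half = window // 2
            for i in range(hr.size):
                lo, hi = max(0, i - half), min(hr.size, i + half + 1)
                neighbours = np.delete(hr[lo:hi], i - lo)
                local = np.median(neighbours)
                if local > 0 and abs(hr[i] - local) / local > threshold:
                    flags[i] = True
            return flags

        hr_series = [62, 64, 61, 180, 63, 65, 60, 30, 62]      # two implausible beats
        flags = flag_hr_outliers(hr_series)
        cleaned = [v for v, keep in zip(hr_series, ~flags) if keep]
        print(flags, cleaned)                                   # errors deleted, not interpolated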

  20. Study of bit error rate (BER) for multicarrier OFDM

    NASA Astrophysics Data System (ADS)

    Alshammari, Ahmed; Albdran, Saleh; Matin, Mohammad

    2012-10-01

    Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technique that is being used more and more in recent wideband digital communications. It is known for its ability to handle severe channel conditions, its efficient spectral usage and its high data rate. Therefore, it has been used in many wired and wireless communication systems such as DSL, wireless networks and 4G mobile communications. Data streams are modulated and sent over multiple subcarriers using either M-QAM or M-PSK. OFDM has lower inter-symbol interference (ISI) levels because the low data rates of the subcarriers result in long symbol periods. In this paper, the BER performance of OFDM with respect to signal-to-noise ratio (SNR) is evaluated. BPSK modulation is used in a simulation-based system in order to obtain the BER over different wireless channels. These channels include additive white Gaussian noise (AWGN) and fading channels based on Doppler spread and delay spread. Plots of the results are compared with each other after varying some of the key parameters of the system, such as the IFFT size, the number of carriers, and the SNR. The simulation results give a visualization of the BER to expect when the signal passes through those channels.
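
    A minimal sketch of the kind of simulation described, restricted to BPSK on OFDM subcarriers over an AWGN channel only (the FFT size, symbol count and Eb/N0 below are arbitrary choices, and the fading channels mentioned in the abstract are not modelled):

        import numpy as np

        rng = np.random.default_rng(0)
        n_carriers, n_symbols, ebn0_db = 64, 2000, 6.0     # illustrative parameters

        bits = rng.integers(0, 2, size=(n_symbols, n_carriers))
        freq = 2.0 * bits - 1.0                            # BPSK mapping on each subcarrier

        # OFDM modulation: a unitary IFFT keeps the per-sample signal power equal to 1
        tx = np.fft.ifft(freq, axis=1, norm="ortho")

        # AWGN with complex noise variance 1/EbN0 (1/(2*EbN0) per real dimension)
        ebn0 = 10 ** (ebn0_db / 10)
        sigma = np.sqrt(1 / (2 * ebn0))
        rx = tx + sigma * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))

        # OFDM demodulation and hard-decision BPSK detection
        rx_freq = np.fft.fft(rx, axis=1, norm="ortho")
        ber = np.mean((rx_freq.real > 0).astype(int) != bits)
        print(f"Simulated BER at Eb/N0 = {ebn0_db} dB: {ber:.4f}")   # theory: Q(sqrt(2*Eb/N0))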

  1. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    PubMed

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.

  2. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    PubMed

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  3. Technology Acceptance and Course Completion Rates in Online Education: A Non-experimental, Mixed Method Study

    NASA Astrophysics Data System (ADS)

    Allison, Colelia

    As the demand for quality online courses increases, technology acceptance and completion rates have become a focus of higher education. The purpose of this non-experimental, mixed method study was to examine the relationship between university students' perceptions and acceptance of technology and learner completion rates with respect to the development of online courses. The study involved 61 participants from two universities and examined their perceived usefulness (PU) of technology, intent to use technology, and intent to complete a course. Two research questions were examined: student perceptions of the technology employed in an online course, and the relationship, if any, between technology acceptance and completion of an online university course. The technology acceptance model (TAM) was used to collect data on the usefulness of course activities and student intent to complete the course. An open-ended questionnaire was administered to collect information concerning student perceptions of course activities. Quantitative data were analyzed using SPSS and Qualtrics, which indicated there was not a significant relationship between technology acceptance and course completion (p = .154). Qualitative data were examined by pattern matching to create a concept map of the theoretical patterns between constructs. Pattern matching revealed that many students favored the use of the Internet over Canvas. Furthermore, the data showed that students enrolled in online courses because of the flexibility and found the multimedia used in the courses helpful in course completion. The responses were also examined for insight into the reasons behind these choices. Future recommendations are to expand mixed methods studies of technology acceptance in various disciplines to gain a better understanding of student perceptions of technology use, intent to use, and course completion.

  4. The Tukey Honestly Significant Difference Procedure and Its Control of the Type I Error-Rate.

    ERIC Educational Resources Information Center

    Barnette, J. Jackson; McLean, James E.

    Tukey's Honestly Significant Difference (HSD) procedure (J. Tukey, 1953) is probably the most recommended and used procedure for controlling Type I error rate when making multiple pairwise comparisons as follow-ups to a significant omnibus F test. This study compared observed Type I errors with nominal alphas of 0.01, 0.05, and 0.10 compared for…

  5. Coevolution of Quasispecies: B-Cell Mutation Rates Maximize Viral Error Catastrophes

    NASA Astrophysics Data System (ADS)

    Kamp, Christel; Bornholdt, Stefan

    2002-02-01

    Coevolution of two coupled quasispecies is studied, motivated by the competition between viral evolution and adapting immune response. In this coadaptive model, besides the classical error catastrophe for high virus mutation rates, a second ``adaptation'' catastrophe occurs, when virus mutation rates are too small to escape immune attack. Maximizing both regimes of viral error catastrophes is a possible strategy for an optimal immune response, reducing the range of allowed viral mutation rates to a minimum. From this requirement, one obtains constraints on B-cell mutation rates and receptor lengths, yielding an estimate of somatic hypermutation rates in the germinal center in accordance with observation.

  6. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  7. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  8. Bit-Error-Rate Performance of a Gigabit Ethernet O-CDMA Technology Demonstrator (TD)

    SciTech Connect

    Hernandez, V J; Mendez, A J; Bennett, C V; Lennon, W J

    2004-07-09

    An O-CDMA TD based on 2-D (wavelength/time) codes is described, with bit-error-rate (BER) and eye-diagram measurements given for eight users. Simulations indicate that the TD can support 32 asynchronous users.

  9. An improved lane detection algorithm and the definition of the error rate standard

    NASA Astrophysics Data System (ADS)

    Yu, Chung-Hsien; Su, Chung-Yen

    2012-04-01

    In this paper, we propose a method to address the problem of assistant lane marks being disturbed by pulse noise, and we define a way to measure the error rate of the assistant lane marks objectively. To address the noise problem, we mainly replace Canny edge detection with Sobel edge detection and apply a Gaussian filter to suppress noise. Finally, we improve the ellipse ROI size in the tracking stage, raising the frame rate from 32 to 39 frames per second (FPS). In the past, the error rate of the assistant lane marks was judged very subjectively; to avoid this, we propose an objective method that defines the error rate as a standard. We use the performance and the error rate to choose the ellipse ROI parameter.
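
    A minimal OpenCV sketch of the smoothing-plus-Sobel edge step described above (the kernel sizes and binarization threshold are assumptions, not values from the paper):

        import cv2
        import numpy as np

        def lane_edge_map(frame_bgr):
            # Gaussian smoothing to suppress pulse-like noise, then Sobel gradients
            # in place of Canny edge detection.
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)
            gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1, ksize=3)
            mag = cv2.magnitude(gx, gy)
            mag = np.uint8(255 * mag / (mag.max() + 1e-9))
            _, edges = cv2.threshold(mag, 60, 255, cv2.THRESH_BINARY)
            return edges   # binary edge map fed to the subsequent lane-mark tracking stage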

  10. Impact of the flu mask regulation on health care personnel influenza vaccine acceptance rates.

    PubMed

    Edwards, Frances; Masick, Kevin D; Armellino, Donna

    2016-10-01

    Achieving high vaccination rates of health care personnel (HCP) is critical in preventing influenza transmission from HCP to patients and from patients to HCP; however, acceptance rates remain low. In 2013, New York State adopted the flu mask regulation, requiring unvaccinated HCP to wear a mask when in areas where patients are present. The purpose of this study was to assess the impact of the flu mask regulation on the HCP influenza vaccination rate. A 13-question survey was distributed electronically and manually to the HCP to examine their knowledge of influenza transmission and the influenza vaccine, their personal vaccine acceptance history, and their perception of wearing the mask while working if not vaccinated. There were 1,905 respondents; 87% accepted the influenza vaccine, and 63% were first-time recipients who agreed the regulation influenced their vaccination decision. Of the respondents who declined the vaccine, 72% acknowledged that HCP are at risk for transmitting influenza to patients, and 56% reported they did not receive enough information to make an educated decision. The flu mask protocol may have influenced HCP's choice to be vaccinated versus wearing a mask. The study findings suggest that HCP may not have adequate knowledge of the morbidity and mortality associated with influenza. Regulatory agencies need to consider an alternative approach to increase HCP vaccination, such as mandating the influenza vaccine for HCP.

  11. Conjunction error rates on a continuous recognition memory test: little evidence for recollection.

    PubMed

    Jones, Todd C; Atchley, Paul

    2002-03-01

    Two experiments examined conjunction memory errors on a continuous recognition task where the lag between parent words (e.g., blackmail, jailbird) and later conjunction lures (blackbird) was manipulated. In Experiment 1, contrary to expectations, the conjunction error rate was highest at the shortest lag (1 word) and decreased as the lag increased. In Experiment 2 the conjunction error rate increased significantly from a 0- to a 1-word lag, then decreased slightly from a 1- to a 5-word lag. The results provide mixed support for simple familiarity and dual-process accounts of recognition. Paradoxically, searching for an item in memory does not appear to be a good encoding task.

  12. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    DOE PAGES

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.; ...

    2015-12-01

    Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.

  13. Comparison of the acceptability ratings of appetizers under laboratory, base level and high altitude field conditions.

    PubMed

    Premavalli, K S; Wadikar, D D; Nanjappa, C

    2009-08-01

    The relationship between laboratory and field ratings was investigated for six different appetizers, including four ready-to-reconstitute mixes and two ready-to-eat munches. Liking ratings on a 5-point hedonic scale were obtained from an Indian Army field study at base level as well as at an altitude of 11,500 ft above sea level and for the same appetizers in the laboratory. The field trials of the six products were conducted in two phases and results revealed that the products were more acceptable at altitude, with increased liking scores as compared to base level. Subjective ratings for hunger revealed that at altitude, appetizer consumption had stimulated the appetite of the soldiers. The ability of laboratory ratings to predict acceptability of foods consumed under realistic conditions appears to depend on the convenience of the appetizer as well as the environmental conditions and the psycho-physiological status of the participants. The appetizers received higher ratings at altitude because of the pungent and spicy nature of appetizer mixes as compared with base field and laboratory conditions. However, for all the appetizers the pungent and sweet taste of the appetizer munches was highly preferred.

  14. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    PubMed

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2016-09-24

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe.

  15. Mean and Random Errors of Visual Roll Rate Perception from Central and Peripheral Visual Displays

    NASA Technical Reports Server (NTRS)

    Vandervaart, J. C.; Hosman, R. J. A. W.

    1984-01-01

    A large number of roll rate stimuli, covering rates from zero to plus or minus 25 deg/sec, were presented to subjects in random order at 2 sec intervals. Subjects were to make estimates of the magnitude of perceived roll rate stimuli presented on either a central display, on displays in the peripheral field of vision, or on all displays simultaneously. Responses were made via a digital keyboard device, and stimulus exposure times were varied. The present experiment differs from earlier perception tasks by the same authors in that the mean rate perception error (and standard deviation) was obtained as a function of rate stimulus magnitude, whereas the earlier experiments only yielded mean absolute error magnitude. Moreover, in the present experiment, all stimulus rates had an equal probability of occurrence, whereas the earlier tests featured a Gaussian stimulus probability density function. Results yield a good illustration of the nonlinear functions relating the rate presented to the rate perceived by human observers or operators.

  16. The effect of sampling on estimates of lexical specificity and error rates.

    PubMed

    Rowland, Caroline F; Fletcher, Sarah L

    2006-11-01

    Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.

  17. Pages from a Sociometric Notebook: An Analysis of Nomination and Rating Scale Measures of Acceptance, Rejection, and Social Preference.

    ERIC Educational Resources Information Center

    Bukowski, William M.; Sippola, Lorrie; Hoza, Betsy; Newcomb, Andrew F.

    2000-01-01

    Provides a conceptual and empirical analysis of the associations between the fundamental sociometric dimensions of acceptance, rejection, and social preference. Examines whether nomination and rating scale measures index the same constructs. Notes that sociometric ratings measure social preference, but can also yield indicators of acceptance and…

  18. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    PubMed

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  19. Bit error rate investigation of spin-transfer-switched magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Wang, Zihui; Zhou, Yuchen; Zhang, Jing; Huai, Yiming

    2012-10-01

    A method is developed to enable fast bit error rate (BER) characterization of spin-transfer-torque magnetic random access memory magnetic tunnel junction (MTJ) cells without integration with a complementary metal-oxide semiconductor (CMOS) circuit. By utilizing the reflected signal from the devices under test, the measurement setup allows fast measurement of bit error rates at >10^6 writing events per second. It is further shown that this method provides a time-domain capability to examine the MTJ resistance states during a switching event, which can assist write error analysis in great detail. The BER of a set of spin-transfer-torque MTJ cells has been evaluated using this method, and bit-error-free operation (down to 10^-8) for optimized in-plane MTJ cells has been demonstrated.

  20. National suicide rates a century after Durkheim: do we know enough to estimate error?

    PubMed

    Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W

    2010-06-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.

  1. Compensatory and Noncompensatory Information Integration and Halo Error in Performance Rating Judgments.

    ERIC Educational Resources Information Center

    Kishor, Nand

    1992-01-01

    The relationship between compensatory and noncompensatory information integration and the intensity of the halo effect in performance rating was studied. Seventy University of British Columbia (Canada) students rated 27 teacher profiles. That the way performance information is mentally integrated affects the intensity of halo error was supported.…

  2. A stochastic node-failure network with individual tolerable error rate at multiple sinks

    NASA Astrophysics Data System (ADS)

    Huang, Cheng-Fu; Lin, Yi-Kuei

    2014-05-01

    Many enterprises consider several criteria during data transmission, such as availability, delay, loss, and out-of-order packets, from the service level agreement (SLA) point of view. Hence internet service providers and customers are gradually focusing on the tolerable error rate in the transmission process. The internet service provider should satisfy the specific demand and maintain a certain transmission error rate according to its SLA with each customer. This paper mainly evaluates the system reliability, defined as the probability that the demand can be fulfilled under the tolerable error rate at all sinks, for a stochastic node-failure network (SNFN) in which each component (edge or node) has several capacities and a transmission error rate. An efficient algorithm is first proposed to generate all lower boundary points, the minimal capacity vectors satisfying the demand and the tolerable error rate for all sinks. The system reliability can then be computed in terms of such points by applying the recursive sum of disjoint products. A benchmark network and a practical network in the United States are demonstrated to illustrate the utility of the proposed algorithm. The computational complexity of the proposed algorithm is also analyzed.

  3. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416

  4. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    PubMed

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  5. Effect of a misspecification of response rates on type I and type II errors, in a phase II Simon design.

    PubMed

    Baey, Charlotte; Le Deley, Marie-Cécile

    2011-07-01

    Phase-II trials are a key stage in the clinical development of a new treatment. Their main objective is to provide the required information for a go/no-go decision regarding a subsequent phase-III trial. In single arm phase-II trials, widely used in oncology, this decision relies on the comparison of efficacy outcomes observed in the trial to historical controls. The false positive rate generally accepted in phase-II trials, around 10%, contrasts with the very high attrition rate of new compounds tested in phase-III trials, estimated at about 60%. We assumed that this gap could partly be explained by the misspecification of the response rate expected with standard treatment, leading to erroneous hypotheses tested in the phase-II trial. We computed the false positive probability of a defined design under various hypotheses of expected efficacy probability. Similarly we calculated the power of the trial to detect the efficacy of a new compound for different expected efficacy rates. Calculations were done considering a binary outcome, such as the response rate, with a decision rule based on a Simon two-stage design. When analysing a single-arm phase-II trial, based on a design with a pre-specified null hypothesis, a 5% absolute error in the expected response rate leads to a false positive rate of about 30% when it is supposed to be 10%. This inflation of type-I error varies only slightly according to the hypotheses of the initial design. Single-arm phase-II trials poorly control for the false positive rate. Randomised phase-II trials should, therefore, be more often considered.
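
    The sensitivity described above can be reproduced with a short calculation: for a Simon two-stage rule (the design parameters below are illustrative, not those used in the paper), the probability of a "go" decision is computed under the response rate assumed at the design stage and under a misspecified, slightly higher true rate for the standard treatment.

        from scipy.stats import binom

        # Illustrative Simon two-stage design: pass stage 1 if responses > r1 out of n1,
        # declare the treatment promising if total responses > r out of n1 + n2.
        n1, r1 = 19, 4
        n2, r = 35, 15

        def go_probability(p):
            # P(pass stage 1 and exceed r overall) when the true response rate is p
            prob = 0.0
            for x1 in range(r1 + 1, n1 + 1):
                needed = r + 1 - x1                     # responses still needed in stage 2
                p2 = 1.0 if needed <= 0 else binom.sf(needed - 1, n2, p)
                prob += binom.pmf(x1, n1, p) * p2
            return prob

        print(go_probability(0.20))   # false-positive rate at the assumed null response rate
        print(go_probability(0.25))   # inflated rate if the standard-treatment rate is 5 points higher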

  6. Development and Validation of the Controller Acceptance Rating Scale (CARS): Results of Empirical Research

    NASA Technical Reports Server (NTRS)

    Lee, Katharine K.; Kerns, Karol; Bone, Randall

    2001-01-01

    The measurement of operational acceptability is important for the development, implementation, and evolution of air traffic management decision support tools. The Controller Acceptance Rating Scale was developed at NASA Ames Research Center for the development and evaluation of the Passive Final Approach Spacing Tool. CARS was modeled after a well-known pilot evaluation rating instrument, the Cooper-Harper Scale, and has since been used in the evaluation of the User Request Evaluation Tool, developed by MITRE's Center for Advanced Aviation System Development. In this paper, we provide a discussion of the development of CARS and an analysis of the empirical data collected with CARS to examine construct validity. Results of intraclass correlations indicated statistically significant reliability for the CARS. From the subjective workload data that were collected in conjunction with the CARS, it appears that the expected set of workload attributes was correlated with the CARS. As expected, the analysis also showed that CARS was a sensitive indicator of the impact of decision support tools on controller operations. Suggestions for future CARS development and its improvement are also provided.

  7. Estimation of the minimum mRNA splicing error rate in vertebrates.

    PubMed

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons.
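
    A toy sketch of how such a floor estimate could be assembled (the counts below are invented, and the simple per-intron normalization is an assumption rather than the paper's exact procedure):

        # Hypothetical (aberrant transcripts, total transcripts, introns in locus) tuples for
        # several locus/tissue combinations; the minimum per-intron variant frequency across
        # them is taken as the floor ("irreducible") splicing error rate.
        observations = [
            (12, 1000, 8),
            (4, 2000, 8),
            (30, 1500, 13),
            (6, 2500, 13),
        ]
        per_intron = [(aberrant / total) / introns for aberrant, total, introns in observations]
        print(f"Floor estimate of splicing error rate: {min(per_intron):.2e} per intron")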

  8. Minimum attainable RMS attitude error using co-located rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1989-01-01

    A closed form analytical expression for the minimum attainable attitude error (as well as the error rate) in a flexible beam by feedback control using co-located rate sensors is announced. For simplicity, researchers consider a beam clamped at one end with an offset mass (antenna) at the other end where the controls and sensors are located. Both control moment generators and force actuators are provided. The results apply to any beam-like lattice-type truss, and provide the kind of performance criteria needed under CSI - Controls-Structures-Integrated optimization.

  9. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed

  10. Bit error rate testing of a proof-of-concept model baseband processor

    NASA Technical Reports Server (NTRS)

    Stover, J. B.; Fujikawa, G.

    1986-01-01

    Bit-error-rate tests were performed on a proof-of-concept baseband processor. The BBP, which operates at an intermediate frequency in the C-Band, demodulates, demultiplexes, routes, remultiplexes, and remodulates digital message segments received from one ground station for retransmission to another. Test methods are discussed and test results are compared with the Contractor's test results.

  11. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    2001-01-01

    Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…

  12. Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.

    2010-01-01

    We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.

  13. Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.

    ERIC Educational Resources Information Center

    Goodrich, Gregory L.; And Others

    1979-01-01

    A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)

  14. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    The Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) was submitted to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and to identify possible study characteristics that are predictive of reliability variation. The meta-analysis was performed…

  15. A simple calculation method for heavy ion induced soft error rate in space environment

    NASA Astrophysics Data System (ADS)

    Galimov, A. M.; Elushov, I. V.; Zebrev, G. I.

    2016-12-01

    In this paper, based on a new parameterization shape, an alternative approach to characterizing heavy-ion-induced soft errors is proposed and validated. The method provides an unambiguous calculation procedure to predict the upset rate of highly scaled memory in a space environment.

  16. Type I Error Rate and Power of Some Alternative Methods to the Independent Samples "t" Test.

    ERIC Educational Resources Information Center

    Nthangeni, Mbulaheni; Algina, James

    2001-01-01

    Examined Type I error rates and power for four tests for treatment control studies in which a larger treatment mean may be accompanied by a larger treatment variance and examined these aspects of the independent samples "t" test and the Welch test. Evaluated each test and suggested conditions for the use of each approach. (SLD)

  17. Advanced Communications Technology Satellite (ACTS) Fade Compensation Protocol Impact on Very Small-Aperture Terminal Bit Error Rate Performance

    NASA Technical Reports Server (NTRS)

    Cox, Christina B.; Coney, Thom A.

    1999-01-01

    The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index Terms-Adaptive coding, attenuation, propagation, rain, satellite communication, satellites.
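
    BER availability as used here is simply the fraction of measurement intervals meeting the specification; a small sketch with made-up sample values (the measurements below are hypothetical):

        import numpy as np

        def ber_availability(ber_samples, spec=5e-7):
            # Fraction of intervals whose measured BER meets the 5E-7 specification
            return float(np.mean(np.asarray(ber_samples) <= spec))

        samples = [1e-8, 3e-7, 2e-6, 4e-7, 9e-8]      # hypothetical per-interval BER measurements
        print(f"Availability: {100 * ber_availability(samples):.1f}%")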

  18. Between‐Batch Pharmacokinetic Variability Inflates Type I Error Rate in Conventional Bioequivalence Trials: A Randomized Advair Diskus Clinical Trial

    PubMed Central

    Carroll, KJ; Mielke, J; Benet, LZ; Jones, B

    2016-01-01

    We previously demonstrated pharmacokinetic differences among manufacturing batches of a US Food and Drug Administration (FDA)‐approved dry powder inhalation product (Advair Diskus 100/50) large enough to establish between‐batch bio‐inequivalence. Here, we provide independent confirmation of pharmacokinetic bio‐inequivalence among Advair Diskus 100/50 batches, and quantify residual and between‐batch variance component magnitudes. These variance estimates are used to consider the type I error rate of the FDA's current two‐way crossover design recommendation. When between‐batch pharmacokinetic variability is substantial, the conventional two‐way crossover design cannot accomplish the objectives of FDA's statistical bioequivalence test (i.e., cannot accurately estimate the test/reference ratio and associated confidence interval). The two‐way crossover, which ignores between‐batch pharmacokinetic variability, yields an artificially narrow confidence interval on the product comparison. The unavoidable consequence is type I error rate inflation, to ∼25%, when between‐batch pharmacokinetic variability is nonzero. This risk of a false bioequivalence conclusion is substantially higher than asserted by regulators as acceptable consumer risk (5%). PMID:27727445

  19. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate.
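
    A simulation sketch of the simplest case mentioned above: a single treatment-control comparison with a data-dependent increase of the second-stage sample size, analyzed naively as if the design had been fixed. The promising-zone rule and sample sizes are arbitrary assumptions, not the worst-case search over adaptation rules used in the paper.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        alpha, n1, n2_planned, n_trials = 0.025, 50, 50, 200_000
        crit = norm.ppf(1 - alpha)

        rejections = 0
        for _ in range(n_trials):
            z1 = rng.standard_normal()                        # stage-1 z-statistic under H0
            # Data-dependent rule: enlarge stage 2 when the interim result looks promising
            n2 = 4 * n2_planned if 0.0 < z1 < 1.0 else n2_planned
            z2 = rng.standard_normal()                        # stage-2 z-statistic under H0
            # Naive final analysis: pool both stages with fixed-design weights
            z = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
            rejections += z > crit

        print(rejections / n_trials)    # tends to exceed the nominal one-sided 0.025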

  20. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors.

    PubMed

    Bányai, László; Patthy, László

    2016-08-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation.

  1. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors

    PubMed Central

    Bányai, László; Patthy, László

    2016-01-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation. PMID:27476717

  2. Rate-distortion optimal video transport over IP allowing packets with bit errors.

    PubMed

    Harmanci, Oztan; Tekalp, A Murat

    2007-05-01

    We propose new models and methods for rate-distortion (RD) optimal video delivery over IP, when packets with bit errors are also delivered. In particular, we propose RD optimal methods for slicing and unequal error protection (UEP) of packets over IP allowing transmission of packets with bit errors. The proposed framework can be employed in a classical independent-layer transport model for optimal slicing, as well as in a cross-layer transport model for optimal slicing and UEP, where the forward error correction (FEC) coding is performed at the link layer, but the application controls the FEC code rate with the constraint that a given IP packet is subject to constant channel protection. The proposed method uses a novel dynamic programming approach to determine the optimal slicing and UEP configuration for each video frame in a practical manner, that is compliant with the AVC/H.264 standard. We also propose new rate and distortion estimation techniques at the encoder side in order to efficiently evaluate the objective function for a slice configuration. The cross-layer formulation option effectively determines which regions of a frame should be protected better; hence, it can be considered as a spatial UEP scheme. We successfully demonstrate, by means of experimental results, that each component of the proposed system provides significant gains, up to 2.0 dB, compared to competitive methods.

  3. Children's Acceptance Ratings of a Child with a Facial Scar: The Impact of Positive Scripts

    ERIC Educational Resources Information Center

    Nabors, Laura A.; Lehmkuhl, Heather D.; Warm, Joel S.

    2004-01-01

    Children with visible pediatric conditions may be at risk for low peer acceptance. More knowledge is needed about how different types of information influence children's acceptance. For this study, we examined the influence of scripts emphasizing either positive information and/or medical information on young children's acceptance of a line…

  4. The effects of digitizing rate and phase distortion errors on the shock response spectrum

    NASA Technical Reports Server (NTRS)

    Wise, J. H.

    1983-01-01

    Some of the methods used for the acquisition and digitization of high-frequency transients in the analysis of pyrotechnic events, such as explosive bolts for spacecraft separation, are discussed with respect to the reduction of errors in the computed shock response spectrum. Equations are given for the maximum error as a function of the sampling rate, phase distortion, and slew rate, and the effects of the characteristics of the filter used are analyzed. A filter noted for good passband amplitude response, phase response, and step response is a compromise between the flat passband of the elliptic filter and the phase response of the Bessel filter; it is suggested that it be used with a sampling rate of 10f (5 percent).

  5. The Impact of Sex of the Speaker, Sex of the Rater and Profanity Type of Language Trait Errors in Speech Evaluation: A Test of the Rating Error Paradigm.

    ERIC Educational Resources Information Center

    Bock, Douglas G.; And Others

    1984-01-01

    This study (1) demonstrates the negative impact of profanity in a public speech and (2) sheds light on the conceptualization of the term "rating error." Implications for classroom teaching are discussed. (PD)

  6. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (errors/particle/cm^2), while the MTTF is approximately 110.7 h. PMID:27583533

  7. Pupillary response predicts multiple object tracking load, error rate, and conscientiousness, but not inattentional blindness.

    PubMed

    Wright, Timothy J; Boot, Walter R; Morgan, Chelsea S

    2013-09-01

    Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB.

  8. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    PubMed

    Kugelman, Jeffrey R; Wiley, Michael R; Nagle, Elyse R; Reyes, Daniel; Pfeffer, Brad P; Kuhn, Jens H; Sanchez-Lockhart, Mariano; Palacios, Gustavo F

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10−5) of all compared methods.
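
    A minimal sketch of how a baseline error frequency can be tallied once reads are aligned against the known in vitro transcript; the pileup counts and layout below are invented for illustration and are not the pipeline used in the study.

    ```python
    # Per-position mismatch counts against the known reference sequence
    # (hypothetical pileup-style tallies for a handful of positions).
    pileup = [
        # (position, reads covering it, reads disagreeing with reference)
        (101, 52_340, 1),
        (102, 51_998, 0),
        (103, 52_101, 2),
        (104, 52_055, 1),
    ]

    total_bases = sum(depth for pos, depth, err in pileup)
    total_mismatches = sum(err for pos, depth, err in pileup)

    error_rate = total_mismatches / total_bases
    print(f"baseline error frequency ~ {error_rate:.2e} errors/base")
    ```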

  9. Wireless Fetal Heart Rate Monitoring in Inpatient Full-Term Pregnant Women: Testing Functionality and Acceptability

    PubMed Central

    Boatin, Adeline A.; Wylie, Blair; Goldfarb, Ilona; Azevedo, Robin; Pittel, Elena; Ng, Courtney; Haberer, Jessica

    2015-01-01

    We tested functionality and acceptability of a wireless fetal monitoring prototype technology in pregnant women in an inpatient labor unit in the United States. Women with full-term singleton pregnancies and no evidence of active labor were asked to wear the prototype technology for 30 minutes. We assessed functionality by evaluating the ability to successfully monitor the fetal heartbeat for 30 minutes, transmit this data to Cloud storage and view the data on a web portal. Three obstetricians also rated fetal cardiotocographs on ease of readability. We assessed acceptability by administering closed and open-ended questions on perceived utility and likeability to pregnant women and clinicians interacting with the prototype technology. Thirty-two women were enrolled, 28 of whom (87.5%) successfully completed 30 minutes of fetal monitoring including transmission of cardiotocographs to the web portal. Four sessions, though completed, were not successfully uploaded to the Cloud storage. Six non-study clinicians interacted with the prototype technology. The primary technical problem observed was a delay in data transmission between the prototype and the web portal, which ranged from 2 to 209 minutes. Delays were ascribed to Wi-Fi connectivity problems. Recorded cardiotocographs received a mean score of 4.2/5 (± 1.0) on ease of readability with an intraclass correlation of 0.81 (95% CI 0.45, 0.96). Both pregnant women and clinicians found the prototype technology likable (81.3% and 66.7% respectively), useful (96.9% and 66.7% respectively), and would either use it again or recommend its use to another pregnant woman (77.4% and 66.7% respectively). In this pilot study we found that this wireless fetal monitoring prototype technology has potential for use in a United States inpatient setting but would benefit from some technology changes. We found it to be acceptable to both pregnant women and clinicians. Further research is needed to assess feasibility of using this

  10. Wireless fetal heart rate monitoring in inpatient full-term pregnant women: testing functionality and acceptability.

    PubMed

    Boatin, Adeline A; Wylie, Blair; Goldfarb, Ilona; Azevedo, Robin; Pittel, Elena; Ng, Courtney; Haberer, Jessica

    2015-01-01

    We tested functionality and acceptability of a wireless fetal monitoring prototype technology in pregnant women in an inpatient labor unit in the United States. Women with full-term singleton pregnancies and no evidence of active labor were asked to wear the prototype technology for 30 minutes. We assessed functionality by evaluating the ability to successfully monitor the fetal heartbeat for 30 minutes, transmit this data to Cloud storage and view the data on a web portal. Three obstetricians also rated fetal cardiotocographs on ease of readability. We assessed acceptability by administering closed and open-ended questions on perceived utility and likeability to pregnant women and clinicians interacting with the prototype technology. Thirty-two women were enrolled, 28 of whom (87.5%) successfully completed 30 minutes of fetal monitoring including transmission of cardiotocographs to the web portal. Four sessions, though completed, were not successfully uploaded to the Cloud storage. Six non-study clinicians interacted with the prototype technology. The primary technical problem observed was a delay in data transmission between the prototype and the web portal, which ranged from 2 to 209 minutes. Delays were ascribed to Wi-Fi connectivity problems. Recorded cardiotocographs received a mean score of 4.2/5 (± 1.0) on ease of readability with an intraclass correlation of 0.81 (95% CI 0.45, 0.96). Both pregnant women and clinicians found the prototype technology likable (81.3% and 66.7% respectively), useful (96.9% and 66.7% respectively), and would either use it again or recommend its use to another pregnant woman (77.4% and 66.7% respectively). In this pilot study we found that this wireless fetal monitoring prototype technology has potential for use in a United States inpatient setting but would benefit from some technology changes. We found it to be acceptable to both pregnant women and clinicians. Further research is needed to assess feasibility of using this

  11. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  12. Relationships of consumer sensory ratings, marbling score, and shear force value to consumer acceptance of beef strip loin steaks.

    PubMed

    Platter, W J; Tatum, J D; Belk, K E; Chapman, P L; Scanga, J A; Smith, G C

    2003-11-01

    Logistic regression was used to quantify and characterize the effects of changes in marbling score, Warner-Bratzler shear force (WBSF), and consumer panel sensory ratings for tenderness, juiciness, or flavor on the probability of overall consumer acceptance of strip loin steaks from beef carcasses (n = 550). Consumers (n = 489) evaluated steaks for tenderness, juiciness, and flavor using nine-point hedonic scales (1 = like extremely and 9 = dislike extremely) and for overall steak acceptance (satisfied or not satisfied). Predicted acceptance of steaks by consumers was high (> 85%) when the mean consumer sensory rating for tenderness, juiciness, or flavor for a steak was 3 or lower on the hedonic scale. Conversely, predicted consumer acceptance of steaks was low (< or = 10%) when the mean consumer rating for tenderness, juiciness, or flavor for a steak was 5 or higher on the hedonic scale. As mean consumer sensory ratings for tenderness, juiciness, or flavor decreased from 3 to 5, the probability of acceptance of steaks by consumers diminished rapidly in a linear fashion. These results suggest that small changes in consumer sensory ratings for these sensory traits have dramatic effects on the probability of acceptance of steaks by consumers. Marbling score displayed a weak (adjusted R2 = 0.053), yet significant (P < 0.01), relationship to acceptance of steaks by consumers, and the shape of the predicted probability curve for steak acceptance was approximately linear over the entire range of marbling scores (Traces67 to Slightly Abundant97), suggesting that the likelihood of consumer acceptance of steaks increases approximately 10% for each full marbling score increase between Slight and Slightly Abundant. The predicted probability curve for consumer acceptance of steaks was sigmoidal for the WBSF model, with a steep decline in predicted probability of acceptance as WBSF values increased from 3.0 to 5.5 kg. Changes in WBSF within the high (> 5.5 kg) or low (< 3.0 kg
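
    The probability curves described here come from logistic regression of a binary accept/not-accept outcome on sensory ratings. The sketch below fits the same kind of model to synthetic ratings; the data, the assumed true curve, and the use of scikit-learn are illustrative assumptions, not the study's analysis.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic consumer data: hedonic tenderness rating (1 = like extremely,
    # 9 = dislike extremely) and a binary "satisfied" outcome.
    ratings = rng.uniform(1, 9, size=500).reshape(-1, 1)
    p_true = 1.0 / (1.0 + np.exp(2.0 * (ratings.ravel() - 4.0)))  # assumed true curve
    accepted = rng.random(500) < p_true

    model = LogisticRegression().fit(ratings, accepted)

    for r in (3, 4, 5):
        p = model.predict_proba([[r]])[0, 1]
        print(f"mean rating {r}: predicted acceptance ~ {p:.0%}")
    ```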

  13. Influence of UAS Pilot Communication and Execution Delay on Controller's Acceptability Ratings of UAS-ATC Interactions

    NASA Technical Reports Server (NTRS)

    Vu, Kim-Phuong L.; Morales, Gregory; Chiappe, Dan; Strybel, Thomas Z.; Battiste, Vernol; Shively, Jay; Buker, Timothy J

    2013-01-01

    Successful integration of UAS in the NAS will require that UAS interactions with the air traffic management system be similar to interactions between manned aircraft and air traffic management. For example, UAS response times to air traffic controller (ATCo) clearances should be equivalent to those that are currently found to be acceptable with manned aircraft. Prior studies have examined communication delays with manned aircraft. Unfortunately, there is no analogous body of research for UAS. The goal of the present study was to determine how UAS pilot communication and execution delays affect ATCos' acceptability ratings of UAS pilot responses when the UAS is operating in the NAS. Eight radar-certified controllers managed traffic in a modified ZLA sector with one UAS flying in it. In separate scenarios, the UAS pilot verbal communication and execution delays were either short (1.5 s) or long (5 s) and either constant or variable. The ATCo acceptability of UAS pilot communication and execution delays was measured subjectively via post-trial ratings. UAS pilot verbal communication delays were rated as acceptable 92% of the time when the delay was short. This acceptability level decreased to 64% when the delay was long. UAS pilot execution delay had less of an influence on ATCo acceptability ratings in the present simulation. Implications of these findings for integration of UAS in the NAS are discussed.

  14. Safety Aspects of Pulsed Dose Rate Brachytherapy: Analysis of Errors in 1,300 Treatment Sessions

    SciTech Connect

    Koedooder, Kees Wieringen, Niek van; Grient, Hans N.B. van der; Herten, Yvonne R.J. van; Pieters, Bradley R.; Blank, Leo

    2008-03-01

    Purpose: To determine the safety of pulsed-dose-rate (PDR) brachytherapy by analyzing errors and technical failures during treatment. Methods and Materials: More than 1,300 patients underwent treatment with PDR brachytherapy, using five PDR remote afterloaders. Most patients were treated with consecutive pulse schemes, also outside regular office hours. Tumors were located in the breast, esophagus, prostate, bladder, gynecology, anus/rectum, orbit, and head/neck, with a miscellaneous group of small numbers, such as the lip, nose, and bile duct. Errors and technical failures were analyzed for 1,300 treatment sessions, for which nearly 20,000 pulses were delivered. For each tumor localization, the number and type of errors that occurred were determined, as well as which localizations were more error prone than others. Results: By routinely using the built-in dummy check source, only 0.2% of all pulses showed an error during the phase of the pulse when the active source was outside the afterloader. Localizations treated using flexible catheters had greater error frequencies than those treated with straight needles or rigid applicators. Disturbed pulse frequencies were in the range of 0.6% for the anus/rectum on a classic version 1 afterloader to 14.9% for orbital tumors using a version 2 afterloader. Exceeding the planned overall treatment time by >10% was observed in only 1% of all treatments. Patients received their dose as originally planned in 98% of all treatments. Conclusions: According to the experience in our institute with 1,300 PDR treatments, we found that PDR is a safe brachytherapy treatment modality, both during and outside of office hours.

  15. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Technical Reports Server (NTRS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    1989-01-01

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10−5 bit-error rate with phase-shift-keying modulation and additive Gaussian white noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
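
    The stated rate follows from the code-word layout given at the end of the abstract: with 32n data bits and 32 overhead bits per code word (assuming all overhead bits are parity), the rate is 32n/(32n+32) = n/(n+1), so n ≥ 7 gives a rate of 7/8 or greater. A one-line check, with the values of n purely illustrative:

    ```python
    def code_rate(n):
        """Rate of a code word carrying 32*n data bits plus 32 overhead bits."""
        return (32 * n) / (32 * n + 32)   # simplifies to n / (n + 1)

    for n in (7, 15, 31):                 # illustrative choices of n
        print(f"n = {n:2d}  ->  rate = {code_rate(n):.4f}")
    ```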

  16. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Astrophysics Data System (ADS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-06-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  17. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  18. Rate-Distortion Optimization for Stereoscopic Video Streaming with Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Tan, A. Serdar; Aksay, Anil; Akar, Gozde Bozdagi; Arikan, Erdal

    2008-12-01

    We consider an error-resilient stereoscopic streaming system that uses an H.264-based multiview video codec and a rateless Raptor code for recovery from packet losses. One aim of the present work is to suggest a heuristic methodology for modeling the end-to-end rate-distortion (RD) characteristic of such a system. Another aim is to show how to make use of such a model to optimally select the parameters of the video codec and the Raptor code to minimize the overall distortion. Specifically, the proposed system models the RD curve of the video encoder and the performance of the channel codec to jointly derive the optimal encoder bit rates and unequal error protection (UEP) rates specific to layered stereoscopic video streaming. We define analytical RD curve modeling for each layer that includes the interdependency of these layers. A heuristic analytical model of the performance of Raptor codes is also defined. Furthermore, the distortion on the stereoscopic video quality caused by packet losses is estimated. Finally, the analytical models and the estimated single-packet loss distortions are used to minimize the end-to-end distortion and to obtain optimal encoder bit rates and UEP rates. The simulation results clearly demonstrate a significant quality gain over the nonoptimized schemes.

  19. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    PubMed Central

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10−5) of all compared methods. PMID:28182717

  20. Analysis of bit error rate for modified T-APPM under weak atmospheric turbulence channel

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Zhang, Qi; Wang, Yong-jun; Liu, Bo; Zhang, Li-jia; Wang, Kai-min; Xiao, Fei; Deng, Chao-gong

    2013-12-01

    T-APPM combines TCM (trellis-coded modulation) with APPM (amplitude pulse-position modulation) and has broad application prospects in space optical communication. Set partitioning in the standard T-APPM algorithm gives optimal performance in a multi-carrier system, but whether this mapping is also optimal for APPM, which is a single-carrier system, is unknown. To address this question, we first study the atmospheric channel model with weak turbulence; a modified T-APPM algorithm is then proposed that uses Gray code mapping instead of set-partitioning mapping; finally, the two algorithms are simulated with the Monte Carlo method. Simulation results show that, at a bit error rate of 10−4, the modified T-APPM algorithm achieves a 0.4 dB SNR gain, effectively improving the system's error performance.
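
    The modification described amounts to changing the bit-to-symbol labeling from set partitioning to Gray mapping, so that nearest-neighbour symbol errors flip only one bit. A minimal sketch of binary-reflected Gray labeling for PPM slot indices (the 8-ary slot count is an assumption):

    ```python
    def gray_encode(index: int) -> int:
        """Binary-reflected Gray code of a symbol index."""
        return index ^ (index >> 1)

    def gray_decode(code: int) -> int:
        """Inverse mapping back to the natural index."""
        index = 0
        while code:
            index ^= code
            code >>= 1
        return index

    # Label the slots of an (assumed) 8-ary PPM constellation.
    for i in range(8):
        g = gray_encode(i)
        print(f"slot {i}: natural {i:03b} -> Gray {g:03b}")
        assert gray_decode(g) == i
    ```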

  1. Preliminary error budget for an optical ranging system: Range, range rate, and differenced range observables

    NASA Technical Reports Server (NTRS)

    Folkner, W. M.; Finger, M. H.

    1990-01-01

    Future missions to the outer solar system or human exploration of Mars may use telemetry systems based on optical rather than radio transmitters. Pulsed laser transmission can be used to deliver telemetry rates of about 100 kbits/sec with an efficiency of several bits for each detected photon. Navigational observables that can be derived from timing pulsed laser signals are discussed. Error budgets are presented based on nominal ground stations and spacecraft-transceiver designs. Assuming a pulsed optical uplink signal, two-way range accuracy may approach the few centimeter level imposed by the troposphere uncertainty. Angular information can be achieved from differenced one-way range using two ground stations with the accuracy limited by the length of the available baseline and by clock synchronization and troposphere errors. A method of synchronizing the ground station clocks using optical ranging measurements is presented. This could allow differenced range accuracy to reach the few centimeter troposphere limit.

  2. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  3. Study of flow rate induced measurement error in flow-through nano-hole plasmonic sensor

    PubMed Central

    Tu, Long; Huang, Liang; Wang, Tianyi; Wang, Wenhui

    2015-01-01

    Flow-through gold film perforated with periodically arrayed sub-wavelength nano-holes can cause extraordinary optical transmission (EOT), which has recently emerged as a label-free surface plasmon resonance sensor in biochemical detection by measuring the transmission spectral shift. This paper describes a systematic study of the effect of the microfluidic field on the spectrum of EOT associated with the porous gold film. To detect biochemical molecules, the sub-micron-thick film is free-standing in a microfluidic field and thus subject to hydrodynamic deformation. The film deformation alone may cause spectral shift as measurement error, which is coupled with the spectral shift as real signal associated with the molecules. However, this microfluid-induced measurement error has long been overlooked in the field and needs to be identified in order to improve the measurement accuracy. Therefore, we have conducted simulation and analytic analysis to investigate how the microfluidic flow rate affects the EOT spectrum and verified the effect through experiment with a sandwiched device combining Au/Cr/Si3N4 nano-hole film and polydimethylsiloxane microchannels. We found significant spectral blue shift associated with even small flow rates, for example, 12.60 nm for 4.2 μl/min. This measurement error corresponds to 90 times the optical resolution of the current state-of-the-art commercially available spectrometer or 8400 times the limit of detection. This severe measurement error suggests that attention should be paid to the microfluidic parameter settings of EOT-based flow-through nano-hole sensors and that an appropriate scheme be adopted to improve the measurement accuracy. PMID:26649131

  4. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straightforward manner. Finally, an exact expression of the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  5. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.

  6. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  7. Theoretical Bit Error Rate Performance of the Kalman Filter Excisor for FM Interference

    DTIC Science & Technology

    1992-12-01

    A Kalman filter, digitally controlled by phase locking, proves quasi-optimal for demodulating FM-type interference... Since the interference is presumed to be stronger than the signal or the noise, the Kalman filter locks onto the interference and allows... AD-A263 018, Theoretical Bit Error Rate Performance of the Kalman Filter Excisor for FM Interference, by Brian R. Kominchuk, APR 19 1993, Defence Research...

  8. Digitally modulated bit error rate measurement system for microwave component evaluation

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo W.; Budinger, James M.

    1989-01-01

    The NASA Lewis Research Center has developed a unique capability for evaluation of the microwave components of a digital communication system. This digitally modulated bit-error-rate (BER) measurement system (DMBERMS) features a continuous data digital BER test set, a data processor, a serial minimum shift keying (SMSK) modem, noise generation, and computer automation. Application of the DMBERMS has provided useful information for the evaluation of existing microwave components and of design goals for future components. The design and applications of this system for digitally modulated BER measurements are discussed.

  9. Creation and implementation of department-wide structured reports: an analysis of the impact on error rate in radiology reports.

    PubMed

    Hawkins, C Matthew; Hall, Seth; Zhang, Bin; Towbin, Alexander J

    2014-10-01

    The purpose of this study was to evaluate and compare textual error rates and subtypes in radiology reports before and after implementation of department-wide structured reports. Randomly selected radiology reports that were generated following the implementation of department-wide structured reports were evaluated for textual errors by two radiologists. For each report, the text was compared to the corresponding audio file. Errors in each report were tabulated and classified. Error rates were compared to results from a prior study performed before implementation of structured reports. Calculated error rates included the average number of errors per report, average number of nongrammatical errors per report, the percentage of reports with an error, and the percentage of reports with a nongrammatical error. Identical versions of voice-recognition software were used for both studies. A total of 644 radiology reports were randomly evaluated as part of this study. There was a statistically significant reduction in the percentage of reports with nongrammatical errors (33% to 26%; p = 0.024). The likelihood of at least one missense omission error (omission errors that changed the meaning of a phrase or sentence) occurring in a report was significantly reduced from 3.5% to 1.2% (p = 0.0175). A statistically significant reduction in the likelihood of at least one commission error (retained statements from a standardized report that contradict the dictated findings or impression) occurring in a report was also observed (3.9% to 0.8%; p = 0.0007). Carefully constructed structured reports can help to reduce certain error types in radiology reports.
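
    The before/after comparison reported here is a test of two proportions; a minimal sketch using a chi-square contingency test is shown below. The pre-implementation denominator is not given in the abstract, so both count pairs are invented and the resulting p-value is illustrative only.

    ```python
    from scipy.stats import chi2_contingency

    # Hypothetical counts: reports with at least one nongrammatical error
    # versus clean reports, before and after structured reporting.
    before = [200, 406]   # ~33% of an invented 606-report sample
    after  = [167, 477]   # ~26% of the 644 reports reviewed

    chi2, p, dof, expected = chi2_contingency([before, after])
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
    ```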

  10. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

    PubMed

    Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

    2014-06-01

    Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with a possibly strong effect on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.

  11. Data-driven region-of-interest selection without inflating Type I error rate.

    PubMed

    Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard

    2017-01-01

    In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies.

  12. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    NASA Astrophysics Data System (ADS)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in the micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to an SEU, a Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), or Single Event Burnout (SEB) has a permanent effect on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  13. Comparing acceptance and refusal rates of virtual reality exposure vs. in vivo exposure by patients with specific phobias.

    PubMed

    Garcia-Palacios, A; Botella, C; Hoffman, H; Fabregat, S

    2007-10-01

    The present survey explored the acceptability of virtual reality (VR) exposure and in vivo exposure in 150 participants suffering from specific phobias. Seventy-six percent chose VR over in vivo exposure, and the refusal rate for in vivo exposure (27%) was higher than the refusal rate for VR exposure (3%). Results suggest that VR exposure could help increase the number of people who seek exposure therapy for phobias.

  14. Analytical Evaluation of Bit Error Rate Performance of a Free-Space Optical Communication System with Receive Diversity Impaired by Pointing Error

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2015-06-01

    Analysis is carried out to evaluate the conditional bit error rate conditioned on a given value of pointing error for a Free Space Optical (FSO) link with multiple receivers using Equal Gain Combining (EGC). The probability density function (pdf) of the output signal to noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BERs of SISO and SIMO FSO links are analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance results are evaluated for several values of pointing jitter parameters and number of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement in receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10−10. It is also observed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.
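
    The averaging step described in the abstract, integrating a conditional BER over the pdf of the combined SNR, can be sketched by Monte Carlo. The lognormal pointing-induced gain fluctuation, the Gaussian-approximation conditional BER, and the simplified stand-in for equal gain combining below are all assumptions, not the pdf or modulation model derived in the paper.

    ```python
    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(1)

    def average_ber(mean_snr_db, jitter_sigma, n_branches, samples=200_000):
        """Monte Carlo average of a conditional BER over SNR fluctuations."""
        mean_snr = 10 ** (mean_snr_db / 10)
        # Pointing-induced gain fluctuation on each branch (lognormal assumption).
        gains = np.exp(rng.normal(0.0, jitter_sigma, size=(samples, n_branches)))
        snr_combined = mean_snr * gains.mean(axis=1)     # simplified combining stand-in
        cond_ber = 0.5 * erfc(np.sqrt(snr_combined / 2)) # Gaussian-approx conditional BER
        return cond_ber.mean()

    for m in (1, 2, 4):
        print(f"{m} receiver(s): average BER ~ {average_ber(16.0, 0.6, m):.2e}")
    ```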

  15. Critical error rate of quantum-key-distribution protocols versus the size and dimensionality of the quantum alphabet

    NASA Astrophysics Data System (ADS)

    Sych, Denis V.; Grishanin, Boris A.; Zadkov, Victor N.

    2004-11-01

    A quantum-information analysis of how the size and dimensionality of the quantum alphabet affect the critical error rate of the quantum-key-distribution (QKD) protocols is given on an example of two QKD protocols—the six-state and ∞-state (i.e., a protocol with continuous alphabet) ones. In the case of a two-dimensional Hilbert space, it is shown that, under certain assumptions, increasing the number of letters in the quantum alphabet up to infinity slightly increases the critical error rate. Increasing additionally the dimensionality of the Hilbert space leads to a further increase in the critical error rate.

  16. TCP Flow Level Performance Evaluation on Error Rate Aware Scheduling Algorithms in Evolved UTRA and UTRAN Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Uchida, Masato; Tsuru, Masato; Oie, Yuji

    We present a TCP flow level performance evaluation on error rate aware scheduling algorithms in Evolved UTRA and UTRAN networks. With the introduction of the error rate, which is the probability of transmission failure under a given wireless condition and the instantaneous transmission rate, the transmission efficiency can be improved without sacrificing the balance between system performance and user fairness. The performance comparison with and without error rate awareness is carried out for various TCP traffic models, user channel conditions, schedulers with different fairness constraints, and automatic repeat request (ARQ) types. The results indicate that error rate awareness can make the resource allocation more reasonable and effectively improve both system and individual performance, especially for users in poor channel conditions.

  17. Finding the right coverage: the impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates.

    PubMed

    Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah

    2016-07-01

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffman's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than sequence quality, and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with a coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets when coverage was increased from ≥5 to ≥30 at quality score ≥30, respectively. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignments, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates should be considered prior to building sequencing libraries, that reporting genotyping error rates should become standard practice, and that the effects of genotyping errors on inference should be evaluated in restriction-enzyme-based SNP studies.
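
    A minimal sketch of the per-locus error check from known mother-offspring dyads: a locus is flagged when the offspring shares no allele with the mother. The genotype arrays are invented, and this simple incompatibility count stands in for the study's fuller error-rate estimation.

    ```python
    def incompatible(mother, offspring):
        """True if the offspring genotype shares no allele with the mother."""
        return len(set(mother) & set(offspring)) == 0

    # Hypothetical SNP genotypes for one dyad (tuples of alleles per locus;
    # None marks a locus that failed to genotype).
    mother    = [(0, 1), (1, 1), (0, 0), None,   (0, 1)]
    offspring = [(1, 1), (0, 0), (0, 0), (0, 1), None  ]

    scored, errors = 0, 0
    for m, o in zip(mother, offspring):
        if m is None or o is None:
            continue                      # skip loci missing in either individual
        scored += 1
        errors += incompatible(m, o)

    print(f"{errors}/{scored} incompatible loci -> error rate ~ {errors / scored:.2f}")
    ```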

  18. Measuring radiation induced changes in the error rate of fiber optic data links

    NASA Astrophysics Data System (ADS)

    Decusatis, Casimer; Benedict, Mel

    1996-12-01

    The purpose of this work is to investigate the effects of ionizing (gamma) radiation exposure on the bit error rate (BER) of an optical fiber data communication link. While it is known that exposure to high radiation dose rates will darken optical fiber permanently, comparatively little work has been done to evaluate moderate dose rates. The resulting increase in fiber attenuation over time represents an additional penalty in the link optical power budget, which can degrade the BER if it is not accounted for in the link design. Modeling the link to predict this penalty is difficult, and it requires detailed information about the fiber composition that may not be available to the link designer. We describe a laboratory method for evaluating the effects of moderate dose rates on both single-mode and multimode fiber. Once a sample of fiber has been measured, the data can be fit to a simple model for predicting (at least to first order) BER as a function of radiation dose for fibers of similar composition.

  19. Threshold-Based Bit Error Rate for Stopping Iterative Turbo Decoding in a Varying SNR Environment

    NASA Astrophysics Data System (ADS)

    Mohamad, Roslina; Harun, Harlisya; Mokhtar, Makhfudzah; Adnan, Wan Azizun Wan; Dimyati, Kaharudin

    2017-01-01

    Online bit error rate (BER) estimation (OBE) has been used as a stopping criterion for iterative turbo decoding. However, such stopping criteria only work at high signal-to-noise ratios (SNRs) and fail to terminate early at low SNRs, which adds iterations and increases computational complexity. The failure of these stopping criteria is caused by an unsuitable BER threshold, which is obtained by estimating the expected BER performance at high SNRs and therefore does not indicate the correct termination point according to convergence and non-convergence outputs (CNCO). Hence, in this paper, a threshold computation based on the BER of the CNCO is proposed for an OBE stopping criterion (OBEsc). The results show that the OBEsc is capable of terminating early in a varying SNR environment. The optimum number of iterations achieved by the OBEsc allows large savings in the number of decoding iterations and decreases the delay of iterative turbo decoding.

  20. Bit Error Rate Performance of Partially Coherent Dual-Branch SSC Receiver over Composite Fading Channels

    NASA Astrophysics Data System (ADS)

    Milić, Dejan N.; Đorđević, Goran T.

    2013-01-01

    In this paper, we study the effects of imperfect reference signal recovery on the bit error rate (BER) performance of dual-branch switch and stay combining receiver over Nakagami-m fading/gamma shadowing channels with arbitrary parameters. The average BER of quaternary phase shift keying is evaluated under the assumption that the reference carrier signal is extracted from the received modulated signal. We compute numerical results illustrating simultaneous influence of average signal-to-noise ratio per bit, fading severity, shadowing, phase-locked loop bandwidth-bit duration (BLTb) product, and switching threshold on BER performance. The effects of BLTb on receiver performance under different channel conditions are emphasized. Optimal switching threshold is determined which minimizes BER performance under given channel and receiver parameters.

  1. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-01-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  2. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    SciTech Connect

    Sterling, D; Ehler, E

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.

  3. Errors in general practice: development of an error classification and pilot study of a method for detecting errors

    PubMed Central

    Rubin, G; George, A; Chinn, D; Richardson, C

    2003-01-01

    Objective: To describe a classification of errors and to assess the feasibility and acceptability of a method for recording staff reported errors in general practice. Design: An iterative process in a pilot practice was used to develop a classification of errors. This was incorporated in an anonymous self-report form which was then used to collect information on errors during June 2002. The acceptability of the reporting process was assessed using a self-completion questionnaire. Setting: UK general practice. Participants: Ten general practices in the North East of England. Main outcome measures: Classification of errors, frequency of errors, error rates per 1000 appointments, acceptability of the process to participants. Results: 101 events were used to create an initial error classification. This contained six categories: prescriptions, communication, appointments, equipment, clinical care, and "other" errors. Subsequently, 940 errors were recorded in a single 2 week period from 10 practices, providing additional information. 42% (397/940) were related to prescriptions, although only 6% (22/397) of these were medication errors. Communication errors accounted for 30% (282/940) of errors and clinical errors 3% (24/940). The overall error rate was 75.6/1000 appointments (95% CI 71 to 80). The method of error reporting was found to be acceptable by 68% (36/53) of respondents with only 8% (4/53) finding the process threatening. Conclusion: We have developed a classification of errors and described a practical and acceptable method for reporting them that can be used as part of the process of risk management. Errors are common and, although all have the potential to lead to an adverse event, most are administrative. PMID:14645760

  4. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…

  5. Asymptotic error-rate analysis of FSO links using transmit laser selection over gamma-gamma atmospheric turbulence channels with pointing errors.

    PubMed

    García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen

    2012-01-30

    Since free-space optical (FSO) systems are usually installed on high buildings and building sway may cause vibrations in the transmitted beam, an unsuitable alignment between transmitter and receiver together with fluctuations in the irradiance of the transmitted optical beam due to the atmospheric turbulence can severely degrade the performance of optical wireless communication systems. In this paper, asymptotic bit error-rate (BER) performance for FSO communication systems using transmit laser selection over atmospheric turbulence channels with pointing errors is analyzed. Novel closed-form asymptotic expressions are derived when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions (weak to strong), following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results provide significant insight into the impact of various system and channel parameters, showing that the diversity order is independent of the pointing error when the equivalent beam radius at the receiver is at least 2(min{α,β})^(1/2) times the value of the pointing error displacement standard deviation at the receiver. Moreover, since proper FSO transmission requires transmitters with accurate control of their beamwidth, asymptotic expressions are used to find the optimum beamwidth that minimizes the BER at different turbulence conditions. Simulation results are further demonstrated to confirm the accuracy and usefulness of the derived results, showing that asymptotic expressions here obtained lead to simple bounds on the bit error probability that get tighter over a wider range of signal-to-noise ratio (SNR) as the turbulence strength increases.

  6. Measuring error rates in genomic perturbation screens: gold standards for human functional genomics

    PubMed Central

    Hart, Traver; Brown, Kevin R; Sircoulomb, Fabrice; Rottapel, Robert; Moffat, Jason

    2014-01-01

    Technological advancement has opened the door to systematic genetics in mammalian cells. Genome-scale loss-of-function screens can assay fitness defects induced by partial gene knockdown, using RNA interference, or complete gene knockout, using new CRISPR techniques. These screens can reveal the basic blueprint required for cellular proliferation. Moreover, comparing healthy to cancerous tissue can uncover genes that are essential only in the tumor; these genes are targets for the development of specific anticancer therapies. Unfortunately, progress in this field has been hampered by off-target effects of perturbation reagents and poorly quantified error rates in large-scale screens. To improve the quality of information derived from these screens, and to provide a framework for understanding the capabilities and limitations of CRISPR technology, we derive gold-standard reference sets of essential and nonessential genes, and provide a Bayesian classifier of gene essentiality that outperforms current methods on both RNAi and CRISPR screens. Our results indicate that CRISPR technology is more sensitive than RNAi and that both techniques have nontrivial false discovery rates that can be mitigated by rigorous analytical methods. PMID:24987113

  7. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
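
    The simulation design described above can be sketched compactly: generate log-normal counts around an exponential trend, regress log counts on year, and tally rejections across replicates. The sketch below mirrors the stated 40% coefficient of variation and 10-year series but omits autocorrelation and uses a two-sided regression test, so it is a simplified stand-in for the full study design.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def rejection_rate(trend, n_years=10, cv=0.40, reps=1000, alpha=0.05):
        """Fraction of replicates whose log-scale regression detects the trend."""
        years = np.arange(n_years)
        sigma = np.sqrt(np.log(1 + cv**2))     # log-normal sigma for a given CV
        rejections = 0
        for _ in range(reps):
            counts = 100 * (1 + trend) ** years * rng.lognormal(0.0, sigma, n_years)
            result = stats.linregress(years, np.log(counts))
            rejections += result.pvalue < alpha
        return rejections / reps

    print(f"type I error (no trend): {rejection_rate(0.00):.3f}")
    print(f"power (5%/yr increase):  {rejection_rate(0.05):.3f}")
    ```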

  8. A call for more transparent reporting of error rates: the quality of AFLP data in ecological and evolutionary research.

    PubMed

    Crawford, Lindsay A; Koscinski, Daria; Keyghobadi, Nusha

    2012-12-01

    Despite much discussion of the importance of quantifying and reporting genotyping error in molecular studies, it is still not standard practice in the literature. This is particularly a concern for amplified fragment length polymorphism (AFLP) studies, where differences in laboratory, peak-calling and locus-selection protocols can generate data sets varying widely in genotyping error rate, the number of loci used and potentially estimates of genetic diversity or differentiation. In our experience, papers rarely provide adequate information on AFLP reproducibility, making meaningful comparisons among studies difficult. To quantify the extent of this problem, we reviewed the current molecular ecology literature (470 recent AFLP articles) to determine the proportion of studies that report an error rate and follow established guidelines for assessing error. Fifty-four per cent of recent articles do not report any assessment of data set reproducibility. Of those studies that do claim to have assessed reproducibility, the majority (~90%) either do not report a specific error rate or do not provide sufficient details to allow the reader to judge whether error was assessed correctly. Even of the papers that do report an error rate and provide details, many (≥23%) do not follow recommended standards for quantifying error. These issues also exist for other marker types such as microsatellites, and next-generation sequencing techniques, particularly those which use restriction enzymes for fragment generation. Therefore, we urge all researchers conducting genotyping studies to estimate and more transparently report genotyping error using existing guidelines and encourage journals to enforce stricter standards for the publication of genotyping studies.

  9. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  10. The effect of voice recognition software on comparative error rates in radiology reports.

    PubMed

    McGurk, S; Brauer, K; Macfarlane, T V; Duncan, K A

    2008-10-01

    This study sought to confirm whether reports generated in a department of radiology contain more errors if generated using voice recognition (VR) software than if traditional dictation-transcription (DT) is used. All radiology reports generated over a 1-week period in a British teaching hospital were assessed. The presence of errors and their impact on the report were assessed. Data collected included the type of report, site of dictation, the experience of the operator, and whether English was the first language of the operator. 1887 reports were reviewed. 1160 (61.5%) were dictated using VR and 727 reports (38.5%) were generated by DT. 71 errors (3.8% of all reports) were identified. 56 errors were made using VR (4.8% of VR reports), whereas 15 errors were identified in DT reports (2.1% of transcribed reports). The difference in report errors between these two dictation methods was statistically significant (p = 0.002). Of the 71 reports containing errors, 37 (52.1%) had errors affecting understanding. Other factors were also identified that significantly increased the likelihood of errors in a VR-generated report, such as working in a busy inpatient environment (p<0.001) and having a language other than English as a first language (p = 0.034). Operator grade was not significantly associated with increased errors. In conclusion, using VR significantly increases the number of reports containing errors. Errors using VR are significantly more likely to occur in noisy areas with a high workload and are more likely to be made by radiologists for whom English is not their first language.
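
    For readers who want to reproduce the kind of comparison reported above, the error counts from the abstract (56 of 1160 VR reports versus 15 of 727 DT reports) can be compared with a standard two-proportion test; the chi-square test below is an assumption, since the abstract does not state which test produced p = 0.002.

      from scipy.stats import chi2_contingency

      # Counts taken from the abstract: errors / error-free reports per dictation method
      table = [[56, 1160 - 56],   # voice recognition
               [15,  727 - 15]]   # dictation-transcription
      chi2, p, dof, expected = chi2_contingency(table)
      print(f"VR error rate: {56/1160:.1%}")
      print(f"DT error rate: {15/727:.1%}")
      print(f"chi-square p-value: {p:.4f}")   # of the same order as the reported p = 0.002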

  11. Bit Error Rate Performance Limitations Due to Raman Amplifier Induced Crosstalk in a WDM Transmission System

    NASA Astrophysics Data System (ADS)

    Tithi, F. H.; Majumder, S. P.

    2017-03-01

    Analysis is carried out for a single-span wavelength division multiplexing (WDM) transmission system with distributed Raman amplification to find the effect of amplifier-induced crosstalk on the bit error rate (BER) with different system parameters. The results are evaluated in terms of the crosstalk power induced in a WDM channel due to Raman amplification, the optical signal to crosstalk ratio (OSCR) and the BER at any distance for different pump powers and numbers of WDM channels. The results show that the WDM system suffers a power penalty due to crosstalk, which is significant at higher pump power, larger channel separation and a larger number of WDM channels. It is noticed that at a BER of 10^-9, the power penalty is 8.7 dB and 10.5 dB for a length of 180 km and N=32 and 64 WDM channels, respectively, when the pump power is 20 mW, and is higher at higher pump power. Analytical results are validated by simulation.

  12. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k).sup.th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
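
    A minimal sketch of the decimation-matrix construction described above: build the GF(2) companion matrix of the feedback polynomial and raise it to the (n*k)th power by square-and-multiply. The polynomial x^7 + x + 1 and the values k = 4 and n = 8 are illustrative, not taken from the patent.

      import numpy as np

      def companion_matrix_gf2(taps, degree):
          """Companion matrix (over GF(2)) of the LFSR feedback polynomial.
          `taps` lists the exponents with coefficient 1, excluding x**degree."""
          C = np.zeros((degree, degree), dtype=np.uint8)
          C[1:, :-1] = np.eye(degree - 1, dtype=np.uint8)   # sub-diagonal shifts the state
          for t in taps:
              C[t, -1] = 1                                  # feedback column
          return C

      def matpow_gf2(M, e):
          """M**e with all arithmetic reduced modulo 2 (square-and-multiply)."""
          R = np.eye(M.shape[0], dtype=np.uint8)
          while e:
              if e & 1:
                  R = (R @ M) % 2
              M = (M @ M) % 2
              e >>= 1
          return R

      # Illustrative values: x^7 + x + 1 (primitive), k = 4 parallel LRSGs, n = 8 bits each
      C = companion_matrix_gf2(taps=[0, 1], degree=7)
      D = matpow_gf2(C, 4 * 8)    # decimation matrix C**(n*k); rows define the XOR feedback
      print(D)
      print("number of 1's (a proxy for XOR-gate count):", int(D.sum()))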

  13. Bit error rate analysis of free-space optical system with spatial diversity over strong atmospheric turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Krishnan, Prabu; Sriram Kumar, D.

    2014-12-01

    Free-space optical (FSO) communication is emerging as an attractive alternative for overcoming connectivity bottlenecks. It can be used to transmit signals over common lands and properties that the sender or receiver may not own. The performance of an FSO system depends on random environmental conditions. The bit error rate (BER) performance of a differential phase shift keying FSO system is investigated. A distributed strong atmospheric turbulence channel with pointing error is considered for the BER analysis. Here, the system models are developed for single-input, single-output FSO (SISO-FSO) and single-input, multiple-output FSO (SIMO-FSO) systems. Closed-form mathematical expressions are derived for the average BER with various combining schemes in terms of the Meijer G function.

  14. Suffering makes you egoist: acute pain increases acceptance rates and reduces fairness during a bilateral ultimatum game.

    PubMed

    Mancini, Alessandra; Betti, Viviana; Panasiti, Maria Serena; Pavone, Enea Francesco; Aglioti, Salvatore Maria

    2011-01-01

    Social preferences like interpersonal altruism, fairness, reciprocity and inequity aversion are inherently linked to departures from pure self-interest. During economic interactions, for example, defectors may be punished even if this implies a cost for the punishers. This violation of canonical assumptions in economics indicates that socially oriented decisions may predominate over self-centred stances. Here we explore whether the personal experience of pain changes the balance between self-gain and socially based choices. We used laser stimulation to induce pain or a warm sensation in subjects playing a modified version of the Ultimatum Game (UG) both in the role of responder and proposer. After each shot, responders evaluated the fairness of the offer. Moreover, responders and proposers rated the intensity and unpleasantness of the sensation evoked by the laser stimulation. Results show that suffering proposers decrease fair offers and suffering responders increase their acceptance rate irrespective of economic offer. Crucially, the intensity of painful stimulation has a predictive role on Moderately Unfair offers' acceptance rates. Thus the personal experience of pain may favour the emergence of a self-centered perspective aimed at maximizing self-gain. The results suggest that bodily states play a fundamental role in higher-order interpersonal negotiations and interactions.

  15. Estimation of genotyping error rate from repeat genotyping, unintentional recaptures and known parent-offspring comparisons in 16 microsatellite loci for brown rockfish (Sebastes auriculatus).

    PubMed

    Hess, Maureen A; Rhydderch, James G; LeClair, Larry L; Buckley, Raymond M; Kawase, Mitsuhiro; Hauser, Lorenz

    2012-11-01

    Genotyping errors are present in almost all genetic data and can affect biological conclusions of a study, particularly for studies based on individual identification and parentage. Many statistical approaches can incorporate genotyping errors, but usually need accurate estimates of error rates. Here, we used a new microsatellite data set developed for brown rockfish (Sebastes auriculatus) to estimate genotyping error using three approaches: (i) repeat genotyping 5% of samples, (ii) comparing unintentionally recaptured individuals and (iii) Mendelian inheritance error checking for known parent-offspring pairs. In each data set, we quantified genotyping error rate per allele due to allele drop-out and false alleles. Genotyping error rate per locus revealed an average overall genotyping error rate by direct count of 0.3%, 1.5% and 1.7% (0.002, 0.007 and 0.008 per allele error rate) from replicate genotypes, known parent-offspring pairs and unintentionally recaptured individuals, respectively. By direct-count error estimates, the recapture and known parent-offspring data sets revealed an error rate four times greater than estimated using repeat genotypes. There was no evidence of correlation between error rates and locus variability for all three data sets, and errors appeared to occur randomly over loci in the repeat genotypes, but not in recaptures and parent-offspring comparisons. Furthermore, there was no correlation in locus-specific error rates between any two of the three data sets. Our data suggest that repeat genotyping may underestimate true error rates and may not estimate locus-specific error rates accurately. We therefore suggest using methods for error estimation that correspond to the overall aim of the study (e.g. known parent-offspring comparisons in parentage studies).
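
    A minimal sketch of the direct-count calculation used above: the per-allele error rate is the number of mismatching allele calls divided by the number of allele comparisons between two genotyping runs. The genotype arrays below are illustrative, not data from the rockfish study.

      import numpy as np

      def per_allele_error_rate(first_run, repeat_run):
          """Direct-count per-allele error rate from repeat genotyping: mismatching
          allele calls divided by the number of allele comparisons (2 alleles per
          locus per individual).  Arrays are (individuals x loci x 2 alleles);
          0 marks a missing allele call."""
          a, b = np.sort(first_run, axis=-1), np.sort(repeat_run, axis=-1)  # ignore allele order
          scored = (a > 0) & (b > 0)                # compare only alleles called in both runs
          mismatches = (a != b) & scored
          return mismatches.sum() / scored.sum()

      # Illustrative data: 3 individuals, 2 loci, diploid genotypes (allele sizes)
      run1 = np.array([[[150, 154], [200, 204]],
                       [[150, 150], [200, 208]],
                       [[154, 158], [204, 204]]])
      run2 = np.array([[[150, 154], [200, 204]],
                       [[150, 154], [200, 208]],    # one allele drop-out corrected on repeat
                       [[154, 158], [204, 204]]])
      print(f"per-allele error rate: {per_allele_error_rate(run1, run2):.3f}")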

  16. Anthropometric, muscle strength, and spinal mobility characteristics as predictors in the rating of acceptable loads in parcel sorting.

    PubMed

    Stålhammar, H R; Louhevaara, V

    1992-09-01

    The rating of acceptable load (RAL) attained with a standard test (RALSt) and a work-simulating test (RALW) for postal parcel sorting was related to anthropometric, muscle strength, and spinal mobility characteristics of 18 male sorters. The subjects comprised a subsample of 103 experienced male sorters who carried out the RAL tests at postal sorting centres. The dynamic hand-grip endurance correlated significantly (p = 0.036) to the RALSt results. Correspondingly, there was a significant correlation (p = 0.044) between the ratio of maximal isometric strength of trunk extension to body weight and the RALW. The dynamic hand-grip endurance predicted 26% of the variation in the RALSt; in the RALW the maximal isometric strength of trunk flexion to body weight ratio predicted 24%. The subjects who rated heavier weights for RALSt tended to have a better trunk mobility. The dynamic endurance of hand-grip muscles, trunk strength, and spinal flexibility seemed to be the most powerful predictors for the psychophysically assessed 'acceptable loads' in experienced workers performing manual materials handling tasks.

  17. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.

  18. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    SciTech Connect

    Chau, H.F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.

  19. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    NASA Astrophysics Data System (ADS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-11-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  20. Controlling Type I Error Rate in Evaluating Differential Item Functioning for Four DIF Methods: Use of Three Procedures for Adjustment of Multiple Item Testing

    ERIC Educational Resources Information Center

    Kim, Jihye

    2010-01-01

    In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present and high possibility of making such an error can weaken the validity of the assessment.…

  1. Error-rate estimation in discriminant analysis of non-linear longitudinal data: A comparison of resampling methods.

    PubMed

    de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente

    2016-07-08

    Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and [Formula: see text]), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of studies it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we put special attention to situations when only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
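
    One of the resampling estimators compared above, the .632 bootstrap, can be sketched for a generic classifier as follows; the logistic regression and toy data are placeholders, not the non-linear mixed-effects model of the pregnancy study.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def bootstrap_632_error(X, y, make_model, n_boot=200, seed=0):
          """.632 bootstrap estimate of misclassification error:
          0.368 * apparent (resubstitution) error + 0.632 * out-of-bag error."""
          rng = np.random.default_rng(seed)
          n = len(y)
          apparent = np.mean(make_model().fit(X, y).predict(X) != y)
          oob_errors = []
          for _ in range(n_boot):
              idx = rng.integers(0, n, n)               # bootstrap sample (with replacement)
              oob = np.setdiff1d(np.arange(n), idx)     # observations left out of the sample
              if oob.size == 0:
                  continue
              m = make_model().fit(X[idx], y[idx])
              oob_errors.append(np.mean(m.predict(X[oob]) != y[oob]))
          return 0.368 * apparent + 0.632 * np.mean(oob_errors)

      # Toy data standing in for the longitudinal classification problem
      rng = np.random.default_rng(1)
      X = rng.normal(size=(60, 3))
      y = (X[:, 0] + 0.5 * rng.normal(size=60) > 0).astype(int)
      print("estimated misclassification error:",
            bootstrap_632_error(X, y, lambda: LogisticRegression(max_iter=1000)))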

  2. Comparison of Self-Scoring Error Rate for SDS (Self Directed Search) (1970) and the Revised SDS (1977).

    ERIC Educational Resources Information Center

    Price, Gary E.; And Others

    A comparison of Self-Scoring Error Rate for Self Directed Search (SDS) and the revised SDS is presented. The subjects were college freshmen and sophomores who participated in career planning as a part of their orientation program, and a career workshop. Subjects, N=190 on first study and N=84 on second study, were then randomly assigned to the SDS…

  3. Parallel Transmission Pulse Design with Explicit Control for the Specific Absorption Rate in the Presence of Radiofrequency Errors

    PubMed Central

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L.; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L.; Guerin, Bastien

    2016-01-01

    Purpose A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. Methods The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors (“worst-case SAR”) is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Results Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled “worst-case SAR” in the presence of errors of this magnitude at minor cost of the excitation profile quality. Conclusion Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. PMID:26147916
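
    The iterative tightening described above (design a pulse under an SAR constraint, evaluate its worst-case SAR under RF errors, lower the constraint, and repeat) can be sketched as a simple loop; the design and worst-case functions below are crude placeholders for the actual pTx optimization and electromagnetic SAR model.

      def design_pulse(sar_limit):
          """Placeholder for the pTx pulse optimization with explicit SAR constraints;
          returns (pulse, nominal_sar).  Modeled here as a design whose nominal SAR
          sits exactly at the constraint."""
          return {"sar_constraint": sar_limit}, sar_limit

      def worst_case_sar(nominal_sar, amp_err=0.08):
          """Placeholder for the worst-case SAR under RF-chain errors; a crude stand-in
          that inflates the nominal SAR by the amplitude-error bound (the real
          calculation uses the full SAR and RF-error models)."""
          return nominal_sar * (1.0 + amp_err) ** 2

      def design_with_worst_case_limit(safety_limit, max_iter=20, tol=1e-9):
          """Iteratively lower the design constraint until the worst-case SAR of the
          resulting pulse respects the safety limit."""
          limit = safety_limit
          for _ in range(max_iter):
              pulse, nominal = design_pulse(limit)
              wc = worst_case_sar(nominal)
              if wc <= safety_limit + tol:
                  return pulse, nominal, wc
              limit *= safety_limit / wc       # tighten the constraint and redesign
          raise RuntimeError("worst-case SAR did not converge below the safety limit")

      pulse, nominal, wc = design_with_worst_case_limit(safety_limit=10.0)  # W/kg, illustrative
      print(f"constraint used: {pulse['sar_constraint']:.2f} W/kg, "
            f"nominal SAR: {nominal:.2f}, worst-case SAR: {wc:.2f}")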

  4. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets

    PubMed Central

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W.; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and had a trend of negative correlation with post-error miss rate increase in the subsequent Go/No-go task. These results suggested that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  5. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.

  6. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  7. Error rates, PCR recombination, and sampling depth in HIV-1 whole genome deep sequencing.

    PubMed

    Zanini, Fabio; Brodin, Johanna; Albert, Jan; Neher, Richard A

    2016-12-27

    Deep sequencing is a powerful and cost-effective tool to characterize the genetic diversity and evolution of virus populations. While modern sequencing instruments readily cover viral genomes many thousand fold and very rare variants can in principle be detected, sequencing errors, amplification biases, and other artifacts can limit sensitivity and complicate data interpretation. For this reason, the number of studies using whole genome deep sequencing to characterize viral quasi-species in clinical samples is still limited. We have previously undertaken a large scale whole genome deep sequencing study of HIV-1 populations. Here we discuss the challenges, error profiles, control experiments, and computational test we developed to quantify the accuracy of variant frequency estimation.

  8. Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.

    1987-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  9. Bit-error-rate testing of high-power 30-GHz traveling wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Fujikawa, Gene

    1986-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30 GHz, 200 W, coupled-cavity traveling wave tubes (TWTs). The transmission effects of each TWT were investigated on a band-limited, 220 Mb/sec SMSK signal. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20 GHz technology development program. The approach taken to test the 30 GHz tubes is described and the resultant test data are discussed. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  10. Effect of Vertical Rate Error on Recovery from Loss of Well Clear Between UAS and Non-Cooperative Intruders

    NASA Technical Reports Server (NTRS)

    Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2016-01-01

    When an Unmanned Aircraft System (UAS) encounters an intruder and is unable to maintain required temporal and spatial separation between the two vehicles, it is referred to as a loss of well-clear. In this state, the UAS must make its best attempt to regain separation while maximizing the minimum separation between itself and the intruder. When encountering a non-cooperative intruder (an aircraft operating under visual flight rules without ADS-B or an active transponder) the UAS must rely on the radar system to provide the intruder's location, velocity, and heading information. As many UAS have limited climb and descent performance, vertical position and/or vertical rate errors make it difficult to determine whether an intruder will pass above or below them. To account for that, there is a proposal by RTCA Special Committee 228 to prohibit guidance systems from providing vertical guidance to regain well-clear to UAS in an encounter with a non-cooperative intruder unless their radar system has vertical position error below 175 feet (95%) and vertical velocity errors below 200 fpm (95%). Two sets of fast-time parametric studies were conducted, each with 54000 pairwise encounters between a UAS and a non-cooperative intruder, to determine the suitability of offering vertical guidance to regain well clear to a UAS in the presence of radar sensor noise. The UAS was not allowed to maneuver until it received well-clear recovery guidance. The maximum severity of the loss of well-clear was logged and used as the primary indicator of the separation achieved by the UAS. One set of 54000 encounters allowed the UAS to maneuver either vertically or horizontally, while the second permitted horizontal maneuvers, only. Comparing the two data sets allowed researchers to see the effect of allowing vertical guidance to a UAS for a particular encounter and vertical rate error. Study results show there is a small reduction in the average severity of a loss of well-clear when vertical maneuvers

  11. Bit-Error-Rate-Minimizing Channel Shortening Using Post-FEQ Diversity Combining and a Genetic Algorithm

    DTIC Science & Technology

    2009-03-10

  12. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upset (MBU) are also discussed.

  13. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
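
    A minimal sketch of the final step described above: once mean signal levels and the noise standard deviation have been extracted from the S-parameter measurements, the BER follows from the Gaussian tail probability (Q-function). The midway decision threshold and the level values below are illustrative assumptions, not the authors' measurement setup.

      import math

      def q_function(x):
          """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
          return 0.5 * math.erfc(x / math.sqrt(2.0))

      def ber_from_levels(mean_one, mean_zero, sigma):
          """BER for equiprobable binary levels corrupted by Gaussian noise of
          standard deviation sigma, with the decision threshold midway between
          the two levels (illustrative assumption)."""
          return q_function((mean_one - mean_zero) / (2.0 * sigma))

      # Example: 1 V / 0 V levels, 0.15 V noise standard deviation
      print(f"BER ~ {ber_from_levels(1.0, 0.0, 0.15):.2e}")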

  14. A Framework for Interpreting Type I Error Rates from a Product‐Term Model of Interaction Applied to Quantitative Traits

    PubMed Central

    Province, Michael A.

    2015-01-01

    ABSTRACT Adequate control of type I error rates will be necessary in the increasing genome‐wide search for interactive effects on complex traits. After observing unexpected variability in type I error rates from SNP‐by‐genome interaction scans, we sought to characterize this variability and test the ability of heteroskedasticity‐consistent standard errors to correct it. We performed 81 SNP‐by‐genome interaction scans using a product‐term model on quantitative traits in a sample of 1,053 unrelated European Americans from the NHLBI Family Heart Study, and additional scans on five simulated datasets. We found that the interaction‐term genomic inflation factor (lambda) showed inflation and deflation that varied with sample size and allele frequency; that similar lambda variation occurred in the absence of population substructure; and that lambda was strongly related to heteroskedasticity but not to minor non‐normality of phenotypes. Heteroskedasticity‐consistent standard errors narrowed the range of lambda, with HC3 outperforming HC0, but in individual scans tended to create new P‐value outliers related to sparse two‐locus genotype classes. We explain the lambda variation as a result of non‐independence of test statistics coupled with stochastic biases in test statistics due to a failure of the test to reach asymptotic properties. We propose that one way to interpret lambda is by comparison to an empirical distribution generated from data simulated under the null hypothesis and without population substructure. We further conclude that the interaction‐term lambda should not be used to adjust test statistics and that heteroskedasticity‐consistent standard errors come with limitations that may outweigh their benefits in this setting. PMID:26659945

  15. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Astrophysics Data System (ADS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  16. Error Rate Improvement in Underwater MIMO Communications Using Sparse Partial Response Equalization

    DTIC Science & Technology

    2006-09-01

    λ^(n−k) v_i(k) v_i^H(k)  (13) and θ_i(n) = Σ_{k=1}^{n} λ^(n−k) v_i(k) x_i^(s)H(k)  (14) are the (time-averaged) output correlation matrix and the input-output cross ... error vector [5] and K_i(n) is the RLS gain defined as α_i(n) = x_i^(s)(n) − c_i^H(n−1) v_i(n)  (17), K_i(n) = P_i(n−1) v_i(n) / (λ_i + v_i^H(n) P_i(n−1) v_i(n))  (18) ... Using equations (13), (14), and the matrix inversion lemma [5], the inverse correlation matrix P_i(n) can be updated as P_i(n) = [I − K_i(n) v_i^H(n)] P_i(n−1)

  17. The effect of administrative boundaries and geocoding error on cancer rates in California.

    PubMed

    Goldberg, Daniel W; Cockburn, Myles G

    2012-04-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods.

  18. Resident physicians' clinical training and error rate: the roles of autonomy, consultation, and familiarity with the literature.

    PubMed

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-03-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores the relationships between residents' error rates and three clinical training methods (1) progressive independence or level of autonomy, (2) consulting the physician on call, and (3) familiarity with up-to-date medical literature, and whether these relationships vary among the specialties of surgery and internal medicine and between novice and experienced residents. 142 Residents in 22 medical departments from two hospitals participated in the study. Results of hierarchical linear model analysis indicated that lower levels of autonomy, higher levels of consultation with the physician on call, and higher levels of familiarity with up-to-date medical literature were associated with lower levels of resident's error rates. The associations varied between internal and surgery specializations and novice and experienced residents. In conclusion, the study results suggested that the implicit curriculum that residents should be afforded autonomy and progressive independence with nominal supervision in accordance with their relevant skills and experience must be applied cautiously depending on specialization and experience. In addition, it is necessary to create a supportive and judgment free climate within the department that may reduce a resident's hesitation to consult the attending physician.

  19. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    PubMed

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
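
    For orientation, the classical additive-error simulation-extrapolation (SIMEX) idea that the paper builds on can be sketched as below; this is the generic additive-noise version with known error variance, not the binomial-error variant developed in the paper, and the data are synthetic.

      import numpy as np

      def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=0):
          """Generic SIMEX for simple linear regression with additive measurement
          error of known variance sigma_u**2 in the predictor: add extra noise at
          several levels lambda, then extrapolate the slope back to lambda = -1."""
          rng = np.random.default_rng(seed)
          naive = np.polyfit(w, y, 1)[0]
          lam_grid, slopes = [0.0], [naive]
          for lam in lambdas:
              sims = []
              for _ in range(n_sim):
                  w_extra = w + rng.normal(0.0, np.sqrt(lam) * sigma_u, size=w.shape)
                  sims.append(np.polyfit(w_extra, y, 1)[0])
              lam_grid.append(lam)
              slopes.append(np.mean(sims))
          coeffs = np.polyfit(lam_grid, slopes, 2)   # quadratic extrapolant
          return np.polyval(coeffs, -1.0)            # evaluate at lambda = -1 (no error)

      # Toy data: true slope 2, predictor observed with noise of sd 0.5
      rng = np.random.default_rng(1)
      x = rng.normal(size=300)
      y = 2.0 * x + rng.normal(0, 0.5, 300)
      w = x + rng.normal(0, 0.5, 300)
      print("naive slope:", np.polyfit(w, y, 1)[0])
      print("SIMEX slope:", simex_slope(w, y, sigma_u=0.5))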

  20. Social Acceptance; A Possible Mediator in the Association between Socio-Economic Deprivation and Under-18 Pregnancy Rates?

    ERIC Educational Resources Information Center

    Smith, Debbie Michelle; Roberts, Ron

    2009-01-01

    This study examines the social acceptance of young (under-18) pregnancy by assessing people's acceptance of young pregnancy and abortion in relation to deprivation. A cross-sectional survey design was conducted in two relatively affluent and two relatively deprived local authorities in London (n=570). Contrary to previous findings, participants…

  1. Can the Misinterpretation Amendment Rate Be Used as a Measure of Interpretive Error in Anatomic Pathology?: Implications of a Survey of the Directors of Anatomic and Surgical Pathology.

    PubMed

    Parkash, Vinita; Fadare, Oluwole; Dewar, Rajan; Nakhleh, Raouf; Cooper, Kumarasen

    2017-03-01

    A repeat survey of the Association of the Directors of Anatomic and Surgical Pathology, done 10 years after the original, was used to assess trends and variability in classifying scenarios as errors, and the preferred post-signout report modification for correcting error, by the membership of the Association of the Directors of Anatomic and Surgical Pathology. The results were analyzed to inform on whether interpretive amendment rates might act as surrogate measures of interpretive error in pathology. An analysis of the responses indicated that primary-level misinterpretations (benign to malignant and vice versa) were universally qualified as error; secondary-level misinterpretations or misclassifications were inconsistently labeled error. There was added variability in the preferred post-signout report modification used to correct report alterations. The classification of a scenario as error appeared to correlate with severity of potential harm of the missed call, the perceived subjectivity of the diagnosis, and ambiguity of reporting terminology. Substantial differences in policies for error detection and optimal reporting format were documented between departments. In conclusion, the inconsistency in labeling scenarios as error, disagreement about the optimal post-signout report modification for the correction of the error, and variability in error detection policies preclude the use of the misinterpretation amendment rate as a surrogate measure for error in anatomic pathology. There is little change in uniformity of definition, attitudes and perception of interpretive error in anatomic pathology in the last 10 years.

  2. Error-prone DnaE2 Balances the Genome Mutation Rates in Myxococcus xanthus DK1622.

    PubMed

    Peng, Ran; Chen, Jiang-He; Feng, Wan-Wan; Zhang, Zheng; Yin, Jun; Li, Ze-Shuo; Li, Yue-Zhong

    2017-01-01

    dnaE is an alpha subunit of the tripartite protein complex of DNA polymerase III that is responsible for the replication of the bacterial genome. The dnaE gene is often duplicated in many bacteria, and the duplicated dnaE gene has been reported to be dispensable for cell survival and error-prone in DNA replication, for reasons that have remained unclear. In this study, we found that all sequenced myxobacterial genomes possessed two dnaE genes. The duplicate dnaE genes were both highly conserved but evolved divergently, suggesting their importance in myxobacteria. Using Myxococcus xanthus DK1622 as a model, we confirmed that dnaE1 (MXAN_5844) was essential for cell survival, while dnaE2 (MXAN_3982) was dispensable and encoded an error-prone enzyme for replication. The deletion of dnaE2 had small effects on cellular growth and social motility, but significantly decreased the development and sporulation abilities, which could be recovered by the complementation of dnaE2. The expression of dnaE1 was always much higher than that of dnaE2 in either the growth or developmental stage. However, overexpression of dnaE2 could not make dnaE1 deletable, probably due to their protein structural and functional divergences. The dnaE2 overexpression not only improved the growth, development and sporulation abilities, but also raised the genome mutation rate of M. xanthus. We argued that the low-expressed error-prone DnaE2 acted as a balancer of the genome mutation rate, ensuring low mutation rates for cell adaptation in new environments while avoiding the damage that high mutation rates cause to cells.

  3. Error-prone DnaE2 Balances the Genome Mutation Rates in Myxococcus xanthus DK1622

    PubMed Central

    Peng, Ran; Chen, Jiang-he; Feng, Wan-wan; Zhang, Zheng; Yin, Jun; Li, Ze-shuo; Li, Yue-zhong

    2017-01-01

    dnaE is an alpha subunit of the tripartite protein complex of DNA polymerase III that is responsible for the replication of the bacterial genome. The dnaE gene is often duplicated in many bacteria, and the duplicated dnaE gene has been reported to be dispensable for cell survival and error-prone in DNA replication, for reasons that have remained unclear. In this study, we found that all sequenced myxobacterial genomes possessed two dnaE genes. The duplicate dnaE genes were both highly conserved but evolved divergently, suggesting their importance in myxobacteria. Using Myxococcus xanthus DK1622 as a model, we confirmed that dnaE1 (MXAN_5844) was essential for cell survival, while dnaE2 (MXAN_3982) was dispensable and encoded an error-prone enzyme for replication. The deletion of dnaE2 had small effects on cellular growth and social motility, but significantly decreased the development and sporulation abilities, which could be recovered by the complementation of dnaE2. The expression of dnaE1 was always much higher than that of dnaE2 in either the growth or developmental stage. However, overexpression of dnaE2 could not make dnaE1 deletable, probably due to their protein structural and functional divergences. The dnaE2 overexpression not only improved the growth, development and sporulation abilities, but also raised the genome mutation rate of M. xanthus. We argued that the low-expressed error-prone DnaE2 acted as a balancer of the genome mutation rate, ensuring low mutation rates for cell adaptation in new environments while avoiding the damage that high mutation rates cause to cells. PMID:28203231

  4. Rate Constants for Fine-structure Excitations in O–H Collisions with Error Bars Obtained by Machine Learning

    NASA Astrophysics Data System (ADS)

    Vieira, Daniel; Krems, Roman V.

    2017-02-01

    We present an approach using a combination of coupled channel scattering calculations with a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate constants for non-adiabatic transitions in inelastic atomic collisions to variations of the underlying adiabatic interaction potentials. Using this approach, we improve the previous computations of the rate constants for the fine-structure transitions in collisions of O(3Pj) with atomic H. We compute the error bars of the rate constants corresponding to 20% variations of the ab initio potentials and show that this method can be used to determine which of the individual adiabatic potentials are more or less important for the outcome of different fine-structure changing collisions.
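
    A minimal sketch of the emulator idea described above: fit a Gaussian Process to rate constants computed at a few scalings of the interaction potential and use the predictive standard deviation as an error-bar proxy over the ±20% range. The scikit-learn setup and the synthetic training values are assumptions for illustration, not the authors' implementation.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      # Synthetic stand-in: rate constant computed at a few scalings of the potential
      scaling = np.array([0.80, 0.90, 0.95, 1.00, 1.05, 1.10, 1.20]).reshape(-1, 1)
      log_rate = np.log10(1e-11 * (1.0 + 2.0 * (scaling.ravel() - 1.0) ** 2) *
                          np.exp(3.0 * (scaling.ravel() - 1.0)))

      kernel = ConstantKernel(1.0) * RBF(length_scale=0.1)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(scaling, log_rate)

      # Predict over the +/-20% range of potential variations mentioned in the abstract
      grid = np.linspace(0.8, 1.2, 41).reshape(-1, 1)
      mean, std = gp.predict(grid, return_std=True)
      print("predicted log10(rate) at 0.85 scaling:",
            mean[np.argmin(abs(grid.ravel() - 0.85))])
      print("max predictive std over the grid (error-bar proxy):", std.max())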

  5. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: rates, drivers, and sources of error

    USGS Publications Warehouse

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-01-01

    Tropical and sub-tropical wetlands are considered to be globally important sources for greenhouse gases but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida, to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. Accelerator mass spectroscopy dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m^-2 yr^-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (0.2 mm yr^-1) tracked the correspondingly slow rise in sea level (0.35 mm yr^-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  6. A survey of computational methods and error rate estimation procedures for peptide and protein identification in shotgun proteomics

    PubMed Central

    Nesvizhskii, Alexey I.

    2010-01-01

    This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide to spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. This review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from the peptide to the protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
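
    Among the global error-rate procedures surveyed above, the target-decoy false discovery rate estimate is the simplest to sketch: count decoy and target peptide-spectrum matches above a score threshold. The scores below are synthetic placeholders, not data from any particular search engine.

      import numpy as np

      def target_decoy_fdr(target_scores, decoy_scores, threshold):
          """Estimate the FDR among target PSMs scoring above `threshold` as
          (# decoy PSMs above threshold) / (# target PSMs above threshold),
          assuming one decoy database of the same size as the target database."""
          n_target = np.sum(np.asarray(target_scores) >= threshold)
          n_decoy = np.sum(np.asarray(decoy_scores) >= threshold)
          return np.nan if n_target == 0 else n_decoy / n_target

      # Placeholder peptide-spectrum match scores
      rng = np.random.default_rng(0)
      targets = np.concatenate([rng.normal(6, 1, 400), rng.normal(2, 1, 600)])  # true + false hits
      decoys = rng.normal(2, 1, 1000)                                           # false hits only
      for thr in (3.0, 4.0, 5.0):
          print(f"score >= {thr}: estimated FDR = {target_decoy_fdr(targets, decoys, thr):.3f}")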

  7. The acceptance rate of young wasps by alien colonies depends on colony developmental stages in the swarm-founding wasp, Polybia paulista von ihering (Hymenoptera: Vespidae).

    PubMed

    Kudô, Kazuyuki; Hozumi, Satoshi; Mateus, Sidnei; Zucchi, Ronaldo

    2010-01-01

    In social insects, newly emerged individuals learn the colony-specific chemical label from their natal comb shortly after their emergence. These labels help to identify each individual's colony of origin and are used as a recognition template against which individuals can discriminate nestmates from non-nestmates. Our previous studies with Polybia paulista von Ihering support this general pattern, and the acceptance rate of young female and male wasps decreased as a function of their age. Our study also showed in P. paulista that more than 90% of newly emerged female wasps might be accepted by conspecific unrelated colonies. However, it has not been investigated whether the acceptance rate of newly emerged female wasps depends on the developmental stage of the recipient colony. We introduced newly emerged female wasps of P. paulista into recipient colonies at different developmental stages, i.e., worker-producing and male-producing colonies. We found that the acceptance rate of newly emerged female wasps by alien colonies was considerably lower in male-producing colonies than in worker-producing colonies. This is the first study to show that the acceptance rate of young female wasps depends on the developmental stage of the recipient colony.

  8. Accurate Bit-Error Rate Evaluation for TH-PPM Systems in Nakagami Fading Channels Using Moment Generating Functions

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Gunawan, Erry; Law, Choi Look; Teh, Kah Chan

    Analytical expressions based on the Gauss-Chebyshev quadrature (GCQ) rule technique are derived to evaluate the bit-error rate (BER) for the time-hopping pulse position modulation (TH-PPM) ultra-wide band (UWB) systems under a Nakagami-m fading channel. The analyses are validated by the simulation results and adopted to assess the accuracy of the commonly used Gaussian approximation (GA) method. The influence of the fading severity on the BER performance of TH-PPM UWB system is investigated.
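
    A minimal sketch of the GCQ rule applied to an MGF-based BER integral; a Rayleigh-faded BPSK link (whose exact BER is known in closed form) stands in here for the TH-PPM/Nakagami-m expressions of the paper.

      import numpy as np

      def mgf_rayleigh(s, snr):
          """MGF of the instantaneous SNR for Rayleigh fading with average SNR `snr` (linear)."""
          return 1.0 / (1.0 - s * snr)

      def ber_gcq(mgf, snr, n=32):
          """Gauss-Chebyshev quadrature of the MGF-based BER integral
          P = (1/pi) * int_0^{pi/2} MGF(-1/sin^2(theta)) d(theta):
          the substitution x = cos(2*theta) turns it into a Chebyshev-weighted
          integral, giving P ~ (1/(2n)) * sum_k MGF(-1/sin^2(theta_k))
          with theta_k = (2k-1)*pi/(4n)."""
          k = np.arange(1, n + 1)
          theta = (2 * k - 1) * np.pi / (4 * n)
          return np.sum(mgf(-1.0 / np.sin(theta) ** 2, snr)) / (2 * n)

      snr_db = 10.0
      snr = 10 ** (snr_db / 10.0)
      exact = 0.5 * (1.0 - np.sqrt(snr / (1.0 + snr)))    # closed form for BPSK over Rayleigh
      print(f"GCQ BER:   {ber_gcq(mgf_rayleigh, snr):.6e}")
      print(f"exact BER: {exact:.6e}")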

  9. Evaluation of write error rate for voltage-driven dynamic magnetization switching in magnetic tunnel junctions with perpendicular magnetization

    NASA Astrophysics Data System (ADS)

    Shiota, Yoichi; Nozaki, Takayuki; Tamaru, Shingo; Yakushiji, Kay; Kubota, Hitoshi; Fukushima, Akio; Yuasa, Shinji; Suzuki, Yoshishige

    2016-01-01

    We investigated the write error rate (WER) for voltage-driven dynamic switching in magnetic tunnel junctions with perpendicular magnetization. We observed a clear oscillatory behavior of the switching probability with respect to the duration of pulse voltage, which reveals the precessional motion of magnetization during voltage application. We experimentally demonstrated WER as low as 4 × 10-3 at the pulse duration corresponding to a half precession period (˜1 ns). The comparison between the results of the experiment and simulation based on a macrospin model shows a possibility of ultralow WER (<10-15) under optimum conditions. This study provides a guideline for developing practical voltage-driven spintronic devices.

  10. Influence of beam wander on bit-error rate in a ground-to-satellite laser uplink communication system.

    PubMed

    Ma, Jing; Jiang, Yijun; Tan, Liying; Yu, Siyuan; Du, Wenhe

    2008-11-15

    Based on weak fluctuation theory and the beam-wander model, the bit-error rate of a ground-to-satellite laser uplink communication system is analyzed, in comparison with the condition in which beam wander is not taken into account. Considering the combined effect of scintillation and beam wander, optimum divergence angle and transmitter beam radius for a communication system are researched. Numerical results show that both of them increase with the increment of total link margin and transmitted wavelength. This work can benefit the ground-to-satellite laser uplink communication system design.

  11. Packet error rate analysis of OOK, DPIM, and PPM modulation schemes for ground-to-satellite laser uplink communications.

    PubMed

    Jiang, Yijun; Tao, Kunyu; Song, Yiwei; Fu, Sen

    2014-03-01

    Performance of on-off keying (OOK), digital pulse interval modulation (DPIM), and pulse position modulation (PPM) schemes are researched for ground-to-satellite laser uplink communications. Packet error rates of these modulation systems are compared, with consideration of the combined effect of intensity fluctuation and beam wander. Based on the numerical results, performances of different modulation systems are discussed. Optimum divergence angle and transmitted beam radius of different modulation systems are indicated and the relations of the transmitted laser power to them are analyzed. This work can be helpful for modulation scheme selection and system design in ground-to-satellite laser uplink communications.

  12. Acceptance rate, probability of follow-up, and expulsion of postpartum intrauterine contraceptive device offered at two primary health centers, North India

    PubMed Central

    Kant, Shashi; Archana, S.; Singh, Arvind Kumar; Ahamed, Farhad; Haldar, Partha

    2016-01-01

    Introduction: The acceptance rate of postpartum intrauterine contraceptive device (PPIUCD) offered through a public health approach is unknown. Our aim was to describe the acceptance rate, expulsion, and follow-up, and the factors associated with them, when PPIUCD was offered to women delivering at two primary health centers (PHCs). Methods: We analyzed routine health data of deliveries at two PHCs in district Faridabad, India, between May and December 2014, covering sociodemographic variables, obstetric history, and the 6-week postpartum follow-up check-up for in situ status of the intrauterine contraceptive device, side effects, and complications. Results: The overall acceptance rate among those eligible for PPIUCD was 39% (95% confidence interval [CI]: 35.1–42.9). Monthly family income was an independent predictor of acceptance. The expulsion rate and removal rate at 6 weeks postpartum were 18.0% and 13.0%, respectively. Expulsion by 6 weeks was associated with age >25 years (O.R.: 2.21, 95% CI: 1.03–4.73), gravida ≥4 (O.R.: 4.01, 95% CI: 1.28–12.56), and having a previous living child (O.R.: 1.51, 95% CI: 1.04–2.19). Conclusion: The acceptance rate of PPIUCD was higher than that reported in the literature. Women from lower-income families, having at least one living child, and having attended an antenatal care clinic were more likely to accept PPIUCD. PMID:28348988

  13. The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals

    PubMed Central

    Westbrook, Johanna I; Baysari, Melissa T; Li, Ling; Burke, Rosemary; Richardson, Katrina L; Day, Richard O

    2013-01-01

    Objectives To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS). To determine if the rate of system-related prescribing errors is greater than the rate of errors prevented. Methods Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS. System related errors were classified by manifestation (eg, wrong dose), mechanism, and severity. A mechanism typology comprised errors made: selecting items from drop-down menus; constructing orders; editing orders; or failing to complete new e-PS tasks. Proportions and rates of errors by manifestation, mechanism, and e-PS were calculated. Results 42.4% (n=493) of 1164 prescribing errors were system-related (78/100 admissions). This result did not differ by e-PS (MedChart 42.6% (95% CI 39.1 to 46.1); Cerner 41.9% (37.1 to 46.8)). For 13.4% (n=66) of system-related errors there was evidence that the error was detected prior to study audit. 27.4% (n=135) of system-related errors manifested as timing errors and 22.5% (n=111) wrong drug strength errors. Selection errors accounted for 43.4% (34.2/100 admissions), editing errors 21.1% (16.5/100 admissions), and failure to complete new e-PS tasks 32.0% (32.0/100 admissions). MedChart generated more selection errors (OR=4.17; p=0.00002) but fewer new task failures (OR=0.37; p=0.003) relative to the Cerner e-PS. The two systems prevented significantly more errors than they generated (220/100 admissions (95% CI 180 to 261) vs 78 (95% CI 66 to 91)). Conclusions System-related errors are frequent, yet few are detected. e-PS require new tasks of prescribers, creating additional cognitive load and error opportunities. Dual classification, by manifestation and mechanism, allowed identification of design features which increase risk and potential solutions. e-PS designs with fewer drop-down menu selections may reduce error risk. PMID:23721982

  14. Slow-growing cells within isogenic populations have increased RNA polymerase error rates and DNA damage

    PubMed Central

    van Dijk, David; Dhar, Riddhiman; Missarova, Alsu M.; Espinar, Lorena; Blevins, William R.; Lehner, Ben; Carey, Lucas B.

    2015-01-01

    Isogenic cells show a large degree of variability in growth rate, even when cultured in the same environment. Such cell-to-cell variability in growth can alter sensitivity to antibiotics, chemotherapy and environmental stress. To characterize transcriptional differences associated with this variability, we have developed a method—FitFlow—that enables the sorting of subpopulations by growth rate. The slow-growing subpopulation shows a transcriptional stress response, but, more surprisingly, these cells have reduced RNA polymerase fidelity and exhibit a DNA damage response. As DNA damage is often caused by oxidative stress, we test the addition of an antioxidant, and find that it reduces the size of the slow-growing population. More generally, we find a significantly altered transcriptome in the slow-growing subpopulation that only partially resembles that of cells growing slowly due to environmental and culture conditions. Slow-growing cells upregulate transposons and express more chromosomal, viral and plasmid-borne transcripts, and thus explore a larger genotypic—and so phenotypic — space. PMID:26268986

  15. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy

    SciTech Connect

    Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-15

    Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with ¹⁹²Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from ±5 to ±15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when

  16. Errors in administrative-reported ventilator-associated pneumonia rates: are never events really so?

    PubMed

    Thomas, Bradley W; Maxwell, Robert A; Dart, Benjamin W; Hartmann, Elizabeth H; Bates, Dustin L; Mejia, Vicente A; Smith, Philip W; Barker, Donald E

    2011-08-01

    Ventilator-associated pneumonia (VAP) is a common problem in an intensive care unit (ICU), although the incidence is not well established. This study aims to compare the VAP incidence as determined by the treating surgical intensivist with that detected by the hospital Infection Control Service (ICS). Trauma and surgical patients admitted to the surgical critical care service were prospectively evaluated for VAP during a 5-month time period. Collected data included the surgical intensivist's clinical VAP (SIS-VAP) assessment using Centers for Disease Control and Prevention (CDC) VAP criteria. As part of the hospital's VAP surveillance program, these patients' medical records were also reviewed by the ICS for VAP (ICS-VAP) using the same CDC VAP criteria. All patients suspected of having VAP underwent bronchoalveolar lavage (BAL). The SIS-VAP and ICS-VAP were then compared with BAL-VAP. Three hundred twenty-nine patients were admitted to the ICU during the study period. One hundred thirty-three were intubated longer than 48 hours and comprised our study population. Sixty-two patients underwent BAL evaluation for the presence of VAP on 89 occasions. SIS-VAP was diagnosed in 38 (28.5%) patients. ICS-VAP was identified in 11 (8.3%) patients (P < 0.001). The incidence of VAP by BAL criteria was 23.3 per cent. When compared with BAL, SIS-VAP had 61.3 per cent sensitivity and ICS-VAP had 29 per cent sensitivity. VAP rates reported by hospital administrative sources are significantly less accurate than physician-reported rates and dramatically underestimate the incidence of VAP. Proclaiming VAP as a never event for critically ill surgical and trauma patients appears to be a fallacy.

  17. Movement error rate for evaluation of machine learning methods for sEMG-based hand movement classification.

    PubMed

    Gijsberts, Arjan; Atzori, Manfredo; Castellini, Claudio; Muller, Henning; Caputo, Barbara

    2014-07-01

    There has been increasing interest in applying learning algorithms to improve the dexterity of myoelectric prostheses. In this work, we present a large-scale benchmark evaluation on the second iteration of the publicly released NinaPro database, which contains surface electromyography data for 6 DOF force activations as well as for 40 discrete hand movements. The evaluation involves a modern kernel method and compares performance of three feature representations and three kernel functions. Both the force regression and movement classification problems can be learned successfully when using a nonlinear kernel function, while the exp-χ² kernel outperforms the more popular radial basis function kernel in all cases. Furthermore, combining surface electromyography and accelerometry in a multimodal classifier results in significant increases in accuracy as compared to when either modality is used individually. Since window-based classification accuracy should not be considered in isolation to estimate prosthetic controllability, we also provide results in terms of classification mistakes and prediction delay. To this extent, we propose the movement error rate as an alternative to the standard window-based accuracy. This error rate is insensitive to prediction delays and it allows us therefore to quantify mistakes and delays as independent performance characteristics. This type of analysis confirms that the inclusion of accelerometry is superior, as it results in fewer mistakes while at the same time reducing prediction delay.

  18. Personal digital assistants to collect tuberculosis bacteriology data in Peru reduce delays, errors, and workload, and are acceptable to users: cluster randomized controlled trial

    PubMed Central

    Blaya, Joaquín A.; Cohen, Ted; Rodríguez, Pablo; Kim, Jihoon; Fraser, Hamish S.F.

    2009-01-01

    Summary Objectives To evaluate the effectiveness of a personal digital assistant (PDA)-based system for collecting tuberculosis test results and to compare this new system to the previous paper-based system. The PDA- and paper-based systems were evaluated based on processing times, frequency of errors, and number of work-hours expended by data collectors. Methods We conducted a cluster randomized controlled trial in 93 health establishments in Peru. Baseline data were collected for 19 months. Districts (n = 4) were then randomly assigned to intervention (PDA) or control (paper) groups, and further data were collected for 6 months. Comparisons were made between intervention and control districts and within-districts before and after the introduction of the intervention. Results The PDA-based system had a significant effect on processing times (p < 0.001) and errors (p = 0.005). In the between-districts comparison, the median processing time for cultures was reduced from 23 to 8 days and for smears was reduced from 25 to 12 days. In that comparison, the proportion of cultures with delays >90 days was reduced from 9.2% to 0.1% and the number of errors was decreased by 57.1%. The intervention reduced the work-hours necessary to process results by 70% and was preferred by all users. Conclusions A well-designed PDA-based system to collect data from institutions over a large, resource-poor area can significantly reduce delays, errors, and person-hours spent processing data. PMID:19097925

  19. Assessment of error rates in acoustic monitoring with the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR’s automated detection process uses a ‘score cutoff’, which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cut-offs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were  for song event detection.

  20. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
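
    A hypothetical sketch of the inflation mechanism described above (not the GAW 19 data or settings): under the null of no SNV effect, a skewed (gamma-distributed) trait regressed on a rare variant produces too many small p-values, while a normal trait does not. Sample size, minor allele frequencies, gamma shape, and the alpha level below are illustrative choices.

      # Hedged simulation: empirical type I error of simple linear regression
      # for common vs. rare SNVs and normal vs. skewed traits, under the null.
      import numpy as np
      from scipy import stats

      def empirical_type1(maf, trait, n=2000, n_sims=5000, alpha=0.01, seed=7):
          rng = np.random.default_rng(seed)
          rejections, valid = 0, 0
          for _ in range(n_sims):
              g = rng.binomial(2, maf, size=n)          # additive genotype coded 0/1/2
              if g.std() == 0:                          # skip monomorphic draws
                  continue
              if trait == "normal":
                  y = rng.normal(size=n)
              else:                                     # strongly right-skewed trait
                  y = rng.gamma(shape=0.5, scale=1.0, size=n)
              valid += 1
              if stats.linregress(g, y).pvalue < alpha:
                  rejections += 1
          return rejections / valid

      for maf in (0.20, 0.01, 0.002):
          print(maf, empirical_type1(maf, "normal"), empirical_type1(maf, "gamma"))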

  1. Modelling non-linear redshift-space distortions in the galaxy clustering pattern: systematic errors on the growth rate parameter

    NASA Astrophysics Data System (ADS)

    de la Torre, Sylvain; Guzzo, Luigi

    2012-11-01

    We investigate the ability of state-of-the-art redshift-space distortion models for the galaxy anisotropic two-point correlation function, ξ(r⊥, r∥), to recover precise and unbiased estimates of the linear growth rate of structure f, when applied to catalogues of galaxies characterized by a realistic bias relation. To this aim, we make use of a set of simulated catalogues at z = 0.1 and 1 with different luminosity thresholds, obtained by populating dark matter haloes from a large N-body simulation using halo occupation prescriptions. We examine the most recent developments in redshift-space distortion modelling, which account for non-linearities on both small and intermediate scales produced, respectively, by randomized motions in virialized structures and non-linear coupling between the density and velocity fields. We consider the possibility of including the linear component of galaxy bias as a free parameter and directly estimate the growth rate of structure f. Results are compared to those obtained using the standard dispersion model, over different ranges of scales. We find that the model of Taruya et al., the most sophisticated one considered in this analysis, provides in general the most unbiased estimates of the growth rate of structure, with systematic errors within ±4 per cent over a wide range of galaxy populations spanning luminosities between L > L* and L > 3L*. The scale dependence of galaxy bias plays a role on recovering unbiased estimates of f when fitting quasi-non-linear scales. Its effect is particularly severe for most luminous galaxies, for which systematic effects in the modelling might be more difficult to mitigate and have to be further investigated. Finally, we also test the impact of neglecting the presence of non-negligible velocity bias with respect to mass in the galaxy catalogues. This can produce an additional systematic error of the order of 1-3 per cent depending on the redshift, comparable to the statistical errors the we

  2. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities

    PubMed Central

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-01-01

    Introduction Sound is among the significant environmental factors for people’s health, and it has an important role in both physical and psychological injuries, and it also affects individuals’ performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. Methods This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person was considered as his or her own control to assess the effect of noise on performance at the sound levels of 70, 90, and 110 dB by using two factors of physical features and the creation of different conditions of sound source as well as applying the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated-measures analyses were used to compare the length of performance as well as the errors measured in the test. Results Based on the results, we found a direct and significant association between the levels of sound and the length of performance. Moreover, the participant’s performance was significantly different for different sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). Conclusion This study found that a sound level of 110 dB had an important effect on the individuals’ performances, i.e., the performances were decreased. PMID:27123216

  3. The Effects of Type I Error Rate and Power of the ANCOVA "F" Test and Selected Alternatives under Nonnormality and Variance Heterogeneity.

    ERIC Educational Resources Information Center

    Rheinheimer, David C.; Penfield, Douglas A.

    2001-01-01

    Studied, through Monte Carlo simulation, the conditions for which analysis of covariance (ANCOVA) does not maintain adequate Type I error rates and power and evaluated some alternative tests. Discusses differences in ANCOVA robustness for balanced and unbalanced designs. (SLD)

  4. Exact error rate analysis of equal gain and selection diversity for coherent free-space optical systems on strong turbulence channels.

    PubMed

    Niu, Mingbo; Cheng, Julian; Holzman, Jonathan F

    2010-06-21

    Exact error rate performances are studied for coherent free-space optical communication systems under strong turbulence with diversity reception. Equal gain and selection diversity are considered as practical schemes to mitigate turbulence. The exact bit-error rate for binary phase-shift keying and the outage probability are derived for equal gain diversity. Analytical expressions are obtained for the bit-error rate of differential phase-shift keying and asynchronous frequency-shift keying, as well as for the outage probability using selection diversity. Furthermore, we provide closed-form expressions for the diversity order and coding gain with both diversity receptions. The analytical results are verified by computer simulations and are suitable for rapid error rate calculations.

  5. Outage Performance and Average Symbol Error Rate of M-QAM for Maximum Ratio Combining with Multiple Interferers

    NASA Astrophysics Data System (ADS)

    Ahn, Kyung Seung

    In this paper, we investigate the performance of maximum ratio combining (MRC) in the presence of multiple cochannel interferers over a flat Rayleigh fading channel. Closed-form expressions for the signal-to-interference-plus-noise ratio (SINR), outage probability, and average symbol error rate (SER) of M-ary quadrature amplitude modulation (QAM) are obtained for unequal-power interference-to-noise ratios (INRs). We also provide an upper bound for the average SER using the moment generating function (MGF) of the SINR. Moreover, we quantify the array gain loss between pure MRC (MRC in the absence of cochannel interference, CCI) and MRC in the presence of CCI. Finally, we verify our analytical results by numerical simulations.
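
    As an illustrative companion (a Monte Carlo check, not the paper's closed-form SINR, outage, or SER expressions): an L-branch MRC combiner matched to the desired user only, operating with K Rayleigh-faded cochannel interferers of unequal power, yields the post-combining SINR used below, from which an outage probability can be estimated. The branch count, interferer powers, and outage threshold are example values.

      # Hedged Monte Carlo sketch: outage probability of the post-MRC SINR with
      # multiple unequal-power Rayleigh-faded cochannel interferers.
      import numpy as np

      def mrc_outage(L, K, snr_db, inr_db_list, thresh_db, n_trials=200_000, seed=3):
          rng = np.random.default_rng(seed)
          snr = 10 ** (snr_db / 10)
          inr = np.array([10 ** (x / 10) for x in inr_db_list])   # per-interferer INR
          thresh = 10 ** (thresh_db / 10)
          # Complex Gaussian channels: desired (n_trials, L), interferers (n_trials, L, K)
          h = (rng.normal(size=(n_trials, L)) + 1j * rng.normal(size=(n_trials, L))) / np.sqrt(2)
          g = (rng.normal(size=(n_trials, L, K)) + 1j * rng.normal(size=(n_trials, L, K))) / np.sqrt(2)
          hh = np.sum(np.abs(h) ** 2, axis=1)                     # ||h||^2
          # MRC weights w = h; leakage power from interferer k is |h^H g_k|^2
          cross = np.abs(np.einsum("nl,nlk->nk", h.conj(), g)) ** 2
          sinr = snr * hh ** 2 / (cross @ inr + hh)               # noise power normalised to 1
          return np.mean(sinr < thresh)

      print(mrc_outage(L=2, K=3, snr_db=10, inr_db_list=[0, 0, 3], thresh_db=5))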

  6. The Effects of Positional Stress and Receiver Apprehension on Leniency Errors in Speech Evaluation: A Test of the Rating Error Paradigm.

    ERIC Educational Resources Information Center

    Bock, Douglas G.; Bock, E. Hope

    1984-01-01

    Tested variables that affect how students rate speeches delivered by their classmates. Found, for example, that students who rate the speeches before giving their own are more positively lenient than they are when rating those speeches given after they deliver their own speeches. (PD)

  7. Comparison of four different mobile devices for measuring heart rate and ECG with respect to aspects of usability and acceptance by older people.

    PubMed

    Ehmen, Hilko; Haesner, Marten; Steinke, Ines; Dorn, Mario; Gövercin, Mehmet; Steinhagen-Thiessen, Elisabeth

    2012-05-01

    In the area of product design and usability, most products are developed for the mass-market by technically oriented designers and developers for use by persons who themselves are also technically adept by today's standards. The demands of older people are commonly not given sufficient consideration within the early developmental process. In the present study, the usability and acceptability of four different devices meant to be worn for the measurement of heart rate or ECG were analyzed on the basis of qualitative subjective user ratings and structured interviews of twelve older participants. The data suggest that there was a relatively high acceptance concerning these belts by older adults but none of the four harnesses was completely usable. Especially problematic to the point of limiting satisfaction among older subjects were problems encountered while adjusting the length of the belt and/or closing the locking mechanism. The two devices intended for dedicated heart rate recording yielded the highest user ratings for design, and were clearly preferred for extended wearing time. Yet for all the devices participants identified several important deficiencies in their design, as well as suggestions for improvement. We conclude that the creation of an acceptable monitoring device for older persons requires designers and developers to consider the special demands and abilities of the target group.

  8. Water-balance uncertainty in Honduras: a limits-of-acceptability approach to model evaluation using a time-variant rating curve

    NASA Astrophysics Data System (ADS)

    Westerberg, I.; Guerrero, J.-L.; Beven, K.; Seibert, J.; Halldin, S.; Lundin, L.-C.; Xu, C.-Y.

    2009-04-01

    The climate of Central America is highly variable both spatially and temporally; extreme events like floods and droughts are recurrent phenomena posing great challenges to regional water-resources management. Scarce and low-quality hydro-meteorological data complicate hydrological modelling and few previous studies have addressed the water-balance in Honduras. In the alluvial Choluteca River, the river bed changes over time as fill and scour occur in the channel, leading to a fast-changing relation between stage and discharge and difficulties in deriving consistent rating curves. In this application of a four-parameter water-balance model, a limits-of-acceptability approach to model evaluation was used within the General Likelihood Uncertainty Estimation (GLUE) framework. The limits of acceptability were determined for discharge alone for each time step, and ideally a simulated result should always be contained within the limits. A moving-window weighted fuzzy regression of the ratings, based on estimated uncertainties in the rating-curve data, was used to derive the limits. This provided an objective way to determine the limits of acceptability and handle the non-stationarity of the rating curves. The model was then applied within GLUE and evaluated using the derived limits. Preliminary results show that the best simulations are within the limits 75-80% of the time, indicating that precipitation data and other uncertainties like model structure also have a significant effect on predictability.
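
    A minimal sketch of the limits-of-acceptability scoring step only (the moving-window fuzzy rating-curve regression that produces the limits is not reproduced, and the array shapes and tolerances are assumed): a model run "passes" at a time step when its simulated discharge falls inside that step's acceptability limits, and the score is the fraction of passing steps.

      # Hedged sketch of the acceptability check, with synthetic stand-in data.
      import numpy as np

      def fraction_within_limits(q_sim, q_lower, q_upper):
          """q_sim, q_lower, q_upper: 1-D arrays of discharge per time step."""
          inside = (q_sim >= q_lower) & (q_sim <= q_upper)
          return inside.mean()

      rng = np.random.default_rng(0)
      q_obs = rng.gamma(shape=2.0, scale=5.0, size=365)        # daily discharge, arbitrary units
      q_lo, q_hi = 0.75 * q_obs, 1.25 * q_obs                  # stand-in uncertainty limits
      q_sim = q_obs * rng.normal(1.05, 0.15, size=365)         # imperfect simulation
      print(f"within limits {100 * fraction_within_limits(q_sim, q_lo, q_hi):.1f}% of time steps")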

  9. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  10. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  11. Sentinel lymph node biopsy after neo-adjuvant chemotherapy in patients with breast cancer: Are the current false negative rates acceptable?

    PubMed

    Patten, D K; Zacharioudakis, K E; Chauhan, H; Cleator, S J; Hadjiminas, D J

    2015-08-01

    The advent of sentinel lymph node biopsy has revolutionised surgical management of axillary nodal disease in patients with breast cancer. Patients undergoing neo-adjuvant chemotherapy for large breast primary tumours may experience complete pathological response on a previously positive sentinel node whilst not eliminating the tumour from the other lymph nodes. Results from 2 large prospective cohort studies investigating sentinel lymph node biopsy after neo-adjuvant chemotherapy demonstrate a combined false negative rate of 12.6-14.2% and identification rate of 80-89% with the minimal acceptable false negative rate and identification rate being set at 10% and 90%, respectively. A false negative rate of 14% would have been classified as unacceptable when compared to the figures obtained by the pioneers of sentinel lymph node biopsy which was 5% or less.

  12. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    ERIC Educational Resources Information Center

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided from the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206 errors, 96 errors and 35 errors per 1000 words. The following section…

  13. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    PubMed

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-02-10

    Population divergence impacts the degree of population stratification in genome-wide association studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
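
    A hypothetical sketch of the confounding mechanism (illustrative allele frequencies and trait shifts, not the paper's simulation design): when allele frequency and trait mean both differ by ancestry, an uncorrected regression rejects the null too often, and adding an ancestry covariate, standing in here for the genomic MDS components, restores the nominal rate.

      # Hedged simulation: type I error with and without correction for ancestry.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(11)
      n, n_sims, alpha = 1000, 2000, 0.05
      pop = rng.binomial(1, 0.5, size=(n_sims, n))            # ancestry label per subject
      freq = np.where(pop == 0, 0.10, 0.40)                   # divergent allele frequencies
      hits_raw = hits_adj = 0
      for s in range(n_sims):
          g = rng.binomial(2, freq[s])                        # genotype, no true effect on trait
          y = 0.5 * pop[s] + rng.normal(size=n)               # trait shifted by ancestry only
          p_raw = sm.OLS(y, sm.add_constant(g)).fit().pvalues[1]
          X = sm.add_constant(np.column_stack([g, pop[s]]))   # add ancestry covariate
          p_adj = sm.OLS(y, X).fit().pvalues[1]
          hits_raw += p_raw < alpha
          hits_adj += p_adj < alpha
      print("uncorrected type I error:      ", hits_raw / n_sims)
      print("ancestry-adjusted type I error:", hits_adj / n_sims)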

  14. Packet error rate analysis of digital pulse interval modulation in intersatellite optical communication systems with diversified wavefront deformation.

    PubMed

    Zhu, Jin; Wang, Dayan; Xie, Wanqing

    2015-02-20

    Diversified wavefront deformation is an inevitable phenomenon in intersatellite optical communication systems, which will decrease system performance. In this paper, we investigate the description of wavefront deformation and its influence on the packet error rate (PER) of digital pulse interval modulation (DPIM). With the wavelet method, the diversified wavefront deformation can be described by wavelet parameters: coefficient, dilation, and shift factors, where the coefficient factor represents the depth, dilation factor represents the area, and shift factor is for location. Based on this, the relationship between PER and wavelet parameters is analyzed from a theoretical viewpoint. Numerical results illustrate the validity of theoretical analysis: PER increases with the depth and area and decreases if location gets farther from the center of the optical antenna. In addition to describing diversified deformation, the advantage of the wavelet method over Zernike polynomials in computational complexity is shown via numerical example. This work provides a feasible method for the description along with influence analysis of diversified wavefront deformation from a practical viewpoint and will be helpful for designing optical systems.

  15. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  16. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media.

    PubMed

    Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C

    2016-06-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.

  17. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.

  18. American Recovery and Reinvestment Act of 2009. Interim Report on Customer Acceptance, Retention, and Response to Time-Based Rates from the Consumer Behavior Studies

    SciTech Connect

    Cappers, Peter; Hans, Liesel; Scheer, Richard

    2015-06-01

    Time-based rate programs, enabled by utility investments in advanced metering infrastructure (AMI), are increasingly being considered by utilities as tools to reduce peak demand and enable customers to better manage consumption and costs. There are several customer systems that are relatively new to the marketplace and have the potential for improving the effectiveness of these programs, including in-home displays (IHDs), programmable communicating thermostats (PCTs), and web portals. Policy and decision makers are interested in more information about customer acceptance, retention, and response before moving forward with expanded deployments of AMI-enabled new rates and technologies. Under the Smart Grid Investment Grant Program (SGIG), the U.S. Department of Energy (DOE) partnered with several utilities to conduct consumer behavior studies (CBS). The goals involved applying randomized and controlled experimental designs for estimating customer responses more precisely and credibly to advance understanding of time-based rates and customer systems, and provide new information for improving program designs, implementation strategies, and evaluations. The intent was to produce more robust and credible analysis of impacts, costs, benefits, and lessons learned and assist utility and regulatory decision makers in evaluating investment opportunities involving time-based rates. To help achieve these goals, DOE developed technical guidelines to help the CBS utilities estimate customer acceptance, retention, and response more precisely.

  19. Automated measurement of the bit-error rate as a function of signal-to-noise ratio for microwave communications systems

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Daugherty, Elaine S.; Kramarchuk, Ihor

    1987-01-01

    The performance of microwave systems and components for digital data transmission can be characterized by a plot of the bit-error rate as a function of the signal-to-noise ratio (or Eb/N0). Methods for the efficient automated measurement of bit-error rates and signal-to-noise ratios, developed at NASA Lewis Research Center, are described. Noise measurement considerations and time requirements for measurement accuracy, as well as computer control and data processing methods, are discussed.
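
    A self-contained software stand-in for such a sweep (the NASA measurement hardware and its noise calibration are not reproduced): Monte Carlo BER of coherent BPSK at a series of Eb/N0 set points, printed next to the theoretical value 0.5*erfc(sqrt(Eb/N0)). Note how the highest set points accumulate only a handful of errors at this block length, which is the measurement-time issue the abstract alludes to.

      # Hedged sketch: measured vs. theoretical BER of BPSK over an Eb/N0 sweep.
      import numpy as np
      from scipy.special import erfc

      rng = np.random.default_rng(2)
      n_bits = 1_000_000
      bits = rng.integers(0, 2, n_bits)
      symbols = 2 * bits - 1                          # BPSK mapping: 0 -> -1, 1 -> +1

      for ebn0_db in range(0, 11, 2):
          ebn0 = 10 ** (ebn0_db / 10)
          sigma = np.sqrt(1 / (2 * ebn0))             # noise std for unit-energy symbols
          rx = symbols + sigma * rng.normal(size=n_bits)
          ber_meas = np.mean((rx > 0).astype(int) != bits)
          ber_theory = 0.5 * erfc(np.sqrt(ebn0))
          print(f"Eb/N0 = {ebn0_db:2d} dB  measured {ber_meas:.2e}  theory {ber_theory:.2e}")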

  20. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  1. Cognitive illusions of authorship reveal hierarchical error detection in skilled typists.

    PubMed

    Logan, Gordon D; Crump, Matthew J C

    2010-10-29

    The ability to detect errors is an essential component of cognitive control. Studies of error detection in humans typically use simple tasks and propose single-process theories of detection. We examined error detection by skilled typists and found illusions of authorship that provide evidence for two error-detection processes. We corrected errors that typists made and inserted errors in correct responses. When asked to report errors, typists took credit for corrected errors and accepted blame for inserted errors, claiming authorship for the appearance of the screen. However, their typing rate showed no evidence of these illusions, slowing down after corrected errors but not after inserted errors. This dissociation suggests two error-detection processes: one sensitive to the appearance of the screen and the other sensitive to keystrokes.

  2. Bit-error rate performance of coherent optical M-ary PSK/QAM using decision-aided maximum likelihood phase estimation.

    PubMed

    Yu, Changyuan; Zhang, Shaoliang; Kam, Pooi Yuen; Chen, Jian

    2010-06-07

    The bit-error rate (BER) expressions of 16-ary phase-shift keying (PSK) and 16-ary quadrature amplitude modulation (QAM) are analytically obtained in the presence of a phase error. By averaging over the statistics of the phase error, the performance penalty can be analytically examined as a function of the phase error variance. The phase error variances leading to a 1-dB signal-to-noise ratio per bit penalty at BER = 10⁻⁴ have been found to be 8.7 × 10⁻² rad², 1.2 × 10⁻² rad², 2.4 × 10⁻³ rad², 6.0 × 10⁻⁴ rad² and 2.3 × 10⁻³ rad² for binary, quadrature, 8-, and 16-PSK and 16QAM, respectively. With the knowledge of the allowable phase error variance, the corresponding laser linewidth tolerance can be predicted. We extend the phase error variance analysis of decision-aided maximum likelihood carrier phase estimation in M-ary PSK to 16QAM, and successfully predict the laser linewidth tolerance in different modulation formats, which agrees well with the Monte Carlo simulations. Finally, approximate BER expressions for different modulation formats are introduced to allow a quick estimation of the BER performance as a function of the phase error variance. Further, the BER approximations give a lower bound on the laser linewidth requirements in M-ary PSK and 16QAM. It is shown that as far as laser linewidth tolerance is concerned, 16QAM outperforms 16PSK, which has the same spectral efficiency (SE), and has nearly the same performance as 8PSK, which has lower SE. Thus, 16-QAM is a promising modulation format for high SE coherent optical communications.
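
    A hedged Monte Carlo counterpart (not the paper's analytical BER expressions, and it will not reproduce their exact penalty figures): Gray-mapped 16QAM in AWGN with a residual carrier phase error drawn per symbol from N(0, sigma_p^2), showing how the BER degrades as the phase-error variance grows. The Eb/N0 point and the variance values are examples.

      # Hedged sketch: 16QAM BER vs. residual phase-error variance, Monte Carlo.
      import numpy as np

      GRAY = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}   # Gray map for one 4-PAM rail
      INV = {v: k for k, v in GRAY.items()}
      LEVELS = np.array([-3, -1, 1, 3])

      def qam16_ber(ebn0_db, phase_var, n_sym=100_000, seed=5):
          rng = np.random.default_rng(seed)
          bits = rng.integers(0, 2, size=(n_sym, 4))
          i = np.array([GRAY[(b[0], b[1])] for b in bits])
          q = np.array([GRAY[(b[2], b[3])] for b in bits])
          s = (i + 1j * q) / np.sqrt(10)                      # unit average symbol energy
          ebn0 = 10 ** (ebn0_db / 10)
          n0 = 1 / (4 * ebn0)                                 # Es = 1, 4 bits per symbol
          noise = np.sqrt(n0 / 2) * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
          phase = rng.normal(0.0, np.sqrt(phase_var), size=n_sym)
          r = (s * np.exp(1j * phase) + noise) * np.sqrt(10)  # back to the +/-1, +/-3 grid
          i_hat = LEVELS[np.argmin(np.abs(r.real[:, None] - LEVELS), axis=1)]
          q_hat = LEVELS[np.argmin(np.abs(r.imag[:, None] - LEVELS), axis=1)]
          bits_hat = np.array([INV[a] + INV[b] for a, b in zip(i_hat, q_hat)])
          return np.mean(bits_hat != bits)

      for var in (0.0, 2.3e-3, 1e-2):                         # phase-error variance, rad^2
          print(var, qam16_ber(ebn0_db=12, phase_var=var))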

  3. Heritability and molecular genetic basis of antisaccade eye tracking error rate: A genome-wide association study

    PubMed Central

    Vaidyanathan, Uma; Malone, Stephen M.; Donnelly, Jennifer M.; Hammer, Micah A.; Miller, Michael B.; McGue, Matt; Iacono, William G.

    2014-01-01

    Antisaccade deficits reflect abnormalities in executive function linked to various disorders including schizophrenia, externalizing psychopathology, and neurological conditions. We examined the genetic bases of antisaccade error in a sample of community-based twins and parents (N = 4,469). Biometric models showed that about half of the variance in the antisaccade response was due to genetic factors and half due to nonshared environmental factors. Molecular genetic analyses supported these results, showing that the heritability accounted for by common molecular genetic variants approximated biometric estimates. Genome-wide analyses revealed several SNPs as well as two genes—B3GNT7 and NCL—on Chromosome 2 associated with antisaccade error. SNPs and genes hypothesized to be associated with antisaccade error based on prior work, although generating some suggestive findings for MIR137, GRM8, and CACNG2, could not be confirmed. PMID:25387707

  4. Internal Consistency, Test–Retest Reliability and Measurement Error of the Self-Report Version of the Social Skills Rating System in a Sample of Australian Adolescents

    PubMed Central

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test–retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the idea of using multiple informants (e.g. teacher and parent reports), not just the student, as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID). PMID:24040116

  5. Resident Physicians' Clinical Training and Error Rate: The Roles of Autonomy, Consultation, and Familiarity with the Literature

    ERIC Educational Resources Information Center

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-01-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores…

  6. Estimating the designated use attainment decision error rates of US Environmental Protection Agency's proposed numeric total phosphorus criteria for Florida, USA, colored lakes.

    PubMed

    McLaughlin, Douglas B

    2012-01-01

    The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a 3rd error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
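
    An illustrative simulation of criterion-based decision error rates in the spirit of the analysis described above; the log-linear coefficients, scatter, total P distribution, and TP criterion below are made-up stand-ins, not USEPA's fitted values for Florida lakes.

      # Hedged sketch: decision error rates when a TP criterion is used to judge
      # attainment of a chlorophyll a based designated-use threshold.
      import numpy as np

      rng = np.random.default_rng(4)
      n_lakes = 50_000
      log_tp = rng.normal(np.log10(30), 0.35, n_lakes)               # geometric-mean TP, ug/L
      log_chl = -0.5 + 1.0 * log_tp + rng.normal(0, 0.25, n_lakes)   # assumed log-linear link
      chl, tp = 10 ** log_chl, 10 ** log_tp

      chl_threshold = 20.0      # designated-use response threshold (ug/L chlorophyll a)
      tp_criterion = 40.0       # hypothetical numeric TP criterion (ug/L)

      impaired = chl > chl_threshold        # "true" use status from the response variable
      exceeds = tp > tp_criterion           # decision made from the TP criterion alone

      type_i = np.mean(exceeds & ~impaired) / np.mean(~impaired)     # flagged though attaining
      type_ii = np.mean(~exceeds & impaired) / np.mean(impaired)     # missed though impaired
      print(f"type I-like error rate:  {type_i:.2f}")
      print(f"type II-like error rate: {type_ii:.2f}")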

  7. Error-estimation-guided rebuilding of de novo models increases the success rate of ab initio phasing.

    PubMed

    Shrestha, Rojan; Simoncini, David; Zhang, Kam Y J

    2012-11-01

    Recent advancements in computational methods for protein-structure prediction have made it possible to generate the high-quality de novo models required for ab initio phasing of crystallographic diffraction data using molecular replacement. Despite those encouraging achievements in ab initio phasing using de novo models, its success is limited only to those targets for which high-quality de novo models can be generated. In order to increase the scope of targets to which ab initio phasing with de novo models can be successfully applied, it is necessary to reduce the errors in the de novo models that are used as templates for molecular replacement. Here, an approach is introduced that can identify and rebuild the residues with larger errors, which subsequently reduces the overall Cα root-mean-square deviation (CA-RMSD) from the native protein structure. The error in a predicted model is estimated from the average pairwise geometric distance per residue computed among selected lowest energy coarse-grained models. This score is subsequently employed to guide a rebuilding process that focuses on more error-prone residues in the coarse-grained models. This rebuilding methodology has been tested on ten protein targets that were unsuccessful using previous methods. The average CA-RMSD of the coarse-grained models was improved from 4.93 to 4.06 Å. For those models with CA-RMSD less than 3.0 Å, the average CA-RMSD was improved from 3.38 to 2.60 Å. These rebuilt coarse-grained models were then converted into all-atom models and refined to produce improved de novo models for molecular replacement. Seven diffraction data sets were successfully phased using rebuilt de novo models, indicating the improved quality of these rebuilt de novo models and the effectiveness of the rebuilding process. Software implementing this method, called MORPHEUS, can be downloaded from http://www.riken.jp/zhangiru/software.html.
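
    A minimal sketch of the error-estimation idea (assumed array shapes and synthetic data, not the MORPHEUS implementation): for an ensemble of low-energy coarse-grained models already superposed in a common frame, each residue is scored by its average pairwise Cα distance across models, and high-scoring residues become rebuilding candidates.

      # Hedged sketch: flag error-prone residues by mean pairwise distance
      # across an ensemble of superposed models.
      import numpy as np

      def per_residue_error(models):
          """models: array of shape (n_models, n_residues, 3) with superposed CA coords."""
          n_models = models.shape[0]
          diff = models[:, None, :, :] - models[None, :, :, :]          # (M, M, N, 3)
          dist = np.linalg.norm(diff, axis=-1)                          # (M, M, N)
          iu = np.triu_indices(n_models, k=1)                           # unique model pairs
          return dist[iu].mean(axis=0)                                  # (N,) mean pairwise distance

      rng = np.random.default_rng(8)
      trace = np.cumsum(rng.normal(0, 1.5, size=(1, 120, 3)), axis=1)   # fake backbone trace
      ensemble = trace + rng.normal(0, 0.8, size=(10, 120, 3))          # 10 noisy "models"
      ensemble[:, 60:80] += rng.normal(0, 3.0, size=(10, 20, 3))        # one error-prone segment
      scores = per_residue_error(ensemble)
      print("residues flagged for rebuilding:", np.where(scores > scores.mean() + scores.std())[0])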

  8. Intermittent auscultation of fetal heart rate during labour - a widely accepted technique for low risk pregnancies: but are the current national guidelines robust and practical?

    PubMed

    Sholapurkar, S L

    2010-01-01

    Intermittent auscultation of fetal heart rate is an accepted practice in low risk labours in many countries. National guidelines on intrapartum fetal monitoring were critically reviewed regarding timing and frequency of intermittent auscultation. Hypothetical but plausible examples are presented to illustrate that it may be possible to miss significant fetal distress with strict adherence to current guidelines. The opinion is put forward that intermittent auscultation should be performed for 60 seconds before and after three contractions over about 10 min every half hour in the first stage of labour. Reasons are put forward to show how this could be more practical and patient-friendly and at the same time could improve detection of fetal distress. The current recommendation of intermittent auscultation every 15 min in the first stage is associated with poor compliance and leads to unnecessary burden, stress and medicolegal liability for birth attendants. Modification of current national guidelines would be desirable.

  9. Acceptance speech.

    PubMed

    Yusuf, C K

    1994-01-01

    I am proud and honored to accept this award on behalf of the Government of Bangladesh, and the millions of Bangladeshi children saved by oral rehydration solution. The Government of Bangladesh is grateful for this recognition of its commitment to international health and population research and cost-effective health care for all. The Government of Bangladesh has already made remarkable strides forward in the health and population sector, and this was recognized in UNICEF's 1993 "State of the World's Children". The national contraceptive prevalence rate, at 40%, is higher than that of many developed countries. It is appropriate that Bangladesh, where ORS was discovered, has the largest ORS production capacity in the world. It was remarkable that after the devastating cyclone in 1991, the country was able to produce enough ORS to meet the needs and remain self-sufficient. Similarly, Bangladesh has one of the most effective, flexible and efficient control of diarrheal disease and epidemic response program in the world. Through the country, doctors have been trained in diarrheal disease management, and stores of ORS are maintained ready for any outbreak. Despite grim predictions after the 1991 cyclone and the 1993 floods, relatively few people died from diarrheal disease. This is indicative of the strength of the national program. I want to take this opportunity to acknowledge the contribution of ICDDR, B and the important role it plays in supporting the Government's efforts in the health and population sector. The partnership between the Government of Bangladesh and ICDDR, B has already borne great fruit, and I hope and believe that it will continue to do so for many years in the future. Thank you.

  10. Characterization of semiconductor-laser phase noise and estimation of bit-error rate performance with low-speed offline digital coherent receivers.

    PubMed

    Kikuchi, Kazuro

    2012-02-27

    We develop a systematic method for characterizing semiconductor-laser phase noise using a low-speed offline digital coherent receiver. The field spectrum, the FM-noise spectrum, and the phase-error variance measured with such a receiver can completely describe the phase-noise characteristics of lasers under test. The sampling rate of the digital coherent receiver should be much higher than the phase-fluctuation speed; however, 1 GS/s is sufficient for most single-mode semiconductor lasers. In addition to such phase-noise characterization, by interpolating data taken at 1.25 GS/s to form a data stream at 10 GS/s, we can predict the bit-error rate (BER) performance of multi-level modulated optical signals at 10 Gsymbol/s. The BER degradation due to the phase noise is well explained by the results of the phase-noise measurements.
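
    The phase-noise quantities above can be illustrated with a toy simulation. The sketch below (an assumption for illustration, not the paper's procedure) models the laser phase as a Wiener process with an assumed Lorentzian linewidth and checks that the phase-error variance accumulated over k symbol intervals grows as 2πΔν·kT.

```python
import numpy as np

rng = np.random.default_rng(0)

linewidth = 100e3      # assumed Lorentzian linewidth (Hz)
fs = 10e9              # symbol/sample rate (Hz), as in the 10 Gsymbol/s case
n = 1_000_000

# Wiener phase noise: independent Gaussian increments with variance 2*pi*dv/fs
dphi = rng.normal(0.0, np.sqrt(2 * np.pi * linewidth / fs), n)
phi = np.cumsum(dphi)

# Phase drift accumulated over k symbols: its variance should grow as 2*pi*dv*k/fs
for k in (1, 10, 100):
    drift = phi[k:] - phi[:-k]
    print(f"k={k:3d}: measured {drift.var():.3e} rad^2, "
          f"theory {2 * np.pi * linewidth * k / fs:.3e} rad^2")
```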

  11. Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment

    NASA Astrophysics Data System (ADS)

    Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.

    2016-11-01

    This paper reports five years of real-time soft error rate experimentation conducted with the same setup, at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memory manufactured in CMOS bulk 40 nm technology has been subjected to the natural radiation background. The intensity of the atmospheric neutron flux was continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been extracted from these measurements with high accuracy, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level show, for the first time, a possible correlation between the neutron flux changes induced by daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.

  12. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    PubMed Central

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting that the protocol is feasible. The method may also be applied to other quantum information processing tasks. PMID:27452275

  13. Sources of error in the estimation of mosquito infection rates used to assess risk of arbovirus transmission.

    PubMed

    Bustamante, Dulce M; Lord, Cynthia C

    2010-06-01

    Infection rate is an estimate of the prevalence of arbovirus infection in a mosquito population. It is assumed that when infection rate increases, the risk of arbovirus transmission to humans and animals also increases. We examined some of the factors that can invalidate this assumption. First, we used a model to illustrate how the proportion of mosquitoes capable of virus transmission, or infectious, is not a constant fraction of the number of infected mosquitoes. Thus, infection rate is not always a straightforward indicator of risk. Second, we used a model that simulated the process of mosquito sampling, pooling, and virus testing and found that mosquito infection rates commonly underestimate the prevalence of arbovirus infection in a mosquito population. Infection rate should always be used in conjunction with other surveillance indicators (mosquito population size, age structure, weather) and historical baseline data when assessing the risk of arbovirus transmission.
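
    A common source of the underestimation described above is the use of pooled testing. The sketch below (assumed pool sizes and prevalence, for illustration only) contrasts the naive minimum infection rate with the maximum-likelihood estimate of prevalence from pooled test results.

```python
import numpy as np

rng = np.random.default_rng(1)

true_p = 0.005      # assumed per-mosquito infection prevalence
pool_size = 50      # mosquitoes per pool
n_pools = 200

# A pool tests positive if at least one mosquito in it is infected.
positives = rng.binomial(1, 1 - (1 - true_p) ** pool_size, n_pools).sum()

mir = positives / (n_pools * pool_size)                 # naive "minimum infection rate"
mle = 1 - (1 - positives / n_pools) ** (1 / pool_size)  # pooled-prevalence MLE

print(f"positive pools: {positives}/{n_pools}")
print(f"MIR estimate: {mir:.4f}")
print(f"MLE estimate: {mle:.4f}  (true p = {true_p})")
```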

  14. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Burr, Tom; Favalli, Andrea; Nicholson, Andrew

    2016-03-01

    The declared linear density of 238U and 235U in fresh low-enriched uranium light-water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active-mode calibration of the Uranium Neutron Collar - Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate the model parameters of the nonlinear Padé equation, which is traditionally used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique with that of the linear technique over a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
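
    To make the two fitting routes concrete, the sketch below fits a one-pole Padé form y = a·x/(1 + b·x) both directly with nonlinear least squares and via the linearizing transform 1/y = b/a + (1/a)(1/x). The parameters, noise level, and data are made up for illustration; this is not the UNCL calibration itself.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

a_true, b_true = 2.0, 0.05
x = np.linspace(1.0, 20.0, 12)
y = a_true * x / (1 + b_true * x) + rng.normal(0, 0.05, x.size)  # noisy response

def pade(x, a, b):
    # One-pole Pade form used as the calibration curve in this sketch
    return a * x / (1 + b * x)

(a_nl, b_nl), _ = curve_fit(pade, x, y, p0=(1.0, 0.01))

# Linearized fit: regress 1/y on 1/x  ->  intercept = b/a, slope = 1/a
A = np.column_stack([np.ones_like(x), 1.0 / x])
(intercept, slope), *_ = np.linalg.lstsq(A, 1.0 / y, rcond=None)
a_lin, b_lin = 1.0 / slope, intercept / slope

print(f"nonlinear fit:  a={a_nl:.3f}, b={b_nl:.3f}")
print(f"linearized fit: a={a_lin:.3f}, b={b_lin:.3f}  (true a={a_true}, b={b_true})")
```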

  15. Effects of box size, frequency of lifting, and height of lift on maximum acceptable weight of lift and heart rate for male university students in Iran

    PubMed Central

    Abadi, Ali Salehi Sahl; Mazlomi, Adel; Saraji, Gebraeil Nasl; Zeraati, Hojjat; Hadian, Mohammad Reza; Jafari, Amir Homayoun

    2015-01-01

    Introduction In spite of the widespread use of automation in industry, manual material handling (MMH) is still performed in many occupational settings. The emphasis on ergonomics in MMH tasks is due to the potential risks of workplace accidents and injuries. This study aimed to assess the effect of box size, frequency of lift, and height of lift on the maximum acceptable weight of lift (MAWL) and the heart rates of male university students in Iran. Methods This experimental study was conducted in 2015 with 15 male students recruited from Tehran University of Medical Sciences. Each participant performed 18 different lifting tasks that involved three lifting frequencies (1 lift/min, 4.3 lifts/min and 6.67 lifts/min), three lifting heights (floor to knuckle, knuckle to shoulder, and shoulder to arm reach), and two box sizes. Each set of experiments was conducted during a 20 min work period using the free-style lifting technique. The working heart rates (WHR) were recorded for the entire duration. In this study, we used SPSS version 18 software and descriptive statistical methods, analysis of variance (ANOVA), and the t-test for data analysis. Results The results of the ANOVA showed that there was a significant difference between the means of MAWL across the frequencies of lift (p = 0.02). Tukey’s post hoc test indicated that there was a significant difference between the frequencies of 1 lift/min and 6.67 lifts/min (p = 0.01). There was a significant difference between the mean heart rates across the frequencies of lift (p = 0.006), and Tukey’s post hoc test indicated a significant difference between the frequencies of 1 lift/min and 6.67 lifts/min (p = 0.004). However, there was no significant difference between the mean of MAWL and the mean heart rate in terms of lifting heights (p > 0.05). The results of the t-test showed that there was a significant difference between the mean of MAWL and the mean heart rate in terms of the sizes of the two boxes (p

  16. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    SciTech Connect

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-08-14

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.

  17. The Association between Self-Reported Difficulties in Emotion Regulation and Heart Rate Variability: The Salient Role of Not Accepting Negative Emotions.

    PubMed

    Visted, Endre; Sørensen, Lin; Osnes, Berge; Svendsen, Julie L; Binder, Per-Einar; Schanche, Elisabeth

    2017-01-01

    Difficulties in emotion regulation are associated with development and maintenance of psychopathology. Typically, features of emotion regulation are assessed with self-report questionnaires. Heart rate variability (HRV) is an objective measure proposed as an index of emotional regulation capacity. A limited number of studies have shown that self-reported difficulties in emotion regulation are associated with HRV. However, results from prior studies are inconclusive, and an ecological validation of the association has not yet been tested. Therefore, further exploration of the relation between self-report questionnaires and psychophysiological measures of emotional regulation is needed. The present study investigated the contribution of self-reported emotion regulation difficulties to HRV in a student sample. We expected higher scores on emotion regulation difficulties to be associated with lower vagus-mediated HRV (vmHRV). Sixty-three participants filled out the Difficulties in Emotion Regulation Scale and their resting HRV was assessed. In addition, a subsample of participants provided ambulatory 24-h HRV data, in order to ecologically validate the resting data. Correlation analyses indicated that self-reported difficulties in emotion regulation were negatively associated with vmHRV in both resting HRV and 24-h HRV. Specifically, when exploring the contribution of the different facets of emotion dysregulation, the inability to accept negative emotions showed the strongest association with HRV. The results are discussed and the need for future research is described.

  18. The Association between Self-Reported Difficulties in Emotion Regulation and Heart Rate Variability: The Salient Role of Not Accepting Negative Emotions

    PubMed Central

    Visted, Endre; Sørensen, Lin; Osnes, Berge; Svendsen, Julie L.; Binder, Per-Einar; Schanche, Elisabeth

    2017-01-01

    Difficulties in emotion regulation are associated with development and maintenance of psychopathology. Typically, features of emotion regulation are assessed with self-report questionnaires. Heart rate variability (HRV) is an objective measure proposed as an index of emotional regulation capacity. A limited number of studies have shown that self-reported difficulties in emotion regulation are associated with HRV. However, results from prior studies are inconclusive, and an ecological validation of the association has not yet been tested. Therefore, further exploration of the relation between self-report questionnaires and psychophysiological measures of emotional regulation is needed. The present study investigated the contribution of self-reported emotion regulation difficulties to HRV in a student sample. We expected higher scores on emotion regulation difficulties to be associated with lower vagus-mediated HRV (vmHRV). Sixty-three participants filled out the Difficulties in Emotion Regulation Scale and their resting HRV was assessed. In addition, a subsample of participants provided ambulatory 24-h HRV data, in order to ecologically validate the resting data. Correlation analyses indicated that self-reported difficulties in emotion regulation were negatively associated with vmHRV in both resting HRV and 24-h HRV. Specifically, when exploring the contribution of the different facets of emotion dysregulation, the inability to accept negative emotions showed the strongest association with HRV. The results are discussed and the need for future research is described. PMID:28337160

  19. Improving the Response Rate to a Street Survey: An Evaluation of the "But You Are Free to Accept or to Refuse" Technique.

    ERIC Educational Resources Information Center

    Gueguen, Nicolas; Pascual, Alexandre

    2005-01-01

    The "but you are free to accept or to refuse" technique is a compliance procedure in which someone is approached with a request by simply telling him/her that he/she is free to accept or to refuse the request. This semantic evocation leads to increased compliance with the request. Furthermore, in most of the studies in which this technique was…

  20. Boosting bit rates and error detection for the classification of fast-paced motor commands based on single-trial EEG analysis.

    PubMed

    Blankertz, Benjamin; Dornhege, Guido; Schäfer, Christin; Krepki, Roman; Kohlmorgen, Jens; Müller, Klaus-Robert; Kunzmann, Volker; Losch, Florian; Curio, Gabriel

    2003-06-01

    Brain-computer interfaces (BCIs) involve two coupled adapting systems--the human subject and the computer. In developing our BCI, our goal was to minimize the need for subject training and to impose the major learning load on the computer. To this end, we use behavioral paradigms that exploit single-trial EEG potentials preceding voluntary finger movements. Here, we report recent results on the basic physiology of such premovement event-related potentials (ERP). 1) We predict the laterality of imminent left- versus right-hand finger movements in a natural keyboard typing condition and demonstrate that a single-trial classification based on the lateralized Bereitschaftspotential (BP) achieves good accuracies even at a pace as fast as 2 taps/s. Results for four out of eight subjects reached a peak information transfer rate of more than 15 b/min; the four other subjects reached 6-10 b/min. 2) We detect cerebral error potentials from single false-response trials in a forced-choice task, reflecting the subject's recognition of an erroneous response. Based on a specifically tailored classification procedure that limits the rate of false positives to, e.g., 2%, the algorithm manages to detect 85% of error trials in seven out of eight subjects. Thus, concatenating a primary single-trial BP-paradigm involving finger classification feedback with such secondary error detection could serve as an efficient online confirmation/correction tool for improvement of bit rates in a future BCI setting. As the present variant of the Berlin BCI is designed to achieve fast classifications in normally behaving subjects, it opens a new perspective for assistance of action control in time-critical behavioral contexts; the potential transfer to paralyzed patients will require further study.
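
    The bit rates quoted above can be reproduced, order of magnitude, with the standard Wolpaw information-transfer-rate formula. The sketch below uses the 2 taps/s pace mentioned in the abstract together with assumed classification accuracies; it illustrates the formula rather than the authors' exact calculation.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trials_per_min: float) -> float:
    """Bits per minute for an n-class selection at the given accuracy (Wolpaw formula)."""
    p = accuracy
    bits_per_trial = (math.log2(n_classes)
                      + p * math.log2(p)
                      + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits_per_trial * trials_per_min

# 2 taps/s = 120 binary decisions per minute; accuracies are assumed values.
for acc in (0.65, 0.70, 0.75):
    print(f"accuracy {acc:.2f}: {wolpaw_itr(2, acc, 120):.1f} bits/min")
```

    With these assumed accuracies the formula yields roughly 8 to 23 bits/min, bracketing the 6-15 b/min range reported in the abstract.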

  1. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    NASA Astrophysics Data System (ADS)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on space diversity reception, a binary phase-shift keying (BPSK) modulated free-space optical (FSO) system over Málaga (M) fading channels is investigated in detail. For independently and identically distributed and independently and non-identically distributed dual branches, analytical average bit error rate (ABER) expressions in terms of the Fox H-function are derived for maximal ratio combining (MRC) and equal gain combining (EGC) diversity, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.
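
    The Monte Carlo check mentioned at the end of the abstract follows a standard pattern. The sketch below uses log-normal fading as a simple stand-in for the Málaga (M) distribution (an assumption to keep the example short) and estimates the BER of BPSK with dual-branch MRC and EGC combining; because BPSK errors are symmetric, only the +1 symbol is transmitted.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 1_000_000
snr_db = 4.0
snr = 10 ** (snr_db / 10)
sigma_ln = 0.5                      # assumed log-amplitude spread of the fading

# i.i.d. fading amplitudes on the two branches, normalized to unit mean
h = rng.lognormal(mean=-sigma_ln**2 / 2, sigma=sigma_ln, size=(n, 2))

# BPSK: send +1 on both branches, add per-branch AWGN scaled to the target SNR
noise = rng.normal(0.0, np.sqrt(1 / (2 * snr)), size=(n, 2))
r = h * 1.0 + noise

mrc = (h * r).sum(axis=1)   # maximal ratio combining: weight by channel gain
egc = r.sum(axis=1)         # equal gain combining: co-phased sum of branches

print(f"MRC BER ~ {np.mean(mrc < 0):.2e}")
print(f"EGC BER ~ {np.mean(egc < 0):.2e}")
```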

  2. Choice of Reference Sequence and Assembler for Alignment of Listeria monocytogenes Short-Read Sequence Data Greatly Influences Rates of Error in SNP Analyses

    PubMed Central

    Pightling, Arthur W.; Petronella, Nicholas; Pagotto, Franco

    2014-01-01

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should

  3. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    NASA Technical Reports Server (NTRS)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, the effects of the prior mean and variance are determined as a function of the amount of test data available.

  4. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  5. Evaluation of the Effect of Noise on the Rate of Errors and Speed of Work by the Ergonomic Test of Two-Hand Co-Ordination

    PubMed Central

    Habibi, Ehsanollah; Dehghan, Habibollah; Dehkordy, Sina Eshraghy; Maracy, Mohammad Reza

    2013-01-01

    Background: Among the most important and effective factors affecting the efficiency of the human workforce are accuracy, promptness, and ability. In the context of promoting the level and quality of productivity, the aim of this study was to investigate the effects of exposure to noise on the rate of errors, speed of work, and capability in performing manual activities. Methods: This experimental study was conducted on 96 students (52 female and 44 male) of the Isfahan Medical Science University with means and standard deviations of age, height, and weight of 22.81 (3.04) years, 171.67 (8.51) cm, and 65.05 (13.13) kg, respectively. Sampling was conducted with a randomized block design. Along with controlling for intervening factors, a combination of sound pressure levels [65 dB(A), 85 dB(A), and 95 dB(A)] and exposure times (0, 20, and 40 min) was used to evaluate the precision and speed of action of the participants in the ergonomic test of two-hand coordination. Data were analyzed with SPSS 18 software using descriptive statistics and repeated-measures analysis of covariance (ANCOVA). Results: The results of this study showed that increasing the sound pressure level from 65 to 95 dB(A) increased the speed of work (P < 0.05). Increasing the exposure time (0 to 40 min) and gender showed no statistically significant differences in speed of work (P > 0.05). Male participants were more annoyed by the noise than female participants. Also, an increase in sound pressure level increased the rate of errors (P < 0.05). Conclusions: According to the results of this research, increasing the sound pressure level decreased efficiency and increased errors; for exposure to sounds below 85 dB, efficiency decreased initially and then increased with a mild slope. PMID:23930164

  6. Determination of the Contamination Rate and the Associated Error for Targets Observed by CoRoT in the Exoplanet Channel

    NASA Astrophysics Data System (ADS)

    Gardes, B.; Chabaud, P.-Y.; Guterman, P.

    2012-09-01

    In the CoRoT exoplanet field of view, photometric measurements are obtained by aperture integration using a generic collection of masks. The total flux held within the photometric mask may be split into two parts: the target flux itself and the flux due to the nearest neighbours, considered as contaminants. So far, ExoDat (http://cesam.oamp.fr/exodat) has given a rough estimate of the contamination rate for all potential exoplanet targets (level-0), based on generic PSF shapes built before the CoRoT launch. Here, we present an updated estimate of the contamination rate (level-1) with its associated error. This estimate is made for each target observed by CoRoT in the exoplanet channel using a new catalog of PSFs built from the first available flight images and taking into account the line of sight of the satellite (i.e. the satellite orientation).

  7. Quantification of in vivo progenitor mutation accrual with ultra-low error rate and minimal input DNA using SIP-HAVA-seq.

    PubMed

    Taylor, Pete H; Cinquin, Amanda; Cinquin, Olivier

    2016-11-01

    Assaying in vivo accrual of DNA damage and DNA mutations by stem cells and pinpointing sources of damage and mutations would further our understanding of aging and carcinogenesis. Two main hurdles must be overcome. First, in vivo mutation rates are orders of magnitude lower than raw sequencing error rates. Second, stem cells are vastly outnumbered by differentiated cells, which have a higher mutation rate; quantification of stem cell DNA damage and DNA mutations is thus best performed from small, well-defined cell populations. Here we report a mutation detection technique, based on the "duplex sequencing" principle, with an error rate below ~10⁻¹⁰ and that can start from as little as 50 pg of DNA. We validate this technique, which we call SIP-HAVA-seq, by characterizing Caenorhabditis elegans germline stem cell mutation accrual and asking how mating affects that accrual. We find that a moderate mating-induced increase in cell cycling correlates with a dramatic increase in the accrual of mutations. Intriguingly, these mutations consist chiefly of deletions in nonexpressed genes. This contrasts with results derived from mutation accumulation lines and suggests that the mutation spectrum and genome distribution change with replicative age, chronological age, cell differentiation state, and/or overall worm physiological state. We also identify single-stranded gaps as plausible deletion precursors, providing a starting point to identify the molecular mechanisms of mutagenesis that are most active. SIP-HAVA-seq provides the first direct, genome-wide measurements of in vivo mutation accrual in stem cells and will enable further characterization of underlying mechanisms and their dependence on age and cell state.
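
    The reason duplex-style protocols reach error rates far below the raw sequencing error rate is that a variant is only accepted when independent reads from both strands of the same original molecule agree. The sketch below is a much-simplified illustration of that consensus rule; the function names, toy read families, and min_reads threshold are assumptions, not the SIP-HAVA-seq implementation.

```python
def consensus(bases, min_reads=3):
    """Return a base if all reads of one strand family agree, else None."""
    if len(bases) < min_reads or len(set(bases)) != 1:
        return None
    return bases[0]

def duplex_call(plus_strand_reads, minus_strand_reads):
    """A duplex call requires the two strand consensuses to exist and agree."""
    top = consensus(plus_strand_reads)
    bottom = consensus(minus_strand_reads)
    return top if (top is not None and top == bottom) else None

# Toy read families keyed by molecular tag: errors that hit only one read or
# only one strand are rejected, which is what suppresses the raw error rate.
families = {
    "TAG1": (["A", "A", "A"], ["A", "A", "A"]),   # true variant -> called
    "TAG2": (["A", "A", "G"], ["A", "A", "A"]),   # single-read error -> rejected
    "TAG3": (["C", "C", "C"], ["T", "T", "T"]),   # strand-specific damage -> rejected
}
for tag, (plus, minus) in families.items():
    print(tag, duplex_call(plus, minus))
```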

  8. UGV acceptance testing

    NASA Astrophysics Data System (ADS)

    Kramer, Jeffrey A.; Murphy, Robin R.

    2006-05-01

    With over 100 models of unmanned vehicles now available for military and civilian safety, security or rescue applications, it is important for agencies to establish acceptance testing. However, there appear to be no general guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has been applied to a man-packable micro-robot, and iii) the value of testing both for ensuring that the customer has a workable system and for improving the design. The test method automated the operation of the robot to repeatedly exercise all aspects and combinations of components on the robot for 6 hours. The acceptance testing process uncovered many failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The process also demonstrated that testing by the manufacturer can provide important design data that can be used to identify, diagnose, and prevent long-term problems. Also, the structured testing environment showed that sensor systems can be used to predict errors and changes in performance, as well as to uncover unmodeled behavior in subsystems.

  9. Dissociation rate of cognate peptidyl-tRNA from the A-site of hyper-accurate and error-prone ribosomes.

    PubMed

    Karimi, R; Ehrenberg, M

    1994-12-01

    The binding stability of AcPhe-Phe-tRNA(Phe) in the aminoacyl-tRNA site (A-site), estimated from the dissociation rate constant kd, has been studied for wild-type (wt) ribosomes, for hyper-accurate ribosomes altered in S12 [streptomycin-dependent (SmD) and streptomycin-pseudodependent (SmP) phenotypes], for error-prone ribosomes altered in S4 (Ram phenotype), and for ribosomes in complex with the error-inducing aminoglycosides streptomycin and neomycin. The AcPhe2-tRNA stability is slightly and identically reduced for the SmD and SmP phenotypes relative to wt ribosomes. The stability is increased (kd is reduced) for Ram ribosomes to about the same extent as the proof-reading accuracy is decreased for this phenotype. kd is also reduced by the action of streptomycin and neomycin, but much less than the reduction in proof-reading accuracy induced by streptomycin. Similar kd values for SmD and SmP ribosomes indicate that the cause of streptomycin dependence is not excessive drop-off of peptidyl-tRNAs from the A-site.

  10. Attenuation and bit error rate for four co-propagating spatially multiplexed optical communication channels of exactly same wavelength in step index multimode fibers

    NASA Astrophysics Data System (ADS)

    Murshid, Syed H.; Chakravarty, Abhijit

    2011-06-01

    Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtailed laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric, donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with bulk optics or with integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for the individual channels of such a system are also presented.

  11. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE (ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  12. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as the Kolmogorov microscale, the relative strengths of temperature and salinity fluctuations, the rate of dissipation of the mean squared temperature, and the rate of dissipation of turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillation. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
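
    The final step in the abstract, going from a log-normal intensity distribution to a BER, can be illustrated numerically. The sketch below averages an assumed conditional BER, 0.5·erfc(snr·I/√2), over log-normal intensity samples whose spread is set by the scintillation index; the mapping from oceanic turbulence parameters to the scintillation index is not modelled here, and the SNR value is arbitrary.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(4)

def avg_ber(scint_index: float, snr: float, n: int = 2_000_000) -> float:
    """Average an assumed conditional BER over log-normal irradiance with E[I] = 1."""
    sigma2 = np.log(1.0 + scint_index)          # log-irradiance variance
    I = rng.lognormal(mean=-sigma2 / 2, sigma=np.sqrt(sigma2), size=n)
    return float(np.mean(0.5 * erfc(snr * I / np.sqrt(2))))

for si in (0.1, 0.3, 0.5):
    print(f"scintillation index {si:.1f}: BER ~ {avg_ber(si, snr=3.0):.2e}")
```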

  13. Predicting sex offender recidivism. I. Correcting for item overselection and accuracy overestimation in scale development. II. Sampling error-induced attenuation of predictive validity over base rate information.

    PubMed

    Vrieze, Scott I; Grove, William M

    2008-06-01

    The authors demonstrate a statistical bootstrapping method for obtaining unbiased item selection and predictive validity estimates from a scale development sample, using data (N = 256) of Epperson et al. [2003 Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores. Retrieved November 18, 2006 from Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm] from which the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) was developed. Validity (area under receiver operating characteristic curve) reported by Epperson et al. was .77 with 16 items selected. The present analysis yielded an asymptotically unbiased estimator AUC = .58. The present article also focused on the degree to which sampling error renders estimated cutting scores (appropriate to local [varying] recidivism base rates) nonoptimal, so that the long-run performance (measured by correct fraction, the total proportion of correct classifications) of these estimated cutting scores is poor, when they are applied to their parent populations (having assumed values for AUC and recidivism rate). This was investigated by Monte Carlo simulation over a range of AUC and recidivism rate values. Results indicate that, except for the AUC values higher than have ever been cross-validated, in combination with recidivism base rates severalfold higher than the literature average [Hanson and Morton-Bourgon, 2004, Predictors of sexual recidivism: An updated meta-analysis. (User report 2004-02.). Ottawa: Public Safety and Emergency Preparedness Canada], the user of an instrument similar in performance to the MnSOST-R cannot expect to achieve correct fraction performance notably in excess of what is achievable from knowing the population recidivism rate alone. The authors discuss the legal implications of their findings for procedural and substantive due process in
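
    The drop from an apparent AUC of .77 to an unbiased estimate of .58 is the kind of overfitting an optimism bootstrap is designed to expose. The sketch below uses pure-noise items, an assumed sum-score scale, and a Harrell-style optimism correction; none of it is the authors' actual procedure, but it shows how selecting items and validating on the same sample inflates AUC and how the bootstrap correction pulls it back toward chance.

```python
import numpy as np

rng = np.random.default_rng(5)

def auc(score, y):
    """Mann-Whitney AUC: probability a positive case outranks a negative case."""
    pos, neg = score[y == 1], score[y == 0]
    return (pos[:, None] > neg[None, :]).mean() + 0.5 * (pos[:, None] == neg[None, :]).mean()

def build_scale(items, y, k=16):
    """Select the k items most correlated with the outcome; the score is their sum."""
    r = np.array([np.corrcoef(items[:, j], y)[0, 1] for j in range(items.shape[1])])
    return np.argsort(-np.abs(r))[:k]

# Pure-noise items: the true AUC is 0.5, so any apparent lift is overfitting.
n, p = 256, 60
items = rng.integers(0, 2, size=(n, p)).astype(float)
y = rng.integers(0, 2, size=n)

sel = build_scale(items, y)
apparent = auc(items[:, sel].sum(axis=1), y)

optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)                      # bootstrap resample
    sel_b = build_scale(items[idx], y[idx])          # re-run item selection
    auc_boot = auc(items[idx][:, sel_b].sum(axis=1), y[idx])
    auc_orig = auc(items[:, sel_b].sum(axis=1), y)   # same model on original data
    optimism.append(auc_boot - auc_orig)

print(f"apparent AUC {apparent:.3f}, optimism-corrected {apparent - np.mean(optimism):.3f}")
```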

  14. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data become indecipherable, and it becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluates the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
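
    The evaluation workflow described above (encode, pass through a noisy channel, decode, count residual errors at several signal-to-noise ratios) can be sketched with a much smaller code than the RS or convolutional codes named in the abstract. The example below uses a (7,4) Hamming code purely as a stand-in; the Eb/N0 values and block counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])            # systematic (7,4) generator
H = np.hstack([P.T, np.eye(3, dtype=int)])          # parity-check matrix
syndrome_to_bit = {tuple(H[:, j]): j for j in range(7)}  # single-error syndromes

def ber(ebn0_db, n_words=100_000):
    ebn0 = 10 ** (ebn0_db / 10)
    data = rng.integers(0, 2, size=(n_words, 4))
    code = data @ G % 2
    # BPSK over AWGN; the rate-4/7 code spreads Eb over more channel bits.
    sigma = np.sqrt(1 / (2 * ebn0 * 4 / 7))
    rx = np.where(code == 1, 1.0, -1.0) + rng.normal(0, sigma, code.shape)
    hard = (rx > 0).astype(int)
    # Hard-decision decoding: flip the bit indicated by a nonzero syndrome.
    synd = hard @ H.T % 2
    for i, s in enumerate(map(tuple, synd)):
        if s in syndrome_to_bit:
            hard[i, syndrome_to_bit[s]] ^= 1
    coded_ber = np.mean(hard[:, :4] != data)

    # Uncoded BPSK reference at the same Eb/N0.
    sigma_u = np.sqrt(1 / (2 * ebn0))
    rx_u = np.where(data == 1, 1.0, -1.0) + rng.normal(0, sigma_u, data.shape)
    return coded_ber, np.mean((rx_u > 0).astype(int) != data)

for ebn0 in (4, 6, 8):
    c, u = ber(ebn0)
    print(f"Eb/N0 = {ebn0} dB: coded BER {c:.2e}, uncoded BER {u:.2e}")
```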

  15. Acceptance of tinnitus: validation of the tinnitus acceptance questionnaire.

    PubMed

    Weise, Cornelia; Kleinstäuber, Maria; Hesser, Hugo; Westin, Vendela Zetterqvist; Andersson, Gerhard

    2013-01-01

    The concept of acceptance has recently received growing attention within tinnitus research due to the fact that tinnitus acceptance is one of the major targets of psychotherapeutic treatments. Accordingly, acceptance-based treatments will most likely be increasingly offered to tinnitus patients and assessments of acceptance-related behaviours will thus be needed. The current study investigated the factorial structure of the Tinnitus Acceptance Questionnaire (TAQ) and the role of tinnitus acceptance as a mediating link between sound perception (i.e. subjective loudness of tinnitus) and tinnitus distress. In total, 424 patients with chronic tinnitus completed the TAQ and validated measures of tinnitus distress, anxiety, and depression online. Confirmatory factor analysis provided support for a good fit of the data to the hypothesised bifactor model (root-mean-square error of approximation = .065; Comparative Fit Index = .974; Tucker-Lewis Index = .958; standardised root mean square residual = .032). In addition, mediation analysis, using a non-parametric joint coefficient approach, revealed that tinnitus-specific acceptance partially mediated the relation between subjective tinnitus loudness and tinnitus distress (path ab = 5.96; 95% CI: 4.49, 7.69). In a multiple mediator model, tinnitus acceptance had a significantly stronger indirect effect than anxiety. The results confirm the factorial structure of the TAQ and suggest the importance of a general acceptance factor that contributes important unique variance beyond that of the first-order factors activity engagement and tinnitus suppression. Tinnitus acceptance as measured with the TAQ is proposed to be a key construct in tinnitus research and should be further integrated into treatment concepts to reduce tinnitus distress.

  16. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total that a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM Microwave Imager (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in the sampling error estimates arising from changes in the rain statistics, due 1) to the evolution of the official algorithms used to process the data and 2) to differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
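
    The notion of sampling error used here can be illustrated with synthetic data. The sketch below uses a toy intermittent-rain process and an assumed two-overpasses-per-day schedule (not TRMM retrievals) to compare monthly totals reconstructed from sparse overpasses with the totals a continuously observing instrument would record, summarizing the relative bias and scatter.

```python
import numpy as np

rng = np.random.default_rng(8)

hours = 30 * 24
n_months = 2000
visits_per_day = 2                                  # assumed overpass frequency
visit_idx = np.arange(0, hours, 24 // visits_per_day)

# Toy intermittent rain: rain falls ~10% of hours with exponentially distributed rates.
raining = rng.random((n_months, hours)) < 0.10
rate = rng.exponential(2.0, (n_months, hours)) * raining      # mm/h

true_total = rate.sum(axis=1)                                 # "always overhead" total
sampled_total = rate[:, visit_idx].mean(axis=1) * hours       # scaled-up overpass mean

rel_err = (sampled_total - true_total) / true_total.mean()
print(f"relative sampling error: bias {rel_err.mean():.3f}, rms {rel_err.std():.3f}")
```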

  17. Medication Errors

    MedlinePlus


  18. Sensory evaluation ratings and moisture contents show that soy is acceptable as a partial replacement for all-purpose wheat flour in peanut butter graham crackers.

    PubMed

    Romanchik-Cerpovicz, Joelle E; Abbott, Amy E; Dent, Laura A

    2011-12-01

    Fortification can help individuals achieve adequate nutritional intake. Foods may be fortified with soy flour as a source of protein for individuals limiting their intake of animal products, whether due to personal dietary preference or to reduce their intake of saturated fat, a known risk factor for heart disease. This study determined the feasibility of fortifying peanut butter graham crackers by substituting soy flour for all-purpose wheat flour at 25%, 50%, 75%, or 100% weight/weight. Graham crackers fortified with soy flour were compared to similarly prepared nonfortified peanut butter graham crackers. The moisture contents of all graham crackers were similar. Consumers (n=102) evaluated each graham cracker using a hedonic scale and reported liking the color, smell, and texture of all products. However, unlike peanut butter graham crackers fortified with lower levels of soy, graham crackers fortified with 100% weight/weight soy flour had a less than desirable flavor, aftertaste, and overall acceptability. Overall, this study shows that substituting soy flour for all-purpose wheat flour at up to 75% weight/weight in peanut butter graham crackers is acceptable.

  19. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
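
    Two of the effects described above are easy to reproduce. The short sketch below (standard textbook examples, not taken from the chapter) shows catastrophic cancellation in the naive quadratic formula and the small residual error left by repeatedly adding a value that is not exactly representable in binary.

```python
import math

# 1) Catastrophic cancellation in the naive quadratic formula when b*b >> 4*a*c.
#    Rewriting one root avoids subtracting two nearly equal numbers.
a, b, c = 1.0, 1e8, 1.0
x_naive = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
x_stable = (2 * c) / (-b - math.sqrt(b * b - 4 * a * c))
print(x_naive, x_stable)   # the naive root has lost most of its digits

# 2) Accumulated rounding: ten additions of 0.1 do not give exactly 1.0,
#    while math.fsum compensates for the intermediate rounding errors.
s = 0.0
for _ in range(10):
    s += 0.1
print(s == 1.0, s, math.fsum([0.1] * 10))
```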

  20. Effect of the Transcendental Meditation Program on Graduation, College Acceptance and Dropout Rates for Students Attending an Urban Public High School

    ERIC Educational Resources Information Center

    Colbert, Robert D.

    2013-01-01

    High school graduation rates nationally have declined in recent years, despite public and private efforts. The purpose of the current study was to determine whether practice of the Quiet Time/Transcendental Meditation® program at a medium-size urban school results in higher school graduation rates compared to students who do not receive training…

  1. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    PubMed

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is.
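
    The mechanism is residual confounding: once the confounder is coarsened, part of its effect leaks into the exposure coefficient. The Monte Carlo sketch below (assumed effect sizes, a median split, and a null exposure effect; not the paper's 9600-simulation design) reproduces the qualitative inflation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def one_trial(n=200, dichotomize=True):
    confounder = rng.normal(size=n)
    exposure = 0.7 * confounder + rng.normal(size=n)   # exposure depends on confounder
    outcome = 0.7 * confounder + rng.normal(size=n)    # outcome depends on confounder only
    c = (confounder > np.median(confounder)).astype(float) if dichotomize else confounder
    X = np.column_stack([np.ones(n), exposure, c])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    resid = outcome - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    t = beta[1] / se
    return 2 * stats.t.sf(abs(t), df=n - X.shape[1])   # p-value for the exposure term

for dich in (False, True):
    p = np.array([one_trial(dichotomize=dich) for _ in range(2000)])
    print(f"dichotomized confounder={dich}: Type-I error at alpha=0.05 -> {np.mean(p < 0.05):.3f}")
```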

  2. Error detection for genetic data, using likelihood methods

    SciTech Connect

    Ehm, M.G.; Kimmel, M.; Cottingham, R.W. Jr.

    1996-01-01

    As genetic maps become denser, the effect of laboratory typing errors becomes more serious. We review a general method for detecting errors in pedigree genotyping data that is a variant of the likelihood-ratio test statistic. It pinpoints individuals and loci with relatively unlikely genotypes. Power and significance studies using Monte Carlo methods are shown by using simulated data with pedigree structures similar to the CEPH pedigrees and a larger experimental pedigree used in the study of idiopathic dilated cardiomyopathy (DCM). The studies show the index detects errors for small values of θ with high power and an acceptable false positive rate. The method was also used to check for errors in DCM laboratory pedigree data and to estimate the error rate in CEPH chromosome 6 data. The errors flagged by our method in the DCM pedigree were confirmed by the laboratory. The results are consistent with estimated false-positive and false-negative rates obtained using simulation. 21 refs., 5 figs., 2 tabs.

  3. Dose error from deviation of dwell time and source position for high dose-rate 192Ir in remote afterloading system

    PubMed Central

    Okamoto, Hiroyuki; Aikawa, Ako; Wakita, Akihisa; Yoshio, Kotaro; Murakami, Naoya; Nakamura, Satoshi; Hamada, Minoru; Abe, Yoshihisa; Itami, Jun

    2014-01-01

    The influence of deviations in dwell times and source positions for 192Ir HDR-RALS was investigated, and the potential dose errors for various kinds of brachytherapy procedures were evaluated. The deviations of dwell time ΔT of a 192Ir HDR source were measured for various dwell times with a well-type ionization chamber. The deviations of source position ΔP were measured with two methods: one measures the actual source position using a check-ruler device; the other analyzes peak distances on radiographic film irradiated with a 20 mm gap between dwell positions. The composite dose errors were calculated using a Gaussian distribution with ΔT and ΔP as 1σ of the measurements. Dose errors depend on the dwell time and on the distance from the point of interest to the dwell position. To evaluate the dose error in clinical practice, dwell times and point-of-interest distances were obtained from actual treatment plans involving cylinder, tandem-ovoid, tandem-ovoid with interstitial needles, multiple interstitial needles, and surface-mold applicators. ΔT and ΔP were 32 ms (maximum over the various dwell times) and 0.12 mm (ruler) or 0.11 mm (radiographic film). The multiple-interstitial-needle technique showed the highest dose error, about 2%, while the others showed less than approximately 1%. The potential dose error due to dwell time and source position deviations can thus depend on the brachytherapy technique; of the techniques examined, multiple interstitial needles are the most susceptible. PMID:24566719
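
    A rough feel for why short dwell times and short distances dominate the error budget can be obtained with a back-of-envelope propagation. The sketch below assumes dose ∝ T/r² (inverse-square only, ignoring the full TG-43 formalism) and combines the quoted 1σ deviations in quadrature; the dwell times and distances are illustrative values, not those of the paper's treatment plans.

```python
import numpy as np

# 1-sigma deviations quoted in the abstract
dT = 0.032    # s   (dwell-time deviation from the well-chamber measurements)
dP = 0.00012  # m   (0.12 mm source-position deviation from the ruler measurement)

# Relative dose error for dose ~ T / r^2:  sigma_D/D = sqrt((dT/T)^2 + (2*dP/r)^2)
for T in (0.5, 2.0, 10.0):            # dwell time, s (assumed values)
    for r in (0.005, 0.01, 0.02):     # distance to point of interest, m (assumed)
        rel = np.hypot(dT / T, 2 * dP / r)
        print(f"T={T:5.1f} s, r={r * 1000:4.0f} mm: dose error ~ {100 * rel:.2f}%")
```

    Short dwell times combined with points close to the source give errors near 2%, consistent with the multiple-interstitial-needle result above.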

  4. System Performance, Error Rates, and Training Time for Recent FAA Academy Nonradar Graduates, Community Persons, and Handicapped Persons on the Radar Training Facility Pilot Position,

    DTIC Science & Technology

    1980-05-01

    acceptable limits. Feedback gives this knowledge. Negative reinforcement may have some useful purpose in training. On the other hand, it is doubtful that...it can serve usefully as the only means of shaping behavior. If negative reinforcement is used, it should be used in conjunction with opportunities

  5. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    ERIC Educational Resources Information Center

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  6. Development of a Regression Model for Estimating the Effects of Assumption Violations on Type I Error Rates in the Student's T-Test: Implications for Practitioners.

    ERIC Educational Resources Information Center

    Newman, Isadore; Hall, Rosalie J.; Fraas, John

    Multiple linear regression is used to model the effects of violating statistical assumptions on the likelihood of making a Type I error. This procedure is illustrated for the student's t-test (for independent groups) using data from previous Monte Carlo studies in which the actual alpha levels associated with violations of the normality…

  7. Grazing function g and collimation angular acceptance

    SciTech Connect

    Peggs, S.G.; Previtali, V.

    2009-11-02

    The grazing function g is introduced - a synchrobetatron optical quantity that is analogous (and closely connected) to the Twiss and dispersion functions {beta}, {alpha}, {eta}, and {eta}'. It parametrizes the rate of change of total angle with respect to synchrotron amplitude for grazing particles, which just touch the surface of an aperture when their synchrotron and betatron oscillations are simultaneously (in time) at their extreme displacements. The grazing function can be important at collimators with limited acceptance angles. For example, it is important in both modes of crystal collimation operation - in channeling and in volume reflection. The grazing function is independent of the collimator type - crystal or amorphous - but can depend strongly on its azimuthal location. The rigorous synchrobetatron condition g = 0 is solved, by invoking the close connection between the grazing function and the slope of the normalized dispersion. Propagation of the grazing function is described, through drifts, dipoles, and quadrupoles. Analytic expressions are developed for g in perfectly matched periodic FODO cells, and in the presence of {beta} or {eta} error waves. These analytic approximations are shown to be, in general, in good agreement with realistic numerical examples. The grazing function is shown to scale linearly with FODO cell bend angle, but to be independent of FODO cell length. The ideal value is g = 0 at the collimator, but finite nonzero values are acceptable. Practically achievable grazing functions are described and evaluated, for both amorphous and crystal primary collimators, at RHIC, the SPS (UA9), the Tevatron (T-980), and the LHC.

  8. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  9. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  10. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research on flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made to re-invent the "team" concept for maintenance operations and to tailor programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and their human factors components.

  11. Offer/Acceptance Ratio.

    ERIC Educational Resources Information Center

    Collins, Mimi

    1997-01-01

    Explores how human resource professionals, with above average offer/acceptance ratios, streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…

  12. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  13. The Roles of Verb Semantics, Entrenchment, and Morphophonology in the Retreat from Dative Argument-Structure Overgeneralization Errors

    ERIC Educational Resources Information Center

    Ambridge, Ben; Pine, Julian M.; Rowland, Caroline F.; Chang, Franklin

    2012-01-01

    Children (aged five-to-six and nine-to-ten years) and adults rated the acceptability of well-formed sentences and argument-structure overgeneralization errors involving the prepositional-object and double-object dative constructions (e.g. "Marge pulled the box to Homer/*Marge pulled Homer the box"). In support of the entrenchment hypothesis, a…

  14. Self-acceptance, acceptance of others, and SYMLOG: equivalent measures of the two central interpersonal dimensions?

    PubMed

    Hurley, J R

    1991-07-01

    After 50 hours of small group participation during 9 weeks, 91 young adults rated each same-group member's conduct on SYMLOG's dimensions of dominance, friendliness, and task-orientedness. Earlier, they made similar ratings twice, several weeks apart, on separate measures of self-acceptance and acceptance of others. Individuals' mean SYMLOG dominance ratings by group peers correlated much more highly with aggregated ratings for self-acceptance (.83) than for other-acceptance (.02), while SYMLOG friendliness correlated more positively with acceptance of others (.85) than with self-acceptance (.05). Self-ratings yielded parallel, but weaker associations. After attenuation corrections, these divergent approaches to assessing the interpersonal domain's central dimensions yielded empirically equivalent results. Both methods provide measures relevant to small group processes.

  15. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  16. Improved Error Thresholds for Measurement-Free Error Correction.

    PubMed

    Crow, Daniel; Joynt, Robert; Saffman, M

    2016-09-23

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}-comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  17. Study of Uncertainties of Predicting Space Shuttle Thermal Environment. [impact of heating rate prediction errors on weight of thermal protection system

    NASA Technical Reports Server (NTRS)

    Fehrman, A. L.; Masek, R. V.

    1972-01-01

    Quantitative estimates of the uncertainty in predicting aerodynamic heating rates for a fully reusable space shuttle system are developed and the impact of these uncertainties on Thermal Protection System (TPS) weight are discussed. The study approach consisted of statistical evaluations of the scatter of heating data on shuttle configurations about state-of-the-art heating prediction methods to define the uncertainty in these heating predictions. The uncertainties were then applied as heating rate increments to the nominal predicted heating rate to define the uncertainty in TPS weight. Separate evaluations were made for the booster and orbiter, for trajectories which included boost through reentry and touchdown. For purposes of analysis, the vehicle configuration is divided into areas in which a given prediction method is expected to apply, and separate uncertainty factors and corresponding uncertainty in TPS weight derived for each area.

  18. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    SciTech Connect

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-08-14

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.

  19. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi) Logic edit errors. (vii) Data entry errors. (viii) Managed care rate cell errors. (ix) Managed care...) Insufficient documentation. (iii) Procedure coding errors. (iv) Diagnosis coding errors. (v) Unbundling....

  20. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi) Logic edit errors. (vii) Data entry errors. (viii) Managed care rate cell errors. (ix) Managed care...) Insufficient documentation. (iii) Procedure coding errors. (iv) Diagnosis coding errors. (v) Unbundling....

  1. LIMS user acceptance testing.

    PubMed

    Klein, Corbett S

    2003-01-01

    Laboratory Information Management Systems (LIMS) play a key role in the pharmaceutical industry. Thorough and accurate validation of such systems is critical and is a regulatory requirement. LIMS user acceptance testing is one aspect of this testing and enables the user to make a decision to accept or reject implementation of the system. This paper discusses key elements in facilitating the development and execution of a LIMS User Acceptance Test Plan (UATP).

  2. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  3. Biasing errors and corrections

    NASA Technical Reports Server (NTRS)

    Meyers, James F.

    1991-01-01

    The dependence of laser velocimeter measurement rate on flow velocity is discussed. Investigations outlining that any dependence is purely statistical, and is nonstationary both spatially and temporally, are described. Main conclusions drawn are that the times between successive particle arrivals should be routinely measured and the calculation of the velocity data rate correlation coefficient should be performed to determine if a dependency exists. If none is found, accept the data ensemble as an independent sample of the flow. If a dependency is found, the data should be modified to obtain an independent sample. Universal correcting procedures should never be applied because their underlying assumptions are not valid.
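    A minimal sketch of the check recommended above, under assumed inputs (the arrival-time and velocity arrays below are synthetic): estimate the local data rate from the times between successive particle arrivals and correlate it with the measured velocities; a coefficient near zero suggests the ensemble may be treated as an independent sample of the flow.

```python
import numpy as np

def velocity_data_rate_correlation(arrival_times, velocities):
    """Correlate measured velocities with the local data rate estimated
    from inter-arrival times (illustrative check, not the exact
    procedure of the cited work)."""
    arrival_times = np.asarray(arrival_times, dtype=float)
    velocities = np.asarray(velocities, dtype=float)
    dt = np.diff(arrival_times)        # times between successive particle arrivals
    local_rate = 1.0 / dt              # instantaneous data-rate estimate
    v_paired = velocities[1:]          # velocity paired with each inter-arrival interval
    return np.corrcoef(v_paired, local_rate)[0, 1]

# Synthetic example: Poisson-like arrivals, velocities independent of rate
rng = np.random.default_rng(0)
t = np.cumsum(rng.exponential(1e-3, size=5000))   # arrival times, s
v = rng.normal(50.0, 5.0, size=5000)              # velocities, m/s
print(f"velocity / data-rate correlation: {velocity_data_rate_correlation(t, v):+.3f}")
```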

  4. On Maximum FODO Acceptance

    SciTech Connect

    Batygin, Yuri Konstantinovich

    2014-12-24

    This note illustrates the maximum acceptance of a FODO quadrupole focusing channel. Acceptance is the largest Floquet ellipse of a matched beam: A = $\frac{a^2}{\beta_{max}}$, where a is the aperture of the channel and β_max is the largest value of the beta-function in the channel. If the aperture of the channel is restricted by a circle of radius a, the s-s acceptance is available for particles oscillating in the median plane, y = 0. Particles outside the median plane occupy a smaller phase-space area. In the x-y plane, the cross section of the accepted beam has the shape of an ellipse with truncated boundaries.
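    As a worked illustration of this definition with hypothetical channel parameters (aperture a = 10 mm, β_max = 5 m, neither taken from the note):

```latex
\[
  A \;=\; \frac{a^{2}}{\beta_{\max}}
    \;=\; \frac{(0.010\ \mathrm{m})^{2}}{5\ \mathrm{m}}
    \;=\; 2\times 10^{-5}\ \mathrm{m\cdot rad}
    \;=\; 20\ \mathrm{mm\cdot mrad}
\]
```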

  5. A general aviation simulator evaluation of a rate-enhanced instrument landing system display

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.

    1981-01-01

    A piloted-simulation study was conducted to evaluate the effect on instrument landing system tracking performance of integrating localizer-error rate with raw localizer and glide-slope error. The display was named the pseudocommand tracking indicator (PCTI) because it provides an indication of the change of heading required to track the localizer center line. Eight instrument-rated pilots each flew five instrument approaches with the PCTI and five instrument approaches with a conventional course deviation indicator. The results show good overall pilot acceptance of the display, a significant improvement in localizer tracking error, and no significant changes in glide-slope tracking error or pilot workload.

  6. Discretization error-free estimate of low temperature statistical dissociation rates in gas phase: Applications to Lennard-Jones clusters X_{13-n}Y_n (n = 0-3)

    NASA Astrophysics Data System (ADS)

    Mella, Massimo

    2008-06-01

    In this work, an improved approach for computing cluster dissociation rates using Monte Carlo (MC) simulations is proposed and a discussion is provided on its applicability as a function of environmental variables (e.g., temperature). With an analytical transformation of the integrals required to compute variational transition state theory (vTST) dissociation rates, MC estimates of the expectation value for the Dirac delta δ(q_rc − q_c) have been made free of the discretization error that is present when a prelimit form for δ is used. As a by-product of this transformation, the statistical error associated with ⟨δ(q_rc − q_c)⟩ is reduced, making this step in the calculation of vTST rates substantially more efficient (by a factor of 4-2500, roughly). The improved MC procedure is subsequently employed to compute the dissociation rate for Lennard-Jones clusters X_{13-n}Y_n (n = 0-3) as a function of temperature (T), composition, and X-Y interaction strength. The X_{13-n}Y_n family has been previously studied as a prototypical set of systems for which it may be possible to select and stabilize structures different from the icosahedral global minimum of X_13. It was found that both the dissociation rate and the dissociation mechanism, as suggested by the statistical simulations, present a marked dependence on n, T, and the nature of Y. In particular, it was found that a vacancy is preferentially formed close to a surface impurity when the X-Y interaction is weaker than the X-X one, whatever the temperature. In contrast, the mechanism was found to depend on T for stronger X-Y interactions, with vacancies being formed opposite to surface impurities at higher temperature. These behaviors reflect the important role played by surface fluctuations in defining the properties of clusters.

  7. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  8. Fabrication of a magnetic-tunnel-junction-based nonvolatile logic-in-memory LSI with content-aware write error masking scheme achieving 92% storage capacity and 79% power reduction

    NASA Astrophysics Data System (ADS)

    Natsui, Masanori; Tamakoshi, Akira; Endoh, Tetsuo; Ohno, Hideo; Hanyu, Takahiro

    2017-04-01

    A magnetic-tunnel-junction (MTJ)-based video coding hardware with an MTJ-write-error-rate relaxation scheme as well as a nonvolatile storage capacity reduction technique is designed and fabricated in a 90 nm MOS and 75 nm perpendicular MTJ process. The proposed MTJ-oriented dynamic error masking scheme suppresses the effect of write operation errors on the operation result of LSI, which results in the increase in an acceptable MTJ write error rate up to 7.8 times with less than 6% area overhead, while achieving 79% power reduction compared with that of the static-random-access-memory-based one.

  9. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
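    A minimal sketch of the general idea of modular embedding, not the patented algorithm itself (the modulus, the nearest-value rule, and the pixel example are illustrative assumptions): the payload is carried as the residue of the host value modulo 2^k, and the host is moved to the nearest value with that residue, which roughly halves the worst-case distortion relative to plain low-bit replacement.

```python
def embed_modular(host_value, payload, modulus=16):
    """Carry `payload` (0 <= payload < modulus) as the residue of the
    host value, choosing the congruent value nearest to the original.
    Illustrative sketch; clamping to the valid data range is omitted."""
    base = host_value - (host_value % modulus) + payload
    candidates = (base - modulus, base, base + modulus)
    return min(candidates, key=lambda v: abs(v - host_value))

def extract_modular(stego_value, modulus=16):
    """Recover the embedded payload as the residue modulo the modulus."""
    return stego_value % modulus

pixel = 200
stego = embed_modular(pixel, payload=13)
print(stego, extract_modular(stego), abs(stego - pixel))   # 205 13 5
```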

  10. Newbery Medal Acceptance.

    ERIC Educational Resources Information Center

    Freedman, Russell

    1988-01-01

    Presents the Newbery Medal acceptance speech of Russell Freedman, writer of children's nonfiction. Discusses the place of nonfiction in the world of children's literature, the evolution of children's biographies, and the author's work on "Lincoln." (ARH)

  11. Effectiveness and relevance of MR acceptance testing: results of an 8 year audit.

    PubMed

    McRobbie, D W; Quest, R A

    2002-06-01

    The effectiveness and relevance of independent acceptance testing was assessed by means of an audit of acceptance procedures for 17 MRI systems, with field strengths in the range 0.5-1.5 T, acquired over 8 years. Signal-to-noise ratio and geometric linearity were found to be the image quality parameters most likely to fall below acceptable or expected standards. These received confirmed successful corrective action in 69% of instances. Non-uniformity, ghosting and poor fat suppression were the next most common non-compliant parameters, but yielded less satisfactory outcomes. Spatial resolution was not found to be a sensitive parameter in determining acceptability. 49% of all non-compliant parameters received verifiable corrective attention. A schedule of actual acceptance criteria is presented and shown to be reasonable. Parameter failure rates were shown not to have improved with time. A safety audit of 11 of the installations revealed the most common failings to be inadequate suite layout and poor use of signs. The mean number of safety issues per installation identified as requiring attention was 5, from a questionnaire of 100 points. A number of anecdotal errors and omissions are reported. The data support the importance of an appropriate acceptance procedure for new clinical MRI equipment and for the involvement of a suitably qualified safety adviser on the project team from the outset.

  12. Error estimates of elastic components in stress-dependent VTI media

    NASA Astrophysics Data System (ADS)

    Spikes, Kyle T.

    2014-09-01

    This work examines the ranges of physically acceptable elastic components for a vertical transversely isotropic (VTI) laboratory shale data set. A stochastic rock-physics approach combined with physically based acceptance and rejection criteria determined the ranges. The importance of this work is to demonstrate that multiple constrained models explain independently calculated measurement error bars. The data set consisted of pressure- and directional-dependent velocity measurements conducted on a low porosity, brine-saturated hard shale. Error bars were calculated for all five elastic stiffnesses and compliances as a function of pressure. The rock physics model is pressure dependent and represents simultaneously five elastic compliances for a VTI medium. A non-linear least squares fitting routine established a best-fit model to the five compliances at all pressures. Perturbations of the best-fit model provided the statistical parameter space. Twelve physical constraints or data-set-specific conditions comprised the acceptance/rejection criteria. These constraints and conditions included strain-energy requirements, inequalities among stiffnesses and anisotropy parameters, and rates of change of moduli with pressure. The largest number of rejected models resulted from violating a criterion relating a compressional and shear stiffness. Minimum misfits between the accepted models and the data illustrate that a fraction of the accepted models best explain the data. The misfits between these accepted models and data explain the error in the data and/or inhomogeneities at the measurement scale. The ranges of acceptable elastic component values and the corresponding uncertainty estimates could be incorporated into seismic-inversion, imaging, and velocity-modeling schemes.
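    A schematic of the perturb-and-test step described above, keeping only the generic positive-strain-energy (stability) conditions for a VTI medium; the perturbation scale, the best-fit stiffness values, and the remaining study-specific criteria are assumptions for illustration.

```python
import numpy as np

def vti_stable(C11, C33, C44, C66, C12, C13):
    """Generic strain-energy (stability) conditions for a VTI medium;
    only a subset of the study's twelve acceptance criteria."""
    return (C44 > 0 and C66 > 0 and
            C11 > abs(C12) and
            (C11 + C12) * C33 > 2.0 * C13 ** 2)

def sample_accepted_models(best_fit, n_draws=20_000, rel_sigma=0.05, seed=1):
    """Perturb a best-fit stiffness set and keep only physically
    acceptable draws (rel_sigma is an assumed perturbation scale)."""
    rng = np.random.default_rng(seed)
    keys = ("C11", "C33", "C44", "C66", "C12", "C13")
    accepted = []
    for _ in range(n_draws):
        trial = {k: best_fit[k] * (1.0 + rel_sigma * rng.standard_normal())
                 for k in keys}
        if vti_stable(**trial):
            accepted.append(trial)
    return accepted

# Hypothetical best-fit stiffnesses in GPa (illustrative only)
best = {"C11": 40.0, "C33": 30.0, "C44": 10.0, "C66": 12.0, "C12": 16.0, "C13": 12.0}
models = sample_accepted_models(best)
print(f"accepted {len(models)} of 20000 perturbed models")
```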

  13. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by generating not a single functional but a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heat of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  14. Experimental demonstration of topological error correction.

    PubMed

    Yao, Xing-Can; Wang, Tian-Xiong; Chen, Hao-Ze; Gao, Wei-Bo; Fowler, Austin G; Raussendorf, Robert; Chen, Zeng-Bing; Liu, Nai-Le; Lu, Chao-Yang; Deng, You-Jin; Chen, Yu-Ao; Pan, Jian-Wei

    2012-02-22

    Scalable quantum computing can be achieved only if quantum bits are manipulated in a fault-tolerant fashion. Topological error correction--a method that combines topological quantum computation with quantum error correction--has the highest known tolerable error rate for a local architecture. The technique makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the experimental demonstration of topological error correction with an eight-photon cluster state. We show that a correlation can be protected against a single error on any quantum bit. Also, when all quantum bits are simultaneously subjected to errors with equal probability, the effective error rate can be significantly reduced. Our work demonstrates the viability of topological error correction for fault-tolerant quantum information processing.

  15. Errors associated with outpatient computerized prescribing systems

    PubMed Central

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428

  16. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Shu, L.; Kasami, T.

    1985-01-01

    A cascade coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are evaluated. They seem to be quite suitable for satellite down-link error control.

  17. The Cline of Errors in the Writing of Japanese University Students

    ERIC Educational Resources Information Center

    French, Gary

    2005-01-01

    In this study, errors in the English writing of students in the College of World Englishes at Chukyo University, Japan, are examined to determine if there is a level of acceptance among teachers. If there is, are these errors becoming part of an accepted, standardized Japanese English? Results show there is little acceptance of third person…

  18. Dependence of the bit error rate on the signal power and length of a single-channel coherent single-span communication line (100 Gbit s⁻¹) with polarisation division multiplexing

    SciTech Connect

    Gurkin, N V; Konyshev, V A; Novikov, A G; Treshchikov, V N; Ubaydullaev, R R

    2015-01-31

    We have studied experimentally and using numerical simulations and a phenomenological analytical model the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit s⁻¹ DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the dependence of the admissible input signal power range for implementation of the communication lines with a length from 30 – 50 km up to a maximum length of 250 km. (optical transmission of information)

  19. Dependence of the bit error rate on the signal power and length of a single-channel coherent single-span communication line (100 Gbit s-1) with polarisation division multiplexing

    NASA Astrophysics Data System (ADS)

    Gurkin, N. V.; Konyshev, V. A.; Nanii, O. E.; Novikov, A. G.; Treshchikov, V. N.; Ubaydullaev, R. R.

    2015-01-01

    We have studied experimentally and using numerical simulations and a phenomenological analytical model the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit s-1 DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the dependence of the admissible input signal power range for implementation of the communication lines with a length from 30 - 50 km up to a maximum length of 250 km.

  20. Accepting space radiation risks.

    PubMed

    Schimmerling, Walter

    2010-08-01

    The human exploration of space inevitably involves exposure to radiation. Associated with this exposure are multiple risks, i.e., probabilities that certain aspects of an astronaut's health or performance will be degraded. The management of these risks requires that such probabilities be accurately predicted, that the actual exposures be verified, and that comprehensive records be maintained. Implicit in these actions is the fact that, at some point, a decision has been made to accept a certain level of risk. This paper examines ethical and practical considerations involved in arriving at a determination that risks are acceptable, roles that the parties involved may play, and obligations arising out of reliance on the informed consent paradigm seen as the basis for ethical radiation risk acceptance in space.

  1. Specific Impulse and Mass Flow Rate Error

    NASA Technical Reports Server (NTRS)

    Gregory, Don A.

    2005-01-01

    Specific impulse is defined in words in many ways. Very early in any text on rocket propulsion, a phrase similar to "specific impulse is the thrust force per unit propellant weight flow per second" will be found.(2) It is only after seeing the mathematics written down that the definition means something physically to the scientists and engineers responsible for either measuring it or using someone's value for it.

  2. Errors in CT colonography.

    PubMed

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  3. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

    Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity: (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was 'socially constructed', but its initial acceptance was facilitated by the prestige and resources of its advocates.

  4. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  5. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  6. Error correction maintains post-error adjustments after one night of total sleep deprivation.

    PubMed

    Hsieh, Shulan; Tsai, Cheng-Yin; Tsai, Ling-Ling

    2009-06-01

    Previous behavioral and electrophysiologic evidence indicates that one night of total sleep deprivation (TSD) impairs error monitoring, including error detection, error correction, and posterror adjustments (PEAs). This study examined the hypothesis that error correction, manifesting as an overtly expressed self-generated performance feedback to errors, can effectively prevent TSD-induced impairment in the PEAs. Sixteen healthy right-handed adults (seven women and nine men) aged 19-23 years were instructed to respond to a target arrow flanked by four distracted arrows and to correct their errors immediately after committing errors. Task performance and electroencephalogram (EEG) data were collected after normal sleep (NS) and after one night of TSD in a counterbalanced repeated-measures design. With the demand of error correction, the participants maintained the same level of PEAs in reducing the error rate for trial N + 1 after TSD as after NS. Corrective behavior further affected the PEAs for trial N + 1 in the omission rate and response speed, which decreased and speeded up following corrected errors, particularly after TSD. These results show that error correction effectively maintains posterror reduction in both committed and omitted errors after TSD. A cerebral mechanism might be involved in the effect of error correction as EEG beta (17-24 Hz) activity was increased after erroneous responses compared to after correct responses. The practical application of error correction to increasing work safety, which can be jeopardized by repeated errors, is suggested for workers who are involved in monotonous but attention-demanding monitoring tasks.

  7. Correlates of Halo Error in Teacher Evaluation.

    ERIC Educational Resources Information Center

    Moritsch, Brian G.; Suter, W. Newton

    1988-01-01

    An analysis of 300 undergraduate psychology student ratings of teachers was undertaken to assess the magnitude of halo error and a variety of rater, ratee, and course characteristics. The raters' halo errors were significantly related to student effort in the course, previous experience with the instructor, and class level. (TJH)

  8. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  9. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
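    A toy sketch of the dictionary idea, with roof topology graphs reduced to bare edge sets and two invented dictionary entries; the actual method works on attributed graphs and drives the edits with a heuristic search rather than this greedy loop.

```python
# Toy dictionary-based graph correction: each entry maps a small erroneous
# edge pattern to its corrected replacement (patterns here are invented).
def apply_edit_dictionary(edges, dictionary, max_passes=10):
    edges = set(edges)
    for _ in range(max_passes):
        changed = False
        for bad_pattern, fix in dictionary:
            if bad_pattern <= edges:                  # erroneous subgraph present
                new_edges = (edges - bad_pattern) | fix
                if new_edges != edges:
                    edges = new_edges
                    changed = True
        if not changed:
            break
    return edges

dictionary = [
    # remove a spurious edge between two faces (invented pattern)
    (frozenset({("ridge1", "gutter3")}), frozenset()),
    # add a missing ridge-ridge connection next to an existing one (invented pattern)
    (frozenset({("ridge1", "hip2")}),
     frozenset({("ridge1", "hip2"), ("ridge1", "ridge2")})),
]
roof_graph = {("ridge1", "gutter3"), ("ridge1", "hip2"), ("hip2", "gutter1")}
print(apply_edit_dictionary(roof_graph, dictionary))
```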

  10. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  11. Measurement of diffusion coefficients from solution rates of bubbles

    NASA Technical Reports Server (NTRS)

    Krieger, I. M.

    1979-01-01

    The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.
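    For orientation, the quasi-stationary limit usually quoted for a sparingly soluble gas (the Epstein-Plesset treatment with the transient term dropped) reduces to a linear decay of the squared radius, so D follows from the slope of R² versus t; the relation below is the textbook form, not necessarily the working equations of the cited study.

```latex
% Quasi-stationary dissolution of a stationary bubble (Epstein--Plesset limit):
%   R(t)            bubble radius
%   D               diffusion coefficient of the dissolved gas
%   c_s, c_\infty   saturation and far-field gas concentrations
%   \rho_g          gas density inside the bubble
\[
  R\,\frac{dR}{dt} \;\approx\; -\,\frac{D\,(c_s - c_\infty)}{\rho_g}
  \qquad\Longrightarrow\qquad
  R^{2}(t) \;\approx\; R_0^{2} \;-\; \frac{2D\,(c_s - c_\infty)}{\rho_g}\,t ,
\]
% so a plot of R^2 against t yields D from the slope once (c_s - c_\infty)/\rho_g is known.
```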

  12. Acceptability of human risk.

    PubMed

    Kasperson, R E

    1983-10-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility.

  13. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  14. Acceptance Test Plan.

    DTIC Science & Technology

    2014-09-26

    Acceptance Test Plan for Special Reliability Tests for Broadband Microwave Amplifier Panel. David C. Kraus, Reliability Engineer, Westinghouse Defense and Electronics Center, Development and Operations Division, Baltimore, MD. (Scanned report documentation page; the remainder of the text is not legible.)

  15. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue.
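    A minimal sketch of how such a correction usually enters the likelihood (the uniform single-base error model below is a generic choice, not necessarily the paper's exact parameterization): each observed tip base contributes a probability vector that reserves the error rate ε for the three alternative bases instead of a point mass.

```python
import numpy as np

BASES = "ACGT"

def tip_partial_likelihood(observed_base, error_rate):
    """P(observed | true base) for each possible true base under a
    simple uniform sequencing-error model (generic illustration)."""
    vec = np.full(4, error_rate / 3.0)
    vec[BASES.index(observed_base)] = 1.0 - error_rate
    return vec

# Without correction the tip vector is a point mass; with a 1% assumed
# error rate a little likelihood is reserved for the other three bases.
print(tip_partial_likelihood("A", 0.0))    # [1.   0.     0.     0.    ]
print(tip_partial_likelihood("A", 0.01))   # [0.99 0.0033 0.0033 0.0033]
```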

  16. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  17. Guidelines for the assessment and acceptance of potential brain-dead organ donors

    PubMed Central

    Westphal, Glauco Adrieno; Garcia, Valter Duro; de Souza, Rafael Lisboa; Franke, Cristiano Augusto; Vieira, Kalinca Daberkow; Birckholz, Viviane Renata Zaclikevis; Machado, Miriam Cristine; de Almeida, Eliana Régia Barbosa; Machado, Fernando Osni; Sardinha, Luiz Antônio da Costa; Wanzuita, Raquel; Silvado, Carlos Eduardo Soares; Costa, Gerson; Braatz, Vera; Caldeira Filho, Milton; Furtado, Rodrigo; Tannous, Luana Alves; de Albuquerque, André Gustavo Neves; Abdala, Edson; Gonçalves, Anderson Ricardo Roman; Pacheco-Moreira, Lúcio Filgueiras; Dias, Fernando Suparregui; Fernandes, Rogério; Giovanni, Frederico Di; de Carvalho, Frederico Bruzzi; Fiorelli, Alfredo; Teixeira, Cassiano; Feijó, Cristiano; Camargo, Spencer Marcantonio; de Oliveira, Neymar Elias; David, André Ibrahim; Prinz, Rafael Augusto Dantas; Herranz, Laura Brasil; de Andrade, Joel

    2016-01-01

    Organ transplantation is the only alternative for many patients with terminal diseases. The increasing disproportion between the high demand for organ transplants and the low rate of transplants actually performed is worrisome. Some of the causes of this disproportion are errors in the identification of potential organ donors and in the determination of contraindications by the attending staff. Therefore, the aim of the present document is to provide guidelines for intensive care multi-professional staffs for the recognition, assessment and acceptance of potential organ donors. PMID:27737418

  18. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  19. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    NASA Technical Reports Server (NTRS)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-01-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
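    The quantity obtained from those weight distributions is the standard undetected-error probability for an (n, k) block code on a binary symmetric channel, with A_i the number of codewords of weight i and p the bit-error rate:

```latex
\[
  P_{ud}(p) \;=\; \sum_{i=1}^{n} A_i \, p^{\,i} (1-p)^{\,n-i},
  \qquad 10^{-5} \le p \le \tfrac{1}{2},
\]
% with the detectable-error probability given by (1-(1-p)^n) - P_{ud}(p).
```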

  20. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  1. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

    In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np → n′p′ or pp → p′p′ scattering (detected particles are underlined) which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  2. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    Crying is a baby's most important means of communication. The crying monitoring performed by existing devices does not, by itself, ensure the child's safety. These technological resources need to be coupled with a means of communicating the results to the responsible caregivers, which involves digital processing of the information available in the crying. The survey carried out made it possible to gauge the level of adoption, in mainland Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed that there is a good probability of acceptance of such a system.

  3. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  4. Acceptability of Emission Offsets

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  5. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  6. The Reliability of Pedalling Rates Employed in Work Tests on the Bicycle Ergometer.

    ERIC Educational Resources Information Center

    Bolonchuk, W. W.

    The purpose of this study was to determine whether a group of volunteer subjects could produce and maintain a pedalling cadence within an acceptable range of error. This, in turn, would aid in determining the reliability of pedalling rates employed in work tests on the bicycle ergometer. Forty male college students were randomly given four…

  7. ERROR CORRECTION IN HIGH SPEED ARITHMETIC,

    DTIC Science & Technology

    The errors due to a faulty high speed multiplier are shown to be iterative in nature. These errors are analyzed in various aspects. The arithmetic coding technique is suggested for the improvement of high speed multiplier reliability. Through a number theoretic investigation, a large class of arithmetic codes for single iterative error correction are developed. The codes are shown to have near-optimal rates and to render a simple decoding method. The implementation of these codes seems highly practical. (Author)
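    As a hedged illustration (the report does not spell out the construction here), the classic arithmetic codes are AN codes: an operand N is carried as the product A·N, arithmetic is performed directly on coded values, and any result not divisible by A signals an error; the check base A = 3 below is only an example.

```python
A = 3  # example check base; real designs choose A for the targeted error class

def encode(n):
    return A * n

def check(coded):
    """A residue of zero modulo A means no detectable arithmetic error."""
    return coded % A == 0

x, y = encode(17), encode(25)
s = x + y                  # coded addition: A*17 + A*25 = A*42
print(check(s), s // A)    # True 42

faulty = s + 1             # an arithmetic fault that is not a multiple of A
print(check(faulty))       # False -> error detected
```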

  8. Acceptance threshold theory can explain occurrence of homosexual behaviour

    PubMed Central

    Engel, Katharina C.; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. PMID:25631226
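    A small sketch of the cost-balancing logic, with invented Gaussian cue distributions and costs: the threshold that minimises expected cost becomes more permissive when rejection errors are costly and more restrictive when acceptance errors are costly.

```python
import numpy as np
from scipy.stats import norm

def optimal_threshold(cost_accept_male, cost_reject_female, p_male=0.5,
                      mu_female=0.0, mu_male=1.0, sigma=1.0):
    """Acceptance threshold t that minimises expected cost when an
    individual is accepted whenever its recognition cue falls below t.
    All distributions and costs are invented for illustration."""
    ts = np.linspace(-4.0, 5.0, 2001)
    expected_cost = (p_male * cost_accept_male * norm.cdf(ts, mu_male, sigma)
                     + (1 - p_male) * cost_reject_female
                       * (1.0 - norm.cdf(ts, mu_female, sigma)))
    return ts[np.argmin(expected_cost)]

# Costly rejection errors -> permissive threshold; costly acceptance errors -> restrictive
print(optimal_threshold(cost_accept_male=1.0, cost_reject_female=5.0))
print(optimal_threshold(cost_accept_male=5.0, cost_reject_female=1.0))
```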

  9. Acceptance threshold theory can explain occurrence of homosexual behaviour.

    PubMed

    Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors.

  10. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

    Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
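    A rough sketch of the attributes-plan arithmetic such comparisons rest on (the quality levels and risks below are invented, not the plans tested in the paper): for a single-sampling plan with sample size n and acceptance number c, the probability of accepting a lot with fraction defective p is a binomial tail, and the smallest (n, c) meeting both risk targets can be found by brute force.

```python
from math import comb

def prob_accept(n, c, p):
    """Operating-characteristic value: P(accept lot) = P(X <= c), X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def smallest_plan(aql, ltpd, alpha=0.05, beta=0.10, n_max=2000):
    """Smallest (n, c) attributes plan with producer's risk <= alpha at the AQL
    and consumer's risk <= beta at the LTPD (illustrative brute force)."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if (prob_accept(n, c, aql) >= 1 - alpha and
                    prob_accept(n, c, ltpd) <= beta):
                return n, c
    return None

# Hypothetical quality levels: AQL = 1% defective, LTPD = 5% defective
print(smallest_plan(aql=0.01, ltpd=0.05))
```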

  11. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  12. Empathy and error processing.

    PubMed

    Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A

    2010-05-01

    Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.

  13. Likelihood-based genetic mark-recapture estimates when genotype samples are incomplete and contain typing errors.

    PubMed

    Macbeth, Gilbert M; Broderick, Damien; Ovenden, Jennifer R; Buckworth, Rik C

    2011-11-01

    Genotypes produced from samples collected non-invasively in harsh field conditions often lack the full complement of data from the selected microsatellite loci. The application to genetic mark-recapture methodology in wildlife species can therefore be prone to misidentifications leading to both 'true non-recaptures' being falsely accepted as recaptures (Type I errors) and 'true recaptures' being undetected (Type II errors). Here we present a new likelihood method that allows every pairwise genotype comparison to be evaluated independently. We apply this method to determine the total number of recaptures by estimating and optimising the balance between Type I errors and Type II errors. We show through simulation that the standard error of recapture estimates can be minimised through our algorithms. Interestingly, the precision of our recapture estimates actually improved when we included individuals with missing genotypes, as this increased the number of pairwise comparisons potentially uncovering more recaptures. Simulations suggest that the method is tolerant to per-locus error rates of up to 5% and can theoretically work in datasets with as little as 60% of loci genotyped. Our methods can be implemented in datasets where standard mismatch analyses fail to distinguish recaptures. Finally, we show that by assigning a low Type I error rate to our matching algorithms we can generate a dataset of individuals of known capture histories that is suitable for the downstream analysis with traditional mark-recapture methods.
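
    The core matching step can be pictured with a short sketch (assumptions: illustrative genotype encoding plus mismatch and missing-data thresholds; this is not the authors' likelihood model): compare genotypes pairwise, ignore missing loci, and tolerate a small number of mismatches to allow for typing error.

```python
# Hedged sketch of pairwise genotype matching with tolerance for typing
# error and missing loci.  Thresholds are illustrative, not the paper's.
from itertools import combinations

def is_candidate_recapture(g1, g2, max_mismatch=1, min_shared_loci=6):
    """g1, g2: dicts mapping locus name -> genotype, with None for missing data."""
    shared = [loc for loc in g1
              if loc in g2 and g1[loc] is not None and g2[loc] is not None]
    if len(shared) < min_shared_loci:
        return False  # too little overlapping data to call a match either way
    mismatches = sum(1 for loc in shared if g1[loc] != g2[loc])
    return mismatches <= max_mismatch

def candidate_recaptures(samples):
    """samples: list of (sample_id, genotype dict); returns candidate matching pairs."""
    return [(a, b) for (a, ga), (b, gb) in combinations(samples, 2)
            if is_candidate_recapture(ga, gb)]
```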

  14. Embedded wavelet video coding with error concealment

    NASA Astrophysics Data System (ADS)

    Chang, Pao-Chi; Chen, Hsiao-Ching; Lu, Ta-Te

    2000-04-01

    We present an error-concealed embedded wavelet (ECEW) video coding system for transmission over Internet or wireless networks. This system consists of two types of frames: intra (I) frames and inter, or predicted (P), frames. Inter frames are constructed by the residual frames formed by variable block-size multiresolution motion estimation (MRME). Motion vectors are compressed by arithmetic coding. The image data of intra frames and residual frames are coded by error-resilient embedded zerotree wavelet (ER-EZW) coding. The ER-EZW coding partitions the wavelet coefficients into several groups and each group is coded independently. Therefore, the error propagation effect resulting from an error is confined to a single group. In EZW coding any single error may result in a totally undecodable bitstream. To further reduce the error damage, we use error concealment at the decoding end. In intra frames, the erroneous wavelet coefficients are replaced by neighbors. In inter frames, erroneous blocks of wavelet coefficients are replaced by data from the previous frame. Simulations show that the performance of ECEW is superior to ECEW without error concealment by approximately 7 to 8 dB at an error rate of 10^-3 in intra frames. The improvement is still approximately 2 to 3 dB at a higher error rate of 10^-2 in inter frames.

  15. TU-C-BRE-07: Quantifying the Clinical Impact of VMAT Delivery Errors Relative to Prior Patients’ Plans and Adjusted for Anatomical Differences

    SciTech Connect

    Stanhope, C; Wu, Q; Yuan, L; Liu, J; Hood, R; Yin, F; Adamson, J

    2014-06-15

    Purpose: There is increased interest in the Radiation Oncology Physics community regarding sensitivity of pre-treatment IMRT/VMAT QA to delivery errors. Consequently, tools mapping pre-treatment QA to the patient DVH have been developed. However, the quantity of plan degradation that is acceptable remains uncertain. Using DVHs adapted from prior patients’ plans, we developed a technique to determine the magnitude of various delivery errors required to degrade a treatment plan to outside the clinically accepted range. Methods: DVHs for relevant organs at risk were adapted from a population of prior patients’ plans using a machine learning algorithm to establish the clinically acceptable DVH range specific to the patient’s anatomy. We applied this technique to six low-risk prostate cancer patients treated with single-arc VMAT and compared error-induced DVH changes to the adapted DVHs to determine the magnitude of error required to push the plan outside of the acceptable range. The procedure follows: (1) Errors (systematic and random shifts of MLCs, gantry-MLC desynchronization, dose rate fluctuations, etc.) were simulated and degraded DVHs calculated using the Varian Eclipse TPS. (2) Adapted DVHs and acceptable ranges for DVHs were established. (3) Relevant dosimetric indices and corresponding acceptable ranges were calculated from the DVHs. Key indices included NTCP (Lyman-Kutcher-Burman Model) and QUANTEC’s dose-volume objectives of V75Gy≤0.15 for the rectum and V75Gy≤0.25 for the bladder. Results: Degradations to the clinical plan became “unacceptable” for 19±29mm and 1.9±2.0mm systematic outward shifts of a single leaf and leaf bank, respectively. All other simulated errors fell within the acceptable range. Conclusion: Utilizing machine learning and prior patients’ plans one can predict a clinically acceptable range of DVH degradation for a specific patient. Comparing error-induced DVH degradations to this range, it is shown that single

  16. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
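
    The quantization step the abstract refers to can be sketched as follows (a minimal NumPy illustration; the flat quantization matrix is assumed, and the invention's perceptual adaptation by luminance/contrast masking and error pooling is not reproduced here).

```python
# Hedged sketch of quantization-matrix coding of one 8x8 block of DCT
# coefficients.  Larger matrix entries give coarser quantization, hence a
# lower bit rate and a larger reconstruction error.
import numpy as np

def quantize_block(dct_block: np.ndarray, q_matrix: np.ndarray) -> np.ndarray:
    """Divide each DCT coefficient by its quantization entry and round."""
    return np.round(dct_block / q_matrix).astype(int)

def dequantize_block(q_block: np.ndarray, q_matrix: np.ndarray) -> np.ndarray:
    """Approximate reconstruction of the DCT coefficients."""
    return q_block * q_matrix

rng = np.random.default_rng(0)
dct_block = rng.normal(scale=50.0, size=(8, 8))   # stand-in DCT coefficients
q_matrix = np.full((8, 8), 16.0)                  # flat matrix, purely illustrative
recon = dequantize_block(quantize_block(dct_block, q_matrix), q_matrix)
print("max coefficient error:", np.abs(recon - dct_block).max())
```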

  17. Burst error correction extensions for large Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Owsley, P.

    1990-01-01

    Reed Solomon codes are powerful error correcting codes that include some of the best random and burst correcting codes currently known. It is well known that an (n,k) Reed Solomon code can correct up to (n - k)/2 errors. Many applications utilizing Reed Solomon codes require corrections of errors consisting primarily of bursts. In this paper, it is shown that the burst correcting ability of Reed Solomon codes can be increased beyond (n - k)/2 with an acceptable probability of miscorrect.
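
    For context, the (n - k)/2 bound mentioned above follows directly from the code parameters; the small sketch below computes it for an illustrative (255, 223) code (the burst-oriented decoding that exceeds this bound, and its miscorrection probability, are not modelled here).

```python
# Hedged sketch: the classic bounded-distance correction limit of an
# (n, k) Reed-Solomon code, t = (n - k) // 2 symbol errors.
def rs_correction_bound(n: int, k: int) -> int:
    """Maximum number of arbitrary symbol errors guaranteed correctable."""
    if not 0 < k < n:
        raise ValueError("require 0 < k < n")
    return (n - k) // 2

n, k = 255, 223   # widely used code over GF(2^8), used here only as an example
print(f"parity symbols: {n - k}, guaranteed correctable errors: {rs_correction_bound(n, k)}")
```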

  18. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    After committing an error, participants tend to perform more slowly. This phenomenon is called post-error slowing (PES). Although previous studies have explored the PES effect in the context of observed errors, the issue as to whether the slowing effect generalizes across tasksets remains unclear. Further, the generation mechanisms of PES following observed errors must be examined. To address the above issues, we employed an observation-execution task in three experiments. During each trial, participants were required to mentally observe the outcomes of their partners in the observation task and then to perform their own key-press according to the mapping rules in the execution task. In Experiment 1, the same tasksets were utilized in the observation task and the execution task, and three error rate conditions (20%, 50% and 80%) were established in the observation task. The results revealed that the PES effect after observed errors was obtained in all three error rate conditions, replicating and extending previous studies. In Experiment 2, distinct stimuli and response rules were utilized in the observation task and the execution task. The result pattern was the same as that in Experiment 1, suggesting that the PES effect after observed errors was a generic adjustment process. In Experiment 3, the response deadline was shortened in the execution task to rule out the ceiling effect, and two error rate conditions (50% and 80%) were established in the observation task. The PES effect after observed errors was still obtained in the 50% and 80% error rate conditions. However, the accuracy in the post-observed error trials was comparable to that in the post-observed correct trials, suggesting that the slowing effect and improved accuracy did not rely on the same underlying mechanism. Current findings indicate that the occurrence of PES after observed errors is not dependent on the probability of observed errors, consistent with the assumption of cognitive control account
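
    The behavioural measure at the centre of this record, post-error slowing, is simple to compute; the sketch below (illustrative field names, not the authors' analysis code) takes an ordered trial list and returns mean post-error RT minus mean post-correct RT.

```python
# Hedged sketch: post-error slowing (PES) = mean reaction time on trials
# that follow an error minus mean RT on trials that follow a correct response.
def post_error_slowing(trials):
    """trials: ordered list of dicts with keys 'rt' (seconds) and 'correct' (bool)."""
    post_error_rts, post_correct_rts = [], []
    for prev, cur in zip(trials, trials[1:]):
        (post_correct_rts if prev["correct"] else post_error_rts).append(cur["rt"])
    if not post_error_rts or not post_correct_rts:
        return None  # not enough data to estimate PES
    mean = lambda xs: sum(xs) / len(xs)
    return mean(post_error_rts) - mean(post_correct_rts)

trials = [{"rt": 0.42, "correct": True}, {"rt": 0.45, "correct": False},
          {"rt": 0.61, "correct": True}, {"rt": 0.44, "correct": True}]
print("PES (s):", post_error_slowing(trials))   # positive value = slowing
```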

  19. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  20. Children acceptance of laser dental treatment

    NASA Astrophysics Data System (ADS)

    Lazea, Andreea; Todea, Carmen

    2016-03-01

    Objectives: To evaluate the dental anxiety level and the degree of acceptance of laser assisted pedodontic treatments from the children part. Also, we want to underline the advantages of laser use in pediatric dentistry, to make this technology widely used in treating dental problems of our children patients. Methods: Thirty pediatric dental patients presented in the Department of Pedodontics, University of Medicine and Pharmacy "Victor Babeş", Timişoara were evaluated using the Wong-Baker pain rating scale, wich was administered postoperatory to all patients, to assess their level of laser therapy acceptance. Results: Wong-Baker faces pain rating scale (WBFPS) has good validity and high specificity; generally it's easy for children to use, easy to compare and has good feasibility. Laser treatment has been accepted and tolerated by pediatric patients for its ability to reduce or eliminate pain. Around 70% of the total sample showed an excellent acceptance of laser dental treatment. Conclusions: Laser technology is useful and effective in many clinical situations encountered in pediatric dentistry and a good level of pacient acceptance is reported during all laser procedures on hard and soft tissues.

  1. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.

  2. [CIRRNET® - learning from errors, a success story].

    PubMed

    Frank, O; Hochreutener, M; Wiederkehr, P; Staender, S

    2012-06-01

    CIRRNET® is the network of local error-reporting systems of the Swiss Patient Safety Foundation. The network has been running since 2006 together with the Swiss Society for Anaesthesiology and Resuscitation (SGAR), and network participants currently include 39 healthcare institutions from all four different language regions of Switzerland. Further institutions can join at any time. Local error reports in CIRRNET® are bundled at a supraregional level, categorised in accordance with the WHO classification, and analysed by medical experts. The CIRRNET® database offers a solid pool of data with error reports from a wide range of medical specialist's areas and provides the basis for identifying relevant problem areas in patient safety. These problem areas are then processed in cooperation with specialists with extremely varied areas of expertise, and recommendations for avoiding these errors are developed by changing care processes (Quick-Alerts®). Having been approved by medical associations and professional medical societies, Quick-Alerts® are widely supported and well accepted in professional circles. The CIRRNET® database also enables any affiliated CIRRNET® participant to access all error reports in the 'closed user area' of the CIRRNET® homepage and to use these error reports for in-house training. A healthcare institution does not have to make every mistake itself - it can learn from the errors of others, compare notes with other healthcare institutions, and use existing knowledge to advance its own patient safety.

  3. Multipoint Lods Provide Reliable Linkage Evidence Despite Unknown Limiting Distribution: Type I Error Probabilities Decrease with Sample Size for Multipoint Lods and Mods

    PubMed Central

    Hodge, Susan E.; Rodriguez-Murillo, Laura; Strug, Lisa J.; Greenberg, David A.

    2009-01-01

    We investigate the behavior of type I error rates in model-based multipoint (MP) linkage analysis, as a function of sample size (N). We consider both MP lods (i.e., MP linkage analysis that uses the correct genetic model) and MP mods (maximizing MP lods over 18 dominant and recessive models). Following Xing & Elston [2006], we first consider MP linkage analysis limited to a single position; then we enlarge the scope and maximize the lods and mods over a span of positions. In all situations we examined, type I error rates decrease with increasing sample size, apparently approaching zero. We show: (a) For MP lods analyzed only at a single position, well-known statistical theory predicts that type I error rates approach zero. (b) For MP lods and mods maximized over position, this result has a different explanation, related to the fact that one maximizes the scores over only a finite portion of the parameter range. The implications of these findings may be far-reaching: Although it is widely accepted that fixed nominal critical values for MP lods and mods are not known, this study shows that whatever the nominal error rates are, the actual error rates appear to decrease with increasing sample size. Moreover, the actual (observed) type I error rate may be quite small for any given study. We conclude that multipoint lod and mod scores provide reliable linkage evidence for complex diseases, despite the unknown limiting distributions of these multipoint scores. PMID:18613118

  4. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware-based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware-based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
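
    The compare-at-the-end idea can be sketched in a few lines (a hedged illustration: the stress kernel and iteration count below are stand-ins, not the patented algorithm): run a long deterministic workload intended to heat and stress the processor, then compare its output to a reference output; any mismatch signals that a hardware error occurred somewhere during the run.

```python
# Hedged sketch of output comparison for hardware error detection.
def stress_workload(iterations: int = 1_000_000, seed: int = 12345) -> int:
    """Deterministic integer workload; the result is sensitive to any faulty step."""
    state = seed
    for _ in range(iterations):
        state = (state * 6364136223846793005 + 1442695040888963407) & 0xFFFFFFFFFFFFFFFF
    return state

def detect_hardware_error(reference_output: int) -> bool:
    """Re-run the workload and flag a hardware error on any output mismatch."""
    return stress_workload() != reference_output

reference = stress_workload()   # e.g. computed once on known-good hardware
print("hardware error detected:", detect_hardware_error(reference))
```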

  5. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  6. Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    NASA Astrophysics Data System (ADS)

    Stephenson, Edward; Imig, Astrid

    2009-10-01

    The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second-order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes to the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series based on small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).

  7. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.

  8. Sonic boom acceptability studies

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; Mccurdy, David A.

    1992-01-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing, was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.
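
    The processing chain described here (pressure signature, frequency transform, one-third-octave spectrum) can be outlined with a short NumPy sketch; the band-edge convention, the normalization, and the toy input signal below are assumptions, and the final loudness weighting that yields the perceived level in dB is deliberately omitted.

```python
# Hedged sketch: collapse a sonic boom pressure-time history into
# one-third-octave band levels.  The perceptual weighting step that turns
# these levels into a loudness (perceived level, dB) is not reproduced.
import numpy as np

def third_octave_band_levels(pressure: np.ndarray, fs: float, p_ref: float = 20e-6):
    """Return (band centre frequencies in Hz, band levels in dB re p_ref)."""
    spectrum = np.fft.rfft(pressure)
    freqs = np.fft.rfftfreq(len(pressure), d=1.0 / fs)
    power = (np.abs(spectrum) / len(pressure)) ** 2        # simple normalization
    centres = 1000.0 * 2.0 ** (np.arange(-17, 14) / 3.0)   # ~20 Hz to ~20 kHz
    levels = []
    for fc in centres:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)      # nominal band edges
        band_power = power[(freqs >= lo) & (freqs < hi)].sum()
        levels.append(10 * np.log10(band_power / p_ref**2 + 1e-30))
    return centres, np.array(levels)

# Toy N-wave-like stand-in signal (60 Pa peak, 0.3 s duration), for illustration.
fs = 8000.0
t = np.arange(int(0.5 * fs)) / fs
boom = 60.0 * (1 - 2 * t / 0.3) * (t < 0.3)
centres, levels = third_octave_band_levels(boom, fs)
print("level in loudest band: %.1f dB" % levels.max())
```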

  9. Spaceborne scanner imaging system errors

    NASA Technical Reports Server (NTRS)

    Prakash, A.

    1982-01-01

    The individual sensor system design elements, which are the a priori components in the registration and rectification process, and the potential impact of error budgets on multitemporal registration and side-lap registration are analyzed. The properties of scanner, MLA, and SAR imaging systems are reviewed. Each sensor displays internal distortion properties which to varying degrees make it difficult to generate an orthophoto projection of the data acceptable for multiple pass registration or meeting national map accuracy standards, and each is also affected to varying degrees by relief displacements in moderate to hilly terrain. Nonsensor related distortions, associated with the accuracy of ephemeris determination and platform stability, have a major impact on local geometric distortions. Platform stability improvements expected from the new multi-mission spacecraft series and improved ephemeris and ground control point determination from the NAVSTAR/global positioning satellite systems are reviewed.

  10. Error latency measurements in symbolic architectures

    NASA Technical Reports Server (NTRS)

    Young, L. T.; Iyer, R. K.

    1991-01-01

    Error latency, the time that elapses between the occurrence of an error and its detection, has a significant effect on reliability. In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent errors. A hybrid monitoring environment is developed to measure the error latency distribution of errors occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-time application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise times of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of error latency.

  11. A review of major factors contributing to errors in human hair association by microscopy.

    PubMed

    Smith, S L; Linch, C A

    1999-09-01

    Forensic hair examiners using traditional microscopic comparison techniques cannot state with certainty, except in extremely rare cases, that a found hair originated from a particular individual. They also cannot provide a statistical likelihood that a hair came from a certain individual and not another. There is no data available regarding the frequency of a specific microscopic hair characteristic (i.e., microtype) or trait in a particular population. Microtype is a term we use to describe certain internal characteristics and features expressed when observing hairs with unpolarized transmitted light. Courts seem to be sympathetic to lawyers' concerns that there are no accepted probability standards for human hair identification. Under Daubert, microscopic hair analysis testimony (or other scientific testimony) is allowed if the technique can be shown to have testability, peer review, general acceptance, and a known error rate. As with other forensic disciplines, laboratory error rate determination for a specific hair comparison case is not possible. Polymerase chain reaction (PCR)-based typing of hair roots offers hair examiners an opportunity to begin cataloging data with regard to microscopic hair association error rates. This is certainly a realistic manner in which to ascertain which hair microtypes and case circumstances repeatedly cause difficulty in association. Two cases are presented in which PCR typing revealed an incorrect inclusion in one and an incorrect exclusion in another. This paper does not suggest that such limited observations define a rate of occurrence. These cases illustrate evidentiary conditions or case circumstances which may potentially contribute to microscopic hair association errors. Issues discussed in this review paper address the potential questions an expert witness may expect in a Daubert hair analysis admissibility hearing.

  12. High response rate and acceptable toxicity of a combination of rituximab, vinorelbine, ifosfamide, mitoxantrone and prednisone for the treatment of diffuse large B-cell lymphoma in first relapse: results of the R-NIMP GOELAMS study.

    PubMed

    Gyan, Emmanuel; Damotte, Diane; Courby, Stéphane; Sénécal, Delphine; Quittet, Philippe; Schmidt-Tanguy, Aline; Banos, Anne; Le Gouill, Steven; Lamy, Thierry; Fontan, Jean; Maisonneuve, Hervé; Alexis, Magda; Dreyfus, Francois; Tournilhac, Olivier; Laribi, Kamel; Solal-Céligny, Philippe; Arakelyan, Nina; Cartron, Guillaume; Gressin, Remy

    2013-07-01

    The optimal management of relapsed diffuse large B-cell lymphoma (DLBCL) is not standardized. The Groupe Ouest Est des Leucémies et Autres Maladies du Sang developed a combination of vinorelbine, ifosfamide, mitoxantrone and prednisone (NIMP) for the treatment of relapsed DLBCL, and assessed its efficacy and safety in association with rituximab (R). This multicentric phase II study included 50 patients with DLBCL in first relapse, aged 18-75 years. Patients received rituximab 375 mg/m² day 1, ifosfamide 1000 mg/m² days 1-5, vinorelbine 25 mg/m² days 1 and 15, mitoxantrone 10 mg/m² day 1, and prednisone 1 mg/kg days 1-5, every 28 days for three cycles. Responding patients underwent autologous transplantation or received three additional R-NIMP cycles. All patients were evaluable for toxicity and 49 for response. Centralized pathology review confirmed DLBCL in all cases. Toxicities were mainly haematological with infectious events needing hospitalization in nine cases. Two toxic deaths were observed. After three cycles, 22 patients (44%) achieved complete response/unconfirmed complete response, 11 achieved partial response (24%), 2 had stable disease and 13 progressed. The non-germinal centre B immunophenotype was associated with shorter progression-free survival. In conclusion, the R-NIMP regimen displayed significant activity in relapsed DLBCL, with acceptable toxicity, and should be considered a candidate for combination with new agents.

  13. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    PubMed

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  14. Twenty Questions about Student Errors.

    ERIC Educational Resources Information Center

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    1986-01-01

    Discusses the value of studying errors made by students in the process of learning science. Addresses 20 research questions dealing with student learning errors. Attempts to characterize errors made by students and clarify some terms used in error research. (TW)

  15. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  16. Teacher-Induced Errors.

    ERIC Educational Resources Information Center

    Richmond, Kent C.

    Students of English as a second language (ESL) often come to the classroom with little or no experience in writing in any language and with inaccurate assumptions about writing. Rather than correct these assumptions, teachers often seem to unwittingly reinforce them, actually inducing errors into their students' work. Teacher-induced errors occur…

  17. A survey of physicians' acceptance of telemedicine.

    PubMed

    Sheng, O R; Hu, P J; Chau, P Y; Hjelm, N M; Tam, K Y; Wei, C P; Tse, J

    1998-01-01

    Physicians' acceptance of telemedicine is an important managerial issue facing health-care organizations that have adopted, or are about to adopt, telemedicine. Most previous investigations of the acceptance of telemedicine have lacked theoretical foundation and been of limited scope. We examined technology acceptance and usage among physicians and specialists from 49 clinical departments at eight public tertiary hospitals in Hong Kong. Out of the 1021 questionnaires distributed, 310 were completed and returned, a 30% response rate. The preliminary findings suggested that use of telemedicine among clinicians in Hong Kong was moderate. While 18% of the respondents were using some form of telemedicine for patient care and management, it accounted for only 6.3% of the services provided. The intensity of their technology usage was also low, accounting for only 6.8% of a typical telemedicine-assisted service. These preliminary findings have managerial implications.

  18. Message passing in fault-tolerant quantum error correction

    NASA Astrophysics Data System (ADS)

    Evans, Zachary W. E.; Stephens, Ashley M.

    2008-12-01

    Inspired by Knill’s scheme for message passing error detection, here we develop a scheme for message passing error correction for the nine-qubit Bacon-Shor code. We show that for two levels of concatenated error correction, where classical information obtained at the first level is used to help interpret the syndrome at the second level, our scheme will correct all cases with four physical errors. This results in a reduction of the logical failure rate relative to conventional error correction by a factor proportional to the reciprocal of the physical error rate.

  19. Performing repetitive error detection in a superconducting quantum circuit

    NASA Astrophysics Data System (ADS)

    Kelly, J.; Barends, R.; Fowler, A.; Megrant, A.; Jeffrey, E.; White, T.; Sank, D.; Mutus, J.; Campbell, B.; Chen, Y.; Chen, Z.; Chiaro, B.; Dunsworth, A.; Hoi, I.-C.; Neill, C.; O'Malley, P. J. J.; Roushan, P.; Quintana, C.; Vainsencher, A.; Wenner, J.; Cleland, A. N.; Martinis, J. M.

    2015-03-01

    Recently, there has been a large interest in the surface code error correction scheme, as gate and measurement fidelities are near the threshold. If error rates are sufficiently low, increased system size leads to suppression of logical error. We have combined high-fidelity gates and measurements in a single nine-qubit device, and use it to perform up to eight rounds of repetitive bit error detection. We demonstrate suppression of environmentally-induced error as compared to a single physical qubit, as well as reduced logical error rates with increasing system size.

  20. Reduction in Hospital-Wide Clinical Laboratory Specimen Identification Errors following Process Interventions: A 10-Year Retrospective Observational Study

    PubMed Central

    Ning, Hsiao-Chen; Lin, Chia-Ni; Chiu, Daniel Tsun-Yee; Chang, Yung-Ta; Wen, Chiao-Ni; Peng, Shu-Yu; Chu, Tsung-Lan; Yu, Hsin-Ming; Wu, Tsu-Lan

    2016-01-01

    Background Accurate patient identification and specimen labeling at the time of collection are crucial steps in the prevention of medical errors, thereby improving patient safety. Methods All patient specimen identification errors that occurred in the outpatient department (OPD), emergency department (ED), and inpatient department (IPD) of a 3,800-bed academic medical center in Taiwan were documented and analyzed retrospectively from 2005 to 2014. To reduce such errors, the following series of strategies were implemented: a restrictive specimen acceptance policy for the ED and IPD in 2006; a computer-assisted barcode positive patient identification system for the ED and IPD in 2007 and 2010, and automated sample labeling combined with electronic identification systems introduced to the OPD in 2009. Results Of the 2,000,345 specimens collected in 2005, 1,023 (0.0511%) were identified as having patient identification errors, compared with 58 errors (0.0015%) among 3,761,238 specimens collected in 2014, after serial interventions; this represents a 97% relative reduction. The total numbers (rates) of institutional identification errors contributed by the ED, IPD, and OPD over the 10-year period were 423 (0.1058%), 556 (0.0587%), and 44 (0.0067%) before the interventions, and 3 (0.0007%), 52 (0.0045%) and 3 (0.0001%) after the interventions, representing relative reductions of 99%, 92% and 98%, respectively. Conclusions Accurate patient identification is a challenge of patient safety in different health settings. The data collected in our study indicate that a restrictive specimen acceptance policy, computer-generated positive identification systems, and interdisciplinary cooperation can significantly reduce patient identification errors. PMID:27494020
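
    The headline figure in this abstract is a relative reduction in the error rate; the small sketch below recomputes it from the quoted counts (all numbers are taken from the abstract itself).

```python
# Error rate and relative reduction, using the figures quoted in the abstract.
def error_rate(errors: int, specimens: int) -> float:
    return errors / specimens

def relative_reduction(rate_before: float, rate_after: float) -> float:
    return 1.0 - rate_after / rate_before

before = error_rate(1023, 2_000_345)   # 2005, before interventions
after = error_rate(58, 3_761_238)      # 2014, after interventions
print(f"before: {before:.4%}, after: {after:.4%}, "
      f"relative reduction: {relative_reduction(before, after):.0%}")
```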

  1. Changes in liver acceptance patterns after implementation of Share 35.

    PubMed

    Washburn, Kenneth; Harper, Ann; Baker, Timothy; Edwards, Erick

    2016-02-01

    The Share 35 policy was implemented June 2013. We sought to evaluate liver offer acceptance patterns of centers under this policy. We compared three 1-year eras (1, 2, and 3) before and 1 era (4) after the implementation date of the Share 35 policy (June 18, 2013). We evaluated all offers for liver-only recipients including only those offers for livers that were ultimately transplanted. Logistic regression was used to develop a liver acceptance model. In era 3, there were 4809 offers for Model for End-Stage Liver Disease (MELD) score ≥ 35 patients with 1071 acceptances (22.3%) and 10,141 offers and 1652 acceptances (16.3%) in era 4 (P < 0.001). In era 3, there were 42,954 offers for MELD score < 35 patients with 4181 acceptances (9.7%) and 44,137 offers and 3882 acceptances (8.8%) in era 4 (P < 0.001). The lower acceptance rate persisted across all United Network for Organ Sharing regions and was significantly less in regions 2, 3, 4, 5, and 7. Mean donor risk index was the same (1.3) for all eras for MELD score ≥ 35 acceptances and the same (1.4) for MELD score < 35 acceptances. Refusal reasons did not vary throughout the eras. The adjusted odds ratio of accepting a liver for a MELD score of 35+ compared to a MELD score < 35 patient was 1.289 before the policy and 0.960 after policy implementation. In conclusion, the Share 35 policy has resulted in more offers to patients with MELD scores ≥ 35. Overall acceptance rates were significantly less compared to the same patient group before the policy implementation. Centers are less likely to accept a liver for a patient with a MELD score of 35+ after the policy change. Decreased donor acceptance rates could reflect more programmatic selectivity and ongoing donor and recipient matching.

  2. Motivation and semantic context affect brain error-monitoring activity: an event-related brain potentials study.

    PubMed

    Ganushchak, Lesya Y; Schiller, Niels O

    2008-01-01

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay more attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants' performance using a picture naming task in a semantic blocking paradigm. Semantic context of to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses.

  3. Error effects in anterior cingulate cortex reverse when error likelihood is high

    PubMed Central

    Jessup, Ryan K.; Busemeyer, Jerome R.; Brown, Joshua W.

    2010-01-01

    Strong error-related activity in medial prefrontal cortex (mPFC) has been shown repeatedly with neuroimaging and event-related potential studies for the last several decades. Multiple theories have been proposed to account for error effects, including comparator models and conflict detection models, but the neural mechanisms that generate error signals remain in dispute. Typical studies use relatively low error rates, confounding the expectedness and the desirability of an error. Here we show with a gambling task and fMRI that when losses are more frequent than wins, the mPFC error effect disappears, and moreover, exhibits the opposite pattern by responding more strongly to unexpected wins than losses. These findings provide perspective on recent ERP studies and suggest that mPFC error effects result from a comparison between actual and expected outcomes. PMID:20203206

  4. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  5. Identifying subset errors in multiple sequence alignments.

    PubMed

    Roy, Aparna; Taddese, Bruck; Vohra, Shabana; Thimmaraju, Phani K; Illingworth, Christopher J R; Simpson, Lisa M; Mukherjee, Keya; Reynolds, Christopher A; Chintapalli, Sree V

    2014-01-01

    Multiple sequence alignment (MSA) accuracy is important, but there is no widely accepted method of judging the accuracy that different alignment algorithms give. We present a simple approach to detecting two types of error, namely block shifts and the misplacement of residues within a gap. Given a MSA, subsets of very similar sequences are generated through the use of a redundancy filter, typically using a 70-90% sequence identity cut-off. Subsets thus produced are typically small and degenerate, and errors can be easily detected even by manual examination. The errors, albeit minor, are inevitably associated with gaps in the alignment, and so the procedure is particularly relevant to homology modelling of protein loop regions. The usefulness of the approach is illustrated in the context of the universal but little known [K/R]KLH motif that occurs in intracellular loop 1 of G protein coupled receptors (GPCR); other issues relevant to GPCR modelling are also discussed.
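
    The redundancy-filter step can be sketched as follows (a hedged illustration; the greedy grouping and the 80% default cut-off are assumptions, the paper only specifies a 70-90% identity cut-off): extract subsets of aligned sequences whose pairwise identity exceeds the cut-off, so that near-identical subsets can be inspected for block shifts and misplaced residues around gaps.

```python
# Hedged sketch of a redundancy filter over a multiple sequence alignment.
def pairwise_identity(a: str, b: str) -> float:
    """Fraction of columns where both aligned sequences carry the same residue."""
    cols = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x == y for x, y in cols) / len(cols) if cols else 0.0

def redundant_subsets(msa: dict, cutoff: float = 0.80):
    """msa: dict name -> aligned sequence; greedily group sequences above cutoff."""
    groups = []
    for name in msa:
        for group in groups:
            if all(pairwise_identity(msa[name], msa[m]) >= cutoff for m in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return [g for g in groups if len(g) > 1]   # only subsets worth inspecting
```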

  6. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define System; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  7. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  8. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  9. Inconsistent reporting of minimally invasive surgery errors

    PubMed Central

    White, AD; Skelton, M; Mushtaq, F; Pike, TW; Mon-Williams, M; Lodge, JPA; Wilkie, RM

    2015-01-01

    Introduction Minimally invasive surgery (MIS) is a complex task requiring dexterity and high level cognitive function. Unlike surgical ‘never events’, potentially important (and frequent) manual or cognitive slips (‘technical errors’) are underresearched. Little is known about the occurrence of routine errors in MIS, their relationship to patient outcome, and whether they are reported accurately and/or consistently. Methods An electronic survey was sent to all members of the Association of Surgeons of Great Britain and Ireland, gathering demographic information, experience and reporting of MIS errors, and a rating of factors affecting error prevalence. Results Of 249 responses, 203 completed more than 80% of the questions regarding the surgery they had performed in the preceding 12 months. Of these, 47% reported a significant error in their own performance and 75% were aware of a colleague experiencing error. Technical skill, knowledge, situational awareness and decision making were all identified as particularly important for avoiding errors in MIS. Reporting of errors was variable: 15% did not necessarily report an intraoperative error to a patient while 50% did not consistently report at an institutional level. Critically, 12% of surgeons were unaware of the procedure for reporting a technical error and 59% felt guidance is needed. Overall, 40% believed a confidential reporting system would increase their likelihood of reporting an error. Conclusion These data indicate inconsistent reporting of operative errors, and highlight the need to better understand how and why technical errors occur in MIS. A confidential ‘no blame’ reporting system might help improve patient outcomes and avoid a closed culture that can undermine public confidence. PMID:26492908

  10. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
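
    The per-voxel statistic described above (rms variation of delivered dose across repeated simulated deliveries, expressed against the prescription) can be written down directly; the sketch below is a generic illustration with synthetic data, not the authors' simulation.

```python
# Hedged sketch: per-voxel rms dose error over repeated simulated deliveries,
# reported as a percentage of the prescribed dose.
import numpy as np

def rms_dose_error_percent(deliveries: np.ndarray, prescribed_dose: float) -> np.ndarray:
    """deliveries: array of shape (n_runs, nx, ny, nz) of delivered dose in Gy."""
    per_voxel_rms = deliveries.std(axis=0)        # rms deviation across runs
    return 100.0 * per_voxel_rms / prescribed_dose

# Synthetic example: 20 deliveries of a 2 Gy plan with ~1.5% random fluctuations.
rng = np.random.default_rng(1)
runs = 2.0 + rng.normal(scale=0.03, size=(20, 32, 40, 32))
errors = rms_dose_error_percent(runs, prescribed_dose=2.0)
print(f"max per-voxel rms error: {errors.max():.2f}% of prescription")
```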

  11. Cone penetrometer acceptance test report

    SciTech Connect

    Boechler, G.N.

    1996-09-19

    This Acceptance Test Report (ATR) documents the results of acceptance test procedure WHC-SD-WM-ATR-151. Included in this report is a summary of the tests, the results and issues, the signature and sign-off ATP pages, and a summarized table of the specification vs. ATP section that satisfied the specification.

  12. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  13. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm.

  14. Drug Administration Errors in Hospital Inpatients: A Systematic Review

    PubMed Central

    Berdot, Sarah; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre; Sabatier, Brigitte

    2013-01-01

    Context Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. Objectives We aimed to analyze the prevalence and nature of administration errors detected by the observation method. Data Sources Embase, MEDLINE, Cochrane Library from 1966 to December 2011 and reference lists of included studies. Study Selection Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured the rate of administration errors in inpatients were included. Data Extraction Two reviewers (senior pharmacists) independently identified studies for inclusion. One reviewer extracted the data; the second reviewer checked the data. The main outcome was the error rate, calculated as the number of errors (excluding wrong time errors) divided by the Total Opportunity for Errors (TOE, the sum of the total number of doses ordered plus the unordered doses given), multiplied by 100. For studies that reported it, clinical impact was reclassified into four categories from fatal to minor or no impact. Due to large heterogeneity, results were expressed as median values (interquartile range, IQR), according to their study design. Results Among 2088 studies, a total of 52 reported TOE. Most of the studies were cross-sectional studies (N=46). The median error rate without wrong time errors for the cross-sectional studies using TOE was 10.5% [IQR: 7.3%-21.7%]. No fatal error was observed and most errors were classified as minor in the 18 studies in which clinical impact was analyzed. We did not find any evidence of publication bias. Conclusions Administration errors are frequent among inpatients. The median error rate without wrong time errors for the cross-sectional studies using TOE was about 10%. A standardization of the administration error rate using the same denominator (TOE), numerator and types of errors is essential for further publications. PMID:23818992
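
    The error-rate definition used in the review reduces to a one-line calculation; the helper below restates it with illustrative numbers (the variable names and figures are ours, not data from the included studies).

      def administration_error_rate(errors_excl_wrong_time, doses_ordered, unordered_doses_given):
          """Error rate (%) as defined in the review: errors excluding wrong-time
          errors divided by the Total Opportunity for Errors (TOE), times 100."""
          toe = doses_ordered + unordered_doses_given
          return 100.0 * errors_excl_wrong_time / toe

      # illustrative numbers only
      print(administration_error_rate(105, 980, 20))   # -> 10.5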

  15. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  16. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-04-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.

  17. Using Errors by Guard Honeybees (Apis mellifera) to Gain New Insights into Nestmate Recognition Signals.

    PubMed

    Pradella, Duccio; Martin, Stephen J; Dani, Francesca R

    2015-11-01

    Although the honeybee (Apis mellifera) is one of the world's most studied insects, the chemical compounds used in nestmate recognition remain an open question. By exploiting the error-prone recognition system of the honeybee, coupled with genotyping, we studied the correlation between the cuticular hydrocarbon (CHC) profile of returning foragers and acceptance or rejection behavior by guards. We revealed an average recognition error rate of 14% across 3 study colonies, that is, allowing a non-nestmate to enter the colony or preventing a nestmate from entering, which is lower than reported in previous studies. By analyzing CHCs, we found that the CHC profile of returning foragers correlates with acceptance or rejection by guarding bees. Although several CHCs were identified as potential recognition cues, only a subset of 4 differed consistently in their relative amounts between accepted and rejected individuals in the 3 studied colonies. These include a unique group of 2 positional alkene isomers (Z-8 and Z-10), which are almost exclusively produced by Bombus and Apis spp., and may be candidate compounds for further study.

  18. Freeform solar concentrator with a highly asymmetric acceptance cone

    NASA Astrophysics Data System (ADS)

    Wheelwright, Brian; Angel, J. Roger P.; Coughenour, Blake; Hammer, Kimberly

    2014-10-01

    A solar concentrator with a highly asymmetric acceptance cone is investigated. Concentrating photovoltaic systems require dual-axis sun tracking to maintain nominal concentration throughout the day. In addition to collecting direct rays from the solar disk, which subtends ~0.53 degrees, concentrating optics must allow for in-field tracking errors due to mechanical misalignment of the module, wind loading, and control loop biases. The angular range over which the concentrator maintains >90% of on-axis throughput is defined as the optical acceptance angle. Concentrators with substantial rotational symmetry likewise exhibit rotationally symmetric acceptance angles. In the field, this is sometimes a poor match with azimuth-elevation trackers, which have inherently asymmetric tracking performance. Pedestal-mounted trackers with low torsional stiffness about the vertical axis have better elevation tracking than azimuthal tracking. Conversely, trackers which rotate on large-footprint circular tracks are often limited by elevation tracking performance. We show that a line-focus concentrator, composed of a parabolic trough primary reflector and a freeform refractive secondary, can be tailored to have a highly asymmetric acceptance angle. The design is suitable for a tracker with excellent tracking accuracy in the elevation direction and poor accuracy in the azimuthal direction. In the 1000X design given, when trough optical errors (2 mrad rms slope deviation) are accounted for, the azimuthal acceptance angle is ±1.65°, while the elevation acceptance angle is only ±0.29°. This acceptance angle does not include the angular width of the sun, which consumes nearly all of the elevation tolerance at this concentration level. By decreasing the average concentration, the elevation acceptance angle can be increased. This is well-suited for a pedestal alt-azimuth tracker with a low-cost slew bearing (without anti-backlash features).

  19. Determination and Modeling of Error Densities in Ephemeris Prediction

    SciTech Connect

    Jones, J.P.; Beckerman, M.

    1999-02-07

    The authors determined error densities of ephemeris predictions for 14 LEO satellites. The empirical distributions are not inconsistent with the hypothesis of a Gaussian distribution. The growth rate of radial errors is most highly correlated with eccentricity (|r| = 0.63, α < 0.05). The growth rate of along-track errors is most highly correlated with the decay rate of the semimajor axis (|r| = 0.97; α < 0.01).

  20. The influence of the IMRT QA set-up error on the 2D and 3D gamma evaluation method as obtained by using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kim, Kyeong-Hyeon; Kim, Dong-Su; Kim, Tae-Ho; Kang, Seong-Hee; Cho, Min-Seok; Suh, Tae Suk

    2015-11-01

    The phantom-alignment error is one of the factors affecting delivery quality assurance (QA) accuracy in intensity-modulated radiation therapy (IMRT). Accordingly, spatial information may be used inadequately in the gamma evaluation for patient-specific IMRT QA. The influence of the phantom-alignment error on gamma evaluation can be demonstrated experimentally by using the gamma passing rate and the gamma value. However, such experimental methods are limited for intrinsically verifying the influence of the phantom set-up error, because measuring the phantom-alignment error accurately in an experiment is impossible. To overcome this limitation, we aimed to verify the effect of the phantom set-up error within the gamma evaluation formula by using a Monte Carlo simulation. Artificial phantom set-up errors were simulated, and the concept of the true point (TP) was used to represent the actual coordinates of the measurement point for the mathematical modeling of these effects on the gamma. Using dose distributions acquired from the Monte Carlo simulation, we performed gamma evaluations in 2D and 3D. The results of the gamma evaluations and the dose differences at the TP were classified to verify the degree to which the dose at the TP was reflected. The 2D and the 3D gamma errors were defined by comparing gamma values between the case of the imposed phantom set-up error and the TP in order to investigate the effect of the set-up error on the gamma value. According to the results for the gamma errors, the 3D gamma evaluation reflected the dose at the TP better than the 2D one. Moreover, the gamma passing rates were higher for 3D than for 2D, as is widely known. Thus, the 3D gamma evaluation can increase the precision of patient-specific IMRT QA by applying stringent acceptance criteria and setting a reasonable action level for the 3D gamma passing rate.
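
    The gamma evaluation referred to here is the standard gamma-index comparison of a reference and an evaluated dose distribution. The brute-force 2D sketch below, under assumed 3%/3 mm criteria, illustrates the formula only; it is not the authors' Monte Carlo implementation.

      import numpy as np

      def gamma_2d(dose_ref, dose_eval, spacing_mm, dose_crit=0.03, dist_crit_mm=3.0):
          """Brute-force 2D gamma index with global dose normalization.
          A simplified sketch of the standard gamma formula."""
          nx, ny = dose_ref.shape
          xs = np.arange(nx) * spacing_mm
          ys = np.arange(ny) * spacing_mm
          X, Y = np.meshgrid(xs, ys, indexing="ij")
          d_norm = dose_crit * dose_ref.max()           # global dose criterion
          gamma = np.empty_like(dose_ref)
          for i in range(nx):
              for j in range(ny):
                  dist2 = (X - xs[i]) ** 2 + (Y - ys[j]) ** 2
                  dose2 = (dose_eval - dose_ref[i, j]) ** 2
                  gamma[i, j] = np.sqrt(
                      dist2 / dist_crit_mm ** 2 + dose2 / d_norm ** 2).min()
          return gamma

      # passing rate = fraction of points with gamma <= 1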

  1. Error monitoring in musicians

    PubMed Central

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  2. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  3. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
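
    The quantization step described above can be illustrated with a generic 8x8 DCT block. The quantization matrix below is a simple placeholder that grows with spatial frequency, whereas the invention derives it from luminance and contrast masking; the sketch shows only the mechanics of quantizing and reconstructing a block.

      import numpy as np
      from scipy.fftpack import dct, idct

      def dct2(block):
          return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

      def idct2(block):
          return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

      # Placeholder quantization matrix (coarser at high spatial frequency);
      # the patented method instead derives Q from luminance/contrast masking.
      Q = 16 + 8 * np.add.outer(np.arange(8), np.arange(8))

      block = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float) - 128
      coeffs = dct2(block)
      quantized = np.round(coeffs / Q)            # lossy step controlled by Q
      reconstructed = idct2(quantized * Q) + 128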

  4. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  5. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  6. Native Speaker Judgment and the Evaluation of Errors in German.

    ERIC Educational Resources Information Center

    Delisle, Helga H.

    1982-01-01

    Presents and analyzes two studies designed to test native speaker reaction to certain types of errors that speakers of English make when learning German. Aim was to establish the role of the medium, spoken or written, in evaluation of errors. Results show overall ratings of errors in written and spoken language are similar, although with…

  7. Extending the Technology Acceptance Model: Policy Acceptance Model (PAM)

    NASA Astrophysics Data System (ADS)

    Pierce, Tamra

    There has been extensive research on how new ideas and technologies are accepted in society. This has resulted in the creation of many models that are used to discover and assess the contributing factors. The Technology Acceptance Model (TAM) is one widely accepted model. This model examines people's acceptance of new technologies based on variables that directly correlate to how the end user views the product. This paper introduces the Policy Acceptance Model (PAM), an expansion of TAM, which is designed for the analysis and evaluation of acceptance of new policy implementation. PAM includes the traditional constructs of TAM and adds the variables of age, ethnicity, and family. The model is demonstrated using a survey of people's attitudes toward the upcoming healthcare reform in the United States (US) from 72 survey respondents. The aim is that the theory behind this model can be used as a framework applicable to studies looking at the introduction of any new or modified policies.

  8. Superdense coding interleaved with forward error correction

    SciTech Connect

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security, but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve performance in near-term demonstrations of superdense coding.
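
    A block interleaver of the kind discussed above writes FEC codewords as rows and transmits them column by column, so that a channel burst is spread across several codewords. A minimal sketch with arbitrary codeword length and depth (the burst position and values are illustrative):

      import numpy as np

      def interleave(codewords):
          """Write codewords as rows, transmit column by column (block interleaver)."""
          return np.asarray(codewords).T.flatten()

      def deinterleave(stream, n_codewords, codeword_len):
          return stream.reshape(codeword_len, n_codewords).T

      # Four 8-symbol codewords; a burst of 4 channel errors hits consecutive symbols.
      cw = np.arange(32).reshape(4, 8)
      tx = interleave(cw)
      tx[10:14] ^= 0xFF                      # burst error in the channel (illustrative)
      rx = deinterleave(tx, 4, 8)
      # After deinterleaving, the burst is spread as ~1 error per codeword,
      # which a short FEC code can correct.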

  9. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  10. An error control system with multiple-stage forward error corrections

    NASA Technical Reports Server (NTRS)

    Takata, Toyoo; Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1990-01-01

    A robust error-control coding system is presented. This system is a cascaded FEC (forward error control) scheme supported by parity retransmissions for further error correction in the erroneous data words. The error performance and throughput efficiency of the system are analyzed. Two specific examples of the error-control system are studied. The first example does not use an inner code, and the outer code, which is not interleaved, is a shortened code of the NASA standard RS code over GF(2^8). The second example, as proposed for NASA, uses the same shortened RS code as the base outer code C2, except that it is interleaved to a depth of 2. It is shown that both examples provide high reliability and throughput efficiency even for high channel bit-error rates in the range of 0.01.

  11. Soft Error Vulnerability of Iterative Linear Algebra Methods

    SciTech Connect

    Bronevetsky, G; de Supinski, B

    2008-01-19

    Devices are increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft error rates were significant primarily in space and high-atmospheric computing. Modern architectures now use feature sizes so small, and voltages so low, that soft errors are becoming important even at terrestrial altitudes. Due to their large number of components, supercomputers are particularly susceptible to soft errors. Since many large-scale parallel scientific applications use iterative linear algebra methods, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. Many users consider these methods invulnerable to most soft errors since they converge from an imprecise solution to a precise one. However, we show in this paper that iterative methods are vulnerable to soft errors, exhibiting both silent data corruptions and a poor ability to detect errors. Further, we evaluate a variety of soft error detection and tolerance techniques, including checkpointing, linear matrix encodings, and residual tracking techniques.
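
    One of the detection techniques evaluated, residual tracking, can be sketched as follows: periodically recompute the true residual b - Ax and compare it with the recursively updated residual; a silent corruption of the iterate shows up as a mismatch. The conjugate-gradient loop, test matrix and injected error below are illustrative assumptions, not the paper's experiments.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 200
      A = rng.standard_normal((n, n))
      A = A @ A.T + n * np.eye(n)            # SPD test matrix (illustrative)
      b = rng.standard_normal(n)

      x = np.zeros(n)
      r = b - A @ x
      p = r.copy()
      for k in range(1, 101):
          Ap = A @ p
          alpha = (r @ r) / (p @ Ap)
          x += alpha * p
          r_new = r - alpha * Ap             # recursively updated residual
          if k == 30:                        # inject a "soft error" (bit-flip proxy)
              x[17] += 1e3
          # residual tracking: compare recursive residual with the true residual
          if k % 10 == 0:
              true_r = b - A @ x
              gap = np.linalg.norm(true_r - r_new) / np.linalg.norm(b)
              if gap > 1e-6:
                  print(f"iteration {k}: residual mismatch {gap:.2e} -> possible soft error")
          beta = (r_new @ r_new) / (r @ r)
          p = r_new + beta * p
          r = r_new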

  12. Market Acceptance of Smart Growth

    EPA Pesticide Factsheets

    This report finds that smart growth developments enjoy market acceptance because of stability in prices over time. Housing resales in smart growth developments often have greater appreciation than their conventional suburban counterparts.

  13. L-286 Acceptance Test Record

    SciTech Connect

    HARMON, B.C.

    2000-01-14

    This document provides a detailed account of how the acceptance testing was conducted for Project L-286, ''200E Area Sanitary Water Plant Effluent Stream Reduction''. The testing of the L-286 instrumentation system was conducted under the direct supervision

  14. Acceptance criteria for urban dispersion model evaluation

    NASA Astrophysics Data System (ADS)

    Hanna, Steven; Chang, Joseph

    2012-05-01

    The authors suggested acceptance criteria for rural dispersion models' performance measures in this journal in 2004. The current paper suggests modified values of acceptance criteria for urban applications and tests them with tracer data from four urban field experiments. For the arc-maximum concentrations, the fractional bias should have a magnitude <0.67 (i.e., the relative mean bias is less than a factor of 2); the normalized mean-square error should be <6 (i.e., the random scatter is less than about 2.4 times the mean); and the fraction of predictions that are within a factor of two of the observations (FAC2) should be >0.3. For all data paired in space, for which a threshold concentration must always be defined, the normalized absolute difference should be <0.50, when the threshold is three times the instrument's limit of quantification (LOQ). An overall criterion is then applied that the total set of acceptance criteria should be satisfied in at least half of the field experiments. These acceptance criteria are applied to evaluations of the US Department of Defense's Joint Effects Model (JEM) with tracer data from US urban field experiments in Salt Lake City (U2000), Oklahoma City (JU2003), and Manhattan (MSG05 and MID05). JEM includes the SCIPUFF dispersion model with the urban canopy option and the urban dispersion model (UDM) option. In each set of evaluations, three or four likely options are tested for meteorological inputs (e.g., a local building top wind speed, the closest National Weather Service airport observations, or outputs from numerical weather prediction models). It is found that, due to large natural variability in the urban data, there is not a large difference between the performance measures for the two model options and the three or four meteorological input options. The more detailed UDM and the state-of-the-art numerical weather models do provide a slight improvement over the other options. The proposed urban dispersion model acceptance
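
    The performance measures behind these acceptance criteria (fractional bias, normalized mean-square error, FAC2 and normalized absolute difference) can be computed as in the sketch below, which uses the definitions commonly given in the dispersion-model evaluation literature; the exact forms should be checked against the paper.

      import numpy as np

      def performance_measures(obs, pred):
          """FB, NMSE, FAC2 and NAD as commonly defined in dispersion-model
          evaluation (verify the definitions against the paper)."""
          obs, pred = np.asarray(obs, float), np.asarray(pred, float)
          fb = (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))
          nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
          ratio = pred / obs
          fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))
          nad = np.sum(np.abs(obs - pred)) / np.sum(obs + pred)
          return fb, nmse, fac2, nad

      def meets_urban_criteria(fb, nmse, fac2, nad):
          # acceptance criteria quoted in the abstract
          return abs(fb) < 0.67 and nmse < 6 and fac2 > 0.3 and nad < 0.50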

  15. Error Sensitivity Model.

    DTIC Science & Technology

    1980-04-01

    Philosophy: The Positioning/Error Model has been defined in three distinct phases: I - Error Sensitivity Model; II - Operational Positioning Model; III - (remainder of the abstract is garbled program-listing text in the source record).

  16. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  17. Errors in particle tracking velocimetry with high-speed cameras.

    PubMed

    Feng, Yan; Goree, J; Liu, Bin

    2011-05-01

    Velocity errors in particle tracking velocimetry (PTV) are studied. When using high-speed video cameras, the velocity error may increase at a high camera frame rate. This increase in velocity error is due to particle-position uncertainty, which is one of the two sources of velocity errors studied here. The other source of error is particle acceleration, which has the opposite trend of diminishing at higher frame rates. Both kinds of errors can propagate into quantities calculated from velocity, such as the kinetic temperature of particles or correlation functions. As demonstrated in a dusty plasma experiment, the kinetic temperature of particles has no unique value when measured using PTV, but depends on the sampling time interval or frame rate. It is also shown that an artifact appears in an autocorrelation function computed from particle positions and velocities, and it becomes more severe when a small sampling-time interval is used. Schemes to reduce these errors are demonstrated.
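
    The two competing error sources can be illustrated numerically. For a simple two-frame velocity estimate v = (x2 - x1)/dt with independent position noise of standard deviation sigma_x, the noise contribution is roughly sqrt(2)*sigma_x/dt and grows with frame rate, while the truncation error from acceleration is roughly a*dt/2 and shrinks with frame rate; the numbers below are placeholders, not values from the dusty-plasma experiment.

      import numpy as np

      sigma_x = 0.01          # assumed particle-position uncertainty (mm)
      a = 5.0                 # assumed particle acceleration (mm/s^2)
      frame_rates = np.array([50, 100, 250, 500, 1000.0])   # frames per second
      dt = 1.0 / frame_rates

      # Two-frame velocity estimate v = (x2 - x1) / dt:
      noise_error = np.sqrt(2) * sigma_x / dt     # grows with frame rate
      accel_error = a * dt / 2                    # shrinks with frame rate

      for f, e1, e2 in zip(frame_rates, noise_error, accel_error):
          print(f"{f:6.0f} fps: position-noise error {e1:7.2f}, acceleration error {e2:7.4f} mm/s")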

  18. Evaluation of drug administration errors in a teaching hospital

    PubMed Central

    2012-01-01

    Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. Identifying their determinants helps in designing targeted interventions. PMID:22409837

  19. Acceptability of two spermicides in five countries.

    PubMed

    Raymond, E; Alvarado, G; Ledesma, L; Diaz, S; Bassol, S; Morales, E; Fernandez, V; Carlos, G

    1999-07-01

    Data from a large, international, multicenter, randomized trial were analyzed to compare the acceptability of two nonoxynol-9 spermicide preparations. Women who wished to use a spermicide for contraception were randomly assigned to use either a foaming tablet (n = 383) or a nonoxynol-9 film (n = 382) for 28 weeks as their only method of contraception. Participants completed questionnaires about acceptability of the assigned product 4 weeks after admission and at discontinuation. Women in both groups had very favorable opinions of the spermicide. The proportion of women who said that they liked their assigned product very much was 50% in the tablet group and 55% in the film group. Significantly more women in the film group rated the spermicide difficult to insert and stated that the product stuck to the finger during insertion. More women in the tablet group said that the product was messy and that, at least once, it did not dissolve. In both groups, liking the product was significantly associated with consistency of use, but not with subsequent pregnancy. Participants' male partners had little influence on participants' opinions about, or use of, the spermicides. Although previous analyses showed that both spermicides are associated with high pregnancy rates, they are both highly acceptable to most women.

  20. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  1. Continuation rates, bleeding profile acceptability, and satisfaction of women using an oral contraceptive pill containing estradiol valerate and dienogest versus a progestogen-only pill after switching from an ethinylestradiol-containing pill in a real-life setting: results of the CONTENT study

    PubMed Central

    Briggs, Paula; Serrani, Marco; Vogtländer, Kai; Parke, Susanne

    2016-01-01

    Background Oral contraceptives are still associated with high discontinuation rates, despite their efficacy. There is a wide choice of oral contraceptives available, and the aim of this study was to assess continuation rates, bleeding profile acceptability, and the satisfaction of women in the first year of using a contraceptive pill containing estradiol valerate and dienogest (E2V/DNG) versus a progestogen-only pill (POP) in a real-life setting after discontinuing an ethinylestradiol-containing pill. Methods and results In this prospective, noninterventional, observational study, 3,152 patients were included in the efficacy analyses (n=2,558 women in the E2V/DNG group and n=592 in the POP group; two patients fulfilled the criteria for the efficacy population, but the product used was not known). Women had been taking an ethinylestradiol-containing pill for ≥3 months before deciding to switch to the E2V/DNG pill or a POP. Overall, 19.8% (n=506) of E2V/DNG users and 25.8% (n=153) of POP users discontinued their prescribed pill. The median time to discontinuation was 157.0 days and 127.5 days, respectively. Time to discontinuation due to bleeding (P<0.0001) or other reasons (P=0.022) was significantly longer in the E2V/DNG group than in the POP group. The E2V/DNG pill was also associated with shorter (48.7% vs 44.1%), lighter (54% vs 46.1%), and less painful bleeding (91.1% vs 73.7%) and greater user satisfaction (80.7% vs 64.6%) than POP use, within 3–5 months after the switch. Conclusion The E2V/DNG pill was associated with higher rates of continuation, bleeding profile acceptability, and user satisfaction than POP use and may be an alternative option for women who are dissatisfied with their current pill. PMID:27695365

  2. Spin glasses and error-correcting codes

    NASA Technical Reports Server (NTRS)

    Belongie, M. L.

    1994-01-01

    In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.

  3. Report of the Subpanel on Error Characterization and Error Budgets

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state of knowledge of both user positioning requirements and error models of current and proposed satellite systems is reviewed. In particular the error analysis models for LANDSAT D are described. Recommendations are given concerning the geometric error model for the thematic mapper; interactive user involvement in system error budgeting and modeling and verification on real data sets; and the identification of a strawman mission for modeling key error sources.

  4. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
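
    Interval-based error propagation of the kind described can be sketched with a toy interval type (INTLAB itself is a MATLAB toolbox; the class below is an illustration only and omits the directed rounding a rigorous implementation requires).

      from dataclasses import dataclass

      @dataclass
      class Interval:
          lo: float
          hi: float
          def __add__(self, other):
              return Interval(self.lo + other.lo, self.hi + other.hi)
          def __sub__(self, other):
              return Interval(self.lo - other.hi, self.hi - other.lo)
          def __mul__(self, other):
              products = (self.lo * other.lo, self.lo * other.hi,
                          self.hi * other.lo, self.hi * other.hi)
              return Interval(min(products), max(products))

      # Propagate measurement uncertainty through a formula, e.g. z = (a - b) * c
      a = Interval(2.00, 2.02)      # illustrative bounds
      b = Interval(1.98, 2.00)
      c = Interval(9.9, 10.1)
      z = (a - b) * c
      print(z)                      # enclosure of all possible results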

  5. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  6. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  7. Foliated Quantum Error-Correcting Codes

    NASA Astrophysics Data System (ADS)

    Bolt, A.; Duclos-Cianci, G.; Poulin, D.; Stace, T. M.

    2016-08-01

    We show how to construct a large class of quantum error-correcting codes, known as Calderbank-Steane-Shor codes, from highly entangled cluster states. This becomes a primitive in a protocol that foliates a series of such cluster states into a much larger cluster state, implementing foliated quantum error correction. We exemplify this construction with several familiar quantum error-correction codes and propose a generic method for decoding foliated codes. We numerically evaluate the error-correction performance of a family of finite-rate Calderbank-Steane-Shor codes known as turbo codes, finding that they perform well over moderate depth foliations. Foliated codes have applications for quantum repeaters and fault-tolerant measurement-based quantum computation.

  8. Foliated Quantum Error-Correcting Codes.

    PubMed

    Bolt, A; Duclos-Cianci, G; Poulin, D; Stace, T M

    2016-08-12

    We show how to construct a large class of quantum error-correcting codes, known as Calderbank-Steane-Shor codes, from highly entangled cluster states. This becomes a primitive in a protocol that foliates a series of such cluster states into a much larger cluster state, implementing foliated quantum error correction. We exemplify this construction with several familiar quantum error-correction codes and propose a generic method for decoding foliated codes. We numerically evaluate the error-correction performance of a family of finite-rate Calderbank-Steane-Shor codes known as turbo codes, finding that they perform well over moderate depth foliations. Foliated codes have applications for quantum repeaters and fault-tolerant measurement-based quantum computation.

  9. Cognitive Errors in Reconciling Complex Medication Lists

    PubMed Central

    Horsky, Jan; Ramelson, Harley Z.

    2016-01-01

    Discrepancies between multiple electronic versions of patient medication records contribute to adverse drug events. Regular reconciliation increases their accuracy but is often inadequately supported by EHRs. We evaluated two systems with conceptually different interface designs for their effectiveness in resolving discrepancies. Eleven clinicians reconciled a complex list of 16 medications using both EHRs in the same standardized scenario. Errors such as omissions to add or discontinue a drug or to update a dose were analyzed. Clinicians made three times as many errors when working with an EHR that arranged lists in a single column as when using a system with side-by-side lists. Excessive cognitive effort and reliance on memory were likely strong contributing factors to the lower accuracy of reconciliation. As errors increase with task difficulty, evaluations of reconciliation tools need to focus on complex prescribing scenarios to accurately assess effectiveness and error rate, and whether they reduce risk to patient safety. PMID:28269860

  10. Type I Error Control for Tree Classification

    PubMed Central

    Jung, Sin-Ho; Chen, Yong; Ahn, Hongshik

    2014-01-01

    Binary tree classification has been useful for classifying a whole population based on the levels of an outcome variable that is associated with chosen predictors. Often we start a classification with a large number of candidate predictors, and each predictor takes a number of different cutoff values. Because of these types of multiplicity, the binary tree classification method is subject to a severely inflated type I error probability. Nonetheless, few publications have addressed this issue. In this paper, we propose a binary tree classification method that controls the probability of accepting a predictor by chance to below a certain level, say 5%. PMID:25452689

  11. From requirements to acceptance tests

    NASA Technical Reports Server (NTRS)

    Baize, Lionel; Pasquier, Helene

    1993-01-01

    From user requirements definition to an accepted software system, software project management wants to be sure that the system will meet the requirements. For the development of a telecommunications satellite Control Centre, C.N.E.S. has used new rules to make the use of a tracing matrix easier. From Requirements to Acceptance Tests, each item of a document must have an identifier. A unique matrix traces the system and allows the tracking of the consequences of a change in the requirements. A tool has been developed to import documents into a relational database. Each record of the database corresponds to an item of a document, and the access key is the item identifier. The tracing matrix is also processed, automatically providing links between the different documents, and it enables traced items to be read on the same screen. For example, one can read simultaneously the User Requirements items, the corresponding Software Requirements items and the Acceptance Tests.
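
    The approach described (items keyed by identifier in a relational database, with a trace table linking documents) can be sketched as follows; the table layout, item identifiers and item texts are invented for illustration and are not C.N.E.S.'s actual schema.

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE item  (item_id TEXT PRIMARY KEY, document TEXT, text TEXT);
      CREATE TABLE trace (from_id TEXT, to_id TEXT);   -- requirement -> derived item
      """)
      con.executemany("INSERT INTO item VALUES (?, ?, ?)", [
          ("UR-12", "User Requirements",     "Operator can reduce effluent flow."),
          ("SR-45", "Software Requirements", "Flow setpoint command available."),
          ("AT-7",  "Acceptance Tests",      "Verify flow setpoint command."),
      ])
      con.executemany("INSERT INTO trace VALUES (?, ?)",
                      [("UR-12", "SR-45"), ("SR-45", "AT-7")])

      # Show a user requirement together with the items that trace from it.
      for row in con.execute("""
          SELECT u.item_id, s.item_id, t.item_id
          FROM item u JOIN trace t1 ON t1.from_id = u.item_id
                      JOIN item s  ON s.item_id  = t1.to_id
                      JOIN trace t2 ON t2.from_id = s.item_id
                      JOIN item t  ON t.item_id  = t2.to_id
          WHERE u.document = 'User Requirements'"""):
          print(row)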

  12. Defining acceptable conditions in wilderness

    NASA Astrophysics Data System (ADS)

    Roggenbuck, J. W.; Williams, D. R.; Watson, A. E.

    1993-03-01

    The limits of acceptable change (LAC) planning framework recognizes that forest managers must decide what indicators of wilderness conditions best represent resource naturalness and high-quality visitor experiences and how much change from the pristine is acceptable for each indicator. Visitor opinions on the aspects of the wilderness that have great impact on their experience can provide valuable input to selection of indicators. Cohutta, Georgia; Caney Creek, Arkansas; Upland Island, Texas; and Rattlesnake, Montana, wilderness visitors have high shared agreement that littering and damage to trees in campsites, noise, and seeing wildlife are very important influences on wilderness experiences. Camping within sight or sound of other people influences experience quality more than do encounters on the trails. Visitors’ standards of acceptable conditions within wilderness vary considerably, suggesting a potential need to manage different zones within wilderness for different clientele groups and experiences. Standards across wildernesses, however, are remarkably similar.

  13. Using Bit Errors To Diagnose Fiber-Optic Links

    NASA Technical Reports Server (NTRS)

    Bergman, L. A.; Hartmayer, R.; Marelid, S.

    1989-01-01

    Technique for diagnosis of fiber-optic digital communication link in local-area network of computers based on measurement of bit-error rates. Variable optical attenuator inserted in optical fiber to vary power of received signal. Bit-error rate depends on ratio of peak signal power to root-mean-square noise in receiver. For optimum measurements, one selects bit-error rate between 10^-8 and 10^-4. Greater rates result in low accuracy in determination of signal-to-noise ratios, while lesser rates require impractically long measurement times.
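
    The trade-off described can be made concrete with the textbook relation between receiver signal-to-noise ratio (Q factor) and bit-error rate for binary signalling, BER = 0.5*erfc(Q/sqrt(2)), together with an estimate of how many bits must be sent to observe a useful number of errors; this sketch is illustrative and is not taken from the NASA brief.

      import math

      def ber_from_q(q):
          """Textbook BER for binary signalling with Gaussian noise."""
          return 0.5 * math.erfc(q / math.sqrt(2))

      def bits_needed(ber, errors_to_count=100):
          """Bits to transmit to observe a given number of errors on average."""
          return errors_to_count / ber

      for q in (4, 5, 6, 7):
          ber = ber_from_q(q)
          print(f"Q = {q}: BER = {ber:.2e}, bits for 100 errors = {bits_needed(ber):.1e}")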

  14. Hyponatremia: management errors.

    PubMed

    Seo, Jang Won; Park, Tae Jin

    2006-11-01

    Rapid correction of hyponatremia is frequently associated with increased morbidity and mortality. Therefore, it is important to estimate the proper volume and type of infusate required to increase the serum sodium concentration predictably. The major common management errors during the treatment of hyponatremia are inadequate investigation, treatment with fluid restriction for diuretic-induced hyponatremia and treatment with fluid restriction plus intravenous isotonic saline simultaneously. We present two cases of management errors. One is about the problem of rapid correction of hyponatremia in a patient with sepsis and acute renal failure during continuous renal replacement therapy in the intensive care unit. The other is the case of hypothyroidism in which hyponatremia was aggravated by intravenous infusion of dextrose water and isotonic saline infusion was erroneously used to increase serum sodium concentration.

  15. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  16. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  17. Surface temperature measurement errors

    SciTech Connect

    Keltner, N.R.; Beck, J.V.

    1983-05-01

    Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.

  18. Medical error and human factors engineering: where are we now?

    PubMed

    Gawron, Valerie J; Drury, Colin G; Fairbanks, Rollin J; Berger, Roseanne C

    2006-01-01

    The goal of human factors engineering is to optimize the relationship between humans and systems by studying human behavior, abilities, and limitations and using this knowledge to design systems for safe and effective human use. With the assumption that the human component of any system will inevitably produce errors, human factors engineers design systems and human/machine interfaces that are robust enough to reduce error rates and the effect of the inevitable error within the system. In this article, we review the extent and nature of medical error and then discuss human factors engineering tools that have potential applicability. These tools include taxonomies of human and system error and error data collection and analysis methods. Finally, we describe studies that have examined medical error, and on the basis of these studies, present conclusions about how human factors engineering can significantly reduce medical errors and their effects.

  19. The Location of Error: Reflections on a Research Project

    ERIC Educational Resources Information Center

    Cook, Devan

    2010-01-01

    Andrea Lunsford and Karen Lunsford conclude "Mistakes Are a Fact of Life: A National Comparative Study," a discussion of their research project exploring patterns of formal grammar and usage error in first-year writing, with an invitation to "conduct a local version of this study." The author was eager to accept their invitation; learning and…

  20. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  1. Acceptability of contraception for men: a review.

    PubMed

    Glasier, Anna

    2010-11-01

    Methods of contraception for use by men include condoms, withdrawal and vasectomy. Prevalence of use of a method and continuation rates are indirect measures of acceptability. Worldwide, none of these "male methods" accounts for more than 7% of contraceptive use although uptake varies considerably between countries. Acceptability can be assessed directly by asking about intended (hypothetical) use and assessing satisfaction during/after use. Since they have been around for a very long time, there are very few data of this nature on condoms (as contraceptives rather than for prevention of infection), withdrawal or vasectomy. There are direct data on the acceptability of hormonal methods for men but from relatively small clinical trials which undoubtedly do not represent the real world. Surveys undertaken among the male general public demonstrate that, whatever the setting, at least 25% of men - and in most countries substantially more - would consider using hormonal contraception. Although probably an overestimate of the number of potential users when such a method becomes available, it would appear that hormonal contraceptives for men may have an important place on the contraceptive menu. Despite commonly expressed views to the contrary, most women would trust their male partner to use a hormonal method.

  2. Error-related negativity reflects detection of negative reward prediction error.

    PubMed

    Yasuda, Asako; Sato, Atsushi; Miyawaki, Kaori; Kumano, Hiroaki; Kuboki, Tomifusa

    2004-11-15

    Error-related negativity (ERN) is a negative deflection in the event-related potential elicited in error trials. To examine the function of ERN, we performed an experiment in which two within-participants factors were manipulated: outcome uncertainty and content of feedback. The ERN was largest when participants expected correct feedback but received error feedback. There were significant positive correlations between the ERN amplitude and the rate of response switching in the subsequent trial, and between the ERN amplitude and the trait version score on negative affect scale. These results suggest that ERN reflects detection of a negative reward prediction error and promotes subsequent response switching, and that individuals with high negative affect are hypersensitive to a negative reward prediction error.

  3. Further Conceptualization of Treatment Acceptability

    ERIC Educational Resources Information Center

    Carter, Stacy L.

    2008-01-01

    A review and extension of previous conceptualizations of treatment acceptability is provided in light of progress within the area of behavior treatment development and implementation. Factors including legislation, advances in research, and service delivery models are examined as to their relationship with a comprehensive conceptualization of…

  4. Acceptance and Commitment Therapy: Introduction

    ERIC Educational Resources Information Center

    Twohig, Michael P.

    2012-01-01

    This is the introductory article to a special series in Cognitive and Behavioral Practice on Acceptance and Commitment Therapy (ACT). Instead of each article herein reviewing the basics of ACT, this article contains that review. This article provides a description of where ACT fits within the larger category of cognitive behavior therapy (CBT):…

  5. Nitrogen trailer acceptance test report

    SciTech Connect

    Kostelnik, A.J.

    1996-02-12

    This Acceptance Test Report documents compliance with the requirements of specification WHC-S-0249. The equipment was tested according to WHC-SD-WM-ATP-108 Rev. 0. The equipment being tested is a portable contained nitrogen supply. The test was conducted at Norco's facility.

  6. Imaginary Companions and Peer Acceptance

    ERIC Educational Resources Information Center

    Gleason, Tracy R.

    2004-01-01

    Early research on imaginary companions suggests that children who create them do so to compensate for poor social relationships. Consequently, the peer acceptance of children with imaginary companions was compared to that of their peers. Sociometrics were conducted on 88 preschool-aged children; 11 had invisible companions, 16 had personified…

  7. Euthanasia Acceptance: An Attitudinal Inquiry.

    ERIC Educational Resources Information Center

    Klopfer, Fredrick J.; Price, William F.

    The study presented was conducted to examine potential relationships between attitudes regarding the dying process, including acceptance of euthanasia, and other attitudinal or demographic attributes. The data of the survey was comprised of responses given by 331 respondents to a door-to-door interview. Results are discussed in terms of preferred…

  8. Helping Our Children Accept Themselves.

    ERIC Educational Resources Information Center

    Gamble, Mae

    1984-01-01

    Parents of a child with muscular dystrophy recount their reactions to learning of the diagnosis, their gradual acceptance, and their son's resistance, which was gradually lessened when he was provided with more information and treated more normally as a member of the family. (CL)

  9. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  10. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis, and maximum likelihood estimation to see how much of the variability in the error rates can be explained with these. We have also used goodness of fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers to the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
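
    A minimal sketch of the kind of count-rate regression described above, in Python. The data, the column names (files radiated, workload, novelty), and the Poisson link are illustrative assumptions, not the mission's actual dataset or model.

      # Sketch: regress error counts on workload-type predictors (simulated data).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 60
      files_radiated = rng.integers(5, 50, n)      # files sent per period (hypothetical)
      workload = rng.uniform(0, 1, n)              # subjective workload score (hypothetical)
      novelty = rng.uniform(0, 1, n)               # operational novelty score (hypothetical)
      lam = np.exp(-2.0 + 0.04 * files_radiated + 1.0 * workload + 0.5 * novelty)
      errors = rng.poisson(lam)                    # simulated error counts

      X = sm.add_constant(np.column_stack([files_radiated, workload, novelty]))
      fit = sm.GLM(errors, X, family=sm.families.Poisson()).fit()
      print(fit.summary())

      # Expected error count at average predictor values
      x_mean = np.array([[1.0, files_radiated.mean(), workload.mean(), novelty.mean()]])
      print("expected errors at mean predictors:", fit.predict(x_mean))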

  11. Rotating waveplates as polarization modulators for Stokes polarimetry of the sun - Evaluation of seeing-induced crosstalk errors

    NASA Astrophysics Data System (ADS)

    Lites, Bruce W.

    1987-09-01

    A formalism for estimating the crosstalk error among Stokes I,Q,U,V introduced by seeing-induced image motion is presented. This formalism is applied to several modulation schemes for polarization involving rotating waveplates, and it is evaluated using an observed power spectrum of image motion obtained from the Vacuum Tower Telescope at the National Solar Observatory/Sunspot. It is shown that rotating waveplates offer an acceptable alternative for measurements of absorption line polarization of features observed on the solar disk, provided the detection can be carried out at video frame rates or faster.

  12. Error Detection in Mechanized Classification Systems

    ERIC Educational Resources Information Center

    Hoyle, W. G.

    1976-01-01

    When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…

  13. Paying less but harvesting more: the effect of unconscious acceptance in regulating frustrating emotion.

    PubMed

    Ding, NanXiang; Yang, JieMin; Liu, YingYing; Yuan, JiaJin

    2015-08-01

    Previous studies indicate that emotion regulation may occur unconsciously, without the cost of cognitive effort, while conscious acceptance may enhance negative experiences despite having potential long-term health benefits. Thus, it is important to overcome this weakness to boost the efficacy of the acceptance strategy in negative emotion regulation. As unconscious regulation occurs with little cost of cognitive resources, the current study hypothesizes that unconscious acceptance regulates the emotional consequence of negative events more effectively than does conscious acceptance. Subjects were randomly assigned to conscious acceptance, unconscious acceptance and no-regulation conditions. A frustrating arithmetic task was used to induce negative emotion. Emotional experiences were assessed on the Positive Affect and Negative Affect Scale while emotion-related physiological activation was assessed by heart-rate reactivity. Results showed that conscious acceptance had a significant negative affective consequence, which was absent during unconscious acceptance. That is, unconscious acceptance was linked with little reduction of positive affect during the experience of frustration, while this reduction was prominent in the control and conscious acceptance groups. Instructed, conscious acceptance resulted in a greater reduction of positive affect than found for the control group. In addition, both conscious and unconscious acceptance strategies significantly decreased emotion-related heart-rate activity (to a similar extent) in comparison with the control condition. Moreover, heart-rate reactivity was positively correlated with negative affect and negatively correlated with positive affect during the frustration phase relative to the baseline phase, in both the control and unconscious acceptance groups. Thus, unconscious acceptance not only reduces emotion-related physiological activity but also better protects mood stability compared with conscious acceptance. This

  14. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  15. Detecting Errors in Programs

    DTIC Science & Technology

    1979-02-01

    [OCR fragments only.] Report by Lloyd D. Fosdick, "Detecting Errors in Programs." Recoverable text: "…from a finite set of tests [35,36]. Recently Howden [37] presented a result showing that for a particular class of Lindenmayer grammars it was possible…" Recoverable reference: Howden, W.E., Lindenmayer grammars and symbolic testing, Information Processing Letters 7(1) (Jan. 1978), 36-39.

  16. Errors of measurement by laser goniometer

    NASA Astrophysics Data System (ADS)

    Agapov, Mikhail Y.; Bournashev, Milhail N.

    2000-11-01

    The report describes research into systematic errors of angle measurement by a dynamic laser goniometer (DLG) based on a ring laser (RL), intended for certification of optical angle encoders (OE), and the development of methods for separating errors of different types and compensating for them algorithmically. The OE was an absolute photoelectric angle encoder with an informational capacity of 14 bits. Kinematic connection with the rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by applying a cross-calibration method with mutual rotations of the OE relative to the DLG base and of the CU relative to the OE rotor, followed by Fourier analysis of the observed data. Dynamic errors of angle measurement were investigated using the dependence, on the angular rate of rotation, of the measured angle between a reference direction defined by an interference null-indicator (NI) with an 8-faced optical polygon (OP) and the direction defined by the OE. The obtained results allow algorithmic compensation of the systematic error and, overall, a considerable reduction of the total measurement error.

  17. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  18. 2013 SYR Accepted Poster Abstracts.

    PubMed

    2013-01-01

    Promote Health and Well-being Among Middle School Educators. 20. A Systematic Review of Yoga-based Interventions for Objective and Subjective Balance Measures. 21. Disparities in Yoga Use: A Multivariate Analysis of 2007 National Health Interview Survey Data. 22. Implementing Yoga Therapy Adapted for Older Veterans Who Are Cancer Survivors. 23. Randomized, Controlled Trial of Yoga for Women With Major Depressive Disorder: Decreased Ruminations as Potential Mechanism for Effects on Depression? 24. Yoga Beyond the Metropolis: A Yoga Telehealth Program for Veterans. 25. Yoga Practice Frequency, Relationship Maintenance Behaviors, and the Potential Mediating Role of Relationally Interdependent Cognition. 26. Effects of Medical Yoga in Quality of Life, Blood Pressure, and Heart Rate in Patients With Paroxysmal Atrial Fibrillation. 27. Yoga During School May Promote Emotion Regulation Capacity in Adolescents: A Group Randomized, Controlled Study. 28. Integrated Yoga Therapy in a Single Session as a Stress Management Technique in Comparison With Other Techniques. 29. Effects of a Classroom-based Yoga Intervention on Stress and Attention in Second and Third Grade Students. 30. Improving Memory, Attention, and Executive Function in Older Adults with Yoga Therapy. 31. Reasons for Starting and Continuing Yoga. 32. Yoga and Stress Management May Buffer Against Sexual Risk-Taking Behavior Increases in College Freshmen. 33. Whole-systems Ayurveda and Yoga Therapy for Obesity: Outcomes of a Pilot Study. 34. Women's Phenomenological Experiences of Exercise, Breathing, and the Body During Yoga for Smoking Cessation Treatment. 35. Mindfulness as a Tool for Trauma Recovery: Examination of a Gender-responsive Trauma-informed Integrative Mindfulness Program for Female Inmates. 36. Yoga After Stroke Leads to Multiple Physical Improvements. 37. Tele-Yoga in Patients With Chronic Obstructive Pulmonary Disease and Heart Failure: A Mixed-methods Study of Feasibility, Acceptability, and Safety

  19. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  20. Entanglement-assisted zero-error codes

    NASA Astrophysics Data System (ADS)

    Matthews, William; Mancinska, Laura; Leung, Debbie; Ozols, Maris; Roy, Aidan

    2011-03-01

    Zero-error information theory studies the transmission of data over noisy communication channels with strictly zero error probability. For classical channels and data, much of the theory can be studied in terms of combinatorial graph properties and is a source of hard open problems in that domain. In recent work, we investigated how entanglement between sender and receiver can be used in this task. We found that entanglement-assisted zero-error codes (which are still naturally studied in terms of graphs) sometimes offer an increased bit rate of zero-error communication even in the large block length limit. The assisted codes that we have constructed are closely related to Kochen-Specker proofs of non-contextuality as studied in the context of foundational physics, and our results on asymptotic rates of assisted zero-error communication yield non-contextuality proofs which are particularly `strong' in a certain quantitative sense. I will also describe formal connections to the multi-prover games known as pseudo-telepathy games.

  1. EVALUATING THE ACCEPTABILITY AND FEASIBILITY OF PROJECT ACCEPT: AN INTERVENTION FOR YOUTH NEWLY DIAGNOSED WITH HIV

    PubMed Central

    Hosek, Sybil G.; Lemos, Diana; Harper, Gary W.; Telander, Kyle

    2012-01-01

    Given the potential for negative psychosocial and medical outcomes following an HIV diagnosis, Project ACCEPT, a 12-session behavioral intervention, was developed and pilot-tested for youth (aged 16–24) newly diagnosed with HIV. Fifty participants recently diagnosed with HIV were enrolled from 4 sites selected through the Adolescent Medicine Trials Network (ATN). The majority of participants identified as African American (78%). Feasibility and acceptability data demonstrated high rates of participation and high levels of satisfaction with the intervention program from both participants and staff. Exploratory outcome data demonstrated improved levels of HIV knowledge that were sustained over time (Cohen's d = .52) and improvements in peer (d = .35) and formal (d = .20) social support immediately postintervention. Gender differences emerged over time in the areas of depressive symptoms, family social support, self-efficacy for sexual discussion, and personalized stigma. Project ACCEPT appears to be an acceptable and feasible intervention to implement in clinical settings for youth newly diagnosed with HIV. PMID:21517662

  2. Inborn Errors in Immunity

    PubMed Central

    Lionakis, M.S.; Hajishengallis, G.

    2015-01-01

    In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229

  3. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-01-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answer to these questions is explored. The US position is that, when reactors are the preferred technical choice, they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  4. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-04-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answer to these questions is explored. The US position is that, when reactors are the preferred technical choice, they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  5. Error Analysis in Mathematics Education.

    ERIC Educational Resources Information Center

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  6. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  7. Prospective issues for error detection.

    PubMed

    Blavier, Adélaïde; Rouy, Emmanuelle; Nyssen, Anne-Sophie; de Keyser, Véronique

    2005-06-10

    From the literature on error detection, the authors select several concepts relating error detection mechanisms and prospective memory features. They emphasize the central role of intention in the classification of the errors into slips/lapses/mistakes, in the error handling process and in the usual distinction between action-based and outcome-based detection. Intention is again a core concept in their investigation of prospective memory theory, where they point out the contribution of intention retrievals, intention persistence and output monitoring in the individual's possibilities for detecting their errors. The involvement of the frontal lobes in prospective memory and in error detection is also analysed. From the chronology of a prospective memory task, the authors finally suggest a model for error detection also accounting for neural mechanisms highlighted by studies on error-related brain activity.

  8. At least some errors are randomly generated (Freud was wrong)

    NASA Technical Reports Server (NTRS)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
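
    A minimal Python sketch of the two goodness-of-fit checks described above (Poisson counts per block, exponential inter-error gaps), run on simulated key-press data rather than the study's data; the per-trial error probability is an assumption.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      p_err = 0.05                                   # per-trial error probability (assumed)
      trials = rng.random(5000) < p_err              # simulated error indicator per trial

      # 1) Poisson check: errors per 50-trial block should have variance ~ mean
      counts = trials.reshape(-1, 50).sum(axis=1)
      print("dispersion (var/mean), ~1 for Poisson:", counts.var(ddof=1) / counts.mean())

      # 2) Exponential check: gaps between successive errors
      # (scale is estimated from the data, so the p-value is only approximate)
      err_idx = np.flatnonzero(trials)
      gaps = np.diff(err_idx)
      ks = stats.kstest(gaps, "expon", args=(0, gaps.mean()))
      print("KS test vs exponential: statistic=%.3f p=%.3f" % (ks.statistic, ks.pvalue))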

  9. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  10. Error Patterns in Problem Solving.

    ERIC Educational Resources Information Center

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  11. Measurement Error. For Good Measure....

    ERIC Educational Resources Information Center

    Johnson, Stephen; Dulaney, Chuck; Banks, Karen

    No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…

  12. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  13. Feature Referenced Error Correction Apparatus.

    DTIC Science & Technology

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  14. Transcription errors induce proteotoxic stress and shorten cellular lifespan.

    PubMed

    Vermulst, Marc; Denney, Ashley S; Lang, Michael J; Hung, Chao-Wei; Moore, Stephanie; Moseley, M Arthur; Mosely, Arthur M; Thompson, J Will; Thompson, William J; Madden, Victoria; Gauer, Jacob; Wolfe, Katie J; Summers, Daniel W; Schleit, Jennifer; Sutphin, George L; Haroon, Suraiya; Holczbauer, Agnes; Caine, Joanne; Jorgenson, James; Cyr, Douglas; Kaeberlein, Matt; Strathern, Jeffrey N; Duncan, Mara C; Erie, Dorothy A

    2015-08-25

    Transcription errors occur in all living cells; however, it is unknown how these errors affect cellular health. To answer this question, we monitor yeast cells that are genetically engineered to display error-prone transcription. We discover that these cells suffer from a profound loss in proteostasis, which sensitizes them to the expression of genes that are associated with protein-folding diseases in humans; thus, transcription errors represent a new molecular mechanism by which cells can acquire disease phenotypes. We further find that the error rate of transcription increases as cells age, suggesting that transcription errors affect proteostasis particularly in aging cells. Accordingly, transcription errors accelerate the aggregation of a peptide that is implicated in Alzheimer's disease, and shorten the lifespan of cells. These experiments reveal a previously unappreciated role for transcriptional fidelity in cellular health and aging.

  15. On typographical errors.

    PubMed

    Hamilton, J W

    1993-09-01

    In his overall assessment of parapraxes in 1901, Freud included typographical mistakes but did not elaborate on or study this subject nor did he have anything to say about it in his later writings. This paper lists textual errors from a variety of current literary sources and explores the dynamic importance of their execution and the failure to make necessary corrections during the editorial process. While there has been a deemphasis of the role of unconscious determinants in the genesis of all slips as a result of recent findings in cognitive psychology, the examples offered suggest that, with respect to motivation, lapses in compulsivity contribute to their original commission while thematic compliance and voyeuristic issues are important in their not being discovered prior to publication.

  16. Correction of subtle refractive error in aviators.

    PubMed

    Rabin, J

    1996-02-01

    Optimal visual acuity is a requirement for piloting aircraft in military and civilian settings. While acuity can be corrected with glasses, spectacle wear can limit or even prohibit use of certain devices such as night vision goggles, helmet mounted displays, and/or chemical protective masks. Although current Army policy is directed toward selection of pilots who do not require spectacle correction for acceptable vision, refractive error can become manifest over time, making optical correction necessary. In such cases, contact lenses have been used quite successfully. Another approach is to neglect small amounts of refractive error, provided that vision is at least 20/20 without correction. This report describes visual findings in an aviator who was fitted with a contact lens to correct moderate astigmatism in one eye, while the other eye, with lesser refractive error, was left uncorrected. Advanced methods of testing visual resolution, including high and low contrast visual acuity and small letter contrast sensitivity, were used to compare vision achieved with full spectacle correction to that attained with the habitual, contact lens correction. Although the patient was pleased with his habitual correction, vision was significantly better with full spectacle correction, particularly on the small letter contrast test. Implications of these findings are considered.

  17. How perioperative nurses define, attribute causes of, and react to intraoperative nursing errors.

    PubMed

    Chard, Robin

    2010-01-01

    Errors in nursing practice pose a continuing threat to patient safety. A descriptive, correlational study was conducted to examine the definitions, circumstances, and perceived causes of intraoperative nursing errors; reactions of perioperative nurses to intraoperative nursing errors; and the relationships among coping with intraoperative nursing errors, emotional distress, and changes in practice made as a result of error. The results indicate that strategies of accepting responsibility and using self-control are significant predictors of emotional distress. Seeking social support and planful problem solving emerged as significant predictors of constructive changes in practice. Most predictive of defensive changes was the strategy of escape/avoidance.

  18. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.

  19. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  20. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  1. Three-dimensional error correcting with matched interleaving for holographic data storage

    NASA Astrophysics Data System (ADS)

    Gu, Huarong; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2011-10-01

    To handle the various error patterns in the holographic data storage (HDS) channel, including random errors, burst errors, and inhomogeneously distributed errors, a three-dimensional error correction with matched interleaving (3DEC-MI) scheme is proposed in this paper. The 3DEC-MI scheme combines the advantages of the three-dimensional error correcting scheme and the matched interleaving scheme, makes full use of prior knowledge of the error patterns in the HDS channel, distributes errors more uniformly, and decodes data iteratively in three dimensions. It is able to eliminate the influence of the non-uniform distribution of errors within a page and across pages, overcome the effects of burst errors, correct random errors, and effectively reduce the symbol error rate (SER) of the HDS channel.

  2. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
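
    A toy Python illustration of why interleaving FEC codewords defeats burst errors; a 3x repetition code stands in for the FEC codes actually studied in the paper, and the burst position and lengths are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      k, n_words = 3, 60                               # repetition factor, number of codewords
      bits = rng.integers(0, 2, n_words)               # message bits
      codewords = np.repeat(bits, k).reshape(n_words, k)   # each row is one codeword

      # Interleave: transmit column by column instead of row by row
      tx = codewords.T.reshape(-1).copy()
      burst_start, burst_len = 40, 30
      tx[burst_start:burst_start + burst_len] ^= 1     # contiguous burst of flipped bits

      # De-interleave and majority-vote decode each codeword
      rx = tx.reshape(k, n_words).T
      decoded = (rx.sum(axis=1) > k // 2).astype(int)
      print("bit errors with interleaving:   ", int(np.sum(decoded != bits)))

      # Same burst without interleaving corrupts ~10 whole codewords
      tx2 = codewords.reshape(-1).copy()
      tx2[burst_start:burst_start + burst_len] ^= 1
      rx2 = tx2.reshape(n_words, k)
      decoded2 = (rx2.sum(axis=1) > k // 2).astype(int)
      print("bit errors without interleaving:", int(np.sum(decoded2 != bits)))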

  3. Towards error-free interaction.

    PubMed

    Tsoneva, Tsvetomira; Bieger, Jordi; Garcia-Molina, Gary

    2010-01-01

    Human-machine interaction (HMI) relies on pattern recognition algorithms that are not perfect. To improve the performance and usability of these systems we can utilize the neural mechanisms in the human brain dealing with error awareness. This study aims at designing a practical error detection algorithm using electroencephalogram signals that can be integrated in an HMI system. Thus, real-time operation, customization, and operation convenience are important. We address these requirements in an experimental framework simulating machine errors. Our results confirm the presence of brain potentials related to processing of machine errors. These are used to implement an error detection algorithm emphasizing the differences in error processing on a per subject basis. The proposed algorithm uses the individual best bipolar combination of electrode sites and requires short calibration. The single-trial error detection performance on six subjects, characterized by the area under the ROC curve, ranges from 0.75 to 0.98.

  4. Error types and error positions in neglect dyslexia: comparative analyses in neglect patients and healthy controls.

    PubMed

    Weinzierl, Christiane; Kerkhoff, Georg; van Eimeren, Lucia; Keller, Ingo; Stenneken, Prisca

    2012-10-01

    Unilateral spatial neglect frequently involves a lateralised reading disorder, neglect dyslexia (ND). Reading of single words in ND is characterised by left-sided omissions and substitutions of letters. However, it is unclear whether the distribution of error types and positions within a word shows a unique pattern of ND when directly compared to healthy controls. This question has been difficult to answer so far, given the usually low number of reading errors in healthy controls. Therefore, the present study compared single word reading of 18 patients with left-sided neglect, due to right-hemisphere stroke, and 11 age-matched healthy controls, and adjusted individual task difficulty (by varying stimulus presentation times in participants) in order to reach approximately equal error rates between neglect patients and controls. Results showed that, while both omission and substitution errors were frequently produced in neglect patients and controls, only omissions appeared neglect-specific when task difficulty was adapted between groups. Analyses of individual letter positions within words revealed that the spatial distribution of reading errors in the neglect dyslexic patients followed an almost linear increase from the end to the beginning of the word (right-to-left-gradient). Both, the gradient in error positions and the predominance of omission errors presented a neglect-specific pattern. Consistent with current models of visual word processing, these findings suggest that ND reflects sublexical, visuospatial attentional mechanisms in letter string encoding.

  5. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved error parameter estimates and their accuracies for a fixed time input design. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  6. Axelrod model: accepting or discussing

    NASA Astrophysics Data System (ADS)

    Dybiec, Bartlomiej; Mitarai, Namiko; Sneppen, Kim

    2012-10-01

    Agents building social systems are characterized by complex states, and interactions among individuals can align their opinions. The Axelrod model describes how local interactions can result in the emergence of cultural domains. We propose two variants of the Axelrod model in which local consensus is reached either by an agent listening to and accepting one of its neighbors' opinions, or by two agents discussing their opinions and reaching an agreement that mixes them. We show that the local agreement rule affects the character of the transition between the single-culture and multiculture regimes.

  7. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  8. Investigating the incidence of type I errors for chronic whole effluent toxicity testing using Ceriodaphnia dubia

    SciTech Connect

    Moore, T.F.; Canton, S.P.; Grimes, M.

    2000-01-01

    The risk of Type I error (false positives) is thought to be controlled directly by the selection of a critical p value for conducting statistical analyses. The critical value for whole effluent toxicity (WET) tests is routinely set to 0.05, thereby establishing a 95% confidence level about the statistical inferences. In order to estimate the incidence of Type I errors in chronic WET testing, a method blank-type study was performed. A number of municipal wastewater discharges contracted 16 laboratories to conduct chronic WET tests using the standard test organism Ceriodaphnia dubia. Unbeknown to the laboratories, the samples they received from the wastewater discharges were comprised only of moderately hard water, using the US Environmental Protection Agency's standard dilution water formula. Because there was functionally no difference between the sample water and the laboratory control/dilution water, the test results were expected to be less than or equal to 1 TUc (toxic unit). Of the 16 tests completed by the biomonitoring laboratories, two did not meet control performance criteria. Six of the remaining 14 valid tests indicated toxicity in the sample. This incidence of false positives was six times higher than expected when the critical value was set to 0.05. No plausible causes for this discrepancy were found. Various alternatives for reducing the rate of Type I errors are recommended, including greater reliance on survival endpoints and use of additional test acceptance criteria.
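
    A quick worked check, in Python, of how surprising the reported outcome would be under the nominal error rate: with 14 valid blank tests and a per-test Type I rate of 0.05, the expected number of false positives is 0.7, and observing 6 or more is extremely unlikely under a simple binomial model (independence across laboratories is an assumption of this sketch).

      from scipy import stats

      n_tests, alpha = 14, 0.05
      observed_fp = 6
      p_at_least_6 = stats.binom.sf(observed_fp - 1, n_tests, alpha)
      print("expected false positives:", n_tests * alpha)            # 0.7
      print("P(>= 6 false positives | alpha = 0.05): %.2e" % p_at_least_6)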

  9. Error, signal, and the placement of Ctenophora sister to all other animals.

    PubMed

    Whelan, Nathan V; Kocot, Kevin M; Moroz, Leonid L; Halanych, Kenneth M

    2015-05-05

    Elucidating relationships among early animal lineages has been difficult, and recent phylogenomic analyses place Ctenophora sister to all other extant animals, contrary to the traditional view of Porifera as the earliest-branching animal lineage. To date, phylogenetic support for either ctenophores or sponges as sister to other animals has been limited and inconsistent among studies. Lack of agreement among phylogenomic analyses using different data and methods obscures how complex traits, such as epithelia, neurons, and muscles evolved. A consensus view of animal evolution will not be accepted until datasets and methods converge on a single hypothesis of early metazoan relationships and putative sources of systematic error (e.g., long-branch attraction, compositional bias, poor model choice) are assessed. Here, we investigate possible causes of systematic error by expanding taxon sampling with eight novel transcriptomes, strictly enforcing orthology inference criteria, and progressively examining potential causes of systematic error while using both maximum-likelihood with robust data partitioning and Bayesian inference with a site-heterogeneous model. We identified ribosomal protein genes as possessing a conflicting signal compared with other genes, which caused some past studies to infer ctenophores and cnidarians as sister. Importantly, biases resulting from elevated compositional heterogeneity or elevated substitution rates are ruled out. Placement of ctenophores as sister to all other animals, and sponge monophyly, are strongly supported under multiple analyses, herein.

  10. Detection and integration of genotyping errors in statistical genetics.

    PubMed

    Sobel, Eric; Papp, Jeanette C; Lange, Kenneth

    2002-02-01

    Detection of genotyping errors and integration of such errors in statistical analysis are relatively neglected topics, given their importance in gene mapping. A few inopportunely placed errors, if ignored, can tremendously affect evidence for linkage. The present study takes a fresh look at the calculation of pedigree likelihoods in the presence of genotyping error. To accommodate genotyping error, we present extensions to the Lander-Green-Kruglyak deterministic algorithm for small pedigrees and to the Markov-chain Monte Carlo stochastic algorithm for large pedigrees. These extensions can accommodate a variety of error models and refrain from simplifying assumptions, such as allowing, at most, one error per pedigree. In principle, almost any statistical genetic analysis can be performed taking errors into account, without actually correcting or deleting suspect genotypes. Three examples illustrate the possibilities. These examples make use of the full pedigree data, multiple linked markers, and a prior error model. The first example is the estimation of genotyping error rates from pedigree data. The second, and currently most useful, example is the computation of posterior mistyping probabilities. These probabilities cover both Mendelian-consistent and Mendelian-inconsistent errors. The third example is the selection of the true pedigree structure connecting a group of people from among several competing pedigree structures. Paternity testing and twin zygosity testing are typical applications.

  11. Error and attack tolerance of complex networks

    NASA Astrophysics Data System (ADS)

    Albert, Réka; Jeong, Hawoong; Barabási, Albert-László

    2000-07-01

    Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
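
    A small Python sketch of the random-failure versus targeted-attack comparison on a scale-free test graph, using networkx; the graph size, attachment parameter, and removal fractions are illustrative, not the networks analyzed in the paper.

      import networkx as nx
      import random

      random.seed(3)
      G = nx.barabasi_albert_graph(2000, 2)            # scale-free test network

      def giant_fraction(graph):
          # fraction of surviving nodes that lie in the largest connected component
          if graph.number_of_nodes() == 0:
              return 0.0
          return len(max(nx.connected_components(graph), key=len)) / graph.number_of_nodes()

      def remove_nodes(graph, fraction, targeted):
          g = graph.copy()
          k = int(fraction * g.number_of_nodes())
          if targeted:                                 # attack: highest-degree nodes of the intact network
              victims = [n for n, _ in sorted(g.degree, key=lambda x: x[1], reverse=True)[:k]]
          else:                                        # error: random node failures
              victims = random.sample(list(g.nodes), k)
          g.remove_nodes_from(victims)
          return giant_fraction(g)

      for f in (0.01, 0.05, 0.10):
          print(f"remove {f:.0%}: random failures -> {remove_nodes(G, f, False):.2f}, "
                f"targeted attack -> {remove_nodes(G, f, True):.2f}")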

  12. Sexual Education In Malaysia: Accepted Or Rejected?

    PubMed Central

    Mohd Mutalip, Siti Syairah; Mohamed, Ruzianisra

    2012-01-01

    Background: Introduction of sexual education in schools was suggested by the Malaysian government as one of the efforts to reduce sexual-related social problems among Malaysian teenagers. This study aimed to determine the rate of acceptance among adolescents of the implementation of sexual education in schools. Methods: The study was conducted using questionnaires distributed to 152 pre-degree students in the Faculty of Pharmacy, Universiti Teknologi MARA (UiTM), Kampus Puncak Alam, Selangor, Malaysia. The data obtained were statistically analyzed. Results: Almost half (49.3%) of the respondents agreed that sexual education might help to overcome social ills among school teenagers. In addition, a large proportion (77.6%) of respondents agreed that this module should be incorporated into other core subjects, compared with the feedback received on implementing the module on its own (28.9%). Conclusion: These results provide some insight into teenagers' perceptions of sexual education. Since most respondents agreed with the idea, the implementation of sexual education appears close to being accepted by adolescents. PMID:23113207

  13. Steam generator tube integrity flaw acceptance criteria

    SciTech Connect

    Cochet, B.

    1997-02-01

    The author discusses the establishment of a flaw acceptance criteria with respect to flaws in steam generator tubing. The problem is complicated because different countries take different approaches to the problem. The objectives in general are grouped in three broad areas: to avoid the unscheduled shutdown of the reactor during normal operation; to avoid tube bursts; to avoid excessive leak rates in the event of an accidental overpressure event. For each degradation mechanism in the tubes it is necessary to know answers to an array of questions, including: how well does NDT testing perform against this problem; how rapidly does such degradation develop; how well is this degradation mechanism understood. Based on the above information it is then possible to come up with a policy to look at flaw acceptance. Part of this criteria is a schedule for the frequency of in-service inspection and also a policy for when to plug flawed tubes. The author goes into a broad discussion of each of these points in his paper.

  14. Les Erreurs en Traduction (Errors in Translation). Melanges Pedagogiques, 1970.

    ERIC Educational Resources Information Center

    Billant, J.

    An experiment was carried out to investigate errors in translation exercises done by French students studying English as a second language. A code was devised to rate errors as being: (1) lexical or grammatical, and (2) related to the signifier or the signified, with further subdivisions within these groups. While this method has the advantage…

  15. 7 CFR 6.35 - Correction of errors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    7 CFR 6.35 (2011): Correction of errors. Office of the Secretary of Agriculture, Import Quotas and Fees, Dairy Tariff-Rate Import Quota Licensing. (a) If a person demonstrates, to the satisfaction of the...

  16. Spell Checkers: Aids in Identifying and Correcting Spelling Errors.

    ERIC Educational Resources Information Center

    Jinkerson, Lorana; Baggett, Patricia

    1993-01-01

    Two groups of 9- to 11-year-olds were asked to find and correct spelling errors, one using a typewritten story and a dictionary, the other viewing the story on a computer monitor and utilizing a spell-checker program. Although error detection rates were higher in the spell checker group than in the dictionary group, efficiency and spelling…

  17. 7 CFR 6.35 - Correction of errors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    7 CFR 6.35 (2010): Correction of errors. Office of the Secretary of Agriculture, Import Quotas and Fees, Dairy Tariff-Rate Import Quota Licensing. (a) If a person demonstrates, to the satisfaction of the...

  18. The locative alternation: distinguishing linguistic processing cost from error signals in Broca's region.

    PubMed

    Christensen, Ken Ramshøj; Wallentin, Mikkel

    2011-06-01

    The left inferior frontal gyrus (LIFG) is known to be involved in the processing of syntactic complexity, such as word order variation. It is also known to be involved in semantic interpretation in studies of various types of semantic and pragmatic anomalies. Across neuroimaging studies of language processing, two main approaches can be found, one that contrasts anomalous and well-formed words or sentences in order to yield an error response and one that contrasts two well-formed syntactic structures differing in complexity, investigating effects of increased integration costs. The present fMRI study aimed at disentangling the error signal from the processing cost signal in LIFG. To do so, we examined the so-called Locative Alternation, which involves the contrast between the Content-Locative construction, e.g. He sprays paint on the wall, and the Container-Locative construction, e.g. He sprays the wall with paint, which have been argued to differ in processing. By including asymmetric verbs, e.g. He blocks the road with rocks vs. *He blocks rocks on the road, we were able to study the contrast between well formed and anomalous constructions. Participants performed an acceptability judgment task during fMRI. The results showed that increased syntactic integration costs yielded both increased response time as well as LIFG activation. Anomalous sentences yielded low acceptability rating but no increase in response time, yet they also evoked increased LIFG activation. Thus, the processing cost and the error signal were found to be functionally independent, but spatially overlapping in the brain.

  19. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
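
    A toy Python illustration of the binarization step described above: grid the east-west wind component, flag each cell onshore (1) or offshore (0), and compare forecast against observation. The "sea lies to the east" convention, the synthetic wind fields, and the use of simple agreement as a stand-in for the full CEM statistics are all assumptions of this sketch.

      import numpy as np

      rng = np.random.default_rng(4)
      ni, nj, nt = 40, 40, 12                          # grid cells (1.25 km) and 5-min steps

      u_forecast = rng.normal(0.0, 3.0, (ni, nj, nt))  # east-west wind component, m/s (synthetic)
      u_observed = u_forecast + rng.normal(0.0, 1.0, (ni, nj, nt))  # "observations" (synthetic)

      D = (u_forecast < 0).astype(int)                 # 1 = onshore (blowing from sea toward land)
      d = (u_observed < 0).astype(int)

      agreement = (D == d).mean()
      print(f"fraction of grid cells/times where forecast and observed flags agree: {agreement:.2f}")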

  20. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as inner codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
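
    A back-of-envelope Python sketch of why cascading helps on a binary symmetric channel: if the inner code corrects up to t_in errors per block and the outer code corrects up to t_out failed inner blocks, the end-to-end failure probability drops sharply. Bounded-distance decoding is assumed and the code parameters are illustrative, not the specific schemes analyzed in the paper.

      from math import comb

      def block_failure_prob(n, t, p):
          """P(more than t errors among n symbols), each wrong independently with prob p."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

      eps = 1e-2                                               # raw channel bit error rate
      p_inner_fail = block_failure_prob(n=63, t=3, p=eps)      # e.g. a t=3 inner code
      p_cascade_fail = block_failure_prob(n=255, t=16, p=p_inner_fail)  # outer code over inner blocks

      print(f"inner block failure probability: {p_inner_fail:.3e}")
      print(f"cascaded failure probability:    {p_cascade_fail:.3e}")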

  1. Comparison of analytical error and sampling error for contaminated soil.

    PubMed

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated, and discussed for a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.
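
    A small Python illustration of what a 20:1 quotient implies if independent error components add in quadrature; the numbers and the independence assumption are illustrative, not taken from the study.

      import math

      analytical_sd = 1.0                # relative analytical standard deviation (assumed)
      sampling_sd = 20.0                 # relative total sampling standard deviation (assumed)
      total_sd = math.sqrt(analytical_sd**2 + sampling_sd**2)
      print(f"total error: {total_sd:.2f}")
      print(f"share of variance contributed by the analysis: {analytical_sd**2 / total_sd**2:.2%}")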

  2. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
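
    A Monte Carlo sketch, in Python, of what the RMS of an error estimator measures: repeatedly design a small-sample LDA classifier in a known Gaussian model, compare an error estimate against the (near-)true error, and average the squared deviation. Plain resubstitution is used here as a simple stand-in estimator, not the Bayesian MMSE estimator studied in the paper, and all distribution parameters are assumptions.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(5)
      mu0, mu1 = np.array([0.0, 0.0]), np.array([1.5, 0.0])
      n_train, n_test, reps = 30, 20000, 200
      sq_dev = []
      for _ in range(reps):
          X = np.vstack([rng.normal(mu0, 1.0, (n_train // 2, 2)),
                         rng.normal(mu1, 1.0, (n_train // 2, 2))])
          y = np.r_[np.zeros(n_train // 2), np.ones(n_train // 2)]
          clf = LinearDiscriminantAnalysis().fit(X, y)

          resub = 1 - clf.score(X, y)                  # error estimate from the training data
          Xt = np.vstack([rng.normal(mu0, 1.0, (n_test // 2, 2)),
                          rng.normal(mu1, 1.0, (n_test // 2, 2))])
          yt = np.r_[np.zeros(n_test // 2), np.ones(n_test // 2)]
          true_err = 1 - clf.score(Xt, yt)             # near-true error of this classifier
          sq_dev.append((resub - true_err) ** 2)

      print(f"RMS of the resubstitution error estimator: {np.sqrt(np.mean(sq_dev)):.3f}")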

  3. Acceptability of Using Electronic Vending Machines to Deliver Oral Rapid HIV Self-Testing Kits: A Qualitative Study

    PubMed Central

    Young, Sean D.; Daniels, Joseph; Chiu, ChingChe J.; Bolan, Robert K.; Flynn, Risa P.; Kwok, Justin; Klausner, Jeffrey D.

    2014-01-01

    Introduction Rates of unrecognized HIV infection are significantly higher among Latino and Black men who have sex with men (MSM). Policy makers have proposed that HIV self-testing kits and new methods for delivering self-testing could improve testing uptake among minority MSM. This study sought to conduct qualitative assessments with MSM of color to determine the acceptability of using electronic vending machines to dispense HIV self-testing kits. Materials and Methods African American and Latino MSM were recruited using a participant pool from an existing HIV prevention trial on Facebook. If participants expressed interest in using a vending machine to receive an HIV self-testing kit, they were emailed a 4-digit personal identification number (PIN) code to retrieve the test from the machine. We followed up with those who had tested to assess their willingness to participate in an interview about their experience. Results Twelve kits were dispensed and 8 interviews were conducted. In general, participants expressed that the vending machine was an acceptable HIV test delivery method due to its novelty and convenience. Discussion Acceptability of this delivery model for HIV testing kits was closely associated with three main factors: credibility, confidentiality, and convenience. Future research is needed to address issues, such as user-induced errors and costs, before scaling up the dispensing method. PMID:25076208

  4. Some mathematical refinements concerning error minimization in the genetic code.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Kelk, Steven M; Koolen, Wouter M; Stougie, Leen

    2011-01-01

    The genetic code is known to be highly error robust: it has been shown to be far more error robust than randomly selected codes, but significantly less error robust than a certain code found by a heuristic algorithm. We formulate this optimization problem as a Quadratic Assignment Problem and use this to formally verify that the code found by the heuristic algorithm is the global optimum. We also argue that it is strongly misleading to compare the genetic code only with codes sampled from the fixed block model, because the real code space is orders of magnitude larger. We thus enlarge the space from which random codes can be sampled from approximately 2.433 × 10^18 codes to approximately 5.908 × 10^45 codes. We do this by leaving the fixed block model and using the wobble rules to formulate the characteristics acceptable for a genetic code. By relaxing more constraints, three larger spaces are also constructed. Using a modified error function, the genetic code is found to be more error robust compared to a background of randomly generated codes with increasing space size. We point out that these results do not necessarily imply that the code was optimized during evolution for error minimization, but that other mechanisms could be the reason for this error robustness.
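
    The general recipe of such comparisons, score one assignment against a background of randomly permuted assignments, can be sketched as follows. This is a deliberately tiny, hypothetical example: the 16 two-letter "codons", the block structure, and the property values are invented for illustration and are not the real genetic code, polar requirements, or the paper's quadratic-assignment formulation.

      import itertools, random

      random.seed(1)

      # Hypothetical toy code: 16 two-letter "codons" over {A,C,G,U}, grouped into
      # 8 fixed blocks of 2 codons each; every block is assigned one "amino acid"
      # characterised by a single illustrative property value (invented numbers).
      letters = "ACGU"
      codons = ["".join(p) for p in itertools.product(letters, repeat=2)]
      blocks = [codons[i:i + 2] for i in range(0, 16, 2)]
      properties = [7.0, 4.9, 8.1, 5.6, 10.1, 6.6, 7.9, 5.3]   # one value per block

      def code_error(assignment):
          """Mean squared property change over all single-letter substitutions."""
          prop = {c: assignment[i] for i, block in enumerate(blocks) for c in block}
          total, count = 0.0, 0
          for c in codons:
              for pos in range(2):
                  for repl in letters:
                      if repl == c[pos]:
                          continue
                      neighbour = c[:pos] + repl + c[pos + 1:]
                      total += (prop[c] - prop[neighbour]) ** 2
                      count += 1
          return total / count

      original = code_error(properties)
      samples = [code_error(random.sample(properties, len(properties))) for _ in range(10000)]
      better = sum(s < original for s in samples)
      print(f"original error {original:.3f}; {better} of {len(samples)} random codes are more robust")

    Enlarging the space of random codes, as the paper does with the wobble rules, amounts to sampling the permutations from a much larger set than the fixed-block permutations used here.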

  5. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning system: the theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and to rule out alternative theories, we constructed a neural model matching the prediction error theory, by modifying our previous model of learning in crickets, and tested a prediction from the model: the model predicts that pharmacological intervention in octopaminergic transmission during appetitive conditioning impairs learning but not the formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competing accounts of blocking. This study unambiguously demonstrates the validity of the prediction error theory in associative learning.
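
    The blocking logic itself follows directly from any prediction-error learning rule. The sketch below is a generic textbook Rescorla-Wagner simulation (the learning rate and trial counts are arbitrary assumptions), not the authors' cricket circuit model, but it shows why a fully predicted reward leaves no error to drive learning about an added stimulus.

      alpha, lam = 0.3, 1.0   # learning rate and asymptotic reward prediction (arbitrary values)

      def train(V, stimuli, reward, trials):
          """Rescorla-Wagner: each present stimulus is updated by the shared prediction error."""
          for _ in range(trials):
              prediction = sum(V[s] for s in stimuli)
              delta = reward - prediction            # prediction error
              for s in stimuli:
                  V[s] += alpha * delta
          return V

      # Phase 1: stimulus A alone is paired with reward until it predicts it almost fully.
      V = {"A": 0.0, "B": 0.0}
      train(V, ["A"], lam, trials=50)

      # Phase 2: A and B are presented together with the same reward.
      # Because A already predicts the reward, the prediction error is near zero
      # and B acquires almost no associative strength -- the blocking effect.
      train(V, ["A", "B"], lam, trials=50)
      print({k: round(v, 3) for k, v in V.items()})   # A close to 1.0, B close to 0.0

    The "auto-blocking" experiment corresponds to the same situation with the error signal itself silenced during phase 1, which likewise leaves nothing to learn in later training.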

  6. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
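
    A common computational reading of this signal is the temporal-difference prediction error, delta = r + gamma * V(next) - V(current). The snippet below is a minimal, generic TD(0) illustration of the three cases described above (positive, zero, and negative prediction errors); the states and values are invented for illustration, and this is not a model of any specific dopamine recording.

      # Temporal-difference prediction error: delta = r + gamma * V(next) - V(current).
      gamma = 0.9
      V = {"cue": 1.0, "no_cue": 0.0, "terminal": 0.0}   # the cue fully predicts a reward of 1.0

      def td_error(state, reward, next_state):
          return reward + gamma * V[next_state] - V[state]

      print("unexpected reward :", td_error("no_cue", 1.0, "terminal"))   # positive delta (+1.0)
      print("predicted reward  :", td_error("cue", 1.0, "terminal"))      # zero delta
      print("omitted reward    :", td_error("cue", 0.0, "terminal"))      # negative delta (-1.0)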

  7. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  8. Error robustness evaluation of H.264/MPEG-4 AVC

    NASA Astrophysics Data System (ADS)

    Halbach, Till; Olsen, Steffen

    2004-01-01

    The robustness of the recently ratified video compression standard H.264/MPEG-4 AVC against channel errors is evaluated, with a focus on rate-distortion performance. After a brief introduction to the standard and an explanation of its error-resilience features, it is investigated how the error resilience tools of H.264 can best be deployed for packet-wise transmission as in ATM, H.323, and IP-based services. Further, the performances of two error concealment strategies for use in an H.264-conformant decoder are compared to each other.

  9. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
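
    The mechanism is easy to reproduce by simulation. In the sketch below (parameter values and the classical-error model are assumptions, not the authors' scenarios), only X1 truly affects Y, X1 is observed with measurement error, and the correlated but irrelevant X2 is tested at the nominal 5% level; the empirical false-positive rate comes out well above 0.05.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n, trials, alpha = 200, 2000, 0.05
      rho, reliability = 0.7, 0.6          # X1-X2 correlation and reliability of observed X1

      false_pos = 0
      for _ in range(trials):
          # latent predictors, correlated; only X1 affects Y
          x1 = rng.standard_normal(n)
          x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
          y = 1.0 * x1 + rng.standard_normal(n)
          # X1 is observed with classical (random) measurement error
          x1_obs = x1 + np.sqrt((1 - reliability) / reliability) * rng.standard_normal(n)

          X = np.column_stack([np.ones(n), x1_obs, x2])
          beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          sigma2 = resid @ resid / (n - X.shape[1])
          se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
          t2 = beta[2] / se[2]                       # t statistic for the null-true X2 coefficient
          p2 = 2 * stats.t.sf(abs(t2), df=n - X.shape[1])
          false_pos += p2 < alpha

      print("empirical Type I error rate for X2:", false_pos / trials)   # well above 0.05

    Intuitively, the noisy X1 cannot fully absorb its own effect on Y, so part of that effect is picked up by the correlated X2, biasing its coefficient away from zero.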

  10. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  11. Processor register error correction management

    SciTech Connect

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
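
    A software caricature of the idea, purely illustrative and not the patent's design (the register names, table layout, and recovery policy here are invented), is sketched below: a table marks which logical registers are sensitive, writes to those registers are mirrored into duplicates, and reads compare the two copies.

      class ProtectedRegisterFile:
          """Toy model: sensitive registers get a shadow copy that is written
          in parallel and checked on every read."""

          def __init__(self, sensitive):
              self.sensitive = set(sensitive)       # the "error-correction table"
              self.regs, self.shadow = {}, {}

          def write(self, name, value):
              self.regs[name] = value
              if name in self.sensitive:
                  self.shadow[name] = value         # duplicate register update

          def read(self, name):
              value = self.regs[name]
              if name in self.sensitive and self.shadow[name] != value:
                  # mismatch detected: recover from the duplicate
                  # (one possible policy among several; a real design might vote or re-fetch)
                  value = self.shadow[name]
                  self.regs[name] = value
              return value

      rf = ProtectedRegisterFile(sensitive=["r1"])
      rf.write("r1", 42)
      rf.regs["r1"] = 13          # simulate a bit-flip corrupting the primary copy
      print(rf.read("r1"))        # 42: restored from the duplicate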

  12. Knowledge of healthcare professionals about medication errors in hospitals

    PubMed Central

    Abdel-Latif, Mohamed M. M.

    2016-01-01

    Context: Medication errors are the most common type of medical error in hospitals and a leading cause of morbidity and mortality among patients. Aims: The aim of the present study was to assess the knowledge of healthcare professionals about medication errors in hospitals. Settings and Design: A self-administered questionnaire was distributed to randomly selected healthcare professionals in eight hospitals in Madinah, Saudi Arabia. Subjects and Methods: An 18-item survey was designed, comprising questions on demographic data, knowledge of medication errors, availability of reporting systems in hospitals, attitudes toward error reporting, and causes of medication errors. Statistical Analysis Used: Data were analyzed with Statistical Package for the Social Sciences software, Version 17. Results: A total of 323 healthcare professionals completed the questionnaire (a 64.6% response rate): 138 (42.72%) physicians, 34 (10.53%) pharmacists, and 151 (46.75%) nurses. A majority of the participants had good knowledge of the medication error concept and its dangers to patients, but only 68.7% were aware of reporting systems in hospitals. Healthcare professionals reported that there was no clear mechanism for reporting errors in most hospitals. Prescribing (46.5%) and administration (29%) errors were the main types reported. The medication errors most frequently encountered involved antihypertensives, antidiabetics, antibiotics, digoxin, and insulin. Conclusions: This study revealed differences in awareness of medication errors among healthcare professionals in hospitals. The poor knowledge about medication errors emphasizes the urgent need to adopt appropriate measures to raise awareness about medication errors in Saudi hospitals. PMID:27330261

  13. Effects of Professional Group Membership, Intervention Type, and Diagnostic Label on Treatment Acceptability.

    ERIC Educational Resources Information Center

    Fairbanks, Larry D.; Stinnett, Terry A.

    1997-01-01

    Investigates professionals' ratings of treatment acceptability for two interventions. Teachers, school psychologists, and school social workers (N=97) viewed a vignette of a student exhibiting disruptive behavior and then rated the intervention's acceptability. Results show that professional group membership produced a significant interaction…

  14. Applications of Reaction Rate

    ERIC Educational Resources Information Center

    Cunningham, Kevin

    2007-01-01

    This article presents an assignment in which students are to research and report on a chemical reaction whose increased or decreased rate is of practical importance. Specifically, students are asked to represent the reaction they have chosen with an acceptable chemical equation, identify a factor that influences its rate and explain how and why it…

  15. Studying Student Teachers' Acceptance of Role Responsibility.

    ERIC Educational Resources Information Center

    Davis, Michael D.; Davis, Concetta M.

    1980-01-01

    There is variance in the way in which student teachers accept responsibility for the teaching act. This study explains why some variables may affect student teachers' acceptance of role responsibilities. (CM)

  16. [Subjective well-being and self acceptance].

    PubMed

    Makino, Y; Tagami, F

    1998-06-01

    The purpose of the present study was to examine the relationship between subjective well-being and self acceptance, and to design a happiness self-writing program to increase self acceptance and subjective well-being in adolescents. In Study 1, we examined the relationship between social interaction and self acceptance. In Study 2, we created a happiness self-writing program based on a cognitive-behavioral approach and examined whether the program promoted self acceptance and subjective well-being. Results indicated that acceptance of self-openness, an aspect of self acceptance, was related to subjective well-being. The happiness self-writing program increased subjective well-being, but it was not found to have increased self acceptance. We discuss why the program could promote subjective well-being but not self acceptance.

  17. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    Method of computing position of user station receiving signals from Global Positioning System (GPS) of navigational satellites compensates for most of GPS ephemeris error. Present method enables user station to reduce error in its computed position substantially. User station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in neighborhood of reference stations. Method is based on fact that, when GPS data are used to compute baseline between reference station and user station, vector error in computed baseline is proportional to ephemeris error and to length of baseline.
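
    A toy numerical illustration of that proportionality (the geometry, error magnitudes, and single error-per-kilometre vector below are invented assumptions, not the actual algorithm): the baseline error observed between two reference stations at precisely known positions gives an error-per-kilometre vector, which is then scaled by the user baseline's length and subtracted out.

      import numpy as np

      # Invented scenario: the ephemeris error produces a baseline error that grows
      # linearly with baseline length, with a fixed (unknown) error per kilometre.
      err_per_km = np.array([2e-4, -1e-4, 5e-5])      # metres of error per km of baseline (toy value)

      def gps_baseline(true_baseline_km):
          """Simulated GPS-computed baseline (metres), including the ephemeris-induced error."""
          length_km = np.linalg.norm(true_baseline_km)
          return true_baseline_km * 1000.0 + err_per_km * length_km

      ref1 = np.array([0.0, 0.0, 0.0])                 # reference stations at precisely known positions (km)
      ref2 = np.array([300.0, 100.0, 0.0])
      user_true = np.array([150.0, 250.0, 0.0])        # user position to be recovered (km)

      # Step 1: calibrate the error-per-km vector from the known reference baseline.
      measured_12 = gps_baseline(ref2 - ref1)
      known_12_m = (ref2 - ref1) * 1000.0
      k_hat = (measured_12 - known_12_m) / np.linalg.norm(ref2 - ref1)

      # Step 2: correct the user baseline computed from the same GPS data.
      measured_1u = gps_baseline(user_true - ref1)
      corrected_1u = measured_1u - k_hat * np.linalg.norm(measured_1u) / 1000.0

      print("uncorrected error (m):", np.linalg.norm(measured_1u - (user_true - ref1) * 1000.0))
      print("corrected error   (m):", np.linalg.norm(corrected_1u - (user_true - ref1) * 1000.0))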

  18. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  19. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only the limits but also their errors should be calculated, in order to represent the results of the analysis fully. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Quoting the errors of confidence limits should also help abate the debate over which method is best suited to calculating them.
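
    One generic way to attach a statistical error to a limit (a toy illustration of the spread idea, not necessarily the authors' prescription; the counting-experiment setup and the classical Garwood limit are assumptions) is to replicate the same small-statistics experiment many times and look at the spread of the limits it yields.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def poisson_upper_limit(n_obs, cl=0.95):
          """Classical (Garwood) upper limit on a Poisson mean given n_obs counts."""
          return 0.5 * stats.chi2.ppf(cl, 2 * (n_obs + 1))

      # Replicate the same low-statistics counting experiment many times and record
      # the upper limit each replica would quote; the spread is the "error" on the limit.
      true_mean = 3.0
      limits = np.array([poisson_upper_limit(rng.poisson(true_mean)) for _ in range(5000)])
      print(f"95% upper limits: mean {limits.mean():.2f}, spread (std) {limits.std():.2f}")

    Two experiments quoting, say, 7 and 9 as upper limits look very different on paper, but with spreads of a few units the difference may carry little statistical weight, which is the point of quoting the error alongside the limit.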

  20. Wheat products as acceptable substitutes for rice.

    PubMed

    Yu, B H; Kies, C

    1993-07-01

    The objective of the study was to compare the acceptability, to semi-trained US American and Asian palatability panelists, of four wheat products processed as possible replacements for rice in human diets. The products evaluated, using rice as the control standard of excellence, were steamed whole wheat, couscous (steamed, extracted wheat-flour semolina), rosamarina (rice-shaped, extracted wheat-flour pasta), and bulgar (steamed, pre-cooked, partly debranned, cracked wheat). Using a ten-point hedonic rating scale, both groups of panelists gave rosamarina, closely followed by couscous, the most favorable ratings, although these ratings were somewhat lower than that of the positive control, steamed polished rice. Bulgar wheat was given the lowest evaluation and was, in general, found to be an unacceptable replacement for rice by both American and Asian judges because of its dark, 'greasy' color and distinctive flavor. In their personal diets, judges ate rice from 0.25 to 18 times per week, with the Asian judges consuming rice significantly more often than the American judges (10.8 +/- 4.71 vs 1.75 +/- 1.65 times per week, p < 0.01). However, rice consumption patterns, nationality, race, and sex of the judges were not shown to affect scoring of the wheat products as rice replacers.