Science.gov

Sample records for high error rates

  1. Efficient ARQ scheme for high error rate channels

    NASA Astrophysics Data System (ADS)

    Moeneclaey, M.; Bruneel, H.

    1984-11-01

    A continuous error detection and retransmission (ARQ) protocol which preserves the order of the data blocks is presented. A data block is retransmitted continuously until a positive acknowledgement is received; the next block is then transmitted. It is shown that under high error rate conditions, i.e., when the probability of message error exceeds 0.5, the scheme is more efficient than the go-back-N and selective-repeat schemes, and that its advantage becomes more pronounced as the propagation delay grows.
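
    For context, the classic alternatives mentioned above can be compared with standard textbook throughput expressions for selective-repeat and go-back-N ARQ. The sketch below is generic and illustrative only: it assumes independent block errors with probability p, a round-trip delay of n_rtt block times, and an error-free feedback channel, and it does not reproduce the protocol analyzed in the paper.

      # Illustrative throughput of classic ARQ schemes vs. block error probability.
      # Assumptions (not from the paper): independent block errors with probability p,
      # round-trip delay of n_rtt block times, error-free feedback channel.

      def throughput_selective_repeat(p):
          return 1.0 - p                               # only erroneous blocks are resent

      def throughput_go_back_n(p, n_rtt):
          # standard approximation: each block error costs roughly n_rtt extra slots
          return (1.0 - p) / (1.0 - p + p * n_rtt)

      for p in (0.1, 0.3, 0.5, 0.7):
          print(f"p={p:.1f}  SR={throughput_selective_repeat(p):.3f}  "
                f"GBN(n_rtt=16)={throughput_go_back_n(p, 16):.3f}")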

  2. Some new hybrid ARQ techniques for high error rate conditions

    NASA Astrophysics Data System (ADS)

    Benelli, G.

    1989-06-01

    New hybrid ARQ schemes for error control in communication systems are presented in which redundancy, achieved by retransmission of a code word, is exploited to facilitate correct code-word recovery. In the first two techniques, applicable to cases in which a code word detected in error is retransmitted several times consecutively, an error-correcting code is used in conjunction with a normal ARQ code to enhance performance even at high error rates.

  3. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Technical Reports Server (NTRS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    1989-01-01

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32 n data bits followed by 32 overhead bits.
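
    As a rough, hedged illustration of what a 2.5 dB coding gain means at a 10^-5 bit error rate, the sketch below computes the Eb/N0 required by uncoded coherent PSK over an AWGN channel and subtracts the quoted gain. This is generic link-budget arithmetic, not a model of the codec described above.

      import math

      def q_func(x):
          # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
          return 0.5 * math.erfc(x / math.sqrt(2.0))

      def ebno_db_for_ber(target_ber):
          # bisection for the Eb/N0 (in dB) at which uncoded BPSK reaches target_ber,
          # using BER = Q(sqrt(2 * Eb/N0))
          lo, hi = 0.0, 20.0
          for _ in range(60):
              mid = 0.5 * (lo + hi)
              ber = q_func(math.sqrt(2.0 * 10 ** (mid / 10.0)))
              lo, hi = (mid, hi) if ber > target_ber else (lo, mid)
          return 0.5 * (lo + hi)

      uncoded = ebno_db_for_ber(1e-5)
      print(f"uncoded Eb/N0 for 1e-5 BER: {uncoded:.2f} dB")
      print(f"with 2.5 dB coding gain:    {uncoded - 2.5:.2f} dB")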

  4. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors

    PubMed Central

    Bányai, László; Patthy, László

    2016-01-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation. PMID:27476717

  5. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors.

    PubMed

    Bányai, László; Patthy, László

    2016-08-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation.

  6. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    PubMed

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  7. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416

  8. Error analysis of high-rate GNSS precise point positioning for seismic wave measurement

    NASA Astrophysics Data System (ADS)

    Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan

    2017-06-01

    High-rate GNSS precise point positioning (PPP) has been playing a more and more important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has been reported recently with experiments to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component to measure seismic motion, which is several times better than the conventional kinematic PPP practice. To fully understand the mechanism of mystified excellent performance of high-rate PPP within a short period of time, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations within a short period of time. The theoretical analysis has clearly indicated that the high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and simulated results are fully consistent with and thus have unambiguously confirmed the reported high precision of high-rate PPP, which has been further affirmed here by the real data experiments, indicating that high-rate PPP can indeed achieve the millimeter level of precision in the horizontal components and the sub-centimeter level of precision in the vertical component to measure motion within a short period of time. The simulation results have clearly shown that the random noise of carrier phases and higher order ionospheric errors are two major factors to affect the precision of high-rate PPP within a short period of time. The experiments with real data have also indicated that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components, if the geometry of satellites is

  9. Minimization of noise-induced bit error rate in a high Tc superconducting dc/single flux quantum converter

    NASA Astrophysics Data System (ADS)

    Ortlepp, Thomas; Toepfer, Hannes; Uhlmann, Hermann F.

    2001-02-01

    The thermally induced bit error rate of a rapid single flux quantum logic circuit is theoretically examined using the Fokker-Planck equation. The error rate versus design parameters of a high Tc dc/single flux quantum converter is derived. In comparison with other design methodologies, a vanishingly small error rate at optimal parameters can be achieved.

  10. High-speed communication detector characterization by bit error rate measurements

    NASA Technical Reports Server (NTRS)

    Green, S. I.

    1978-01-01

    Performance data taken on several candidate high data rate laser communications photodetectors are presented. Measurements of bit error rate versus signal level were made in both a 1064 nm system at 400 Mbps and a 532 nm system at 500 Mbps. RCA silicon avalanche photodiodes are superior at 1064 nm, but the Rockwell hybrid III-V avalanche photodiode preamplifiers offer potentially superior performance. Varian dynamic crossed field photomultipliers are superior at 532 nm; however, the RCA silicon avalanche photodiode is a close contender.

  11. Unacceptably High Error Rates in Vitek 2 Testing of Cefepime Susceptibility in Extended-Spectrum-β-Lactamase-Producing Escherichia coli

    PubMed Central

    Rhodes, Nathaniel J.; Richardson, Chad L.; Heraty, Ryan; Liu, Jiajun; Malczynski, Michael; Qi, Chao

    2014-01-01

    While a lack of concordance is known between gold standard MIC determinations and Vitek 2, the magnitude of the discrepancy and its impact on treatment decisions for extended-spectrum-β-lactamase (ESBL)-producing Escherichia coli are not. Clinical isolates of ESBL-producing E. coli were collected from blood, tissue, and body fluid samples from January 2003 to July 2009. Resistance genotypes were identified by PCR. Primary analyses evaluated the discordance between Vitek 2 and gold standard methods using cefepime susceptibility breakpoint cutoff values of 8, 4, and 2 μg/ml. The discrepancies in MICs between the methods were classified per convention as very major, major, and minor errors. Sensitivity, specificity, and positive and negative predictive values for susceptibility classifications were calculated. A total of 304 isolates were identified; 59% (179) of the isolates carried blaCTX-M, 47% (143) carried blaTEM, and 4% (12) carried blaSHV. At a breakpoint MIC of 8 μg/ml, Vitek 2 produced a categorical agreement of 66.8% and exhibited very major, major, and minor error rates of 23% (20/87 isolates), 5.1% (8/157 isolates), and 24% (73/304), respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 8 μg/ml were 94.9%, 61.2%, 72.3%, and 91.8%, respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 2 μg/ml were 83.8%, 65.3%, 41%, and 93.3%, respectively. Vitek 2 results in unacceptably high error rates for cefepime compared to those of agar dilution for ESBL-producing E. coli. Clinicians should be wary of making treatment decisions on the basis of Vitek 2 susceptibility results for ESBL-producing E. coli. PMID:24752253
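
    The sensitivity, specificity and predictive values quoted above follow from ordinary 2x2 diagnostic-agreement arithmetic. The sketch below shows that arithmetic with hypothetical counts; it does not reproduce the study's raw tables.

      def diagnostic_metrics(tp, fn, tn, fp):
          # tp/fn/tn/fp are counts from a 2x2 table of test result vs. gold standard
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      # hypothetical counts, for illustration only
      print(diagnostic_metrics(tp=150, fn=8, tn=90, fp=56))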

  12. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities

    PubMed Central

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-01-01

    Introduction Sound is among the significant environmental factors for people's health, and it has an important role in both physical and psychological injuries, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. Methods This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person was considered as his or her own control to assess the effect of noise on performance at sound levels of 70, 90, and 110 dB, using two factors of physical features and the creation of different conditions of sound source, as well as applying the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the length of performance as well as the errors measured in the test. Results Based on the results, we found a direct and significant association between the levels of sound and the length of performance. Moreover, the participants' performance was significantly different for different sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). Conclusion This study found that a sound level of 110 dB had an important effect on the individuals' performances, i.e., the performances were decreased. PMID:27123216

  13. Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.

    1987-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  14. Bit-error-rate testing of high-power 30-GHz traveling wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Fujikawa, Gene

    1986-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30 GHz, 200 W, coupled-cavity traveling wave tubes (TWTs). The transmission effects of each TWT were investigated on a band-limited, 220 Mb/sec SMSK signal. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20 GHz technology development program. The approach taken to test the 30 GHz tubes is described and the resultant test data are discussed. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  15. ERROR CORRECTION IN HIGH SPEED ARITHMETIC,

    DTIC Science & Technology

    The errors due to a faulty high speed multiplier are shown to be iterative in nature. These errors are analyzed in various aspects. The arithmetic coding technique is suggested for the improvement of high speed multiplier reliability. Through a number theoretic investigation, a large class of arithmetic codes for single iterative error correction are developed. The codes are shown to have near-optimal rates and to render a simple decoding method. The implementation of these codes seems highly practical. (Author)

  16. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    PubMed

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest to use bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.

  17. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from publicly available benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
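
    A minimal plug-in version of the first idea described above (averaging the ensemble's posterior estimates and reading off the residual error) can be written in a few lines. This is a generic sketch under the assumption that each classifier outputs class-posterior estimates; it is not the specific estimators developed in the article.

      import numpy as np

      def plugin_bayes_error(posteriors):
          # posteriors: array of shape (n_classifiers, n_samples, n_classes) holding
          # each classifier's estimated a posteriori class probabilities
          avg = posteriors.mean(axis=0)                 # average over the ensemble
          return float(np.mean(1.0 - avg.max(axis=1)))  # E[1 - max_k P(k | x)]

      # hypothetical posteriors: 5 classifiers, 1000 samples, 4 classes
      rng = np.random.default_rng(0)
      p = rng.dirichlet(np.ones(4), size=(5, 1000))
      print(plugin_bayes_error(p))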

  18. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
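
    Per-base, per-doubling error rates of the kind compared above are commonly computed from the observed mutation count, the total bases sequenced, and the number of template doublings. The sketch below uses that common formula with hypothetical numbers; it is not necessarily the exact analysis pipeline of the study.

      import math

      def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
          doublings = math.log2(fold_amplification)    # effective template doublings
          return mutations / (bases_sequenced * doublings)

      # e.g. 25 mutations in 1.2e6 sequenced bases after ~1e6-fold amplification
      print(f"{pcr_error_rate(25, 1.2e6, 1e6):.2e} errors per base per doubling")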

  19. Paracetamol-associated acute liver failure in Australian and New Zealand children: high rate of medication errors.

    PubMed

    Rajanayagam, J; Bishop, J R; Lewindon, P J; Evans, Helen M

    2015-01-01

    In children, paracetamol overdose due to deliberate self-poisoning, accidental exposure or medication errors can lead to paediatric acute liver failure and death. In Australia and New Zealand, the nature of ingestion and outcomes of paracetamol-associated paediatric acute liver failure have not been described. To describe the nature and outcomes of paracetamol-associated paediatric acute liver failure. Retrospective analysis of paracetamol-associated paediatric acute liver failure cases presenting 2002-2012. New Zealand and Queensland Paediatric Liver Transplant Services. 14 of 54 cases of paediatric acute liver failure were attributed to paracetamol, the majority were secondary to medication errors. 12 of the 14 children were under the age of 5 years. Seven children received doses in excess of 120 mg/kg/day. Many of the other children received either a double dose, too frequent administration, coadministration of other medicines containing paracetamol or regular paracetamol for up to 24 days. Three children underwent transplant. One of these and one other child died. In Australia and New Zealand, paracetamol overdose secondary to medication errors is the leading cause of paediatric acute liver failure. A review of regional safety practices surrounding paracetamol use in children is indicated.

  20. Monitoring Error Rates In Illumina Sequencing

    PubMed Central

    Manley, Leigh J.; Ma, Duanduan; Levine, Stuart S.

    2016-01-01

    Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted. PMID:27672352

  1. Multicenter Assessment of Gram Stain Error Rates.

    PubMed

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories.

  2. Multicenter Assessment of Gram Stain Error Rates

    PubMed Central

    Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-01-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. PMID:26888900

  3. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  4. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  5. Post-manufacturing, 17-times acceptable raw bit error rate enhancement, dynamic codeword transition ECC scheme for highly reliable solid-state drives, SSDs

    NASA Astrophysics Data System (ADS)

    Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken

    2011-04-01

    A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives, SSDs. By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate, BER, before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital-still camera and high-speed memory card applications with a dual channel interleaving, 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8 channel interleaving, 13-times higher acceptable raw BER is realized. Because the ratio of the user data to the parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without cost penalty. Compared with the conventional ECC with the fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing the "best-effort" type operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte and 2 KByte is used and 98% lower power consumption is realized. At the life-end of the SSD, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte. This latency is below the 2 ms average latency of a 15,000 rpm HDD.
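
    Why a longer codeword with proportionally more correctable bits tolerates a higher raw BER can be illustrated with a simple binomial model: a block is uncorrectable when more than t bit errors fall in its n bits. The sketch below is generic arithmetic under that model with an illustrative target failure rate; the NAND-specific parameters above are not reproduced.

      from math import comb

      def block_failure_prob(n_bits, t_correctable, raw_ber):
          # probability that a binomial(n_bits, raw_ber) error count exceeds t_correctable
          p_ok = sum(comb(n_bits, k) * raw_ber**k * (1.0 - raw_ber)**(n_bits - k)
                     for k in range(t_correctable + 1))
          return 1.0 - p_ok

      def max_acceptable_raw_ber(n_bits, t_correctable, target_fail=1e-9):
          lo, hi = 0.0, 0.5                            # bisection on the raw BER
          for _ in range(60):
              mid = 0.5 * (lo + hi)
              if block_failure_prob(n_bits, t_correctable, mid) < target_fail:
                  lo = mid
              else:
                  hi = mid
          return lo

      # 8-bit correction per 512-byte sector vs. 64-bit correction per 4-KByte codeword
      print(f"{max_acceptable_raw_ber(512 * 8, 8):.2e}")
      print(f"{max_acceptable_raw_ber(4096 * 8, 64):.2e}")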

  6. Detecting imipenem resistance in Acinetobacter baumannii by automated systems (BD Phoenix, Microscan WalkAway, Vitek 2); high error rates with Microscan WalkAway

    PubMed Central

    2009-01-01

    Background Increasing reports of carbapenem resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracies of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparing to the validated test methods. Methods Selected 112 clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan performed true identification of all A. baumannii strains while Vitek 2 failed to identify one strain, and Phoenix failed to identify two strains and misidentified two strains. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly higher (0.3%) than the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing with unacceptable error rates: 28 very major errors (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest clinical laboratories using the MicroScan system for routine use should consider using a second, independent antimicrobial susceptibility testing method to

  7. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow and sea urchin fertilization tests showed that when the test x dose interaction term was not included in the calculations the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.

  8. Scalable quantum computing in the presence of large detected-error rates

    SciTech Connect

    Knill, E.

    2005-04-01

    The theoretically tolerable erasure error rate for scalable quantum computing is shown to be well above 0.1, given standard scalability assumptions. This bound is obtained by implementing computations with generic stabilizer code teleportation steps that combine the necessary operations with error correction. An interesting consequence of the technique is that the only errors that affect the maximum tolerable error rate are storage and Bell measurement errors. If storage errors are negligible, then any detected Bell measurement error below 1/2 is permissible. For practical computation with high detected error rates, the implementation overheads need to be improved.

  9. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  10. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  11. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  12. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  13. Case study: error rates and paperwork design.

    PubMed

    Drury, C G

    1998-01-01

    A job instruction document, or workcard, for civil aircraft maintenance produced a number of paperwork errors when used operationally. The design of the workcard was compared to the guidelines of Patel et al [1994, Applied Ergonomics, 25 (5), 286-293]. All of the errors occurred in work instructions which did not meet these guidelines, demonstrating that the design of documentation does affect operational performance.

  14. High-threshold topological quantum error correction against biased noise

    NASA Astrophysics Data System (ADS)

    Stephens, Ashley M.; Munro, William J.; Nemoto, Kae

    2013-12-01

    Quantum information can be protected from decoherence and other errors, but only if these errors are sufficiently rare. For quantum computation to become a scalable technology, practical schemes for quantum error correction that can tolerate realistically high error rates will be necessary. In some physical systems, errors may exhibit a characteristic structure that can be carefully exploited to improve the efficacy of error correction. Here we describe a scheme for topological quantum error correction to protect quantum information from a dephasing-biased error model, where we combine a repetition code with a topological cluster state. We find that the scheme tolerates error rates of up to 1.37%-1.83% per gate, requiring only short-range interactions in a two-dimensional array.

  15. Scaling and technology issues for soft error rates

    NASA Technical Reports Server (NTRS)

    Johnston, A. H.

    2000-01-01

    The effects of device technology and scaling on soft error rates are discussed, using information obtained from both the device and space communities as a guide to determine the net effect on soft errors.

  16. Errors in particle tracking velocimetry with high-speed cameras.

    PubMed

    Feng, Yan; Goree, J; Liu, Bin

    2011-05-01

    Velocity errors in particle tracking velocimetry (PTV) are studied. When using high-speed video cameras, the velocity error may increase at a high camera frame rate. This increase in velocity error is due to particle-position uncertainty, which is one of the two sources of velocity errors studied here. The other source of error is particle acceleration, which has the opposite trend of diminishing at higher frame rates. Both kinds of errors can propagate into quantities calculated from velocity, such as the kinetic temperature of particles or correlation functions. As demonstrated in a dusty plasma experiment, the kinetic temperature of particles has no unique value when measured using PTV, but depends on the sampling time interval or frame rate. It is also shown that an artifact appears in an autocorrelation function computed from particle positions and velocities, and it becomes more severe when a small sampling-time interval is used. Schemes to reduce these errors are demonstrated.
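
    The two competing error sources described above can be captured with a simple two-frame finite-difference model: position noise contributes a random velocity error of roughly sqrt(2)*sigma_x*f that grows with frame rate f, while particle acceleration a contributes a systematic error of roughly a/(2f) that shrinks with f. The sketch below evaluates both terms for hypothetical values, not the experiment's actual parameters.

      import math

      def velocity_errors(sigma_x, accel, frame_rate):
          # v is estimated as (x2 - x1) * frame_rate from two consecutive frames
          random_err = math.sqrt(2.0) * sigma_x * frame_rate   # from position noise
          accel_err = accel / (2.0 * frame_rate)               # finite-difference bias
          return random_err, accel_err

      for f in (50.0, 250.0, 1000.0):                          # frames per second
          r, a = velocity_errors(sigma_x=1e-6, accel=0.05, frame_rate=f)
          print(f"f={f:6.0f} Hz  noise term={r:.2e} m/s  acceleration term={a:.2e} m/s")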

  17. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    PubMed

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  18. Improved bit error rate estimation over experimental optical wireless channels

    NASA Astrophysics Data System (ADS)

    El Tabach, Mamdouh; Saoudi, Samir; Tortelier, Patrick; Bouchet, Olivier; Pyndiah, Ramesh

    2009-02-01

    As a part of the EU-FP7 R&D programme, the OMEGA project (hOME Gigabit Access) aims at bridging the gap between wireless terminals and the wired backbone network in homes, providing high bit rate connectivity to users. Besides radio frequencies, the wireless links will use Optical Wireless (OW) communications. To guarantee high performance and quality of service in real-time, our system needs techniques to approximate the Bit Error Probability (BEP) with a reasonable training sequence. Traditionally, the BEP is approximated by the Bit Error Rate (BER) measured by counting the number of errors within a given sequence of bits. For small BERs, the required sequences are huge and may prevent real-time estimation. In this paper, methods to estimate the BER using Probability Density Function (PDF) estimation are presented. Two a posteriori techniques based on the Parzen estimator or a constrained Gram-Charlier series expansion are adapted and applied to OW communications. Aided by simulations, a comparison is done over experimental optical channels. We show that, for different scenarios, such as optical multipath distortion or a well designed Code Division Multiple Access (CDMA) system, this approach outperforms the counting method and yields better results with a relatively small training sequence.
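
    A minimal Parzen-style sketch of PDF-based BER estimation is shown below: kernel density estimates of the receiver's soft decision statistic, conditioned on the transmitted bit, are integrated beyond the decision threshold. The decision statistics are synthetic, and the paper's constrained Gram-Charlier variant and optical channel models are not reproduced.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)
      n = 2000                                      # short "training" sequence
      y0 = rng.normal(-1.0, 0.6, n)                 # soft decisions given bit 0
      y1 = rng.normal(+1.0, 0.6, n)                 # soft decisions given bit 1

      kde0, kde1 = gaussian_kde(y0), gaussian_kde(y1)
      grid = np.linspace(-5.0, 5.0, 4001)
      dx = grid[1] - grid[0]
      # BEP = 0.5*P(decide 1 | bit 0) + 0.5*P(decide 0 | bit 1), threshold at 0
      bep = (0.5 * np.sum(kde0(grid[grid > 0])) * dx
             + 0.5 * np.sum(kde1(grid[grid < 0])) * dx)
      print(f"PDF-based BEP estimate: {bep:.2e}")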

  19. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

    DSTO-TN-0761. This report investigates the estimation of bit error rates in digital communications, motivated by recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase

  20. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  1. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  2. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  3. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  4. Technological Advancements and Error Rates in Radiation Therapy Delivery

    SciTech Connect

    Margalit, Danielle N.

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There

  5. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Johnson, Sarah J.; Lance, Andrew M.; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Ralph, T. C.; Symul, Thomas

    2017-02-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater than unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates.

  6. Approximate Minimum Bit Error Rate Equalization for Fading Channels

    NASA Astrophysics Data System (ADS)

    Kovacs, Lorant; Levendovszky, Janos; Olah, Andras; Treplan, Gergely

    2010-12-01

    A novel channel equalizer algorithm is introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithm is based on minimizing the bit error rate (BER) using a fast approximation of its gradient with respect to the equalizer coefficients. This approximation is obtained by estimating the exponential summation in the gradient with only some carefully chosen dominant terms. The paper derives an algorithm to calculate these dominant terms in real-time. Summing only these dominant terms provides a highly accurate approximation of the true gradient. Combined with a fast adaptive channel state estimator, the new equalization algorithm yields better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) equalizers. The performance of the new method is tested by simulations performed on standard wireless channels. From the performance analysis one can infer that the new equalizer is capable of efficient channel equalization and maintaining a relatively low bit error probability in the case of channels corrupted by frequency selectivity. Hence, the new algorithm can contribute to ensuring QoS communication over highly distorted channels.

  7. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  8. Hypercorrection of High Confidence Errors in Children

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2012-01-01

    Three experiments investigated whether the hypercorrection effect--the finding that errors committed with high confidence are easier, rather than more difficult, to correct than are errors committed with low confidence--occurs in grade school children as it does in young adults. All three experiments showed that Grade 3-6 children hypercorrected…

  9. Hypercorrection of High Confidence Errors in Children

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2012-01-01

    Three experiments investigated whether the hypercorrection effect--the finding that errors committed with high confidence are easier, rather than more difficult, to correct than are errors committed with low confidence--occurs in grade school children as it does in young adults. All three experiments showed that Grade 3-6 children hypercorrected…

  10. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  11. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    PubMed

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analyzing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the preanalytic, analytic and postanalytical phases was analysed. Improvement strategies were discussed in the monthly intradepartmental meetings, and additional control of the units with high error rates was provided. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory led to a reduction of the error rates mainly in the pre-analytic and analytic phases.
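
    For context, the conversion between an observed error rate, defects per million opportunities (DPMO), and a "sigma level" follows a standard convention (including the customary 1.5-sigma shift). The sketch below uses that convention with a hypothetical opportunity count; it is not the study's own calculation.

      from statistics import NormalDist

      def dpmo(defects, opportunities):
          return 1e6 * defects / opportunities

      def sigma_level(dpmo_value, shift=1.5):
          # standard short-term sigma level with the conventional 1.5-sigma shift
          return NormalDist().inv_cdf(1.0 - dpmo_value / 1e6) + shift

      # 107 recorded errors against a hypothetical 15 million opportunities
      d = dpmo(defects=107, opportunities=15_000_000)
      print(f"DPMO = {d:.1f}, sigma level = {sigma_level(d):.2f}")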

  12. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
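
    A standard exact (Clopper-Pearson) interval for the codeword error rate, including the zero-error case mentioned above, can be computed from the beta distribution. This is a generic construction offered for illustration; it is not necessarily the specific method advocated in the paper.

      from scipy.stats import beta

      def clopper_pearson(errors, trials, conf=0.95):
          # exact two-sided confidence interval for a binomial proportion
          alpha = 1.0 - conf
          lo = 0.0 if errors == 0 else beta.ppf(alpha / 2, errors, trials - errors + 1)
          hi = 1.0 if errors == trials else beta.ppf(1 - alpha / 2, errors + 1, trials - errors)
          return lo, hi

      # zero observed codeword errors in one million simulated codewords:
      print(clopper_pearson(0, 1_000_000))   # upper limit is roughly 3.7e-6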

  13. Dose error from deviation of dwell time and source position for high dose-rate 192Ir in remote afterloading system

    PubMed Central

    Okamoto, Hiroyuki; Aikawa, Ako; Wakita, Akihisa; Yoshio, Kotaro; Murakami, Naoya; Nakamura, Satoshi; Hamada, Minoru; Abe, Yoshihisa; Itami, Jun

    2014-01-01

    The influence of deviations in dwell times and source positions for 192Ir HDR-RALS was investigated, and the potential dose errors for various kinds of brachytherapy procedures were evaluated. The dwell-time deviation ΔT of a 192Ir HDR source was measured for various dwell times with a well-type ionization chamber. The source-position deviation ΔP was measured with two methods: one measures the actual source position using a check-ruler device, and the other analyzes peak distances on radiographic film irradiated with a 20-mm gap between dwell positions. Composite dose errors were calculated using Gaussian distributions with ΔT and ΔP as the 1σ of the measurements. Dose errors depend on dwell time and on the distance from the point of interest to the dwell position. To evaluate the dose error in clinical practice, dwell times and point-of-interest distances were obtained from actual treatment plans involving cylinder, tandem-ovoid, tandem-ovoid with interstitial needles, multiple interstitial needles, and surface-mold applicators. ΔT and ΔP were 32 ms (maximum over the various dwell times) and 0.12 mm (ruler) or 0.11 mm (radiographic film). The multiple-interstitial-needle technique showed the highest dose error, about 2%, while the others showed less than approximately 1%. Potential dose error due to dwell-time and source-position deviations therefore depends on the brachytherapy technique, with multiple interstitial needles being the most susceptible. PMID:24566719
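
    The error-propagation idea in this abstract can be illustrated with a toy Monte Carlo that treats the measured ΔT and ΔP as the 1σ of Gaussian perturbations and assumes a bare inverse-square point-source dose model (an assumption made only for illustration; it is not the clinical dose formalism used in the study). The dwell time, distances and sample size below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      def relative_dose_error(dwell_t=10.0, dist_mm=20.0,
                              sigma_t=0.032, sigma_p=0.12, n=100_000):
          """Toy Monte Carlo: dose ~ dwell_time / distance**2 (point source, inverse square).
          sigma_t in seconds (32 ms per the abstract), sigma_p and dist_mm in millimetres."""
          t = dwell_t + rng.normal(0.0, sigma_t, n)
          d = dist_mm + rng.normal(0.0, sigma_p, n)
          dose = t / d**2
          nominal = dwell_t / dist_mm**2
          return np.std(dose / nominal)          # 1-sigma relative dose error

      for dist in (5.0, 10.0, 20.0):             # closer points are more sensitive to position error
          print(dist, relative_dose_error(dist_mm=dist))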

  14. Estimation of error rates in classification of distorted imagery.

    PubMed

    Lahart, M J

    1984-04-01

    This correspondence considers the problem of matching image data to a large library of objects when the image is distorted. Two types of distortions are considered: blur-type, in which a transfer function is applied to Fourier components of the image, and scale-type, in which each Fourier component is mapped into another. The objects of the library are assumed to be normally distributed in an appropriate feature space. Approximate expressions are developed for classification error rates as a function of noise. The error rates they predict are compared with those from classification of artificial data, generated by a Gaussian random number generator, and with error rates from classification of actual data. It is demonstrated that, for classification purposes, distortions can be characterized by a small number of parameters.

  15. Failure modes and effects analysis in image-guided high-dose-rate brachytherapy: Quality control optimization to reduce errors in treatment volume.

    PubMed

    Wadi-Ramahi, Shada; Alnajjar, Waleed; Mahmood, Rana; Jastaniyah, Noha; Moftah, Belal

    2016-01-01

    The aim was to analyze the inputs that lead to treatment of the wrong volume in high-dose-rate brachytherapy (HDRB), with emphasis on the role of imaging during implant, planning, and treatment verification, and then to compare our current practice with the findings of the study and apply changes where necessary. Failure mode and effects analysis was used to study the failure pathways for treating the wrong volume in HDRB. The roles of imaging and personnel were emphasized, and subcategories were formed. A quality assurance procedure is proposed for each high-scoring failure mode (FM). Forty FMs were found that lead to treating the wrong volume. Of these, 73% were human failures, 20% were machine failures, and 7% were procedural/guideline failures. The use of imaging was found to resolve 85% of the FMs. We also noted that imaging processes were underused in current HDRB practice, especially in pretreatment verification. Twelve FMs (30%) scored the highest, and for each of them we propose clinical/practical solutions that could reduce the risk by increasing detectability. This work yielded two conclusions: imaging improves failure detection, and human-based failures dominate. The majority of FMs are human failures, and imaging increased the ability to detect 85% of all FMs. We proposed quality assurance practices for each high-scoring FM and have implemented some of them in our own practice. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
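
    For readers unfamiliar with the scoring step of failure mode and effects analysis, the sketch below shows the standard risk priority number ranking (RPN = severity x occurrence x detectability) on a few hypothetical failure modes; the modes and scores are illustrative and are not those of the study.

      # Hypothetical failure modes; each factor scored on a 1-10 scale (10 = worst).
      # Note: a high detectability score means the failure is HARD to detect, so
      # improving detection (e.g., adding an imaging check) lowers the score and the RPN.
      failure_modes = [
          # (description,                                 severity, occurrence, detectability)
          ("applicator shift between imaging and treatment",      9,          4,             7),
          ("wrong catheter length entered in planning system",    8,          3,             6),
          ("target contoured on the wrong image set",             9,          2,             5),
      ]

      def rpn(sev, occ, det):
          return sev * occ * det   # risk priority number

      ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
      for desc, sev, occ, det in ranked:
          print(f"RPN={rpn(sev, occ, det):3d}  {desc}")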

  16. Error Rate Estimation in Quantum Key Distribution with Finite Resources

    NASA Astrophysics Data System (ADS)

    Lu, Zhao; Shi, Jian-Hong; Li, Feng-Guang

    2017-04-01

    The goal of quantum key distribution (QKD) is to generate a secret key shared between two distant players, Alice and Bob. We present the connection between the sampling rate and the probability of erroneous judgment when estimating the error rate with a random-sampling method, and we propose a method to compute the optimal sampling rate, which maximizes the final secure key generation rate. These results can be used to choose the optimal sampling rate and to improve the performance of a QKD system with finite resources. Supported by the National Natural Science Foundation of China under Grant Nos. U1304613 and 11204379.
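
    The sketch below is not the optimization described in the paper; it only illustrates the underlying trade-off, assuming a simple random sample of the sifted key and a Hoeffding-style bound: sampling more bits tightens the error-rate estimate but leaves fewer bits for the final key. The key length, true error rate and sampling fractions are hypothetical.

      import math, random

      random.seed(1)

      def sampled_qber(key_errors, sample_frac, eps=1e-10):
          """Estimate the error rate from a random sample, with a Hoeffding-style upper bound."""
          n = len(key_errors)
          k = int(sample_frac * n)
          sample = random.sample(key_errors, k)
          qber_hat = sum(sample) / k
          delta = math.sqrt(math.log(1 / eps) / (2 * k))   # bound holds with prob. >= 1 - eps
          return qber_hat, qber_hat + delta, n - k         # estimate, upper bound, bits kept

      bits = [1 if random.random() < 0.03 else 0 for _ in range(200_000)]  # 3% true error rate
      for frac in (0.01, 0.05, 0.20):
          print(frac, sampled_qber(bits, frac))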

  17. Reduction of Error Rates at PW Pipe. Evaluation Report.

    ERIC Educational Resources Information Center

    Rhodes, Larry

    During the Workplace Training Project, workplace trainers from Oregon's Lane Community College (LCC) provided workplace math classes to employees of an area business, PW Pipe. The math training was designed to help employees increase their proficiency in math and thereby reduce production error rates. During the training, PW Pipe's employees…

  18. Bit Error Rate of Coherent M-ary PSK

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1985-01-01

    The bit error rate (BER) for the coherent detection of M-ary PSK signals with Gray code bit mapping is considered. A closed-form expression for the exact BER of M-ary PSK is presented. Tight upper and lower bounds on BER are also obtained for M-ary PSK with larger M.
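
    The closed-form expression of the paper is not reproduced here, but the widely used Gray-coding approximation for coherent M-ary PSK in AWGN, Pb ~ (2/log2 M) Q(sqrt(2 (log2 M) Eb/N0) sin(pi/M)), can be evaluated in a few lines of Python (illustrative only; the approximation is reasonable for M >= 4 at moderate-to-high SNR).

      import math

      def q_func(x):
          return 0.5 * math.erfc(x / math.sqrt(2.0))

      def mpsk_ber_approx(ebno_db, M):
          """Approximate BER of coherent, Gray-coded M-ary PSK in AWGN (M >= 4)."""
          k = math.log2(M)
          ebno = 10 ** (ebno_db / 10.0)
          return (2.0 / k) * q_func(math.sqrt(2.0 * k * ebno) * math.sin(math.pi / M))

      for M in (4, 8, 16):
          print(M, [f"{mpsk_ber_approx(snr_db, M):.2e}" for snr_db in (4, 8, 12)])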

  19. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    PubMed

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  20. Impact of translational error-induced and error-free misfolding on the rate of protein evolution

    PubMed Central

    Yang, Jian-Rong; Zhuang, Shi-Mei; Zhang, Jianzhi

    2010-01-01

    What determines the rate of protein evolution is a fundamental question in biology. Recent genomic studies revealed a surprisingly strong anticorrelation between the expression level of a protein and its rate of sequence evolution. This observation is currently explained by the translational robustness hypothesis in which the toxicity of translational error-induced protein misfolding selects for higher translational robustness of more abundant proteins, which constrains sequence evolution. However, the impact of error-free protein misfolding has not been evaluated. We estimate that a non-negligible fraction of misfolded proteins are error free and demonstrate by a molecular-level evolutionary simulation that selection against protein misfolding results in a greater reduction of error-free misfolding than error-induced misfolding. Thus, an overarching protein-misfolding-avoidance hypothesis that includes both sources of misfolding is superior to the translational robustness hypothesis. We show that misfolding-minimizing amino acids are preferentially used in highly abundant yeast proteins and that these residues are evolutionarily more conserved than other residues of the same proteins. These findings provide unambiguous support to the role of protein-misfolding-avoidance in determining the rate of protein sequence evolution. PMID:20959819

  1. Impact of translational error-induced and error-free misfolding on the rate of protein evolution.

    PubMed

    Yang, Jian-Rong; Zhuang, Shi-Mei; Zhang, Jianzhi

    2010-10-19

    What determines the rate of protein evolution is a fundamental question in biology. Recent genomic studies revealed a surprisingly strong anticorrelation between the expression level of a protein and its rate of sequence evolution. This observation is currently explained by the translational robustness hypothesis in which the toxicity of translational error-induced protein misfolding selects for higher translational robustness of more abundant proteins, which constrains sequence evolution. However, the impact of error-free protein misfolding has not been evaluated. We estimate that a non-negligible fraction of misfolded proteins are error free and demonstrate by a molecular-level evolutionary simulation that selection against protein misfolding results in a greater reduction of error-free misfolding than error-induced misfolding. Thus, an overarching protein-misfolding-avoidance hypothesis that includes both sources of misfolding is superior to the translational robustness hypothesis. We show that misfolding-minimizing amino acids are preferentially used in highly abundant yeast proteins and that these residues are evolutionarily more conserved than other residues of the same proteins. These findings provide unambiguous support to the role of protein-misfolding-avoidance in determining the rate of protein sequence evolution.

  2. On quaternary DPSK error rates due to noise and interferences

    NASA Astrophysics Data System (ADS)

    Lye, K. M.; Tjhung, T. T.

    A method for computing the error rates of a quaternary, differentially encoded and detected, phase shift keyed (DPSK) system with Gaussian noise, intersymbol and adjacent channel interferences is presented. In the calculations, intersymbol effects due to the band-limiting IF filter were assumed to have come only from immediately adjacent symbols. Similarly, only immediately adjacent channels were assumed to have contributed toward interchannel interferences. Noise effects were handled by using a probability density formula for corrupted phase differences derived recently by Paula (1981). An experimental system was set up, and error rates measured to verify the analytical results. From the results, optimum receiver bandwidth and channel separation for quaternary DPSK systems can be determined.

  3. Calculate bit error rate for digital radio signal transmission

    NASA Astrophysics Data System (ADS)

    Sandberg, Jorgen

    1987-06-01

    A method for estimating the symbol error rate caused by imperfect transmission channels is proposed. The method relates the symbol error rate to peak-to-peak amplitude and phase ripple, maximum gain slope, and maximum group delay distortion. The performance degradation of QPSK, offset QPSK (OQPSK), and minimum shift keying (MSK) signals transmitted over a wideband channel exhibiting either sinusoidal amplitude or phase ripple is evaluated using the proposed method. The transmission channel model, a single filter whose transfer characteristics represent the frequency response of the system, is described. Consideration is given to signal detection and system degradation. The calculations reveal that the QPSK-modulated carrier degrades less than the OQPSK and MSK carriers for peak-to-peak amplitude ripple values less than 6 dB and peak-to-peak phase ripple values less than 45 deg.

  4. Coevolution of Quasispecies: B-Cell Mutation Rates Maximize Viral Error Catastrophes

    NASA Astrophysics Data System (ADS)

    Kamp, Christel; Bornholdt, Stefan

    2002-02-01

    Coevolution of two coupled quasispecies is studied, motivated by the competition between viral evolution and an adapting immune response. In this coadaptive model, besides the classical error catastrophe for high virus mutation rates, a second "adaptation" catastrophe occurs when virus mutation rates are too small to escape immune attack. Maximizing both regimes of viral error catastrophes is a possible strategy for an optimal immune response, reducing the range of allowed viral mutation rates to a minimum. From this requirement, one obtains constraints on B-cell mutation rates and receptor lengths, yielding an estimate of somatic hypermutation rates in the germinal center in accordance with observation.

  5. Theoretical Accuracy for ESTL Bit Error Rate Tests

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin

    1998-01-01

    "Bit error rate" [BER] for the purposes of this paper is the fraction of binary bits which are inverted by passage through a communication system. BER can be measured for a block of sample bits by comparing a received block with the transmitted block and counting the erroneous bits. Bit Error Rate [BER] tests are the most common type of test used by the ESTL for evaluating system-level performance. The resolution of the test is obvious: the measurement cannot be resolved more finely than 1/N, the number of bits tested. The tolerance is not. This paper examines the measurement accuracy of the bit error rate test. It is intended that this information will be useful in analyzing data taken in the ESTL. This paper is divided into four sections and follows a logically ordered presentation, with results developed before they are evaluated. However, first-time readers will derive the greatest benefit from this paper by skipping the lengthy section devoted to analysis, and treating it as reference material. The analysis performed in this paper is based on a Probability Density Function [PDF] which is developed with greater detail in a past paper, Theoretical Accuracy for ESTL Probability of Acquisition Tests, EV4-98-609.

  6. Chemotherapy medication errors in a pediatric cancer treatment center: prospective characterization of error types and frequency and development of a quality improvement initiative to lower the error rate.

    PubMed

    Watts, Raymond G; Parsons, Kerry

    2013-08-01

    Chemotherapy medication errors occur in all cancer treatment programs. Such errors have potentially severe consequences: either enhanced toxicity or impaired disease control. Understanding and limiting chemotherapy errors are imperative. A multi-disciplinary team developed and implemented a prospective pharmacy surveillance system of chemotherapy prescribing and administration errors from 2008 to 2011 at a Children's Oncology Group-affiliated, pediatric cancer treatment program. Every chemotherapy order was prospectively reviewed for errors at the time of order submission. All chemotherapy errors were graded using standard error severity codes. Error rates were calculated by number of patient encounters and chemotherapy doses dispensed. Process improvement was utilized to develop techniques to minimize errors with a goal of zero errors reaching the patient. Over the duration of the study, more than 20,000 chemotherapy orders were reviewed. Error rates were low (6/1,000 patient encounters and 3.9/1,000 medications dispensed) at the start of the project and were reduced by about 50%, to 3/1,000 patient encounters and 1.8/1,000 medications dispensed, during the initiative. Error types included chemotherapy dosing or prescribing errors (42% of errors), treatment roadmap errors (26%), supportive care errors (15%), timing errors (12%), and pharmacy dispensing errors (4%). Ninety-two percent of errors were intercepted before reaching the patient. No error caused identified patient harm. Efforts to lower rates were successful but have not succeeded in preventing all errors. Chemotherapy medication errors are possibly unavoidable, but can be minimized by thoughtful, multispecialty review of current policies and procedures. Pediatr Blood Cancer 2013;60:1320-1324. Copyright © 2013 Wiley Periodicals, Inc.

  7. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  8. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAM's (UPD4216) and 1 Kb SRAM's (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  9. Controlling Rater Stringency Error in Clinical Performance Rating: Further Validation of a Performance Rating Theory.

    ERIC Educational Resources Information Center

    Cason, Gerald J.; And Others

    Prior research in a single clinical training setting has shown Cason and Cason's (1981) simplified model of their performance rating theory can improve rating reliability and validity through statistical control of rater stringency error. Here, the model was applied to clinical performance ratings of 14 cohorts (about 250 students and 200 raters)…

  10. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  11. Error propagation from prime variables into specific rates and metabolic fluxes for mammalian cells in perfusion culture.

    PubMed

    Goudar, Chetan T; Biener, Richard; Konstantinov, Konstantin B; Piret, James M

    2009-01-01

    Error propagation from prime variables into specific rates and metabolic fluxes was quantified for high-concentration CHO cell perfusion cultivation. Prime variable errors were first determined from repeated measurements and ranged from 4.8 to 12.2%. Errors in nutrient uptake and metabolite/product formation rates for 5-15% error in prime variables ranged from 8-22%. The specific growth rate, however, was characterized by higher uncertainty as 15% errors in the bioreactor and harvest cell concentration resulted in 37.8% error. Metabolic fluxes were estimated for 12 experimental conditions, each of 10 day duration, during 120-day perfusion cultivation and were used to determine error propagation from specific rates into metabolic fluxes. Errors of the greater metabolic fluxes (those related to glycolysis, lactate production, TCA cycle and oxidative phosphorylation) were similar in magnitude to those of the related greater specific rates (glucose, lactate, oxygen and CO(2) rates) and were insensitive to errors of the lesser specific rates (amino acid catabolism and biosynthesis rates). Errors of the lesser metabolic fluxes (those related to amino acid metabolism), however, were extremely sensitive to errors of the greater specific rates to the extent that they were no longer representative of cellular metabolism and were much less affected by errors in the lesser specific rates. We show that the relationship between specific rate and metabolic flux error could be accurately described by normalized sensitivity coefficients, which were readily calculated once metabolic fluxes were estimated. Their ease of calculation, along with their ability to accurately describe the specific rate-metabolic flux error relationship, makes them a necessary component of metabolic flux analysis. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
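
    A stripped-down illustration of the normalized-sensitivity idea is given below. It assumes a simplified perfusion mass balance q = D(S_in - S_out)/X for a specific uptake rate (an assumption made for illustration only, not the paper's full flux model) and combines the relative errors of the prime variables in quadrature, weighted by the normalized sensitivities d ln q / d ln x. All numerical values are hypothetical.

      import math

      def specific_uptake_rate(feed_conc, harvest_conc, perfusion_rate, cell_density):
          """Simplified perfusion mass balance: q = D * (S_in - S_out) / X."""
          return perfusion_rate * (feed_conc - harvest_conc) / cell_density

      def relative_error_q(rel_errs, sensitivities):
          """First-order propagation: relative errors weighted by normalized sensitivities."""
          return math.sqrt(sum((s * e) ** 2 for s, e in zip(sensitivities, rel_errs)))

      # Example: 5% error on concentrations and perfusion rate, 10% on cell density.
      S_in, S_out, D, X = 6.0, 1.5, 1.0, 2.0e7
      q = specific_uptake_rate(S_in, S_out, D, X)
      # Normalized sensitivities d ln(q) / d ln(x) for each prime variable.
      sens = [S_in / (S_in - S_out), -S_out / (S_in - S_out), 1.0, -1.0]
      print(q, relative_error_q([0.05, 0.05, 0.05, 0.10], sens))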

  12. Coding gains and error rates from the Big Viterbi Decoder

    NASA Astrophysics Data System (ADS)

    Onyszchuk, I. M.

    1991-08-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of the bit signal-to-noise ratio Eb/N0 on the additive white Gaussian noise channel. Using the constraint-length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10^(-8) and a BER of 1.4 x 10^(-9). The (15,1/6) code to be used by the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the longer experimental codes require higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  13. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of the bit signal-to-noise ratio Eb/N0 on the additive white Gaussian noise channel. Using the constraint-length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10^(-8) and a BER of 1.4 x 10^(-9). The (15,1/6) code to be used by the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the longer experimental codes require higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  15. National suicide rates a century after Durkheim: do we know enough to estimate error?

    PubMed

    Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W

    2010-06-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.

  16. High population increase rates.

    PubMed

    1991-09-01

    In addition to its economic and ethnic difficulties, the USSR faces several pressing demographic problems, including high population increase rates in several of its constituent republics. It has now become clear that although the country's rigid centralized planning succeeded in covering the basic needs of people, it did not lead to welfare growth. Since the 1970s, the Soviet economy has remained sluggish, which has led to an increase in the death and birth rates. Furthermore, the ideology that held that demography could be entirely controlled by the country's political and economic system is contradicted by current Soviet reality, which shows that religion and ethnicity also play a significant role in demographic dynamics. Currently, Soviet republics fall into 2 categories--areas with high or low natural population increase rates. Republics with low rates consist of Christian populations (Armenia, Moldavia, Georgia, Byelorussia, Russia, Lithuania, Estonia, Latvia, Ukraine), while republics with high rates are Muslim (Tadzhikistan, Uzbekistan, Turkmenistan, Kirgizia, Azerbaijan, Kazakhstan). The latter group has natural increase rates as high as 3.3%. Although the USSR as a whole is not considered a developing country, the latter group of republics fits the description of the UNFPA's priority list. Another serious demographic issue facing the USSR is its extremely high rate of abortion. This is especially true in the republics with low birth rates, where up to 60% of all pregnancies are terminated by induced abortions. Up to 1/5 of the USSR's annual health care budget is spent on clinical abortions -- money which could be better spent on the production of contraceptives. Along with the recent political and economic changes, the USSR is now eager to deal with its demographic problems.

  17. Error Rates and Channel Capacities in Multipulse PPM

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Moision, Bruce

    2007-01-01

    A method of computing channel capacities and error rates in multipulse pulse-position modulation (multipulse PPM) has been developed. The method makes it possible, when designing an optical PPM communication system, to determine whether and under what conditions a given multipulse PPM scheme would be more or less advantageous, relative to other candidate modulation schemes. In conventional M-ary PPM, each symbol is transmitted in a time frame that is divided into M time slots (where M is an integer >1), defining an M-symbol alphabet. A symbol is represented by transmitting a pulse (representing 1) during one of the time slots and no pulse (representing 0) during the other M - 1 time slots. Multipulse PPM is a generalization of PPM in which pulses are transmitted during two or more of the M time slots.
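
    The alphabet-size bookkeeping behind multipulse PPM is easy to check numerically: placing k pulses in M slots gives C(M, k) distinct symbols and hence log2 C(M, k) bits per symbol. The short sketch below (with illustrative values of M and k) shows how quickly the alphabet grows relative to conventional PPM.

      from math import comb, log2

      def bits_per_symbol(M, k):
          """k pulses placed in M slots -> C(M, k) distinct symbols."""
          return log2(comb(M, k))

      M = 64
      for k in (1, 2, 3):
          print(f"M={M}, k={k}: {comb(M, k)} symbols, {bits_per_symbol(M, k):.2f} bits/symbol")
      # k=1 recovers conventional 64-ary PPM (6 bits); k=2 already carries about 11 bits.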

  18. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  19. Optical refractive synchronization: bit error rate analysis and measurement

    NASA Astrophysics Data System (ADS)

    Palmer, James R.

    1999-11-01

    This paper describes the analytical tools and measurement techniques used at SilkRoad to evaluate the optical and electrical signals used in Optical Refractive Synchronization for transporting SONET signals across the transmission fiber. Fundamentally, it outlines how SilkRoad, Inc., transports a multiplicity of SONET signals across a distance of fiber > 100 km without amplification or regeneration of the optical signal, i.e., one laser over one fiber. Test and measurement data are presented to show how the SilkRoad technique of Optical Refractive Synchronization is employed to provide a zero bit error rate for transmission of multiple OC-12 and OC-48 SONET signals sent over a fiber-optic cable that is > 100 km long. The recovery and transformation modules are described for the modification and transportation of these SONET signals.

  20. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
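
    A quick numerical check of this behaviour for the simplest case, a one-sided z-test for a mean shift (the effect size, sigma and alpha below are arbitrary illustration values, not from the report), uses beta(n) = Phi(z_{1-alpha} - delta*sqrt(n)/sigma), which falls off roughly exponentially once n is moderate.

      from math import sqrt
      from statistics import NormalDist

      def type2_error(n, effect=0.5, sigma=1.0, alpha=0.05):
          """Type II error of a one-sided z-test for a mean shift of `effect` (known sigma)."""
          z_crit = NormalDist().inv_cdf(1 - alpha)
          return NormalDist().cdf(z_crit - effect * sqrt(n) / sigma)

      for n in (10, 20, 40, 80):
          print(n, f"{type2_error(n):.3e}")   # roughly exponential decline with sample size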

  1. Error effects in anterior cingulate cortex reverse when error likelihood is high

    PubMed Central

    Jessup, Ryan K.; Busemeyer, Jerome R.; Brown, Joshua W.

    2010-01-01

    Strong error-related activity in medial prefrontal cortex (mPFC) has been shown repeatedly with neuroimaging and event-related potential studies for the last several decades. Multiple theories have been proposed to account for error effects, including comparator models and conflict detection models, but the neural mechanisms that generate error signals remain in dispute. Typical studies use relatively low error rates, confounding the expectedness and the desirability of an error. Here we show with a gambling task and fMRI that when losses are more frequent than wins, the mPFC error effect disappears, and moreover, exhibits the opposite pattern by responding more strongly to unexpected wins than losses. These findings provide perspective on recent ERP studies and suggest that mPFC error effects result from a comparison between actual and expected outcomes. PMID:20203206

  2. Detecting Identity by Descent and Estimating Genotype Error Rates in Sequence Data

    PubMed Central

    Browning, Brian L.; Browning, Sharon R.

    2013-01-01

    Existing methods for identity by descent (IBD) segment detection were designed for SNP array data, not sequence data. Sequence data have a much higher density of genetic variants and a different allele frequency distribution, and can have higher genotype error rates. Consequently, best practices for IBD detection in SNP array data do not necessarily carry over to sequence data. We present a method, IBDseq, for detecting IBD segments in sequence data and a method, SEQERR, for estimating genotype error rates at low-frequency variants by using detected IBD. The IBDseq method estimates probabilities of genotypes observed with error for each pair of individuals under IBD and non-IBD models. The ratio of estimated probabilities under the two models gives a LOD score for IBD. We evaluate several IBD detection methods that are fast enough for application to sequence data (IBDseq, Beagle Refined IBD, PLINK, and GERMLINE) under multiple parameter settings, and we show that IBDseq achieves high power and accuracy for IBD detection in sequence data. The SEQERR method estimates genotype error rates by comparing observed and expected rates of pairs of homozygote and heterozygote genotypes at low-frequency variants in IBD segments. We demonstrate the accuracy of SEQERR in simulated data, and we apply the method to estimate genotype error rates in sequence data from the UK10K and 1000 Genomes projects. PMID:24207118

  3. Error Rates in Users of Automatic Face Recognition Software.

    PubMed

    White, David; Dunn, James D; Schmid, Alexandra C; Kemp, Richard I

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers-who use the system in their daily work-and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems-potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.

  4. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers–who use the system in their daily work–and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems–potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  5. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between

  6. A simple calculation method for heavy ion induced soft error rate in space environment

    NASA Astrophysics Data System (ADS)

    Galimov, A. M.; Elushov, I. V.; Zebrev, G. I.

    2016-12-01

    In this paper, based on a new parameterization of the upset cross-section shape, an alternative approach to characterizing heavy-ion-induced soft errors is proposed and validated. The method provides an unambiguous calculation procedure for predicting the upset rate of highly scaled memory in a space environment.

  7. Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers

    NASA Technical Reports Server (NTRS)

    Ha, Eunho; North, Gerald R.

    1995-01-01

    Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
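
    The mechanism is essentially Jensen's inequality, and a toy Monte Carlo makes it concrete. The sketch below assumes a mixed-lognormal rain field within each footprint and a concave power-law stand-in for the brightness-temperature relation (both are illustrative assumptions, not the radiative-transfer model of the study); retrieving from the footprint-averaged signal then systematically underestimates the true footprint-mean rain rate.

      import numpy as np

      rng = np.random.default_rng(42)

      def simulate_beam_filling_bias(mean_rr=5.0, cv=1.5, p_rain=0.6,
                                     n_footprints=2000, pixels=400, b=0.7):
          """Toy beam-filling experiment with a power-law forward model T = R**b (b < 1)."""
          sigma = np.sqrt(np.log(1.0 + cv**2))
          mu = np.log(mean_rr) - 0.5 * sigma**2
          rain = rng.lognormal(mu, sigma, size=(n_footprints, pixels))
          rain *= rng.random((n_footprints, pixels)) < p_rain       # mixed lognormal: dry pixels
          tb = rain ** b                                            # nonlinear "brightness temperature"
          retrieved = tb.mean(axis=1) ** (1.0 / b)                  # invert the footprint-mean signal
          truth = rain.mean(axis=1)                                 # true footprint-mean rain rate
          return (retrieved.mean() - truth.mean()) / truth.mean()   # relative beam-filling bias

      print(simulate_beam_filling_bias())   # negative: the concave response underestimates the mean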

  8. Bit error rate measurement above and below bit rate tracking threshold

    NASA Technical Reports Server (NTRS)

    Kobayaski, H. S.; Fowler, J.; Kurple, W. (Inventor)

    1978-01-01

    Bit error rate is measured by sending a pseudo-random noise (PRN) code test signal simulating digital data through digital equipment to be tested. An incoming signal representing the response of the equipment being tested, together with any added noise, is received and tracked by being compared with a locally generated PRN code. Once the locally generated PRN code matches the incoming signal a tracking lock is obtained. The incoming signal is then integrated and compared bit-by-bit against the locally generated PRN code and differences between bits being compared are counted as bit errors.
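
    A software analogue of this measurement scheme (illustrative only; the patented system tracks a locally generated PRN code in hardware, and the acquisition/lock step is simply assumed here) is to generate a pseudo-random test pattern, pass it through a binary symmetric channel, and count bit disagreements against the local copy.

      import random

      random.seed(7)

      def prn_sequence(n_bits):
          """Stand-in for a PRN test pattern (a real system would use an LFSR/PN code)."""
          return [random.getrandbits(1) for _ in range(n_bits)]

      def channel(bits, flip_prob):
          """Binary symmetric channel: each bit inverted independently with probability flip_prob."""
          return [b ^ (random.random() < flip_prob) for b in bits]

      sent = prn_sequence(1_000_000)
      received = channel(sent, flip_prob=1e-3)
      # After the tracking loop has locked onto the local PRN copy, compare bit by bit.
      errors = sum(s != r for s, r in zip(sent, received))
      print(errors, errors / len(sent))     # measured bit error rate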

  9. Testing Theories of Transfer Using Error Rate Learning Curves.

    PubMed

    Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I

    2016-07-01

    We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and make a sacrifice to explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions. Copyright © 2016 Cognitive Science Society, Inc.

  10. High Data Rate Quantum Cryptography

    NASA Astrophysics Data System (ADS)

    Kwiat, Paul; Christensen, Bradley; McCusker, Kevin; Kumor, Daniel; Gauthier, Daniel

    2015-05-01

    While quantum key distribution (QKD) systems are now commercially available, the data rate is a limiting factor for some desired applications (e.g., secure video transmission). Most QKD systems receive at most a single random bit per detection event, causing the data rate to be limited by the saturation of the single-photon detectors. Recent experiments have begun to explore using larger degrees of freedom, i.e., temporal or spatial qubits, to optimize the data rate. Here, we continue this exploration using entanglement in multiple degrees of freedom. That is, we use simultaneous temporal and polarization entanglement to reach up to 8.3 bits of randomness per coincident detection. Due to current technology, we are unable to fully secure the temporal degree of freedom against all possible future attacks; however, by assuming a technologically-limited eavesdropper, we are able to obtain a 23.4 MB/s secure key rate across an optical table, after error reconciliation and privacy amplification. In this talk, we will describe our high-rate QKD experiment, with a short discussion on our work towards extending this system to ship-to-ship and ship-to-shore communication, aiming to secure the temporal degree of freedom and to implement a 30-km free-space link over a marine environment.

  11. Rates of computational errors for scoring the SIRS primary scales.

    PubMed

    Tyner, Elizabeth A; Frederick, Richard I

    2013-12-01

    We entered item scores for the Structured Interview of Reported Symptoms (SIRS; Rogers, Bagby, & Dickens, 1991) into a spreadsheet and compared computed scores with those hand-tallied by examiners. We found that about 35% of the tests had at least 1 scoring error. Of SIRS scale scores tallied by examiners, about 8% were incorrectly summed. When the errors were corrected, only 1 SIRS classification was reclassified in the fourfold scheme used by the SIRS. We note that mistallied scores on psychological tests are common, and we review some strategies for reducing scale score errors on the SIRS.

  12. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    PubMed

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of

  13. A comparison of endoscopic localization error rate between operating surgeons and referring endoscopists in colorectal cancer.

    PubMed

    Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A

    2017-03-01

    Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21 %. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) surgeon who also performed the definitive operation (operating surgeon group); and (2) referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference in at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon compared to referring endoscopist demonstrated statistically significant lower intraoperative localization error rate (1.2 vs. 9.0 %, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0 %, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8 %, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95 % CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95 % CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error

  14. Effect of Electronic Editing on Error Rate of Newspaper.

    ERIC Educational Resources Information Center

    Randall, Starr D.

    1979-01-01

    A study of a North Carolina newspaper indicates that newspapers using fully integrated electronic editing systems have fewer errors in spelling, punctuation, sentence construction, hyphenation, and typography than newspapers not using electronic editing. (GT)

  16. Adaptive rate error control through the use of diversity combining and majority logic decoding in a hybrid-ARQ protocol

    NASA Astrophysics Data System (ADS)

    Wicker, Stephen B.

    The author demonstrates an adaptive rate coding system based on the majority logic decoding of convolutional codes. The proposed system retains the high-data-rate capability of FEC (forward error correction) majority logic decoders while providing an adaptive code rate and a significant improvement in error protection through the incorporation of diversity combining and hybrid-ARQ (automatic repeat request) techniques. It is shown through analysis and simulation that this error control system provides a high level of data reliability at the expense of a minimal reduction in throughput.

  17. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    PubMed

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps for improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014) and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps were lower compliance rates of using smart pumps, the overriding of soft alerts, non-intercepted errors, or the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug

  18. Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.

    PubMed

    Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E

    2011-01-01

    Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis. Increasing the error rate during

  19. The effects of digitizing rate and phase distortion errors on the shock response spectrum

    NASA Technical Reports Server (NTRS)

    Wise, J. H.

    1983-01-01

    Some of the methods used for acquisition and digitization of high-frequency transients in the analysis of pyrotechnic events, such as explosive bolts for spacecraft separation, are discussed with respect to the reduction of errors in the computed shock response spectrum. Equations are given for maximum error as a function of the sampling rate, phase distortion, and slew rate, and the effects of the characteristics of the filter used are analyzed. A filter noted to exhibit good passband amplitude response, phase response, and step response is a compromise between the flat passband of the elliptic filter and the phase response of the Bessel filter; it is suggested that it be used with a sampling rate of 10f (5 percent).

  20. The Evolutionary Design of Error-Rates, and the Fast Fixation Enigma

    NASA Astrophysics Data System (ADS)

    Ninio, Jacques

    1997-12-01

    Genetic and non-genetic error-rates are analyzed in parallel for a lower and a higher organism (E. coli and man, respectively). From the comparison of mutation with fixation rates, contrasting proposals are made concerning the arrangement of error-rates in the two organisms. In E. coli, reproduction is very conservative, but genetic variability is high within populations. Most mutations are discarded by selection, yet single mutational variants of a gene have, on average, little impact on fitness. In man, the mutation rate per generation is high, the variability generated in the population is comparatively low, and most mutations are fixed by drift rather than selection. The variants of a gene are in general more deleterious than in E. coli. There is a discrepancy in the published mutation rates: the rate of mutation fixations in human populations is two to four times higher than the individual rate of mutation production, a feature which is not consistent with current population genetics models. Two, not mutually exclusive, hypotheses may explain this `fast fixation enigma': (i) Mutation rates have substantially decreased in recent human evolution and (ii) A substantial fraction of the fixed mutations were generated in a process - such as gene conversion - that violates the principle of independence of mutation events.

  1. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  2. Avoiding ambiguity with the Type I error rate in noninferiority trials.

    PubMed

    Kang, Seung-Ho

    2016-01-01

    This review article sets out to examine the Type I error rates used in noninferiority trials. Most papers regarding noninferiority trials only state a Type I error rate without mentioning clearly which Type I error rate is evaluated. Therefore, the Type I error rate in one paper is often different from the Type I error rate in another paper, which can confuse readers and make it difficult to understand the papers. Which Type I error rate should be evaluated is related directly to which paradigm is employed in the analysis of a noninferiority trial, and to how the historical data are treated. This article reviews the characteristics of the within-trial Type I error rate and the unconditional across-trial Type I error rate, which have frequently been examined in noninferiority trials. The conditional across-trial Type I error rate is also briefly discussed. In noninferiority trials comparing a new treatment with an active control without a placebo arm, it is argued that the within-trial Type I error rate should be controlled in order to obtain approval of the new treatment from the regulatory agencies. I hope that this article can help readers understand the difference between the two paradigms employed in noninferiority trials.

  3. Simultaneous control of error rates in fMRI data analysis.

    PubMed

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-12-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
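
    As a toy illustration of the likelihood paradigm described above, the sketch below computes a voxel-wise likelihood ratio for a two-mean normal model against a common-mean model. It is only a schematic of the general idea, not the authors' implementation; the simulated scan counts, effect size and the benchmark of 8 are arbitrary choices.

```python
import numpy as np

def voxel_likelihood_ratios(cond_a, cond_b):
    """Per-voxel likelihood ratio comparing a two-mean normal model (H1)
    against a common-mean model (H0), using maximum-likelihood variances.

    cond_a, cond_b: arrays of shape (n_scans, n_voxels).
    Returns LR = L(H1) / L(H0); large values favour a difference in means.
    """
    n_a, n_b = len(cond_a), len(cond_b)
    n = n_a + n_b
    pooled = np.concatenate([cond_a, cond_b], axis=0)

    # MLE residual variance under H0 (one mean) and H1 (two means)
    var0 = pooled.var(axis=0)                                   # ddof=0 -> MLE
    var1 = (cond_a.var(axis=0) * n_a + cond_b.var(axis=0) * n_b) / n

    # Standard normal-model likelihood ratio: (sigma0^2 / sigma1^2)^(n/2)
    return (var0 / var1) ** (n / 2.0)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(40, 1000))
b = rng.normal(0.2, 1.0, size=(40, 1000))   # small simulated activation effect
lr = voxel_likelihood_ratios(a, b)
print("voxels with LR > 8:", int((lr > 8).sum()))   # 8 is an arbitrary benchmark
```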

  4. An Examination of Negative Halo Error in Ratings.

    ERIC Educational Resources Information Center

    Lance, Charles E.; And Others

    1990-01-01

    A causal model of halo error (HE) is derived. Three hypotheses are formulated to explain findings of negative HE. It is suggested that apparent negative HE may have been misinferred from existing correlational measures of HE, and that positive HE is more prevalent than had previously been thought. (SLD)

  6. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    PubMed

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2017-09-01

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.

  7. Systematic error detection in experimental high-throughput screening

    PubMed Central

    2011-01-01

    Background High-throughput screening (HTS) is a key part of the drug discovery process during which thousands of chemical compounds are screened and their activity levels measured in order to identify potential drug candidates (i.e., hits). Many technical, procedural or environmental factors can cause systematic measurement error or inequalities in the conditions in which the measurements are taken. Such systematic error has the potential to critically affect the hit selection process. Several error correction methods and software have been developed to address this issue in the context of experimental HTS [1-7]. Despite their power to reduce the impact of systematic error when applied to error perturbed datasets, those methods also have one disadvantage - they introduce a bias when applied to data not containing any systematic error [6]. Hence, we need first to assess the presence of systematic error in a given HTS assay and then carry out systematic error correction method if and only if the presence of systematic error has been confirmed by statistical tests. Results We tested three statistical procedures to assess the presence of systematic error in experimental HTS data, including the χ2 goodness-of-fit test, Student's t-test and Kolmogorov-Smirnov test [8] preceded by the Discrete Fourier Transform (DFT) method [9]. We applied these procedures to raw HTS measurements, first, and to estimated hit distribution surfaces, second. The three competing tests were applied to analyse simulated datasets containing different types of systematic error, and to a real HTS dataset. Their accuracy was compared under various error conditions. Conclusions A successful assessment of the presence of systematic error in experimental HTS assays is possible when the appropriate statistical methodology is used. Namely, the t-test should be carried out by researchers to determine whether systematic error is present in their HTS data prior to applying any error correction method

  8. Systematic error detection in experimental high-throughput screening.

    PubMed

    Dragiev, Plamen; Nadon, Robert; Makarenkov, Vladimir

    2011-01-19

    High-throughput screening (HTS) is a key part of the drug discovery process during which thousands of chemical compounds are screened and their activity levels measured in order to identify potential drug candidates (i.e., hits). Many technical, procedural or environmental factors can cause systematic measurement error or inequalities in the conditions in which the measurements are taken. Such systematic error has the potential to critically affect the hit selection process. Several error correction methods and software have been developed to address this issue in the context of experimental HTS [1-7]. Despite their power to reduce the impact of systematic error when applied to error perturbed datasets, those methods also have one disadvantage - they introduce a bias when applied to data not containing any systematic error [6]. Hence, we need first to assess the presence of systematic error in a given HTS assay and then carry out a systematic error correction method if and only if the presence of systematic error has been confirmed by statistical tests. We tested three statistical procedures to assess the presence of systematic error in experimental HTS data, including the χ2 goodness-of-fit test, Student's t-test and Kolmogorov-Smirnov test [8] preceded by the Discrete Fourier Transform (DFT) method [9]. We applied these procedures to raw HTS measurements, first, and to estimated hit distribution surfaces, second. The three competing tests were applied to analyse simulated datasets containing different types of systematic error, and to a real HTS dataset. Their accuracy was compared under various error conditions. A successful assessment of the presence of systematic error in experimental HTS assays is possible when the appropriate statistical methodology is used. Namely, the t-test should be carried out by researchers to determine whether systematic error is present in their HTS data prior to applying any error correction method. This important step can significantly
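
    A minimal sketch of the kind of t-test screening recommended above, applied to a single hypothetical 96-well plate: each plate row is compared against the remaining wells with a Welch t-test, and rows with a significant shift are flagged as candidates for row-wise systematic error. The plate layout, the injected shift and the alpha level are assumptions for illustration, not the papers' exact procedure.

```python
import numpy as np
from scipy import stats

def row_systematic_error_check(plate, alpha=0.01):
    """Flag plate rows whose mean differs from the rest of the plate,
    a simple indicator of row-wise systematic error (Welch t-test)."""
    flagged = []
    for r in range(plate.shape[0]):
        row = plate[r].ravel()
        rest = np.delete(plate, r, axis=0).ravel()
        t, p = stats.ttest_ind(row, rest, equal_var=False)
        if p < alpha:
            flagged.append((r, float(t), float(p)))
    return flagged

rng = np.random.default_rng(7)
plate = rng.normal(100.0, 10.0, size=(8, 12))   # hypothetical 96-well plate
plate[2] += 15.0                                # inject a row-wise systematic shift
print(row_systematic_error_check(plate))
```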

  9. Study of bit error rate (BER) for multicarrier OFDM

    NASA Astrophysics Data System (ADS)

    Alshammari, Ahmed; Albdran, Saleh; Matin, Mohammad

    2012-10-01

    Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technique that is being used more and more in recent wideband digital communications. It is known for its ability to handle severe channel conditions, its efficient use of spectrum, and its high data rate. It has therefore been used in many wired and wireless communication systems such as DSL, wireless networks and 4G mobile communications. Data streams are modulated and sent over multiple subcarriers using either M-QAM or M-PSK. OFDM has lower inter-symbol interference (ISI) levels because the low data rate on each carrier results in long symbol periods. In this paper, the BER performance of OFDM with respect to signal-to-noise ratio (SNR) is evaluated. BPSK modulation is used in a simulation-based system in order to obtain the BER over different wireless channels. These channels include additive white Gaussian noise (AWGN) and fading channels based on Doppler spread and delay spread. Plots of the results are compared with each other after varying some of the key parameters of the system, such as the IFFT size, the number of carriers and the SNR. The results of the simulation give a visualization of what kind of BER to expect when the signal goes through those channels.
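
    The fading channels and OFDM framing of the study are not reproduced here, but the baseline case it starts from, BPSK over an AWGN channel, can be simulated in a few lines and checked against the theoretical Q-function curve. The bit count and Eb/N0 range below are arbitrary choices for a quick sketch.

```python
import numpy as np
from scipy.special import erfc

def bpsk_awgn_ber(ebn0_db, n_bits=1_000_000, seed=0):
    """Monte Carlo BER of BPSK over AWGN at a given Eb/N0 (dB)."""
    rng = np.random.default_rng(seed)
    ebn0 = 10 ** (ebn0_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                    # bit 0 -> +1, bit 1 -> -1
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebn0)), n_bits)
    decisions = (symbols + noise) < 0.0           # decide '1' if the sample is negative
    return np.mean(decisions != bits)

for snr in range(0, 10, 2):
    sim = bpsk_awgn_ber(snr)
    theory = 0.5 * erfc(np.sqrt(10 ** (snr / 10.0)))   # Q(sqrt(2 * Eb/N0))
    print(f"Eb/N0 = {snr} dB  simulated {sim:.2e}  theoretical {theory:.2e}")
```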

  10. A long lifetime, low error rate RRAM design with self-repair module

    NASA Astrophysics Data System (ADS)

    Zhiqiang, You; Fei, Hu; Liming, Huang; Peng, Liu; Jishun, Kuang; Shiying, Li

    2016-11-01

    Resistive random access memory (RRAM) is one of the promising candidates for future universal memory. However, it suffers from serious error-rate and endurance problems. Therefore, a technical solution that enhances endurance and reduces the error rate is greatly needed. In this paper, we propose a reliable RRAM architecture that includes two reliability modules: an error correction code (ECC) module and a self-repair module. The ECC module is used to detect errors and decrease the error rate. The self-repair module, which is proposed for the first time for RRAM, obtains information about the error bits and repairs worn-out cells by applying a repair voltage. Simulation results show that the proposed architecture achieves the lowest error rate and the longest lifetime compared to previous reliable designs. Project supported by the New Century Excellent Talents in University (No. NCET-12-0165) and the National Natural Science Foundation of China (Nos. 61472123, 61272396).

  11. Reducing error rates in straintronic multiferroic nanomagnetic logic by pulse shaping.

    PubMed

    Munira, Kamaram; Xie, Yunkun; Nadri, Souheil; Forgues, Mark B; Fashami, Mohammad Salehi; Atulasimha, Jayasimha; Bandyopadhyay, Supriyo; Ghosh, Avik W

    2015-06-19

    Dipole-coupled nanomagnetic logic (NML), where nanomagnets (NMs) with bistable magnetization states act as binary switches and information is transferred between them via dipole-coupling and Bennett clocking, is a potential replacement for conventional transistor logic since magnets dissipate less energy than transistors when they switch in a logic circuit. Magnets are also 'non-volatile' and hence can store the results of a computation after the computation is over, thereby doubling as both logic and memory, a feat that transistors cannot achieve. However, dipole-coupled NML is much more error-prone than transistor logic at room temperature because thermal noise can easily disrupt magnetization dynamics. Here, we study a particularly energy-efficient version of dipole-coupled NML known as straintronic multiferroic logic (SML) where magnets are clocked/switched with electrically generated mechanical strain. By appropriately 'shaping' the voltage pulse that generates strain, we show that the error rate in SML can be reduced to tolerable limits. We describe the error probabilities associated with various stress pulse shapes and discuss the trade-off between error rate and switching speed in SML. The lowest error probability is obtained when a 'shaped' high voltage pulse is applied to strain the output NM followed by a low voltage pulse. The high voltage pulse quickly rotates the output magnet's magnetization by 90° and aligns it roughly along the minor (or hard) axis of the NM. Next, the low voltage pulse produces the critical strain to overcome the shape anisotropy energy barrier in the NM and produce a monostable potential energy profile in the presence of dipole coupling from the neighboring NM. The magnetization of the output NM then migrates to the global energy minimum in this monostable profile and completes a 180° rotation (magnetization flip) with high likelihood.

  12. A Six Sigma approach to the rate and clinical effect of registration errors in a laboratory.

    PubMed

    Vanker, Naadira; van Wyk, Johan; Zemlin, Annalise E; Erasmus, Rajiv T

    2010-05-01

    Laboratory errors made during the pre-analytical phase can have an impact on clinical care. Quality management tools such as Six Sigma may help improve error rates. To use elements of a Six Sigma model to establish the error rate of test registration onto the laboratory information system (LIS), and to deduce the potential clinical impact of these errors. In this retrospective study, test request forms were compared with the tests registered onto the LIS, and all errors were noted before being rectified. The error rate was calculated. The corresponding patient records were then examined to determine the actual outcome, and to deduce the potential clinical impact of the registration errors. Of the 47 543 tests requested, 72 errors were noted, resulting in an error rate of 0.151%, equating to a sigma score of 4.46. The patient records reviewed indicated that these errors could, in various ways, have impacted on clinical care. This study highlights the clinical effect of errors made during the pre-analytical phase of the laboratory testing process. Reduction of errors may be achieved through implementation of a Six Sigma programme.
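
    The sigma score quoted above can be reproduced, under the common 1.5-sigma-shift convention, directly from the raw counts. The snippet below is a generic calculation of that figure, not the study's own software.

```python
from statistics import NormalDist

def sigma_level(defects, opportunities, shift=1.5):
    """Long-term sigma level from a defect count, using the conventional
    1.5-sigma shift between short- and long-term process capability."""
    dpmo = defects / opportunities * 1_000_000
    yield_fraction = 1.0 - dpmo / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift, dpmo

sigma, dpmo = sigma_level(72, 47_543)
print(f"error rate = {72 / 47_543:.3%}, DPMO = {dpmo:.0f}, sigma ≈ {sigma:.2f}")
```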

  13. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Astrophysics Data System (ADS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-06-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  14. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  15. Controlling Type I Error Rate in Evaluating Differential Item Functioning for Four DIF Methods: Use of Three Procedures for Adjustment of Multiple Item Testing

    ERIC Educational Resources Information Center

    Kim, Jihye

    2010-01-01

    In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present, and a high possibility of making such an error can weaken the validity of the assessment.…

  17. HIGH ENERGY RATE EXTRUSION.

    DTIC Science & Technology

    Thin structural shapes can now be produced by high velocity extrusion equipment. Tooling, dies, die coatings, lubricants and general processing...degrees was important in reducing the initial peak stresses to a controllable level and tooling failures were reduced by using high strength (Rc 55-60...the high inertial forces present) can be lessened and eliminated in many cases by the selection of low reduction ratios (15:1 or below) and low impact speeds. (Author)

  18. Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles

    PubMed Central

    Traverse, Charles C.; Ochman, Howard

    2016-01-01

    Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10^-5 per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10^-5 per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10^-5 per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella. PMID:26884158

  19. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  20. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  1. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  2. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  3. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  4. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  5. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
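
    The turbulence and misalignment statistics of the paper are not reproduced here, but the Q-function baseline against which the derived power series is compared can be written down for square M-QAM in AWGN using the standard textbook expression. The modulation order and SNR values below are illustrative only.

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Gaussian Q-function, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def mqam_ser_awgn(m, esn0_db):
    """SER of square M-QAM in AWGN (standard Q-function expression)."""
    esn0 = 10 ** (np.asarray(esn0_db, dtype=float) / 10.0)
    a = 1.0 - 1.0 / np.sqrt(m)
    q = qfunc(np.sqrt(3.0 * esn0 / (m - 1.0)))
    return 4.0 * a * q - 4.0 * a**2 * q**2

for snr in (10, 15, 20):
    print(f"16-QAM, Es/N0 = {snr} dB: SER = {mqam_ser_awgn(16, snr):.3e}")
```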

  6. Strategies implementation to reduce medicine preparation error rate in neonatal intensive care units.

    PubMed

    Campino, Ainara; Santesteban, Elena; Pascual, Pilar; Sordo, Beatriz; Arranz, Casilda; Unceta, Maria; Lopez-de-Heredia, Ion

    2016-06-01

    This study assessed the rate of errors in intravenous medicine preparation at bedside in neonatal intensive care units versus preparation error rate in a hospital pharmacy service before and after several strategies were implemented. We performed a prospective observational study during 2013-2015. Ten Spanish neonatal intensive care units and one hospital pharmacy service participated in the study. Two types of preparation errors were considered, calculation errors and accuracy errors. The study was carried out over three consecutive phases: (1) pre-intervention phase, when medicine preparation samples were collected from neonatal intensive care units and hospital pharmacy service according to their normal clinical practice; (2) intervention phase, when protocol standardisation and educational strategy took place; and (3) post-intervention phase, when new medicine samples were collected after strategy implementation. In neonatal intensive care units, 1.35 % of samples registered calculation errors in pre-intervention phase; no calculation errors were registered in hospital pharmacy service samples. In post-intervention phase, no calculation errors were registered in either group. Accuracy error rate decreased both in neonatal intensive care units (54.7 vs 23 %) and hospital pharmacy service (38.3 vs 14.6 %). Calculation errors can disappear with good standardisation protocols. Decrease in accuracy error depends on good preparation technique and environmental factors. • Medication use is associated with a risk of errors and adverse events. Medication errors are more frequent and have more severe consequences in paediatric patients. • Lack of commercial drug formulations adapted to newborn infants makes medicine preparation process more prone to error. What is New: • Calculation errors are minimising using concentration standard protocols. Preparation rules are essential to ensure the accuracy process. • Environmental conditions affect the accuracy process.

  7. Dispensing error rate after implementation of an automated pharmacy carousel system.

    PubMed

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  8. Design and verification of a bit error rate tester in Altera FPGA for optical link developments

    NASA Astrophysics Data System (ADS)

    Cao, T.; Chang, J.; Gong, D.; Liu, C.; Liu, T.; Xiang, A.; Ye, J.

    2010-12-01

    This paper presents a custom bit error rate (BER) tester implementation in an Altera Stratix II GX signal integrity development kit. This BER tester deploys a parallel to serial pseudo random bit sequence (PRBS) generator, a bit and link status error detector and an error logging FIFO. The auto-correlation pattern enables receiver synchronization without specifying protocol at the physical layer. The error logging FIFO records both bit error data and link operation events. The tester's BER and data acquisition functions are utilized in a proton test of a 5 Gbps serializer. Experimental and data analysis results are discussed.
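
    The FPGA fabric itself is not shown here, but the PRBS logic at the heart of such a tester is easy to model in software. The sketch below uses a PRBS-7 sequence (polynomial x^7 + x^6 + 1), a common choice that may differ from the pattern implemented in the actual design, and counts mismatches the way an error detector would.

```python
def prbs7(n_bits, seed=0x7F):
    """Generate n_bits of a PRBS-7 sequence from the x^7 + x^6 + 1 LFSR."""
    state = seed & 0x7F
    out = []
    for _ in range(n_bits):
        out.append((state >> 6) & 1)                   # output stage 7 (the MSB)
        new_bit = ((state >> 6) ^ (state >> 5)) & 1    # feedback taps at stages 7 and 6
        state = ((state << 1) | new_bit) & 0x7F
    return out

def count_bit_errors(sent, received):
    """Compare transmitted and received bit streams and count mismatches."""
    return sum(s != r for s, r in zip(sent, received))

tx = prbs7(127 * 4)          # four full periods of the 127-bit sequence
rx = tx.copy()
rx[100] ^= 1                 # inject a single bit error
print("bit errors:", count_bit_errors(tx, rx), "of", len(tx))
```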

  9. Reduction of LNG operator error and equipment failure rates. Topical report, 20 April 1990

    SciTech Connect

    Atallah, S.; Shah, J.N.; Betti, M.

    1990-04-01

    Tables summarizing human error rates and equipment failure frequencies applicable to the LNG industry are presented. Improved training, better supervision, emergency response drills and improved panel design were methods recommended for reducing human error rates. Outright scheduled replacement of critical components, regular inspection and maintenance, and the use of redundant components were reviewed as means for reducing equipment failure rates. The effect of reducing human error and equipment failure rates on the frequency of overfilling an LNG tank were examined. In addition, guidelines for estimating the cost and benefits of these mitigation measures were considered.

  10. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    PubMed

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.
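
    The exact AVEC decision rule is not spelled out in the abstract, so the sketch below only illustrates its overall shape: flag an inter-beat interval that deviates too far from the local median of its neighbours, then delete it rather than replace it with a mean. The 30% tolerance, window size and sample intervals are assumed, illustrative values.

```python
import numpy as np

def flag_outliers(ibi_ms, tolerance=0.30, window=5):
    """Flag inter-beat intervals deviating more than `tolerance`
    from the local median of the surrounding `window` beats."""
    ibi = np.asarray(ibi_ms, dtype=float)
    flags = np.zeros(len(ibi), dtype=bool)
    for i in range(len(ibi)):
        lo, hi = max(0, i - window), min(len(ibi), i + window + 1)
        local = np.median(np.delete(ibi[lo:hi], i - lo))
        flags[i] = abs(ibi[i] - local) > tolerance * local
    return flags

def delete_errors(ibi_ms, flags):
    """Remove flagged beats instead of inserting means, preserving
    the natural variability of the remaining intervals."""
    return np.asarray(ibi_ms)[~flags]

ibi = [620, 640, 655, 1300, 630, 615, 310, 640, 650]   # two obvious artefacts
flags = flag_outliers(ibi)
print("flagged:", np.where(flags)[0], "cleaned:", delete_errors(ibi, flags))
```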

  11. Multipath error in range rate measurement by PLL-transponder/GRARR/TDRS

    NASA Technical Reports Server (NTRS)

    Sohn, S. J.

    1970-01-01

    Range rate errors due to specular and diffuse multipath are calculated for a tracking and data relay satellite (TDRS) using an S band Goddard range and range rate (GRARR) system modified with a phase-locked loop transponder. Carrier signal processing in the coherent turn-around transponder and the GRARR receiver is taken into account. The root-mean-square (rms) range rate error was computed for the GRARR Doppler extractor and N-cycle count range rate measurement. Curves of worst-case range rate error are presented as a function of grazing angle at the reflection point. At very low grazing angles specular scattering predominates over diffuse scattering as expected, whereas for grazing angles greater than approximately 15 deg, the diffuse multipath predominates. The range rate errors at different low orbit altitudes peaked between 5 and 10 deg grazing angles.

  12. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.

  13. Effect of Spectral Domain Optical Coherence Tomography Image Quality on Macular Thickness Measurements and Error Rate.

    PubMed

    Falavarjani, Khalil Ghasemi; Mehrpuya, Amirabbas; Amirkourjani, Foad

    2017-02-01

    To evaluate the effect of Topcon spectral domain optical coherence tomography (OCT) image quality on macular thickness measurements and the error rate in healthy subjects and patients with clinically significant diabetic macular edema (CSME). In this prospective, comparative case series, macular thickness measurements, and the rate of decentration and segmentation errors were evaluated before and after reducing the image quality factor (QF). The measurements were evaluated again after correcting the decentration and segmentation errors. To reduce the image QF below 45, tetracycline eye ointment was applied on the corneal surface. Forty eyes of 40 subjects including 18 healthy eyes and 22 eyes with CSME were included. In both groups, the difference in central subfield thickness measurements before and after reducing the image QF was not statistically significant both before and after error correction (all P>0.05). The rate of decentration error was statistically similar before and after reducing image QF in normal and CSME eyes (P=0.50, P=0.69, respectively). However, the rate of segmentation error was statistically significantly higher after reducing image QF both in normal and CSME eyes (P=0.008 and P=0.004, respectively). In both groups, eyes with a segmentation error had higher image QF reduction (both P=0.01). Reducing image quality results in a higher rate of the segmentation error in normal eyes and in eyes with CSME.

  14. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    PubMed

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
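
    A compressed sketch of the exposure-construction half of that pipeline: estimate each subject's rate of change as a subject-specific least-squares slope, then apply a simple regression-calibration shrinkage toward the mean using an assumed measurement-error variance. Fitting the Cox model itself (for example with the lifelines package) is omitted, and the blood-pressure numbers are invented for illustration.

```python
import numpy as np

def subject_slope(times, values):
    """Least-squares rate of change for one subject's longitudinal measurements."""
    slope, _intercept = np.polyfit(times, values, deg=1)
    return slope

def regression_calibration(slopes, error_variance):
    """Shrink observed slopes toward their mean using the reliability ratio
    lambda = var(true) / var(observed), assuming additive measurement error."""
    slopes = np.asarray(slopes, dtype=float)
    observed_var = np.var(slopes, ddof=1)
    lam = max(observed_var - error_variance, 0.0) / observed_var
    return np.mean(slopes) + lam * (slopes - np.mean(slopes))

# Hypothetical systolic blood pressure at three visits for four subjects
times = np.array([0.0, 2.0, 4.0])
visits = np.array([[120, 124, 131], [140, 139, 145], [118, 125, 129], [135, 133, 130]])
slopes = [subject_slope(times, v) for v in visits]
calibrated = regression_calibration(slopes, error_variance=0.5)  # assumed error variance
print(np.round(slopes, 2), np.round(calibrated, 2))
```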

  15. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    PubMed

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  16. Topological quantum computing with a very noisy network and local error rates approaching one percent

    PubMed Central

    Nickerson, Naomi H.; Li, Ying; Benjamin, Simon C.

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems. PMID:23612297

  17. Sensitivity to Error Fields in NSTX High Beta Plasmas

    SciTech Connect

    Park, Jong-Kyu; Menard, Jonathan E.; Gerhardt, Stefan P.; Buttery, Richard J.; Sabbagh, Steve A.; Bell, Steve E.; LeBlanc, Benoit P.

    2011-11-07

    It was found that error field threshold decreases for high β in NSTX, although the density correlation in conventional threshold scaling implies the threshold would increase since higher β plasmas in our study have higher plasma density. This greater sensitivity to error field in higher β plasmas is due to error field amplification by plasmas. When the effect of amplification is included with ideal plasma response calculations, the conventional density correlation can be restored and threshold scaling becomes more consistent with low β plasmas. However, it was also found that the threshold can be significantly changed depending on plasma rotation. When plasma rotation was reduced by non-resonant magnetic braking, the further increase of sensitivity to error field was observed.

  18. The Tukey Honestly Significant Difference Procedure and Its Control of the Type I Error-Rate.

    ERIC Educational Resources Information Center

    Barnette, J. Jackson; McLean, James E.

    Tukey's Honestly Significant Difference (HSD) procedure (J. Tukey, 1953) is probably the most recommended and used procedure for controlling Type I error rate when making multiple pairwise comparisons as follow-ups to a significant omnibus F test. This study compared observed Type I error rates with nominal alphas of 0.01, 0.05, and 0.10 for…
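
    A simulation of the kind summarized above is straightforward to mimic: draw every group from the same normal population, run Tukey's HSD, and record how often any pairwise comparison is (wrongly) declared significant. The sketch assumes the statsmodels implementation of the procedure and uses a small replication count for speed.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def tukey_familywise_type1(n_groups=4, n_per_group=20, alpha=0.05,
                           n_reps=2000, seed=0):
    """Estimate the familywise Type I error rate of Tukey's HSD
    when every group is sampled from the same normal distribution."""
    rng = np.random.default_rng(seed)
    groups = np.repeat(np.arange(n_groups), n_per_group)
    false_alarms = 0
    for _ in range(n_reps):
        y = rng.normal(0.0, 1.0, size=n_groups * n_per_group)
        result = pairwise_tukeyhsd(y, groups, alpha=alpha)
        if result.reject.any():            # any pair declared different under H0
            false_alarms += 1
    return false_alarms / n_reps

print("observed familywise Type I error:", tukey_familywise_type1())
```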

  19. Design and Verification of an FPGA-based Bit Error Rate Tester

    NASA Astrophysics Data System (ADS)

    Xiang, Annie; Gong, Datao; Hou, Suen; Liu, Chonghan; Liang, Futian; Liu, Tiankuan; Su, Da-Shung; Teng, Ping-Kun; Ye, Jingbo

    Bit error rate (BER) is the principal measure of performance of a data transmission link. With the integration of high-speed transceivers inside a field programmable gate array (FPGA), the BER testing can now be handled by transceiver-enabled FPGA hardware. This provides a cheaper alternative to dedicated table-top equipment and offers the flexibility of test customization and data analysis. This paper presents a BER tester implementation based on the Altera Stratix II GX and IV GT development boards. The architecture of the tester is described. Lab test results and field test data analysis are discussed. The Stratix II GX tester operates at up to 5 Gbps and the Stratix IV GT tester operates at up to 10 Gbps, both in 4 duplex channels. The tester deploys a pseudo random bit sequence (PRBS) generator and detector, a transceiver controller, and an error logger. It also includes a computer interface for data acquisition and user configuration. The tester's functionality was validated and its performance characterized in a point-to-point serial optical link setup. BER vs. optical receiver sensitivity was measured to emulate stressed link conditions. The Stratix II GX tester was also used in a proton test on a custom designed serializer chip to record and analyse radiation-induced errors.

  20. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  1. Bit-Error-Rate Performance of a Gigabit Ethernet O-CDMA Technology Demonstrator (TD)

    SciTech Connect

    Hernandez, V J; Mendez, A J; Bennett, C V; Lennon, W J

    2004-07-09

    An O-CDMA TD based on 2-D (wavelength/time) codes is described, with bit-error-rate (BER) and eye-diagram measurements given for eight users. Simulations indicate that the TD can support 32 asynchronous users.

  2. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  3. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  4. An improved lane detection algorithm and the definition of the error rate standard

    NASA Astrophysics Data System (ADS)

    Yu, Chung-Hsien; Su, Chung-Yen

    2012-04-01

    In this paper, we propose a method to address the problem of spurious assistant lane marks caused by pulse noise, and we define a way to measure the assistant lane marks' error rate objectively. To address the problem, we mainly use Sobel edge detection in place of Canny edge detection, and we apply a Gaussian filter to suppress noise. Finally, we adjust the ellipse ROI size in the tracking stage and improve the frame rate from 32 to 39 frames per second (FPS). In the past, the assistant lane marks' error rate was judged very subjectively; to avoid subjective judgment, we propose an objective definition of the assistant lane marks' error rate as a standard, and we use this error rate together with the frame-rate performance to choose the ellipse ROI parameter.
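
    The full pipeline (ellipse ROI tracking and the proposed error-rate standard) is not reproduced here, but the substitution the abstract describes, Gaussian smoothing followed by Sobel gradients in place of Canny edge detection, looks roughly like the OpenCV sketch below. The file name, kernel size and gradient threshold are placeholders.

```python
import cv2
import numpy as np

def lane_edges(frame_bgr, blur_ksize=5, grad_threshold=60):
    """Gaussian-smooth the frame, then extract edge candidates with a
    Sobel gradient-magnitude threshold (in place of Canny)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    return (magnitude > grad_threshold).astype(np.uint8) * 255

frame = cv2.imread("road_frame.png")          # placeholder input image
if frame is not None:
    edges = lane_edges(frame)
    cv2.imwrite("road_edges.png", edges)
```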

  5. Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions

    ERIC Educational Resources Information Center

    Yormaz, Seha; Sünbül, Önder

    2017-01-01

    This study aims to determine the Type I error rates and power of the S1 and S2 indices and the kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how copying groups are created in order to calculate how kappa statistics affect Type I error rates and power. In this study,…

  6. Conjunction error rates on a continuous recognition memory test: little evidence for recollection.

    PubMed

    Jones, Todd C; Atchley, Paul

    2002-03-01

    Two experiments examined conjunction memory errors on a continuous recognition task where the lag between parent words (e.g., blackmail, jailbird) and later conjunction lures (blackbird) was manipulated. In Experiment 1, contrary to expectations, the conjunction error rate was highest at the shortest lag (1 word) and decreased as the lag increased. In Experiment 2 the conjunction error rate increased significantly from a 0- to a 1-word lag, then decreased slightly from a 1- to a 5-word lag. The results provide mixed support for simple familiarity and dual-process accounts of recognition. Paradoxically, searching for an item in memory does not appear to be a good encoding task.

  7. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    DOE PAGES

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.; ...

    2015-12-01

    Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.

  8. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    SciTech Connect

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.; Draper, Jeffrey

    2015-12-01

    Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.
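
    For context on the approximation these two records say breaks down, the snippet below shows the usual "effective LET" rescaling for angled strikes on a thin sensitive volume: the path length through the volume grows as 1/cos(theta), so deposited charge is often modeled with LET_eff = LET/cos(theta). This is the textbook approximation only; the normal-incidence LET value and angles are illustrative, and the papers' point is precisely that this model misbehaves in modern, small-geometry devices.

```python
import math

def effective_let(let_normal, theta_deg):
    """Classic thin-volume approximation: LET_eff = LET / cos(theta)."""
    return let_normal / math.cos(math.radians(theta_deg))

# Illustrative normal-incidence LET of 10 MeV*cm^2/mg at several strike angles.
for theta in (0, 30, 60, 75):
    print(f"theta = {theta:2d} deg -> LET_eff = {effective_let(10.0, theta):.1f} MeV*cm^2/mg")
```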

  9. Threshold-Based Bit Error Rate for Stopping Iterative Turbo Decoding in a Varying SNR Environment

    NASA Astrophysics Data System (ADS)

    Mohamad, Roslina; Harun, Harlisya; Mokhtar, Makhfudzah; Adnan, Wan Azizun Wan; Dimyati, Kaharudin

    2017-01-01

    Online bit error rate (BER) estimation (OBE) has been used as a stopping criterion for iterative turbo decoding. However, existing stopping criteria only work at high signal-to-noise ratios (SNRs) and fail to terminate early at low SNRs, which adds iterations and increases computational complexity. The failure of these stopping criteria is caused by an unsuitable BER threshold, obtained by estimating the expected BER performance at high SNRs; such a threshold does not indicate the correct termination point for convergence and non-convergence outputs (CNCO). Hence, in this paper, a threshold computation based on the BER of the CNCO is proposed for an OBE stopping criterion (OBEsc). The results show that OBEsc is capable of terminating early in a varying SNR environment. The optimum number of iterations achieved by OBEsc yields large savings in the number of decoding iterations and reduces the delay of iterative turbo decoding.

  10. High Rate Digital Demodulator ASIC

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder; Sheikh, Salman; Koubek, Steve; Hoy, Scott; Gray, Andrew

    1998-01-01

    The architecture of a High Rate (600 Mega-bits per second) Digital Demodulator (HRDD) ASIC capable of demodulating BPSK and QPSK modulated data is presented in this paper. The advantages of all-digital processing include increased flexibility and reliability with reduced reproduction costs. Conventional serial digital processing would require high processing rates, necessitating a hardware implementation in other than CMOS technology, such as Gallium Arsenide (GaAs), which has high cost and power requirements. It is more desirable to use CMOS technology with its lower power requirements and higher gate density. However, digital demodulation of high data rates in CMOS requires parallel algorithms to process the sampled data at a rate lower than the data rate. The parallel processing algorithms described here were developed jointly by NASA's Goddard Space Flight Center (GSFC) and the Jet Propulsion Laboratory (JPL). The resulting all-digital receiver has the capability to demodulate BPSK, QPSK, OQPSK, and DQPSK at data rates in excess of 300 Mega-bits per second (Mbps) per channel. This paper provides an overview of the parallel architecture and features of the HRDD ASIC, as well as an overview of the hardware architectures used to create flexibility over conventional high rate analog or hybrid receivers. This flexibility includes a wide range of data rates, modulation schemes, and operating environments. In conclusion, it is shown how this high rate digital demodulator can be used with an off-the-shelf A/D converter and a flexible analog front end, both numerically computer controlled, to produce a very flexible, low cost, high rate digital receiver.

  11. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors

    PubMed Central

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Introduction: Preanalytical errors, occurring along the process from the initial test request to the admission of specimens to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples with regard to their rates in certain test groups in our laboratory. Materials and methods: This preliminary study examined the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples from the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. Results: A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Hemolysis, clotted specimens and insufficient sample volume accounted for 8%, 24% and 34% of rejections, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. Conclusions: The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood drawing errors, especially insufficient specimen volume in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples. PMID:25351356

  12. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.

    PubMed

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Preanalytical errors, occurring along the process from the initial test request to the admission of specimens to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples with regard to their rates in certain test groups in our laboratory. This preliminary study examined the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples from the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Hemolysis, clotted specimens and insufficient sample volume accounted for 8%, 24% and 34% of rejections, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood drawing errors, especially insufficient specimen volume in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples.

  13. Mean and Random Errors of Visual Roll Rate Perception from Central and Peripheral Visual Displays

    NASA Technical Reports Server (NTRS)

    Vandervaart, J. C.; Hosman, R. J. A. W.

    1984-01-01

    A large number of roll rate stimuli, covering rates from zero to plus or minus 25 deg/sec, were presented to subjects in random order at 2 sec intervals. Subjects were to estimate the magnitude of perceived roll rate stimuli presented on either a central display, on displays in the peripheral field of vision, or on all displays simultaneously. Responses were made by way of a digital keyboard device, and stimulus exposure times were varied. The present experiment differs from earlier perception tasks by the same authors in that mean rate perception error (and standard deviation) was obtained as a function of rate stimulus magnitude, whereas the earlier experiments only yielded mean absolute error magnitude. Moreover, in the present experiment, all stimulus rates had an equal probability of occurrence, whereas the earlier tests featured a Gaussian stimulus probability density function. The results yield a good illustration of the nonlinear functions relating the rate presented to the rate perceived by human observers or operators.

  14. The effect of sampling on estimates of lexical specificity and error rates.

    PubMed

    Rowland, Caroline F; Fletcher, Sarah L

    2006-11-01

    Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.

  15. High Rate GPS on Volcanoes

    NASA Astrophysics Data System (ADS)

    Mattia, M.

    2005-12-01

    High rate GPS data processing can be considered the "new deal" in geodetic monitoring of active volcanoes. Before an eruption, in fact, transient episodes of ground displacement related to the dynamics of magmatic fluids can be revealed through a careful analysis of high rate GPS data. In the very first phases of an eruption, real time processing of high rate GPS data can be used by the civil protection authorities to follow the opening of fracture fields on the slopes of the volcanoes. During an eruption, large explosions, the opening of vents, the migration of fracture fields, landslides and other dangerous phenomena can be followed and their damage potential estimated by the authorities. Examples from the recent eruption of Stromboli volcano and from the current high rate GPS monitoring of Mt. Etna are reported, with the aim of showing the great potential and the perspectives of this technique.

  16. Asymptotic error-rate analysis of FSO links using transmit laser selection over gamma-gamma atmospheric turbulence channels with pointing errors.

    PubMed

    García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen

    2012-01-30

    Since free-space optical (FSO) systems are usually installed on high buildings and building sway may cause vibrations in the transmitted beam, an unsuitable alignment between transmitter and receiver together with fluctuations in the irradiance of the transmitted optical beam due to the atmospheric turbulence can severely degrade the performance of optical wireless communication systems. In this paper, asymptotic bit error-rate (BER) performance for FSO communication systems using transmit laser selection over atmospheric turbulence channels with pointing errors is analyzed. Novel closed-form asymptotic expressions are derived when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions (weak to strong), following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results provide significant insight into the impact of various system and channel parameters, showing that the diversity order is independent of the pointing error when the equivalent beam radius at the receiver is at least 2(min{α,β})^(1/2) times the value of the pointing error displacement standard deviation at the receiver. Moreover, since proper FSO transmission requires transmitters with accurate control of their beamwidth, asymptotic expressions are used to find the optimum beamwidth that minimizes the BER at different turbulence conditions. Simulation results are further demonstrated to confirm the accuracy and usefulness of the derived results, showing that asymptotic expressions here obtained lead to simple bounds on the bit error probability that get tighter over a wider range of signal-to-noise ratio (SNR) as the turbulence strength increases.
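
    One quantitative takeaway from the abstract above is the beam-width condition under which the diversity order stops depending on pointing errors: the equivalent beam radius must be at least 2(min{α,β})^(1/2) times the pointing-error displacement standard deviation. The small check below evaluates that inequality; the turbulence parameters, jitter value and beam radii are illustrative assumptions, not values from the paper.

```python
import math

def pointing_error_dominates(w_eq, sigma_s, alpha, beta):
    """True if the beam is too narrow, i.e. pointing error limits the diversity order."""
    return w_eq < 2.0 * math.sqrt(min(alpha, beta)) * sigma_s

alpha, beta = 4.2, 1.4          # assumed gamma-gamma turbulence parameters
sigma_s = 0.30                  # assumed jitter standard deviation at the receiver (m)
for w_eq in (0.5, 0.8, 1.2):    # candidate equivalent beam radii (m)
    limited = pointing_error_dominates(w_eq, sigma_s, alpha, beta)
    print(f"w_eq = {w_eq} m -> pointing-error-limited: {limited}")
```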

  17. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    PubMed

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
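
    For readers unfamiliar with the estimators being compared, the sketch below contrasts a resubstitution (apparent) error estimate with the actual error rate of a trained linear discriminant for a two-class Gaussian model with a known, shared covariance. It is a minimal numerical illustration, not the paper's Kolmogorov-asymptotic analysis; the dimension, sample sizes and mean separation are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p, n = 10, 30                        # dimension and per-class sample size (assumptions)
mu0, mu1 = np.zeros(p), np.full(p, 0.5)
X0 = rng.normal(mu0, 1.0, size=(n, p))
X1 = rng.normal(mu1, 1.0, size=(n, p))

# LDA with known identity covariance: project on (m1 - m0), threshold at the midpoint.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
w = m1 - m0
c = w @ (m0 + m1) / 2.0

def classify(X):
    return (X @ w > c).astype(int)

# Resubstitution (apparent) error: re-use the training data.
resub = 0.5 * ((classify(X0) == 1).mean() + (classify(X1) == 0).mean())

# Actual error rate of this trained rule, available in closed form because the
# true class distributions are Gaussian with identity covariance.
err0 = 1.0 - norm.cdf((c - w @ mu0) / np.linalg.norm(w))   # P(assign class 1 | class 0)
err1 = norm.cdf((c - w @ mu1) / np.linalg.norm(w))          # P(assign class 0 | class 1)
actual = 0.5 * (err0 + err1)

print(f"resubstitution estimate: {resub:.3f}, actual error: {actual:.3f}")
```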

  18. Bit error rate investigation of spin-transfer-switched magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Wang, Zihui; Zhou, Yuchen; Zhang, Jing; Huai, Yiming

    2012-10-01

    A method is developed to enable fast bit error rate (BER) characterization of spin-transfer-torque magnetic random access memory magnetic tunnel junction (MTJ) cells without integration into a complementary metal-oxide-semiconductor (CMOS) circuit. By utilizing the signal reflected from the devices under test, the measurement setup allows fast measurement of bit error rates at more than 10^6 writing events per second. It is further shown that this method provides a time-domain capability to examine the MTJ resistance states during a switching event, which can assist write error analysis in great detail. The BER of a set of spin-transfer-torque MTJ cells has been evaluated using this method, and bit-error-free operation (down to 10^-8) for optimized in-plane MTJ cells has been demonstrated.

  19. The effect of voice recognition software on comparative error rates in radiology reports.

    PubMed

    McGurk, S; Brauer, K; Macfarlane, T V; Duncan, K A

    2008-10-01

    This study sought to confirm whether reports generated in a department of radiology contain more errors if generated using voice recognition (VR) software than if traditional dictation-transcription (DT) is used. All radiology reports generated over a 1-week period in a British teaching hospital were assessed. The presence of errors and their impact on the report were assessed. Data collected included the type of report, site of dictation, the experience of the operator, and whether English was the first language of the operator. 1887 reports were reviewed. 1160 (61.5%) were dictated using VR and 727 reports (38.5%) were generated by DT. 71 errors (3.8% of all reports) were identified. 56 errors were made using VR (4.8% of VR reports), whereas 15 errors were identified in DT reports (2.1% of transcribed reports). The difference in report errors between these two dictation methods was statistically significant (p = 0.002). Of the 71 reports containing errors, 37 (52.1%) had errors affecting understanding. Other factors were also identified that significantly increased the likelihood of errors in a VR-generated report, such as working in a busy inpatient environment (p<0.001) and having a language other than English as a first language (p = 0.034). Operator grade was not significantly associated with increased errors. In conclusion, using VR significantly increases the number of reports containing errors. Errors using VR are significantly more likely to occur in noisy areas with a high workload and are more likely to be made by radiologists for whom English is not their first language.

  1. Compensatory and Noncompensatory Information Integration and Halo Error in Performance Rating Judgments.

    ERIC Educational Resources Information Center

    Kishor, Nand

    1992-01-01

    The relationship between compensatory and noncompensatory information integration and the intensity of the halo effect in performance rating was studied. Seventy University of British Columbia (Canada) students rated 27 teacher profiles. That the way performance information is mentally integrated affects the intensity of halo error was supported.…

  2. Measuring radiation induced changes in the error rate of fiber optic data links

    NASA Astrophysics Data System (ADS)

    Decusatis, Casimer; Benedict, Mel

    1996-12-01

    The purpose of this work is to investigate the effects of ionizing (gamma) radiation exposure on the bit error rate (BER) of an optical fiber data communication link. While it is known that exposure to high radiation dose rates will darken optical fiber permanently, comparatively little work has been done to evaluate moderate dose rates. The resulting increase in fiber attenuation over time represents an additional penalty in the link optical power budget, which can degrade the BER if it is not accounted for in the link design. Modeling the link to predict this penalty is difficult, and it requires detailed information about the fiber composition that may not be available to the link designer. We describe a laboratory method for evaluating the effects of moderate dose rates on both single-mode and multimode fiber. Once a sample of fiber has been measured, the data can be fit to a simple model for predicting (at least to first order) BER as a function of radiation dose for fibers of similar composition.
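
    The abstract describes fitting measured data to a simple model of BER versus dose. The sketch below shows one plausible first-order form of such a model, assuming radiation-induced attenuation grows linearly with dose, eats into the link power margin, and degrades BER through a Gaussian receiver approximation. Every coefficient here (dose-to-loss slope, link budget numbers, receiver model) is an illustrative assumption, not measured data from the paper.

```python
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_vs_dose(dose_krad,
                launch_dbm=-3.0,        # assumed transmitter launch power
                fixed_loss_db=8.0,      # assumed connector + nominal fiber loss
                sensitivity_dbm=-20.0,  # assumed receiver power for BER ~ 1e-12
                loss_db_per_krad=0.05): # assumed radiation-induced attenuation slope
    margin_db = launch_dbm - fixed_loss_db - loss_db_per_krad * dose_krad - sensitivity_dbm
    # Thermal-noise-limited receiver assumption: the Q-argument scales linearly
    # with received optical power; Q ~ 7 corresponds to BER ~ 1e-12 at sensitivity.
    q_arg = 7.0 * 10 ** (margin_db / 10.0)
    return q(q_arg)

for dose in (0, 50, 100, 150, 200):
    print(f"{dose:4d} krad -> BER ~ {ber_vs_dose(dose):.2e}")
```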

  3. Error Baseline Rates of Five Sequencing Strategies Used for RNA Virus Population Characterization

    DTIC Science & Technology

    2017-01-31

    viral evolution, including the emergence of resistance to medical countermeasures. To explore the sources of error in the determination of the ... pressure on evolution of viral genotypes and phenotypes, optimizing vaccine design, and identifying virus genome mutations that may lead to ... preparation and pre-processing steps for analysis of intra-host RNA virus evolution. We determined baseline error rates by analyzing an

  4. A Very Efficient Transfer Function Bounding Technique on Bit Error Rate for Viterbi Decoded, Rate 1/N Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage, since all unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used in searches for longer constraint length codes.
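
    The bound being computed is, in essence, a weighted sum of pairwise error probabilities over the code's distance spectrum. The sketch below evaluates the standard union-bound form of that expression for BPSK on an AWGN channel; it does not reproduce the paper's recursive transfer-function algorithm, and the distance-spectrum coefficients are placeholder values, not taken from the paper.

```python
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_union_bound(ebno_db, rate, spectrum):
    """Union bound: sum over distances d of B_d * Q(sqrt(2 * d * R * Eb/N0)).

    spectrum maps distance d to B_d, the total information-bit weight of all
    error events at that distance (the transfer-function coefficients).
    """
    ebno = 10 ** (ebno_db / 10)
    return sum(bd * q(math.sqrt(2 * d * rate * ebno)) for d, bd in spectrum.items())

# Placeholder distance spectrum for some rate-1/2 code (illustrative values only).
spectrum = {10: 36, 12: 211, 14: 1404, 16: 11633}

for ebno_db in (3, 4, 5, 6):
    print(f"Eb/N0 = {ebno_db} dB -> BER bound <= {ber_union_bound(ebno_db, 0.5, spectrum):.2e}")
```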

  5. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

    The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large bursty messages, short self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; identification enables resequencing and the handling of differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit, which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes 1/3 less time than equivalent end-to-end error correction under similar conditions.

  6. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-01-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  7. A stochastic node-failure network with individual tolerable error rate at multiple sinks

    NASA Astrophysics Data System (ADS)

    Huang, Cheng-Fu; Lin, Yi-Kuei

    2014-05-01

    Many enterprises consider several criteria during data transmission, such as availability, delay, loss, and out-of-order packets, from the service level agreement (SLA) point of view. Hence internet service providers and customers are gradually focusing on the tolerable error rate in the transmission process. The internet service provider should meet the specified demand and keep the transmission error rate within the limit agreed in the SLA with each customer. This paper mainly evaluates the system reliability, i.e., the probability that the demand can be fulfilled under the tolerable error rate at all sinks, by modeling a stochastic node-failure network (SNFN) in which each component (edge or node) has several capacities and a transmission error rate. An efficient algorithm is first proposed to generate all lower boundary points, the minimal capacity vectors satisfying the demand and the tolerable error rate for all sinks. The system reliability can then be computed in terms of such points by applying a recursive sum of disjoint products. A benchmark network and a practical network in the United States are used to demonstrate the utility of the proposed algorithm. The computational complexity of the proposed algorithm is also analyzed.

  8. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets

    PubMed Central

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W.; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses, or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and showed a trend toward a negative correlation with the increase in post-error miss rate in the subsequent Go/No-go task. These results suggest that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  9. Estimation of the minimum mRNA splicing error rate in vertebrates.

    PubMed

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting.

    PubMed

    Strahan, Rodney H; Schneider-Kolsky, Michal E

    2010-10-01

    Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified, and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Fifty MRI reports generated by VR and 50 finalized MRI reports generated by the transcriptionist, from two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average number of MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Forty-two percent and 30% of the finalized VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR. © 2010 The Authors. Journal of Medical Imaging and Radiation Oncology © 2010 The Royal Australian and New Zealand College of Radiologists.

  11. Manufacturing Error Effects on Mechanical Properties and Dynamic Characteristics of Rotor Parts under High Acceleration

    NASA Astrophysics Data System (ADS)

    Jia, Mei-Hui; Wang, Cheng-Lin; Ren, Bin

    2017-07-01

    The stress, strain and vibration characteristics of rotor parts change significantly under high acceleration, and manufacturing error is one of the most important reasons. However, little research has been carried out on this problem. Taking a rotor with an acceleration of 150,000 g as the object of study, the effects of manufacturing errors on rotor mechanical properties and dynamic characteristics are analyzed through the selection of the key affecting factors. By establishing the force balance equation of an infinitesimal rotor element, a theoretical stress-calculation model based on the slice method is proposed, and a formula for the rotor stress at any point is derived. A finite element model (FEM) of a rotor with holes is established with manufacturing errors. The changes in the stresses and strains of the rotor with parallelism and symmetry errors are analyzed, verifying the validity of the theoretical model. A pre-stressed modal analysis is performed based on the aforementioned static analysis, and the key dynamic characteristics are analyzed. The results demonstrate that, as the parallelism and symmetry errors increase, the equivalent stresses and strains of the rotor increase slowly and approximately linearly; the highest growth rate does not exceed 4%, and the maximum change in natural frequency is 0.1%. The rotor vibration mode is not significantly affected. The FEM construction method for rotors with manufacturing errors can be utilized for quantitative research on rotor characteristics, which will assist in the active control of rotor component reliability under high acceleration.

  12. Minimum attainable RMS attitude error using co-located rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1989-01-01

    A closed-form analytical expression for the minimum attainable attitude error (as well as the error rate) in a flexible beam by feedback control using co-located rate sensors is announced. For simplicity, the researchers consider a beam clamped at one end with an offset mass (antenna) at the other end, where the controls and sensors are located. Both control moment generators and force actuators are provided. The results apply to any beam-like lattice-type truss, and provide the kind of performance criteria needed under CSI (Controls-Structures-Integrated) optimization.

  13. Parallel Transmission Pulse Design with Explicit Control for the Specific Absorption Rate in the Presence of Radiofrequency Errors

    PubMed Central

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L.; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L.; Guerin, Bastien

    2016-01-01

    Purpose A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. Methods The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors (“worst-case SAR”) is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Results Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled “worst-case SAR” in the presence of errors of this magnitude at minor cost of the excitation profile quality. Conclusion Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. PMID:26147916

  14. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However, in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed.

  15. High Data Rate Instrument Study

    NASA Technical Reports Server (NTRS)

    Schober, Wayne; Lansing, Faiza; Wilson, Keith; Webb, Evan

    1999-01-01

    The High Data Rate Instrument Study was a joint effort between the Jet Propulsion Laboratory (JPL) and the Goddard Space Flight Center (GSFC). The objectives were to assess the characteristics of future high data rate Earth observing science instruments and then to assess the feasibility of developing the data processing and communications systems required to meet those data rates. Instruments and technology were assessed for technology readiness dates of 2000, 2003, and 2006. The highest data rate instruments are hyperspectral and synthetic aperture radar instruments, which are capable of generating 3.2 Gigabits per second (Gbps) and 1.3 Gbps, respectively, with a technology readiness date of 2003. These instruments would require storage of 16.2 Terabits (Tb) of information (RF communications case of two orbits of data) or 40.5 Tb of information (optical communications case of five orbits of data) with a technology readiness date of 2003. Onboard storage capability in 2003 is estimated at 4 Tb; therefore, not all of the data created can be stored without processing or compression. Of the 4 Tb of stored data, RF communications can only send about one third of the data to the ground, while optical communications is estimated at 6.4 Tb across all three technology readiness dates of 2000, 2003, and 2006 used in the study. The study includes analysis of the onboard processing and communications technologies at these three dates and potential systems to meet the high data rate requirements. In the 2003 case, 7.8% of the data can be stored and downlinked by RF communications while 10% of the data can be stored and downlinked with optical communications. The study conclusion is that only 1 to 10% of the data generated by high data rate instruments will be sent to the ground from now through 2006 unless revolutionary changes in spacecraft design and operations, such as intelligent data extraction, are developed.
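
    The percentages quoted above follow from simple volume bookkeeping: data generated per collection period versus onboard storage versus downlink capacity. The sketch below illustrates that bookkeeping with explicitly hypothetical numbers; the collection time, storage limit and downlink volume are assumptions chosen only to show the method, and only the 3.2 Gbps peak rate is taken from the abstract.

```python
def downlinked_fraction(rate_gbps, collect_seconds, storage_tb, downlink_tb):
    """Fraction of generated data that both fits in storage and fits the downlink."""
    generated_tb = rate_gbps * collect_seconds / 1000.0   # Gb -> Tb
    stored_tb = min(generated_tb, storage_tb)
    return min(stored_tb, downlink_tb) / generated_tb

# Hyperspectral instrument at 3.2 Gbps collecting for two assumed 45-minute passes,
# 4 Tb of onboard storage, and an assumed 1.3 Tb RF downlink volume per pass pair.
frac = downlinked_fraction(3.2, 2 * 45 * 60, 4.0, 1.3)
print(f"fraction of generated data reaching the ground: {frac:.1%}")
```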

  16. 12 h shifts and rates of error among nurses: A systematic review.

    PubMed

    Clendon, Jill; Gibbons, Veronique

    2015-07-01

    To determine the effect of working 12 h or more on a single shift in an acute care hospital setting, compared with working less than 12 h, on rates of error among nurses. Systematic review. A three-step search strategy was utilised. An initial search of Cochrane, the Joanna Briggs Institute (JBI), MEDLINE and CINAHL was undertaken. A second search using all identified keywords and index terms was then undertaken across all included databases (Embase, Current Contents, Proquest Nursing and Allied Health Source, Proquest Theses and Dissertations, Dissertation Abstracts International). Thirdly, reference lists of identified reports and articles were searched for additional studies. Studies published in English before August 2014 were included. Following review of the title and abstract of 5429 publications, 26 studies were identified as meeting the inclusion criteria and selected for full retrieval and assessment of methodological quality. Of these, 13 were of sufficient quality to be included for review. Six studies reported higher rates of error for nurses working more than 12 h on a single shift, four reported higher rates of error on shifts of up to 8 h, and three reported no difference. The six studies reporting significant rises in error rates among nurses working 12 h or more on a single shift comprised 89% of the total sample (n = 60,780 of N = 67,967). The risk of making an error appears higher among nurses working 12 h or longer on a single shift in acute care hospitals. Hospitals and units currently operating 12 h shift systems should review this scheduling practice due to the potential negative impact on patient outcomes. Further research is required to consider factors that may mitigate the risk of error where 12 h shifts are scheduled and this cannot be changed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Bit error rate testing of a proof-of-concept model baseband processor

    NASA Technical Reports Server (NTRS)

    Stover, J. B.; Fujikawa, G.

    1986-01-01

    Bit-error-rate tests were performed on a proof-of-concept baseband processor. The BBP, which operates at an intermediate frequency in the C-Band, demodulates, demultiplexes, routes, remultiplexes, and remodulates digital message segments received from one ground station for retransmission to another. Test methods are discussed and test results are compared with the Contractor's test results.

  18. Practical bit error rate measurements on fibre optic communications links in student teaching laboratories

    NASA Astrophysics Data System (ADS)

    Walsh, Douglas; Moodie, David; Mauchline, Iain; Conner, Steve; Johnstone, Walter; Culshaw, Brian

    2005-10-01

    In this paper we describe the principles and design of a fibre optic communications teaching package and a cost effective extension module to this kit which enables students to investigate the effects of noise, attenuation and dispersion on the bit error rate at the receiver of laser and LED based digital fibre optic communication systems.

  19. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    The Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) was submitted to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and to identify possible study characteristics that are predictive of reliability variation. The meta-analysis was performed…

  20. Design of an Excel Spreadsheet to Estimate Rate Constants, Determine Associated Errors, and Choose Curve's Extent

    ERIC Educational Resources Information Center

    Moreira, Luis; Martins, Filomena; Elvas-Leitao, Ruben

    2006-01-01

    A new Microsoft Excel spreadsheet design that enables prompt calculation of rate constant (k) values and associated errors, and also addresses less common features such as the choice of an experimental curve's extent, is presented. To complete the spreadsheet design, several highlights and squared colored boxes were included to assist the user and…

  1. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    2001-01-01

    Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…

  2. Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.

    ERIC Educational Resources Information Center

    Goodrich, Gregory L.; And Others

    1979-01-01

    A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)

  3. Type I Error Rate and Power of Some Alternative Methods to the Independent Samples "t" Test.

    ERIC Educational Resources Information Center

    Nthangeni, Mbulaheni; Algina, James

    2001-01-01

    Examined Type I error rates and power for four tests for treatment control studies in which a larger treatment mean may be accompanied by a larger treatment variance and examined these aspects of the independent samples "t" test and the Welch test. Evaluated each test and suggested conditions for the use of each approach. (SLD)

  4. Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.

    2010-01-01

    We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.

  5. Treatment coverage rates for refractive error in the National Eye Health survey.

    PubMed

    Foreman, Joshua; Xie, Jing; Keel, Stuart; Taylor, Hugh R; Dirani, Mohamed

    2017-01-01

    To present treatment coverage rates and risk factors associated with uncorrected refractive error in Australia. Thirty population clusters were randomly selected from all geographic remoteness strata in Australia to provide samples of 1738 Indigenous Australians aged 40 years and older and 3098 non-Indigenous Australians aged 50 years and older. Presenting visual acuity was measured and those with vision loss (worse than 6/12) underwent pinhole testing and hand-held auto-refraction. Participants whose corrected visual acuity improved to 6/12 or better were assigned as having uncorrected refractive error as the main cause of vision loss. The treatment coverage rates of refractive error were calculated (proportion of participants with refractive error that had distance correction and presenting visual acuity better than 6/12), and risk factor analysis for refractive correction was performed. The refractive error treatment coverage rate in Indigenous Australians of 82.2% (95% CI 78.6-85.3) was significantly lower than in non-Indigenous Australians (93.5%, 92.0-94.8) (odds ratio [OR] 0.51, 0.35-0.75). In Indigenous participants, remoteness (OR 0.41, 0.19-0.89 and OR 0.55, 0.35-0.85 in Outer Regional and Very Remote areas, respectively), having never undergone an eye examination (OR 0.08, 0.02-0.43) and having consulted a health worker other than an optometrist or ophthalmologist (OR 0.30, 0.11-0.84) were risk factors for low coverage. On the other hand, speaking English was a protective factor (OR 2.72, 1.13-6.45) for treatment of refractive error. Compared to non-Indigenous Australians who had an eye examination within one year, participants who had not undergone an eye examination within the past five years (OR 0.08, 0.03-0.21) or had never been examined (OR 0.05, 0.10-0.23) had lower coverage. Interventions that increase integrated optometry services in regional and remote Indigenous communities may improve the treatment coverage rate of refractive error.

  6. POWER-ENHANCED MULTIPLE DECISION FUNCTIONS CONTROLLING FAMILY-WISE ERROR AND FALSE DISCOVERY RATES

    PubMed Central

    Peña, Edsel A.; Habiger, Joshua D.; Wu, Wensong

    2014-01-01

    Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures such as the Šidák procedure for FWER control and the Benjamini–Hochberg (BH) procedure for FDR control is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p-values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures could be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p-value based procedures whose theoretical validity is contingent on each of these p-value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. Proposed procedures are relevant in the analysis of high-dimensional “large M, small n” data sets arising in the natural, physical, medical, economic and social sciences, whose generation and creation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology. PMID:25018568
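
    As background for the comparison above, the snippet below implements the two baseline procedures named in the abstract: Šidák-corrected per-test thresholds for FWER control and the Benjamini-Hochberg (BH) step-up procedure for FDR control. It is a minimal sketch of the reference procedures, not of the paper's power-enhanced decision functions; the example p-values are arbitrary.

```python
import numpy as np

def sidak_rejections(pvals, alpha=0.05):
    """Reject H_i when p_i <= 1 - (1 - alpha)^(1/m), controlling the FWER."""
    m = len(pvals)
    thresh = 1.0 - (1.0 - alpha) ** (1.0 / m)
    return np.asarray(pvals) <= thresh

def bh_rejections(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure, controlling the FDR."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()       # largest i with p_(i) <= i*alpha/m
        reject[order[:k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.74]
print("Sidak rejections:", int(sidak_rejections(pvals).sum()))
print("BH rejections:   ", int(bh_rejections(pvals).sum()))
```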

  7. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of the overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, which lead to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when constraints are put on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate.

  8. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    NASA Astrophysics Data System (ADS)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10^-12) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10log(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10^-3 BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10^-3 BER.
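
    The BER dependence of the format-complexity penalty can be reproduced qualitatively with a simple Gaussian-noise calculation: invert the Q-function at the target BER for PAM-2 and for a Gray-coded PAM-L BER expression, then compare the required peak amplitudes. The sketch below is such a back-of-the-envelope check, not the paper's derivation; the PAM-L BER expression and the peak-amplitude scaling are standard textbook approximations used here as assumptions.

```python
import numpy as np
from scipy.special import erfc, erfcinv

def qinv(p):
    """Inverse of the Gaussian tail probability Q(x)."""
    return np.sqrt(2) * erfcinv(2 * p)

def conventional_penalty_db(L):
    """BER-independent format-complexity penalty relative to PAM-2."""
    return 10 * np.log10(L - 1)

def ber_dependent_penalty_db(L, ber):
    # PAM-2: BER = Q(d2/sigma). Gray-coded PAM-L: BER ~ (2(L-1)/(L*log2 L)) * Q(dL/sigma).
    d2 = qinv(ber)
    dL = qinv(ber * L * np.log2(L) / (2 * (L - 1)))
    # Peak amplitude scales as (L-1) times the eye half-opening.
    return 10 * np.log10((L - 1) * dL / d2)

for L in (4, 8):
    for ber in (1e-12, 1e-3):
        conv = conventional_penalty_db(L)
        bd = ber_dependent_penalty_db(L, ber)
        print(f"PAM-{L}, BER {ber:.0e}: conventional {conv:.2f} dB, BER-aware {bd:.2f} dB")
```

    With these assumptions the gap between the conventional and BER-aware penalties is negligible at 1×10^-12 but grows to roughly 0.1 dB (PAM-4) and 0.2-0.25 dB (PAM-8) at 1×10^-3, consistent in magnitude with the figures quoted in the abstract.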

  9. Delay characteristics for ARQ protocols in the case of fluctuating error rates

    NASA Astrophysics Data System (ADS)

    Fayolle, G.; Thomas, R.

    A model of the mass transport of data over a satellite circuit using ARQ protocols, which are procedures for retransmitting data when errors are detected, is presented for the case when successive error rates form a stochastic process. The model and the protocols are described, and the main characteristic parameters of the four protocols, including the time required to transmit a given number of messages correctly and the loads for random arrivals, are computed. Numerical examples for a 1 Mbit/s TELECOM 1 link are given.
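
    As a point of reference for the kind of quantity the paper computes, the sketch below estimates the expected time to deliver a batch of messages correctly under a plain go-back-N ARQ with a constant per-frame error probability; it does not model the paper's fluctuating (stochastic) error rates. The frame length, window size and error probabilities are assumptions; only the 1 Mbit/s link speed is taken from the record above.

```python
def gbn_expected_time(n_messages, frame_time_s, p, window):
    """Expected delivery time for go-back-N ARQ with per-frame error probability p.

    Each delivered frame costs one slot plus, on average, `window` extra slots for
    every failed attempt; the expected number of failures per success is p/(1-p).
    """
    slots_per_frame = 1.0 + window * p / (1.0 - p)
    return n_messages * frame_time_s * slots_per_frame

# Assumptions: 1 Mbit/s link, 1000-bit frames (1 ms per frame), ~250 ms one-way
# satellite delay, so the go-back window spans roughly 500 outstanding frames.
for p in (1e-3, 1e-1):
    t = gbn_expected_time(10_000, 1e-3, p, window=500)
    print(f"frame error probability {p}: ~{t:.1f} s to deliver 10,000 frames")
```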

  10. Rate-distortion optimal video transport over IP allowing packets with bit errors.

    PubMed

    Harmanci, Oztan; Tekalp, A Murat

    2007-05-01

    We propose new models and methods for rate-distortion (RD) optimal video delivery over IP, when packets with bit errors are also delivered. In particular, we propose RD optimal methods for slicing and unequal error protection (UEP) of packets over IP allowing transmission of packets with bit errors. The proposed framework can be employed in a classical independent-layer transport model for optimal slicing, as well as in a cross-layer transport model for optimal slicing and UEP, where the forward error correction (FEC) coding is performed at the link layer, but the application controls the FEC code rate with the constraint that a given IP packet is subject to constant channel protection. The proposed method uses a novel dynamic programming approach to determine the optimal slicing and UEP configuration for each video frame in a practical manner that is compliant with the AVC/H.264 standard. We also propose new rate and distortion estimation techniques at the encoder side in order to efficiently evaluate the objective function for a slice configuration. The cross-layer formulation option effectively determines which regions of a frame should be protected better; hence, it can be considered as a spatial UEP scheme. We successfully demonstrate, by means of experimental results, that each component of the proposed system provides significant gains, up to 2.0 dB, compared to competitive methods.

  11. High spin rate magnetic controller for nanosatellites

    NASA Astrophysics Data System (ADS)

    Slavinskis, A.; Kvell, U.; Kulu, E.; Sünter, I.; Kuuste, H.; Lätt, S.; Voormansik, K.; Noorma, M.

    2014-02-01

    This paper presents a study of a high rate closed-loop spin controller that uses only electromagnetic coils as actuators. The controller is able to perform spin rate control and simultaneously align the spin axis with the Earth's inertial reference frame. It is implemented, optimised and simulated for a 1-unit CubeSat ESTCube-1 to fulfil its mission requirements: spin the satellite up to 360 deg s^-1 around the z-axis and align its spin axis with the Earth's polar axis with a pointing error of less than 3°. The attitude of the satellite is determined using a magnetic field vector, a Sun vector and angular velocity. It is estimated using an Unscented Kalman Filter and controlled using three electromagnetic coils. The algorithm is tested in a simulation environment that includes models of space environment and environmental disturbances, sensor and actuator emulation, attitude estimation, and a model to simulate the time delay caused by on-board calculations. In addition to the normal operation mode, analyses of reduced satellite functionality are performed: significant errors of attitude estimation due to non-operational Sun sensors; and limited actuator functionality due to two non-operational coils. A hardware-in-the-loop test is also performed to verify on-board software.

  12. The Impact of Sex of the Speaker, Sex of the Rater and Profanity Type of Language Trait Errors in Speech Evaluation: A Test of the Rating Error Paradigm.

    ERIC Educational Resources Information Center

    Bock, Douglas G.; And Others

    1984-01-01

    This study (1) demonstrates the negative impact of profanity in a public speech and (2) sheds light on the conceptualization of the term "rating error." Implications for classroom teaching are discussed. (PD)

  13. Pupillary response predicts multiple object tracking load, error rate, and conscientiousness, but not inattentional blindness.

    PubMed

    Wright, Timothy J; Boot, Walter R; Morgan, Chelsea S

    2013-09-01

    Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB.

  14. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    Evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 errors/(particle/cm^2), while the MTTF is approximately 110.7 h. PMID:27583533

  15. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    Evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 errors/(particle/cm^2), while the MTTF is approximately 110.7 h.
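
    A cross-section-like SFER measured in an accelerated test relates to an on-orbit failure time through the particle flux. The Python lines below use hypothetical numbers (they are not taken from the paper) and simply illustrate the assumed relation MTTF = 1/(SFER × flux):

        # Hypothetical accelerated-test tally: errors observed per unit fluence.
        observed_errors = 300                    # errors counted during irradiation (made up)
        fluence = 3.0e5                          # total fluence, particles/cm^2 (made up)
        sfer = observed_errors / fluence         # errors per (particle/cm^2)

        orbit_flux = 9.0                         # assumed orbital flux, particles/(cm^2 h)
        mttf_hours = 1.0 / (sfer * orbit_flux)   # assumed relation MTTF = 1 / (SFER * flux)
        print(f"SFER = {sfer:.1e} errors/(particle/cm^2), MTTF = {mttf_hours:.1f} h")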

  16. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    PubMed

    Kugelman, Jeffrey R; Wiley, Michael R; Nagle, Elyse R; Reyes, Daniel; Pfeffer, Brad P; Kuhn, Jens H; Sanchez-Lockhart, Mariano; Palacios, Gustavo F

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4-5) of all compared methods.

  17. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  18. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.

  19. Multipath errors in range rate measurement by a TDRS/VHF - GRARR

    NASA Technical Reports Server (NTRS)

    Sohn, S. J.

    1970-01-01

    Range rate errors due to multipath reflection are calculated for a tracking and data relay satellite system using the VHF Goddard range and range rate (GRARR) system. At VHF the reflection is primarily specular, and the strength of the multipath relative to the direct path can be modeled in terms of the geometry and the surface characteristics, specifically the root-mean-square (rms) ocean wave height. The uplink and downlink multipath introduces phase jitter on the GRARR carrier and subcarrier. The derivation of these effects is reviewed leading to an expression for the rms range rate error. The derivation assumed the worst-case orbital configurations in which there was very little relative specular Doppler. This means that the specular multipath interference was not attenuated by the carrier and subcarrier PLL transfer functions. Curves of range rate error are presented as a function of grazing angle with wave height 0.3 to 0.7 meters and spacecraft altitude 100 to 700 miles as parameters.

  20. High data rate optical crosslinks

    NASA Astrophysics Data System (ADS)

    Boroson, Don M.; Bondurant, Roy S.

    1992-03-01

    Optical technologies, with their extremely short wavelengths, can be designed to be much more compact than RF systems for high data rate crosslinks and multiple apertures approaching the multi-Gbps operational range. Currently available optical technologies can furnish hundreds of Mbps in a package of less than 100 lbs and several cubic feet. Attention is presently given to communications and spatial acquisition/tracking system analysis, the character of such space-qualified optics hardware as the requisite laser transmitter, and advanced hardware prototypes.

  1. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
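
    The decimation idea can be demonstrated in software. The Python sketch below is a minimal illustration, not the patented circuit: it uses a small companion matrix over GF(2) for the primitive polynomial x^4 + x + 1, raises it to the k-th power to obtain a decimation matrix for k parallel generators producing one bit each per step (n = 1), and checks that interleaving the parallel outputs reproduces the serial sequence.

        import numpy as np

        # Companion matrix over GF(2) of the primitive polynomial x^4 + x + 1.
        M = np.array([[0, 0, 0, 1],
                      [1, 0, 0, 1],
                      [0, 1, 0, 0],
                      [0, 0, 1, 0]], dtype=np.uint8)

        def matpow2(A, e):
            """A**e over GF(2) by square-and-multiply."""
            R = np.eye(A.shape[0], dtype=np.uint8)
            while e:
                if e & 1:
                    R = (R @ A) % 2
                A = (A @ A) % 2
                e >>= 1
            return R

        def serial_bits(state, count):
            """Reference serial LFSR output: one bit (state[0]) per matrix step."""
            out = []
            for _ in range(count):
                out.append(int(state[0]))
                state = (M @ state) % 2
            return out

        def parallel_bits(state, count, k=3):
            """k parallel generators, each advancing by the decimation matrix M**k."""
            D = matpow2(M, k)
            gens = [(matpow2(M, j) @ state) % 2 for j in range(k)]   # staggered start states
            out = []
            for _ in range(-(-count // k)):                          # ceil(count / k) steps
                for j in range(k):
                    out.append(int(gens[j][0]))
                    gens[j] = (D @ gens[j]) % 2
            return out[:count]

        seed = np.array([1, 0, 0, 0], dtype=np.uint8)
        assert serial_bits(seed.copy(), 30) == parallel_bits(seed.copy(), 30)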

  2. Bit Error Rate Performance Limitations Due to Raman Amplifier Induced Crosstalk in a WDM Transmission System

    NASA Astrophysics Data System (ADS)

    Tithi, F. H.; Majumder, S. P.

    2017-03-01

    Analysis is carried out for a single span wavelength division multiplexing (WDM) transmission system with distributed Raman amplification to find the effect of amplifier induced crosstalk on the bit error rate (BER) with different system parameters. The results are evaluated in terms of crosstalk power induced in a WDM channel due to Raman amplification, optical signal to crosstalk ratio (OSCR) and BER at any distance for different pump powers and numbers of WDM channels. The results show that the WDM system suffers a power penalty due to crosstalk, which is significant at higher pump power, larger channel separation and larger number of WDM channels. It is noticed that, at a BER of 10^-9 and a pump power of 20 mW, the power penalty is 8.7 dB and 10.5 dB for a length of 180 km with N=32 and N=64 WDM channels respectively, and it is higher at higher pump power. Analytical results are validated by simulation.

  3. Bit-error rate improvement of a laser communication system with low-order adaptive optics

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.; Canning, Douglas E.

    2002-12-01

    Recent experiments performed at UNC Charlotte indicate a reduction in the bit-error rate for a laser communication system with the implementation of low-order adaptive optics in a free-space communication link. With simulated atmospheric tilt injected by a conventional PZT tilt mirror, an adaptive optics system with a Xinetics tilt mirror was used in a closed loop. The laboratory experiments replicated a monostatic propagation with a cooperative wavefront beacon at the receiver. Due to constraints in the speed of the processing hardware, the data are scaled to represent an actual propagation of a few kilometers under moderate scintillation conditions. We compare the experimental data and calculated bit-error rate before and after correction, and compare them with a rigorous theoretical prediction.

  4. Probability of anomalously large bit-error rate in long haul optical transmission.

    PubMed

    Chernyak, V; Chertkov, M; Kolokolov, I; Lebedev, V

    2003-12-01

    We consider a linear model of optical transmission through a fiber with birefringent disorder in the presence of amplifier noise. Both disorder and noise are assumed to be weak, i.e., the average bit-error rate (BER) is small. The probability distribution function (PDF) of rare violent events leading to the values of BER much larger than its typical value is estimated. We show that the PDF has a long algebraic-like tail.

  5. Error Rates Resulting From Anemia Can Be Corrected in Multiple Commonly Used Point of Care Glucometers

    DTIC Science & Technology

    2008-01-01

    Error Rates Resulting From Anemia can be Corrected in Multiple Commonly Used Point-of-Care Glucometers. Elizabeth A. Mann, MS, RN; Jose Salinas, PhD. ... strategies, increasing the prevalence of both hypoglycemia and anemia in the ICU. The change in allogeneic blood transfusion practices occurred in ... transfusion-related risk. As physicians adopted practices that resulted in permissive anemia, the number of critically ill patients at risk of inappropriate ...

  6. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Astrophysics Data System (ADS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  7. Error rates for nanopore discrimination among cytosine, methylcytosine, and hydroxymethylcytosine along individual DNA strands.

    PubMed

    Schreiber, Jacob; Wescoe, Zachary L; Abu-Shumays, Robin; Vivian, John T; Baatar, Baldandorj; Karplus, Kevin; Akeson, Mark

    2013-11-19

    Cytosine, 5-methylcytosine, and 5-hydroxymethylcytosine were identified during translocation of single DNA template strands through a modified Mycobacterium smegmatis porin A (M2MspA) nanopore under control of phi29 DNA polymerase. This identification was based on three consecutive ionic current states that correspond to passage of modified or unmodified CG dinucleotides and their immediate neighbors through the nanopore limiting aperture. To establish quality scores for these calls, we examined ~3,300 translocation events for 48 distinct DNA constructs. Each experiment analyzed a mixture of cytosine-, 5-methylcytosine-, and 5-hydroxymethylcytosine-bearing DNA strands that contained a marker that independently established the correct cytosine methylation status at the target CG of each molecule tested. To calculate error rates for these calls, we established decision boundaries using a variety of machine-learning methods. These error rates depended upon the identity of the bases immediately 5' and 3' of the targeted CG dinucleotide, and ranged from 1.7% to 12.2% for a single-pass read. We estimate that Q40 values (0.01% error rates) for methylation status calls could be achieved by reading single molecules 5-19 times depending upon sequence context.
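
    Under a simple repeated-read model the numbers above can be reproduced approximately. The Python fragment below assumes independent single-pass reads with the quoted per-read error rates and a plain majority vote, which is not necessarily the authors' aggregation procedure, and asks how many reads are needed to push the call error below 10^-4 (Q40); the answers it returns are consistent with the 5-19 range quoted above.

        from math import comb

        def majority_error(p, n):
            """Probability that a majority of n independent reads is wrong, per-read error p."""
            return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                       for k in range(n // 2 + 1, n + 1))

        def reads_for_q40(p, target=1e-4, max_reads=99):
            """Smallest odd number of reads driving the majority-vote error below target."""
            for n in range(1, max_reads + 1, 2):
                if majority_error(p, n) <= target:
                    return n
            return None

        for p in (0.017, 0.122):       # best and worst single-pass error rates from the abstract
            print(f"single-pass error {p:.3f}: about {reads_for_q40(p)} reads needed for Q40")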

  8. High Resolution, High Frame Rate Video Technology

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. The HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The Workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) State of the art in the video system performance; (2) Development plan for the HHV system; (3) Advanced technology for image gathering, coding, and processing; (4) Data compression applied to HHV; (5) Data transmission networks; and (6) Results of the users' requirements survey conducted by NASA.

  9. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.

  10. Rate-Distortion Optimization for Stereoscopic Video Streaming with Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Tan, A. Serdar; Aksay, Anil; Akar, Gozde Bozdagi; Arikan, Erdal

    2008-12-01

    We consider an error-resilient stereoscopic streaming system that uses an H.264-based multiview video codec and a rateless Raptor code for recovery from packet losses. One aim of the present work is to suggest a heuristic methodology for modeling the end-to-end rate-distortion (RD) characteristic of such a system. Another aim is to show how to make use of such a model to optimally select the parameters of the video codec and the Raptor code to minimize the overall distortion. Specifically, the proposed system models the RD curve of the video encoder and the performance of the channel codec to jointly derive the optimal encoder bit rates and unequal error protection (UEP) rates specific to layered stereoscopic video streaming. We define an analytical RD curve model for each layer that includes the interdependency of these layers. A heuristic analytical model of the performance of Raptor codes is also defined. Furthermore, the distortion in stereoscopic video quality caused by packet losses is estimated. Finally, the analytical models and estimated single-packet loss distortions are used to minimize the end-to-end distortion and to obtain the optimal encoder bit rates and UEP rates. The simulation results clearly demonstrate a significant quality gain against the nonoptimized schemes.
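
    The flavor of such a joint source/FEC optimization can be conveyed with a toy Python search. Everything in the fragment below is invented for illustration: the R-D constants, the stand-in for Raptor recovery behavior and the rate budget are not from the paper, and the real method uses dynamic programming rather than a grid search. It merely shows how a channel budget might be split between two source layers and repair overhead to minimize a modeled end-to-end distortion.

        import numpy as np

        total_rate = 4000.0                    # channel budget, kbit/s (made up)
        p_loss = 0.05                          # network packet loss rate (made up)

        def source_distortion(r_kbps, a, b):
            return a * np.exp(-b * r_kbps)     # toy convex R-D model for one layer

        def residual_loss(overhead):
            # crude stand-in for FEC recovery: residual loss falls quickly once the
            # repair overhead exceeds the channel loss rate
            return p_loss * np.exp(-80.0 * max(overhead - p_loss, 0.0))

        best = None
        for r1 in np.linspace(500.0, 2500.0, 81):          # left-view source rate
            for oh in np.linspace(0.0, 0.3, 61):           # FEC overhead fraction
                r2 = total_rate / (1.0 + oh) - r1          # right-view source rate
                if r2 <= 0:
                    continue
                d = source_distortion(r1, 60.0, 0.002) + source_distortion(r2, 40.0, 0.002)
                d += 200.0 * residual_loss(oh)             # distortion from unrecovered losses
                if best is None or d < best[0]:
                    best = (d, r1, r2, oh)

        d, r1, r2, oh = best
        print(f"min distortion {d:.2f} at r1={r1:.0f} kbps, r2={r2:.0f} kbps, overhead={oh:.2f}")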

  11. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    PubMed Central

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4−5) of all compared methods. PMID:28182717

  12. Preliminary error budget for an optical ranging system: Range, range rate, and differenced range observables

    NASA Technical Reports Server (NTRS)

    Folkner, W. M.; Finger, M. H.

    1990-01-01

    Future missions to the outer solar system or human exploration of Mars may use telemetry systems based on optical rather than radio transmitters. Pulsed laser transmission can be used to deliver telemetry rates of about 100 kbits/sec with an efficiency of several bits for each detected photon. Navigational observables that can be derived from timing pulsed laser signals are discussed. Error budgets are presented based on nominal ground stations and spacecraft-transceiver designs. Assuming a pulsed optical uplink signal, two-way range accuracy may approach the few centimeter level imposed by the troposphere uncertainty. Angular information can be achieved from differenced one-way range using two ground stations with the accuracy limited by the length of the available baseline and by clock synchronization and troposphere errors. A method of synchronizing the ground station clocks using optical ranging measurements is presented. This could allow differenced range accuracy to reach the few centimeter troposphere limit.

  13. Analysis of bit error rate for modified T-APPM under weak atmospheric turbulence channel

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Zhang, Qi; Wang, Yong-jun; Liu, Bo; Zhang, Li-jia; Wang, Kai-min; Xiao, Fei; Deng, Chao-gong

    2013-12-01

    T-APPM combines TCM (trellis-coded modulation) with APPM (amplitude pulse-position modulation) and has broad application prospects in space optical communication. Set partitioning in the standard T-APPM algorithm gives optimal performance in multi-carrier systems, but whether this method is also optimal for APPM, which is a single-carrier scheme, is unknown. To address this problem, we first study the atmospheric channel model under weak turbulence; we then propose a modified T-APPM algorithm that, compared to the standard T-APPM algorithm, uses Gray code mapping instead of set-partitioning mapping; finally, we simulate the two algorithms with the Monte-Carlo method. Simulation results show that, at a bit error rate of 10^-4, the modified T-APPM algorithm achieves a 0.4 dB gain in SNR, effectively improving the system error performance.
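
    The Gray labeling used in place of set partitioning can be generated with a one-line Python construction; the binary-reflected form below guarantees that symbols adjacent in the modulation alphabet differ in a single bit, so the most likely symbol errors cause only single-bit errors. The full modified T-APPM mapping onto amplitude/position symbols and the trellis are beyond this sketch.

        def gray_map(bits_per_symbol):
            """Symbol index -> binary-reflected Gray code, so adjacent symbols differ in one bit."""
            n = 1 << bits_per_symbol
            return {i: format(i ^ (i >> 1), f"0{bits_per_symbol}b") for i in range(n)}

        # Example: labeling for an 8-ary amplitude/position alphabet (3 bits per symbol).
        for symbol, bits in gray_map(3).items():
            print(symbol, bits)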

  14. Influence of wave-front aberrations on bit error rate in inter-satellite laser communications

    NASA Astrophysics Data System (ADS)

    Yang, Yuqiang; Han, Qiqi; Tan, Liying; Ma, Jing; Yu, Siyuan; Yan, Zhibin; Yu, Jianjie; Zhao, Sheng

    2011-06-01

    We derive the bit error rate (BER) of inter-satellite laser communication (lasercom) links with on-off-keying systems in the presence of both wave-front aberrations and pointing error, but without considering the noise of the detector. Wave-front aberrations induced by the receiver terminal have no influence on the BER, while wave-front aberrations induced by the transmitter terminal increase the BER. The BER depends on the area S that is truncated out by the threshold intensity of the detector (such as an APD) on the intensity function in the receiver plane, and changes with the root mean square (RMS) of the wave-front aberrations. Numerical results show that the BER rises with increasing RMS value. The influences of astigmatism, coma, curvature and spherical aberration on the BER are compared. This work can benefit the design of lasercom systems.

  15. Effect of Vertical Rate Error on Recovery from Loss of Well Clear Between UAS and Non-Cooperative Intruders

    NASA Technical Reports Server (NTRS)

    Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2016-01-01

    are suppressed, for all vertical error rate thresholds examined. However, results also show that in roughly 35 of the encounters where a vertical maneuver was selected, forcing the UAS to do a horizontal maneuver instead increased the severity of the loss of well-clear for that encounter. Finally, results showed a small reduction in the number of severe losses of well-clear when the high performance UAS (2000 fpm climb and descent rate) was allowed to maneuver vertically, and the vertical rate error was below 500 fpm. Overall, the results show that using a single vertical rate threshold is not advisable, and that limiting a UAS to horizontal maneuvers when vertical rate errors are above 175 fpm can make a UAS less safe about a third of the time. It is suggested that the hard limit be removed, and system manufacturers instructed to account for their own UAS performance, as well as vertical rate error and encounter geometry, when determining whether or not to provide vertical guidance to regain well-clear.

  16. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  17. Standardized error severity score (ESS) ratings to quantify risk associated with child restraint system (CRS) and booster seat misuse.

    PubMed

    Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley

    2017-11-17

    Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency-for example, in car seat clinics or during prototype user testing-to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation

  18. Prediction of error rates in dose-imprinted memories on board CRRES by two different methods

    NASA Astrophysics Data System (ADS)

    Brucker, G. J.; Stassinopoulos, E. G.

    1991-06-01

    An analysis of the expected space radiation effects on the single event upset (SEU) properties of CMOS/bulk memories onboard the Combined Release and Radiation Effects Satellite (CRRES) is presented. Dose-imprint data from ground test irradiations of identical devices are applied to the predictions of cosmic-ray-induced space upset rates in the memories onboard the spacecraft. The calculations take into account the effect of total dose on the SEU sensitivity of the devices as the dose accumulates in orbit. Estimates of error rates, which involved an arbitrary selection of a single pair of threshold linear energy transfer (LET) and asymptotic cross-section values, were compared to the results of an integration over the cross-section curves versus LET. The integration gave lower upset rates than the use of the selected values of the SEU parameters. Since the integration approach is more accurate and eliminates the need for an arbitrary definition of threshold LET and asymptotic cross section, it is recommended for all error rate predictions where experimental sigma-versus-LET curves are available.
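
    The difference between the two prediction approaches can be sketched numerically in Python. Everything in the fragment below is hypothetical: a Weibull cross-section curve, a toy power-law LET spectrum and an arbitrarily chosen threshold LET; it is not a flight-rate calculation, but it shows how integrating the measured cross-section over the LET spectrum can give a lower rate than multiplying the asymptotic cross-section by the integral flux above a single threshold.

        import numpy as np

        # Hypothetical Weibull cross-section versus LET (onset L0, width W, shape s).
        L0, W, s, sigma_sat = 2.0, 15.0, 1.5, 1.0e-7        # sigma_sat in cm^2/bit (made up)

        def sigma(L):
            x = np.clip((np.asarray(L, dtype=float) - L0) / W, 0.0, None)
            return sigma_sat * (1.0 - np.exp(-x ** s))

        def integral_flux(L):
            return 1.0e3 * L ** -2.0                        # toy spectrum, particles/(cm^2 day)

        L_grid = np.linspace(L0, 100.0, 5000)
        diff_flux = -np.gradient(integral_flux(L_grid), L_grid)     # differential LET spectrum
        rate_integrated = np.sum(sigma(L_grid) * diff_flux) * (L_grid[1] - L_grid[0])

        L_threshold = 5.0                                    # arbitrarily selected threshold LET
        rate_single_point = sigma_sat * integral_flux(L_threshold)

        print(f"integrated over sigma(LET): {rate_integrated:.2e} upsets/bit/day")
        print(f"single-point estimate     : {rate_single_point:.2e} upsets/bit/day")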

  19. Study of flow rate induced measurement error in flow-through nano-hole plasmonic sensor

    PubMed Central

    Tu, Long; Huang, Liang; Wang, Tianyi; Wang, Wenhui

    2015-01-01

    A flow-through gold film perforated with periodically arrayed sub-wavelength nano-holes can cause extraordinary optical transmission (EOT), which has recently emerged as a label-free surface plasmon resonance sensor in biochemical detection by measuring the transmission spectral shift. This paper describes a systematic study of the effect of the microfluidic field on the spectrum of EOT associated with the porous gold film. To detect biochemical molecules, the sub-micron-thick film is free-standing in a microfluidic field and thus subject to hydrodynamic deformation. The film deformation alone may cause a spectral shift that acts as measurement error, which is coupled with the spectral shift that constitutes the real signal associated with the molecules. However, this microfluidics-induced measurement error has long been overlooked in the field and needs to be identified in order to improve the measurement accuracy. Therefore, we have conducted simulations and analytical analysis to investigate how the microfluidic flow rate affects the EOT spectrum and verified the effect through experiment with a sandwiched device combining Au/Cr/Si3N4 nano-hole film and polydimethylsiloxane microchannels. We found a significant spectral blue shift associated with even small flow rates, for example, 12.60 nm for 4.2 μl/min. This measurement error corresponds to 90 times the optical resolution of the current state-of-the-art commercially available spectrometer or 8400 times the limit of detection. This severe measurement error suggests that attention should be paid to the microfluidic parameter settings for EOT-based flow-through nano-hole sensors and that an appropriate scheme should be adopted to improve the measurement accuracy. PMID:26649131

  20. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    PubMed Central

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially

  1. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straightforward manner. Finally, an exact expression of error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  2. Error-prone DnaE2 Balances the Genome Mutation Rates in Myxococcus xanthus DK1622

    PubMed Central

    Peng, Ran; Chen, Jiang-he; Feng, Wan-wan; Zhang, Zheng; Yin, Jun; Li, Ze-shuo; Li, Yue-zhong

    2017-01-01

    dnaE encodes the alpha subunit of the tripartite protein complex of DNA polymerase III that is responsible for the replication of the bacterial genome. The dnaE gene is often duplicated in many bacteria, and the duplicated dnaE gene has been reported to be dispensable for cell survival and error-prone in DNA replication, for reasons that remain unclear. In this study, we found that all sequenced myxobacterial genomes possessed two dnaE genes. The duplicate dnaE genes were both highly conserved but evolved divergently, suggesting their importance in myxobacteria. Using Myxococcus xanthus DK1622 as a model, we confirmed that dnaE1 (MXAN_5844) was essential for cell survival, while dnaE2 (MXAN_3982) was dispensable and encoded an error-prone enzyme for replication. The deletion of dnaE2 had small effects on cellular growth and social motility, but significantly decreased the development and sporulation abilities, which could be recovered by the complementation of dnaE2. The expression of dnaE1 was always much higher than that of dnaE2 in either the growth or the developmental stage. However, overexpression of dnaE2 could not make dnaE1 deletable, probably due to their structural and functional divergences. The dnaE2 overexpression not only improved the growth, development and sporulation abilities, but also raised the genome mutation rate of M. xanthus. We argued that the weakly expressed error-prone DnaE2 acts as a balancer of the genome mutation rate, ensuring low mutation rates for cell adaptation in new environments while avoiding the damage that high mutation rates cause to cells. PMID:28203231

  3. Error-prone DnaE2 Balances the Genome Mutation Rates in Myxococcus xanthus DK1622.

    PubMed

    Peng, Ran; Chen, Jiang-He; Feng, Wan-Wan; Zhang, Zheng; Yin, Jun; Li, Ze-Shuo; Li, Yue-Zhong

    2017-01-01

    dnaE encodes the alpha subunit of the tripartite protein complex of DNA polymerase III that is responsible for the replication of the bacterial genome. The dnaE gene is often duplicated in many bacteria, and the duplicated dnaE gene has been reported to be dispensable for cell survival and error-prone in DNA replication, for reasons that remain unclear. In this study, we found that all sequenced myxobacterial genomes possessed two dnaE genes. The duplicate dnaE genes were both highly conserved but evolved divergently, suggesting their importance in myxobacteria. Using Myxococcus xanthus DK1622 as a model, we confirmed that dnaE1 (MXAN_5844) was essential for cell survival, while dnaE2 (MXAN_3982) was dispensable and encoded an error-prone enzyme for replication. The deletion of dnaE2 had small effects on cellular growth and social motility, but significantly decreased the development and sporulation abilities, which could be recovered by the complementation of dnaE2. The expression of dnaE1 was always much higher than that of dnaE2 in either the growth or the developmental stage. However, overexpression of dnaE2 could not make dnaE1 deletable, probably due to their structural and functional divergences. The dnaE2 overexpression not only improved the growth, development and sporulation abilities, but also raised the genome mutation rate of M. xanthus. We argued that the weakly expressed error-prone DnaE2 acts as a balancer of the genome mutation rate, ensuring low mutation rates for cell adaptation in new environments while avoiding the damage that high mutation rates cause to cells.

  4. Creation and implementation of department-wide structured reports: an analysis of the impact on error rate in radiology reports.

    PubMed

    Hawkins, C Matthew; Hall, Seth; Zhang, Bin; Towbin, Alexander J

    2014-10-01

    The purpose of this study was to evaluate and compare textual error rates and subtypes in radiology reports before and after implementation of department-wide structured reports. Randomly selected radiology reports that were generated following the implementation of department-wide structured reports were evaluated for textual errors by two radiologists. For each report, the text was compared to the corresponding audio file. Errors in each report were tabulated and classified. Error rates were compared to results from a prior study performed before the implementation of structured reports. Calculated error rates included the average number of errors per report, average number of nongrammatical errors per report, the percentage of reports with an error, and the percentage of reports with a nongrammatical error. Identical versions of voice-recognition software were used for both studies. A total of 644 radiology reports were randomly evaluated as part of this study. There was a statistically significant reduction in the percentage of reports with nongrammatical errors (33 to 26%; p = 0.024). The likelihood of at least one missense omission error (omission errors that changed the meaning of a phrase or sentence) occurring in a report was significantly reduced from 3.5 to 1.2% (p = 0.0175). A statistically significant reduction in the likelihood of at least one commission error (retained statements from a standardized report that contradict the dictated findings or impression) occurring in a report was also observed (3.9 to 0.8%; p = 0.0007). Carefully constructed structured reports can help to reduce certain error types in radiology reports.

  5. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

    PubMed

    Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

    2014-06-01

    Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, this approach relies on power spectrum estimation, which necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
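
    The window-size tradeoff can be seen in a simple Python simulation. The fragment below applies the standard coherence-based estimate R = -∫ log2(1 - γ²(f)) df to a synthetic delayed, noisy response (all signal parameters are invented), evaluating it with a short and a long analysis window to show how the choice moves the estimate; the bias-cancellation algorithm proposed in the paper is not reproduced here.

        import numpy as np
        from scipy.signal import coherence

        rng = np.random.default_rng(0)
        fs, n = 1000.0, 2 ** 17                    # sample rate (Hz) and record length (made up)
        stimulus = rng.normal(size=n)              # white-noise "light contrast"
        delay = 20                                 # 20-sample transport delay (made up)
        response = np.roll(stimulus, delay) + 0.5 * rng.normal(size=n)

        for nperseg in (256, 4096):                # short vs long analysis window
            f, gamma2 = coherence(stimulus, response, fs=fs, nperseg=nperseg)
            gamma2 = np.minimum(gamma2, 1.0 - 1e-12)        # guard the logarithm
            rate = -np.sum(np.log2(1.0 - gamma2)) * (f[1] - f[0])
            print(f"window {nperseg:5d} samples: estimated information rate {rate:7.1f} bits/s")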

  6. Theoretical Bit Error Rate Performance of the Kalman Filter Excisor for FM Interference

    DTIC Science & Technology

    1992-12-01

    ... a Kalman filter digitally servo-controlled by phase locking, which proves to be nearly optimum for demodulating FM-type interference ... Since the interference is presupposed to be stronger than the signal or the noise, the Kalman filter locks onto the interference and allows ... AD-A263 018, Theoretical Bit Error Rate Performance of the Kalman Filter Excisor for FM Interference, by Brian R. Kominchuk, APR 19 1993, Defence Research ...

  7. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions was measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  8. Digitally modulated bit error rate measurement system for microwave component evaluation

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo W.; Budinger, James M.

    1989-01-01

    The NASA Lewis Research Center has developed a unique capability for evaluation of the microwave components of a digital communication system. This digitally modulated bit-error-rate (BER) measurement system (DMBERMS) features a continuous data digital BER test set, a data processor, a serial minimum shift keying (SMSK) modem, noise generation, and computer automation. Application of the DMBERMS has provided useful information for the evaluation of existing microwave components and of design goals for future components. The design and applications of this system for digitally modulated BER measurements are discussed.

  9. A priori mesh quality metric error analysis applied to a high-order finite element method

    NASA Astrophysics Data System (ADS)

    Lowrie, W.; Lukin, V. S.; Shumlak, U.

    2011-06-01

    Characterization of a computational mesh's quality prior to performing a numerical simulation is an important step in ensuring that the result is valid. A highly distorted mesh can result in significant errors. It is therefore desirable to predict solution accuracy on a given mesh. The HiFi/SEL high-order finite element code is used to study the effects of various mesh distortions on solution quality of known analytic problems for spatial discretizations with different order of finite elements. The measured global error norms are compared to several mesh quality metrics by independently varying both the degree of the distortions and the order of the finite elements. It is found that the spatial spectral convergence rates are preserved for all considered distortion types, while the total error increases with the degree of distortion. For each distortion type, correlations between the measured solution error and the different mesh metrics are quantified, identifying the most appropriate overall mesh metric. The results show promise for future a priori computational mesh quality determination and improvement.

  10. Effect of a misspecification of response rates on type I and type II errors, in a phase II Simon design.

    PubMed

    Baey, Charlotte; Le Deley, Marie-Cécile

    2011-07-01

    Phase-II trials are a key stage in the clinical development of a new treatment. Their main objective is to provide the required information for a go/no-go decision regarding a subsequent phase-III trial. In single arm phase-II trials, widely used in oncology, this decision relies on the comparison of efficacy outcomes observed in the trial to historical controls. The false positive rate generally accepted in phase-II trials, around 10%, contrasts with the very high attrition rate of new compounds tested in phase-III trials, estimated at about 60%. We assumed that this gap could partly be explained by the misspecification of the response rate expected with standard treatment, leading to erroneous hypotheses tested in the phase-II trial. We computed the false positive probability of a defined design under various hypotheses of expected efficacy probability. Similarly we calculated the power of the trial to detect the efficacy of a new compound for different expected efficacy rates. Calculations were done considering a binary outcome, such as the response rate, with a decision rule based on a Simon two-stage design. When analysing a single-arm phase-II trial, based on a design with a pre-specified null hypothesis, a 5% absolute error in the expected response rate leads to a false positive rate of about 30% when it is supposed to be 10%. This inflation of type-I error varies only slightly according to the hypotheses of the initial design. Single-arm phase-II trials poorly control for the false positive rate. Randomised phase-II trials should, therefore, be more often considered.
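
    The inflation described above can be computed exactly with elementary binomial sums. The Python design constants below are hypothetical (they are not a specific published Simon design); the function evaluates the probability of wrongly declaring a compound promising when the true standard-of-care response rate is higher than the rate assumed when the design was set up.

        from math import comb

        # Hypothetical two-stage rule (illustrative numbers, not a published Simon design):
        # stage 1 enrols n1 patients and continues only if more than r1 respond;
        # stage 2 enrols n2 more and the drug is declared promising if total responses exceed r.
        n1, r1, n2, r = 15, 3, 28, 12

        def binom_pmf(k, n, p):
            return comb(n, k) * p ** k * (1 - p) ** (n - k)

        def reject_prob(p):
            """Probability of declaring the treatment promising when the true response rate is p."""
            total = 0.0
            for x1 in range(r1 + 1, n1 + 1):                    # outcomes that pass the interim look
                tail = sum(binom_pmf(x2, n2, p)
                           for x2 in range(max(0, r - x1 + 1), n2 + 1))
                total += binom_pmf(x1, n1, p) * tail
            return total

        # Design built assuming a standard-of-care response rate of 0.20; see how the
        # false positive rate moves if the true rate is actually a little higher.
        for true_p0 in (0.20, 0.25, 0.30):
            print(f"true p0 = {true_p0:.2f}: false positive rate = {reject_prob(true_p0):.3f}")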

  11. Motion error correction approach for high-resolution synthetic aperture radar imaging

    NASA Astrophysics Data System (ADS)

    Jia, Gaowei; Chang, Wenge; Li, Xiangyang

    2014-01-01

    An innovative data-based motion compensation approach is proposed for the high-resolution synthetic aperture radar (SAR). The main idea is to extract the displacements in line-of-sight direction and the range-dependent phase errors from raw data, based on an instantaneous Doppler rate estimate. The approach is implemented by a two-step process: (1) the correction of excessive range cell migration; (2) the compensation of range-dependent phase errors. Experimental results show that the proposed method is capable of producing high-resolution SAR imagery with a spatial resolution of 0.17×0.2 m2 (range×azimuth) in Ku band.

  12. Data-driven region-of-interest selection without inflating Type I error rate.

    PubMed

    Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard

    2017-01-01

    In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies. © 2016 Society for Psychophysiological Research.

  13. Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits

    NASA Astrophysics Data System (ADS)

    Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.

    2017-08-01

    Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.

  14. Analytical Evaluation of Bit Error Rate Performance of a Free-Space Optical Communication System with Receive Diversity Impaired by Pointing Error

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2015-06-01

    Analysis is carried out to evaluate the bit error rate conditioned on a given value of pointing error for a Free Space Optical (FSO) link with multiple receivers using Equal Gain Combining (EGC). The probability density function (pdf) of the output signal to noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BERs of SISO and SIMO FSO links are analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance results are evaluated for several values of pointing jitter parameters and numbers of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement of receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10^-10. It is also noticed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.

  15. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    NASA Astrophysics Data System (ADS)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies, as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles approximately every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half-pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to an SEU, a Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), or Single Event Burnout (SEB) has a permanent effect on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  16. Investigation on the bit error rate performance of 40Gb/s space optical communication system based on BPSK scheme

    NASA Astrophysics Data System (ADS)

    Li, Mi; Li, Bowen; Zhang, Xuping; Song, Yuejiang; Liu, Jia; Tu, Guojie

    2015-08-01

    Space optical communication is attracting increasing attention because it offers advantages such as high security and better communication quality compared with microwave communication. Space optical links have already reached data rates of several Gb/s, and the next generation of space optical systems targets a higher data rate of 40 Gb/s. However, traditional optical communication systems cannot meet this requirement at such a high data rate. This paper introduces a ground optical communication system with a data rate of 40 Gb/s as a step toward space optical communication at high data rates. At 40 Gb/s, a waveguide modulator must be applied to modulate the optical signal, which is then amplified by a laser amplifier. Moreover, a more sensitive avalanche photodiode (APD) serves as the detector to increase the communication quality. Based on this communication system, we analyze the communication quality in the downlink of a space optical communication system at a data rate of 40 Gb/s. The bit error rate (BER) performance, an important measure of communication quality, is discussed as a function of several parameter ratios. The results show that there exists an optimum ratio of gain factor to divergence angle that gives the best BER performance, and that the ratio of receiving diameter to divergence angle can be increased for better communication quality. These results are helpful for understanding the characteristics of optical communication systems at high data rates and contribute to system design.

  17. Critical error rate of quantum-key-distribution protocols versus the size and dimensionality of the quantum alphabet

    NASA Astrophysics Data System (ADS)

    Sych, Denis V.; Grishanin, Boris A.; Zadkov, Victor N.

    2004-11-01

    A quantum-information analysis of how the size and dimensionality of the quantum alphabet affect the critical error rate of the quantum-key-distribution (QKD) protocols is given on an example of two QKD protocols—the six-state and ∞-state (i.e., a protocol with continuous alphabet) ones. In the case of a two-dimensional Hilbert space, it is shown that, under certain assumptions, increasing the number of letters in the quantum alphabet up to infinity slightly increases the critical error rate. Increasing additionally the dimensionality of the Hilbert space leads to a further increase in the critical error rate.

  18. TCP Flow Level Performance Evaluation on Error Rate Aware Scheduling Algorithms in Evolved UTRA and UTRAN Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Uchida, Masato; Tsuru, Masato; Oie, Yuji

    We present a TCP flow level performance evaluation of error rate aware scheduling algorithms in Evolved UTRA and UTRAN networks. With the introduction of the error rate, which is the probability of transmission failure under a given wireless condition and instantaneous transmission rate, the transmission efficiency can be improved without sacrificing the balance between system performance and user fairness. The performance comparison with and without error rate awareness is carried out for various TCP traffic models, user channel conditions, schedulers with different fairness constraints, and automatic repeat request (ARQ) types. The results indicate that error rate awareness can make the resource allocation more reasonable and effectively improve system and individual performance, especially for users with poor channel conditions.

  19. Anti-saccade error rates as a measure of attentional bias in cocaine dependent subjects.

    PubMed

    Dias, Nadeeka R; Schmitz, Joy M; Rathnayaka, Nuvan; Red, Stuart D; Sereno, Anne B; Moeller, F Gerard; Lane, Scott D

    2015-10-01

    Cocaine-dependent (CD) subjects show attentional bias toward cocaine-related cues, and this form of cue-reactivity may be predictive of craving and relapse. Attentional bias has previously been assessed by models that present drug-relevant stimuli and measure physiological and behavioral reactivity (often reaction time). Studies of several CNS diseases outside of substance use disorders consistently report anti-saccade deficits, suggesting a compromise in the interplay between higher-order cortical processes in voluntary eye control (i.e., anti-saccades) and reflexive saccades driven more by involuntary midbrain perceptual input (i.e., pro-saccades). Here, we describe a novel attentional-bias task developed by using measurements of saccadic eye movements in the presence of cocaine-specific stimuli, combining previously unique research domains to capitalize on their respective experimental and conceptual strengths. CD subjects (N = 46) and healthy controls (N = 41) were tested on blocks of pro-saccade and anti-saccade trials featuring cocaine and neutral stimuli (pictures). Analyses of eye-movement data indicated (1) greater overall anti-saccade errors in the CD group; (2) greater attentional bias in CD subjects as measured by anti-saccade errors to cocaine-specific (relative to neutral) stimuli; and (3) no differences in pro-saccade error rates. Attentional bias was correlated with scores on the obsessive-compulsive cocaine scale. The results demonstrate increased saliency of, and differential attention to, cocaine cues in the CD group. The assay provides a sensitive index of saccadic (visual inhibitory) control, a specific index of attentional bias to drug-relevant cues, and preliminary insight into the visual circuitry that may contribute to drug-specific cue reactivity. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: rates, drivers, and sources of error

    USGS Publications Warehouse

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-01-01

    Tropical and sub-tropical wetlands are considered to be globally important sources for greenhouse gases but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida, to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. Accelerator mass spectroscopy dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m⁻² yr⁻¹ for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (~0.2 mm yr⁻¹) tracked the correspondingly slow rise in sea level (0.35 mm yr⁻¹) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  1. Bit error rate performance of free-space optical link under effect of plasma sheath turbulence

    NASA Astrophysics Data System (ADS)

    Li, Jiangting; Yang, Shaofei; Guo, Lixin; Cheng, Mingjian; Gong, Teng

    2017-08-01

    Based on the power spectrum of the refractive-index fluctuation in the plasma sheath turbulence, the expressions for wave structure functions and scintillation index of optical wave propagating in a turbulent plasma sheath are derived. The effects of the turbulence microstructure on the propagation characteristics of optical waves are simulated and analyzed. Finally, the bit error performance of a free-space optical (FSO) link is investigated under the effect of plasma sheath turbulence. The results indicate that spherical waves give better communication performance in the FSO link. In addition, a greater variance of the refractive index fluctuation causes a more severe fluctuation in electron density, temperature, and collision frequency inside the plasma sheath. However, when the outer scale is close to the thickness of the plasma sheath, the turbulence eddies have almost no influence on the wave propagation. Therefore, the bit error rate (BER) obviously increases with the increase in variance of the refractive index fluctuation and the decrease in the outer scale. These results are fundamental for evaluating the performance of the FSO link under the effect of plasma sheath turbulence.

  2. The Visual Motor Integration Test: High Interjudge Reliability, High Potential For Diagnostic Error.

    ERIC Educational Resources Information Center

    Snyder, Peggy P.; And Others

    1981-01-01

    Investigated scoring agreement among three different training levels of Visual Motor Integration Test (VMI) diagnosticians. Correlational data demonstrated high interexaminer reliabilities; however, there were gross errors in precision after raw scores had been converted into VMI age equivalent scores. (Author/RC)

  4. Inflation of the type I error: investigations on regulatory recommendations for bioequivalence of highly variable drugs.

    PubMed

    Wonnemann, Meinolf; Frömke, Cornelia; Koch, Armin

    2015-01-01

    We investigated different evaluation strategies for bioequivalence trials with highly variable drugs (HVDs) on their resulting empirical type I error and empirical power. The classical 'unscaled' crossover design with average bioequivalence evaluation, the Add-on concept of the Japanese guideline, and the current 'scaling' approach of EMA were compared. Simulation studies were performed based on the assumption of a single dose drug administration while changing the underlying intra-individual variability. Inclusion of Add-on subjects following the Japanese concept led to slight increases in the empirical α-error (≈7.5%). For the EMA approach, we noted an unexpected tremendous increase of the rejection rate at a geometric mean ratio of 1.25. Moreover, we detected error rates slightly above the pre-set limit of 5% even at the proposed 'scaled' bioequivalence limits. With the classical 'unscaled' approach and the Japanese guideline concept, the goal of reduced subject numbers in bioequivalence trials of HVDs cannot be achieved. On the other hand, widening the acceptance range comes at the price that quite a number of products will be accepted as bioequivalent that had not been accepted in the past. A two-stage design with control of the global α therefore seems the better alternative.

  5. A decrease in conjunction error rates across lags on a continuous recognition task: a robust pattern.

    PubMed

    Jones, Todd C; Atchley, Paul

    2008-11-01

    In four experiments, the lag retention interval from parent words (e.g., blackmail, jailbird) to a conjunction word (blackbird) was manipulated in a continuous recognition task. Alterations to the basic procedure of Jones and Atchley (2002) were employed in Experiments 1 and 2 to bolster recollection to reject conjunction lures, yet conjunction error rates still decreased across lags of 1 to 20 words. Experiment 3 and a multiexperiment analysis examined the increments of forgetting in familiarity across lags of 1-20 words. Finally, in Experiment 4, participants attempted to identify conjunction probes as "old", and the data were contrasted with those from a previous experiment (Jones & Atchley, 2002, Exp. 1), in which participants attempted not to identify conjunction probes as "old". In support of earlier findings, the decrease in familiarity across lags of 1-20 words appears robust, with a constant level of weak recollection occurring for parent words.

  6. Bit Error Rate Performance of Partially Coherent Dual-Branch SSC Receiver over Composite Fading Channels

    NASA Astrophysics Data System (ADS)

    Milić, Dejan N.; Đorđević, Goran T.

    2013-01-01

    In this paper, we study the effects of imperfect reference signal recovery on the bit error rate (BER) performance of a dual-branch switch-and-stay combining (SSC) receiver over Nakagami-m fading/gamma shadowing channels with arbitrary parameters. The average BER of quaternary phase shift keying is evaluated under the assumption that the reference carrier signal is extracted from the received modulated signal. We compute numerical results illustrating the simultaneous influence of the average signal-to-noise ratio per bit, fading severity, shadowing, phase-locked loop bandwidth-bit duration (BLTb) product, and switching threshold on BER performance. The effects of BLTb on receiver performance under different channel conditions are emphasized. The optimal switching threshold, which minimizes the BER for given channel and receiver parameters, is determined.

  7. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    PubMed

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.

  8. Accuracy of High-Rate GPS for Seismology

    NASA Technical Reports Server (NTRS)

    Elosegui, P.; Davis, J. L.; Oberlander, D.; Baena, R.; Ekstrom, G.

    2006-01-01

    We built a device for translating a GPS antenna on a positioning table to simulate the ground motions caused by an earthquake. The earthquake simulator is accurate to better than 0.1 mm in position, and provides the "ground truth" displacements for assessing the technique of high-rate GPS. We found that the root-mean-square error of the 1-Hz GPS position estimates over the 15-min duration of the simulated seismic event was 2.5 mm, with approximately 96% of the observations in error by less than 5 mm, and is independent of GPS antenna motion. The error spectrum of the GPS estimates is approximately flicker noise, with a 50% decorrelation time for the position error of approximately 1.6 s. We found that, for the particular event simulated, the spectrum of surface deformations exceeds the GPS error spectrum within a finite band. More studies are required to determine whether a generally optimal bandwidth exists for a target group of seismic events.

  9. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    SciTech Connect

    Sterling, D; Ehler, E

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.
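
    For reference, the Gamma Index used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a point passes when the minimum combined metric is less than or equal to 1. The following minimal 1D Python sketch illustrates the 3%/3 mm global version of the metric on made-up dose profiles; clinical implementations operate on 2D/3D dose grids.

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta_mm=3.0):
            """Per-point gamma for a 1D reference profile against an evaluated profile."""
            dd_abs = dd * d_ref.max()                          # global 3% dose criterion
            gammas = np.empty_like(d_ref)
            for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
                dist_term = ((x_eval - xr) / dta_mm) ** 2
                dose_term = ((d_eval - dr) / dd_abs) ** 2
                gammas[i] = np.sqrt(np.min(dist_term + dose_term))
            return gammas

        x = np.linspace(0.0, 100.0, 201)                       # positions in mm
        reference = np.exp(-((x - 50.0) / 20.0) ** 2)          # toy planned dose
        evaluated = 1.02 * np.exp(-((x - 51.0) / 20.0) ** 2)   # toy delivered dose
        g = gamma_1d(x, reference, x, evaluated)
        print(f"gamma pass rate (3%/3 mm): {100.0 * np.mean(g <= 1.0):.1f}%")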

  10. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…

  12. A high rate proportional chamber

    SciTech Connect

    Henderson, R.; Fraszer, W.; Openshaw, R.; Sheffer, G.; Salomon, M.; Dew, S.; Marans, J.; Wilson, P.

    1987-02-01

    Gas mixtures with high specific ionization allow the use of small interelectrode distances while still maintaining full efficiency. With the short electron drift distances the timing resolution is also improved. The authors have built and operated two 25 cm² chambers with small interelectrode distances. Also, single-wire detector cells have been built to test gas-mixture lifetimes. Various admixtures of CF₄, DME, isobutane, ethane, and argon have been tested. Possible applications of such chambers are as beam profile monitors, position tagging of rare events, and front-end chambers in spectrometers.

  13. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    PubMed

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ

  14. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    PubMed

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-02-10

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
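
    The per-SNP analysis described above (a quantitative phenotype regressed on genotype with genomic MDS components as covariates) can be written compactly as an ordinary least-squares test on the genotype coefficient. The Python sketch below uses simulated data under the null; the two MDS covariates and effect sizes are illustrative assumptions, not the study's simulation settings.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 2000
        mds = rng.normal(size=(n, 2))                     # stand-in genomic MDS components
        genotype = rng.binomial(2, 0.3, size=n).astype(float)
        # Null phenotype: depends on ancestry (MDS) but not on the SNP itself.
        phenotype = mds @ np.array([0.5, -0.3]) + rng.normal(size=n)

        def snp_pvalue(y, g, covariates):
            X = np.column_stack([np.ones_like(g), g, covariates])
            beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            dof = len(y) - X.shape[1]
            var_beta = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
            t_stat = beta[1] / np.sqrt(var_beta[1, 1])
            return 2.0 * stats.t.sf(abs(t_stat), dof)     # two-sided p-value for the SNP

        print(f"SNP p-value under the null: {snp_pvalue(phenotype, genotype, mds):.3f}")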

  15. Compact Modeling and Simulation of Heavy Ion-Induced Soft Error Rate in Space Environment: Principles and Validation

    NASA Astrophysics Data System (ADS)

    Zebrev, Gennady I.; Galimov, Artur M.

    2017-08-01

    A simple physical model for calculating the ion-induced soft error rate in the space environment has been proposed, based on the phenomenological notion of an upset cross section. The proposed numerical procedure is adapted to multiple-cell upset characterization in highly scaled memories. Nonlocality of the ion impact is revealed as the key concept determining the difference between physical processes in low-scaled and highly scaled memories. The model has been validated by comparing the simulation results with on-board data from the literature. It was shown that the proposed method provides single-valued predictions that correlate well with on-board data, based solely on cross-section data and LET spectra, without any hidden fitting parameters or procedures.
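
    The cross-section-based rate calculation that such models build on can be summarized as folding the per-bit upset cross section σ(LET) with the differential LET flux of the environment, SER = ∫ σ(LET) φ(LET) dLET. The Python sketch below evaluates this integral numerically; the Weibull cross-section parameters and the power-law flux are placeholders, not values from the paper.

        import numpy as np

        def weibull_cross_section(let, sigma_sat=1e-8, let_onset=1.0, width=20.0, shape=1.5):
            """Per-bit upset cross section (cm^2) versus LET (MeV*cm^2/mg)."""
            excess = np.clip(let - let_onset, 0.0, None)
            return sigma_sat * (1.0 - np.exp(-(excess / width) ** shape))

        def differential_flux(let):
            """Toy differential LET flux, particles / (cm^2 * s * unit LET)."""
            return 1e-3 * let ** -2.5

        let_grid = np.linspace(1.0, 100.0, 5000)
        ser_per_bit = np.trapz(weibull_cross_section(let_grid) * differential_flux(let_grid),
                               let_grid)
        print(f"SER ~ {ser_per_bit:.2e} upsets per bit per second")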

  16. Dependence of Satellite Sampling Error on Monthly Averaged Rain Rates:Comparison of Simple Models and Recent Studies.

    NASA Astrophysics Data System (ADS)

    Bell, Thomas L.; Kundu, Prasun K.

    2000-01-01

    Considerable progress has been made in recent years with using satellite data to generate maps of rain rate with grid resolutions of 1°-5° square. In parallel with these efforts, much work has been devoted to the problem of attaching error estimates to these products. There are two main sources of error, the intrinsic errors in the remote sensing measurements themselves (retrieval errors) and the lack of continuity in the coverage by low earth-orbiting satellites (sampling error). Perhaps a dozen or so studies have attempted to estimate the sampling-error component. These studies have been based on rain gauge and radar-derived data, and the estimates vary so much that it is clear that the sampling error cannot be represented satisfactorily by a single value. These studies are reviewed. Some of the results reported in these studies are based on a method referred to in this paper as "resampling by shifts." The authors find that the method unfortunately tends to produce estimates that are subject to too much uncertainty to be used quantitatively. After setting these results aside, the authors find that the variability in the remaining sampling-error estimates can be explained to a considerable extent using assumptions common to many statistical models of rain. All such models predict that sampling error relative to the average rain rate R is proportional to R1/2. Although the sampling error at any given site seems (to the extent that data have been examined) to change with R in the way predicted by the model, the proportionality constant in this relationship seen in the various studies appears to change from site to site. This constant can be obtained from the satellite estimates themselves if retrieval errors are not correlated over scales of the order of the grid-box size.

  17. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy

    SciTech Connect

    Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-15

    Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with ¹⁹²Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from ±5 to ±15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when

  18. Commercial optical inter-satellite communication at high data rates

    NASA Astrophysics Data System (ADS)

    Gregory, Mark; Heine, Frank; Kämpfner, Hartmut; Lange, Robert; Lutzer, Michael; Meyer, Rolf

    2012-03-01

    Laser communication terminals with data rates far above 1 Gbps have been in operation in orbit since January 2008, and the links established between two low Earth orbit (LEO) satellites have demonstrated error-free communication. Bit error rates better than 10⁻¹¹ have been achieved without data encoding. Signal acquisition can be reproducibly achieved within a few seconds. After adaptation to larger link separation distances, these laser communication terminals will be used in the low Earth orbit-to-geosynchronous (LEO-GEO) link of EDRS, the European Data Relay System in GEO. LEO-to-ground and ground-to-LEO links have been used to examine the impact of the atmosphere on such optical links. In the future, high data rate GEO-to-ground links will require ground stations equipped with adaptive optics, which are currently under development.

  19. A call for more transparent reporting of error rates: the quality of AFLP data in ecological and evolutionary research.

    PubMed

    Crawford, Lindsay A; Koscinski, Daria; Keyghobadi, Nusha

    2012-12-01

    Despite much discussion of the importance of quantifying and reporting genotyping error in molecular studies, it is still not standard practice in the literature. This is particularly a concern for amplified fragment length polymorphism (AFLP) studies, where differences in laboratory, peak-calling and locus-selection protocols can generate data sets varying widely in genotyping error rate, the number of loci used and potentially estimates of genetic diversity or differentiation. In our experience, papers rarely provide adequate information on AFLP reproducibility, making meaningful comparisons among studies difficult. To quantify the extent of this problem, we reviewed the current molecular ecology literature (470 recent AFLP articles) to determine the proportion of studies that report an error rate and follow established guidelines for assessing error. Fifty-four per cent of recent articles do not report any assessment of data set reproducibility. Of those studies that do claim to have assessed reproducibility, the majority (~90%) either do not report a specific error rate or do not provide sufficient details to allow the reader to judge whether error was assessed correctly. Even of the papers that do report an error rate and provide details, many (≥23%) do not follow recommended standards for quantifying error. These issues also exist for other marker types such as microsatellites, and next-generation sequencing techniques, particularly those which use restriction enzymes for fragment generation. Therefore, we urge all researchers conducting genotyping studies to estimate and more transparently report genotyping error using existing guidelines and encourage journals to enforce stricter standards for the publication of genotyping studies.

  20. Measuring error rates in genomic perturbation screens: gold standards for human functional genomics

    PubMed Central

    Hart, Traver; Brown, Kevin R; Sircoulomb, Fabrice; Rottapel, Robert; Moffat, Jason

    2014-01-01

    Technological advancement has opened the door to systematic genetics in mammalian cells. Genome-scale loss-of-function screens can assay fitness defects induced by partial gene knockdown, using RNA interference, or complete gene knockout, using new CRISPR techniques. These screens can reveal the basic blueprint required for cellular proliferation. Moreover, comparing healthy to cancerous tissue can uncover genes that are essential only in the tumor; these genes are targets for the development of specific anticancer therapies. Unfortunately, progress in this field has been hampered by off-target effects of perturbation reagents and poorly quantified error rates in large-scale screens. To improve the quality of information derived from these screens, and to provide a framework for understanding the capabilities and limitations of CRISPR technology, we derive gold-standard reference sets of essential and nonessential genes, and provide a Bayesian classifier of gene essentiality that outperforms current methods on both RNAi and CRISPR screens. Our results indicate that CRISPR technology is more sensitive than RNAi and that both techniques have nontrivial false discovery rates that can be mitigated by rigorous analytical methods. PMID:24987113

  1. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
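
    A stripped-down version of such a power simulation is easy to reproduce: generate log-normally distributed counts around an exponential trend with a roughly 40% coefficient of variation and apply linear regression on the log scale. The Python sketch below is illustrative only; it ignores autocorrelation and the other tests considered in the study, and the parameter values are assumptions.

        import numpy as np
        from scipy import stats

        def simulate_rejection_rate(trend=0.05, n_years=10, cv=0.40, alpha=0.05,
                                    n_sims=2000, seed=1):
            rng = np.random.default_rng(seed)
            sigma = np.sqrt(np.log(1.0 + cv ** 2))         # log-normal sigma for this CV
            years = np.arange(n_years)
            rejections = 0
            for _ in range(n_sims):
                expected = 100.0 * (1.0 + trend) ** years  # exponential trend in counts
                noise = rng.lognormal(mean=-0.5 * sigma ** 2, sigma=sigma, size=n_years)
                counts = expected * noise
                fit = stats.linregress(years, np.log(counts))
                rejections += fit.pvalue < alpha           # two-sided test on the slope
            return rejections / n_sims

        print(f"power, +5%/yr over 10 yr : {simulate_rejection_rate(trend=0.05):.2f}")
        print(f"type I error, no trend   : {simulate_rejection_rate(trend=0.0):.2f}")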

  2. A search for high proper motion objects in two gamma-ray burst error regions

    NASA Technical Reports Server (NTRS)

    Ricker, George R.; Vanderspek, Roland K.; Ajhar, Edward A.

    1986-01-01

    Deep optical images of small gamma-ray burst error regions have generally resulted in the detection of several faint sources in each error region. It may be possible to identify the neutron star source of a GRB on the basis of a high transverse peculiar velocity if the source is at moderate distance. The results of searches for high proper motion objects in the error regions of GBS1412+78 and GBS2251-02 are reported.

  3. Integrating GPS with GLONASS for high-rate seismogeodesy

    NASA Astrophysics Data System (ADS)

    Geng, Jianghui; Jiang, Peng; Liu, Jingnan

    2017-04-01

    High-rate GPS is a precious seismogeodetic tool to capture coseismic displacements unambiguously and usually improved by sidereal filtering to mitigate multipath effects dominating the periods of tens of seconds to minutes. We further introduced GLONASS (Globalnaya navigatsionnaya sputnikovaya sistema) data into high-rate GPS to deliver over 2000 24 h displacements at 99 stations in Europe. We find that the major displacement errors induced by orbits and atmosphere on the low-frequency band that are not characterized by sidereal repeatabilities can be amplified markedly by up to 40% after GPS sidereal filtering. In contrast, integration with GLONASS can reduce the noise of high-rate GPS significantly and near uniformly over the entire frequency band, especially for the north components by up to 40%, suggesting that this integration is able to mitigate more errors than only multipath within high-rate GPS. Integrating GPS with GLONASS outperforms GPS sidereal filtering substantially in ameliorating displacement noise by up to 60% over a wide frequency band (e.g., 2 s-0.5 days) except a minor portion between 100 and 1000 s. High-rate multi-GNSS (Global Navigation Satellite System) can be enhanced further by sidereal filtering, which should however be carefully implemented to avoid adverse complications of the noise spectrum of displacements.
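
    As a point of comparison for the multi-GNSS approach, sidereal filtering exploits the fact that GPS constellation geometry, and hence multipath, repeats roughly every sidereal day, so the previous day's position residuals, advanced by the repeat period, can be subtracted from the current series. The Python sketch below illustrates the idea on synthetic 1 Hz displacements; the ~86154 s repeat period and the toy multipath signal are illustrative assumptions.

        import numpy as np

        def sidereal_filter(today, yesterday, rate_hz=1.0, repeat_s=86154.0):
            """Subtract yesterday's series aligned on the constellation repeat period."""
            shift = int(round((86400.0 - repeat_s) * rate_hz))   # ~246 samples at 1 Hz
            return today - np.roll(yesterday, -shift)

        rng = np.random.default_rng(2)
        t = np.arange(86400)                                     # one day at 1 Hz
        multipath = 0.003 * np.sin(2.0 * np.pi * t / 300.0)      # 3 mm, 300 s period
        day1 = multipath + 0.001 * rng.normal(size=t.size)
        day2 = np.roll(multipath, -246) + 0.001 * rng.normal(size=t.size)
        filtered = sidereal_filter(day2, day1)
        print(f"rms before: {1e3 * day2.std():.2f} mm, after: {1e3 * filtered.std():.2f} mm")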

  4. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  5. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
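
    For orientation, the classical Benjamini and Hochberg (1995) linear step-up procedure used as the comparator above can be sketched in a few lines: sort the p-values, find the largest k with p(k) ≤ kq/m, and reject the k smallest. The Python sketch below uses arbitrary p-values purely for illustration.

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            """Boolean mask of hypotheses rejected by the BH step-up procedure at level q."""
            p = np.asarray(pvals, dtype=float)
            m = p.size
            order = np.argsort(p)
            thresholds = q * np.arange(1, m + 1) / m       # step-up thresholds k*q/m
            below = p[order] <= thresholds
            k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
            rejected = np.zeros(m, dtype=bool)
            rejected[order[:k]] = True                     # reject the k smallest p-values
            return rejected

        pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.360]
        print(benjamini_hochberg(pvals, q=0.05))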

  6. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, W. S.; Burkhart, J. F.; Kylling, A.

    2015-08-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
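
    The dominant, direct-beam part of the tilt error described above follows from the illumination geometry: a sensor tilted by an angle β sees the direct beam at an incidence angle θi instead of the solar zenith angle. The Python sketch below evaluates the resulting relative error for a sensor tilted toward the sun; the direct-beam fraction of 0.85 and the isotropic treatment of the diffuse component are simplifying assumptions, not the paper's radiative-transfer setup.

        import numpy as np

        def tilt_error(sza_deg, tilt_deg, rel_azimuth_deg, direct_fraction=0.85):
            """Relative error in measured downwelling irradiance due to sensor tilt."""
            sza, beta, dphi = np.radians([sza_deg, tilt_deg, rel_azimuth_deg])
            cos_inc = (np.cos(sza) * np.cos(beta)
                       + np.sin(sza) * np.sin(beta) * np.cos(dphi))
            # Diffuse component assumed isotropic and unaffected to first order.
            return direct_fraction * (cos_inc / np.cos(sza) - 1.0)

        for tilt in (1.0, 3.0, 5.0):
            err = tilt_error(sza_deg=60.0, tilt_deg=tilt, rel_azimuth_deg=0.0)
            print(f"tilt {tilt:.0f} deg toward the sun: {100.0 * err:+.1f}% error")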

  7. Research on controlling middle spatial frequency error of high gradient precise aspheric by pitch tool

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan; Zhong, Xianyun

    2016-09-01

    Extreme optical fabrication projects such as EUV and X-ray optic systems, which are representative of today's advanced optical manufacturing technology, place special requirements on optical surface quality. In synchrotron radiation (SR) beamlines, mirrors of high shape accuracy are always used at grazing incidence. In nanolithography systems, middle spatial frequency errors always lead to small-angle scattering or flare that reduces the contrast of the image. The slope error is defined over a given horizontal length as the increase or decrease in form error at the end point relative to the starting point. The quality of reflective optical elements can be described by their deviation from the ideal shape at different spatial frequencies. Usually one distinguishes between the figure error, the low spatial frequency part ranging from the aperture length down to 1 mm, and the mid- and high spatial frequency parts from 1 mm to 1 μm and from 1 μm to some 10 nm, respectively. Firstly, this paper discusses the relationship between slope error and middle spatial frequency error, both of which describe the optical surface error along the form profile. Then, experimental research is conducted on a high gradient precise aspheric surface with a pitch tool, aiming to restrain the middle spatial frequency error.
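
    The two quantities related above can be illustrated on a synthetic 1D height profile: the slope error over a fixed horizontal baseline (height change divided by baseline; 1 nm/mm corresponds to 1 μrad) and the rms of the profile restricted to the 1 mm to 1 μm spatial-period band. The Python sketch below uses a made-up profile, sampling, and band edges.

        import numpy as np

        def slope_error(x_mm, z_nm, baseline_mm=1.0):
            """Height change over each baseline divided by the baseline (nm/mm)."""
            step = max(1, int(round(baseline_mm / (x_mm[1] - x_mm[0]))))
            return (z_nm[step:] - z_nm[:-step]) / baseline_mm

        def mid_frequency_rms(x_mm, z_nm, period_lo_mm=0.001, period_hi_mm=1.0):
            """rms of the profile restricted to spatial periods between the two limits."""
            dx = x_mm[1] - x_mm[0]
            freqs = np.fft.rfftfreq(z_nm.size, d=dx)            # cycles per mm
            spectrum = np.fft.rfft(z_nm - z_nm.mean())
            keep = (freqs >= 1.0 / period_hi_mm) & (freqs <= 1.0 / period_lo_mm)
            band = np.fft.irfft(np.where(keep, spectrum, 0.0), n=z_nm.size)
            return np.sqrt(np.mean(band ** 2))

        x = np.arange(0.0, 100.0, 0.01)                         # 100 mm scan, 10 um steps
        z = 5.0 * np.sin(2 * np.pi * x / 30.0) + 0.8 * np.sin(2 * np.pi * x / 0.5)
        print(f"max |slope error| over 1 mm baseline: {np.abs(slope_error(x, z)).max():.2f} nm/mm")
        print(f"rms in the 1 mm to 1 um band        : {mid_frequency_rms(x, z):.2f} nm")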

  8. Error analysis of metabolic-rate measurements in mammalian-cell culture by carbon and nitrogen balances.

    PubMed

    Bonarius, H P; Houtman, J H; Schmid, G; de Gooijer, C D; Tramper, J

    1999-05-01

    The analysis of metabolic fluxes of large stoichiometric systems is sensitive to measurement errors in metabolic uptake and production rates. It is therefore desirable to independently test the consistency of measurement data, which is possible if at least two elemental balances can be closed. For mammalian-cell culture, closing the C balance has been hampered by problems in measuring the carbon-dioxide production rate. Here, it is shown for various sets of measurement data that the C balance can be closed by applying a method to correct for the bicarbonate buffer in the culture medium. The measurement data are subsequently subject to measurement-error analysis on the basis of the C and N balances. It is shown at 90% reliability that no gross measurement errors are present, neither in the measured production and consumption rates, nor in the estimated in- and outgoing metabolic rates of the subnetwork that contains the glycolysis, pentose-phosphate, and glutaminolysis pathways.

  9. Reduction in write error rate of voltage-driven dynamic magnetization switching by improving thermal stability factor

    NASA Astrophysics Data System (ADS)

    Shiota, Yoichi; Nozaki, Takayuki; Tamaru, Shingo; Yakushiji, Kay; Kubota, Hitoshi; Fukushima, Akio; Yuasa, Shinji; Suzuki, Yoshishige

    2017-07-01

    In this study, we demonstrate voltage-driven dynamic magnetization switching with a write error rate (WER) of the order of 10⁻⁵. The largest voltage effect on the perpendicular magnetic anisotropy in the Ta/(CoxFe100-x)80B20/MgO structure (x = 0, 10, 31, 51) is obtained for x = 31 after annealing at 250 °C. Based on investigations using perpendicularly magnetized magnetic tunnel junctions that have different (Co31Fe69)80B20 free layer thicknesses, we demonstrate that the improvement in the thermal stability factor is important to reduce the WER. Our results will facilitate the design of highly reliable, voltage-torque, magnetoresistive random access memory.

  10. Bit error rate analysis of free-space optical system with spatial diversity over strong atmospheric turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Krishnan, Prabu; Sriram Kumar, D.

    2014-12-01

    Free-space optical (FSO) communication is emerging as a captivating alternative for overcoming connectivity problems. It can be used for transmitting signals over common lands and properties that the sender or receiver may not own. The performance of an FSO system depends on the random environmental conditions. The bit error rate (BER) performance of a differential phase shift keying FSO system is investigated. A distributed strong atmospheric turbulence channel with pointing error is considered for the BER analysis. Here, system models are developed for single-input, single-output FSO (SISO-FSO) and single-input, multiple-output FSO (SIMO-FSO) systems. Closed-form mathematical expressions are derived for the average BER with various combining schemes in terms of the Meijer G function.

  11. Bit error rate analysis of the K channel using wavelength diversity

    NASA Astrophysics Data System (ADS)

    Shah, Dhaval; Kothari, Dilip Kumar; Ghosh, Anjan K.

    2017-05-01

    The presence of atmospheric turbulence in free space causes fading and degrades the performance of a free space optical (FSO) system. To mitigate the turbulence-induced fading, multiple copies of the signal can be transmitted on different wavelengths, so that each copy undergoes different fading. This is known as the wavelength diversity technique. The bit error rate (BER) performance of FSO systems with wavelength diversity under strong turbulence conditions is investigated. The K-distribution is chosen to model the strong turbulence scenario. The source information is transmitted onto three carrier wavelengths of 1.55, 1.31, and 0.85 μm. The signals at the receiver side are combined using three different methods: optical combining (OC), equal gain combining (EGC), and selection combining (SC). Mathematical expressions are derived for the calculation of the BER for all three schemes (OC, EGC, and SC). Results are presented for link distances of 2 and 3 km under strong turbulence conditions for all the combining methods. The performance of the three schemes is also compared, and it is observed that OC provides better performance than the other two techniques. Results of the proposed method are also compared with a published article.

  12. Serialized quantum error correction protocol for high-bandwidth quantum repeaters

    NASA Astrophysics Data System (ADS)

    Glaudell, A. N.; Waks, E.; Taylor, J. M.

    2016-09-01

    Advances in single-photon creation, transmission, and detection suggest that sending quantum information over optical fibers may have losses low enough to be correctable using a quantum error correcting code (QECC). Such error-corrected communication is equivalent to a novel quantum repeater scheme, but crucial questions regarding implementation and system requirements remain open. Here we show that long-range entangled bit generation with rates approaching 10⁸ entangled bits per second may be possible using a completely serialized protocol, in which photons are generated, entangled, and error corrected via sequential, one-way interactions with as few matter qubits as possible. Provided loss and error rates of the required elements are below the threshold for quantum error correction, this scheme demonstrates improved performance over transmission of single photons. We find improvement in entangled bit rates at large distances using this serial protocol and various QECCs. In particular, at a total distance of 500 km with fiber loss rates of 0.3 dB km⁻¹, logical gate failure probabilities of 10⁻⁵, photon creation and measurement error rates of 10⁻⁵, and a gate speed of 80 ps, we find the maximum single repeater chain entangled bit rates of 51 Hz at a 20 m node spacing and 190 000 Hz at a 43 m node spacing for the {[[3,1,2

  13. Optimization design of an adaptive CFRC reflector for high order wave-front error control

    NASA Astrophysics Data System (ADS)

    Lan, Lan; Fang, Houfei; Wu, Ke; Jiang, Shuidong; Zhou, Yang

    2017-04-01

    The trend in future space high-precision reflectors is toward large-aperture, lightweight, and actively controlled deformable antennas. An adaptive shape control system for a Carbon Fiber Reinforced Composite (CFRC) reflector is driven by Piezoelectric Ceramic Transducer (PZT) actuators. This adaptive shape control system has been shown to effectively mitigate common low order wave-front error, but it struggles to control high order wave-front error. In order to improve the controllability of the adaptive CFRC reflector control system for high order wave-front error, the design of the adaptive CFRC reflector requires further optimization. According to numerical and experimental results, the print-through error induced by manufacturing and by PZT actuation is a predominant type of high order wave-front error. This paper describes a design in which secondary rib elements are embedded within the triangular cells of the primary ribs. These small secondary ribs are designed to support the weak regions of the reflector surface. The controllability of this new adaptive CFRC reflector control system with small secondary ribs is evaluated by generalized Zernike functions. This new design scheme can reduce high order residual error and suppress high order wave-front errors such as print-through error. Finally, design parameters of the adaptive CFRC reflector control system with small secondary ribs, such as primary rib height, secondary rib height, and the cut-out height of the primary rib, are optimized.

  14. Modelling non-linear redshift-space distortions in the galaxy clustering pattern: systematic errors on the growth rate parameter

    NASA Astrophysics Data System (ADS)

    de la Torre, Sylvain; Guzzo, Luigi

    2012-11-01

    We investigate the ability of state-of-the-art redshift-space distortion models for the galaxy anisotropic two-point correlation function, ξ(r⊥, r∥), to recover precise and unbiased estimates of the linear growth rate of structure f, when applied to catalogues of galaxies characterized by a realistic bias relation. To this aim, we make use of a set of simulated catalogues at z = 0.1 and 1 with different luminosity thresholds, obtained by populating dark matter haloes from a large N-body simulation using halo occupation prescriptions. We examine the most recent developments in redshift-space distortion modelling, which account for non-linearities on both small and intermediate scales produced, respectively, by randomized motions in virialized structures and non-linear coupling between the density and velocity fields. We consider the possibility of including the linear component of galaxy bias as a free parameter and directly estimate the growth rate of structure f. Results are compared to those obtained using the standard dispersion model, over different ranges of scales. We find that the model of Taruya et al., the most sophisticated one considered in this analysis, provides in general the most unbiased estimates of the growth rate of structure, with systematic errors within ±4 per cent over a wide range of galaxy populations spanning luminosities between L > L* and L > 3L*. The scale dependence of galaxy bias plays a role on recovering unbiased estimates of f when fitting quasi-non-linear scales. Its effect is particularly severe for most luminous galaxies, for which systematic effects in the modelling might be more difficult to mitigate and have to be further investigated. Finally, we also test the impact of neglecting the presence of non-negligible velocity bias with respect to mass in the galaxy catalogues. This can produce an additional systematic error of the order of 1-3 per cent depending on the redshift, comparable to the statistical errors the we

  15. Do remote community telepharmacies have higher medication error rates than traditional community pharmacies? Evidence from the North Dakota Telepharmacy Project.

    PubMed

    Friesner, Daniel L; Scott, David M; Rathke, Ann M; Peterson, Charles D; Anderson, Howard C

    2011-01-01

    To evaluate the differences in medication dispensing errors between remote telepharmacy sites (pharmacist not physically present) and standard community pharmacy sites (pharmacist physically present and no telepharmacy technology; comparison group). Pilot, cross-sectional, comparison study. North Dakota from January 2005 to September 2008. Pharmacy staff at 14 remote telepharmacy sites and 8 comparison community pharmacies. The Pharmacy Quality Commitment (PQC) reporting system was incorporated into the North Dakota Telepharmacy Project. A session was conducted to train pharmacists and technicians on use of the PQC system. A quality-related event (QRE) was defined as either a near miss (i.e., a mistake caught before reaching the patient; pharmacy discovery) or an error (i.e., a mistake discovered after the patient received the medication; patient discovery). QREs for prescriptions. During the 45-month period, the remote telepharmacy group reported 47,078 prescriptions and 631 QREs, compared with 123,346 prescriptions and 1,002 QREs in the standard pharmacy group. Near misses (pharmacy discovery) numbered 553 at the remote sites and 887 at the comparison sites; errors (patient discovery) numbered 78 and 125, respectively. The percentage of mistakes caught at the pharmacist check was 58% for the remote sites and 69% for the comparison sites. This study reported a lower overall rate (1.0%) and a slight difference in medication dispensing error rates between remote telepharmacy sites (1.3%) and comparison sites (0.8%). Both rates are comparable with nationally reported levels (1.7% error rate for 50 pharmacies).
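
    The reported rates follow directly from the QRE and prescription counts quoted above; a quick sketch of that arithmetic is:

        # QRE rate = reported quality-related events / prescriptions dispensed
        remote   = dict(qres=631,  rx=47_078)    # remote telepharmacy sites
        standard = dict(qres=1002, rx=123_346)   # comparison community pharmacies

        for name, site in (("remote", remote), ("comparison", standard)):
            rate = site["qres"] / site["rx"]
            print(f"{name}: {rate:.1%}")         # ~1.3% and ~0.8%

        overall = (631 + 1002) / (47_078 + 123_346)
        print(f"overall: {overall:.1%}")         # ~1.0%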

  16. Effect of automated drug distribution systems on medication error rates in a short-stay geriatric unit

    PubMed Central

    Cousein, Etienne; Mareville, Julie; Lerooy, Alexandre; Caillau, Antoine; Labreuche, Julien; Dambre, Delphine; Odou, Pascal; Bonte, Jean-Paul; Puisieux, François; Decaudin, Bertrand; Coupé, Patrick

    2014-01-01

    Rationale, aims and objectives: To assess the impact of an automated drug distribution system on medication errors (MEs). Methods: Before-after observational study in a 40-bed short-stay geriatric unit within a 1800-bed general hospital in Valenciennes, France. Researchers attended nurse medication administration rounds and compared administered to prescribed drugs, before and after the drug distribution system changed from a ward stock system (WSS) to a unit dose dispensing system (UDDS), integrating a unit dose dispensing robot and automated medication dispensing cabinets (AMDCs). Results: A total of 615 opportunities of error (OEs) were observed among 148 patients treated during the WSS period, and 783 OEs were observed among 166 patients treated during the UDDS period. ME [medication administration error (MAE)] rates were calculated and compared between the two periods. Secondary measures included types of errors, seriousness of errors and risk reduction for the patients. The implementation of an automated drug dispensing system resulted in a 53% reduction in MAEs. All error types were reduced in the UDDS period compared with the WSS period (P < 0.001). Wrong dose and wrong drug errors were reduced by 79.1% (2.4% versus 0.5%, P = 0.005) and 93.7% (1.9% versus 0.01%, P = 0.009), respectively. Conclusion: An automated UDDS combining a unit dose dispensing robot and AMDCs could reduce discrepancies between ordered and administered drugs, thus improving medication safety among the elderly. PMID:24917185

  17. Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter

    NASA Astrophysics Data System (ADS)

    Wirthlin, M. J.; Takai, H.; Harding, A.

    2014-01-01

    This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine if the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rates within the ATLAS LAr. These estimations suggest that the configuration memory will upset at a rate of 1.1 × 10^-10 upsets/bit/s and the block RAM memory will upset at a rate of 9.06 × 10^-11 upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10^-3 upsets/device/s for configuration memory and 1.49 × 10^-3 upsets/device/s for block memory.
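
    The device-level figures are simply the per-bit upset rates scaled by the number of configuration and block RAM bits. The sketch below performs that scaling; the bit counts are approximate, assumed figures for a 7K325T-class part and are not taken from the paper, so the result only roughly reproduces the abstract's numbers.

        # Per-bit upset rates estimated from the radiation tests (from the abstract).
        cram_rate_per_bit = 1.1e-10    # configuration RAM upsets/bit/s
        bram_rate_per_bit = 9.06e-11   # block RAM upsets/bit/s

        # Approximate bit counts for a Kintex-7 7K325T (assumed, for illustration only).
        cram_bits = 73e6               # ~73 Mb of configuration memory
        bram_bits = 16e6               # ~16 Mb of block RAM

        print(cram_rate_per_bit * cram_bits)   # order 1e-2 upsets/device/s
        print(bram_rate_per_bit * bram_bits)   # order 1e-3 upsets/device/s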

  18. Estimation of genotyping error rate from repeat genotyping, unintentional recaptures and known parent-offspring comparisons in 16 microsatellite loci for brown rockfish (Sebastes auriculatus).

    PubMed

    Hess, Maureen A; Rhydderch, James G; LeClair, Larry L; Buckley, Raymond M; Kawase, Mitsuhiro; Hauser, Lorenz

    2012-11-01

    Genotyping errors are present in almost all genetic data and can affect biological conclusions of a study, particularly for studies based on individual identification and parentage. Many statistical approaches can incorporate genotyping errors, but usually need accurate estimates of error rates. Here, we used a new microsatellite data set developed for brown rockfish (Sebastes auriculatus) to estimate genotyping error using three approaches: (i) repeat genotyping 5% of samples, (ii) comparing unintentionally recaptured individuals and (iii) Mendelian inheritance error checking for known parent-offspring pairs. In each data set, we quantified genotyping error rate per allele due to allele drop-out and false alleles. Genotyping error rate per locus revealed an average overall genotyping error rate by direct count of 0.3%, 1.5% and 1.7% (0.002, 0.007 and 0.008 per allele error rate) from replicate genotypes, known parent-offspring pairs and unintentionally recaptured individuals, respectively. By direct-count error estimates, the recapture and known parent-offspring data sets revealed an error rate four times greater than estimated using repeat genotypes. There was no evidence of correlation between error rates and locus variability for all three data sets, and errors appeared to occur randomly over loci in the repeat genotypes, but not in recaptures and parent-offspring comparisons. Furthermore, there was no correlation in locus-specific error rates between any two of the three data sets. Our data suggest that repeat genotyping may underestimate true error rates and may not estimate locus-specific error rates accurately. We therefore suggest using methods for error estimation that correspond to the overall aim of the study (e.g. known parent-offspring comparisons in parentage studies).
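
    A direct-count estimate of the per-allele error rate from repeat genotyping is just the number of mismatching allele calls divided by the number of allele comparisons. A minimal sketch of that calculation (with hypothetical toy arrays, not the study's data) is:

        import numpy as np

        def per_allele_error_rate(first_run, second_run):
            """Direct-count error rate from repeat genotyping.

            Each array has shape (n_samples, n_loci, 2): two allele calls per locus,
            assumed to be recorded in a consistent order in both runs. Missing calls
            are encoded as -1 and excluded from the comparison.
            """
            valid = (first_run >= 0) & (second_run >= 0)
            mismatches = (first_run != second_run) & valid
            return mismatches.sum() / valid.sum()

        # Hypothetical toy data: 3 samples x 2 loci x 2 alleles (fragment sizes).
        a = np.array([[[120, 124], [88, 90]],
                      [[120, 120], [88, 92]],
                      [[118, 124], [90, 90]]])
        b = a.copy()
        b[1, 1, 1] = 90                       # one allele mis-called on regenotyping
        print(per_allele_error_rate(a, b))    # 1/12, i.e. ~0.083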

  19. Highly porous thermal protection materials: Modelling and prediction of the methodical experimental errors

    NASA Astrophysics Data System (ADS)

    Cherepanov, Valery V.; Alifanov, Oleg M.; Morzhukhina, Alena V.; Budnik, Sergey A.

    2016-11-01

    The formation mechanisms and the main factors affecting the systematic error of thermocouples were investigated. According to the results of experimental studies and mathematical modelling it was established that in highly porous heat resistant materials for aerospace application the thermocouple errors are determined by two competing mechanisms provided correlation between the errors and the difference between radiation and conduction heat fluxes. The comparative analysis was carried out and some features of the methodical error formation related to the distances from the heated surface were established.

  20. People's Hypercorrection of High-Confidence Errors: Did They Know It All Along?

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2011-01-01

    This study investigated the "knew it all along" explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when people are given corrective feedback, errors that are committed with high confidence are easier to correct than low-confidence errors. Experiment 1 showed that people were more likely to…

  1. Research on high-precision laser displacement sensor-based error compensation model

    NASA Astrophysics Data System (ADS)

    Zhang, Zhifeng; Zhai, Yusheng; Su, Zhan; Qiao, Lin; Tang, Yiming; Wang, Xinjie; Su, Yuling; Song, Zhijun

    2015-08-01

    The triangulation measurement is a kind of active vision measurement. Laser triangulation displacement sensors are widely used because they are non-contact, highly precise and highly sensitive. However, the measurement error increases with nonlinearity and noise disturbance when the sensor works over a large distance. This paper introduces the principle of laser triangulation measurement, analyzes the measurement error and establishes an error compensation model. The spot centroid is extracted with digital image processing technology to increase the signal-to-noise ratio. Simulation and experimental results show that the method can meet the requirements of long-range, high-precision measurement.
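
    Sub-pixel spot localization of the kind mentioned here is often done with an intensity-weighted centroid after background thresholding; a minimal sketch of that step (a generic approach, not the authors' exact processing chain) is:

        import numpy as np

        def spot_centroid(image, threshold=0.1):
            """Intensity-weighted centroid of a laser spot in a grayscale image.

            Pixels below threshold * max are treated as background so that sensor
            noise far from the spot does not bias the estimate.
            """
            img = image.astype(float)
            img[img < threshold * img.max()] = 0.0
            total = img.sum()
            rows, cols = np.indices(img.shape)
            cy = (rows * img).sum() / total
            cx = (cols * img).sum() / total
            return cx, cy   # sub-pixel column/row position of the spot centre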

  2. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    SciTech Connect

    Chau, H.F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 - 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.
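
    The quoted threshold is simply the numerical value of the closed-form bound; as a quick check:

        import math

        threshold = 0.5 - 0.1 * math.sqrt(5)   # bit-error-rate bound from the abstract
        print(f"{threshold:.4f}")               # 0.2764, i.e. ~27.6%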

  3. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    NASA Astrophysics Data System (ADS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-11-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
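
    If the set-up/hold fluctuation is modelled as zero-mean Gaussian, the per-gate error probability is the tail probability of the timing margin, and a circuit-level error rate follows from the number of gate operations per second. The sketch below assumes that simple model and uses purely illustrative numbers; it is not the paper's statistical analysis.

        import math

        def gate_timing_error_prob(margin_ps, sigma_ps):
            """P(timing violation) for one gate, assuming the effective arrival-time
            fluctuation is zero-mean Gaussian with standard deviation sigma_ps."""
            return 0.5 * math.erfc(margin_ps / (math.sqrt(2.0) * sigma_ps))

        def circuit_error_rate(margin_ps, sigma_ps, n_gates, clock_hz):
            """Expected logical errors per second for n_gates switching every cycle."""
            return gate_timing_error_prob(margin_ps, sigma_ps) * n_gates * clock_hz

        # Illustrative numbers only (not from the paper): 1e6-bit shift register,
        # 50 GHz clock, 1 ps timing fluctuation, 10 ps timing margin.
        print(circuit_error_rate(margin_ps=10.0, sigma_ps=1.0,
                                 n_gates=1_000_000, clock_hz=50e9))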

  4. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 - 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.

  5. Compensation of spectral and RF errors in swept-source OCT for high extinction complex demodulation

    PubMed Central

    Siddiqui, Meena; Tozburun, Serhat; Zhang, Ellen Ziyi; Vakoc, Benjamin J.

    2015-01-01

    We provide a framework for compensating errors within passive optical quadrature demodulation circuits used in swept-source optical coherence tomography (OCT). Quadrature demodulation allows for detection of both the real and imaginary components of an interference fringe, and this information separates signals from positive and negative depth spaces. To achieve a high extinction (∼60 dB) between these positive and negative signals, the demodulation error must be less than 0.1% in amplitude and phase. It is difficult to construct a system that achieves this low error across the wide spectral and RF bandwidths of high-speed swept-source systems. In a prior work, post-processing methods for removing residual spectral errors were described. Here, we identify the importance of a second class of errors originating in the RF domain, and present a comprehensive framework for compensating both spectral and RF errors. Using this framework, extinctions >60 dB are demonstrated. A stability analysis shows that calibration parameters associated with RF errors are accurate for many days, while those associated with spectral errors must be updated prior to each imaging session. Empirical procedures to derive both RF and spectral calibration parameters simultaneously and to update spectral calibration parameters are presented. These algorithms provide the basis for using passive optical quadrature demodulation circuits with high speed and wide-bandwidth swept-source OCT systems. PMID:25836784
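
    The ~60 dB extinction requirement quoted here can be related to residual amplitude and phase imbalance of the quadrature pair through the standard small-error image-rejection approximation; the back-of-the-envelope sketch below uses that approximation as an assumption of this note, not the paper's error model.

        import math

        def image_extinction_db(amp_error, phase_error_rad):
            """Approximate suppression of the mirror (negative-depth) image for a
            quadrature pair with fractional amplitude error and phase error, using
            the small-error approximation: image/signal power ~ (eps^2 + phi^2) / 4."""
            image_to_signal = (amp_error**2 + phase_error_rad**2) / 4.0
            return -10.0 * math.log10(image_to_signal)

        # 0.1% amplitude error and 0.001 rad phase error -> roughly 60 dB extinction.
        print(image_extinction_db(1e-3, 1e-3))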

  6. Compensation of spectral and RF errors in swept-source OCT for high extinction complex demodulation.

    PubMed

    Siddiqui, Meena; Tozburun, Serhat; Zhang, Ellen Ziyi; Vakoc, Benjamin J

    2015-03-09

    We provide a framework for compensating errors within passive optical quadrature demodulation circuits used in swept-source optical coherence tomography (OCT). Quadrature demodulation allows for detection of both the real and imaginary components of an interference fringe, and this information separates signals from positive and negative depth spaces. To achieve a high extinction (∼60 dB) between these positive and negative signals, the demodulation error must be less than 0.1% in amplitude and phase. It is difficult to construct a system that achieves this low error across the wide spectral and RF bandwidths of high-speed swept-source systems. In a prior work, post-processing methods for removing residual spectral errors were described. Here, we identify the importance of a second class of errors originating in the RF domain, and present a comprehensive framework for compensating both spectral and RF errors. Using this framework, extinctions >60 dB are demonstrated. A stability analysis shows that calibration parameters associated with RF errors are accurate for many days, while those associated with spectral errors must be updated prior to each imaging session. Empirical procedures to derive both RF and spectral calibration parameters simultaneously and to update spectral calibration parameters are presented. These algorithms provide the basis for using passive optical quadrature demodulation circuits with high speed and wide-bandwidth swept-source OCT systems.

  7. Internal pressure gradient errors in σ-coordinate ocean models in high resolution fjord studies

    NASA Astrophysics Data System (ADS)

    Berntsen, Jarle; Thiem, Øyvind; Avlesen, Helge

    2015-08-01

    Terrain following ocean models are today applied in coastal areas and fjords where the topography may be very steep. Recent advances in high performance computing facilitate model studies with very high spatial resolution. In general, numerical discretization errors tend to zero with the grid size. However, in fjords and near the coast the slopes may be very steep, and the internal pressure gradient errors associated with σ-models may be significant even in high resolution studies. The internal pressure gradient errors are due to errors when estimating the density gradients in σ-models, and these errors are investigated for two idealized test cases and for the Hardanger fjord in Norway. The methods considered are the standard second order method and a recently proposed method that is balanced such that the density gradients are zero for the case ρ = ρ(z) where ρ is the density and z is the vertical coordinate. The results show that by using the balanced method, the errors may be reduced considerably also for slope parameters larger than the maximum suggested value of 0.2. For the Hardanger fjord case initialized with ρ = ρ(z) , the errors in the results produced with the balanced method are orders of magnitude smaller than the corresponding errors in the results produced with the second order method.

  8. Structure of turbulence at high shear rate

    NASA Technical Reports Server (NTRS)

    Lee, Moon Joo; Kim, John; Moin, Parviz

    1990-01-01

    The structure of homogeneous turbulence subject to high shear rate has been investigated by using three-dimensional, time-dependent numerical simulations of the Navier-Stokes equations. This study indicates that high shear rate alone is sufficient for generation of the streaky structures, and that the presence of a solid boundary is not necessary. Evolution of the statistical correlations is examined to determine the effect of high shear rate on the development of anisotropy in turbulence. It is shown that the streamwise fluctuating motions are enhanced so profoundly that a highly anisotropic turbulence state with a 'one-component' velocity field and 'two-component' vorticity field develops asymptotically as total shear increases. Because of high-shear rate, rapid distortion theory predicts remarkably well the anisotropic behavior of the structural quantities.

  9. High burn rate solid composite propellants

    NASA Astrophysics Data System (ADS)

    Manship, Timothy D.

    High burn rate propellants help maintain high levels of thrust without requiring complex, high surface area grain geometries. Utilizing high burn rate propellants allows for simplified grain geometries that are not only easier to produce, but also tend to have better mechanical strength, which is important in missiles undergoing high-g accelerations. Additionally, high burn rate propellants allow for higher volumetric loading, which reduces the overall missile's size and weight. The purpose of this study is to present methods of achieving a high burn rate propellant and to develop a composite propellant formulation that burns at 1.5 inches per second at 1000 psia. In this study, several means of achieving a high burn rate propellant were presented. In addition, several candidate approaches were evaluated using the Kepner-Tregoe method, with hydroxyl-terminated polybutadiene (HTPB)-based propellants using burn rate modifiers and dicyclopentadiene (DCPD)-based propellants being selected for further evaluation. Propellants with varying levels of nano-aluminum, nano-iron oxide, FeBTA, and overall solids loading were produced using the HTPB binder and evaluated in order to determine the effect the various ingredients have on the burn rate and to find a formulation that provides the desired burn rate. Experiments were conducted to compare the burn rates of propellants using the HTPB and DCPD binders. The DCPD formulation matched that of the baseline HTPB mix. Finally, an attempt was made to produce GAP-plasticized DCPD gumstock dogbones for mechanical evaluation. Results from the study show that nano-additives have a substantial effect on propellant burn rate, with nano-iron oxide having the largest influence. Of the formulations tested, the highest burn rate was an 84% solids loading mix using nano-aluminum, nano-iron oxide, and ammonium perchlorate in a 3:1 (20 micron:200 micron) ratio, which achieved a burn rate of 1.2 inches per second at 1000 psia.

  10. Considering the Role of Time Budgets on Copy-Error Rates in Material Culture Traditions: An Experimental Assessment

    PubMed Central

    Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J.

    2014-01-01

    Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural ‘mutation’, can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such ‘time constraints’ affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D ‘target handaxe form’ using a standardized foam block and a plastic knife. Three distinct ‘time conditions’ were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific ‘threshold’ might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a ‘threshold’ effect, below which mutation rates increase more markedly. Our results also suggest that ‘time budgets’ available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, ‘time-budgeting’ factors need to be given greater consideration in evolutionary models of material culture change. PMID:24809848

  11. People’s Hypercorrection of High Confidence Errors: Did They Know it All Along?

    PubMed Central

    Metcalfe, Janet; Finn, Bridgid

    2010-01-01

    This study investigated the ‘knew it all along’ explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when given corrective feedback, errors that are committed with high confidence are easier to correct than low confidence errors. Experiment 1 showed that people were more likely to claim that they ‘knew it all along,’ when they were given the answers to high confidence errors as compared to low confidence errors. Experiments 2 and 3 investigated whether people really did know the correct answers before being told, or whether the claim in Experiment 1 was mere hindsight bias. Experiment 2 showed that (1) participants were more likely to choose the correct answer in a second guess multiple-choice test when they had expressed an error with high rather than low confidence, and (2) that they were more likely to generate the correct answers to high confidence as compared to low confidence errors, after being told they were wrong and to try again. Experiment 3 showed that (3) people were more likely to produce the correct answer when given a two-letter cue to high rather than low confidence errors, and that (4) when feedback was scaffolded by presenting the target letters one by one, people needed fewer such letter prompts to reach the correct answers when they had committed high, rather than low confidence errors. These results converge on the conclusion that when people said that they ‘knew it all along’, they were right. This knowledge, no doubt, contributes to why they are able to correct those high confidence errors so easily. PMID:21355668

  12. People's hypercorrection of high-confidence errors: did they know it all along?

    PubMed

    Metcalfe, Janet; Finn, Bridgid

    2011-03-01

    This study investigated the "knew it all along" explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when people are given corrective feedback, errors that are committed with high confidence are easier to correct than low-confidence errors. Experiment 1 showed that people were more likely to claim that they knew it all along when they were given the answers to high-confidence errors as compared with low-confidence errors. Experiments 2 and 3 investigated whether people really did know the correct answers before being told or whether the claim in Experiment 1 was mere hindsight bias. Experiment 2 showed that (a) participants were more likely to choose the correct answer in a 2nd guess multiple-choice test when they had expressed an error with high rather than low confidence and (b) that they were more likely to generate the correct answers to high-confidence as compared with low-confidence errors after being told they were wrong and to try again. Experiment 3 showed that (c) people were more likely to produce the correct answer when given a 2-letter cue to high- rather than low-confidence errors and that (d) when feedback was scaffolded by presenting the target letters 1 by 1, people needed fewer such letter prompts to reach the correct answers when they had committed high- rather than low-confidence errors. These results converge on the conclusion that when people said that they knew it all along, they were right. This knowledge, no doubt, contributes to why they are able to correct those high-confidence errors so easily.

  13. Bit-error rate performance of coherent optical M-ary PSK/QAM using decision-aided maximum likelihood phase estimation.

    PubMed

    Yu, Changyuan; Zhang, Shaoliang; Kam, Pooi Yuen; Chen, Jian

    2010-06-07

    The bit-error rate (BER) expressions of 16-phase-shift keying (PSK) and 16-quadrature amplitude modulation (QAM) are analytically obtained in the presence of a phase error. By averaging over the statistics of the phase error, the performance penalty can be analytically examined as a function of the phase error variance. The phase error variances leading to a 1-dB signal-to-noise ratio per bit penalty at BER = 10^-4 have been found to be 8.7 x 10^-2 rad^2, 1.2 x 10^-2 rad^2, 2.4 x 10^-3 rad^2, 6.0 x 10^-4 rad^2 and 2.3 x 10^-3 rad^2 for binary, quadrature, 8- and 16-PSK and 16QAM, respectively. With the knowledge of the allowable phase error variance, the corresponding laser linewidth tolerance can be predicted. We extend the phase error variance analysis of decision-aided maximum likelihood carrier phase estimation in M-ary PSK to 16QAM, and successfully predict the laser linewidth tolerance in different modulation formats, which agrees well with the Monte Carlo simulations. Finally, approximate BER expressions for different modulation formats are introduced to allow a quick estimation of the BER performance as a function of the phase error variance. Further, the BER approximations give a lower bound on the laser linewidth requirements in M-ary PSK and 16QAM. It is shown that as far as laser linewidth tolerance is concerned, 16QAM outperforms 16PSK, which has the same spectral efficiency (SE), and has nearly the same performance as 8PSK, which has lower SE. Thus, 16QAM is a promising modulation format for high-SE coherent optical communications.
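
    For the simplest case (BPSK), the BER conditioned on a phase error θ is the textbook expression Q(√(2·Eb/N0)·cos θ), and the penalty can be examined by averaging it over a Gaussian phase-error distribution. The sketch below performs that averaging numerically under that assumed textbook form; the 16PSK/16QAM expressions derived in the paper are more involved and are not reproduced here.

        import math
        import numpy as np

        def qfunc(x):
            return 0.5 * math.erfc(x / math.sqrt(2.0))

        def bpsk_ber_with_phase_noise(ebno_db, phase_var, n_points=4001):
            """BER of BPSK averaged over a zero-mean Gaussian phase error with
            variance phase_var (rad^2), by numerical integration."""
            gamma = 10.0 ** (ebno_db / 10.0)
            sigma = math.sqrt(phase_var)
            theta = np.linspace(-6 * sigma, 6 * sigma, n_points)
            pdf = np.exp(-theta**2 / (2 * phase_var)) / math.sqrt(2 * math.pi * phase_var)
            ber_given_theta = np.array([qfunc(math.sqrt(2 * gamma) * math.cos(t))
                                        for t in theta])
            return np.trapz(ber_given_theta * pdf, theta)

        # Ideal BPSK at ~8.4 dB Eb/N0 gives BER near 1e-4; the phase-error variance
        # quoted in the abstract for binary PSK visibly degrades it.
        gamma_ref = 10 ** (8.4 / 10)
        print(qfunc(math.sqrt(2 * gamma_ref)))
        print(bpsk_ber_with_phase_noise(8.4, 8.7e-2))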

  14. Computation of the bit error rate of coherent M-ary PSK with Gray code bit mapping

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1986-01-01

    Efficient computation of the bit error rate (BER) for the coherent M-ary PSK signals with Gray code bit mapping is considered. A closed-form expression for the exact BER of 8-ary PSK is presented. Tight upper and lower bounds on BER are also obtained for M-ary PSK with larger M.
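
    A commonly used closed-form approximation for Gray-coded M-ary PSK (reasonable at moderate-to-high SNR and M ≥ 4) is Pb ≈ (2/log2 M)·Q(√(2·log2 M·Eb/N0)·sin(π/M)). The short sketch below evaluates it; this is a generic textbook approximation offered for orientation, not the exact expression or the bounds derived in the report.

        import math

        def qfunc(x):
            return 0.5 * math.erfc(x / math.sqrt(2.0))

        def gray_mpsk_ber_approx(m, ebno_db):
            """Approximate BER of coherent Gray-coded M-ary PSK in AWGN (M >= 4)."""
            k = math.log2(m)
            ebno = 10.0 ** (ebno_db / 10.0)
            return (2.0 / k) * qfunc(math.sqrt(2.0 * k * ebno) * math.sin(math.pi / m))

        for m in (4, 8, 16, 32):
            print(m, gray_mpsk_ber_approx(m, ebno_db=10.0))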

  15. Comparison of Self-Scoring Error Rate for SDS (Self Directed Search) (1970) and the Revised SDS (1977).

    ERIC Educational Resources Information Center

    Price, Gary E.; And Others

    A comparison of Self-Scoring Error Rate for Self Directed Search (SDS) and the revised SDS is presented. The subjects were college freshmen and sophomores who participated in career planning as a part of their orientation program, and a career workshop. Subjects, N=190 on first study and N=84 on second study, were then randomly assigned to the SDS…

  16. Error-rate estimation in discriminant analysis of non-linear longitudinal data: A comparison of resampling methods.

    PubMed

    de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente

    2016-07-08

    Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and .632+), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of study it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we pay special attention to situations where only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
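
    As a rough illustration of the resampling idea being compared, here is a minimal leave-one-out estimator of the misclassification error rate wrapped around a toy nearest-centroid rule; it is a generic sketch with synthetic data, not the non-linear mixed-effects classifier or the pregnancy data studied in the paper.

        import numpy as np

        def nearest_centroid_predict(train_x, train_y, test_x):
            """Toy classifier: assign each test point to the class with the nearest mean."""
            classes = np.unique(train_y)
            centroids = np.array([train_x[train_y == c].mean(axis=0) for c in classes])
            dists = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
            return classes[dists.argmin(axis=1)]

        def loo_error_rate(x, y):
            """Leave-one-out estimate of the misclassification error rate."""
            n, errors = len(y), 0
            for i in range(n):
                mask = np.arange(n) != i
                pred = nearest_centroid_predict(x[mask], y[mask], x[i:i + 1])
                errors += int(pred[0] != y[i])
            return errors / n

        rng = np.random.default_rng(0)
        x = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(1.5, 1.0, (30, 2))])
        y = np.array([0] * 30 + [1] * 30)
        print(loo_error_rate(x, y))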

  17. High-rate lithium thionyl chloride cells

    NASA Technical Reports Server (NTRS)

    Goebel, F.

    1982-01-01

    A high-rate C cell with disc electrodes was developed to demonstrate current rates which are comparable to other primary systems. The tests performed established the limits of abuse beyond which the cell becomes hazardous. Tests include: impact, shock, and vibration tests; temperature cycling; and salt water immersion of fresh cells.

  18. Multichannel analyzers at high rates of input

    NASA Technical Reports Server (NTRS)

    Rudnick, S. J.; Strauss, M. G.

    1969-01-01

    Multichannel analyzer, used with a gating system incorporating pole-zero compensation, pile-up rejection, and baseline-restoration, achieves good resolution at high rates of input. It improves resolution, reduces tailing and rate-contributed continuum, and eliminates spectral shift.

  19. Deconvolution of high rate flicker electroretinograms.

    PubMed

    Alokaily, A; Bóhorquez, J; Özdamar, Ö

    2014-01-01

    Flicker electroretinograms are steady-state electroretinograms (ERGs) generated by high rate flash stimuli that produce overlapping periodic responses. When a flash stimulus is delivered at low rates, a transient response named flash ERG (FERG) representing the activation of neural structures within the outer retina is obtained. Although FERGs and flicker ERGs are used in the diagnosis of many retinal diseases, their waveform relationships have not been investigated in detail. This study examines this relationship by extracting transient FERGs from specially generated quasi steady-state flicker and ERGs at stimulation rates above 10 Hz and similarly generated conventional flicker ERGs. The ability to extract the transient FERG responses by deconvolving flicker responses to temporally jittered stimuli at high rates is investigated at varying rates. FERGs were obtained from seven normal subjects stimulated with LED-based displays, delivering steady-state and low jittered quasi steady-state responses at five rates (10, 15, 32, 50, 68 Hz). The deconvolution method enabled a successful extraction of "per stimulus" unit transient ERG responses for all high stimulation rates. The deconvolved FERGs were used successfully to synthesize flicker ERGs obtained at the same high stimulation rates.

  20. ISS Update: High Rate Communications System

    NASA Image and Video Library

    ISS Update Commentator Pat Ryan interviews Diego Serna, Communications and Tracking Officer, about the High Rate Communications System. Questions? Ask us on Twitter @NASA_Johnson and include the ha...

  1. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  2. Figures deduction method for mast valuating interpolation errors of encoder with high precision

    NASA Astrophysics Data System (ADS)

    Yi, Jie; An, Li-min; Liu, Chun-xia

    2011-08-01

    With the development of technology, and in particular the need to rapidly and accurately track and point at ground and airborne targets, the high-precision photoelectric rotary encoder has become a research focus in the spaceflight and aviation fields, and the evaluation of the errors of high-precision encoders is one of the key problems that must be solved. For a high-precision encoder, the interpolation error is the main factor limiting its precision. Existing interpolation error detection relies on precise apparatus, such as small-angle measurement instruments and optical polygons, and must be carried out under strict laboratory conditions. Such detection is also time-consuming, difficult to perform and prone to introducing measurement errors of its own. This paper studies a fast evaluation method for the interpolation errors of high-precision encoders that can be applied in the field. Taking the Lissajous figure produced by the moiré fringe signals as a basis, the paper sets up a mathematical model of the radius vector to represent the figure's form deviation, analyses the parameter information implied in the moiré fringes and the relation between the radius vector deviation and the interpolation errors, and puts forward a figure-based method for evaluating interpolation errors. Using this figure deduction method, the interpolation errors are obtained directly from the harmonic components of the radius vector deviation, which map onto the harmonic components of the interpolation errors. The moiré fringe signal is transferred into the computer through a data acquisition card; the computer stores the data, analyses it with the figure evaluation method and plots the interpolation error curve. Compared with the interpolation errors obtained from the traditional detection method, the trend of the interpolation error curve is similar and the peak-to-peak values are nearly equal. The experimental results indicate that the method of this paper can be applied in the working field.

  3. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  4. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  5. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection of the star sensor, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to achieve a unified datum and high-precision attitude output. Finally, we construct the low frequency error model and optimally estimate the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite are used. Test results demonstrate that the calibration model in this paper can well describe the law of the low frequency error variation. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is obviously improved after the step-wise calibration.

  6. An FPGA-Based High-Speed Error Resilient Data Aggregation and Control for High Energy Physics Experiment

    NASA Astrophysics Data System (ADS)

    Mandal, Swagata; Saini, Jogender; Zabołotny, Wojciech M.; Sau, Suman; Chakrabarti, Amlan; Chattopadhyay, Subhasis

    2017-03-01

    Due to the dramatic increase of data volume in modern high energy physics (HEP) experiments, a robust high-speed data acquisition (DAQ) system is very much needed to gather the data generated during different nuclear interactions. As the DAQ works under a harsh radiation environment, there is a fair chance of data corruption due to various energetic particles such as alpha particles, beta particles, or neutrons. Hence, a major challenge in the development of DAQ for HEP experiments is to establish an error resilient communication system between front-end sensors or detectors and back-end data processing computing nodes. Here, we have implemented the DAQ using a field-programmable gate array (FPGA) due to some of its inherent advantages over the application-specific integrated circuit. A novel orthogonal concatenated code and cyclic redundancy check (CRC) have been used to mitigate the effects of data corruption in the user data. Scrubbing with a 32-b CRC has been used against errors in the configuration memory of the FPGA. Data from front-end sensors reach the back-end processing nodes through multiple stages that may add an uncertain amount of delay to the different data packets. We have also proposed a novel memory management algorithm that helps to process the data at the back-end computing nodes, removing the added path delays. To the best of our knowledge, the proposed FPGA-based DAQ utilizing an optical link with channel coding and efficient memory management modules can be considered the first of its kind. Performance estimation of the implemented DAQ system is done based on resource utilization, bit error rate, efficiency, and robustness to radiation.
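
    As a tiny illustration of the CRC-based integrity checking described here, the sketch below appends a CRC-32 to a payload and verifies it on receipt using Python's zlib. It illustrates the general mechanism only; the experiment's actual 32-bit CRC polynomial, framing and FPGA implementation are not taken from the paper.

        import zlib

        def add_crc(payload: bytes) -> bytes:
            """Append a 32-bit CRC so the receiver can detect corrupted frames."""
            crc = zlib.crc32(payload) & 0xFFFFFFFF
            return payload + crc.to_bytes(4, "big")

        def check_crc(frame: bytes) -> bool:
            """Return True if the trailing CRC-32 matches the payload."""
            payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
            return zlib.crc32(payload) & 0xFFFFFFFF == received

        frame = add_crc(b"front-end hit data")
        print(check_crc(frame))                            # True
        corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # single bit flip
        print(check_crc(corrupted))                        # False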

  7. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
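
    For the direct-beam component that dominates this error, the fractional error of a tilted sensor is cos(θ_inc)/cos(θ_sz) - 1, where θ_inc is the angle between the sun and the tilted sensor normal. The sketch below evaluates that standard plane-of-array geometric relation; it is an assumption of this note and not the spectral radiative-transfer simulation used in the paper, so it only approximately reproduces the quoted percentages.

        import math

        def direct_tilt_error(sza_deg, tilt_deg, sun_minus_tilt_azimuth_deg):
            """Fractional error in measured direct irradiance for a sensor tilted by
            tilt_deg, relative to a perfectly horizontal sensor."""
            sza = math.radians(sza_deg)
            tilt = math.radians(tilt_deg)
            dazi = math.radians(sun_minus_tilt_azimuth_deg)
            # Cosine of the incidence angle on the tilted plane.
            cos_inc = (math.cos(sza) * math.cos(tilt)
                       + math.sin(sza) * math.sin(tilt) * math.cos(dazi))
            return cos_inc / math.cos(sza) - 1.0

        # Sensor tilted 3 degrees towards the sun at a 60-degree solar zenith angle.
        print(f"{direct_tilt_error(60.0, 3.0, 0.0):+.1%}")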

  8. Dynamic evaluation system for interpolation errors in the encoder of high precision

    NASA Astrophysics Data System (ADS)

    Wan, Qiu-hua; Wu, Yong-zhi; Zhao, Chang-hai; Liang, Li-hui; Sun, Ying; Jiang, Yong

    2009-05-01

    In order to measure the dynamic interpolation errors of a high-precision photoelectric encoder, a dynamic evaluation system for interpolation errors is introduced. Firstly, the fine moiré signal of the encoder, collected into the computer with a high-speed data acquisition card, is converted to equiangular data by linear interpolation. Then, harmonic analysis with the FFT is performed. By comparison with the standard signal, the dynamic interpolation errors of the encoder are calculated. Experimental results show that the precision of the dynamic evaluation system for interpolation errors is ±0.1% (pitch). The evaluation system is simple, fast and highly precise, and can be used in the working field of the encoder.
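
    The resampling-plus-FFT step described here can be sketched as follows: the moiré signal samples are linearly interpolated onto an equiangular grid and the first few harmonics are read off the spectrum. This is a generic illustration with hypothetical arrays, not the system's actual software.

        import numpy as np

        def harmonic_amplitudes(angle, signal, n_points=4096, n_harmonics=5):
            """Resample a moiré signal onto an equiangular grid over one pitch and
            return the amplitudes of its first few harmonics.

            The angle samples are assumed to be monotonically increasing.
            """
            grid = np.linspace(angle.min(), angle.max(), n_points, endpoint=False)
            resampled = np.interp(grid, angle, signal)    # equiangular resampling
            spectrum = np.fft.rfft(resampled - resampled.mean())
            amplitudes = 2.0 * np.abs(spectrum) / n_points
            return amplitudes[1:n_harmonics + 1]          # harmonics 1..n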

  9. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  10. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.

  11. Turbulence structure at high shear rate

    NASA Technical Reports Server (NTRS)

    Lee, Moon Joo; Kim, John; Moin, Parviz

    1987-01-01

    The structure of homogeneous turbulence in the presence of a high shear rate is studied using results obtained from three-dimensional time-dependent numerical simulations of the Navier-Stokes equations on a grid of 512 x 128 x 128 node points. It is shown that high shear rate enhances the streamwise fluctuating motion to such an extent that a highly anisotropic turbulence state with a one-dimensional velocity field and two-dimensional small-scale turbulence develops asymptotically as total shear increases. Instantaneous velocity fields show that high shear rate in homogeneous turbulent shear flow produces structures which are similar to the streaks present in the viscous sublayer of turbulent boundary layers.

  12. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency we are taking advantage of the CUDA texture memory using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net .
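
    The spectrum membership query at the heart of this approach, deciding whether a k-mer belongs to the set of trusted ("solid") k-mers, can be served by a Bloom filter. Below is a minimal CPU-side sketch of such a filter, illustrating the data structure only; it is not the CUDA texture-memory implementation described in the paper, and the parameters are arbitrary.

        import hashlib

        class BloomFilter:
            """Space-efficient set-membership structure with false positives only."""

            def __init__(self, n_bits=1 << 20, n_hashes=4):
                self.n_bits = n_bits
                self.n_hashes = n_hashes
                self.bits = bytearray(n_bits // 8)

            def _positions(self, kmer: str):
                for i in range(self.n_hashes):
                    digest = hashlib.sha256(f"{i}:{kmer}".encode()).digest()
                    yield int.from_bytes(digest[:8], "big") % self.n_bits

            def add(self, kmer: str):
                for p in self._positions(kmer):
                    self.bits[p // 8] |= 1 << (p % 8)

            def __contains__(self, kmer: str):
                return all(self.bits[p // 8] & (1 << (p % 8))
                           for p in self._positions(kmer))

        spectrum = BloomFilter()
        spectrum.add("ACGTACGTACGTACGTACGTA")       # a trusted k-mer from the reads
        print("ACGTACGTACGTACGTACGTA" in spectrum)  # True
        print("TTTTTTTTTTTTTTTTTTTTT" in spectrum)  # False (with high probability)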

  13. Ultra High Strain Rate Nanoindentation Testing.

    PubMed

    Sudharshan Phani, Pardhasaradhi; Oliver, Warren Carl

    2017-06-17

    Strain rate dependence of indentation hardness has been widely used to study time-dependent plasticity. However, the currently available techniques limit the range of strain rates that can be achieved during indentation testing. Recent advances in electronics have enabled nanomechanical measurements with very low noise levels (sub nanometer) at fast time constants (20 µs) and high data acquisition rates (100 KHz). These capabilities open the doors for a wide range of ultra-fast nanomechanical testing, for instance, indentation testing at very high strain rates. With an accurate dynamic model and an instrument with fast time constants, step load tests can be performed which enable access to indentation strain rates approaching ballistic levels (i.e., 4000 1/s). A novel indentation based testing technique involving a combination of step load and constant load and hold tests that enables measurement of strain rate dependence of hardness spanning over seven orders of magnitude in strain rate is presented. A simple analysis is used to calculate the equivalent uniaxial response from indentation data and compared to the conventional uniaxial data for commercial purity aluminum. Excellent agreement is found between the indentation and uniaxial data over several orders of magnitude of strain rate.

  14. Error rates, PCR recombination, and sampling depth in HIV-1 whole genome deep sequencing.

    PubMed

    Zanini, Fabio; Brodin, Johanna; Albert, Jan; Neher, Richard A

    2016-12-27

    Deep sequencing is a powerful and cost-effective tool to characterize the genetic diversity and evolution of virus populations. While modern sequencing instruments readily cover viral genomes many thousand fold and very rare variants can in principle be detected, sequencing errors, amplification biases, and other artifacts can limit sensitivity and complicate data interpretation. For this reason, the number of studies using whole genome deep sequencing to characterize viral quasi-species in clinical samples is still limited. We have previously undertaken a large scale whole genome deep sequencing study of HIV-1 populations. Here we discuss the challenges, error profiles, control experiments, and computational test we developed to quantify the accuracy of variant frequency estimation.

  15. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  16. Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results

    SciTech Connect

    Clark, E.L.

    1994-07-01

    Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
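
    The Taylor-series (root-sum-square) propagation underlying such equations takes the form σ_r² = Σ (∂r/∂x_i)²·σ_{x_i}². The sketch below applies it to the pressure coefficient C_p = (p - p∞)/q∞ using finite-difference sensitivity coefficients; it is a generic illustration with hypothetical measurement values, not the closed-form sensitivity coefficients derived in the report.

        import math

        def cp(p, p_inf, q_inf):
            """Pressure coefficient from local static, free-stream static and dynamic pressure."""
            return (p - p_inf) / q_inf

        def propagated_sigma(func, values, sigmas, rel_step=1e-6):
            """Root-sum-square (first-order Taylor series) uncertainty of func(*values)."""
            var = 0.0
            for i, (v, s) in enumerate(zip(values, sigmas)):
                h = rel_step * max(abs(v), 1.0)
                hi = list(values); hi[i] = v + h
                lo = list(values); lo[i] = v - h
                sensitivity = (func(*hi) - func(*lo)) / (2.0 * h)   # d(func)/dx_i
                var += (sensitivity * s) ** 2
            return math.sqrt(var)

        # Hypothetical measurements (kPa) and one-sigma uncertainties.
        values = (92.0, 95.0, 28.0)          # p, p_inf, q_inf
        sigmas = (0.15, 0.15, 0.20)
        print(cp(*values), propagated_sigma(cp, values, sigmas))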

  17. Thrombus Formation at High Shear Rates.

    PubMed

    Casa, Lauren D C; Ku, David N

    2017-06-21

    The final common pathway in myocardial infarction and ischemic stroke is occlusion of blood flow from a thrombus forming under high shear rates in arteries. A high-shear thrombus forms rapidly and is distinct from the slow formation of coagulation that occurs in stagnant blood. Thrombosis at high shear rates depends primarily on the long protein von Willebrand factor (vWF) and platelets, with hemodynamics playing an important role in each stage of thrombus formation, including vWF binding, platelet adhesion, platelet activation, and rapid thrombus growth. The prediction of high-shear thrombosis is a major area of biofluid mechanics in which point-of-care testing and computational modeling are promising future directions for clinically relevant research. Further research in this area will enable identification of patients at high risk for arterial thrombosis, improve prevention and treatment based on shear-dependent biological mechanisms, and improve blood-contacting device design to reduce thrombosis risk.

  18. High Bit Rate Experiments Over ACTS

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A.; Gary, J. Patrick; Edelsen, Burt; Helm, Neil; Cohen, Judith; Shopbell, Patrick; Mechoso, C. Roberto; Chung-Chun; Farrara, M.; Spahr, Joseph

    1996-01-01

    This paper describes two high data rate experiments that are being developed for the gigabit NASA Advanced Communications Technology Satellite (ACTS). The first is a telescience experiment that remotely acquires image data at the Keck telescope from the Caltech campus. The second is a distributed global climate application that is run between two supercomputer centers interconnected by ACTS. The implementation approach for each is described along with the expected results. The ACTS high data rate (HDR) ground station is also described in detail.

  20. Orifice-induced pressure error studies in Langley 7- by 10-foot high-speed tunnel

    NASA Technical Reports Server (NTRS)

    Plentovich, E. B.; Gloss, B. B.

    1986-01-01

    For some time it has been known that the presence of a static pressure measuring hole will disturb the local flow field in such a way that the sensed static pressure will be in error. The results of previous studies of the error induced by the pressure orifice were for relatively low Reynolds number flows. Because of the advent of high Reynolds number transonic wind tunnels, a study was undertaken to assess the magnitude of this error at higher Reynolds numbers than previously published and to study a possible method of eliminating this pressure error. This study was conducted in the Langley 7- by 10-Foot High-Speed Tunnel on a flat plate. The model was tested at Mach numbers from 0.40 to 0.72 and at Reynolds numbers from 7.7 x 10^6 to 11 x 10^6 per meter (2.3 x 10^6 to 3.4 x 10^6 per foot), respectively. The results indicated that as orifice size increased, the pressure error also increased, but that a porous metal (sintered metal) plug inserted in an orifice could greatly reduce the pressure error induced by the orifice.

  1. TMF ultra-high rate discharge performance

    SciTech Connect

    Nelson, B.

    1997-12-01

    BOLDER Technologies Corporation has developed a valve-regulated lead-acid product line termed Thin Metal Film (TMF™) technology. It is characterized by extremely thin plates and close plate spacing that facilitate high rates of charge and discharge with minimal temperature increases, at levels unachievable with other commercially available battery technologies. This ultra-high rate performance makes TMF technology ideal for such applications as various types of engine start, high-drain-rate portable devices, and high-current pulsing. Data are presented on very high current continuous and pulse discharges. Power and energy relationships at various discharge rates are explored, and the fast-response characteristics of the BOLDER® cell are qualitatively defined. Short-duration recharge experiments show that devices powered by BOLDER batteries can be in operation for more than 90% of an extended usage period with multiple fast recharges. The BOLDER cell is well suited to applications such as engine start, a wide range of portable devices including power tools, hybrid electric vehicles, and pulse-power devices; an area of particular interest is ultra-high power delivery in excess of 1 kW/kg.

  2. Children with High Functioning Autism show increased prefrontal and temporal cortex activity during error monitoring

    PubMed Central

    Goldberg, Melissa C.; Spinelli, Simona; Joel, Suresh; Pekar, James J.; Denckla, Martha B.; Mostofsky, Stewart H.

    2010-01-01

    Evidence exists for deficits in error monitoring in autism. These deficits may be particularly important because they may contribute to excessive perseveration and repetitive behavior in autism. We examined the neural correlates of error monitoring using fMRI in 8–12-year-old children with high-functioning autism (HFA, n=11) and typically developing children (TD, n=15) during performance of a Go/No-Go task by comparing the neural correlates of commission errors versus correct response inhibition trials. Compared to TD children, children with HFA showed increased BOLD fMRI signal in the anterior medial prefrontal cortex (amPFC) and the left superior temporal gyrus (STempG) during commission error (versus correct inhibition) trials. A follow-up region-of-interest analysis also showed increased BOLD signal in the right insula in HFA compared to TD controls. Our findings of increased amPFC and STempG activity in HFA, together with the increased activity in the insula, suggest a greater attention towards the internally-driven emotional state associated with making an error in children with HFA. Since error monitoring occurs across different cognitive tasks throughout daily life, an increased emotional reaction to errors may have important consequences for early learning processes. PMID:21151713

  3. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1°, 3°, and 5° can respectively introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo.
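
    The direct-beam part of this tilt error follows from standard incidence-angle geometry; the short sketch below (an illustration, not the authors' radiative transfer setup) evaluates it for a sensor tilted toward the sun, which is the worst-case relative azimuth.

      # Direct-beam tilt error from the standard incidence-angle formula;
      # an illustrative geometric estimate only (diffuse light is ignored).
      import numpy as np

      def direct_tilt_error(zenith_deg, tilt_deg, rel_azimuth_deg):
          """Fractional error in the direct component seen by a tilted sensor."""
          z, b, a = np.radians([zenith_deg, tilt_deg, rel_azimuth_deg])
          cos_aoi = np.cos(z) * np.cos(b) + np.sin(z) * np.sin(b) * np.cos(a)
          return cos_aoi / np.cos(z) - 1.0

      # 3-degree tilt toward the sun at a 60-degree solar zenith angle:
      print(direct_tilt_error(60.0, 3.0, 0.0))   # ~0.09: direct beam reads ~9% high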

  4. Indirect measurement of a laser communications bit-error-rate reduction with low-order adaptive optics

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.; Canning, Douglas E.

    2003-07-01

    In experimental measurements of the bit-error rate for a laser communication system, we show improved performance with the implementation of low-order (tip/tilt) adaptive optics in a free-space link. With simulated atmospheric tilt injected by a conventional piezoelectric tilt mirror, an adaptive optics system with a Xinetics tilt mirror was used in a closed loop. The laboratory experiment replicated a monostatic propagation with a cooperative wave front beacon at the receiver. Owing to constraints in the speed of the processing hardware, the data is scaled to represent an actual propagation of a few kilometers under moderate scintillation conditions. We compare the experimental data and indirect measurement of the bit-error rate before correction and after correction, with a theoretical prediction.

  5. Indirect measurement of a laser communications bit-error-rate reduction with low-order adaptive optics.

    PubMed

    Tyson, Robert K; Canning, Douglas E

    2003-07-20

    In experimental measurements of the bit-error rate for a laser communication system, we show improved performance with the implementation of low-order (tip/tilt) adaptive optics in a free-space link. With simulated atmospheric tilt injected by a conventional piezoelectric tilt mirror, an adaptive optics system with a Xinetics tilt mirror was used in a closed loop. The laboratory experiment replicated a monostatic propagation with a cooperative wave front beacon at the receiver. Owing to constraints in the speed of the processing hardware, the data is scaled to represent an actual propagation of a few kilometers under moderate scintillation conditions. We compare the experimental data and indirect measurement of the bit-error rate before correction and after correction, with a theoretical prediction.

  6. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx military/aerospace-grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upset (MBU) are also discussed.
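
    The paper's own model is not reproduced here; as background, a common first-order estimate treats a scrubbed TMR design as failing only when at least two of the three redundant domains are upset within the same scrub interval, assuming independent upsets. The upset rate and scrub interval below are placeholder values.

      # First-order TMR failure estimate (not the paper's model): at least two
      # of three domains upset within one scrub interval, independent upsets.
      from math import comb, exp

      def tmr_failure_prob(domain_upset_rate_per_s, scrub_interval_s):
          """P(>= 2 of 3 TMR domains upset within one scrub interval)."""
          p = 1.0 - exp(-domain_upset_rate_per_s * scrub_interval_s)
          return sum(comb(3, k) * p**k * (1 - p)**(3 - k) for k in (2, 3))

      # Placeholder numbers: 1e-5 upsets/s per domain, 10 ms scrub period.
      print(tmr_failure_prob(1e-5, 0.01))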

  7. Bit-Error-Rate-Minimizing Channel Shortening Using Post-FEQ Diversity Combining and a Genetic Algorithm

    DTIC Science & Technology

    2009-03-10

  9. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
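
    The abstract states that the BER follows from the mean and standard deviation via the normal Gaussian function; one standard way to write this for binary signalling uses the complementary error function, as sketched below. The threshold treatment and the numeric levels are illustrative assumptions, not necessarily the paper's exact formulation.

      # Gaussian-noise BER for two signal levels; a standard textbook form,
      # not necessarily the paper's exact expression. Values are illustrative.
      from math import erfc, sqrt

      def gaussian_ber(mean_one, mean_zero, sigma_one, sigma_zero):
          """BER for two Gaussian-distributed levels and an optimal threshold."""
          q = abs(mean_one - mean_zero) / (sigma_one + sigma_zero)
          return 0.5 * erfc(q / sqrt(2.0))

      print(gaussian_ber(1.0, 0.0, 0.07, 0.07))   # Q ~ 7.1, BER well below 1e-10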

  10. High Resolution Measurement of the Glycolytic Rate

    PubMed Central

    Bittner, Carla X.; Loaiza, Anitsi; Ruminot, Iván; Larenas, Valeria; Sotelo-Hitschfeld, Tamara; Gutiérrez, Robin; Córdova, Alex; Valdebenito, Rocío; Frommer, Wolf B.; Barros, L. Felipe

    2010-01-01

    The glycolytic rate is sensitive to physiological activity, hormones, stress, aging, and malignant transformation. Standard techniques to measure the glycolytic rate are based on radioactive isotopes, are not able to resolve single cells and have poor temporal resolution, limitations that hamper the study of energy metabolism in the brain and other organs. A new method is described in this article, which makes use of a recently developed FRET glucose nanosensor to measure the rate of glycolysis in single cells with high temporal resolution. Used in cultured astrocytes, the method showed for the first time that glycolysis can be activated within seconds by a combination of glutamate and K+, supporting a role for astrocytes in neurometabolic and neurovascular coupling in the brain. It was also possible to make a direct comparison of metabolism in neurons and astrocytes lying in close proximity, paving the way to a high-resolution characterization of brain energy metabolism. Single-cell glycolytic rates were also measured in fibroblasts, adipocytes, myoblasts, and tumor cells, showing higher rates for undifferentiated cells and significant metabolic heterogeneity within cell types. This method should facilitate the investigation of tissue metabolism at the single-cell level and is readily adaptable for high-throughput analysis. PMID:20890447

  11. Type I Error Rates For A One Factor Within-Subjects Design With Missing Values

    PubMed Central

    Padilla, Miguel A.; Algina, James

    2006-01-01

    Missing data are a common problem in educational research. A promising technique, which can be implemented in SAS PROC MIXED and is therefore widely available, is to use maximum likelihood to estimate model parameters and base hypothesis tests on these estimates. However, it is not clear which test statistic in PROC MIXED performs better with missing data. The performance of the Hotelling-Lawley-McKeon and Kenward-Roger omnibus test statistics on the means for a single-factor within-subjects ANOVA is compared. The results indicate that the Kenward-Roger statistic performed better in terms of keeping the Type I error close to the nominal alpha level. PMID:16845436

  12. High rate, high reliability Li/SO2 cells

    NASA Astrophysics Data System (ADS)

    Chireau, R.

    1982-03-01

    The use of the lithium/sulfur dioxide system for aerospace applications is discussed. The high rate capability of the system is compared with that of some other primary systems: mercury-zinc, silver-zinc, and magnesium oxide. Estimates are provided of the storage life and shelf life of typical lithium/sulfur dioxide batteries. The design of lithium cells is presented and criteria are given for improving the output of cells in order to achieve high rate and high reliability.

  13. A Framework for Interpreting Type I Error Rates from a Product‐Term Model of Interaction Applied to Quantitative Traits

    PubMed Central

    Province, Michael A.

    2015-01-01

    ABSTRACT Adequate control of type I error rates will be necessary in the increasing genome‐wide search for interactive effects on complex traits. After observing unexpected variability in type I error rates from SNP‐by‐genome interaction scans, we sought to characterize this variability and test the ability of heteroskedasticity‐consistent standard errors to correct it. We performed 81 SNP‐by‐genome interaction scans using a product‐term model on quantitative traits in a sample of 1,053 unrelated European Americans from the NHLBI Family Heart Study, and additional scans on five simulated datasets. We found that the interaction‐term genomic inflation factor (lambda) showed inflation and deflation that varied with sample size and allele frequency; that similar lambda variation occurred in the absence of population substructure; and that lambda was strongly related to heteroskedasticity but not to minor non‐normality of phenotypes. Heteroskedasticity‐consistent standard errors narrowed the range of lambda, with HC3 outperforming HC0, but in individual scans tended to create new P‐value outliers related to sparse two‐locus genotype classes. We explain the lambda variation as a result of non‐independence of test statistics coupled with stochastic biases in test statistics due to a failure of the test to reach asymptotic properties. We propose that one way to interpret lambda is by comparison to an empirical distribution generated from data simulated under the null hypothesis and without population substructure. We further conclude that the interaction‐term lambda should not be used to adjust test statistics and that heteroskedasticity‐consistent standard errors come with limitations that may outweigh their benefits in this setting. PMID:26659945
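
    For context on the heteroskedasticity-consistent standard errors discussed above, the sketch below fits a product-term interaction model on simulated null data and compares ordinary and HC3 standard errors with statsmodels; the simulated genotypes and phenotype are invented, not the Family Heart Study data.

      # Illustrative product-term model on simulated null data, comparing
      # ordinary and HC3 (heteroskedasticity-consistent) standard errors.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 1000
      snp = rng.binomial(2, 0.2, n).astype(float)   # genotype 0/1/2, MAF 0.2
      score = rng.normal(size=n)                    # stand-in for a genome-wide score
      y = rng.normal(size=n)                        # phenotype simulated under the null

      X = sm.add_constant(np.column_stack([snp, score, snp * score]))
      ols = sm.OLS(y, X)
      print("OLS SE of interaction:", ols.fit().bse[-1])
      print("HC3 SE of interaction:", ols.fit(cov_type="HC3").bse[-1])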

  14. High Rate for Type IC Supernovae

    SciTech Connect

    Muller, R.A.; Marvin-Newberg, H.J.; Pennypacker, Carl R.; Perlmutter, S.; Sasseen, T.P.; Smith, C.K.

    1991-09-01

    Using an automated telescope we have detected 20 supernovae in carefully documented observations of nearby galaxies. The supernova rates for late spiral (Sbc, Sc, Scd, and Sd) galaxies, normalized to a blue luminosity of 10^10 L_B(sun), are 0.4 h^2, 1.6 h^2, and 1.1 h^2 per 100 years for SNe of type Ia, Ic, and II. The rate for type Ic supernovae is significantly higher than found in previous surveys. The rates are not corrected for detection inefficiencies, and do not take into account the indications that the Ic supernovae are fainter on average than the previous estimates; therefore the true rates are probably higher. The rates are not strongly dependent on the galaxy inclination, in contradiction to previous compilations. If the Milky Way is a late spiral, then the rate of Galactic supernovae is greater than 1 per 30 ± 7 years, assuming h = 0.75. This high rate has encouraging consequences for future neutrino and gravitational wave observatories.
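
    The normalization used in such surveys can be illustrated with a simple Poisson calculation: divide the event count by the product of the control time and the summed blue luminosity of the monitored galaxies, then scale to events per 10^10 L_B per century. The count, control time, and luminosity below are invented placeholders, not the survey's values.

      # Toy normalization of a supernova count to a rate per 10^10 L_B per
      # century, with a simple Poisson uncertainty; all inputs are invented.
      from math import sqrt

      n_sne = 6                    # detected events of one type
      control_time_yr = 4.0        # effective surveillance time per galaxy
      total_lum_1e10LB = 120.0     # summed blue luminosity of the sample

      denom = control_time_yr * total_lum_1e10LB
      rate = n_sne / denom * 100.0          # events per 10^10 L_B per century
      err = sqrt(n_sne) / denom * 100.0
      print(f"{rate:.2f} +/- {err:.2f} per 10^10 L_B per century")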

  15. Attention Deficit/Hyperactivity Disorder (ADHD): age related change of completion time and error rates of Stroop test.

    PubMed

    Thursina, Cempaka; Ar Rochmah, Mawaddah; Nurputra, Dian Kesumapramudya; Harahap, Indra Sari Kusuma; Harahap, Nur Imma Fatimah; Sa'Adah, Nihayatus; Wibowo, Samekto; Sutarni, Sri; Sadewa, Ahmad Hamim; Nishimura, Noriyuki; Mandai, Tsurue; Iijima, Kazumoto; Nishio, Hisahide; Kitayama, Shinji

    2015-04-07

    Attention Deficit/Hyperactivity Disorder (ADHD) is a common neurobehavioral problem in children throughout the world. The Stroop test has been widely used for the evaluation of ADHD symptoms. However, the age-related change of Stroop test results has not been fully clarified. Sixty-five ADHD and 70 age-matched control children aged 6-13 years were enrolled in this study. ADHD was diagnosed based on DSM-IV criteria. We examined the completion time and error rates of the Congruent Stroop test (CST) and Incongruent Stroop test (IST) in ADHD and control children. No significant difference was observed in the completion time for the CST or IST between the ADHD and control children at 6-9 years old. However, ADHD children at 10-13 years old showed significantly delayed completion times for the CST and IST compared with controls of the same age. As for the error rates of the CST and IST, ADHD and control children at 6-9 years old showed no difference. However, error rates of the CST and IST in ADHD children at 10-13 years old were significantly higher than those of controls of the same age. Age may influence the results of the Stroop test in ADHD children. For ages 10-13 years, the Stroop test clearly separates ADHD children from control children, suggesting that it may be a useful screening tool for ADHD among preadolescent children.

  16. High incidence of technical errors involving the EEA circular stapler: a single institution experience.

    PubMed

    Offodile, Anaeze C; Feingold, Daniel L; Nasar, Abu; Whelan, Richard L; Arnell, Tracey D

    2010-03-01

    The use of stapling devices is now widespread in colorectal resections. However, the incidence and clinical consequences of technical error involving the circular stapler are still poorly characterized. We reviewed the operative reports and Web-based charts for all colon and rectal resections performed at our institution that used a circular stapler. Technical error was defined as any deviation from the normal technical performance of the circular stapler, including, but not limited to, surgeon misfiring, incomplete anastomosis (inadequate donuts or staple line defects), and primary device failure. The unpaired t-test and chi-square test were used for statistical analysis, with significance set at p < 0.05. There were 349 colorectal resections performed, and 67 (19%) featured a technical error. Thirty-two resections (9%) included an anastomotic error. The control group (n = 282) and the error group (n = 67) were comparable with regard to leaks, reoperation, suture line strictures, and hospital stay. The malfunction group had higher incidences of proximal diversion (34% versus 16%; p = 0.0003), ileus (24% versus 8%; p = 0.002), gastrointestinal bleeding (4% versus 0.4%; p = 0.023), and transfusion requirements (13% versus 4%; p = 0.004). Although proximal diversions in the error cohort were also less likely to be planned (p < 0.001), reversal rates were similar in both groups (p = 0.28). The incidence of technical error involving the circular stapler is considerable. Technical error was found to be associated with a significantly higher risk of gastrointestinal bleeding, transfusions, and unplanned proximal diversions. Copyright 2010 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  17. [Hopes of high dose-rate radiotherapy].

    PubMed

    Fouillade, Charles; Favaudon, Vincent; Vozenin, Marie-Catherine; Romeo, Paul-Henri; Bourhis, Jean; Verrelle, Pierre; Devauchelle, Patrick; Patriarca, Annalisa; Heinrich, Sophie; Mazal, Alejandro; Dutreix, Marie

    2017-04-01

    In this review, we present the synthesis of the newly acquired knowledge concerning high dose-rate irradiations and the hopes that these new radiotherapy modalities give rise to. The results were presented at a recent symposium on the subject. Copyright © 2017. Published by Elsevier Masson SAS.

  18. Baltimore District Tackles High Suspension Rates

    ERIC Educational Resources Information Center

    Maxwell, Lesli A.

    2007-01-01

    This article reports on how the Baltimore District tackles its high suspension rates. Driven by an increasing belief that zero-tolerance disciplinary policies are ineffective, more educators are embracing strategies that do not exclude misbehaving students from school for offenses such as insubordination, disrespect, cutting class, tardiness, and…

  20. Understanding High School Graduation Rates in Georgia

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  1. Understanding High School Graduation Rates in Oklahoma

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  2. Understanding High School Graduation Rates in Kentucky

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  3. Understanding High School Graduation Rates in Nevada

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  4. Understanding High School Graduation Rates in Kansas

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  5. Understanding High School Graduation Rates in Connecticut

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  6. Understanding High School Graduation Rates in Indiana

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  7. Understanding High School Graduation Rates in Alaska

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  8. Understanding High School Graduation Rates in Hawaii

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  9. Understanding High School Graduation Rates in Wisconsin

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  10. Understanding High School Graduation Rates in Utah

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  11. Understanding High School Graduation Rates in Alabama

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  12. Understanding High School Graduation Rates in Maryland

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  13. Understanding High School Graduation Rates in Tennessee

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  14. Understanding High School Graduation Rates in Nebraska

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  15. Understanding High School Graduation Rates in Missouri

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  16. Understanding High School Graduation Rates in Arizona

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  17. Understanding High School Graduation Rates in Montana

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  18. Understanding High School Graduation Rates in Iowa

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  19. Understanding High School Graduation Rates in Vermont

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  20. Understanding High School Graduation Rates in California

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  1. Understanding High School Graduation Rates in Mississippi

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  2. Understanding High School Graduation Rates in Ohio

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  3. Understanding High School Graduation Rates in Illinois

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  4. Understanding High School Graduation Rates in Louisiana

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  5. Understanding High School Graduation Rates in Virginia

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  6. Understanding High School Graduation Rates in Florida

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  7. Understanding High School Graduation Rates in Delaware

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  8. Understanding High School Graduation Rates in Idaho

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  9. Understanding High School Graduation Rates in Maine

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  10. Understanding High School Graduation Rates in Massachusetts

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  11. Understanding High School Graduation Rates in Michigan

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  12. Understanding High School Graduation Rates in Pennsylvania

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  13. Understanding High School Graduation Rates in Oregon

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  14. Understanding High School Graduation Rates in Washington

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  15. Understanding High School Graduation Rates in Minnesota

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  16. Understanding High School Graduation Rates in Wyoming

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  17. Understanding High School Graduation Rates in Texas

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  18. Understanding High School Graduation Rates in Arkansas

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  19. Understanding High School Graduation Rates in Colorado

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  20. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  1. A miniature high repetition rate shock tube.

    PubMed

    Tranter, R S; Lynch, P T

    2013-09-01

    A miniature high repetition rate shock tube with excellent reproducibility has been constructed to facilitate high temperature, high pressure, gas phase experiments at facilities such as synchrotron light sources where space is limited and many experiments need to be averaged to obtain adequate signal levels. The shock tube is designed to generate reaction conditions of T > 600 K, P < 100 bars at a cycle rate of up to 4 Hz. The design of the apparatus is discussed in detail, and data are presented to demonstrate that well-formed shock waves with predictable characteristics are created, repeatably. Two synchrotron-based experiments using this apparatus are also briefly described here, demonstrating the potential of the shock tube for research at synchrotron light sources.
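
    The reaction conditions quoted above are set by the incident (and reflected) shock strength; for context, the standard ideal-gas normal-shock relations below give the conditions behind an incident shock from its Mach number. The gas properties and fill conditions are illustrative, not those of the miniature shock tube.

      # Standard ideal-gas normal-shock relations (textbook forms); the fill
      # temperature, pressure, and gamma are illustrative placeholders.
      def post_shock(M, T1=300.0, p1=0.1, gamma=1.4):
          """Return (T2 in K, p2 in bar) behind a normal shock of Mach number M."""
          p_ratio = (2.0 * gamma * M * M - (gamma - 1.0)) / (gamma + 1.0)
          T_ratio = ((2.0 * gamma * M * M - (gamma - 1.0))
                     * ((gamma - 1.0) * M * M + 2.0)
                     / ((gamma + 1.0) ** 2 * M * M))
          return T1 * T_ratio, p1 * p_ratio

      print(post_shock(3.0))   # roughly (804 K, 1.03 bar) for a Mach 3 shock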

  2. Role of high shear rate in thrombosis.

    PubMed

    Casa, Lauren D C; Deaton, David H; Ku, David N

    2015-04-01

    Acute arterial occlusions occur in high shear rate hemodynamic conditions. Arterial thrombi are platelet-rich when examined histologically compared with red blood cells in venous thrombi. Prior studies of platelet biology were not capable of accounting for the rapid kinetics and bond strengths necessary to produce occlusive thrombus under these conditions where the stasis condition of the Virchow triad is so noticeably absent. Recent experiments elucidate the unique pathway and kinetics of platelet aggregation that produce arterial occlusion. Large thrombi form from local release and conformational changes in von Willebrand factor under very high shear rates. The effect of high shear hemodynamics on thrombus growth has profound implications for the understanding of all acute thrombotic cardiovascular events as well as for vascular reconstructive techniques and vascular device design, testing, and clinical performance. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  3. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    SciTech Connect

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R.; National Inst. of Standards and Technology, Gaithersburg, MD )

    1990-01-01

    The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs which are fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The original procedure was very complicated and made no assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured independently at each point of the error map. A simpler mapping procedure, developed during the early 1980s, assumed rigid-body motion of the machine. This method has been used to calibrate lower accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid-body model has not yet been used on highly repeatable CMMs such as the M-48. In this report we present early mapping data for the two M-48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989.
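
    The rigid-body approach can be pictured as follows: parametric errors measured along each axis are tabulated once, then interpolated at each commanded position and subtracted. The sketch below is a deliberately simplified illustration with invented map values and only two error terms; a real map also carries angular (Abbe) and squareness terms for every axis.

      # Greatly simplified rigid-body error-map correction; map values are
      # invented and only scale and straightness terms of one axis are shown.
      import numpy as np

      x_map = np.array([0.0, 100.0, 200.0, 300.0, 400.0])        # mm
      x_scale_um = np.array([0.0, 0.4, 0.9, 1.1, 1.6])            # x positioning error
      y_straight_um = np.array([0.0, -0.2, -0.3, -0.1, 0.2])      # y straightness of x

      def correct(x_mm, y_mm, z_mm):
          """Return the error-corrected (x, y, z) for a commanded position, in mm."""
          dx = np.interp(x_mm, x_map, x_scale_um) * 1e-3
          dy = np.interp(x_mm, x_map, y_straight_um) * 1e-3
          return x_mm - dx, y_mm - dy, z_mm

      print(correct(250.0, 50.0, 10.0))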

  4. High strain rate behaviour of polypropylene microfoams

    NASA Astrophysics Data System (ADS)

    Gómez-del Río, T.; Garrido, M. A.; Rodríguez, J.; Arencón, D.; Martínez, A. B.

    2012-08-01

    Microcellular materials such as polypropylene foams are often used in protective applications and passive safety for packaging (electronic components, aeronautical structures, food, etc.) or personal safety (helmets, knee-pads, etc.). In such applications the foams are often designed to absorb the maximum energy and are generally subjected to severe loadings involving high strain rates. The manufacturing process for polymeric microcellular foams is based on saturating the polymer with a supercritical gas at high temperature and pressure. This method presents several advantages over conventional injection moulding techniques, which make it industrially feasible. However, the effect of processing conditions such as blowing agent, concentration, and microfoaming time and/or temperature on the microstructure of the resulting microcellular polymer (density, cell size, and geometry) has not yet been established. The compressive mechanical behaviour of several microcellular polypropylene foams has been investigated over a wide range of strain rates (0.001 to 3000 s^-1) in order to show the effects of the processing parameters and strain rate on the mechanical properties. High strain rate tests were performed using a Split Hopkinson Pressure Bar (SHPB) apparatus. Polypropylene and polyethylene-ethylene block copolymer foams of various densities were considered.
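
    For reference, the bar strain signals from an SHPB test are conventionally reduced to specimen stress and strain rate with the classical one-wave Kolsky relations, sketched below; the bar properties, specimen dimensions, and gauge signals are placeholder values rather than data from this work.

      # Classical one-wave SHPB (Kolsky bar) reduction; all numbers below are
      # placeholders, not measurements from this study.
      import numpy as np

      E_bar = 200e9        # bar Young's modulus, Pa
      c0 = 5000.0          # bar wave speed, m/s
      A_bar = 1.0e-4       # bar cross-sectional area, m^2
      A_spec = 5.0e-5      # specimen cross-sectional area, m^2
      L_spec = 5.0e-3      # specimen length, m

      t = np.linspace(0.0, 100e-6, 200)
      eps_reflected = -0.001 * np.ones_like(t)      # placeholder gauge signals
      eps_transmitted = 0.0001 * np.ones_like(t)

      strain_rate = -2.0 * c0 / L_spec * eps_reflected            # 1/s
      strain = np.cumsum(strain_rate) * (t[1] - t[0])
      stress = E_bar * A_bar / A_spec * eps_transmitted           # Pa
      print(strain_rate[0], stress[0] / 1e6)   # ~2000 1/s, ~40 MPa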

  5. Highly stable high-rate discriminator for nuclear counting

    NASA Technical Reports Server (NTRS)

    English, J. J.; Howard, R. H.; Rudnick, S. J.

    1969-01-01

    Pulse amplitude discriminator is specially designed for nuclear counting applications. At very high rates, the threshold is stable. The output-pulse width and the dead time change negligibly. The unit incorporates a provision for automatic dead-time correction.

  6. High-Rate Capable Floating Strip Micromegas

    NASA Astrophysics Data System (ADS)

    Bortfeldt, Jonathan; Bender, Michael; Biebel, Otmar; Danger, Helge; Flierl, Bernhard; Hertenberger, Ralf; Lösel, Philipp; Moll, Samuel; Parodi, Katia; Rinaldi, Ilaria; Ruschke, Alexander; Zibell, André

    2016-04-01

    We report on the optimization of discharge insensitive floating strip Micromegas (MICRO-MEsh GASeous) detectors, fit for use in high-energy muon spectrometers. The suitability of these detectors for particle tracking is shown in high-background environments and at very high particle fluxes up to 60 MHz/cm². Measurement and simulation of the microscopic discharge behavior have demonstrated the excellent discharge tolerance. A floating strip Micromegas with an active area of 48 cm × 50 cm with 1920 copper anode strips exhibits in 120 GeV pion beams a spatial resolution of 50 μm at detection efficiencies above 95%. Pulse height, spatial resolution and detection efficiency are homogeneous over the detector. Reconstruction of particle track inclination in a single detector plane is discussed; optimum angular resolutions below 5° are observed. Systematic deviations of this μTPC-method are fully understood. The reconstruction capabilities for minimum ionizing muons are investigated in a 6.4 cm × 6.4 cm floating strip Micromegas under intense background irradiation of the whole active area with 20 MeV protons at a rate of 550 kHz. The spatial resolution for muons is not distorted by space charge effects. A 6.4 cm × 6.4 cm floating strip Micromegas doublet with low material budget is investigated in highly ionizing proton and carbon ion beams at particle rates between 2 MHz and 2 GHz. Stable operation up to the highest rates is observed; spatial resolution, detection efficiencies, and the multi-hit and high-rate capability are discussed.

  7. Denoising DNA deep sequencing data-high-throughput sequencing errors and their correction.

    PubMed

    Laehnemann, David; Borkhardt, Arndt; McHardy, Alice Carolyn

    2016-01-01

    Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here.

  8. Phosphor thermometry at high repetition rates

    NASA Astrophysics Data System (ADS)

    Fuhrmann, N.; Brübach, J.; Dreizler, A.

    2013-09-01

    Phosphor thermometry is a semi-invasive surface temperature measurement technique utilizing the luminescence properties of thermographic phosphors. Typically these ceramic materials are coated onto the object of interest and are excited by a short UV laser pulse. Photomultipliers and high-speed camera systems are used to transiently detect the subsequently emitted luminescence decay, either point-wise or two-dimensionally resolved. Based on appropriate calibration measurements, the luminescence lifetime is converted to temperature. Up to now, primarily Q-switched laser systems with repetition rates of 10 Hz were employed for excitation. Accordingly, this diagnostic tool was not applicable to resolve correlated temperature transients at time scales shorter than 100 ms. For the first time, the authors realized a high-speed phosphor thermometry system combining a highly repetitive laser in the kHz regime and a fast decaying phosphor. A suitable material was characterized regarding its temperature-lifetime characteristic and precision. Additionally, the influence of laser power on the phosphor coating in terms of heating effects has been investigated. A demonstration of this high-speed technique has been conducted inside the thermally highly transient system of an optically accessible internal combustion engine. Temperatures have been measured with a repetition rate of one sample per crank angle degree at an engine speed of 1000 rpm. This experiment has proven that high-speed phosphor thermometry is a promising diagnostic tool for the resolution of surface temperature transients.
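
    A minimal sketch of the lifetime evaluation step described above, assuming a single-exponential decay and an invented lifetime-to-temperature calibration curve; neither the calibration nor the synthetic signal corresponds to the phosphor characterized in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, amplitude, tau, offset):
          return amplitude * np.exp(-t / tau) + offset

      def temperature_from_decay(t_us, signal,
                                 calibration=lambda tau: 800.0 - 120.0 * np.log10(tau)):
          # Fit a single-exponential decay, then map the lifetime to temperature
          # through the (assumed) calibration function: lifetime in us -> K.
          (_, tau, _), _ = curve_fit(decay, t_us, signal, p0=(signal.max(), 1.0, 0.0))
          return calibration(tau)

      # Synthetic example: a 2 us lifetime sampled every 0.1 us with a little noise
      t = np.arange(0.0, 20.0, 0.1)
      sig = decay(t, 1.0, 2.0, 0.01) + np.random.default_rng(0).normal(0.0, 0.005, t.size)
      print(temperature_from_decay(t, sig))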

  9. The effect of administrative boundaries and geocoding error on cancer rates in California.

    PubMed

    Goldberg, Daniel W; Cockburn, Myles G

    2012-04-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. Copyright © 2012 Elsevier Ltd. All rights reserved.
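
    The simplest of the areal interpolation choices discussed above is plain areal weighting. The hedged sketch below allocates ZIP-level case counts to counties in proportion to intersection area and converts them to crude rates per 100,000; the geopandas layers and column names (cases, population, county_id) are assumptions rather than the authors' data model, and both layers are assumed to share a projected coordinate system.

      import geopandas as gpd

      def county_rates_areal(zip_gdf, county_gdf):
          # Fraction of each ZIP polygon falling inside each county, used as the
          # allocation weight for that ZIP's case count.
          zip_gdf = zip_gdf.assign(zip_area=zip_gdf.geometry.area)
          pieces = gpd.overlay(zip_gdf, county_gdf, how="intersection")
          weight = pieces.geometry.area / pieces["zip_area"]
          pieces["allocated_cases"] = pieces["cases"] * weight
          cases_by_county = pieces.groupby("county_id")["allocated_cases"].sum()
          population = county_gdf.set_index("county_id")["population"]
          return 1e5 * cases_by_county / population    # crude rate per 100,000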

  10. The Effect of Administrative Boundaries and Geocoding Error on Cancer Rates in California

    PubMed Central

    Goldberg, Daniel W.; Cockburn, Myles G.

    2012-01-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. PMID:22469490

  11. Resident physicians' clinical training and error rate: the roles of autonomy, consultation, and familiarity with the literature.

    PubMed

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-03-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores the relationships between residents' error rates and three clinical training methods: (1) progressive independence or level of autonomy, (2) consulting the physician on call, and (3) familiarity with up-to-date medical literature, and whether these relationships vary between the specialties of surgery and internal medicine and between novice and experienced residents. 142 residents in 22 medical departments from two hospitals participated in the study. Results of a hierarchical linear model analysis indicated that lower levels of autonomy, higher levels of consultation with the physician on call, and higher levels of familiarity with up-to-date medical literature were associated with lower resident error rates. The associations varied between the internal medicine and surgery specializations and between novice and experienced residents. In conclusion, the study results suggest that the implicit curriculum that residents should be afforded autonomy and progressive independence with nominal supervision in accordance with their relevant skills and experience must be applied cautiously, depending on specialization and experience. In addition, it is necessary to create a supportive and judgment-free climate within the department that may reduce a resident's hesitation to consult the attending physician.

  12. Numerical errors in the real-height analysis of ionograms at high latitudes

    SciTech Connect

    Titheridge, J.E.

    1987-10-01

    A simple dual-range integration method for maintaining accuracy in the analysis of real-height ionograms at high latitudes up to a dip angle of 89 deg is presented. Numerical errors are reduced to zero for the start and valley calculations at all dip angles up to 89.9 deg. It is noted that the extreme errors which occur at high latitudes can alternatively be reduced by using a decreased value for the dip angle. An expression for the optimum dip angle for different integration orders and frequency intervals is given. 17 references.

  13. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    PubMed

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons: dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
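
    A much-simplified simulation-extrapolation (SIMEX) sketch in the spirit of the estimators described above: pseudo-errors with the estimated binomial variance w(1-w)/m are added at increasing multiples lambda, the naive slope is refit each time, and a quadratic in lambda is extrapolated back to lambda = -1. This illustrates the idea only; it is not the authors' weighted or corrected estimator.

      import numpy as np

      def simex_slope(w, m, y, lambdas=(0.5, 1.0, 1.5, 2.0), n_boot=200, seed=0):
          """w: observed proportions, m: binomial sample sizes, y: response."""
          rng = np.random.default_rng(seed)
          w, m, y = map(np.asarray, (w, m, y))
          sd = np.sqrt(w * (1 - w) / m)                  # estimated binomial measurement-error SD
          lam_grid = [0.0]
          naive = [np.polyfit(w, y, 1)[0]]               # naive slope with no added noise
          for lam in lambdas:
              slopes = [np.polyfit(w + np.sqrt(lam) * sd * rng.standard_normal(w.size), y, 1)[0]
                        for _ in range(n_boot)]
              lam_grid.append(lam)
              naive.append(np.mean(slopes))
          quad = np.polyfit(lam_grid, naive, 2)          # quadratic extrapolant in lambda
          return np.polyval(quad, -1.0)                  # extrapolate to the error-free case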

  14. High strain rate characterization of polymers

    NASA Astrophysics Data System (ADS)

    Siviour, Clive R.

    2017-01-01

    This paper reviews the literature on the response of polymers to high strain rate deformation. The main focus is on the experimental techniques used to characterize this response. The paper includes a small number of examples as well as references to experimental data over a wide range of rates, which illustrate the key features of rate dependence in these materials; however, this is by no means an exhaustive list. The aim of the paper is to give the reader unfamiliar with the subject an overview of the techniques available, with sufficient references from which further information can be obtained. In addition to the 'well established' techniques of the Hopkinson bar, Taylor impact and transverse impact, a discussion of the use of time-temperature superposition in interpreting and experimentally replicating high rate response is given, as is a description of new techniques in which mechanical parameters are derived by directly measuring wave propagation in specimens; these are particularly appropriate for polymers with low wave speeds. The vast topic of constitutive modelling is deliberately excluded from this review.

  15. High temperature electrochemical corrosion rate probes

    SciTech Connect

    Bullard, Sophie J.; Covino, Bernard S., Jr.; Holcomb, Gordon R.; Ziomek-Moroz, M.

    2005-09-01

    Corrosion occurs in the high temperature sections of energy production plants due to a number of factors: ash deposition, coal composition, thermal gradients, and low NOx conditions, among others. Electrochemical corrosion rate (ECR) probes have been shown to operate in high temperature gaseous environments that are similar to those found in fossil fuel combustors. ECR probes are rarely used in energy production plants at the present time, but if they were more fully understood, corrosion could become a process variable under the control of plant operators. Research is being conducted to understand the nature of these probes. Factors being considered are the values selected for the Stern-Geary constant, the effect of internal corrosion, and the presence of conductive corrosion scales and ash deposits. The nature of ECR probes will be explored in a number of different atmospheres and with different electrolytes (ash and corrosion product). Corrosion rates measured using an electrochemical instrument with multi-technique capabilities will be compared to those measured using the linear polarization resistance (LPR) technique. In future experiments, electrochemical corrosion rates will be compared to penetration corrosion rates determined using optical profilometry measurements.
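
    For orientation, a worked sketch of the linear polarization resistance (LPR) evaluation mentioned above: a Stern-Geary constant formed from assumed Tafel slopes converts the measured polarization resistance into a corrosion current density, which is then expressed as a penetration rate. The Tafel slopes, equivalent weight and density are generic example values, not probe-specific constants.

      def corrosion_rate_mm_per_year(Rp_ohm_cm2, ba_mV=120.0, bc_mV=120.0,
                                     equiv_weight=27.9, density_g_cm3=7.9):
          # Stern-Geary constant B (mV) from anodic/cathodic Tafel slopes
          B_mV = (ba_mV * bc_mV) / (2.303 * (ba_mV + bc_mV))
          # Corrosion current density: mV / (ohm*cm^2) -> microamp/cm^2
          i_corr_uA_cm2 = 1e3 * B_mV / Rp_ohm_cm2
          # Faraday-law conversion to a penetration rate in mm/year
          return 3.27e-3 * i_corr_uA_cm2 * equiv_weight / density_g_cm3

      print(corrosion_rate_mm_per_year(Rp_ohm_cm2=5000.0))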

  16. Can the Misinterpretation Amendment Rate Be Used as a Measure of Interpretive Error in Anatomic Pathology?: Implications of a Survey of the Directors of Anatomic and Surgical Pathology.

    PubMed

    Parkash, Vinita; Fadare, Oluwole; Dewar, Rajan; Nakhleh, Raouf; Cooper, Kumarasen

    2017-03-01

    A repeat survey of the Association of the Directors of Anatomic and Surgical Pathology, done 10 years after the original, was used to assess trends and variability in classifying scenarios as errors, and the preferred post-signout report modification for correcting error, by the membership of the Association of the Directors of Anatomic and Surgical Pathology. The results were analyzed to inform on whether interpretive amendment rates might act as surrogate measures of interpretive error in pathology. An analysis of the responses indicated that primary-level misinterpretations (benign to malignant and vice versa) were universally qualified as error; secondary-level misinterpretations or misclassifications were inconsistently labeled error. There was added variability in the preferred post-signout report modification used to correct report alterations. The classification of a scenario as error appeared to correlate with the severity of potential harm of the missed call, the perceived subjectivity of the diagnosis, and the ambiguity of reporting terminology. Substantial differences in policies for error detection and optimal reporting format were documented between departments. In conclusion, the inconsistency in labeling scenarios as error, disagreement about the optimal post-signout report modification for the correction of the error, and variability in error detection policies preclude the use of the misinterpretation amendment rate as a surrogate measure for error in anatomic pathology. There has been little change in uniformity of definition, attitudes and perception of interpretive error in anatomic pathology in the last 10 years.

  17. Analytical Modeling of High Rate Processes.

    DTIC Science & Technology

    2007-11-02

    Final report (01 Sep 94 - 31 Aug 97) on Analytical Modeling of High Rate Processes, by S. E. Jones, University Research Professor, Department of Aerospace Engineering and Mechanics, University of Alabama. Only fragments of the report documentation page are available in this record; no abstract text was recovered.

  18. HIGH ENERGY RATE EXTRUSION OF URANIUM

    DOEpatents

    Lewis, L.

    1963-07-23

    A method of extruding uranium at a high energy rate is described. Conditions during the extrusion are such that the temperature of the metal during extrusion reaches a point above the normal alpha to beta transition, but the metal nevertheless remains in the alpha phase in accordance with the Clausius-Clapeyron equation. Upon exiting from the die, the metal automatically enters the beta phase, after which the metal is permitted to cool. (AEC)

  19. High-rate systematic recursive convolutional encoders: minimal trellis and code search

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2012-12-01

    We consider high-rate systematic recursive convolutional encoders to be adopted as constituent encoders in turbo schemes. Douillard and Berrou showed that, despite its complexity, the construction of high-rate turbo codes by means of high-rate constituent encoders is advantageous over the construction based on puncturing rate-1/2 constituent encoders. To reduce the decoding complexity of high-rate codes, we introduce the construction of the minimal trellis for a systematic recursive convolutional encoding matrix. A code search is conducted and examples are provided which indicate that a more finely grained decoding complexity-error performance trade-off is obtained.

  20. Reserve, flowing electrolyte, high rate lithium battery

    NASA Astrophysics Data System (ADS)

    Puskar, M.; Harris, P.

    Flowing electrolyte Li/SOCl2 tests in single cell and multicell bipolar fixtures have been conducted, and measurements are presented for electrolyte flow rates, inlet and outlet temperatures, fixture temperatures at several points, and the pressure drop across the fixture. Reserve lithium batteries with flowing thionyl-chloride electrolytes are found to be capable of very high energy densities with usable voltages and capacities at current densities as high as 500 mA/sq cm. At this current density, a battery stack 10 inches in diameter is shown to produce over 60 kW of power while maintaining a safe operating temperature.

  1. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of the Lasso under long range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the consistency and the n^(1/2-d)-consistency of the Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of the Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the

  2. Error quantification of abnormal extreme high waves in Operational Oceanographic System in Korea

    NASA Astrophysics Data System (ADS)

    Jeong, Sang-Hun; Kim, Jinah; Heo, Ki-Young; Park, Kwang-Soon

    2017-04-01

    In the winter season, large-height swell-like waves have occurred on the East coast of Korea, causing property damage and loss of human life. It is known that these waves are generated by locally strong winds produced by temperate cyclones moving eastward in the East Sea of the Korean peninsula. Because the waves often occur in clear weather, the damage is particularly severe. Therefore, it is necessary to predict and forecast large-height swell-like waves in order to prevent and respond to coastal damage. In Korea, an operational oceanographic system (KOOS) has been developed by the Korea Institute of Ocean Science and Technology (KIOST); KOOS provides daily 72-hour ocean forecasts of wind, water elevation, sea currents, water temperature, salinity, and waves, computed not only from meteorological and hydrodynamic models (WRF, ROMS, MOM, and MOHID) but also from wave models (WW-III and SWAN). In order to evaluate model performance and guarantee a certain level of accuracy of the ocean forecasts, a Skill Assessment (SA) system was established as one of the modules in KOOS. It is performed by comparing model results with in-situ observation data, and model errors are quantified with skill scores. The statistics used in the skill assessment measure both errors and correlations, and include the root-mean-square error (RMSE), root-mean-square error percentage (RMSE%), mean bias (MB), correlation coefficient (R), scatter index (SI), circular correlation (CC) and central frequency (CF), the frequency with which errors lie within acceptable error criteria. These measures should be used not only to quantify errors but also to improve forecast accuracy by providing feedback interactively. However, for abnormal phenomena such as large-height swell-like waves on the East coast of Korea, a more advanced and optimized error quantification method is required that allows prediction of the abnormal
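
    A minimal sketch of the error statistics listed above (RMSE, RMSE%, mean bias, correlation coefficient, scatter index and central frequency); the acceptable-error criterion used for CF is an invented example value rather than the KOOS specification, and circular correlation is omitted.

      import numpy as np

      def skill_scores(model, obs, cf_criterion=0.5):
          model, obs = np.asarray(model, float), np.asarray(obs, float)
          err = model - obs
          rmse = np.sqrt(np.mean(err ** 2))
          return {
              "RMSE": rmse,
              "RMSE%": 100.0 * rmse / np.mean(np.abs(obs)),
              "MB": np.mean(err),                                  # mean bias
              "R": np.corrcoef(model, obs)[0, 1],                  # correlation coefficient
              "SI": rmse / np.mean(obs),                           # scatter index
              "CF": 100.0 * np.mean(np.abs(err) <= cf_criterion),  # % of errors within criterion
          }

      print(skill_scores([1.2, 2.1, 2.9, 4.2], [1.0, 2.0, 3.0, 4.0]))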

  3. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems differs from that of low data rate systems. Three simulations were built during the development phase of the Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling effort. The first was a model in SIMSCRIPT based upon the determination and processing of each event at each node. The second simulation was developed in C, based upon isolating the distinct objects that can be identified as the ring, the message, the node, and the set of critical events. The third model further identified the basic network functionality by creating a single object, the node, which includes the set of critical events that occur at the node; the ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. It should be stated that the language used was mainly selected by the model developer because of his past familiarity. Further, the models were not built with the intent to compare either structure or language; rather, because the problem was complex and the initial results contained obvious errors, alternative models were built to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand the modeling complexities. Each model is described along with its features and problems. The models are compared, and concluding observations and remarks are presented.

  4. Adjoint-field errors in high fidelity compressible turbulence simulations for sound control

    NASA Astrophysics Data System (ADS)

    Vishnampet, Ramanathan; Bodony, Daniel; Freund, Jonathan

    2013-11-01

    A consistent discrete adjoint for high-fidelity discretization of the three-dimensional Navier-Stokes equations is used to quantify the error in the sensitivity gradient predicted by the continuous adjoint method, and to examine the aeroacoustic flow-control problem for free-shear-flow turbulence. A particular quadrature scheme for approximating the cost functional makes our discrete adjoint formulation for a fourth-order Runge-Kutta scheme with high-order finite differences practical and efficient. The continuous adjoint-based sensitivity gradient is shown to be inconsistent due to discretization truncation errors, grid stretching and filtering near boundaries. These errors cannot be eliminated by increasing the spatial or temporal resolution since chaotic interactions lead them to become O(1) at the time of control actuation. Although this is a known behavior for chaotic systems, its effect on noise control is much harder to anticipate, especially given the different resolution needs of different parts of the turbulence and acoustic spectra. A comparison of energy spectra of the adjoint pressure fields shows significant error in the continuous adjoint at all wavenumbers, even though they are well-resolved. The effect of this error on the noise control mechanism is analyzed.

  5. Optical system error analysis and calibration method of high-accuracy star trackers.

    PubMed

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  6. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  7. Reducing Error Rates for Iris Image using higher Contrast in Normalization process

    NASA Astrophysics Data System (ADS)

    Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa

    2017-08-01

    The iris recognition system is one of the most secure and fastest means of identification and authentication. However, an iris recognition system suffers a setback from blurring, low contrast and poor illumination due to low quality images, which compromises the accuracy of the system. The acceptance or rejection rate of a verified user depends solely on the quality of the image. In many cases, an iris recognition system with low image contrast could falsely accept or reject a user. Therefore this paper adopts the Histogram Equalization Technique to address the problems of the False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. The histogram equalization technique enhances the image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that the Histogram Equalization Technique reduces the FRR and FAR compared to the existing techniques.
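
    A short sketch of the contrast enhancement applied at the normalization stage, as described above, using plain histogram equalization on an 8-bit grayscale iris image; OpenCV's equalizeHist is one standard implementation, and the file names are placeholders.

      import cv2

      # Load the unwrapped (normalized) iris strip as an 8-bit grayscale image
      iris = cv2.imread("normalized_iris.png", cv2.IMREAD_GRAYSCALE)
      # Spread the intensity histogram to neutralize low contrast
      equalized = cv2.equalizeHist(iris)
      cv2.imwrite("normalized_iris_equalized.png", equalized)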

  8. Error associated with model predictions of wildland fire rate of spread

    Treesearch

    Miguel G. Cruz; Martin E. Alexander

    2015-01-01

    How well can we expect to predict the spread rate of wildfires and prescribed fires? The degree of accuracy in model predictions of wildland fire behaviour characteristics is dependent on the model's applicability to a given situation, the validity of the model's relationships, and the reliability of the model input data (Alexander and Cruz 2013b). We...

  9. Compensating inherent linear move water application errors using a variable rate irrigation system

    USDA-ARS?s Scientific Manuscript database

    Continuous move irrigation systems such as linear move and center pivot irrigate unevenly when applying conventional uniform water rates due to the towers/motors stop/advance pattern. The effect of the cart movement pattern on linear move water application is larger on the first two spans which intr...

  10. High rate pulse processing algorithms for microcalorimeters

    SciTech Connect

    Rabin, Michael; Hoover, Andrew S; Bacrania, Mnesh K; Tan, Hui; Breus, Dimitry; Henning, Wolfgang; Sabourov, Konstantin; Collins, Jeff; Warburton, William K; Dorise, Bertrand; Ullom, Joel N

    2009-01-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Consequently, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. Large arrays, however, require as much pulse processing as possible to be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for processing. In this paper, they present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in the readout electronics that they are also currently developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses and thus achieving much higher output count rates than existing algorithms currently achieve. Details of these algorithms are presented, and their performance is compared to that of the 'optimal filter', which is the dominant pulse processing algorithm in the cryogenic-detector community.
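
    For context, the sketch below shows the core of an 'optimal filter' amplitude estimate under a white-noise assumption, where the noise-weighted template reduces to the template itself; it is a generic illustration of pulse-height estimation, not the real-time, pile-up-capable algorithms developed in the paper.

      import numpy as np

      def pulse_amplitude(record, template, pretrigger=100):
          """Project a baseline-subtracted record onto the pulse template."""
          record = np.asarray(record, float)
          record = record - record[:pretrigger].mean()     # baseline from pre-trigger samples
          template = np.asarray(template, float)
          return np.dot(template, record) / np.dot(template, template)

      # Synthetic example: flat pre-trigger region, exponential pulse of true amplitude 3.0
      n, pre = 512, 100
      t = np.arange(n)
      template = np.where(t >= pre, np.exp(-(t - pre) / 80.0), 0.0)
      rng = np.random.default_rng(1)
      record = 3.0 * template + 0.2 + rng.normal(0.0, 0.05, n)   # offset plus white noise
      print(pulse_amplitude(record, template, pretrigger=pre))   # ~3.0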

  11. High Strain Rate Behavior of Polyurea Compositions

    NASA Astrophysics Data System (ADS)

    Joshi, Vasant; Milby, Christopher

    2011-06-01

    Polyurea has been gaining importance in recent years due to its impact resistance properties. The actual compositions of this viscoelastic material must be tailored for specific use. It is therefore imperative to study the effect of variations in composition on the properties of the material. The high-strain-rate response of three polyurea compositions with varying molecular weights has been investigated using a Split Hopkinson Pressure Bar arrangement equipped with titanium bars. The polyurea compositions were synthesized from polyamines (Versalink, Air Products) with a multi-functional isocyanate (Isonate 143L, Dow Chemical). Amines with molecular weights of 1000, 650, and a blend of 250/1000 have been used in the current investigation. The materials have been tested up to strain rates of 6000/s. Results from these tests show interesting trends in the high rate behavior. While the higher molecular weight compositions show a lower yield, they do not show dominant hardening behavior. On the other hand, the blend of 250/1000 shows a higher load-bearing capability but lower strain hardening effects than the 650 and 1000 molecular weight amine-based materials. Refinement of the experimental methods and a comparison of results using an aluminum Split Hopkinson Bar are presented.
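
    For context, a hedged sketch of the standard one-wave Split Hopkinson Pressure Bar data reduction that underlies measurements of this kind: specimen strain rate from the reflected pulse and specimen stress from the transmitted pulse. The bar modulus, wave speed and geometry below are placeholder values, not those of the titanium-bar setup described above.

      import numpy as np

      def shpb_stress_strain(eps_reflected, eps_transmitted, dt,
                             bar_modulus=110e9, bar_wave_speed=4900.0,
                             bar_diameter=0.02, specimen_diameter=0.01, specimen_length=0.005):
          bar_area = np.pi * bar_diameter ** 2 / 4.0
          specimen_area = np.pi * specimen_diameter ** 2 / 4.0
          # One-wave analysis: strain rate from the reflected pulse (1/s)
          strain_rate = -2.0 * bar_wave_speed / specimen_length * np.asarray(eps_reflected)
          # Engineering strain by time integration
          strain = np.cumsum(strain_rate) * dt
          # Specimen stress from the transmitted pulse (Pa)
          stress = bar_modulus * (bar_area / specimen_area) * np.asarray(eps_transmitted)
          return strain, strain_rate, stress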

  12. High strain rate behavior of polyurea compositions

    NASA Astrophysics Data System (ADS)

    Joshi, Vasant S.; Milby, Christopher

    2012-03-01

    The high-strain-rate response of three polyurea compositions with varying molecular weights has been investigated using a Split Hopkinson Pressure Bar arrangement equipped with aluminum bars. Three polyurea compositions were synthesized from polyamines (Versalink, Air Products) with a multi-functional isocyanate (Isonate 143L, Dow Chemical). Amines with molecular weights of 1000, 650, and a blend of 250/1000 have been used in the current investigation. These materials have been tested to strain rates of over 6000/s. The high strain rate results from these tests show varying trends as a function of increasing strain. While the higher molecular weight compositions show a lower yield, they do not show dominant hardening behavior at lower strains. On the other hand, the blend of 250/1000 shows a higher load-bearing capability but lower strain hardening effects than the 650 and 1000 molecular weight amine-based materials. Results indicate that the initial increase in the modulus of the 250/1000 blend may lead to the loss of strain hardening characteristics as the material is compressed to 50% strain, compared to the 1000 molecular weight amine-based material.

  13. High Strain Rate Behavior of Nanoporous Tantalum

    NASA Astrophysics Data System (ADS)

    Ruestes, Carlos J.; Bringa, Eduardo M.; Stukowski, Alexander; Rodriguez Nieva, Joaquin F.; Bertolino, Graciela; Tang, Yizhe; Meyers, Marc A.

    2012-02-01

    Nano-scale failure under extreme conditions is not well understood. In addition to porosity arising from mechanical failure at high strain rates, porous structures also develop due to radiation damage. Therefore, understanding the role of porosity on mechanical behavior is important for the assessment and development of materials like metallic foams, and materials for new fission and fusion reactors, with improved mechanical properties. We carry out molecular dynamics (MD) simulations of a Tantalum (a model body-centered cubic metal) crystal with a collection of nanovoids under compression. The effects of high strain rate, ranging from 10^7 s^-1 to 10^10 s^-1, on the stress-strain curve and on dislocation activity are examined. We find massive total dislocation densities, and estimate a much lower density of mobile dislocations, due to the formation of junctions. Despite the large stress and strain rate, we do not observe twin formation, since nanopores are effective dislocation production sources. A significant fraction of dislocations survive unloading, unlike what happens in fcc metals, and future experiments might be able to study similar recovered samples and find clues to their plastic behavior during loading.

  14. High strain-rate magnetoelasticity in Galfenol

    NASA Astrophysics Data System (ADS)

    Domann, J. P.; Loeffler, C. M.; Martin, B. E.; Carman, G. P.

    2015-09-01

    This paper presents the experimental measurements of a highly magnetoelastic material (Galfenol) under impact loading. A Split-Hopkinson Pressure Bar was used to generate compressive stress up to 275 MPa at strain rates of either 20/s or 33/s while measuring the stress-strain response and change in magnetic flux density due to magnetoelastic coupling. The average Young's modulus (44.85 GPa) was invariant to strain rate, with instantaneous stiffness ranging from 25 to 55 GPa. A lumped parameters model simulated the measured pickup coil voltages in response to an applied stress pulse. Fitting the model to the experimental data provided the average piezomagnetic coefficient and relative permeability as functions of field strength. The model suggests magnetoelastic coupling is primarily insensitive to strain rates as high as 33/s. Additionally, the lumped parameters model was used to investigate magnetoelastic transducers as potential pulsed power sources. Results show that Galfenol can generate large quantities of instantaneous power (80 MW/m^3), comparable to explosively driven ferromagnetic pulse generators (500 MW/m^3). However, this process is much more efficient and can be cyclically carried out in the linear elastic range of the material, in stark contrast with explosively driven pulsed power generators.

  15. Rate Constants for Fine-structure Excitations in O–H Collisions with Error Bars Obtained by Machine Learning

    NASA Astrophysics Data System (ADS)

    Vieira, Daniel; Krems, Roman V.

    2017-02-01

    We present an approach using a combination of coupled channel scattering calculations with a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate constants for non-adiabatic transitions in inelastic atomic collisions to variations of the underlying adiabatic interaction potentials. Using this approach, we improve the previous computations of the rate constants for the fine-structure transitions in collisions of O(^3P_j) with atomic H. We compute the error bars of the rate constants corresponding to 20% variations of the ab initio potentials and show that this method can be used to determine which of the individual adiabatic potentials are more or less important for the outcome of different fine-structure changing collisions.
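
    A sketch of the statistical step described above: Gaussian Process regression through a handful of computed rate constants, with the predictive standard deviation serving as an error bar between training points. scikit-learn's GaussianProcessRegressor stands in for the Gaussian Process machinery; the training values and the kernel choice are assumptions made purely for illustration.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      T = np.array([[10.0], [30.0], [100.0], [300.0], [1000.0]])   # temperature grid (K)
      log_k = np.array([-11.2, -10.8, -10.5, -10.3, -10.1])        # log10 rate constant (made up)

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=1.0),
                                    normalize_y=True)
      gp.fit(np.log10(T), log_k)                                   # regress in log-log space

      T_query = np.logspace(1, 3, 50).reshape(-1, 1)
      mean, std = gp.predict(np.log10(T_query), return_std=True)   # std -> error bars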

  16. High strain rate deformation of layered nanocomposites.

    PubMed

    Lee, Jae-Hwang; Veysset, David; Singer, Jonathan P; Retsch, Markus; Saini, Gagan; Pezeril, Thomas; Nelson, Keith A; Thomas, Edwin L

    2012-01-01

    Insight into the mechanical behaviour of nanomaterials under the extreme condition of very high deformation rates and to very large strains is needed to provide improved understanding for the development of new protective materials. Applications include protection against bullets for body armour, micrometeorites for satellites, and high-speed particle impact for jet engine turbine blades. Here we use a microscopic ballistic test to report the responses of periodic glassy-rubbery layered block-copolymer nanostructures to impact from hypervelocity micron-sized silica spheres. Entire deformation fields are experimentally visualized at an exceptionally high resolution (below 10 nm) and we discover how the microstructure dissipates the impact energy via layer kinking, layer compression, extreme chain conformational flattening, domain fragmentation and segmental mixing to form a liquid phase. Orientation-dependent experiments show that the dissipation can be enhanced by 30% by proper orientation of the layers.

  17. High strain rate deformation of layered nanocomposites

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Hwang; Veysset, David; Singer, Jonathan P.; Retsch, Markus; Saini, Gagan; Pezeril, Thomas; Nelson, Keith A.; Thomas, Edwin L.

    2012-11-01

    Insight into the mechanical behaviour of nanomaterials under the extreme condition of very high deformation rates and to very large strains is needed to provide improved understanding for the development of new protective materials. Applications include protection against bullets for body armour, micrometeorites for satellites, and high-speed particle impact for jet engine turbine blades. Here we use a microscopic ballistic test to report the responses of periodic glassy-rubbery layered block-copolymer nanostructures to impact from hypervelocity micron-sized silica spheres. Entire deformation fields are experimentally visualized at an exceptionally high resolution (below 10 nm) and we discover how the microstructure dissipates the impact energy via layer kinking, layer compression, extreme chain conformational flattening, domain fragmentation and segmental mixing to form a liquid phase. Orientation-dependent experiments show that the dissipation can be enhanced by 30% by proper orientation of the layers.

  18. Civilian residential fire fatality rates: Six high-rate states versus six low-rate states

    NASA Astrophysics Data System (ADS)

    Hall, J. R., Jr.; Helzer, S. G.

    1983-08-01

    Results of an analysis of 1,600 fire fatalities occurring in six states with high fire-death rates and six states with low fire-death rates are presented. Reasons for the differences in rates are explored, with special attention to victim age, sex, race, and condition at time of ignition. Fire cause patterns are touched on only lightly but are addressed more extensively in the companion piece to this report, "Rural and Non-Rural Civilian Residential Fire Fatalities in Twelve States', NBSIR 82-2519.

  19. Reliability of perceived neighbourhood conditions and the effects of measurement error on self-rated health across urban and rural neighbourhoods.

    PubMed

    Pruitt, Sandi L; Jeffe, Donna B; Yan, Yan; Schootman, Mario

    2012-04-01

    Limited psychometric research has examined the reliability of self-reported measures of neighbourhood conditions, the effect of measurement error on associations between neighbourhood conditions and health, and potential differences in the reliabilities between neighbourhood strata (urban vs rural and low vs high poverty). We assessed overall and stratified reliability of self-reported perceived neighbourhood conditions using five scales (social and physical disorder, social control, social cohesion, fear) and four single items (multidimensional neighbouring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Using random-digit dialling, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2-3 weeks apart. Test-retest (intraclass correlation coefficients (ICC)/weighted κ) and internal consistency reliability (Cronbach's α) were assessed. Differences in reliability across neighbourhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. All measures demonstrated satisfactory internal consistency (α ≥ 0.70) and either moderate (ICC/κ=0.41-0.60) or substantial (ICC/κ=0.61-0.80) test-retest reliability in the full sample. Internal consistency did not differ by neighbourhood strata. Test-retest reliability was significantly lower among rural (vs urban) residents for two scales (social control, physical disorder) and two multidimensional neighbouring items; test-retest reliability was higher for physical disorder and lower for one multidimensional neighbouring item among the high (vs low) poverty strata. After measurement error correction, the magnitude of associations between neighbourhood conditions and self-rated health were larger, particularly in the rural population. Research is needed to develop and test reliable measures of perceived neighbourhood conditions relevant to the health of rural populations.

  20. High frame-rate digital radiographic videography

    SciTech Connect

    King, N.S.P.; Cverna, F.H.; Albright, K.L.; Jaramillo, S.A.; Yates, G.J.; McDonald, T.E.; Flynn, M.J.; Tashman, S.

    1994-09-01

    High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an X-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM demonstrated the system response to a high velocity/high contrast target. By gating the P-20 phosphor image from the X-ray image convertor with a second image intensifier (II) and using a 100-microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  1. High-frame-rate digital radiographic videography

    NASA Astrophysics Data System (ADS)

    King, Nicholas S. P.; Cverna, Frank H.; Albright, Kevin L.; Jaramillo, Steven A.; Yates, George J.; McDonald, Thomas E.; Flynn, Michael J.; Tashman, Scott

    1994-10-01

    High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an X-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM demonstrated the system response to a high velocity/high contrast target. By gating the P-20 phosphor image from the X-ray image convertor with a second image intensifier (II) and using a 100 microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  2. Senior High School Students' Errors on the Use of Relative Words

    ERIC Educational Resources Information Center

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  3. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    The various methods of high bit transition density encoding are presented, and their relative performance is compared insofar as error propagation characteristics, transition properties and system constraints are concerned. A computer simulation of the system, using the specific PN code recommended, is included.

  4. Fuel droplet burning rates at high pressures.

    NASA Technical Reports Server (NTRS)

    Canada, G. S.; Faeth, G. M.

    1973-01-01

    Combustion of methanol, ethanol, propanol-1, n-pentane, n-heptane, and n-decane was observed in air under natural convection conditions, at pressures up to 100 atm. The droplets were simulated by porous spheres, with diameters in the range from 0.63 to 1.90 cm. The pressure levels of the tests were high enough so that near-critical combustion was observed for methanol and ethanol. Due to the high pressures, the phase-equilibrium models of the analysis included both the conventional low-pressure approach as well as high-pressure versions, allowing for real gas effects and the solubility of combustion-product gases in the liquid phase. The burning-rate predictions of the various theories were similar, and in fair agreement with the data. The high-pressure theory gave the best prediction for the liquid-surface temperatures of ethanol and propanol-1 at high pressure. The experiments indicated the approach of critical burning conditions for methanol and ethanol at pressures on the order of 80 to 100 atm, which was in good agreement with the predictions of both the low- and high-pressure analysis.

  5. Influence of beam wander on bit-error rate in a ground-to-satellite laser uplink communication system.

    PubMed

    Ma, Jing; Jiang, Yijun; Tan, Liying; Yu, Siyuan; Du, Wenhe

    2008-11-15

    Based on weak fluctuation theory and the beam-wander model, the bit-error rate of a ground-to-satellite laser uplink communication system is analyzed, in comparison with the condition in which beam wander is not taken into account. Considering the combined effect of scintillation and beam wander, optimum divergence angle and transmitter beam radius for a communication system are researched. Numerical results show that both of them increase with the increment of total link margin and transmitted wavelength. This work can benefit the ground-to-satellite laser uplink communication system design.

  6. Packet error rate analysis of OOK, DPIM, and PPM modulation schemes for ground-to-satellite laser uplink communications.

    PubMed

    Jiang, Yijun; Tao, Kunyu; Song, Yiwei; Fu, Sen

    2014-03-01

    Performance of on-off keying (OOK), digital pulse interval modulation (DPIM), and pulse position modulation (PPM) schemes are researched for ground-to-satellite laser uplink communications. Packet error rates of these modulation systems are compared, with consideration of the combined effect of intensity fluctuation and beam wander. Based on the numerical results, performances of different modulation systems are discussed. Optimum divergence angle and transmitted beam radius of different modulation systems are indicated and the relations of the transmitted laser power to them are analyzed. This work can be helpful for modulation scheme selection and system design in ground-to-satellite laser uplink communications.

  7. Accurate Bit-Error Rate Evaluation for TH-PPM Systems in Nakagami Fading Channels Using Moment Generating Functions

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Gunawan, Erry; Law, Choi Look; Teh, Kah Chan

    Analytical expressions based on the Gauss-Chebyshev quadrature (GCQ) rule technique are derived to evaluate the bit-error rate (BER) for time-hopping pulse position modulation (TH-PPM) ultra-wideband (UWB) systems over a Nakagami-m fading channel. The analyses are validated by the simulation results and adopted to assess the accuracy of the commonly used Gaussian approximation (GA) method. The influence of the fading severity on the BER performance of the TH-PPM UWB system is investigated.
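
    As an illustration of the Gauss-Chebyshev quadrature (GCQ) idea named above, the sketch below evaluates the generic MGF-based average BER integral for binary signalling over a Nakagami-m channel; it is not the TH-PPM-specific expression derived in the paper, and the cross-check against direct numerical integration is included only to show that the quadrature converges.

      import numpy as np
      from scipy.integrate import quad

      def ber_nakagami_gcq(snr_avg, m=2.0, n=32):
          # Chebyshev nodes on (-1, 1), mapped to angles theta in (0, pi/2)
          k = np.arange(1, n + 1)
          x = np.cos((2 * k - 1) * np.pi / (2 * n))
          theta = np.arccos(x) / 2.0
          # Nakagami-m MGF evaluated at -1/sin^2(theta) times the average SNR
          integrand = (1.0 + snr_avg / (m * np.sin(theta) ** 2)) ** (-m)
          # GCQ estimate of (1/pi) * integral over (0, pi/2)
          return (np.pi / (2 * n)) * np.sum(integrand) / np.pi

      direct = quad(lambda t: (1 + 10.0 / (2.0 * np.sin(t) ** 2)) ** -2.0, 0, np.pi / 2)[0] / np.pi
      print(ber_nakagami_gcq(10.0, m=2.0), direct)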

  8. Evaluation of write error rate for voltage-driven dynamic magnetization switching in magnetic tunnel junctions with perpendicular magnetization

    NASA Astrophysics Data System (ADS)

    Shiota, Yoichi; Nozaki, Takayuki; Tamaru, Shingo; Yakushiji, Kay; Kubota, Hitoshi; Fukushima, Akio; Yuasa, Shinji; Suzuki, Yoshishige

    2016-01-01

    We investigated the write error rate (WER) for voltage-driven dynamic switching in magnetic tunnel junctions with perpendicular magnetization. We observed a clear oscillatory behavior of the switching probability with respect to the duration of the pulse voltage, which reveals the precessional motion of the magnetization during voltage application. We experimentally demonstrated a WER as low as 4 × 10^-3 at the pulse duration corresponding to a half precession period (~1 ns). The comparison between the results of the experiment and a simulation based on a macrospin model shows the possibility of an ultralow WER (<10^-15) under optimum conditions. This study provides a guideline for developing practical voltage-driven spintronic devices.

  9. The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals

    PubMed Central

    Westbrook, Johanna I; Baysari, Melissa T; Li, Ling; Burke, Rosemary; Richardson, Katrina L; Day, Richard O

    2013-01-01

    Objectives: To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS), and to determine if the rate of system-related prescribing errors is greater than the rate of errors prevented. Methods: Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS. System-related errors were classified by manifestation (eg, wrong dose), mechanism, and severity. A mechanism typology comprised errors made: selecting items from drop-down menus; constructing orders; editing orders; or failing to complete new e-PS tasks. Proportions and rates of errors by manifestation, mechanism, and e-PS were calculated. Results: 42.4% (n=493) of 1164 prescribing errors were system-related (78/100 admissions). This result did not differ by e-PS (MedChart 42.6% (95% CI 39.1 to 46.1); Cerner 41.9% (37.1 to 46.8)). For 13.4% (n=66) of system-related errors there was evidence that the error was detected prior to the study audit. 27.4% (n=135) of system-related errors manifested as timing errors and 22.5% (n=111) as wrong drug strength errors. Selection errors accounted for 43.4% (34.2/100 admissions), editing errors 21.1% (16.5/100 admissions), and failure to complete new e-PS tasks 32.0% (32.0/100 admissions). MedChart generated more selection errors (OR=4.17; p=0.00002) but fewer new task failures (OR=0.37; p=0.003) relative to the Cerner e-PS. The two systems prevented significantly more errors than they generated (220/100 admissions (95% CI 180 to 261) vs 78 (95% CI 66 to 91)). Conclusions: System-related errors are frequent, yet few are detected. e-PS require new tasks of prescribers, creating additional cognitive load and error opportunities. Dual classification, by manifestation and mechanism, allowed identification of design features which increase risk and potential solutions. e-PS designs with fewer drop-down menu selections may reduce error risk. PMID:23721982

  10. Microalgal separation from high-rate ponds

    SciTech Connect

    Nurdogan, Y.

    1988-01-01

    High rate ponding (HRP) processes are playing an increasing role in the treatment of organic wastewaters in sunbelt communities. Photosynthetic oxygenation by algae has proved to cost only one-seventh as much as mechanical aeration for activated sludge systems. During this study, an advanced HRP, which produces an effluent equivalent to tertiary treatment, has been studied. It emphasizes not only waste oxidation but also algal separation and nutrient removal. This new system is herein called advanced tertiary high rate ponding (ATHRP). Phosphorus removal in HRP systems is normally low because algal uptake of phosphorus is about one percent of their 200-300 mg/L dry weights. Precipitation of calcium phosphates by autoflocculation also occurs in HRP at high pH levels, but it is generally not complete due to insufficient calcium concentration in the pond. In the case of Richmond, where the studies were conducted, the sewage is very low in calcium. Therefore, enhancement of natural autoflocculation was studied by adding small amounts of lime to the pond. Through this simple procedure, phosphorus and nitrogen removals were virtually complete, justifying the terminology ATHRP.

  11. Phonetic and phonological errors in children with high functioning autism and Asperger syndrome.

    PubMed

    Cleland, Joanne; Gibbon, Fiona E; Peppé, Sue J E; O'Hare, Anne; Rutherford, Marion

    2010-02-01

    This study involved a qualitative analysis of speech errors in children with autism spectrum disorders (ASDs). Participants were 69 children aged 5-13 years; 30 had high functioning autism and 39 had Asperger syndrome. On a standardized test of articulation, a minority (12%) of participants presented with standard scores below the normal range, indicating a speech delay/disorder. Although all the other children had standard scores within the normal range, a sizeable proportion (33% of those with normal standard scores) presented with a small number of errors. Overall, 41% of the group produced at least some speech errors. The speech of children with ASD was characterized mainly by developmental phonological processes (gliding, cluster reduction and final consonant deletion most frequently), but non-developmental error types (such as phoneme-specific nasal emission and initial consonant deletion) were found both in children identified as performing below the normal range on the standardized speech test and in those who performed within the normal range. Non-developmental distortions occurred relatively frequently in the children with ASD, and previous studies of adolescents and adults with ASDs show similar errors, suggesting that they do not resolve over time. Whether or not speech disorders are related specifically to ASD, their presence adds an additional communication and social barrier and should be diagnosed and treated as early as possible in individual children.

  12. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  13. Reducing systematic centroid errors induced by fiber optic faceplates in intensified high-accuracy star trackers.

    PubMed

    Xiong, Kun; Jiang, Jie

    2015-05-26

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment.

  14. Orbit error correction on the high energy beam transport line at the KHIMA accelerator system

    NASA Astrophysics Data System (ADS)

    Park, Chawon; Yim, Heejoong; Hahn, Garam; An, Dong Hyun

    2016-09-01

    For the treatment of various cancers and for medical research, a synchrotron-based medical machine has been developed under the Korea Heavy Ion Medical Accelerator (KHIMA) project and is scheduled to begin treating patients at the beginning of 2018. The KHIMA synchrotron is designed to accelerate and extract carbon ion (proton) beams with various energies from 110 to 430 MeV/u (60 to 230 MeV). Studies on the lattice design and beam optics for the High Energy Beam Transport (HEBT) line at the KHIMA accelerator system have been carried out using the WinAgile and the MAD-X codes. Because magnetic field errors and misalignments introduce deviations from the design parameters, these error sources should be treated explicitly, and the sensitivity of the machine's lattice to different individual error sources should be considered. Various types of errors, both static and dynamic, have been taken into account and subsequently corrected with a dedicated correction algorithm using the MAD-X program. Based on the error analysis, the optimized correction setup is selected, and the specifications for the correcting magnets of the HEBT lines are determined.

  15. Innovations in high rate condensate polishing systems

    SciTech Connect

    O'Brien, M.

    1995-01-01

    Test work is being conducted at two major east coast utilities to evaluate flow distribution in high flow rate condensate polishing service vessels. The work includes core sample data used to map the flow distribution in vessels as originally manufactured. Underdrain modifications for improved flow distribution are discussed, with data that indicate performance increases of the service vessel following the modifications. The test work is ongoing, with preliminary data indicating that significant improvements in cycle run length are possible with underdrain modifications. The economic benefits of the above modifications are discussed.

  16. Error-estimation-guided rebuilding of de novo models increases the success rate of ab initio phasing.

    PubMed

    Shrestha, Rojan; Simoncini, David; Zhang, Kam Y J

    2012-11-01

    Recent advancements in computational methods for protein-structure prediction have made it possible to generate the high-quality de novo models required for ab initio phasing of crystallographic diffraction data using molecular replacement. Despite those encouraging achievements in ab initio phasing using de novo models, its success is limited only to those targets for which high-quality de novo models can be generated. In order to increase the scope of targets to which ab initio phasing with de novo models can be successfully applied, it is necessary to reduce the errors in the de novo models that are used as templates for molecular replacement. Here, an approach is introduced that can identify and rebuild the residues with larger errors, which subsequently reduces the overall C(α) root-mean-square deviation (CA-RMSD) from the native protein structure. The error in a predicted model is estimated from the average pairwise geometric distance per residue computed among selected lowest energy coarse-grained models. This score is subsequently employed to guide a rebuilding process that focuses on more error-prone residues in the coarse-grained models. This rebuilding methodology has been tested on ten protein targets that were unsuccessful using previous methods. The average CA-RMSD of the coarse-grained models was improved from 4.93 to 4.06 Å. For those models with CA-RMSD less than 3.0 Å, the average CA-RMSD was improved from 3.38 to 2.60 Å. These rebuilt coarse-grained models were then converted into all-atom models and refined to produce improved de novo models for molecular replacement. Seven diffraction data sets were successfully phased using rebuilt de novo models, indicating the improved quality of these rebuilt de novo models and the effectiveness of the rebuilding process. Software implementing this method, called MORPHEUS, can be downloaded from http://www.riken.jp/zhangiru/software.html.
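
    The rebuilding strategy above scores each residue by the average pairwise geometric distance of its position across the lowest-energy coarse-grained models, then rebuilds the most error-prone residues. The Python sketch below illustrates only that scoring step; it is not MORPHEUS code, the input models are random stand-ins assumed to be pre-superimposed, and the 80th-percentile rebuild threshold is an arbitrary assumption.

      import numpy as np

      def per_residue_error(models):
          """models: (n_models, n_residues, 3) C-alpha coordinates, superimposed
          onto a common frame; returns the mean pairwise distance per residue."""
          n_models = models.shape[0]
          scores = np.zeros(models.shape[1])
          for i in range(n_models):
              for j in range(i + 1, n_models):
                  scores += np.linalg.norm(models[i] - models[j], axis=1)
          return scores / (n_models * (n_models - 1) / 2)

      rng = np.random.default_rng(0)
      fake_models = rng.normal(size=(5, 120, 3))       # 5 models, 120 residues
      err = per_residue_error(fake_models)
      rebuild_mask = err > np.percentile(err, 80)      # flag the worst 20 percent
      print(err.shape, int(rebuild_mask.sum()))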

  17. Cervix cancer brachytherapy: high dose rate.

    PubMed

    Miglierini, P; Malhaire, J-P; Goasduff, G; Miranda, O; Pradier, O

    2014-10-01

    Cervical cancer, although less common in industrialized countries, is the fourth most common cancer affecting women worldwide and the fourth leading cause of cancer death. In developing countries, these cancers are often discovered at a later stage in the form of locally advanced tumours with a poor prognosis. Depending on the stage of the disease, treatment is mainly based on chemoradiotherapy followed by uterovaginal brachytherapy, with surgery of any residual tumour performed as needed or, for some teams, as a matter of principle. The role of irradiation is crucial to ensure better local control. It has been shown that the higher the delivered dose, the better the local results. In order to spare the organs at risk as much as possible and to allow this dose escalation, brachytherapy (intracavitary and/or interstitial) has been progressively introduced. Its evolution and progressive improvement have led to the development of high dose rate brachytherapy, whose advantages lie especially in the possibility of outpatient treatment while maintaining the effectiveness of other brachytherapy forms (i.e., low dose rate or pulsed dose rate). Numerous innovations have also been made in the field of imaging, leading to progress in treatment planning systems through the switch from two-dimensional to three-dimensional planning. Image-guided brachytherapy allows more precise target volume delineation as well as optimized dosimetry that permits better coverage of target volumes.

  18. High resolution Ge(Li) spectrometer reduces rate-dependent distortions at high counting rates

    NASA Technical Reports Server (NTRS)

    Brenner, R.; Larsen, R. N.; Mann, H. M.; Rudnick, S. J.; Sherman, I. S.; Strauss, M. G.

    1968-01-01

    Modified spectrometer system with a low-noise preamplifier reduces rate-dependent distortions at high counting rates, 25,000 counts per second. Pole-zero cancellation minimizes pulse undershoots due to multiple time constants, baseline restoration improves resolution and prevents spectral shifts.

  19. Slow-growing cells within isogenic populations have increased RNA polymerase error rates and DNA damage

    PubMed Central

    van Dijk, David; Dhar, Riddhiman; Missarova, Alsu M.; Espinar, Lorena; Blevins, William R.; Lehner, Ben; Carey, Lucas B.

    2015-01-01

    Isogenic cells show a large degree of variability in growth rate, even when cultured in the same environment. Such cell-to-cell variability in growth can alter sensitivity to antibiotics, chemotherapy and environmental stress. To characterize transcriptional differences associated with this variability, we have developed a method—FitFlow—that enables the sorting of subpopulations by growth rate. The slow-growing subpopulation shows a transcriptional stress response, but, more surprisingly, these cells have reduced RNA polymerase fidelity and exhibit a DNA damage response. As DNA damage is often caused by oxidative stress, we test the addition of an antioxidant, and find that it reduces the size of the slow-growing population. More generally, we find a significantly altered transcriptome in the slow-growing subpopulation that only partially resembles that of cells growing slowly due to environmental and culture conditions. Slow-growing cells upregulate transposons and express more chromosomal, viral and plasmid-borne transcripts, and thus explore a larger genotypic—and so phenotypic — space. PMID:26268986

  20. Should registrars be reporting after-hours CT scans? A calculation of error rate and the influencing factors in South Africa.

    PubMed

    Terreblanche, Owen D; Andronikou, Savvas; Hlabangana, Linda T; Brown, Taryn; Boshoff, Pieter E

    2012-02-01

    There is a heavy reliance on registrars for after-hours CT reporting, with a resultant unavoidable error rate. To determine the after-hours CT reporting error rate of radiology registrars and the factors influencing this error rate. A 2-month prospective study was undertaken at two tertiary, level 1 trauma centers in Johannesburg, South Africa. Provisional CT reports issued by the registrar on call were reviewed by a qualified radiologist the following morning, and information relating to the number, time and type of reporting errors made, as well as the body region scanned, indication for the scan, year of training of the registrar, and workload during the call, was recorded and analyzed. A total of 1477 CT scans were performed with an overall error rate of 17.1% and a major error rate of 7.7%. The error rate for 2nd, 3rd, and 4th year registrars was 19.4%, 15.1%, and 14.5%, respectively. A significant difference was found between the error rates for trauma scans (15.8%) and non-trauma scans (19.2%), although the difference between emergency scans (16.9%) and elective scans (22.6%) was not significant, a finding likely due to the low number of elective scans performed. Abdominopelvic scans elicited the highest number of errors (33.9%) compared to the other body regions such as head (16.5%) and cervical, thoracic, or lumbar spine (11.7%). Increasing workload resulted in a significant increase in error rate when analyzed with a generalized linear model. A significant difference was also noted between the time-of-scan groups, which we attributed to a workload effect. Missed findings were the most frequent errors seen (57.3%). We found an increasing error rate associated with increasing workload and a marked increase in errors with the reporting of abdominopelvic scans. The error rate decreased with increasing year of training, although the difference was significant only between the 2nd and 3rd year.
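
    The workload analysis mentioned above (a generalized linear model of error occurrence against workload) can be sketched as follows. The code uses simulated data; the workload distribution and effect size are assumptions, and this is not the study's code.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n_scans = 1477
      workload = rng.poisson(lam=12, size=n_scans).astype(float)  # scans reported that shift
      p_error = 1 / (1 + np.exp(-(-2.2 + 0.05 * workload)))       # assumed logistic effect
      error = rng.binomial(1, p_error)                            # 1 = report contained an error

      X = sm.add_constant(workload)
      fit = sm.GLM(error, X, family=sm.families.Binomial()).fit()
      print(fit.summary())
      print("overall error rate: %.1f%%" % (100 * error.mean()))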

  1. High-Rate Digital Receiver Board

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder; Bialas, Thomas; Brambora, Clifford; Fisher, David

    2004-01-01

    A high-rate digital receiver (HRDR) implemented as a peripheral component interconnect (PCI) board has been developed as a prototype of compact, general-purpose, inexpensive, potentially mass-producible data-acquisition interfaces between telemetry systems and personal computers. The installation of this board in a personal computer together with an analog preprocessor enables the computer to function as a versatile, high-rate telemetry-data-acquisition and demodulator system. The prototype HRDR PCI board can handle data at rates as high as 600 megabits per second, in a variety of telemetry formats, transmitted by diverse phase-modulation schemes that include binary phase-shift keying and various forms of quadrature phase-shift keying. Costing less than $25,000 (as of year 2003), the prototype HRDR PCI board supplants multiple racks of older equipment that, when new, cost over $500,000. Just as the development of standard network-interface chips has contributed to the proliferation of networked computers, it is anticipated that the development of standard chips based on the HRDR could contribute to reductions in size and cost and increases in performance of telemetry systems.

  2. High dose rate brachytherapy for oral cancer

    PubMed Central

    YamazakI, Hideya; Yoshida, Ken; Yoshioka, Yasuo; Shimizutani, Kimishige; Furukawa, Souhei; Koizumi, Masahiko; Ogawa, Kazuhiko

    2013-01-01

    Brachytherapy results in better dose distribution compared with other treatments because of steep dose reduction in the surrounding normal tissues. Excellent local control rates and acceptable side effects have been demonstrated with brachytherapy as a sole treatment modality, a postoperative method, and a method of reirradiation. Low-dose-rate (LDR) brachytherapy has been employed worldwide for its superior outcome. With the advent of technology, high-dose-rate (HDR) brachytherapy has enabled health care providers to avoid radiation exposure. This therapy has been used for treating many types of cancer such as gynecological cancer, breast cancer, and prostate cancer. However, LDR and pulsed-dose-rate interstitial brachytherapies have been mainstays for head and neck cancer. HDR brachytherapy has not become widely used in the radiotherapy community for treating head and neck cancer because of lack of experience and biological concerns. On the other hand, because HDR brachytherapy is less time-consuming, treatment can occasionally be administered on an outpatient basis. For the convenience and safety of patients and medical staff, HDR brachytherapy should be explored. To enhance the role of this therapy in treatment of head and neck lesions, we have reviewed its outcomes with oral cancer, including Phase I/II to Phase III studies, evaluating this technique in terms of safety and efficacy. In particular, our studies have shown that superficial tumors can be treated using a non-invasive mold technique on an outpatient basis without adverse reactions. The next generation of image-guided brachytherapy using HDR has been discussed. In conclusion, although concrete evidence is yet to be produced with a sophisticated study in a reproducible manner, HDR brachytherapy remains an important option for treatment of oral cancer. PMID:23179377

  3. Adults with Autism Show Increased Sensitivity to Outcomes at Low Error Rates during Decision-Making

    ERIC Educational Resources Information Center

    Minassian, Arpi; Paulus, Martin; Lincoln, Alan; Perry, William

    2007-01-01

    Decision-making is an important function that can be quantified using a two-choice prediction task. Individuals with Autistic Disorder (AD) often show highly restricted and repetitive behavior that may interfere with adaptive decision-making. We assessed whether AD adults showed repetitive behavior on the choice task that was unaffected by…

  4. Errors in administrative-reported ventilator-associated pneumonia rates: are never events really so?

    PubMed

    Thomas, Bradley W; Maxwell, Robert A; Dart, Benjamin W; Hartmann, Elizabeth H; Bates, Dustin L; Mejia, Vicente A; Smith, Philip W; Barker, Donald E

    2011-08-01

    Ventilator-associated pneumonia (VAP) is a common problem in an intensive care unit (ICU), although the incidence is not well established. This study aims to compare the VAP incidence as determined by the treating surgical intensivist with that detected by the hospital Infection Control Service (ICS). Trauma and surgical patients admitted to the surgical critical care service were prospectively evaluated for VAP during a 5-month time period. Collected data included the surgical intensivist's clinical VAP (SIS-VAP) assessment using Centers for Disease Control and Prevention (CDC) VAP criteria. As part of the hospital's VAP surveillance program, these patients' medical records were also reviewed by the ICS for VAP (ICS-VAP) using the same CDC VAP criteria. All patients suspected of having VAP underwent bronchoalveolar lavage (BAL). The SIS-VAP and ICS-VAP were then compared with BAL-VAP. Three hundred twenty-nine patients were admitted to the ICU during the study period. One hundred thirty-three were intubated longer than 48 hours and comprised our study population. Sixty-two patients underwent BAL evaluation for the presence of VAP on 89 occasions. SIS-VAP was diagnosed in 38 (28.5%) patients. ICS-VAP was identified in 11 (8.3%) patients (P < 0.001). The incidence of VAP by BAL criteria was 23.3 per cent. When compared with BAL, SIS-VAP had 61.3 per cent sensitivity and ICS-VAP had 29 per cent sensitivity. VAP rates reported by hospital administrative sources are significantly less accurate than physician-reported rates and dramatically underestimate the incidence of VAP. Proclaiming VAP a never event for critically ill surgical and trauma patients appears to be a fallacy.

  5. High counting rate resistive-plate chamber

    NASA Astrophysics Data System (ADS)

    Peskov, V.; Anderson, D. F.; Kwan, S.

    1993-05-01

    Parallel-plate avalanche chambers (PPAC) are widely used in physics experiments because they are fast (less than 1 ns) and have very simple construction: just two parallel metallic plates or mesh electrodes. Depending on the applied voltage they may work either in spark mode or avalanche mode. The advantage of the spark mode of operation is a large signal amplitude from the chamber; the disadvantage is that there is a large dead time (msec) for the entire chamber after an event. The main advantage of the avalanche mode is its high rate capability of 10^5 counts/mm^2. A resistive-plate chamber (RPC) is similar to the PPAC in construction except that one or both of the electrodes are made from high resistivity (greater than 10^10 Ω·cm) materials. In practice RPCs are usually used in the spark mode. Resistive electrodes are charged by sparks, locally reducing the actual electric field in the gap. The size of the charged surface is about 10 mm^2, leaving the rest of the detector unaffected. Therefore, the rate capability of such detectors in the spark mode is considerably higher than that of conventional spark counters. Among the different glasses tested the best results were obtained with electron-type conductive glasses, which obey Ohm's law. Most of the work with such glasses was done with high pressure parallel-plate chambers (10 atm) for time-of-flight measurements. Resistive glasses have been expensive and produced only in small quantities. Now resistive glasses are commercially available, although they are still expensive in small scale production. From the positive experience of different groups working with the resistive glasses, it was decided to revisit the old idea of using this glass for the RPC. This work has investigated the possibility of using the RPC at 1 atm and in the avalanche mode. This has several advantages: simplicity of construction, high rate capability, low voltage operation, and the ability to work with non-flammable gases.

  6. Between-Batch Pharmacokinetic Variability Inflates Type I Error Rate in Conventional Bioequivalence Trials: A Randomized Advair Diskus Clinical Trial.

    PubMed

    Burmeister Getz, E; Carroll, K J; Mielke, J; Benet, L Z; Jones, B

    2017-03-01

    We previously demonstrated pharmacokinetic differences among manufacturing batches of a US Food and Drug Administration (FDA)-approved dry powder inhalation product (Advair Diskus 100/50) large enough to establish between-batch bio-inequivalence. Here, we provide independent confirmation of pharmacokinetic bio-inequivalence among Advair Diskus 100/50 batches, and quantify residual and between-batch variance component magnitudes. These variance estimates are used to consider the type I error rate of the FDA's current two-way crossover design recommendation. When between-batch pharmacokinetic variability is substantial, the conventional two-way crossover design cannot accomplish the objectives of FDA's statistical bioequivalence test (i.e., cannot accurately estimate the test/reference ratio and associated confidence interval). The two-way crossover, which ignores between-batch pharmacokinetic variability, yields an artificially narrow confidence interval on the product comparison. The unavoidable consequence is type I error rate inflation, to ∼25%, when between-batch pharmacokinetic variability is nonzero. This risk of a false bioequivalence conclusion is substantially higher than asserted by regulators as acceptable consumer risk (5%). © 2016 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of The American Society for Clinical Pharmacology and Therapeutics.
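
    The mechanism described above, a confidence interval that ignores between-batch variability and is therefore artificially narrow, can be illustrated with a small simulation. The sketch below is a simplified paired-difference version on the log scale, not the paper's bioequivalence analysis, and all variance values are arbitrary.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)

      def coverage(n_subjects=40, sd_within=0.20, sd_batch=0.10, n_trials=5000, level=0.90):
          covered = 0
          for _ in range(n_trials):
              batch = rng.normal(0, sd_batch, size=2)      # one batch per product, per trial
              true_diff = 0.0                               # products truly identical on average
              d = (true_diff + batch[0] - batch[1]
                   + rng.normal(0, np.sqrt(2) * sd_within, size=n_subjects))
              half = (stats.t.ppf(0.5 + level / 2, n_subjects - 1)
                      * d.std(ddof=1) / np.sqrt(n_subjects))
              covered += (d.mean() - half) <= true_diff <= (d.mean() + half)
          return covered / n_trials

      print("coverage, batch variance ignored:", coverage(sd_batch=0.10))  # well below 0.90
      print("coverage, no batch variability  :", coverage(sd_batch=0.0))   # about 0.90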

  7. Between‐Batch Pharmacokinetic Variability Inflates Type I Error Rate in Conventional Bioequivalence Trials: A Randomized Advair Diskus Clinical Trial

    PubMed Central

    Carroll, KJ; Mielke, J; Benet, LZ; Jones, B

    2016-01-01

    We previously demonstrated pharmacokinetic differences among manufacturing batches of a US Food and Drug Administration (FDA)‐approved dry powder inhalation product (Advair Diskus 100/50) large enough to establish between‐batch bio‐inequivalence. Here, we provide independent confirmation of pharmacokinetic bio‐inequivalence among Advair Diskus 100/50 batches, and quantify residual and between‐batch variance component magnitudes. These variance estimates are used to consider the type I error rate of the FDA's current two‐way crossover design recommendation. When between‐batch pharmacokinetic variability is substantial, the conventional two‐way crossover design cannot accomplish the objectives of FDA's statistical bioequivalence test (i.e., cannot accurately estimate the test/reference ratio and associated confidence interval). The two‐way crossover, which ignores between‐batch pharmacokinetic variability, yields an artificially narrow confidence interval on the product comparison. The unavoidable consequence is type I error rate inflation, to ∼25%, when between‐batch pharmacokinetic variability is nonzero. This risk of a false bioequivalence conclusion is substantially higher than asserted by regulators as acceptable consumer risk (5%). PMID:27727445

  8. Movement error rate for evaluation of machine learning methods for sEMG-based hand movement classification.

    PubMed

    Gijsberts, Arjan; Atzori, Manfredo; Castellini, Claudio; Muller, Henning; Caputo, Barbara

    2014-07-01

    There has been increasing interest in applying learning algorithms to improve the dexterity of myoelectric prostheses. In this work, we present a large-scale benchmark evaluation on the second iteration of the publicly released NinaPro database, which contains surface electromyography data for 6 DOF force activations as well as for 40 discrete hand movements. The evaluation involves a modern kernel method and compares performance of three feature representations and three kernel functions. Both the force regression and movement classification problems can be learned successfully when using a nonlinear kernel function, while the exp-χ² kernel outperforms the more popular radial basis function kernel in all cases. Furthermore, combining surface electromyography and accelerometry in a multimodal classifier results in significant increases in accuracy as compared to when either modality is used individually. Since window-based classification accuracy should not be considered in isolation to estimate prosthetic controllability, we also provide results in terms of classification mistakes and prediction delay. To this end, we propose the movement error rate as an alternative to the standard window-based accuracy. This error rate is insensitive to prediction delays and therefore allows us to quantify mistakes and delays as independent performance characteristics. This type of analysis confirms that the inclusion of accelerometry is superior, as it results in fewer mistakes while at the same time reducing prediction delay.
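
    The movement error rate is defined in the paper itself; the sketch below encodes one simplified reading of the idea, in which each true movement segment is scored once, based on the most frequent non-rest prediction inside it, so the score does not depend on when within the segment the classifier settles on the correct label. It should not be taken as the authors' exact definition.

      import numpy as np

      def movement_error_rate(y_true, y_pred, rest_label=0):
          errors, n_movements, i = 0, 0, 0
          while i < len(y_true):
              if y_true[i] == rest_label:
                  i += 1
                  continue
              j = i
              while j < len(y_true) and y_true[j] == y_true[i]:
                  j += 1                                   # [i, j) is one movement segment
              n_movements += 1
              seg = y_pred[i:j]
              seg = seg[seg != rest_label]                 # ignore rest predictions (delay)
              if seg.size == 0 or np.bincount(seg).argmax() != y_true[i]:
                  errors += 1
              i = j
          return errors / max(n_movements, 1)

      y_true = np.array([0, 0, 3, 3, 3, 3, 0, 0, 5, 5, 5, 0])
      y_pred = np.array([0, 0, 0, 3, 3, 3, 3, 0, 0, 5, 5, 5])  # delayed but correct
      print(movement_error_rate(y_true, y_pred))                # -> 0.0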

  9. An Analysis of the Contributing Factors to the Fiscal Year 1985 MCDOSET (Marine Corps Disbursing On-Site Examination Teams) Error Rates of the Marine Corps Infantry Battalion.

    DTIC Science & Technology

    1986-03-01

    Figure 7. Monetary Error Rate in Relation to the Number of Additional Duties of the Personnel Officer (no relationship appears to exist). Figure 12. Monetary Error Rate in Relation to the Number of MOS 0131

  10. Detecting glaucoma progression from localized rates of retinal changes in parametric and nonparametric statistical framework with type I error control.

    PubMed

    Balasubramanian, Madhusudhanan; Arias-Castro, Ery; Medeiros, Felipe A; Kriegman, David J; Bowd, Christopher; Weinreb, Robert N; Holst, Michael; Sample, Pamela A; Zangwill, Linda M

    2014-03-19

    We evaluated three new pixelwise rates of retinal height changes (PixR) strategies to reduce false-positive errors while detecting glaucomatous progression. Diagnostic accuracy of nonparametric PixR-NP cluster test (CT), PixR-NP single threshold test (STT), and parametric PixR-P STT were compared to statistic image mapping (SIM) using the Heidelberg Retina Tomograph. We included 36 progressing eyes, 210 nonprogressing patient eyes, and 21 longitudinal normal eyes from the University of California, San Diego (UCSD) Diagnostic Innovations in Glaucoma Study. Multiple comparison problem due to simultaneous testing of retinal locations was addressed in PixR-NP CT by controlling family-wise error rate (FWER) and in STT methods by Lehmann-Romano's k-FWER. For STT methods, progression was defined as an observed progression rate (ratio of number of pixels with significant rate of decrease; i.e., red-pixels, to disk size) > 2.5%. Progression criterion for CT and SIM methods was presence of one or more significant (P < 1%) red-pixel clusters within disk. Specificity in normals: CT = 81% (90%), PixR-NP STT = 90%, PixR-P STT = 90%, SIM = 90%. Sensitivity in progressing eyes: CT = 86% (86%), PixR-NP STT = 75%, PixR-P STT = 81%, SIM = 39%. Specificity in nonprogressing patient eyes: CT = 49% (55%), PixR-NP STT = 56%, PixR-P STT = 50%, SIM = 79%. Progression detected by PixR in nonprogressing patient eyes was associated with early signs of visual field change that did not yet meet our definition of glaucomatous progression. The PixR provided higher sensitivity in progressing eyes and similar specificity in normals than SIM, suggesting that PixR strategies can improve our ability to detect glaucomatous progression. Longer follow-up is necessary to determine whether nonprogressing eyes identified as progressing by these methods will develop glaucomatous progression. (ClinicalTrials.gov number, NCT00221897).
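
    As a conceptual companion to the pixelwise-rate strategies above, the sketch below fits a per-pixel linear trend of retinal height over time, applies a simple Bonferroni family-wise correction, and flags progression when more than 2.5% of pixels show a significant decrease. It is a stand-in, not the PixR implementation; the simulated maps and the Bonferroni choice are assumptions.

      import numpy as np
      from scipy import stats

      def pixelwise_progression(heights, times, alpha=0.05, red_fraction=0.025):
          """heights: (n_visits, n_pixels) retinal height maps; times: (n_visits,)."""
          n_pixels = heights.shape[1]
          slopes = np.empty(n_pixels)
          p_vals = np.empty(n_pixels)
          for k in range(n_pixels):
              res = stats.linregress(times, heights[:, k])
              slopes[k], p_vals[k] = res.slope, res.pvalue
          # one-sided test for decrease, Bonferroni-corrected across all pixels
          red = (slopes < 0) & (p_vals / 2 < alpha / n_pixels)
          return red.mean() > red_fraction, float(red.mean())

      rng = np.random.default_rng(2)
      t = np.arange(8, dtype=float)                       # eight visits
      maps = rng.normal(scale=5.0, size=(8, 500))
      maps[:, :30] -= 8.0 * t[:, None]                    # 30 pixels thin over time
      print(pixelwise_progression(maps, t))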

  11. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    PubMed

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.

  12. Assessment of error rates in acoustic monitoring with the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR’s automated detection process uses a ‘score cutoff’, which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cut-offs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were  for song event detection.
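
    monitoR itself is an R package; the Python sketch below only illustrates the generic evaluation step described above, matching detections that exceed a score cutoff to human annotations within a time tolerance and counting true positives, false positives and false negatives. The tolerance and the example detections are hypothetical.

      import numpy as np

      def evaluate(det_times, det_scores, truth_times, cutoff, tol=1.0):
          det_times = np.asarray(det_times)[np.asarray(det_scores) >= cutoff]
          truth_times = np.asarray(truth_times)
          matched = np.zeros(len(truth_times), dtype=bool)
          fp = 0
          for t in det_times:
              dist = np.abs(truth_times - t)
              idx = dist.argmin()
              if dist[idx] <= tol and not matched[idx]:
                  matched[idx] = True                      # detection explains one annotation
              else:
                  fp += 1                                  # unmatched detection
          return int(matched.sum()), fp, int((~matched).sum())  # tp, fp, fn

      # hypothetical detector output and annotations (seconds into a survey)
      print(evaluate(det_times=[3.1, 10.0, 42.5, 55.0],
                     det_scores=[0.8, 0.4, 0.9, 0.7],
                     truth_times=[3.0, 42.0, 80.0],
                     cutoff=0.5))                          # -> (2, 1, 1)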

  13. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
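
    The inflation described above can be reproduced in miniature: regress a trait that is independent of a rare variant on that variant and count how often p < 0.05. The sketch below is illustrative only and is not the GAW19 pipeline; the sample size, minor allele frequency, and trait distributions are assumed values.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      n, maf, n_reps, alpha = 1000, 0.005, 2000, 0.05

      def type1_rate(trait_sampler):
          hits = 0
          for _ in range(n_reps):
              g = rng.binomial(2, maf, size=n).astype(float)   # rare SNV genotypes
              y = trait_sampler(n)                             # trait independent of g (null)
              if g.std() == 0:                                 # skip rare monomorphic draws
                  continue
              if stats.linregress(g, y).pvalue < alpha:
                  hits += 1
          return hits / n_reps

      print("normal trait:", type1_rate(lambda m: rng.normal(size=m)))
      print("gamma trait :", type1_rate(lambda m: rng.gamma(shape=0.5, size=m)))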

  14. Application of high-rate cutting tools

    NASA Astrophysics Data System (ADS)

    Moriarty, John L., Jr.

    1989-03-01

    Widespread application of the newest high-rate cutting tools to the most appropriate jobs is slowed by the sheer magnitude of developments in tool types, materials, workpiece applications, and by the rapid pace of change. Therefore, a study of finishing and roughing sizes of coated carbide inserts having a variety of geometries for single point turning was completed. The cutting tools were tested for tool life, chip quality, and workpiece surface finish at various cutting conditions with medium alloy steel. An empirical wear-life data base was established, and a computer program was developed to facilitate technology transfer, assist selection of carbide insert grades, and provide machine operating parameters. A follow-on test program was implemented suitable for next generation coated carbides, rotary cutting tools, cutting fluids, and ceramic tool materials.

  15. VARIABLE SELECTION FOR QUALITATIVE INTERACTIONS IN PERSONALIZED MEDICINE WHILE CONTROLLING THE FAMILY-WISE ERROR RATE

    PubMed Central

    Gunter, Lacey; Zhu, Ji; Murphy, Susan

    2012-01-01

    For many years, subset analysis has been a popular topic for the biostatistics and clinical trials literature. In more recent years, the discussion has focused on finding subsets of genomes which play a role in the effect of treatment, often referred to as stratified or personalized medicine. Though highly sought after, methods for detecting subsets with altering treatment effects are limited and lacking in power. In this article we discuss variable selection for qualitative interactions with the aim to discover these critical patient subsets. We propose a new technique designed specifically to find these interaction variables among a large set of variables while still controlling for the number of false discoveries. We compare this new method against standard qualitative interaction tests using simulations and give an example of its use on data from a randomized controlled trial for the treatment of depression. PMID:22023676

  16. High variability in strain estimation errors when using a commercial ultrasound speckle tracking algorithm on tendon tissue.

    PubMed

    Fröberg, Åsa; Mårtensson, Mattias; Larsson, Matilda; Janerot-Sjöberg, Birgitta; D'Hooge, Jan; Arndt, Anton

    2016-10-01

    Ultrasound speckle tracking offers a non-invasive way of studying strain in the free Achilles tendon where no anatomical landmarks are available for tracking. This provides new possibilities for studying injury mechanisms during sport activity and the effects of shoes, orthotic devices, and rehabilitation protocols on tendon biomechanics. To investigate the feasibility of using a commercial ultrasound speckle tracking algorithm for assessing strain in tendon tissue. A polyvinyl alcohol (PVA) phantom, three porcine tendons, and a human Achilles tendon were mounted in a materials testing machine and loaded to 4% peak strain. Ultrasound long-axis cine-loops of the samples were recorded. Speckle tracking analysis of axial strain was performed using a commercial speckle tracking software. Estimated strain was then compared to reference strain known from the materials testing machine. Two frame rates and two region of interest (ROI) sizes were evaluated. Best agreement between estimated strain and reference strain was found in the PVA phantom (absolute error in peak strain: 0.21 ± 0.08%). The absolute error in peak strain varied between 0.72 ± 0.65% and 10.64 ± 3.40% in the different tendon samples. Strain determined with a frame rate of 39.4 Hz had lower errors than 78.6 Hz as was the case with a 22 mm compared to an 11 mm ROI. Errors in peak strain estimation showed high variability between tendon samples and were large in relation to strain levels previously described in the Achilles tendon. © The Foundation Acta Radiologica 2016.

  17. Consideration of wear rates at high velocity

    NASA Astrophysics Data System (ADS)

    Hale, Chad S.

    The development of the research presented here is one in which high velocity relative sliding motion between two bodies in contact has been considered. Overall, the wear environment is truly three-dimensional. The attempt to characterize three-dimensional wear was not economically feasible because it must be analyzed at the micro-mechanical level to get results. Thus, an engineering approximation was carried out. This approximation was based on a metallographic study identifying the need to include viscoplasticity constitutive material models, coefficient of friction, relationships between the normal load and velocity, and the need to understand wave propagation. A sled test run at the Holloman High Speed Test Track (HHSTT) was considered for the determination of high velocity wear rates. In order to adequately characterize high velocity wear, it was necessary to formulate a numerical model that contained all of the physical events present. The experimental results of a VascoMax 300 maraging steel slipper sliding on an AISI 1080 steel rail during a January 2008 sled test mission were analyzed. During this rocket sled test, the slipper traveled 5,816 meters in 8.14 seconds and reached a maximum velocity of 1,530 m/s. This type of environment was never considered previously in terms of wear evaluation. Each of the features of the metallography were obtained through micro-mechanical experimental techniques. The byproduct of this analysis is that it is now possible to formulate a model that contains viscoplasticity, asperity collisions, temperature and frictional features. Based on the observations of the metallographic analysis, these necessary features have been included in the numerical model, which makes use of a time-dynamic program which follows the movement of a slipper during its experimental test run. The resulting velocity and pressure functions of time have been implemented in the explicit finite element code, ABAQUS. Two-dimensional, plane strain models

  18. Performance Evaluation of High-Rate GPS Seismometers

    NASA Astrophysics Data System (ADS)

    Kato, T.; Ebinuma, T.

    2011-12-01

    High-rate GPS observations with higher than once-per-second sampling are becoming increasingly important for seismology. Unlike a traditional seismometer, which measures short-period vibration using accelerometers, the GPS receiver can measure its antenna position directly and record long-period seismic waves and permanent displacements as well. The high-rate GPS observations are expected to provide new insights into the whole earthquake process. In this study, we investigated the dynamic characteristics of high-rate GPS receivers capable of outputting observations at up to 50 Hz. This higher output rate, however, does not imply a higher dynamic range of the GPS observations. Since many GPS receivers are designed for low-dynamics applications, such as static survey and personal and car navigation, the bandwidth of the loop filters tends to be narrower in order to reduce the noise level of the observations. The signal tracking loop works like a low-pass filter. Thus the narrower the bandwidth, the lower the dynamic range. In order to extend this dynamical limit, high-rate GPS receivers might use a wider loop bandwidth for phase tracking. In this case, the GPS observations are degraded by a higher noise level in return. In addition to the limitation of the loop bandwidth, higher accelerations due to an earthquake may cause steady-state errors in the signal tracking loop. As a result, kinematic solutions experience undesirable position offsets, or the receiver may lose the GPS signals in an extreme case. In order to examine those effects for the high-rate GPS observations, we performed an experiment using a GPS signal simulator and several geodetic GPS receivers, including Trimble Net-R8, NovAtel OEMV, Topcon Net-G3A, and Javad SIGMA-G2T. We set up a zero-baseline simulation scenario in which the rover receiver was vibrating in a periodic motion with frequencies from 1 Hz to 10 Hz around the reference station. The amplitude of the motion was chosen to provide

  19. Consideration of Wear Rates at High Velocities

    DTIC Science & Technology

    2010-03-01

    ... models to reduce numerical errors during finite element simulations. Additionally, Cameron [7] and Cinnamon [9] used a filleted leading edge for CTH ... Cinnamon et al. [9; 10; 11] performed flyer plate experiments to determine these constants, which are shown in Table 3.5. 3.3.4 Equation of State. An ... EOS input set: eos material1 ses grepxy1 (epoxy rail coating, Cinnamon/Cameron); material2 ses iron

  20. Estimation of chromatic errors from broadband images for high contrast imaging

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2015-09-01

    Usage of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission to follow the James Webb Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped-pupil coronagraph with a conjugate DM and compute the resulting estimation error.

  1. The Influence of Relatives on the Efficiency and Error Rate of Familial Searching

    PubMed Central

    Rohlfs, Rori V.; Murphy, Erin; Song, Yun S.; Slatkin, Montgomery

    2013-01-01

    We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For Y-chromosome sharing first degree relatives, the Myers protocol has a high probability () of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype sharing relatives (half-siblings, first cousins, half-first cousins or second cousins) there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases. PMID:23967076

  2. High Data Rate Architecture (HiDRA)

    NASA Technical Reports Server (NTRS)

    Hylton, Alan; Raible, Daniel

    2016-01-01

    high-rate laser terminals. These must interface with the existing, aging data infrastructure. The High Data Rate Architecture (HiDRA) project is designed to provide networked store, carry, and forward capability to optimize data flow through both the existing radio frequency (RF) and new laser communications terminal. The networking capability is realized through the Delay Tolerant Networking (DTN) protocol, and is used for scheduling data movement as well as optimizing the performance of existing RF channels. HiDRA is realized as a distributed FPGA memory and interface controller that is itself controlled by a local computer running DTN software. Thus HiDRA is applicable to other arenas seeking to employ next-generation communications technologies, e.g. deep space. In this paper, we describe HiDRA and its far-reaching research implications.

  3. High false positive rates in common sensory threshold tests.

    PubMed

    Running, Cordelia A

    2015-02-01

    Large variability in thresholds to sensory stimuli is observed frequently even in healthy populations. Much of this variability is attributed to genetics and day-to-day fluctuation in sensitivity. However, false positives are also contributing to the variability seen in these tests. In this study, random number generation was used to simulate responses in threshold methods using different "stopping rules": ascending 2-alternative forced choice (AFC) with 5 correct responses; ascending 3-AFC with 3 or 4 correct responses; staircase 2-AFC with 1 incorrect up and 2 incorrect down, as well as 1 up 4 down and 5 or 7 reversals; staircase 3-AFC with 1 up 2 down and 5 or 7 reversals. Formulas are presented for rates of false positives in the ascending methods, and curves were generated for the staircase methods. Overall, the staircase methods generally had lower false positive rates, but these methods were influenced even more by number of presentations than ascending methods. Generally, the high rates of error in all these methods should encourage researchers to conduct multiple tests per individual and/or select a method that can correct for false positives, such as fitting a logistic curve to a range of responses.
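
    For the ascending forced-choice rules above, a purely guessing subject has probability (1/m)^k of producing k correct responses in a row at any single point, e.g. (1/2)^5, about 3.1%, for the 2-AFC rule with five correct. The Monte Carlo sketch below estimates the larger probability of such a run occurring anywhere in an ascending series; the number of concentration steps and the one-trial-per-step simplification are assumptions, not the exact protocols analysed in the paper.

      import numpy as np

      rng = np.random.default_rng(4)

      def false_positive_rate(n_alternatives, run_needed, n_steps=12, n_sims=200_000):
          """P(a guessing subject ever strings together run_needed correct answers
          within n_steps ascending presentations)."""
          correct = rng.random((n_sims, n_steps)) < 1.0 / n_alternatives
          run = np.zeros(n_sims, dtype=int)
          hit = np.zeros(n_sims, dtype=bool)
          for step in range(n_steps):
              run = np.where(correct[:, step], run + 1, 0)   # extend or reset the streak
              hit |= run >= run_needed
          return hit.mean()

      print("2-AFC, 5 in a row:", false_positive_rate(2, 5))
      print("3-AFC, 3 in a row:", false_positive_rate(3, 3))
      print("3-AFC, 4 in a row:", false_positive_rate(3, 4))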

  4. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range rate tests were conducted in two configurations. In the communication configuration with 622 data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10 (exp -15) with 10 second averaging time. Ranging and range-rate as a function of Bit Error Rate of the communication link is reported. They are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10 (exp -15) with 10 second averaging time. We identified the major noise sources in the current system as the transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance for both operating modes.

  5. A Comparative Study of Heavy Ion and Proton Induced Bit Error Sensitivity and Complex Burst Error Modes in Commercially Available High Speed SiGe BiCMOS

    NASA Technical Reports Server (NTRS)

    Marshall, Paul; Carts, Marty; Campbell, Art; Reed, Robert; Ladbury, Ray; Seidleck, Christina; Currie, Steve; Riggs, Pam; Fritz, Karl; Randall, Barb

    2004-01-01

    A viewgraph presentation that reviews recent SiGe bit error test data for different commercially available high speed SiGe BiCMOS chips that were subjected to various levels of heavy ion and proton radiation. Results for the tested chips at different operating speeds are displayed in line graphs.

  7. [High vaccination rates among children of Amsterdam].

    PubMed

    van der Wal, M F; Diepenmaat, A C; Pauw-Plomp, H; van Weert-Waltman, M L

    2001-01-20

    To examine whether in Amsterdam there are social or cultural groups of children with a relatively low vaccination coverage for diphtheria, pertussis, tetanus and poliomyelitis (DPTP) and mumps, measles and rubella (MMR). Descriptive cross-sectional study. In the Department of Child Health Care of the Municipal Health Service of Amsterdam, all 83,217 children aged 2-12 years living in Amsterdam on the 1st of January 2000 were analysed for vaccination and sociodemographic data collected routinely by the Department of Child Health Care. The sociodemographic data concerned sex, year of birth, country of birth of the mother and child, name of the school and postal code of the home address. Schools were grouped by (religious) affiliation on the basis of the Amsterdam school guide 1999/2000. Based on postal codes, children were classified by the neighbourhoods in which they were living. Neighbourhoods were grouped by socio-economic status based on data from the Central Bureau for Statistics. The overall vaccination rates for DPTP and MMR were 92.4% and 93.5% respectively. No important variation in vaccination coverage was identified between more and less affluent neighbourhoods. The uptake rate among foreign children was sometimes slightly higher and sometimes slightly lower compared with native children. In particular, foreign children born abroad (Surinam, Antilles, Morocco, Turkey) were often not fully vaccinated: 70.9% were fully immunized for DPTP and 79.5% for MMR. Children who visited anthroposophical schools were considerably less frequently fully immunized compared with children visiting other schools: for DPTP and MMR 81.0 and 59.9% respectively, versus 94.4 and 95.3% for children attending general municipal schools. The vaccination coverage was high in children in Amsterdam. Further improvement of the vaccination uptake might be achieved by a more outreaching approach to children born abroad, and by more intensively informing sceptical parents about the benefits and (supposed) dangers

  8. Exact error rate analysis of equal gain and selection diversity for coherent free-space optical systems on strong turbulence channels.

    PubMed

    Niu, Mingbo; Cheng, Julian; Holzman, Jonathan F

    2010-06-21

    Exact error rate performances are studied for coherent free-space optical communication systems under strong turbulence with diversity reception. Equal gain and selection diversity are considered as practical schemes to mitigate turbulence. The exact bit-error rate for binary phase-shift keying and outage probability are developed for equal gain diversity. Analytical expressions are obtained for the bit-error rate of differential phase-shift keying and asynchronous frequency-shift keying, as well as for outage probability using selection diversity. Furthermore, we provide the closed-form expressions of diversity order and coding gain with both diversity receptions. The analytical results are verified by computer simulations and are suitable for rapid error rates calculation.

  9. The Effects of Type I Error Rate and Power of the ANCOVA "F" Test and Selected Alternatives under Nonnormality and Variance Heterogeneity.

    ERIC Educational Resources Information Center

    Rheinheimer, David C.; Penfield, Douglas A.

    2001-01-01

    Studied, through Monte Carlo simulation, the conditions for which analysis of covariance (ANCOVA) does not maintain adequate Type I error rates and power and evaluated some alternative tests. Discusses differences in ANCOVA robustness for balanced and unbalanced designs. (SLD)

  10. Every photon counts: improving low, mid, and high-spatial frequency errors on astronomical optics and materials with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul

    2016-07-01

    fluid, called C30, has been developed to finish surfaces to ultra-low roughness (ULR) and has been used as the low removal rate fluid required for fine figure correction of mid-spatial frequency errors. This novel MRF fluid is able to achieve <4Å RMS on Nickel-plated Aluminum and even <1.5Å RMS roughness on Silicon, Fused Silica and other materials. C30 fluid is best utilized within a fine figure correction process to target mid-spatial frequency errors as well as smooth surface roughness 'for free' all in one step. In this paper we will discuss recent advancements in MRF technology and the ability to meet requirements for precision optics in low, mid and high spatial frequency regimes and how improved MRF performance addresses the need for achieving tight specifications required for astronomical optics.

  11. Photocathodes for High Repetition Rate Light Sources

    SciTech Connect

    Ben-Zvi, Ilan

    2014-04-20

    This proposal brought together teams at Brookhaven National Laboratory (BNL), Lawrence Berkeley National Laboratory (LBNL) and Stony Brook University (SBU) to study photocathodes for high repetition rate light sources such as Free Electron Lasers (FEL) and Energy Recovery Linacs (ERL). Below details the Principal Investigators and contact information. Each PI submits separately for a budget through his corresponding institute. The work done under this grant comprises a comprehensive program on critical aspects of the production of the electron beams needed for future user facilities. Our program pioneered in situ and in operando diagnostics for alkali antimonide growth. The focus is on development of photocathodes for high repetition rate Free Electron Lasers (FELs) and Energy Recovery Linacs (ERLs), including testing SRF photoguns, both normal-conducting and superconducting. Teams from BNL, LBNL and Stony Brook University (SBU) led this research, and coordinated their work over a range of topics. The work leveraged a robust infrastructure of existing facilities and the support was used for carrying out the research at these facilities. The program concentrated in three areas: a) Physics and chemistry of alkali-antimonide cathodes (BNL – LBNL) b) Development and testing of a diamond amplifier for photocathodes (SBU – BNL) c) Tests of both cathodes in superconducting RF photoguns (SBU) and copper RF photoguns (LBNL) Our work made extensive use of synchrotron radiation materials science techniques, such as powder- and single-crystal diffraction, x-ray fluorescence, EXAFS and variable energy XPS. BNL and LBNL have many complementary facilities at the two light sources associated with these laboratories (NSLS and ALS, respectively); use of these will be a major thrust of our program and bring our understanding of these complex materials to a new level. In addition, CHESS at Cornell will be used to continue seamlessly throughout the NSLS dark period and

  12. Advanced Communications Technology Satellite (ACTS) Fade Compensation Protocol Impact on Very Small-Aperture Terminal Bit Error Rate Performance

    NASA Technical Reports Server (NTRS)

    Cox, Christina B.; Coney, Thom A.

    1999-01-01

    The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index Terms-Adaptive coding, attenuation, propagation, rain, satellite communication, satellites.
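
    As a rough illustration of the availability metric quoted above, the minimal sketch below (hypothetical names, assuming a per-interval BER time series) counts the fraction of intervals that meet the 5E-7 threshold:

      import numpy as np

      def ber_availability(ber_series, threshold=5e-7):
          """Fraction of measurement intervals whose BER meets the threshold."""
          return float(np.mean(np.asarray(ber_series, dtype=float) <= threshold))

      # Hypothetical per-interval BER series: mostly clear sky, a few rain fades.
      rng = np.random.default_rng(0)
      ber = np.where(rng.random(1000) < 0.004, 1e-4, 1e-8)
      print(f"BER availability: {100 * ber_availability(ber):.2f}%")   # roughly 99.5-99.7%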

  13. Outage Performance and Average Symbol Error Rate of M-QAM for Maximum Ratio Combining with Multiple Interferers

    NASA Astrophysics Data System (ADS)

    Ahn, Kyung Seung

    In this paper, we investigate the performance of maximum ratio combining (MRC) in the presence of multiple cochannel interferers over a flat Rayleigh fading channel. Closed-form expressions of the signal-to-interference-plus-noise ratio (SINR), outage probability, and average symbol error rate (SER) of quadrature amplitude modulation (QAM) with M-ary signaling are obtained for unequal-power interference-to-noise ratios (INRs). We also provide an upper bound for the average SER using the moment generating function (MGF) of the SINR. Moreover, we quantify the array gain loss between pure MRC (MRC in the absence of CCI) and MRC in the presence of CCI. Finally, we verify our analytical results by numerical simulations.
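
    The closed forms themselves are not reproduced here, but a minimal Monte Carlo sketch of the post-MRC SINR with Rayleigh-faded cochannel interferers (hypothetical parameter values; a numerical sanity check, not the paper's derivation) might look like:

      import numpy as np

      def mrc_sinr_outage(num_rx=4, snr_db=10.0, inr_db=(3.0, 0.0),
                          gamma_th_db=5.0, trials=50_000, seed=1):
          """Monte Carlo outage probability of the post-MRC SINR with cochannel interferers."""
          rng = np.random.default_rng(seed)
          snr = 10 ** (snr_db / 10)                 # desired-signal SNR per branch
          inrs = 10 ** (np.asarray(inr_db) / 10)    # unequal-power INRs
          gamma_th = 10 ** (gamma_th_db / 10)
          outages = 0
          for _ in range(trials):
              h = (rng.normal(size=num_rx) + 1j * rng.normal(size=num_rx)) / np.sqrt(2)
              w = h.conj()                          # MRC weights matched to the desired channel
              sig = snr * np.abs(w @ h) ** 2
              interf = 0.0
              for inr in inrs:
                  g = (rng.normal(size=num_rx) + 1j * rng.normal(size=num_rx)) / np.sqrt(2)
                  interf += inr * np.abs(w @ g) ** 2
              noise = np.linalg.norm(w) ** 2        # unit-variance noise after combining
              outages += sig / (interf + noise) < gamma_th
          return outages / trials

      print(mrc_sinr_outage())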

  15. Choice of Reference Sequence and Assembler for Alignment of Listeria monocytogenes Short-Read Sequence Data Greatly Influences Rates of Error in SNP Analyses

    PubMed Central

    Pightling, Arthur W.; Petronella, Nicholas; Pagotto, Franco

    2014-01-01

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should
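
    A benchmarking comparison of this kind reduces to counting true positives, false positives, and missed sites against the simulated truth; a minimal sketch (hypothetical positions and alleles) might look like:

      def snp_call_accuracy(truth, calls):
          """Compare called SNPs against the known truth of a simulated dataset.
          truth, calls: dicts mapping reference position -> alternate base."""
          true_pos = sum(1 for pos, alt in calls.items() if truth.get(pos) == alt)
          return {"true_snps": true_pos,
                  "false_positives": len(calls) - true_pos,
                  "missed": sum(1 for pos in truth if pos not in calls)}

      truth = {1050: "A", 2200: "T", 7301: "G"}      # hypothetical simulated differences
      calls = {1050: "A", 2200: "C", 9999: "T"}      # hypothetical pipeline output
      print(snp_call_accuracy(truth, calls))
      # {'true_snps': 1, 'false_positives': 2, 'missed': 1}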

  16. Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1998-01-01

    In this paper, we reinvestigate the solution of the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences are never repeated but instead lie in a chaotic region; however, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data in order to predict future data under limited weight-quantization constraints, which helps provide better estimates in time for an intelligent control system. In our earlier work, it was shown that CEP can learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and a color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as little as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more bits of weight quantization are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware considerations.
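
    The round-off versus truncation distinction can be illustrated with a minimal weight-quantization sketch (hypothetical values; not the CEP implementation itself):

      import numpy as np

      def quantize_weights(w, n_bits=4, w_max=1.0, mode="round"):
          """Quantize weights to n_bits (sign included) by round-off or truncation."""
          step = w_max / (2 ** (n_bits - 1))            # quantization step size
          scaled = np.clip(w, -w_max, w_max) / step
          q = np.round(scaled) if mode == "round" else np.trunc(scaled)
          return q * step

      w = np.array([0.37, -0.52, 0.91, -0.08])
      print(quantize_weights(w, mode="round"))          # errors roughly symmetric about zero
      print(quantize_weights(w, mode="trunc"))          # errors biased toward zero magnitude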

  17. Separable and Error-Free Reversible Data Hiding in Encrypted Image with High Payload

    PubMed Central

    Yin, Zhaoxia; Luo, Bin; Hong, Wien

    2014-01-01

    This paper proposes a separable reversible data-hiding scheme in encrypted images which offers high payload and error-free data extraction. The cover image is partitioned into nonoverlapping blocks and multigranularity encryption is applied to obtain the encrypted image. The data hider preprocesses the encrypted image and randomly selects two basic pixels in each block to estimate the block smoothness and indicate peak points. Additional data are embedded into blocks in the sorted order of block smoothness by using local histogram shifting under the guidance of the peak points. At the receiver side, image decryption and data extraction are separable and can be performed independently. Compared to previous approaches, the proposed method is simpler in calculation while offering better performance: larger payload, better embedding quality, and error-free data extraction, as well as image recovery. PMID:24977214
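
    For orientation, the sketch below shows generic histogram-shifting embedding in a single plaintext block (hypothetical pixel values; it is not the paper's encrypted-domain, smoothness-sorted scheme):

      import numpy as np

      def embed_histogram_shift(block, bits):
          """Generic histogram-shifting embedding in one 8-bit block (simplified:
          assumes the peak bin is not at the top of the range; no overflow handling)."""
          block = block.astype(np.int32)
          hist = np.bincount(block.ravel(), minlength=256)
          peak = int(np.argmax(hist))                         # peak bin carries the payload
          zero = peak + 1 + int(np.argmin(hist[peak + 1:]))   # nearest empty bin above the peak
          flat = block.ravel()
          flat[(flat > peak) & (flat < zero)] += 1            # open a gap next to the peak
          carriers = np.flatnonzero(flat == peak)
          for pos, bit in zip(carriers, bits):                # peak -> peak+1 encodes a '1'
              flat[pos] += int(bit)
          return flat.reshape(block.shape), peak, zero

      block = np.array([[100, 100, 101], [102, 100, 130], [100, 99, 100]], dtype=np.uint8)
      marked, peak, zero = embed_histogram_shift(block, bits=[1, 0, 1])
      print(marked, peak, zero)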

  18. Separable and error-free reversible data hiding in encrypted image with high payload.

    PubMed

    Yin, Zhaoxia; Luo, Bin; Hong, Wien

    2014-01-01

    This paper proposes a separable reversible data-hiding scheme in encrypted images which offers high payload and error-free data extraction. The cover image is partitioned into nonoverlapping blocks and multigranularity encryption is applied to obtain the encrypted image. The data hider preprocesses the encrypted image and randomly selects two basic pixels in each block to estimate the block smoothness and indicate peak points. Additional data are embedded into blocks in the sorted order of block smoothness by using local histogram shifting under the guidance of the peak points. At the receiver side, image decryption and data extraction are separable and can be performed independently. Compared to previous approaches, the proposed method is simpler in calculation while offering better performance: larger payload, better embedding quality, and error-free data extraction, as well as image recovery.

  19. Timing jitter's influence on the symbol error rate performance of the L-ary pulse position modulation free-space optical link in atmospheric turbulent channels with pointing errors

    NASA Astrophysics Data System (ADS)

    Li, Yatian; Geng, Tianwen; Ma, Shuang; Gao, Shijie; Gao, Huibin

    2017-03-01

    An analytical approach is proposed to evaluate the impact of timing jitter on the error performance of the L-ary pulse position modulation (L-PPM) free-space optical (FSO) link under gamma-gamma (ΓΓ) turbulence with pointing errors. The expression for the conditional symbol error rate (SER) at a given timing offset is developed, and a Gauss-Hermite polynomial approximation is used to derive a closed-form expression for the average SER in terms of the jitter variance. Both Monte Carlo simulations and the theoretical results show that timing jitter introduces an error floor in the SER, although jitter with small variance is tolerable. Moreover, lower-order PPM systems are more sensitive to timing jitter than higher-order ones.
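
    A minimal sketch of the Gauss-Hermite averaging step, with a purely hypothetical conditional-SER model standing in for the paper's L-PPM expression, might look like:

      import numpy as np

      def average_ser_over_jitter(cond_ser, sigma_t, n_nodes=20):
          """Average a conditional SER over zero-mean Gaussian timing jitter with
          Gauss-Hermite quadrature: E[SER] ~ (1/sqrt(pi)) * sum_k w_k SER(sqrt(2)*sigma*x_k)."""
          nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
          return float(np.sum(weights * cond_ser(np.sqrt(2.0) * sigma_t * nodes)) / np.sqrt(np.pi))

      slot = 1.0e-9                                                   # hypothetical PPM slot width (s)
      cond = lambda t: np.clip(1e-6 + np.abs(t) / slot, 0.0, 1.0)     # toy conditional SER model
      print(average_ser_over_jitter(cond, sigma_t=0.05e-9))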

  20. High resolution, high rate x-ray spectrometer

    DOEpatents

    Goulding, F.S.; Landis, D.A.

    1983-07-14

    It is an object of the invention to provide a pulse processing system for use with detected signals of a wide dynamic range which is capable of very high counting rates, with high throughput, with excellent energy resolution and a high signal-to-noise ratio. It is a further object to provide a pulse processing system wherein the fast channel resolving time is quite short and substantially independent of the energy of the detected signals. Another object is to provide a pulse processing system having a pile-up rejector circuit which will allow the maximum number of non-interfering pulses to be passed to the output. It is also an object of the invention to provide new methods for generating substantially symmetrically triangular pulses for use in both the main and fast channels of a pulse processing system.

  1. High rate PLD of diamond-like-carbon utilizing high repetition rate visible lasers

    SciTech Connect

    McLean, W. II; Fehring, E.J.; Dragon, E.P.; Warner, B.E.

    1994-09-15

    Pulsed laser deposition (PLD) has been shown to be an effective method for producing a wide variety of thin films of high-value-added materials. The high average powers and high pulse repetition frequencies of lasers under development at LLNL make it possible to scale up PLD processes that have been demonstrated in small systems in a number of university, government, and private laboratories to industrially meaningful, economically feasible technologies. A copper vapor laser system at LLNL has been utilized to demonstrate high-rate PLD of high-quality diamond-like carbon (DLC) from graphite targets. The deposition rates obtained with a 100 W laser were approximately 2000 μm·cm²/h, or roughly 100 times larger than those reported for chemical vapor deposition (CVD) or physical vapor deposition (PVD) methods. Good adhesion of thin (up to 2 μm) films has been achieved on a small number of substrates that include SiO2 and single-crystal Si. Present results indicate that the best quality DLC films can be produced at optimum rates at power levels and wavelengths compatible with fiber optic delivery systems. If this is also true of other desirable coating systems, this PLD technology could become an extremely attractive industrial tool for high-value-added coatings.

  2. Quantifying the Representation Error of Land Biosphere Models using High Resolution Footprint Analyses and UAS Observations

    NASA Astrophysics Data System (ADS)

    Hanson, C. V.; Schmidt, A.; Law, B. E.; Moore, W.

    2015-12-01

    The validity of land biosphere model outputs relies on accurate representations of ecosystem processes within the model. Typically, a vegetation or land cover type for a given area (several km² or larger resolution) is assumed to have uniform properties. The limited spatial and temporal resolution of models prevents resolving finer-scale heterogeneous flux patterns that arise from variations in vegetation. This representation error must be quantified carefully if models are informed through data assimilation, in order to assign appropriate weighting to model outputs and measurement data. The representation error is usually only estimated, or ignored entirely, due to the difficulty in determining reasonable values. UAS-based gas sensors allow measurements of atmospheric CO2 concentrations with unprecedented spatial resolution, providing a means of determining the representation error for CO2 fluxes empirically. In this study we use three-dimensional CO2 concentration data in combination with high-resolution footprint analyses to quantify the representation error for modelled CO2 fluxes at typical resolutions of regional land biosphere models. CO2 concentration data were collected using an Atlatl X6A hexacopter carrying a highly calibrated, closed-path infrared gas analyzer based sampling system with an uncertainty of ≤ ±0.2 ppm CO2. Gas concentration data were mapped in three dimensions using the UAS on-board position data and compared to footprints generated using WRF 3.61.
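
    As a simplified illustration of the representation-error idea (made-up flux values; the study itself uses footprint-weighted UAS concentration data), the within-cell spread of fine-scale fluxes can be computed as:

      import numpy as np

      def representation_error(fine_fluxes, coarse_cells):
          """Within-cell spread (one sigma) of fine-scale fluxes that a coarse
          land biosphere model cell cannot resolve."""
          return {cell: np.std([fine_fluxes[i] for i in members], ddof=1)
                  for cell, members in coarse_cells.items()}

      # Hypothetical fine-scale CO2 fluxes (umol m-2 s-1) grouped into two model cells.
      fine = {0: -4.1, 1: -3.2, 2: -6.8, 3: -1.0, 4: -0.7, 5: -2.5}
      cells = {"cell_A": [0, 1, 2], "cell_B": [3, 4, 5]}
      print(representation_error(fine, cells))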

  3. Detecting Glaucoma Progression From Localized Rates of Retinal Changes in Parametric and Nonparametric Statistical Framework With Type I Error Control

    PubMed Central

    Balasubramanian, Madhusudhanan; Arias-Castro, Ery; Medeiros, Felipe A.; Kriegman, David J.; Bowd, Christopher; Weinreb, Robert N.; Holst, Michael; Sample, Pamela A.; Zangwill, Linda M.

    2014-01-01

    Purpose. We evaluated three new pixelwise rates of retinal height changes (PixR) strategies to reduce false-positive errors while detecting glaucomatous progression. Methods. The diagnostic accuracy of the nonparametric PixR-NP cluster test (CT), PixR-NP single threshold test (STT), and parametric PixR-P STT was compared to statistic image mapping (SIM) using the Heidelberg Retina Tomograph. We included 36 progressing eyes, 210 nonprogressing patient eyes, and 21 longitudinal normal eyes from the University of California, San Diego (UCSD) Diagnostic Innovations in Glaucoma Study. The multiple comparison problem due to simultaneous testing of retinal locations was addressed in PixR-NP CT by controlling the family-wise error rate (FWER) and in the STT methods by Lehmann-Romano's k-FWER. For the STT methods, progression was defined as an observed progression rate (the ratio of the number of pixels with a significant rate of decrease, i.e., red pixels, to disk size) > 2.5%. The progression criterion for the CT and SIM methods was the presence of one or more significant (P < 1%) red-pixel clusters within the disk. Results. Specificity in normals: CT = 81% (90%), PixR-NP STT = 90%, PixR-P STT = 90%, SIM = 90%. Sensitivity in progressing eyes: CT = 86% (86%), PixR-NP STT = 75%, PixR-P STT = 81%, SIM = 39%. Specificity in nonprogressing patient eyes: CT = 49% (55%), PixR-NP STT = 56%, PixR-P STT = 50%, SIM = 79%. Progression detected by PixR in nonprogressing patient eyes was associated with early signs of visual field change that did not yet meet our definition of glaucomatous progression. Conclusions. PixR provided higher sensitivity in progressing eyes than SIM, with similar specificity in normals, suggesting that PixR strategies can improve our ability to detect glaucomatous progression. Longer follow-up is necessary to determine whether nonprogressing eyes identified as progressing by these methods will develop glaucomatous progression. (ClinicalTrials.gov number, NCT00221897.) PMID:24519427
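
    For context, the single-step generalized Bonferroni rule that controls the k-FWER can be sketched as follows (hypothetical p-values; the paper also uses step-down and cluster-based variants):

      import numpy as np

      def k_fwer_reject(p_values, k=1, alpha=0.01):
          """Single-step generalized Bonferroni: reject H_i when p_i <= k*alpha/m,
          which controls the k-FWER (k=1 reduces to ordinary Bonferroni/FWER control)."""
          p = np.asarray(p_values, dtype=float)
          return p <= k * alpha / p.size

      # Hypothetical per-pixel p-values for rates of retinal height change.
      p = np.array([1e-6, 0.006, 0.02, 0.5, 3e-5])
      print(k_fwer_reject(p, k=2, alpha=0.01))            # [ True False False False  True]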

  4. PS foams at high pressure drop rates

    NASA Astrophysics Data System (ADS)

    Tammaro, Daniele; De Maio, Attilio; Carbone, Maria Giovanna Pastore; Di Maio, Ernesto; Iannace, Salvatore

    2014-05-01

    In this paper, we report data on PS foamed at 100 °C after CO2 saturation at 10 MPa in a new physical foaming batch that achieves pressure drop rates up to 120 MPa/s. The results show that the average cell size of the foam fits a linear trend with the pressure drop rate in a double-logarithmic plot. Furthermore, foam density initially decreases with the pressure drop rate, attaining a constant value at pressure drop rates higher than 40 MPa/s. Interestingly, we also observed that the shape of the pressure release curve has a large effect on the final foam morphology, as seen in tests in which the maximum pressure release rate was kept constant but the shape of the curve was changed. These results allow for fine tuning of the foam density and morphology for specific applications.
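
    A linear trend on a double-logarithmic plot is equivalent to a power law, so a fit of the form d = a*(dP/dt)^b can be sketched as below (made-up data points, for illustration only):

      import numpy as np

      # A linear trend on a log-log plot is a power law: cell size d = a * (dP/dt)^b.
      rate = np.array([5.0, 20.0, 60.0, 120.0])      # pressure drop rates, MPa/s (made up)
      cell = np.array([40.0, 22.0, 13.0, 9.0])       # average cell sizes, um (made up)
      b, log_a = np.polyfit(np.log(rate), np.log(cell), 1)
      print(f"cell size ~ {np.exp(log_a):.1f} * rate^{b:.2f} um")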

  5. High voltage high repetition rate pulse using Marx topology

    NASA Astrophysics Data System (ADS)

    Hakki, A.; Kashapov, N.

    2015-06-01

    The paper describes a Marx topology using MOSFET transistors. A Marx circuit with 10 stages has been built to obtain pulses of about 5.5 kV amplitude and about 30 μs width at a high repetition rate (PPS > 100), with Vdc = 535 V DC as the input voltage supplying the Marx circuit. Two ferrite ring core transformers are used to control the MOSFET transistors of the Marx circuit (the first transformer controls the charging MOSFETs, the second controls the discharging MOSFETs).
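
    As a quick consistency check, an ideal N-stage Marx generator erects to roughly N times the charging voltage: V_out ≈ N × Vdc = 10 × 535 V ≈ 5.35 kV, which is close to the ≈5.5 kV pulse amplitude reported above.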

  6. Measurement of low bit-error-rates of adiabatic quantum-flux-parametron logic using a superconductor voltage driver

    NASA Astrophysics Data System (ADS)

    Takeuchi, Naoki; Suzuki, Hideo; Yoshikawa, Nobuyuki

    2017-05-01

    Adiabatic quantum-flux-parametron (AQFP) is an energy-efficient superconductor logic. The advantage of AQFP is that the switching energy can be reduced by lowering operation frequencies or by increasing the quality factors of Josephson junctions, while keeping the energy barrier height much larger than the thermal energy. In other words, both low energy dissipation and low bit error rates (BERs) can be achieved. In this paper, we report the first measurement results of the low BERs of AQFP logic. We used a superconductor voltage driver with a stack of dc superconducting-quantum-interference devices to amplify the logic signals of AQFP gates into mV-range voltage signals for the BER measurement. Our measurement results showed 3.3 dB and 2.6 dB operation margins, in which BERs were less than 10^-20, for 1 Gbps and 2 Gbps data rates, respectively. While the observed BERs were very low, the estimated switching energy for the 1-Gbps operation was only 2 zJ, or 30 kBT, where kB is Boltzmann's constant and T is the temperature. Unlike conventional non-adiabatic logic, BERs are not directly associated with switching energy in AQFP.
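
    For context on what such low BERs imply, the standard zero-error confidence bound relates the number of bits tested to the BER that can be demonstrated by counting alone (a generic counting argument, independent of the paper's measurement method):

      import numpy as np

      def ber_upper_bound_zero_errors(bits_tested, confidence=0.95):
          """Upper confidence bound on BER when no errors are observed:
          BER < -ln(1 - CL) / N  (about 3/N at 95% confidence)."""
          return -np.log(1.0 - confidence) / bits_tested

      # Example: one day of error-free counting at 1 Gbps bounds the BER near 3.5e-14.
      print(ber_upper_bound_zero_errors(bits_tested=1e9 * 86400))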

  7. Weighted partial least squares based on the error and variance of the recovery rate in calibration set

    NASA Astrophysics Data System (ADS)

    Yu, Shaohui; Xiao, Xue; Ding, Hong; Xu, Ge; Li, Haixia; Liu, Jing

    2017-08-01

    Quantitative analysis is very difficult for the excitation-emission fluorescence spectroscopy of multi-component mixtures whose fluorescence peaks overlap severely. As an effective method for quantitative analysis, partial least squares (PLS) can extract latent variables from both the independent and the dependent variables, so it can model multiple correlations between variables. However, several factors usually affect the prediction results of partial least squares, such as noise and the distribution and number of samples in the calibration set. This work focuses on the calibration-set problems mentioned above. First, outliers in the calibration set are removed by leave-one-out cross-validation. Then, according to two different prediction requirements, the EWPLS and VWPLS methods are proposed. The independent and dependent variables are weighted in the EWPLS method by the maximum error of the recovery rate and in the VWPLS method by the maximum variance of the recovery rate. Three organic compounds with severely overlapping excitation-emission fluorescence spectra are selected for the experiments. The step adjustment parameter, the number of iterations, and the number of samples in the calibration set are discussed. The results show that the EWPLS and VWPLS methods are superior to the PLS method, especially in the case of small calibration sets.
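
    A rough stand-in for this kind of sample weighting, assuming scikit-learn's PLSRegression and weights already derived from recovery-rate error or variance, is to scale each calibration row by the square root of its weight (an approximation, not the EWPLS/VWPLS algorithm itself):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def weighted_pls(X, Y, weights, n_components=2):
          """Approximate sample-weighted PLS by scaling each calibration row with sqrt(weight)."""
          w = np.sqrt(np.asarray(weights, dtype=float))[:, None]
          return PLSRegression(n_components=n_components).fit(X * w, Y * w)

      rng = np.random.default_rng(0)
      X = rng.random((20, 50))                    # e.g. unfolded excitation-emission spectra
      Y = X[:, :3] @ np.array([[1.0], [0.5], [0.2]]) + 0.01 * rng.normal(size=(20, 1))
      weights = np.ones(20)
      weights[5] = 0.2                            # down-weight a poorly recovered sample
      print(weighted_pls(X, Y, weights).predict(X[:2]))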

  8. Bit error rate estimation for galvanic-type intra-body communication using experimental eye-diagram and jitter characteristics.

    PubMed

    Li, Jia Wen; Chen, Xi Mei; Pun, Sio Hang; Mak, Peng Un; Gao, Yue Ming; Vai, Mang I; Du, Min

    2013-01-01

    Bit error rate (BER), which indicates the reliability of a communication channel, is one of the most important figures of merit in any communication system, including intra-body communication (IBC). In order to learn more about the IBC channel, this paper presents a new method of BER estimation for galvanic-type IBC using experimental eye-diagram and jitter characteristics. To lay the foundation for our methodology, the fundamental relationships between eye diagram, jitter, and BER are first reviewed. Experiments based on human lower-arm IBC are then carried out using a quadrature phase shift keying (QPSK) modulation scheme and a 500 kHz carrier frequency. In our IBC experiments, the symbol rate ranges from 10 ksps to 100 ksps, with two transmitted power settings, 0 dBm and -5 dBm. Finally, the BER results are calculated from the experimental data through the relationships among eye diagram, jitter, and BER. These results are then compared with theoretical values and show good agreement, especially when the SNR is between 6 dB and 11 dB. Additionally, these results demonstrate that treating the noise of the galvanic-type IBC channel as additive white Gaussian noise (AWGN), as assumed in previous studies, is applicable.
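
    The textbook Gaussian-noise link between eye-diagram statistics and BER (a simplification of the eye/jitter analysis described above, with hypothetical values) is the Q-factor relation:

      import math

      def ber_from_eye(v1, v0, sigma1, sigma0):
          """Q-factor estimate of BER from eye-diagram statistics:
          Q = (v1 - v0) / (sigma1 + sigma0), BER ~ 0.5 * erfc(Q / sqrt(2))."""
          q = (v1 - v0) / (sigma1 + sigma0)
          return 0.5 * math.erfc(q / math.sqrt(2))

      # Hypothetical eye statistics for a received IBC symbol stream.
      print(ber_from_eye(v1=1.0, v0=0.0, sigma1=0.08, sigma0=0.07))   # about 1.3e-11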

  9. High Rate X-ray Fluorescence Detector

    SciTech Connect

    Grudberg, Peter Matthew

    2013-04-30

    The purpose of this project was to develop a compact, modular multi-channel x-ray detector with integrated electronics. This detector, based upon emerging silicon drift detector (SDD) technology, will be capable of high-data-rate operation superior to the current state of the art offered by high-purity germanium (HPGe) detectors, without the need for liquid nitrogen. In addition, by integrating the processing electronics inside the detector housing, the detector performance will be much less affected by the typically noisy electrical environment of a synchrotron hutch, and the system will also be much more compact than current systems, which can include a detector involving a large LN2 dewar and multiple racks of electronics. The combined detector/processor system is designed to match or exceed the performance and features of currently available detector systems, at a lower cost and with more ease of use due to the small size of the detector. In addition, the detector system is designed to be modular: a small system might have just one detector module, while a larger system can have many; one can start with a single detector module and add more as needs grow and budget allows. The modular nature also serves to simplify repair. In large part, we were successful in achieving our goals. We developed a very high-performance, large-area multi-channel SDD detector, packaged with all associated electronics, which is easy to use and requires minimal external support (a simple power supply module and a closed-loop water cooling system). However, we did fall short of some of our stated goals. We had intended to base the detector on modular, large-area detectors from Ketek GmbH in Munich, Germany; however, these were not available in a suitable time frame for this project, so we worked instead with pnDetector GmbH (also located in Munich). They were able to provide a front-end detector module with six 100 mm^2 SDD detectors (two monolithic arrays of three elements each) along with

  10. Bipolar high-repetition-rate high-voltage nanosecond pulser

    SciTech Connect

    Tian Fuqiang; Wang Yi; Shi Hongsheng; Lei Qingquan

    2008-06-15

    The pulser designed is mainly used for producing corona plasma in a waste-water treatment system; its application to the study of dielectric electrical properties will also be discussed. The pulser consists of a variable dc power source for the high-voltage supply, two graded capacitors for energy storage, and a rotating spark gap switch. The key part is the multielectrode rotating spark gap switch (MER-SGS), which ensures wide-range modulation of the pulse repetition rate, longer pulse width, shorter pulse rise time, and remarkable electrical field distortion, and greatly favors recovery of the gap insulation strength, insulation design, the life of the switch, etc. The voltage of the output pulses switched by the MER-SGS is on the order of 3-50 kV with a pulse rise time of less than 10 ns and a pulse repetition rate of 1-3 kHz. An energy of 1.25-125 J per pulse and an average power of up to 10-50 kW are attainable. The highest pulse repetition rate is determined by the drive motor revolution and the electrode number of the MER-SGS. Even higher voltage and energy can be switched by adjusting the gas pressure, employing N2 as the insulation gas, or enlarging the size of the MER-SGS to guarantee a sufficient insulation level.

  11. Bipolar high-repetition-rate high-voltage nanosecond pulser.

    PubMed

    Tian, Fuqiang; Wang, Yi; Shi, Hongsheng; Lei, Qingquan

    2008-06-01

    The pulser designed is mainly used for producing corona plasma in a waste-water treatment system; its application to the study of dielectric electrical properties will also be discussed. The pulser consists of a variable dc power source for the high-voltage supply, two graded capacitors for energy storage, and a rotating spark gap switch. The key part is the multielectrode rotating spark gap switch (MER-SGS), which ensures wide-range modulation of the pulse repetition rate, longer pulse width, shorter pulse rise time, and remarkable electrical field distortion, and greatly favors recovery of the gap insulation strength, insulation design, the life of the switch, etc. The voltage of the output pulses switched by the MER-SGS is on the order of 3-50 kV with a pulse rise time of less than 10 ns and a pulse repetition rate of 1-3 kHz. An energy of 1.25-125 J per pulse and an average power of up to 10-50 kW are attainable. The highest pulse repetition rate is determined by the drive motor revolution and the electrode number of the MER-SGS. Even higher voltage and energy can be switched by adjusting the gas pressure, employing N2 as the insulation gas, or enlarging the size of the MER-SGS to guarantee a sufficient insulation level.

  12. [Assessment of the usefulness of software supervising the continuous infusion rates of drugs administered with pumps in the ICU and estimation of the frequency of administration-rate errors].

    PubMed

    Cayot-Constantin, S; Constantin, J-M; Perez, J-P; Chevallier, P; Clapson, P; Bazin, J-E

    2010-03-01

    To assess the usefulness and the feasibility of software supervising the continuous infusion rates of drugs administered with pumps in the ICU. Follow-up of practices and survey in three intensive care units. Guardrails(TM) software for safeguarding the rate settings of the pumps (AsenaGH, Alaris). First, evaluation and quantification of the number of infusion-rate settings reaching the maximal upper limit (considered as infusion-rate errors stopped by the software). Second, assessment of staff acceptance of such a system through a blinded questionnaire and quantification of the number of pump programs performed with the software's dataset. The number of administrations started with the study pumps in the three units (11 beds) during the study period was 63,069, and 42,694 of them (67.7%) used the software. The number of potential continuous infusion-rate errors was 11, corresponding to an infusion-rate error rate of 26/100,000. KCl and insulin were involved in two and five cases, respectively. Eighty percent of the nurses considered that infusion-rate errors were rare or exceptional but potentially harmful. Indeed, they considered that software supervising the continuous infusion rates of pumps could improve safety. The risk of infusion-rate errors for drugs administered continuously with a pump in the ICU is rare but potentially harmful. Software that controls the continuous infusion rates could be useful. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.
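
    As a quick check of the figure quoted above: 11 intercepted errors out of 42,694 software-assisted administrations is 11/42,694 ≈ 2.6 × 10^-4, i.e., about 26 per 100,000.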

  13. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink(TM) interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
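
    A back-of-the-envelope data-rate check (the pixel depth is an assumption; cameras of this class typically digitize 12-14 bits, i.e., 2 bytes per pixel) shows why a multi-gigabyte on-board buffer is needed:

      # Rough throughput check; the 2 bytes/pixel depth is an assumption (12-14 bit ADC).
      width, height, fps, bytes_per_px = 640, 512, 1700, 2
      rate = width * height * fps * bytes_per_px          # bytes per second, ~1.11e9
      buffer_gb = 16
      print(f"{rate / 1e9:.2f} GB/s -> about {buffer_gb / (rate / 1e9):.0f} s of full-frame video in 16 GB")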

  14. High data rate systems for the future

    NASA Technical Reports Server (NTRS)

    Chitwood, John

    1991-01-01

    Information systems in the next century will transfer data at rates that are much greater than those in use today. Satellite based communication systems will play an important role in networking users. Typical data rates; use of microwave, millimeter wave, or optical systems; millimeter wave communication technology; modulators/exciters; solid state power amplifiers; beam waveguide transmission systems; low noise receiver technology; optical communication technology; and the potential commercial applications of these technologies are discussed.

  15. Dosimetry modeling for focal high-dose-rate prostate brachytherapy.

    PubMed

    Mason, Josh; Al-Qaisieh, Bashar; Bownes, Peter; Thwaites, David; Henry, Ann

    2014-01-01

    The dosimetry of focal high-dose-rate prostate brachytherapy was assessed. Dose volume histogram parameters, robustness to source position errors, and Monte Carlo (MC) simulations were compared for whole-gland (WG), hemi-gland (HEMI), and ultra-focal (UF) treatment plans. Tumor volumes were delineated based on MRI and template biopsy results for 9 patients. WG, HEMI, and UF plans were produced assuming 19 Gy single fraction monotherapy treatments. For UF plans, a 6-mm margin was applied to the visible tumor to create a focal-planning target volume (F-PTV). Systematic source position shifts of 1-4 mm were applied to assess plan robustness. The dosimetric impact of steel catheters was assessed using MC simulation. Mean D90 and V100 were 20.4 Gy and 97.9% for prostate in WG plans, 22.2 Gy and 98.1% for hemi-prostate in HEMI plans, and 23.0 Gy and 98.2% for F-PTV in UF plans. Mean urethra D10 was 20.3, 19.7, and 9.2 Gy in WG, HEMI, and UF plans, respectively. Mean rectal D2cc was 12.5, 9.8, and 4.6 Gy in WG, HEMI, and UF plans, respectively. Focal treatment plans were sensitive to source position errors: 2 mm systematic shifts reduced mean prostate D90 by 0.7%, hemi-prostate D90 by 2.6%, and F-PTV D90 by 8.3% in WG, HEMI, and UF plans, respectively. MC simulation results were similar for all plan types with most dose volume histogram parameters reduced by <2%. HEMI and UF treatments can achieve higher D90 values compared with WG treatments with reduced organ-at-risk dose. Focal treatments are more sensitive to systematic source position errors than WG treatments. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
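
    The two DVH quantities quoted throughout can be computed directly from a voxel dose array; a minimal sketch (hypothetical doses) is:

      import numpy as np

      def d90_v100(dose_voxels, prescription):
          """D90 = dose received by at least 90% of the structure's voxels;
          V100 = percent of voxels receiving at least the prescription dose."""
          d = np.sort(np.asarray(dose_voxels, dtype=float))[::-1]   # descending
          d90 = d[int(np.ceil(0.9 * d.size)) - 1]
          v100 = 100.0 * np.mean(d >= prescription)
          return d90, v100

      rng = np.random.default_rng(2)
      doses = rng.normal(22.0, 2.0, size=5000)        # hypothetical voxel doses (Gy)
      print(d90_v100(doses, prescription=19.0))       # 19 Gy single-fraction prescription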

  16. An integrated CMOS high data rate transceiver for video applications

    NASA Astrophysics Data System (ADS)

    Yaping, Liang; Dazhi, Che; Cheng, Liang; Lingling, Sun

    2012-07-01

    This paper presents a 5 GHz CMOS radio frequency (RF) transceiver built with 0.18 μm RF-CMOS technology by using a proprietary protocol, which combines the new IEEE 802.11n features such as multiple-in multiple-out (MIMO) technology with other wireless technologies to provide high data rate robust real-time high definition television (HDTV) distribution within a home environment. The RF frequencies cover from 4.9 to 5.9 GHz: the industrial, scientific and medical (ISM) band. Each RF channel bandwidth is 20 MHz. The transceiver utilizes a direct up transmitter and low-IF receiver architecture. A dual-quadrature direct up conversion mixer is used that achieves better than 35 dB image rejection without any on chip calibration. The measurement shows a 6 dB typical receiver noise figure and a better than 33 dB transmitter error vector magnitude (EVM) at -3 dBm output power.

  17. A High-Precision Instrument for Mapping of Rotational Errors in Rotary Stages

    SciTech Connect

    Xu, W.; Lauer, K.; Chu, Y.; Nazaretski, E.

    2014-11-02

    A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g. circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle of confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.

  18. On verifying a high-level design. [cost and error analysis

    NASA Technical Reports Server (NTRS)

    Mathew, Ben; Wehbeh, Jalal A.; Saab, Daniel G.

    1993-01-01

    An overview of design verification techniques is presented, and some of the current research in high-level design verification is described. Formal hardware description languages that are capable of adequately expressing the design specifications have been developed, but some time will be required before they can have the expressive power needed to be used in real applications. Simulation-based approaches are more useful in finding errors in designs than they are in proving the correctness of a certain design. Hybrid approaches that combine simulation with other formal design verification techniques are argued to be the most promising over the short term.

  19. A High-Precision Instrument for Mapping of Rotational Errors in Rotary Stages

    DOE PAGES

    Xu, W.; Lauer, K.; Chu, Y.; ...

    2014-11-02

    A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g. circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle of confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.

  20. Optimal error estimates for high order Runge-Kutta methods applied to evolutionary equations

    SciTech Connect

    McKinney, W.R.

    1989-01-01

    Fully discrete approximations to 1-periodic solutions of the generalized Korteweg-de Vries and the Cahn-Hilliard equations are analyzed. These approximations are generated by an implicit Runge-Kutta method for the temporal discretization and a Galerkin finite element method for the spatial discretization. Furthermore, these approximations may be of arbitrarily high order. In particular, it is shown that the well-known order reduction phenomenon afflicting implicit Runge-Kutta methods does not occur. Numerical results supporting these optimal error estimates for the Korteweg-de Vries equation and indicating the existence of a slow motion manifold for the Cahn-Hilliard equation are also provided.
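
    Optimal-rate claims of this kind are typically verified numerically by estimating the observed order of convergence from errors at successive refinements; a minimal sketch (made-up error values) is:

      import numpy as np

      def observed_order(err_coarse, err_fine, refinement=2.0):
          """Observed convergence rate from errors at two step sizes:
          p ~ log(err_coarse / err_fine) / log(refinement)."""
          return np.log(err_coarse / err_fine) / np.log(refinement)

      # Hypothetical L2 errors from successive halvings of the time step; for a
      # fourth-order scheme the estimates should approach 4 if no order reduction occurs.
      errors = [2.1e-3, 1.35e-4, 8.5e-6]
      print([round(observed_order(errors[i], errors[i + 1]), 2) for i in range(2)])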