Science.gov

Sample records for high error rates

  1. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Technical Reports Server (NTRS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    1989-01-01

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive Gaussian white noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
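    The stated 7/8-or-greater code rate follows from the burst-mode codeword structure of 32n data bits plus 32 overhead bits. A minimal sketch of that relationship, assuming the rate is simply the ratio of data bits to total bits:

    ```python
    # Code rate of a burst-mode codeword built from 32*n data bits followed by
    # 32 overhead bits, as described in the abstract. The rate reaches 7/8 at
    # n = 7 and approaches 1 as n grows.
    def code_rate(n: int) -> float:
        data_bits = 32 * n
        overhead_bits = 32
        return data_bits / (data_bits + overhead_bits)

    for n in (1, 7, 16, 32):
        print(f"n = {n:2d}: rate = {code_rate(n):.3f}")
    ```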

  2. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors

    PubMed Central

    Bányai, László; Patthy, László

    2016-01-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation. PMID:27476717

  3. High rates of phasing errors in highly polymorphic species with low levels of linkage disequilibrium.

    PubMed

    Bukowicki, Marek; Franssen, Susanne U; Schlötterer, Christian

    2016-07-01

    Short read sequencing of diploid individuals does not permit the direct inference of the sequence on each of the two homologous chromosomes. Although various phasing software packages exist, they were primarily tailored for and tested on human data, which differ from other species in factors that influence phasing, such as SNP density, amounts of linkage disequilibrium (LD) and sample sizes. Despite becoming increasingly popular for other species, the reliability of phasing in non-human data has not been evaluated to a sufficient extent. We scrutinized the phasing accuracy for Drosophila melanogaster, a species with high polymorphism levels and reduced LD relative to humans. We phased two D. melanogaster populations and compared the results to the known haplotypes. The performance increased with size of the reference panel and was highest when the reference panel and phased individuals were from the same population. Full genomic SNP data and inclusion of sequence read information also improved phasing. Despite humans and Drosophila having similar switch error rates between polymorphic sites, the distances between switch errors were much shorter in Drosophila with only fragments <300-1500 bp being correctly phased with ≥95% confidence. This suggests that the higher SNP density cannot compensate for the higher recombination rate in D. melanogaster. Furthermore, we show that populations that have gone through demographic events such as bottlenecks can be phased with higher accuracy. Our results highlight that statistically phased data are particularly error prone in species with large population sizes or populations lacking suitable reference panels. PMID:26929272

  4. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416

  5. High speed and adaptable error correction for megabit/s rate quantum key distribution

    NASA Astrophysics Data System (ADS)

    Dixon, A. R.; Sato, H.

    2014-12-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  6. High-speed communication detector characterization by bit error rate measurements

    NASA Technical Reports Server (NTRS)

    Green, S. I.

    1978-01-01

    Performance data taken on several candidate high data rate laser communications photodetectors is presented. Measurements of bit error rate versus signal level were made in both a 1064 nm system at 400 Mbps and a 532 nm system at 500 Mbps. RCA silicon avalanche photodiodes are superior at 1064 nm, but the Rockwell hybrid 3-5 avalanche photodiode preamplifiers offer potentially superior performance. Varian dynamic crossed field photomultipliers are superior at 532 nm, however, the RCA silicon avalanche photodiode is a close contender.

  7. Bit error rate performance of Image Processing Facility high density tape recorders

    NASA Technical Reports Server (NTRS)

    Heffner, P.

    1981-01-01

    The Image Processing Facility at the NASA/Goddard Space Flight Center uses High Density Tape Recorders (HDTR's) to transfer high volume image data and ancillary information from one system to another. For ancillary information, it is required that very low bit error rates (BER's) accompany the transfers. The facility processes about 10^11 bits of image data per day from many sensors, involving 15 independent processing systems requiring the use of HDTR's. When acquired, the 16 HDTR's offered state-of-the-art performance of 1 x 10^-6 BER as specified. The BER requirement was later upgraded in two steps: (1) incorporating data randomizing circuitry to yield a BER of 2 x 10^-7 and (2) further modifying to include a bit error correction capability to attain a BER of 2 x 10^-9. The total improvement factor was 500 to 1. Attention is given here to the background, technical approach, and final results of these modifications. Also discussed are the format of the data recorded by the HDTR, the magnetic tape format, the magnetic tape dropout characteristics as experienced in the Image Processing Facility, the head life history, and the reliability of the HDTR's.
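    The 500-to-1 total improvement quoted above is simply the ratio of the as-delivered and final BER specifications; a quick check of the two upgrade steps:

    ```python
    # BER improvement factors for the two HDTR upgrade steps described above.
    ber_original = 1e-6      # as-delivered specification
    ber_randomized = 2e-7    # after adding data randomizing circuitry
    ber_corrected = 2e-9     # after adding bit error correction

    print(ber_original / ber_randomized)   # step 1: factor of 5
    print(ber_randomized / ber_corrected)  # step 2: factor of 100
    print(ber_original / ber_corrected)    # total improvement: factor of 500
    ```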

  8. High-rate error-correction codes for the optical atmospheric channel

    NASA Astrophysics Data System (ADS)

    Anguita, Jaime A.; Djordjevic, Ivan B.; Neifeld, Mark A.; Vasic, Bane V.

    2005-08-01

    We evaluate two error correction systems based on low-density parity-check (LDPC) codes for free-space optical (FSO) communication channels subject to atmospheric turbulence. We simulate the effect of turbulence on the received signal by modeling the channel with a gamma-gamma distribution. We compare the bit-error rate performance of these codes with the performance of Reed-Solomon codes of similar rate and obtain coding gains from 3 to 14 dB depending on the turbulence conditions.
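    The gamma-gamma model referenced here treats the received irradiance as the product of two independent unit-mean gamma-distributed factors (large- and small-scale scintillation). A minimal sketch of drawing fading samples under that model; the alpha and beta values are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def gamma_gamma_samples(alpha: float, beta: float, n: int, seed=None) -> np.ndarray:
        """Draw n unit-mean irradiance samples from a gamma-gamma fading model,
        i.e. the product of two independent unit-mean Gamma random variables."""
        rng = np.random.default_rng(seed)
        large_scale = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
        small_scale = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
        return large_scale * small_scale

    # Illustrative moderate-turbulence parameters (assumed, not from the paper).
    samples = gamma_gamma_samples(alpha=4.0, beta=1.9, n=100_000, seed=1)
    print(samples.mean())  # close to 1.0 for a unit-mean channel
    ```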

  9. Design of bit error rate tester based on a high speed bit and sequence synchronization

    NASA Astrophysics Data System (ADS)

    Wang, Xuanmin; Zhao, Xiangmo; Zhang, Lichuan; Zhang, Yinglong

    2013-03-01

    In traditional BER (bit error rate) testers, bit synchronization uses a digital PLL and sequence synchronization relies on sequence correlation, which makes both bit and sequence synchronization slow. This paper presents new methods to realize bit and sequence synchronization: a bit-edge-tracking method and an immitting-sequence method. A BER tester based on an FPGA was designed, with added functions for inserting error bits and removing false sequence synchronization. Debugging and simulation results show that bit synchronization is achieved in less than one bit width, that the lag of the tracking bit pulse is 1/8 of the code cycle, and that sequence synchronization requires only one M-sequence cycle. The new BER tester has many advantages, including short bit and sequence synchronization times, no false sequence synchronization, the ability to test the error-correcting capability of the receiving port, and simple hardware.

  10. “Missed” Mild Cognitive Impairment: High False-Negative Error Rate Based on Conventional Diagnostic Criteria

    PubMed Central

    Edmonds, Emily C.; Delano-Wood, Lisa; Jak, Amy J.; Galasko, Douglas R.; Salmon, David P.; Bondi, Mark W.

    2016-01-01

    Mild cognitive impairment (MCI) is typically diagnosed using subjective complaints, screening measures, clinical judgment, and a single memory score. Our prior work has shown that this method is highly susceptible to false-positive diagnostic errors. We examined whether the criteria also lead to “false-negative” errors by diagnostically reclassifying 520 participants using novel actuarial neuropsychological criteria. Results revealed a false-negative error rate of 7.1%. Participants’ neuropsychological performance, cerebrospinal fluid biomarkers, and rate of decline provided evidence that an MCI diagnosis is warranted. The impact of “missed” cases of MCI has direct relevance to clinical practice, research studies, and clinical trials of prodromal Alzheimer's disease. PMID:27031477

  11. Unacceptably High Error Rates in Vitek 2 Testing of Cefepime Susceptibility in Extended-Spectrum-β-Lactamase-Producing Escherichia coli

    PubMed Central

    Rhodes, Nathaniel J.; Richardson, Chad L.; Heraty, Ryan; Liu, Jiajun; Malczynski, Michael; Qi, Chao

    2014-01-01

    While a lack of concordance is known between gold standard MIC determinations and Vitek 2, the magnitude of the discrepancy and its impact on treatment decisions for extended-spectrum-β-lactamase (ESBL)-producing Escherichia coli are not. Clinical isolates of ESBL-producing E. coli were collected from blood, tissue, and body fluid samples from January 2003 to July 2009. Resistance genotypes were identified by PCR. Primary analyses evaluated the discordance between Vitek 2 and gold standard methods using cefepime susceptibility breakpoint cutoff values of 8, 4, and 2 μg/ml. The discrepancies in MICs between the methods were classified per convention as very major, major, and minor errors. Sensitivity, specificity, and positive and negative predictive values for susceptibility classifications were calculated. A total of 304 isolates were identified; 59% (179) of the isolates carried blaCTX-M, 47% (143) carried blaTEM, and 4% (12) carried blaSHV. At a breakpoint MIC of 8 μg/ml, Vitek 2 produced a categorical agreement of 66.8% and exhibited very major, major, and minor error rates of 23% (20/87 isolates), 5.1% (8/157 isolates), and 24% (73/304), respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 8 μg/ml were 94.9%, 61.2%, 72.3%, and 91.8%, respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 2 μg/ml were 83.8%, 65.3%, 41%, and 93.3%, respectively. Vitek 2 results in unacceptably high error rates for cefepime compared to those of agar dilution for ESBL-producing E. coli. Clinicians should be wary of making treatment decisions on the basis of Vitek 2 susceptibility results for ESBL-producing E. coli. PMID:24752253
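    The reported sensitivity, specificity, and predictive values follow from a standard 2x2 classification of Vitek 2 calls against the gold standard (treating "susceptible" as the positive call). A sketch of that calculation; the counts below are hypothetical placeholders, since the abstract does not give the full 2x2 table:

    ```python
    # Sensitivity, specificity, PPV and NPV from a 2x2 table of Vitek 2
    # susceptibility calls versus the gold standard method.
    def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical counts for illustration only (not the study's data).
    print(classification_metrics(tp=149, fp=57, fn=8, tn=90))
    ```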

  12. Improvement of Bit Error Rate in Holographic Data Storage Using the Extended High-Frequency Enhancement Filter

    NASA Astrophysics Data System (ADS)

    Kim, Do-Hyung; Cho, Janghyun; Moon, Hyungbae; Jeon, Sungbin; Park, No-Cheol; Yang, Hyunseok; Park, Kyoung-Su; Park, Young-Pil

    2013-09-01

    Optimized image restoration is suggested in angular-multiplexing-page-based holographic data storage. To improve the bit error rate (BER), an extended high frequency enhancement filter is recalculated from the point spread function (PSF) and Gaussian mask as the image restoration filter. Using the extended image restoration filter, the proposed system reduces the number of processing steps compared with the image upscaling method and provides better performance in BER and SNR. Numerical simulations and experiments were performed to verify the proposed method. The proposed system exhibited a marked improvement in BER from 0.02 to 0.002 for a Nyquist factor of 1.1, and from 0.006 to 0 for a Nyquist factor of 1.2. Moreover, calculation was more than 3 times faster than image restoration with PSF upscaling, owing to reductions in the number of processing steps and in calculation load.

  13. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities

    PubMed Central

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-01-01

    Introduction: Sound is among the significant environmental factors affecting people’s health; it plays an important role in both physical and psychological injuries, and it also affects individuals’ performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. Methods: This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females), in which each person served as his or her own control. Performance was assessed at sound levels of 70, 90, and 110 dB, using two factors (physical features and the creation of different sound-source conditions) and the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measures analyses were used to compare the length of performance as well as the errors measured in the test. Results: We found a direct and significant association between sound level and length of performance. Moreover, the participants’ performance differed significantly across sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). Conclusion: This study found that a sound level of 110 dB had an important effect on individuals’ performance, i.e., performance decreased. PMID:27123216

  14. Bit-error-rate testing of high-power 30-GHz traveling wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Fujikawa, Gene

    1986-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30 GHz, 200 W, coupled-cavity traveling wave tubes (TWTs). The transmission effects of each TWT were investigated on a band-limited, 220 Mb/sec SMSK signal. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20 GHz technology development program. The approach taken to test the 30 GHz tubes is described and the resultant test data are discussed. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  15. Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.

    1987-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  16. Adaptation of bit error rate by coding

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Sorton, G.

    1984-07-01

    The use of coding in spacecraft wideband communication to reduce power transmission, save bandwidth, and lower antenna specifications was studied. The feasibility of a coder/decoder functioning at a bit rate of 10 Mb/sec with a raw bit error rate (BER) of 10^-3 and an output BER of 10^-9 is demonstrated. A single-level block code protection and a two-level coding protection are examined. A single-level BCH code with a 5-error correction capacity, 16% redundancy, and interleaving depth 4, giving a coded block of 1020 bits, is simple to implement but achieves only a BER of 7 x 10^-9. A single-level BCH code with a 7-error correction capacity and 12% redundancy meets the specification but is more difficult to implement. Two-level protection with 9% BCH outer and 10% BCH inner codes, both levels with a 3-error correction capacity and 8% redundancy, for a coded block of 7050 bits, is the most complex but offers performance advantages.

  17. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    PubMed Central

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572

  18. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase.

    PubMed

    McInerney, Peter; Adams, Paul; Hadi, Masood Z

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572

  19. Forward error correction and spatial diversity techniques for high-data-rate MILSATCOM over a slow-fading, nuclear-disturbed channel

    NASA Astrophysics Data System (ADS)

    Paul, Heywood I.; Meader, Charles B.; Lyons, Daniel A.; Ayers, David R.

    Forward error correction (FEC) and spatial diversity techniques are considered for improving the reliability of high-data-rate military satellite communication (MILSATCOM) over a slow-fading, nuclear-disturbed channel. Slow fading, which occurs when the channel decorrelation time is much greater than the transmitted symbol interval, is characterized by deep fades and, without special precautions, long bursts of errors over high-data-rate communication links. Using the widely accepted Defense Nuclear Agency (DNA) nuclear-scintillated channel model, the authors derive performance tradeoffs among required interleaver storage, FEC, spatial diversity, and link signal-to-noise ratio for differential binary phase shift keying (DBPSK) in the slow-fading environment. Spatial diversity is found to yield impressive gains without the large memory storage and transmission relay requirements associated with interleaving.

  20. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES Beta

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  1. Multicenter Assessment of Gram Stain Error Rates.

    PubMed

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. PMID:26888900

  2. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Content of Error Rate Reports. 98.102 Section 98... DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report—At a minimum, States, the District of Columbia and Puerto Rico shall submit an initial error...

  3. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  4. Improved coded optical communication error rates using joint detection receivers

    NASA Astrophysics Data System (ADS)

    Dutton, Zachary; Guha, Saikat; Chen, Jian; Habif, Jonathan; Lazarus, Richard

    2012-02-01

    It is now known that coherent state (laser light) modulation is sufficient to reach the ultimate quantum limit (the Holevo bound) for classical communication capacity. However, all current optical communication systems are fundamentally limited in capacity because they perform measurements on single symbols at a time. To reach the Holevo bound, joint quantum measurements over long symbol blocks will be required. We recently proposed and demonstrated the "conditional pulse nulling" (CPN) receiver -- which acts jointly on the time slots of a pulse-position-modulation (PPM) codeword by employing pulse nulling and quantum feedforward -- and demonstrated a 2.3 dB improvement in error rate over direct detection (DD). In a communication system, coded error rates are made arbitrarily small by employing an outer code (such as Reed-Solomon (RS)). Here we analyze RS coding of PPM errors with both DD and CPN receivers and calculate the outer code length requirements. We find that the improved PPM error rates with the CPN receiver translate into a >10 times improvement in the required outer code length at high rates. This advantage also translates into an increased range for a given coding complexity. In addition, we present results for outer coded error rates of our recently proposed "Green Machine", which realizes a joint detection advantage for binary phase shift keyed (BPSK) modulation.

  5. Error Growth Rate in the MM5 Model

    NASA Astrophysics Data System (ADS)

    Ivanov, S.; Palamarchuk, J.

    2006-12-01

    The goal of this work is to estimate model error growth rates in simulations of the atmospheric circulation by the MM5 model all the way from the short range to the medium range and beyond. The major topics addressed are: (i) searching for the optimal set of parameterization schemes; (ii) evaluating the spatial structure and scales of the model error for various atmospheric fields; (iii) determining the geographical regions where model errors are largest; (iv) defining particular atmospheric patterns contributing to fast and significant model error growth. Results are presented for geopotential, temperature, relative humidity and horizontal wind component fields on standard surfaces over the Atlantic-European region during winter 2002. Various combinations of parameterization schemes for cumulus, PBL, moisture and radiation are used to identify which one yields the smallest difference between the model state and the analysis. The model fields are compared against the ERA-40 reanalysis of the ECMWF. Results show that the rate at which the model error grows, as well as its magnitude, varies depending on the forecast range, atmospheric variable and level. The typical spatial scale and structure of the model error also depend on the particular atmospheric variable. The distribution of the model error over the domain can be separated into two parts: steady and transient. The first part is associated with a few high mountain regions, including Greenland, where the model error is larger. The transient model error mainly moves along with areas of high gradients in the atmospheric flow. Acknowledgement: This study has been supported by NATO Science for Peace grant #981044. The MM5 modelling system used in this study has been provided by UCAR. ERA-40 re-analysis data have been obtained from the ECMWF data server.

  6. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow and sea urchin fertilization tests showed that when the test x dose interaction term was not included in the calculations the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.
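    The inflation described above can be reproduced with a small Monte Carlo: simulate replicate tests that share a dose-by-test shift under the null hypothesis, then test dose against control while ignoring that interaction. The sketch below uses illustrative variance parameters, not values from the study:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def one_trial(n_tests=5, n_reps=10, sigma_within=1.0, sigma_interaction=0.5):
        """Simulate control vs. dose data with a dose-x-test interaction and no
        true dose effect, then run a pooled t-test that ignores the interaction."""
        control, dose = [], []
        for _ in range(n_tests):
            shift = rng.normal(0.0, sigma_interaction)  # dose-x-test interaction
            control.append(rng.normal(0.0, sigma_within, n_reps))
            dose.append(rng.normal(shift, sigma_within, n_reps))
        return stats.ttest_ind(np.concatenate(control), np.concatenate(dose)).pvalue

    pvals = np.array([one_trial() for _ in range(2000)])
    # The observed rate exceeds the nominal 5% because the pooled t-test
    # ignores the extra between-test variance in the dose group.
    print("observed type-1 error rate:", (pvals < 0.05).mean())
    ```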

  7. Defining Error Rates and Power for Detecting Answer Copying.

    ERIC Educational Resources Information Center

    Wollack, James A.; Cohen, Allan S.; Serlin, Ronald C.

    2001-01-01

    Developed a family wise approach for evaluating the significance of copying indices designed to hold the Type I error rate constant for each examinee. Examined the Type I error rate and power of two indices under a variety of copying situations. Results indicate the superiority of a family wise definition of Type I error rate over a pair-wise…

  8. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public... Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart apply to the fifty States, the District of Columbia and Puerto Rico. (b) Generally—States, the...

  9. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Error Rate Report. 98.100 Section 98.100 Public... Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart apply to the fifty States, the District of Columbia and Puerto Rico. (b) Generally—States, the...

  10. Neutron-induced soft error rate measurements in semiconductor memories

    NASA Astrophysics Data System (ADS)

    Ünlü, Kenan; Narayanan, Vijaykrishnan; Çetiner, Sacit M.; Degalahal, Vijay; Irwin, Mary J.

    2007-08-01

    Soft error rate (SER) testing of devices have been performed using the neutron beam at the Radiation Science and Engineering Center at Penn State University. The soft error susceptibility for different memory chips working at different technology nodes and operating voltages is determined. The effect of 10B on SER as an in situ excess charge source is observed. The effect of higher-energy neutrons on circuit operation will be published later. Penn State Breazeale Nuclear Reactor was used as the neutron source in the experiments. The high neutron flux allows for accelerated testing of the SER phenomenon. The experiments and analyses have been performed only on soft errors due to thermal neutrons. Various memory chips manufactured by different vendors were tested at various supply voltages and reactor power levels. The effect of 10B reaction caused by thermal neutron absorption on SER is discussed.

  11. Logical error rate in the Pauli twirling approximation

    PubMed Central

    Katabarwa, Amara; Geller, Michael R.

    2015-01-01

    The performance of error correction protocols is necessary for understanding the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA’s accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes. PMID:26419417

  12. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.

  13. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval; Wallman, Joel; Sanders, Barry

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli-distance as a measure of this deviation, and we show that knowledge of the Pauli-distance enables tighter estimates of the error rate of quantum gates.

  14. The Rate of Physicochemical Incompatibilities, Administration Errors. Factors Correlating with Nurses' Errors.

    PubMed

    Fahimi, Fanak; Sefidani Forough, Aida; Taghikhani, Sepideh; Saliminejad, Leila

    2015-01-01

    Medication errors are commonly encountered in the hospital setting. Intravenous medications pose particular risks because of their greater complexity and the multiple steps required in their preparation, administration and monitoring. We aimed to determine the rate of errors during the preparation and administration phase of intravenous medications and the correlation of these errors with the demographics of the nurses involved in the process. One hundred patients who were receiving IV medications were monitored by a trained pharmacist. The researcher accompanied the nurses during the preparation and administration process of IV medications. Collected data were compared with the acceptable guidelines. A checklist was filled for each IV medication. Demographic data of the nurses were collected as well. A total of 454 IV medications were recorded. Inappropriate administration rate constituted a large proportion of the errors in our study (35.3%). No significant or life-threatening drug interaction was recorded during the study. Evaluating the impact of the nurses' demographic characteristics on the incidence of medication errors showed that there is a direct correlation between nurses' employment status and the rate of medication errors, while other characteristics did not show a significant impact on the rate of administration errors. Administration errors were significantly higher in the temporary one-year contract group than in other groups (p-value < 0.0001). Study results show that there should be more vigilance regarding the administration rate of IV medications, especially by pharmacists, to prevent negative consequences. Optimizing the working conditions of nurses may play a crucial role. PMID:26185509

  15. The Rate of Physicochemical Incompatibilities, Administration Errors. Factors Correlating with Nurses' Errors

    PubMed Central

    Fahimi, Fanak; Sefidani Forough, Aida; Taghikhani, Sepideh; Saliminejad, Leila

    2015-01-01

    Medication errors are commonly encountered in the hospital setting. Intravenous medications pose particular risks because of their greater complexity and the multiple steps required in their preparation, administration and monitoring. We aimed to determine the rate of errors during the preparation and administration phase of intravenous medications and the correlation of these errors with the demographics of the nurses involved in the process. One hundred patients who were receiving IV medications were monitored by a trained pharmacist. The researcher accompanied the nurses during the preparation and administration process of IV medications. Collected data were compared with the acceptable guidelines. A checklist was filled for each IV medication. Demographic data of the nurses were collected as well. A total of 454 IV medications were recorded. Inappropriate administration rate constituted a large proportion of the errors in our study (35.3%). No significant or life-threatening drug interaction was recorded during the study. Evaluating the impact of the nurses’ demographic characteristics on the incidence of medication errors showed that there is a direct correlation between nurses’ employment status and the rate of medication errors, while other characteristics did not show a significant impact on the rate of administration errors. Administration errors were significantly higher in the temporary one-year contract group than in other groups (p-value < 0.0001). Study results show that there should be more vigilance regarding the administration rate of IV medications, especially by pharmacists, to prevent negative consequences. Optimizing the working conditions of nurses may play a crucial role. PMID:26185509

  16. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  17. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  18. Hypercorrection of High Confidence Errors in Children

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2012-01-01

    Three experiments investigated whether the hypercorrection effect--the finding that errors committed with high confidence are easier, rather than more difficult, to correct than are errors committed with low confidence--occurs in grade school children as it does in young adults. All three experiments showed that Grade 3-6 children hypercorrected…

  19. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
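    For the first question (how long an error-free simulation certifies a CWER requirement), one standard exact approach is the Clopper-Pearson bound, whose zero-error case reduces to the familiar "rule of three". The sketch below illustrates that approach; it is not necessarily the specific method adopted in the report:

    ```python
    from scipy.stats import beta

    def cwer_upper_bound(errors: int, trials: int, confidence: float = 0.95) -> float:
        """One-sided exact (Clopper-Pearson) upper confidence bound on the CWER."""
        if errors >= trials:
            return 1.0
        return beta.ppf(confidence, errors + 1, trials - errors)

    # Zero codeword errors observed in 1e6 simulated codewords:
    print(cwer_upper_bound(0, 10**6))  # ~3.0e-6, i.e. the "rule of three" 3/N
    # Two codeword errors observed in the same run:
    print(cwer_upper_bound(2, 10**6))  # ~6.3e-6
    ```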

  20. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency

    PubMed Central

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2013-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 2nd graders and 974 3rd graders. Participants were assessed using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Woodcock Reading Mastery Test (WRMT) Passage Comprehension subtest. Results from this study further illuminate the significant relationships between error rate, oral reading fluency, and reading comprehension performance, and grade-specific guidelines for appropriate error rate levels. Low oral reading fluency and high error rates predict the level of passage comprehension performance. For second grade students below benchmark, a fall assessment error rate of 28% predicts that student comprehension performance will be below average. For third grade students below benchmark, the fall assessment cut point is 14%. Instructional implications of the findings are discussed. PMID:24319307

  1. Dose error from deviation of dwell time and source position for high dose-rate 192Ir in remote afterloading system

    PubMed Central

    Okamoto, Hiroyuki; Aikawa, Ako; Wakita, Akihisa; Yoshio, Kotaro; Murakami, Naoya; Nakamura, Satoshi; Hamada, Minoru; Abe, Yoshihisa; Itami, Jun

    2014-01-01

    The influence of deviations in dwell times and source positions for 192Ir HDR-RALS was investigated. The potential dose errors for various kinds of brachytherapy procedures were evaluated. The deviations of dwell time ΔT of a 192Ir HDR source for the various dwell times were measured with a well-type ionization chamber. The deviations of source position ΔP were measured with two methods. One is to measure the actual source position using a check ruler device. The other is to analyze peak distances from radiographic film irradiated with a 20 mm gap between the dwell positions. The composite dose errors were calculated using a Gaussian distribution with ΔT and ΔP as 1σ of the measurements. Dose errors depend on dwell time and on the distance from the point of interest to the dwell position. To evaluate the dose error in clinical practice, dwell times and point-of-interest distances were obtained from actual treatment plans involving cylinder, tandem-ovoid, tandem-ovoid with interstitial needles, multiple interstitial needles, and surface-mold applicators. The ΔT and ΔP were 32 ms (maximum for the various dwell times) and 0.12 mm (ruler) or 0.11 mm (radiographic film). The multiple interstitial needles technique shows the highest dose error, of 2%, while the others show less than approximately 1%. The potential dose error due to dwell time and source position deviations can depend on the kind of brachytherapy technique; among those studied, the multiple interstitial needles technique is the most susceptible. PMID:24566719
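    A simplified way to see why short dwell times and small source-to-point distances (as in interstitial needles) are most susceptible is first-order error propagation under a point-source approximation (dose proportional to T/r²). The sketch below uses the ΔT and ΔP values reported in the abstract as 1σ deviations; it is an assumed simplification, not the paper's composite calculation:

    ```python
    import math

    def relative_dose_error(dwell_time_s: float, distance_mm: float,
                            sigma_t_s: float = 0.032, sigma_p_mm: float = 0.12) -> float:
        """First-order relative dose error for a single dwell position, assuming
        dose ~ T / r^2 and independent Gaussian deviations in dwell time and
        source position (simplified point-source model, not the paper's method)."""
        return math.hypot(sigma_t_s / dwell_time_s, 2.0 * sigma_p_mm / distance_mm)

    # Short dwell time close to the point of interest (needle-like geometry):
    print(relative_dose_error(dwell_time_s=1.0, distance_mm=10.0))   # ~4%
    # Longer dwell time and larger distance (applicator-like geometry):
    print(relative_dose_error(dwell_time_s=10.0, distance_mm=20.0))  # ~1.2%
    ```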

  2. Experimental quantum error correction with high fidelity

    NASA Astrophysics Data System (ADS)

    Zhang, Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-01

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have been since developed, such as the pulses generated by gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.
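    The quoted change from ε to roughly ε² is the generic behavior of a distance-3 code, which fails only when two or more independent errors occur within a block. A minimal numerical illustration under an independent-error assumption (not a model of the specific NMR experiment):

    ```python
    from math import comb

    def logical_error_rate(eps: float, n: int = 3) -> float:
        """Probability that 2 or more of n qubits suffer an error, i.e. the
        failure probability of a distance-3 code under independent errors."""
        return sum(comb(n, k) * eps**k * (1 - eps)**(n - k) for k in range(2, n + 1))

    for eps in (1e-1, 1e-2, 1e-3):
        print(f"eps = {eps:.0e}: logical error ~ {logical_error_rate(eps):.2e} (about 3*eps^2)")
    ```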

  3. Experimental quantum error correction with high fidelity

    SciTech Connect

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-15

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ≈ε². In the current work we reproduce a similar experiment using control techniques that have been since developed, such as the pulses generated by gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  4. Theoretical Accuracy for ESTL Bit Error Rate Tests

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin

    1998-01-01

    "Bit error rate" [BER] for the purposes of this paper is the fraction of binary bits which are inverted by passage through a communication system. BER can be measured for a block of sample bits by comparing a received block with the transmitted block and counting the erroneous bits. Bit Error Rate [BER] tests are the most common type of test used by the ESTL for evaluating system-level performance. The resolution of the test is obvious: the measurement cannot be resolved more finely than 1/N, the number of bits tested. The tolerance is not. This paper examines the measurement accuracy of the bit error rate test. It is intended that this information will be useful in analyzing data taken in the ESTL. This paper is divided into four sections and follows a logically ordered presentation, with results developed before they are evaluated. However, first-time readers will derive the greatest benefit from this paper by skipping the lengthy section devoted to analysis, and treating it as reference material. The analysis performed in this paper is based on a Probability Density Function [PDF] which is developed with greater detail in a past paper, Theoretical Accuracy for ESTL Probability of Acquisition Tests, EV4-98-609.

  5. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  6. Empirical assessment of sequencing errors for high throughput pyrosequencing data

    PubMed Central

    2013-01-01

    Background: Sequencing-by-synthesis technologies significantly improve over the Sanger method in terms of speed and cost per base. However, they still usually fail to compete in terms of read length and quality. Current high-throughput implementations of the pyrosequencing technique yield reads whose length approaches that of the capillary electrophoresis method. A less obvious question is whether their quality is affected by platform-specific sequencing errors. Results: We present an empirical study aimed at assessing the quality and characterising sequencing errors for high throughput pyrosequencing data. We have developed a procedure for extracting sequencing error data from genome assemblies and studying their characteristics, in particular the length distribution of indel gaps and their relation to the sequence contexts where they occur. We used this procedure to analyse data from three prokaryotic genomes sequenced with the GS FLX technology. We also compared two models previously employed with success for peptide sequence alignment. Conclusions: We observed an overall very low error rate in the analysed data, with indel errors being much more abundant than substitutions. We also observed a dependence between the length of the gaps and that of the homopolymer context where they occur. As with protein alignments, a power-law model seems to approximate the indel errors more accurately, although the results are not so conclusive as to justify a departure from the commonly used affine gap penalty scheme. In either case, however, our procedure can be used to estimate more realistic error model parameters. PMID:23339526

  7. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAM's (UPD4216) and 1 Kb SRAM's (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  8. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  9. Improving bit error rate through multipath differential demodulation

    NASA Astrophysics Data System (ADS)

    Lize, Yannick Keith; Christen, Louis; Nuccio, Scott; Willner, Alan E.; Kashyap, Raman

    2007-02-01

    Differential phase shift keyed transmission (DPSK) is currently under serious consideration as a deployable data-modulation format for high-capacity optical communication systems, due mainly to its 3 dB OSNR advantage over intensity modulation. However, DPSK OSNR requirements are still 3 dB higher than those of its coherent counterpart, PSK. Some strategies have been proposed to reduce this penalty through multichip soft detection, but the improvement is limited to 0.3 dB at a BER of 10^-3. Better performance is expected from other soft-detection schemes using feedback control, but the implementation is not straightforward. We present here an optical multipath error correction technique for differentially encoded modulation formats such as differential-phase-shift-keying (DPSK) and differential polarization shift keying (DPolSK) for fiber-based and free-space communication. This multipath error correction method combines optical and electronic logic gates. The scheme can easily be implemented using commercially available interferometers and high-speed logic gates and does not require any data overhead, and therefore does not affect the effective bandwidth of the transmitted data. It is not merely compatible but also complementary to error correction codes commonly used in optical transmission systems such as forward-error-correction (FEC). The technique consists of separating the demodulation at the receiver into multiple paths. Each path consists of a Mach-Zehnder interferometer with an integer-bit delay, and a different delay is used in each path. Some basic logical operations follow, and the three paths are compared using a simple majority vote algorithm. Receiver sensitivity is improved by 0.35 dB in simulations and 1.5 dB experimentally at a BER of 10^-3.

  10. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the de Bruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal to noise ratio E sub b/N sub 0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10(exp -8) and a BER of 1.4 x 10(exp -9). The (15, 1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because these codes require a higher bandwidth than the NASA (7,1/2) code, the gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  11. Error Rates and Channel Capacities in Multipulse PPM

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Moision, Bruce

    2007-01-01

    A method of computing channel capacities and error rates in multipulse pulse-position modulation (multipulse PPM) has been developed. The method makes it possible, when designing an optical PPM communication system, to determine whether and under what conditions a given multipulse PPM scheme would be more or less advantageous, relative to other candidate modulation schemes. In conventional M-ary PPM, each symbol is transmitted in a time frame that is divided into M time slots (where M is an integer >1), defining an M-symbol alphabet. A symbol is represented by transmitting a pulse (representing 1) during one of the time slots and no pulse (representing 0) during the other M - 1 time slots. Multipulse PPM is a generalization of PPM in which pulses are transmitted during two or more of the M time slots.
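
    As a quick illustration of the trade-off the record describes (this sketch is an addition, not part of the NASA record), the alphabet size of k-pulse PPM in M slots is the binomial coefficient C(M, k), so allowing a second pulse enlarges the alphabet and the number of bits carried per symbol:

      from math import comb, log2

      def multipulse_ppm_alphabet(M, k):
          """Number of distinct symbols and bits per symbol for k-pulse PPM in M slots."""
          symbols = comb(M, k)            # choose which k of the M slots carry a pulse
          return symbols, log2(symbols)

      # Conventional 16-ary PPM (k = 1) versus 2-pulse PPM in the same 16 slots.
      for k in (1, 2):
          n_sym, bits = multipulse_ppm_alphabet(16, k)
          print(f"k={k}: {n_sym} symbols, {bits:.2f} bits/symbol")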

  12. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  13. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between

  14. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  15. Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers

    NASA Technical Reports Server (NTRS)

    Ha, Eunho; North, Gerald R.

    1995-01-01

    Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
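
    To make the nonlinearity argument concrete, the sketch below (not taken from the paper) assumes a simple saturating brightness-temperature model and a mixed-lognormal rain field inside one footprint, and shows that inverting the footprint-mean brightness temperature underestimates the true mean rain rate; the functional form, parameters, and rain statistics are illustrative assumptions only.

      import numpy as np

      rng = np.random.default_rng(0)

      def tb_of_rain(R, T_clear=180.0, T_sat=280.0, scale=5.0):
          """Assumed saturating brightness-temperature model (illustrative only)."""
          return T_sat - (T_sat - T_clear) * np.exp(-R / scale)

      def rain_of_tb(T, T_clear=180.0, T_sat=280.0, scale=5.0):
          """Inverse of the assumed model, i.e. the single-point retrieval formula."""
          return -scale * np.log((T_sat - T) / (T_sat - T_clear))

      # Mixed-lognormal rain field inside one footprint: 70% of points have no rain.
      n = 100_000
      raining = rng.random(n) < 0.3
      R = np.where(raining, rng.lognormal(mean=1.0, sigma=1.0, size=n), 0.0)

      true_mean = R.mean()
      retrieved = rain_of_tb(tb_of_rain(R).mean())    # retrieval from footprint-mean TB
      print(f"true mean rain rate    : {true_mean:.2f}")
      print(f"retrieval from mean TB : {retrieved:.2f}  (beam-filling bias)")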

  16. High-dimensional bolstered error estimation

    PubMed Central

    Sima, Chao; Braga-Neto, Ulisses M.; Dougherty, Edward R.

    2011-01-01

    Motivation: In small-sample settings, bolstered error estimation has been shown to perform better than cross-validation and competitively with bootstrap with regard to various criteria. The key issue for bolstering performance is the variance setting for the bolstering kernel. Heretofore, this variance has been determined in a non-parametric manner from the data. Although bolstering based on this variance setting works well for small feature sets, results can deteriorate for high-dimensional feature spaces. Results: This article computes an optimal kernel variance depending on the classification rule, sample size, model and feature space, both the original number and the number remaining after feature selection. A key point is that the optimal variance is robust relative to the model. This allows us to develop a method for selecting a suitable variance to use in real-world applications where the model is not known, but the other factors in determining the optimal kernel are known. Availability: Companion website at http://compbio.tgen.org/paper_supp/high_dim_bolstering Contact: edward@mail.ece.tamu.edu PMID:21914630
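
    For readers unfamiliar with the estimator whose kernel variance is being optimized here, the following is a minimal Monte Carlo sketch of bolstered resubstitution with a spherical Gaussian bolstering kernel; the kernel standard deviation sigma is left as an explicit input, and the paper's optimal-variance calculation is not reproduced.

      import numpy as np

      def bolstered_resub_error(predict, X, y, sigma, n_mc=200, rng=None):
          """Monte Carlo bolstered resubstitution error: a spherical Gaussian kernel of
          standard deviation sigma is centred on each training point, and the error is
          the average misclassified kernel mass."""
          rng = np.random.default_rng(rng)
          errs = []
          for x, label in zip(X, y):
              samples = x + sigma * rng.standard_normal((n_mc, X.shape[1]))
              errs.append(np.mean(predict(samples) != label))
          return float(np.mean(errs))

      # Toy usage with a nearest-mean classifier on two Gaussian classes.
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(2.0, 1.0, (20, 2))])
      y = np.array([0] * 20 + [1] * 20)
      means = np.array([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])
      predict = lambda Z: np.argmin(((Z[:, None, :] - means) ** 2).sum(axis=-1), axis=1)
      print("bolstered resubstitution error:", bolstered_resub_error(predict, X, y, sigma=0.5))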

  17. Testing Theories of Transfer Using Error Rate Learning Curves.

    PubMed

    Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I

    2016-07-01

    We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and make a sacrifice to explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions. PMID:27230694

  18. High Dimensional Variable Selection with Error Control.

    PubMed

    Kim, Sangjin; Halabi, Susan

    2016-01-01

    Background. The iterative sure independence screening (ISIS) is a popular method for selecting important variables while maintaining most of the informative variables relevant to the outcome in high throughput data. However, it not only is computationally intensive but also may cause a high false discovery rate (FDR). We propose to use the FDR as a screening method to reduce the high dimension to a lower dimension as well as to control the FDR with three popular variable selection methods: LASSO, SCAD, and MCP. Method. The three methods with the proposed screenings were applied to prostate cancer data with the presence of metastasis as the outcome. Results. Simulations showed that the three variable selection methods with the proposed screenings controlled the predefined FDR and produced high area under the receiver operating characteristic curve (AUROC) scores. In applying these methods to the prostate cancer example, LASSO and MCP selected 12 and 8 genes and produced AUROC scores of 0.746 and 0.764, respectively. Conclusions. We demonstrated that the variable selection methods with the sequential use of FDR and ISIS not only controlled the predefined FDR in the final models but also had relatively high AUROC scores. PMID:27597974
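
    A minimal sketch of the two-stage idea described above, assuming univariate t-tests with Benjamini-Hochberg FDR control for the screening step and an L1-penalised (LASSO) logistic model for the selection step; the data, thresholds, and pipeline details are illustrative and not the authors' exact procedure.

      import numpy as np
      from scipy import stats
      from statsmodels.stats.multitest import multipletests
      from sklearn.linear_model import LogisticRegressionCV

      def fdr_screen_then_lasso(X, y, fdr=0.05):
          """Screen features with univariate t-tests under BH-FDR control, then fit an
          L1-penalised logistic model on the survivors (assumes at least one survives)."""
          _, pvals = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
          keep, _, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
          model = LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5)
          model.fit(X[:, keep], y)
          selected = np.flatnonzero(keep)[np.flatnonzero(model.coef_.ravel())]
          return selected, model

      # Toy data: 500 noise features plus 5 informative ones.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 505))
      y = (X[:, :5].sum(axis=1) > 0).astype(int)
      selected, _ = fdr_screen_then_lasso(X, y)
      print("selected feature indices:", selected)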

  19. High Dimensional Variable Selection with Error Control

    PubMed Central

    2016-01-01

    Background. The iterative sure independence screening (ISIS) is a popular method for selecting important variables while maintaining most of the informative variables relevant to the outcome in high throughput data. However, it not only is computationally intensive but also may cause a high false discovery rate (FDR). We propose to use the FDR as a screening method to reduce the high dimension to a lower dimension as well as to control the FDR with three popular variable selection methods: LASSO, SCAD, and MCP. Method. The three methods with the proposed screenings were applied to prostate cancer data with the presence of metastasis as the outcome. Results. Simulations showed that the three variable selection methods with the proposed screenings controlled the predefined FDR and produced high area under the receiver operating characteristic curve (AUROC) scores. In applying these methods to the prostate cancer example, LASSO and MCP selected 12 and 8 genes and produced AUROC scores of 0.746 and 0.764, respectively. Conclusions. We demonstrated that the variable selection methods with the sequential use of FDR and ISIS not only controlled the predefined FDR in the final models but also had relatively high AUROC scores. PMID:27597974

  20. The effects of digitizing rate and phase distortion errors on the shock response spectrum

    NASA Technical Reports Server (NTRS)

    Wise, J. H.

    1983-01-01

    Some of the methods used for acquisition and digitization of high-frequency transients in the analysis of pyrotechnic events, such as explosive bolts for spacecraft separation, are discussed with respect to the reduction of errors in the computed shock response spectrum. Equations are given for maximum error as a function of the sampling rate, phase distortion, and slew rate, and the effects of the characteristics of the filter used are analyzed. A filter that is a compromise between the flat passband of the elliptic filter and the phase response of the Bessel filter is noted to exhibit good passband amplitude, phase response, and response to a step function; it is suggested that it be used with a sampling rate of 10f (5 percent).

  1. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
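
    For context on the channel model, here is a small Monte Carlo of uncoded M-ary PPM on a Poisson channel with maximum-count detection; the signal and background count parameters are arbitrary assumptions, and the sketch does not reproduce the coded (APPM) bounds derived in the work.

      import numpy as np

      def ppm_poisson_ser(M=16, ns=5.0, nb=1.0, n_sym=200_000, rng=None):
          """Monte Carlo symbol error rate of uncoded M-ary PPM on a Poisson channel:
          the pulsed slot has mean count ns + nb, every other slot has mean nb, and
          the detector picks the slot with the largest count (ties broken at random)."""
          rng = np.random.default_rng(rng)
          counts = rng.poisson(nb, size=(n_sym, M)).astype(float)
          counts[:, 0] += rng.poisson(ns, size=n_sym)        # symbol 0 sent, w.l.o.g.
          counts += rng.random(counts.shape) * 1e-3          # random tie-breaking
          return float(np.mean(counts.argmax(axis=1) != 0))

      print("uncoded 16-PPM SER (ns=5, nb=1):", ppm_poisson_ser(rng=0))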

  2. An Examination of Negative Halo Error in Ratings.

    ERIC Educational Resources Information Center

    Lance, Charles E.; And Others

    1990-01-01

    A causal model of halo error (HE) is derived. Three hypotheses are formulated to explain findings of negative HE. It is suggested that apparent negative HE may have been misinferred from existing correlational measures of HE, and that positive HE is more prevalent than had previously been thought. (SLD)

  3. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... cases in the sample with an error compared to the total number of cases in the sample; (2) Percentage of... the sample with an improper payment compared to the total number of cases in the sample; (3... improper payments in the sample compared to the total dollar amount of payments made in the sample;...

  4. Reducing error rates in straintronic multiferroic nanomagnetic logic by pulse shaping

    NASA Astrophysics Data System (ADS)

    Munira, Kamaram; Xie, Yunkun; Nadri, Souheil; Forgues, Mark B.; Salehi Fashami, Mohammad; Atulasimha, Jayasimha; Bandyopadhyay, Supriyo; Ghosh, Avik W.

    2015-06-01

    Dipole-coupled nanomagnetic logic (NML), where nanomagnets (NMs) with bistable magnetization states act as binary switches and information is transferred between them via dipole-coupling and Bennett clocking, is a potential replacement for conventional transistor logic since magnets dissipate less energy than transistors when they switch in a logic circuit. Magnets are also ‘non-volatile’ and hence can store the results of a computation after the computation is over, thereby doubling as both logic and memory—a feat that transistors cannot achieve. However, dipole-coupled NML is much more error-prone than transistor logic at room temperature (>1%) because thermal noise can easily disrupt magnetization dynamics. Here, we study a particularly energy-efficient version of dipole-coupled NML known as straintronic multiferroic logic (SML) where magnets are clocked/switched with electrically generated mechanical strain. By appropriately ‘shaping’ the voltage pulse that generates strain, we show that the error rate in SML can be reduced to tolerable limits. We describe the error probabilities associated with various stress pulse shapes and discuss the trade-off between error rate and switching speed in SML. The lowest error probability is obtained when a ‘shaped’ high voltage pulse is applied to strain the output NM followed by a low voltage pulse. The high voltage pulse quickly rotates the output magnet’s magnetization by 90° and aligns it roughly along the minor (or hard) axis of the NM. Next, the low voltage pulse produces the critical strain to overcome the shape anisotropy energy barrier in the NM and produce a monostable potential energy profile in the presence of dipole coupling from the neighboring NM. The magnetization of the output NM then migrates to the global energy minimum in this monostable profile and completes a 180° rotation (magnetization flip) with high likelihood.

  5. A minimum-error, energy-constrained neural code is an instantaneous-rate code.

    PubMed

    Johnson, Erik C; Jones, Douglas L; Ratnam, Rama

    2016-04-01

    Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al., Frontiers in Computational Neuroscience, 9, 61, 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous spike-rate approximates DC step inputs, matching both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals. PMID:26922680

  6. Finding the right coverage: the impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates.

    PubMed

    Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah

    2016-07-01

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffmann's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than sequence quality, and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with a coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets when coverage was increased from ≥5 to ≥30 at quality score ≥30, respectively. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignments, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates be considered prior to building sequencing libraries, that reporting genotyping error rates become standard practice, and that the effects of genotyping errors on inference be evaluated in restriction-enzyme-based SNP studies. PMID:26946083
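
    The counting step behind such estimates can be sketched as follows for biallelic SNPs coded as 0/1/2 alternate-allele copies: in a mother-offspring dyad, the only detectable Mendelian incompatibility is a pair of opposing homozygotes. The encoding and missing-data convention below are assumptions, and converting incompatibility counts into per-genotype error rates requires the additional modelling done in the paper.

      import numpy as np

      def mendelian_incompatibility_rate(mother, offspring, missing=-1):
          """Fraction of co-genotyped biallelic loci (coded 0/1/2 alt-allele copies)
          at which the dyad shares no allele, i.e. opposing homozygotes (0 vs 2),
          the only incompatibility detectable without the father's genotype."""
          m, o = np.asarray(mother), np.asarray(offspring)
          usable = (m != missing) & (o != missing)
          incompatible = ((m == 0) & (o == 2)) | ((m == 2) & (o == 0))
          return incompatible[usable].sum() / usable.sum()

      mother    = np.array([0, 1, 2, 2, 0, 1, -1, 2])
      offspring = np.array([0, 2, 0, 1, 2, 1,  1, 2])
      print("incompatibility rate:", mendelian_incompatibility_rate(mother, offspring))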

  7. A Simple Approximation for the Symbol Error Rate of Triangular Quadrature Amplitude Modulation

    NASA Astrophysics Data System (ADS)

    Duy, Tran Trung; Kong, Hyung Yun

    In this paper, we consider the error performance of the regular triangular quadrature amplitude modulation (TQAM). In particular, using an accurate exponential bound of the complementary error function, we derive a simple approximation for the average symbol error rate (SER) of TQAM over Additive White Gaussian Noise (AWGN) and fading channels. The accuracy of our approach is verified by some simulation results.
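
    The record does not reproduce the specific exponential bound used; the sketch below uses a widely cited two-term exponential approximation of the Gaussian Q-function (Chiani-style coefficients, assumed here rather than taken from the paper) and compares it with the exact value computed from the complementary error function.

      import numpy as np
      from scipy.special import erfc

      def q_exact(x):
          """Gaussian Q-function via the complementary error function."""
          return 0.5 * erfc(x / np.sqrt(2.0))

      def q_exp_approx(x):
          """Two-term exponential approximation of the Q-function (Chiani-style
          coefficients; assumed here, not taken from the TQAM paper)."""
          return np.exp(-x**2 / 2.0) / 12.0 + np.exp(-2.0 * x**2 / 3.0) / 4.0

      for x in (1.0, 2.0, 3.0, 4.0):
          print(f"x={x}: exact Q={q_exact(x):.3e}, exponential approximation={q_exp_approx(x):.3e}")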

  8. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  9. Controlling Type I Error Rate in Evaluating Differential Item Functioning for Four DIF Methods: Use of Three Procedures for Adjustment of Multiple Item Testing

    ERIC Educational Resources Information Center

    Kim, Jihye

    2010-01-01

    In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present, and a high possibility of making such an error can weaken the validity of the assessment.…

  10. High accuracy optical rate sensor

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, J.

    1990-01-01

    Optical rate sensors, in particular CCD arrays, will be used on Space Station Freedom to track stars in order to provide an inertial attitude reference. An algorithm to provide attitude rate information by directly manipulating the sensor pixel intensity output is presented. The star image produced by a sensor in the laboratory is modeled. Simulated, moving star images are generated, and the algorithm is applied to these data for a star moving at a constant rate. The algorithm produces an accurate derived rate from the above data. A step rate change requires two frames for the output of the algorithm to accurately reflect the new rate. When zero mean Gaussian noise with a standard deviation of 5 is added to the simulated data of a star image moving at a constant rate, the algorithm derives the rate with an error of 1.9 percent at a rate of 1.28 pixels per frame.

  11. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  12. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  13. Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles.

    PubMed

    Traverse, Charles C; Ochman, Howard

    2016-03-22

    Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10(-5) per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10(-5) per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10(-5) per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella. PMID:26884158

  14. Design and verification of a bit error rate tester in Altera FPGA for optical link developments

    NASA Astrophysics Data System (ADS)

    Cao, T.; Chang, J.; Gong, D.; Liu, C.; Liu, T.; Xiang, A.; Ye, J.

    2010-12-01

    This paper presents a custom bit error rate (BER) tester implementation in an Altera Stratix II GX signal integrity development kit. This BER tester deploys a parallel-to-serial pseudo-random bit sequence (PRBS) generator, a bit and link-status error detector, and an error-logging FIFO. The auto-correlation pattern enables receiver synchronization without specifying a protocol at the physical layer. The error-logging FIFO records both bit error data and link operation events. The tester's BER and data acquisition functions are utilized in a proton test of a 5 Gbps serializer. Experimental and data analysis results are discussed.
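
    As a software analogue of the generator/detector pair described above, here is a PRBS-7 linear-feedback shift register (polynomial x^7 + x^6 + 1) and a trivial bit-comparison error counter; the polynomial, sequence length, and error-logging details of the actual Altera design are not stated in the record, so these are illustrative choices.

      from itertools import islice

      def prbs7(seed=0x7F):
          """Yield the PRBS-7 bit sequence (polynomial x^7 + x^6 + 1, period 127)."""
          state = seed & 0x7F
          while True:
              new_bit = ((state >> 6) ^ (state >> 5)) & 1
              state = ((state << 1) | new_bit) & 0x7F
              yield new_bit

      def count_bit_errors(reference_bits, received_bits):
          """Trivial error detector: count positions where the received bit differs."""
          return sum(t != r for t, r in zip(reference_bits, received_bits))

      ref = list(islice(prbs7(), 127))
      rx = ref.copy()
      rx[10] ^= 1                                       # inject a single bit error
      print("bit errors:", count_bit_errors(ref, rx))   # -> 1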

  15. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    PubMed

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, which makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data. PMID:25540125
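
    A minimal sketch of the kind of algorithmic support described in step (2), flagging inter-beat (RR) intervals that deviate strongly from a local median so a human can inspect them; the window size and threshold are illustrative assumptions, not the criteria used in the paper.

      import numpy as np

      def flag_rr_outliers(rr_ms, window=5, rel_threshold=0.30):
          """Flag RR intervals that deviate from the local median by more than
          rel_threshold; the flags support a visual decision rather than automatic
          deletion. Window size and threshold are illustrative choices."""
          rr = np.asarray(rr_ms, dtype=float)
          flags = np.zeros(rr.size, dtype=bool)
          for i in range(rr.size):
              lo, hi = max(0, i - window), min(rr.size, i + window + 1)
              local = np.delete(rr[lo:hi], i - lo)     # neighbourhood without point i
              flags[i] = abs(rr[i] - np.median(local)) > rel_threshold * np.median(local)
          return flags

      rr = [520, 510, 530, 250, 525, 515, 980, 520]    # two implausible beats
      print(flag_rr_outliers(rr))                       # flags positions 3 and 6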

  16. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments. PMID:23270978

  17. Optimal GSTDN/TDRSS bit error rate evaluation using limited sample sizes

    NASA Technical Reports Server (NTRS)

    Coffey, R. E.; Lawrence, G. M.; Stuart, J. R.

    1982-01-01

    Statistical studies of telemetry errors were made on data from the Solar Mesosphere Explorer (SME). Examination of frame sync words, as received at the ground station, indicated a wide spread of Bit Error Rates (BER) among stations. A study of the distribution of errors per station pass, however, showed that there was a tendency for the station software to add an even number of spurious errors to the count. A count of wild points in science data, rejecting drop-outs and other system errors, yielded an average random BER of 3.1 x 10^-6 with 99% confidence limits of 2.6 x 10^-6 and 3.8 x 10^-6. The system errors are typically 5 to 100 times more frequent than the truly random errors.
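
    Confidence limits like those quoted above can be obtained from the observed error count with an exact (Clopper-Pearson) binomial interval, sketched below; this is a standard construction and not necessarily the procedure the authors used, and the example numbers are arbitrary.

      from scipy import stats

      def ber_confidence_limits(errors, bits, confidence=0.99):
          """Exact (Clopper-Pearson) confidence limits on a bit error rate
          estimated from `errors` errors counted in `bits` transmitted bits."""
          alpha = 1.0 - confidence
          lower = 0.0 if errors == 0 else stats.beta.ppf(alpha / 2, errors, bits - errors + 1)
          upper = stats.beta.ppf(1 - alpha / 2, errors + 1, bits - errors)
          return lower, upper

      # e.g. 20 errors counted in 5 million bits (point estimate 4.0e-6):
      lo, hi = ber_confidence_limits(20, 5_000_000)
      print(f"99% confidence limits: [{lo:.2e}, {hi:.2e}]")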

  18. Design and Verification of an FPGA-based Bit Error Rate Tester

    NASA Astrophysics Data System (ADS)

    Xiang, Annie; Gong, Datao; Hou, Suen; Liu, Chonghan; Liang, Futian; Liu, Tiankuan; Su, Da-Shung; Teng, Ping-Kun; Ye, Jingbo

    Bit error rate (BER) is the principle measure of performance of a data transmission link. With the integration of high-speed transceivers inside a field programmable gate array (FPGA), the BER testing can now be handled by transceiver-enabled FPGA hardware. This provides a cheaper alternative to dedicated table-top equipment and offers the flexibility of test customization and data analysis. This paper presents a BER tester implementation based on the Altera Stratix II GX and IV GT development boards. The architecture of the tester is described. Lab test results and field test data analysis are discussed. The Stratix II GX tester operates at up to 5 Gbps and the Stratix IV GT tester operates at up to 10 Gbps, both in 4 duplex channels. The tester deploys a pseudo random bit sequence (PRBS) generator and detector, a transceiver controller, and an error logger. It also includes a computer interface for data acquisition and user configuration. The tester's functionality was validated and its performance characterized in a point-to-point serial optical link setup. BER vs. optical receiver sensitivity was measured to emulate stressed link conditions. The Stratix II GX tester was also used in a proton test on a custom designed serializer chip to record and analyse radiation-induced errors.

  19. Bit-Error-Rate Performance of a Gigabit Ethernet O-CDMA Technology Demonstrator (TD)

    SciTech Connect

    Hernandez, V J; Mendez, A J; Bennett, C V; Lennon, W J

    2004-07-09

    An O-CDMA TD based on 2-D (wavelength/time) codes is described, with bit-error-rate (BER) and eye-diagram measurements given for eight users. Simulations indicate that the TD can support 32 asynchronous users.

  20. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  1. Exact error rate analysis of free-space optical communications with spatial diversity over Gamma-Gamma atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Li, Kangning; Tan, Liying; Yu, Siyuan; Cao, Yubin

    2016-02-01

    The error rate performances and outage probabilities of free-space optical (FSO) communications with spatial diversity are studied for Gamma-Gamma turbulent environments. Equal gain combining (EGC) and selection combining (SC) diversity are considered as practical schemes to mitigate turbulence. The exact bit-error rate (BER) expression and outage probability are derived for a direct-detection EGC multiple aperture receiver system. BER performances and outage probabilities are analyzed and compared for different numbers of sub-apertures, each having aperture area A, with EGC and SC techniques. BER performances and outage probabilities of a single monolithic aperture and a multiple aperture receiver system with the same total aperture area are compared under thermal-noise-limited and background-noise-limited conditions. It is shown that a multiple aperture receiver system can greatly improve the system communication performances. These analytical tools are useful in providing highly accurate error rate estimation for FSO communication systems.
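
    A rough Monte Carlo counterpart to the analysis described above: average BER of an intensity-modulated link over Gamma-Gamma turbulence with equal-gain combining, using the generic conditional BER model Q(sqrt(SNR)*h). The turbulence parameters, the on-off-keying BER model, and the unit-mean normalisation are assumptions for illustration, not the paper's exact direct-detection expressions.

      import numpy as np
      from scipy.special import erfc

      def gg_egc_ber(alpha, beta, n_ap, snr_db, n_mc=500_000, rng=None):
          """Monte Carlo average BER over Gamma-Gamma turbulence with equal-gain
          combining of n_ap apertures, using the generic conditional BER Q(sqrt(SNR)*h);
          this is not the exact expression derived in the paper."""
          rng = np.random.default_rng(rng)
          snr = 10 ** (snr_db / 10)
          # Gamma-Gamma irradiance = product of two independent unit-mean Gamma variates.
          x = rng.gamma(alpha, 1.0 / alpha, size=(n_mc, n_ap))
          y = rng.gamma(beta, 1.0 / beta, size=(n_mc, n_ap))
          h = (x * y).mean(axis=1)     # combined irradiance, normalised to unit mean
          q = 0.5 * erfc(np.sqrt(snr) * h / np.sqrt(2.0))
          return float(q.mean())

      for n_ap in (1, 2, 4):
          print(f"{n_ap} aperture(s): BER =", gg_egc_ber(alpha=4.0, beta=1.9, n_ap=n_ap, snr_db=10, rng=0))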

  2. Error estimation for delta VLBI angle and angle rate measurements over baselines between a ground station and a geosynchronous orbiter

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1982-01-01

    Baselines between a ground station and a geosynchronous orbiter provide high resolution Delta VLBI data which is beyond the capability of ground-based interferometry. The effects of possible error sources on such Delta VLBI data for the determination of spacecraft angle and angle rate are investigated. For comparison, the effects on spacecraft-only VLBI are also studied.

  3. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors

    PubMed Central

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Introduction: Preanalytical errors, arising along the process from the test request to the admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples, with regard to their rates in certain test groups in our laboratory. Materials and methods: This preliminary study was designed on the rejected samples in a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed, clotted specimen, insufficient volume of specimen and total request errors. Results: A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-volume-of-specimen error rate of 1.38%. Rejection rates for hemolysis, clotted specimen and insufficient volume of sample errors were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, were 32% of the total for inpatients. Conclusions: The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood drawing errors, especially those due to insufficient volume of specimens in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples. PMID:25351356

  4. Bit error rate investigation of spin-transfer-switched magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Wang, Zihui; Zhou, Yuchen; Zhang, Jing; Huai, Yiming

    2012-10-01

    A method is developed to enable fast bit error rate (BER) characterization of spin-transfer-torque magnetic random access memory magnetic tunnel junction (MTJ) cells without integrating them with a complementary metal-oxide-semiconductor circuit. By utilizing the reflected signal from the devices under test, the measurement setup allows a fast measurement of bit error rates at >10^6 writing events per second. It is further shown that this method provides a time-domain capability to examine the MTJ resistance states during a switching event, which can assist write error analysis in great detail. The BER of a set of spin-transfer-torque MTJ cells has been evaluated using this method, and bit-error-free operation (down to 10^-8) for optimized in-plane MTJ cells has been demonstrated.

  5. Compensatory and Noncompensatory Information Integration and Halo Error in Performance Rating Judgments.

    ERIC Educational Resources Information Center

    Kishor, Nand

    1992-01-01

    The relationship between compensatory and noncompensatory information integration and the intensity of the halo effect in performance rating was studied. Seventy University of British Columbia (Canada) students rated 27 teacher profiles. The hypothesis that the way performance information is mentally integrated affects the intensity of halo error was supported.…

  6. A stochastic node-failure network with individual tolerable error rate at multiple sinks

    NASA Astrophysics Data System (ADS)

    Huang, Cheng-Fu; Lin, Yi-Kuei

    2014-05-01

    Many enterprises consider several criteria during data transmission, such as availability, delay, loss, and out-of-order packets, from the service level agreement (SLA) point of view. Hence internet service providers and customers are gradually focusing on the tolerable error rate in the transmission process. The internet service provider should satisfy the specific demand and keep to a certain transmission error rate under its SLA with each customer. This paper mainly evaluates the system reliability, i.e., the probability that the demand can be fulfilled under the tolerable error rate at all sinks, by addressing a stochastic node-failure network (SNFN), in which each component (edge or node) has several capacities and a transmission error rate. An efficient algorithm is first proposed to generate all lower boundary points, the minimal capacity vectors satisfying the demand and tolerable error rate for all sinks. Then the system reliability can be computed in terms of such points by applying a recursive sum of disjoint products. A benchmark network and a practical network in the United States are demonstrated to illustrate the utility of the proposed algorithm. The computational complexity of the proposed algorithm is also analyzed.

  7. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Astrophysics Data System (ADS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-09-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  8. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-01-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  9. High Rate Digital Demodulator ASIC

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder; Sheikh, Salman; Koubek, Steve; Hoy, Scott; Gray, Andrew

    1998-01-01

    The architecture of a High Rate (600 Mega-bits per second) Digital Demodulator (HRDD) ASIC capable of demodulating BPSK and QPSK modulated data is presented in this paper. The advantages of all-digital processing include increased flexibility and reliability with reduced reproduction costs. Conventional serial digital processing would require high processing rates, necessitating a hardware implementation in a technology other than CMOS, such as Gallium Arsenide (GaAs), which has high cost and power requirements. It is more desirable to use CMOS technology with its lower power requirements and higher gate density. However, digital demodulation of high data rates in CMOS requires parallel algorithms to process the sampled data at a rate lower than the data rate. The parallel processing algorithms described here were developed jointly by NASA's Goddard Space Flight Center (GSFC) and the Jet Propulsion Laboratory (JPL). The resulting all-digital receiver has the capability to demodulate BPSK, QPSK, OQPSK, and DQPSK at data rates in excess of 300 Mega-bits per second (Mbps) per channel. This paper will provide an overview of the parallel architecture and features of the HRDD ASIC. In addition, this paper will provide an overview of the implementation of the hardware architectures used to create flexibility over conventional high rate analog or hybrid receivers. This flexibility includes a wide range of data rates, modulation schemes, and operating environments. In conclusion it will be shown how this high rate digital demodulator can be used with an off-the-shelf A/D and a flexible analog front end, both of which are numerically computer controlled, to produce a very flexible, low-cost, high-rate digital receiver.

  10. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power, defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation-time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26914402
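
    The r-power idea can be illustrated with a small Monte Carlo for independent, normally distributed endpoints and the single-step rule "reject H_i when p_i <= q*alpha/m" (a Lehmann-Romano-type bound on the q-generalized FWER); this is a simplified stand-in for the paper's closed-form formulas and its rPowerSampleSize implementation, and it ignores correlation between endpoints.

      import numpy as np
      from scipy import stats

      def r_power_mc(effects, n_per_arm, r=2, q=1, alpha=0.05, n_mc=20_000, rng=None):
          """Monte Carlo r-power of the single-step rule: reject H_i when
          p_i <= q*alpha/m. Endpoints are treated as independent two-sample z-tests
          with unit variance; `effects` are the standardized mean differences."""
          rng = np.random.default_rng(rng)
          effects = np.asarray(effects, dtype=float)
          m = effects.size
          se = np.sqrt(2.0 / n_per_arm)                         # SD of each mean difference
          z = effects / se + rng.standard_normal((n_mc, m))     # endpoint-wise z-statistics
          pvals = stats.norm.sf(z)                              # one-sided p-values
          hits = (pvals[:, effects > 0] <= q * alpha / m).sum(axis=1)
          return float(np.mean(hits >= r))                      # P(at least r true rejections)

      # Trial with four endpoints; success requires at least two true rejections.
      print("r-power:", r_power_mc(effects=[0.4, 0.4, 0.3, 0.0], n_per_arm=100, r=2, rng=0))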

  11. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets.

    PubMed

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and had a trend of negative correlation with post-error miss rate increase in the subsequent Go/No-go task. These results suggested that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  13. Methylphenidate improves diminished error and feedback sensitivity in ADHD: An evoked heart rate analysis.

    PubMed

    Groen, Yvonne; Mulder, Lambertus J M; Wijers, Albertus A; Minderaa, Ruud B; Althaus, Monika

    2009-09-01

    Attention Deficit Hyperactivity Disorder (ADHD) is a developmental disorder that has previously been related to a decreased sensitivity to errors and feedback. Supplementary to the traditional performance measures, this study uses autonomic measures to study this decreased sensitivity in ADHD and the modulating effects of medication. Children with ADHD, on and off Methylphenidate (Mph), and typically developing (TD) children performed a selective attention task with three feedback conditions: reward, punishment and no feedback. Evoked Heart Rate (EHR) responses were computed for correct and error trials. All groups performed more efficiently with performance feedback than without. EHR analyses, however, showed that enhanced EHR decelerations on error trials seen in TD children, were absent in the medication-free ADHD group for all feedback conditions. The Mph-treated ADHD group showed 'normalised' EHR decelerations to errors and error feedback, depending on the feedback condition. This study provides further evidence for a decreased physiological responsiveness to errors and error feedback in children with ADHD and for a modulating effect of Mph. PMID:19464338

  14. Estimation of the minimum mRNA splicing error rate in vertebrates.

    PubMed

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons. PMID:26811995

  15. Parallel Transmission Pulse Design with Explicit Control for the Specific Absorption Rate in the Presence of Radiofrequency Errors

    PubMed Central

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L.; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L.; Guerin, Bastien

    2016-01-01

    Purpose: A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. Methods: The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors (“worst-case SAR”) is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Results: Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled “worst-case SAR” in the presence of errors of this magnitude at minor cost of the excitation profile quality. Conclusion: Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. PMID:26147916
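
    The iterative tightening loop described in the Methods can be sketched as follows; the design and worst-case-SAR routines below are toy stand-ins (simple scalar proxies), not the actual pulse-design or SAR models, and the 8% amplitude error is taken from the Results above.

    ```python
    SAFETY_LIMIT = 10.0  # illustrative local-SAR limit, W/kg

    def design_ptx_pulse(sar_constraint):
        """Toy stand-in for the constrained pTx pulse design step: it simply
        reports a nominal SAR equal to the constraint it was given."""
        return {"constraint": sar_constraint}, sar_constraint

    def worst_case_sar(nominal_sar, amp_err=0.08):
        """Toy stand-in for the worst-case SAR evaluation under RF-chain errors:
        an 8% amplitude error scales deposited power by roughly (1 + 0.08)**2."""
        return nominal_sar * (1.0 + amp_err) ** 2

    constraint = SAFETY_LIMIT
    for _ in range(20):                          # iterate until the worst case is safe
        pulse, nominal = design_ptx_pulse(constraint)
        wc = worst_case_sar(nominal)
        if wc <= SAFETY_LIMIT:
            break
        constraint *= SAFETY_LIMIT / wc          # tighten the design-time constraint
    print(f"design constraint {constraint:.2f} W/kg, worst-case SAR {wc:.2f} W/kg")
    ```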

  16. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

    The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages, large asynchronous messages, and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway. Identification enables resequencing across differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes 1/3 less time than equivalent end-to-end error correction under similar conditions.

  17. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed

  18. The Impact of Statistically Adjusting for Rater Effects on Conditional Standard Errors of Performance Ratings

    ERIC Educational Resources Information Center

    Raymond, Mark R.; Harik, Polina; Clauser, Brian E.

    2011-01-01

    Prior research indicates that the overall reliability of performance ratings can be improved by using ordinary least squares (OLS) regression to adjust for rater effects. The present investigation extends previous work by evaluating the impact of OLS adjustment on standard errors of measurement ("SEM") at specific score levels. In addition, a…

  19. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    2001-01-01

    Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…

  20. 20 CFR 602.43 - No incentives or sanctions based on specific error rates.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false No incentives or sanctions based on specific error rates. 602.43 Section 602.43 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR QUALITY CONTROL IN THE FEDERAL-STATE UNEMPLOYMENT INSURANCE SYSTEM Quality Control...

  1. 20 CFR 602.43 - No incentives or sanctions based on specific error rates.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false No incentives or sanctions based on specific error rates. 602.43 Section 602.43 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR QUALITY CONTROL IN THE FEDERAL-STATE UNEMPLOYMENT INSURANCE SYSTEM Quality Control...

  2. Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.

    2010-01-01

    We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.

  3. Error and Uncertainty in High-resolution Quantitative Sediment Budgets

    NASA Astrophysics Data System (ADS)

    Grams, P. E.; Schmidt, J. C.; Topping, D. J.; Yackulic, C. B.

    2012-12-01

    Sediment budgets are a fundamental tool in fluvial geomorphology. The power of the sediment budget is in the explicit coupling of sediment flux and sediment storage through the Exner equation for bed sediment conservation. Thus, sediment budgets may be calculated either from the divergence of the sediment flux or from measurements of morphologic change. Until recently, sediment budgets were typically calculated using just one of these methods, and often with sparse data. Recent advances in measurement methods for sediment transport have made it possible to measure sediment flux at much higher temporal resolution, while advanced methods for high-resolution topographic and bathymetric mapping have made it possible to measure morphologic change with much greater spatial resolution. Thus, it is now possible to measure all terms of a sediment budget and more thoroughly evaluate uncertainties in measurement methods and sampling strategies. However, measurements of sediment flux and morphologic change involve different types of uncertainty that are encountered over different time and space scales. Three major factors contribute uncertainty to sediment budgets computed from measurements of sediment flux. These are measurement error, the accumulation of error over time, and physical processes that cause systematic bias. In the absence of bias, uncertainty is proportional to measurement error and the ratio of fluxes at the two measurement stations. For example, if the ratio between measured sediment fluxes is more than 0.8, measurement uncertainty must be less than 10 percent in order to calculate a meaningful sediment budget. Systematic bias in measurements of flux can introduce much larger uncertainty. The uncertainties in sediment budgets computed from morphologic measurements fall into three similar categories. These are measurement error, the spatial and temporal propagation of error, and physical processes that cause bias when measurements are interpolated or
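
    For reference, the Exner equation for bed sediment conservation that couples flux and storage in such budgets can be written, in its common one-dimensional form (our restatement; not a formula quoted from the abstract), as

    $$(1-\lambda_p)\,\frac{\partial \eta}{\partial t} \;=\; -\,\frac{\partial q_s}{\partial x},$$

    where η is the bed elevation, q_s the volumetric sediment flux per unit width, and λ_p the bed porosity; storage change can thus be computed either from the flux divergence (right-hand side) or from measured morphologic change (left-hand side).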

  4. High Frequency of Imprinted Methylation Errors in Human Preimplantation Embryos

    PubMed Central

    White, Carlee R.; Denomme, Michelle M.; Tekpetey, Francis R.; Feyles, Valter; Power, Stephen G. A.; Mann, Mellissa R. W.

    2015-01-01

    Assisted reproductive technologies (ARTs) represent the best chance for infertile couples to conceive, although increased risks for morbidities exist, including imprinting disorders. This increased risk could arise from ARTs disrupting genomic imprints during gametogenesis or preimplantation. The few studies examining ART effects on genomic imprinting primarily assessed poor quality human embryos. Here, we examined day 3 and blastocyst stage, good to high quality, donated human embryos for imprinted SNRPN, KCNQ1OT1 and H19 methylation. Seventy-six percent of day 3 embryos and 50% of blastocysts exhibited perturbed imprinted methylation, demonstrating that extended culture did not pose greater risk for imprinting errors than short culture. Comparison of embryos with normal and abnormal methylation did not reveal any confounding factors. Notably, two embryos from male factor infertility patients using donor sperm harboured aberrant methylation, suggesting errors in these embryos cannot be explained by infertility alone. Overall, these results indicate that ART human preimplantation embryos possess a high frequency of imprinted methylation errors. PMID:26626153

  5. The Impact of Sex of the Speaker, Sex of the Rater and Profanity Type of Language Trait Errors in Speech Evaluation: A Test of the Rating Error Paradigm.

    ERIC Educational Resources Information Center

    Bock, Douglas G.; And Others

    1984-01-01

    This study (1) demonstrates the negative impact of profanity in a public speech and (2) sheds light on the conceptualization of the term "rating error." Implications for classroom teaching are discussed. (PD)

  6. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10-3 (errors/particle/cm2), while the MTTF is approximately 110.7 h. PMID:27583533

  7. High Data Rate Instrument Study

    NASA Technical Reports Server (NTRS)

    Schober, Wayne; Lansing, Faiza; Wilson, Keith; Webb, Evan

    1999-01-01

    The High Data Rate Instrument Study was a joint effort between the Jet Propulsion Laboratory (JPL) and the Goddard Space Flight Center (GSFC). The objectives were to assess the characteristics of future high data rate Earth observing science instruments and then to assess the feasibility of developing data processing systems and communications systems required to meet those data rates. Instruments and technology were assessed for technology readiness dates of 2000, 2003, and 2006. The highest data rate instruments are hyperspectral and synthetic aperture radar instruments which are capable of generating 3.2 Gigabits per second (Gbps) and 1.3 Gbps, respectively, with a technology readiness date of 2003. These instruments would require storage of 16.2 Terabits (Tb) of information (RF communications case of two orbits of data) or 40.5 Tb of information (optical communications case of five orbits of data) with a technology readiness date of 2003. Onboard storage capability in 2003 is estimated at 4 Tb; therefore, not all of the data created can be stored without processing or compression. Of the 4 Tb of stored data, RF communications can only send about one third of the data to the ground, while optical communications is estimated at 6.4 Tb across all three technology readiness dates of 2000, 2003, and 2006, which were used in the study. The study includes analysis of the onboard processing and communications technologies at these three dates and potential systems to meet the high data rate requirements. In the 2003 case, 7.8% of the data can be stored and downlinked by RF communications while 10% of the data can be stored and downlinked with optical communications. The study conclusion is that only 1 to 10% of the data generated by high data rate instruments will be sent to the ground from now through 2006 unless revolutionary changes in spacecraft design and operations such as intelligent data extraction are developed.

  8. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k).sup.th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
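
    A minimal sketch of the decimation step described above: build the GF(2) companion (state-transition) matrix of the feedback polynomial and raise it to the (n*k)-th power so each parallel generator can jump n*k sequence positions per clock. The register length, tap positions, k, and n below are arbitrary illustrative choices, not the patent's parameters.

    ```python
    import numpy as np

    def companion_matrix(taps, m):
        """GF(2) state-transition matrix of an m-bit LFSR whose new first bit is
        the XOR of the state positions listed in `taps`; the rest simply shift."""
        C = np.zeros((m, m), dtype=np.uint8)
        C[0, list(taps)] = 1                       # feedback row
        C[np.arange(1, m), np.arange(m - 1)] = 1   # shift the remaining bits
        return C

    def gf2_matpow(M, e):
        """M**e over GF(2) by square-and-multiply."""
        R = np.eye(M.shape[0], dtype=np.uint8)
        while e:
            if e & 1:
                R = (R @ M) % 2
            M = (M @ M) % 2
            e >>= 1
        return R

    m, taps, k, n = 7, (0, 6), 4, 8        # illustrative register, taps, #LRSGs, bits per clock
    C = companion_matrix(taps, m)
    D = gf2_matpow(C, n * k)               # decimation matrix: one jump of n*k steps
    state = np.ones(m, dtype=np.uint8)     # any nonzero seed
    state = (D @ state) % 2                # each LRSG advances n*k positions at once
    print(D, state, sep="\n")
    ```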

  9. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    SciTech Connect

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.

  10. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  11. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  12. Automatic generation control of a hydrothermal system with new area control error considering generation rate constraint

    SciTech Connect

    Das, D.; Nanda, J.; Kothari, M.L.; Kothari, D.P. )

    1990-01-01

    The paper presents an analysis of the automatic generation control based on a new area control error strategy for an interconnected hydrothermal system in the discrete-mode considering generation rate constraints (GRCs). The investigations reveal that the system dynamic performances following a step load perturbation in either of the areas with constrained optimum gain settings and unconstrained optimum gain settings are not much different, hence optimum controller settings can be achieved without considering GRCs in the mathematical model.

  13. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) requirements of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.
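
    A minimal sketch of an integrate-and-dump data detector matched to Manchester bits, in the spirit of the detectors described above (not the brassboard implementation; the samples-per-bit value and the +1/-1 half-bit mapping are assumptions): each bit interval is correlated with a +1/-1 half-bit template and the sign of the dump gives the decision.

    ```python
    import numpy as np

    def manchester_iad_detect(samples, sps=8):
        """Integrate-and-dump detection of Manchester-encoded bits.
        `sps` samples per bit; a '1' is assumed sent as +1 then -1 half-bits."""
        template = np.concatenate([np.ones(sps // 2), -np.ones(sps // 2)])
        n_bits = len(samples) // sps
        dumps = [np.dot(samples[i * sps:(i + 1) * sps], template) for i in range(n_bits)]
        return (np.array(dumps) > 0).astype(int)

    # Illustrative usage: encode, add noise, detect
    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, 32)
    sps = 8
    half = np.concatenate([np.ones(sps // 2), -np.ones(sps // 2)])
    wave = np.concatenate([(2 * b - 1) * half for b in bits])
    rx = wave + 0.5 * rng.standard_normal(wave.size)
    print((manchester_iad_detect(rx, sps) == bits).mean())   # fraction detected correctly
    ```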

  14. Safety Aspects of Pulsed Dose Rate Brachytherapy: Analysis of Errors in 1,300 Treatment Sessions

    SciTech Connect

    Koedooder, Kees; Wieringen, Niek van; Grient, Hans N.B. van der; Herten, Yvonne R.J. van; Pieters, Bradley R.; Blank, Leo

    2008-03-01

    Purpose: To determine the safety of pulsed-dose-rate (PDR) brachytherapy by analyzing errors and technical failures during treatment. Methods and Materials: More than 1,300 patients underwent treatment with PDR brachytherapy, using five PDR remote afterloaders. Most patients were treated with consecutive pulse schemes, also outside regular office hours. Tumors were located in the breast, esophagus, prostate, bladder, gynecology, anus/rectum, orbit, head/neck, with a miscellaneous group of small numbers, such as the lip, nose, and bile duct. Errors and technical failures were analyzed for 1,300 treatment sessions, for which nearly 20,000 pulses were delivered. For each tumor localization, the number and type of occurring errors were determined, as were which localizations were more error prone than others. Results: By routinely using the built-in dummy check source, only 0.2% of all pulses showed an error during the phase of the pulse when the active source was outside the afterloader. Localizations treated using flexible catheters had greater error frequencies than those treated with straight needles or rigid applicators. Disturbed pulse frequencies were in the range of 0.6% for the anus/rectum on a classic version 1 afterloader to 14.9% for orbital tumors using a version 2 afterloader. Exceeding the planned overall treatment time by >10% was observed in only 1% of all treatments. Patients received their dose as originally planned in 98% of all treatments. Conclusions: According to the experience in our institute with 1,300 PDR treatments, we found that PDR is a safe brachytherapy treatment modality, both during and outside of office hours.

  15. Error rates for nanopore discrimination among cytosine, methylcytosine, and hydroxymethylcytosine along individual DNA strands.

    PubMed

    Schreiber, Jacob; Wescoe, Zachary L; Abu-Shumays, Robin; Vivian, John T; Baatar, Baldandorj; Karplus, Kevin; Akeson, Mark

    2013-11-19

    Cytosine, 5-methylcytosine, and 5-hydroxymethylcytosine were identified during translocation of single DNA template strands through a modified Mycobacterium smegmatis porin A (M2MspA) nanopore under control of phi29 DNA polymerase. This identification was based on three consecutive ionic current states that correspond to passage of modified or unmodified CG dinucleotides and their immediate neighbors through the nanopore limiting aperture. To establish quality scores for these calls, we examined ~3,300 translocation events for 48 distinct DNA constructs. Each experiment analyzed a mixture of cytosine-, 5-methylcytosine-, and 5-hydroxymethylcytosine-bearing DNA strands that contained a marker that independently established the correct cytosine methylation status at the target CG of each molecule tested. To calculate error rates for these calls, we established decision boundaries using a variety of machine-learning methods. These error rates depended upon the identity of the bases immediately 5' and 3' of the targeted CG dinucleotide, and ranged from 1.7% to 12.2% for a single-pass read. We estimate that Q40 values (0.01% error rates) for methylation status calls could be achieved by reading single molecules 5-19 times depending upon sequence context. PMID:24167260
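
    As a rough numerical check on the multi-read estimate above (5-19 reads for Q40), the sketch below assumes the repeated reads are combined by a simple majority vote; that combination rule is our assumption, not stated in the abstract.

    ```python
    from math import comb

    def majority_vote_error(p, n):
        """Probability that a majority of n independent reads (n odd) are wrong,
        given a per-read error probability p."""
        k_min = n // 2 + 1
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

    for p in (0.017, 0.122):                        # best and worst single-pass error rates
        n = 1
        while majority_vote_error(p, n) > 1e-4:     # Q40 corresponds to a 0.01% error rate
            n += 2                                  # keep n odd to avoid ties
        print(f"p = {p:.3f}: about {n} reads reach Q40 under a majority-vote assumption")
    ```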

  16. Evaluating the Type II error rate in a sediment toxicity classification using the Reference Condition Approach.

    PubMed

    Rodriguez, Pilar; Maestre, Zuriñe; Martinez-Madrid, Maite; Reynoldson, Trefor B

    2011-01-17

    Sediments from 71 river sites in Northern Spain were tested using the oligochaete Tubifex tubifex (Annelida, Clitellata) chronic bioassay. 47 sediments were identified as reference primarily from macroinvertebrate community characteristics. The data for the toxicological endpoints were examined using non-metric MDS. Probability ellipses were constructed around the reference sites in multidimensional space to establish a classification for assessing test-sediments into one of three categories (Non Toxic, Potentially Toxic, and Toxic). The construction of such probability ellipses sets the Type I error rate. However, we also wished to include in the decision process for identifying pass-fail boundaries the degree of disturbance required to be detected, and the likelihood of being wrong in detecting that disturbance (i.e. the Type II error). Setting the ellipse size to use based on Type I error does not include any consideration of the probability of Type II error. To do this, the toxicological response observed in the reference sediments was manipulated by simulating different degrees of disturbance (simpacted sediments), and measuring the Type II error rate for each set of the simpacted sediments. From this procedure, the frequency at each probability ellipse of identifying impairment using sediments with known level of disturbance is quantified. Thirteen levels of disturbance and seven probability ellipses were tested. Based on the results the decision boundary for Non Toxic and Potentially Toxic was set at the 80% probability ellipse, and the boundary for Potentially Toxic and Toxic at the 95% probability ellipse. Using this approach, 9 test sediments were classified as Toxic, 2 as Potentially Toxic, and 13 as Non Toxic. PMID:20980065

  17. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  18. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  19. Influence of wave-front aberrations on bit error rate in inter-satellite laser communications

    NASA Astrophysics Data System (ADS)

    Yang, Yuqiang; Han, Qiqi; Tan, Liying; Ma, Jing; Yu, Siyuan; Yan, Zhibin; Yu, Jianjie; Zhao, Sheng

    2011-06-01

    We derive the bit error rate (BER) of inter-satellite laser communication (lasercom) links with on-off-keying systems in the presence of both wave-front aberrations and pointing error, but without considering the noise of the detector. Wave-front aberrations induced by the receiver terminal have no influence on the BER, while wave-front aberrations induced by the transmitter terminal will increase the BER. The BER depends on the area S that is cut out of the intensity distribution in the receiver plane by the threshold intensity of the detector (such as an APD), and changes with the root mean square (RMS) of the wave-front aberrations. Numerical results show that the BER rises with increasing RMS value. The influences of astigmatism, coma, curvature and spherical aberration on the BER are compared. This work can benefit the design of lasercom systems.

  20. Preliminary error budget for an optical ranging system: Range, range rate, and differenced range observables

    NASA Technical Reports Server (NTRS)

    Folkner, W. M.; Finger, M. H.

    1990-01-01

    Future missions to the outer solar system or human exploration of Mars may use telemetry systems based on optical rather than radio transmitters. Pulsed laser transmission can be used to deliver telemetry rates of about 100 kbits/sec with an efficiency of several bits for each detected photon. Navigational observables that can be derived from timing pulsed laser signals are discussed. Error budgets are presented based on nominal ground stations and spacecraft-transceiver designs. Assuming a pulsed optical uplink signal, two-way range accuracy may approach the few centimeter level imposed by the troposphere uncertainty. Angular information can be achieved from differenced one-way range using two ground stations with the accuracy limited by the length of the available baseline and by clock synchronization and troposphere errors. A method of synchronizing the ground station clocks using optical ranging measurements is presented. This could allow differenced range accuracy to reach the few centimeter troposphere limit.
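
    A first-order geometric sketch (our restatement, not a formula from the abstract) of why the angular accuracy from differenced one-way range is limited by the baseline length: for a distant spacecraft at angle θ from a baseline of length B,

    $$\Delta\rho \;=\; \rho_1 - \rho_2 \;\approx\; B\cos\theta, \qquad \sigma_\theta \;\approx\; \frac{\sigma_{\Delta\rho}}{B\,|\sin\theta|},$$

    so a few-centimeter differenced-range error over a ~10,000 km baseline corresponds to roughly a few nanoradians of angular error, with clock synchronization and troposphere errors entering through the differenced-range uncertainty.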

  1. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
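
    A minimal sketch of the cover-sequence idea selected above: the data are XORed with a pseudo-random sequence so that long runs of identical bits become transition-rich, and the receiver repeats the same XOR with a synchronized copy of the sequence to recover the data. The register length and tap positions below are illustrative only, not the design proposed for the shuttle link.

    ```python
    def pn_sequence(length, state=0b1011011, taps=(6, 5)):
        """Pseudo-random cover sequence from a small 7-bit LFSR; the seed and
        tap positions are illustrative choices."""
        out = []
        for _ in range(length):
            fb = ((state >> taps[0]) ^ (state >> taps[1])) & 1
            out.append(state & 1)
            state = (state >> 1) | (fb << 6)
        return out

    def scramble(bits, cover):
        """XOR data with the cover sequence (applying it twice restores the data)."""
        return [b ^ c for b, c in zip(bits, cover)]

    data = [0] * 16 + [1] * 16                      # a transition-poor data pattern
    cover = pn_sequence(len(data))
    tx = scramble(data, cover)
    print(sum(a != b for a, b in zip(tx, tx[1:])))  # far more bit transitions than the raw data
    print(scramble(tx, cover) == data)              # descrambling recovers the original
    ```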

  2. Performance monitoring following total sleep deprivation: effects of task type and error rate.

    PubMed

    Renn, Ryan P; Cote, Kimberly A

    2013-04-01

    There is a need to understand the neural basis of performance deficits that result from sleep deprivation. Performance monitoring tasks generate response-locked event-related potentials (ERPs), generated from the anterior cingulate cortex (ACC) located in the medial surface of the frontal lobe that reflect error processing. The outcome of previous research on performance monitoring during sleepiness has been mixed. The purpose of this study was to evaluate performance monitoring in a controlled study of experimental sleep deprivation using a traditional Flanker task, and to broaden this examination using a response inhibition task. Forty-nine young adults (24 male) were randomly assigned to a total sleep deprivation or rested control group. The sleep deprivation group was slower on the Flanker task and less accurate on a Go/NoGo task compared to controls. General attentional impairments were evident in stimulus-locked ERPs for the sleep deprived group: P300 was delayed on Flanker trials and smaller to Go-stimuli. Further, N2 was smaller to NoGo stimuli, and the response-locked ERN was smaller on both tasks, reflecting neurocognitive impairment during performance monitoring. In the Flanker task, higher error rate was associated with smaller ERN amplitudes for both groups. Examination of ERN amplitude over time showed that it attenuated in the rested control group as error rate increased, but such habituation was not apparent in the sleep deprived group. Poor performing sleep deprived individuals had a larger Pe response than controls, possibly indicating perseveration of errors. These data provide insight into the neural underpinnings of performance failure during sleepiness and have implications for workplace and driving safety. PMID:23384887

  3. High rate manure supernatant digestion.

    PubMed

    Bergland, Wenche Hennie; Dinamarca, Carlos; Toradzadegan, Mehrdad; Nordgård, Anna Synnøve Røstad; Bakke, Ingrid; Bakke, Rune

    2015-06-01

    The study shows that high rate anaerobic digestion may be an efficient way to obtain sustainable energy recovery from slurries such as pig manure. High process capacity and robustness to 5% daily load increases are observed in the 370 mL sludge bed AD reactors investigated. The supernatant from partly settled, stored pig manure was fed at rates giving hydraulic retention times, HRT, gradually decreased from 42 to 1.7 h imposing a maximum organic load of 400 g COD L(-1) reactor d(-1). The reactors reached a biogas production rate of 97 g COD L(-1) reactor d(-1) at the highest load at which process stress signs were apparent. The yield was ∼0.47 g COD methane g(-1) CODT feed at HRT above 17 h, gradually decreasing to 0.24 at the lowest HRT (0.166 NL CH4 g(-1) CODT feed decreasing to 0.086). Reactor pH was innately stable at 8.0 ± 0.1 at all HRTs with alkalinity between 9 and 11 g L(-1). The first stress symptom occurred as reduced methane yield when HRT dropped below 17 h. When HRT dropped below 4 h the propionate removal stopped. The yield from acetate removal was constant at 0.17 g COD acetate removed per g CODT substrate. This robust methanogenesis implies that pig manure supernatant, and probably other similar slurries, can be digested for methane production in compact and effective sludge bed reactors. Denaturing gradient gel electrophoresis (DGGE) analysis indicated a relatively fast adaptation of the microbial communities to manure and implies that non-adapted granular sludge can be used to start such sludge bed bioreactors. PMID:25776915

  4. Assessment of type I error rate associated with dose-group switching in a longitudinal Alzheimer trial.

    PubMed

    Habteab Ghebretinsae, Aklilu; Molenberghs, Geert; Dmitrienko, Alex; Offen, Walt; Sethuraman, Gopalan

    2014-01-01

    In clinical trials, there is always the possibility of using data-driven adaptation at the end of a study. There is, however, concern about whether the type I error rate of the trial could be inflated with such a design, thus necessitating multiplicity adjustment. In this project, a simulation experiment was set up to assess type I error rate inflation associated with switching dose group as a function of dropout rate at the end of the study, where the primary analysis is in terms of a longitudinal outcome. This simulation is inspired by a clinical trial in Alzheimer's disease. The type I error rate was assessed under a number of scenarios, in terms of differing correlations between efficacy and tolerance, different missingness mechanisms, and different probabilities of switching. A collection of parameter values was used to assess sensitivity of the analysis. Results from ignorable likelihood analysis show that the type I error rate with and without switching was approximately the posited error rate for the various scenarios. Under last observation carried forward (LOCF), the type I error rate blew up both with and without switching. The type I error inflation is clearly connected to the criterion used for switching. While in general switching, in a way related to the primary endpoint, may impact the type I error, this was not the case for most scenarios in the longitudinal Alzheimer trial setting under consideration, where patients are expected to worsen over time. PMID:24697817

  5. Phase error compensation methods for high-accuracy profile measurement

    NASA Astrophysics Data System (ADS)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Zhang, Zonghua; Jiang, Hao; Yin, Yongkai; Huang, Shujun

    2016-04-01

    In phase-shifting algorithm-based fringe projection profilometry, the nonlinear intensity response, called the gamma effect, of the projector-camera setup is a major source of error in phase retrieval. This paper proposes two novel, accurate approaches to realize both active and passive phase error compensation based on a universal phase error model which is suitable for an arbitrary phase-shifting step. The experimental results on phase error compensation and profile measurement of standard components verified the validity and accuracy of the two proposed approaches, which are robust under changing measurement conditions.
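
    For context, the standard N-step phase-shifting relations that such gamma-compensation methods build on are given below; this is the generic algorithm, not the paper's specific error model.

    $$I_n(x,y) = A(x,y) + B(x,y)\cos\!\left(\phi(x,y) + \frac{2\pi n}{N}\right), \qquad \phi(x,y) = -\arctan\frac{\sum_{n=0}^{N-1} I_n(x,y)\,\sin(2\pi n/N)}{\sum_{n=0}^{N-1} I_n(x,y)\,\cos(2\pi n/N)},$$

    where the gamma effect distorts the captured intensities I_n away from this ideal sinusoidal model and thereby introduces periodic phase errors.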

  6. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  7. Symbol error rate bound of DPSK modulation system in directional wave propagation

    NASA Astrophysics Data System (ADS)

    Hua, Jingyu; Zhuang, Changfei; Zhao, Xiaomin; Li, Gang; Meng, Qingmin

    This paper presents a new approach to determine the symbol error rate (SER) bound of differential phase shift keying (DPSK) systems in a directional fading channel, where the von Mises distribution is used to illustrate the non-isotropic angle of arrival (AOA). Our approach relies on the closed-form expression of the phase difference probability density function (pdf) in coherent fading channels and leads to expressions of the DPSK SER bound involving a single finite-range integral which can be readily evaluated numerically. Moreover, the simulation yields results consistent with numerical computation.
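
    For reference, the von Mises density used above to model the non-isotropic angle of arrival is

    $$p(\theta) \;=\; \frac{\exp\!\big(\kappa\cos(\theta-\mu)\big)}{2\pi I_0(\kappa)}, \qquad \theta \in [-\pi, \pi),$$

    where μ is the mean AOA, κ controls the angular concentration (κ = 0 recovers the isotropic case), and I_0 is the zeroth-order modified Bessel function of the first kind.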

  8. Digitally modulated bit error rate measurement system for microwave component evaluation

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo W.; Budinger, James M.

    1989-01-01

    The NASA Lewis Research Center has developed a unique capability for evaluation of the microwave components of a digital communication system. This digitally modulated bit-error-rate (BER) measurement system (DMBERMS) features a continuous data digital BER test set, a data processor, a serial minimum shift keying (SMSK) modem, noise generation, and computer automation. Application of the DMBERMS has provided useful information for the evaluation of existing microwave components and of design goals for future components. The design and applications of this system for digitally modulated BER measurements are discussed.

  9. High Resolution, High Frame Rate Video Technology

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The Workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) State of the art in the video system performance; (2) Development plan for the HHV system; (3) Advanced technology for image gathering, coding, and processing; (4) Data compression applied to HHV; (5) Data transmission networks; and (6) Results of the users' requirements survey conducted by NASA.

  10. Investigation on the bit error rate performance of 40Gb/s space optical communication system based on BPSK scheme

    NASA Astrophysics Data System (ADS)

    Li, Mi; Li, Bowen; Zhang, Xuping; Song, Yuejiang; Liu, Jia; Tu, Guojie

    2015-08-01

    Space optical communication is attracting increasing attention because it offers advantages such as high security and better communication quality compared with microwave communication. Current space optical links have already reached data rates of several Gb/s, and the next generation of space optical systems targets a higher data rate of 40 Gb/s. Traditional optical communication systems, however, cannot satisfy this requirement at such a high data rate. This paper therefore considers a ground optical communication system operating at a 40 Gb/s data rate as a step toward space optical communication at high data rates. At 40 Gb/s, a waveguide modulator must be used to modulate the optical signal, which is then amplified by a laser amplifier; in addition, a more sensitive avalanche photodiode (APD) is used as the detector to improve communication quality. Based on this system, we analyze the communication quality of the downlink of a space optical communication system at a data rate of 40 Gb/s. The bit error rate (BER) performance, an important measure of communication quality, is discussed as a function of several parameter ratios. The results show that there exists an optimum ratio of gain factor to divergence angle that yields the best BER performance, and that increasing the ratio of receiving diameter to divergence angle improves communication quality. These results help characterize optical communication systems at high data rates and can contribute to system design.

  11. The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.

    PubMed

    Fadaee, Shannon B; Migliaccio, Americo A

    2016-04-01

    The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation. PMID:26715411

  12. Analytical Evaluation of Bit Error Rate Performance of a Free-Space Optical Communication System with Receive Diversity Impaired by Pointing Error

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2015-06-01

    Analysis is carried out to evaluate the conditional bit error rate conditioned on a given value of pointing error for a Free Space Optical (FSO) link with multiple receivers using Equal Gain Combining (EGC). The probability density function (pdf) of the output signal to noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BER of SISO and SIMO FSO links is analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance results are evaluated for several values of pointing jitter parameters and number of IM/DD receivers. The results show that the FSO system suffers significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement of receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10-10. It is also noticed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.
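
    The averaging step described above can be written compactly as (our restatement of the standard relation):

    $$\overline{P}_b \;=\; \int_0^{\infty} P_b(e \mid \gamma)\, f_{\gamma}(\gamma)\, d\gamma,$$

    where P_b(e | γ) is the conditional BER at output SNR γ and f_γ is the pdf of the EGC output SNR derived in the presence of pointing error.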

  13. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    NASA Astrophysics Data System (ADS)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles approximately every two years. The most widely available memory architectures today have submicrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  14. Patch diameter limitation due to high chirp rates in focused SAR images

    NASA Astrophysics Data System (ADS)

    Doerry, Armin W.

    1994-10-01

    Polar-format processed synthetic aperture radar (SAR) images have a limited focused patch diameter that results from unmitigated phase errors. Very high chirp rates, encountered with fine-resolution short-pulse radars, exacerbate the problem via a residual video phase error term. This letter modifies the traditional maximum patch diameter expression to include effects of very high chirp rates.

  15. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: rates, drivers, and sources of error

    USGS Publications Warehouse

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-01-01

    Tropical and sub-tropical wetlands are considered to be globally important sources for greenhouse gases but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida, to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. Accelerator mass spectroscopy dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (~0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  16. Peat Accumulation in the Everglades (USA) during the Past 4000 Years: Rates, Drivers, and Sources of Error

    NASA Astrophysics Data System (ADS)

    Glaser, P. H.; Volin, J. C.; Givnish, T. J.; Hansen, B. C.; Stricker, C. A.

    2012-12-01

    Tropical and sub-tropical wetlands are considered to be globally important sources for greenhouse gases but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida, to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. AMS-14C dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (~0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  17. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: Rates, drivers, and sources of error

    NASA Astrophysics Data System (ADS)

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-09-01

    Tropical and subtropical wetlands are considered to be globally important sources of greenhouse gases, but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida in order to assess these problems and determine the factors that could govern carbon accumulation in this large subtropical wetland. Accelerator mass spectroscopy dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.
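
    For orientation, a minimal sketch (not the authors' code) of the two quantities these records turn on: a hard-water (reservoir) offset subtracted from measured radiocarbon ages, and a long-term accumulation rate obtained by dividing cumulative mass by elapsed time. The cumulative-mass and reservoir-offset figures below are hypothetical; only the 12.1 g m-2 yr-1 value comes from the abstract.

```python
def corrected_age(measured_age_bp: float, reservoir_offset_yr: float) -> float:
    """Subtract a quantified hard-water (reservoir) offset from a 14C age."""
    return measured_age_bp - reservoir_offset_yr

def accumulation_rate(cumulative_mass_g_m2: float, elapsed_years: float) -> float:
    """Long-term accumulation rate in g m-2 yr-1."""
    return cumulative_mass_g_m2 / elapsed_years

# Illustration only: ~48.4 kg C m-2 over 4000 yr gives the 12.1 g m-2 yr-1
# carbon rate quoted above; the 600-yr reservoir offset is hypothetical.
print(accumulation_rate(48_400, 4_000))   # -> 12.1
print(corrected_age(4_600, 600))          # -> 4000
```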

  18. Anti-saccade error rates as a measure of attentional bias in cocaine dependent subjects.

    PubMed

    Dias, Nadeeka R; Schmitz, Joy M; Rathnayaka, Nuvan; Red, Stuart D; Sereno, Anne B; Moeller, F Gerard; Lane, Scott D

    2015-10-01

    Cocaine-dependent (CD) subjects show attentional bias toward cocaine-related cues, and this form of cue-reactivity may be predictive of craving and relapse. Attentional bias has previously been assessed by models that present drug-relevant stimuli and measure physiological and behavioral reactivity (often reaction time). Studies of several CNS diseases outside of substance use disorders consistently report anti-saccade deficits, suggesting a compromise in the interplay between higher-order cortical processes in voluntary eye control (i.e., anti-saccades) and reflexive saccades driven more by involuntary midbrain perceptual input (i.e., pro-saccades). Here, we describe a novel attentional-bias task developed by using measurements of saccadic eye movements in the presence of cocaine-specific stimuli, combining previously unique research domains to capitalize on their respective experimental and conceptual strengths. CD subjects (N = 46) and healthy controls (N = 41) were tested on blocks of pro-saccade and anti-saccade trials featuring cocaine and neutral stimuli (pictures). Analyses of eye-movement data indicated (1) greater overall anti-saccade errors in the CD group; (2) greater attentional bias in CD subjects as measured by anti-saccade errors to cocaine-specific (relative to neutral) stimuli; and (3) no differences in pro-saccade error rates. Attentional bias was correlated with scores on the obsessive-compulsive cocaine scale. The results demonstrate increased saliency of and differential attention to cocaine cues in the CD group. The assay provides a sensitive index of saccadic (visual inhibitory) control, a specific index of attentional bias to drug-relevant cues, and preliminary insight into the visual circuitry that may contribute to drug-specific cue reactivity. PMID:26164486
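
    The attentional-bias index described here is, at its core, a difference in anti-saccade error rates between cocaine-cue and neutral-cue trials. A minimal sketch of that computation follows; the data layout (columns subject, cue, error) is assumed, not taken from the paper.

```python
import pandas as pd

def antisaccade_bias(trials: pd.DataFrame) -> pd.Series:
    """Per-subject attentional bias from anti-saccade trials.
    Expects columns: subject, cue ('cocaine' or 'neutral'), error (0/1)."""
    rates = trials.groupby(["subject", "cue"])["error"].mean().unstack("cue")
    return rates["cocaine"] - rates["neutral"]

# Toy data for two subjects
toy = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "cue":     ["cocaine", "cocaine", "neutral", "neutral"] * 2,
    "error":   [1, 1, 0, 1, 0, 1, 0, 0],
})
print(antisaccade_bias(toy))   # positive values = bias toward cocaine cues
```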

  19. Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates

    SciTech Connect

    Zamanali, J.H. ); Hubbard, F.R. ); Mosleh, A. ); Waller, M.A. )

    1992-01-01

    The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition.
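
    SLIM-style methods typically combine PSF ratings into a success likelihood index (SLI) and map it to a human error probability through a log-linear calibration against anchor tasks. The sketch below illustrates that generic scheme only; the weights, ratings, and anchor values are hypothetical, and the paper's specific evolutionary refinements are not reproduced.

```python
import math

def sli(weights, ratings):
    """Success likelihood index: weighted sum of PSF ratings (weights sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * r for w, r in zip(weights, ratings))

def calibrate(sli_1, hep_1, sli_2, hep_2):
    """Fit log10(HEP) = a * SLI + b from two anchor tasks of known HEP."""
    a = (math.log10(hep_1) - math.log10(hep_2)) / (sli_1 - sli_2)
    b = math.log10(hep_1) - a * sli_1
    return a, b

def human_error_rate(sli_value, a, b):
    return 10.0 ** (a * sli_value + b)

# Hypothetical PSFs: procedural difficulty, configuration features, time available
weights = [0.5, 0.2, 0.3]
ratings = [0.4, 0.7, 0.6]                      # 0 = worst .. 1 = best (made up)
a, b = calibrate(0.9, 1e-4, 0.2, 1e-1)         # made-up anchor tasks
print(human_error_rate(sli(weights, ratings), a, b))   # ~4e-3
```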

  20. Serialized Quantum Error Correction Protocol for High-Bandwidth Quantum Repeaters

    NASA Astrophysics Data System (ADS)

    Glaudell, Andrew; Waks, Edo; Taylor, Jacob

    Advances in single-photon creation, transmission, and detection suggest that sending quantum information over optical fibers may have low enough losses to be overcome using quantum error correction. Such error-corrected communication is equivalent to a novel quantum repeater scheme, but crucial questions regarding implementation and system requirements remain open. In this talk, I will show that long-range entangled bit generation with rates approaching 10^8 entangled bits per second may be possible using a completely serialized protocol, in which photons are generated, entangled, and error corrected via sequential, one-way interactions with as few matter qubits as possible. Provided loss and error rates of the required elements are below the threshold for quantum error correction, this scheme demonstrates improved performance over transmission of single photons. We find improvement in entangled bit rates at large distances using this serial protocol and various quantum error correcting codes.

  1. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…

  2. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy

    SciTech Connect

    Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-15

    Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with {sup 192}Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from {+-}5 to {+-}15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when

  3. Measuring error rates in genomic perturbation screens: gold standards for human functional genomics

    PubMed Central

    Hart, Traver; Brown, Kevin R; Sircoulomb, Fabrice; Rottapel, Robert; Moffat, Jason

    2014-01-01

    Technological advancement has opened the door to systematic genetics in mammalian cells. Genome-scale loss-of-function screens can assay fitness defects induced by partial gene knockdown, using RNA interference, or complete gene knockout, using new CRISPR techniques. These screens can reveal the basic blueprint required for cellular proliferation. Moreover, comparing healthy to cancerous tissue can uncover genes that are essential only in the tumor; these genes are targets for the development of specific anticancer therapies. Unfortunately, progress in this field has been hampered by off-target effects of perturbation reagents and poorly quantified error rates in large-scale screens. To improve the quality of information derived from these screens, and to provide a framework for understanding the capabilities and limitations of CRISPR technology, we derive gold-standard reference sets of essential and nonessential genes, and provide a Bayesian classifier of gene essentiality that outperforms current methods on both RNAi and CRISPR screens. Our results indicate that CRISPR technology is more sensitive than RNAi and that both techniques have nontrivial false discovery rates that can be mitigated by rigorous analytical methods. PMID:24987113
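
    A minimal sketch of the general idea, a log-likelihood-ratio score for each gene computed against distributions fit to gold-standard essential and nonessential reference sets, is given below. It is a simplified stand-in (Gaussian fits, no cross-validation), not the published classifier, and the variable names are assumptions.

```python
import numpy as np
from scipy.stats import norm

def fit_reference(fold_changes_by_gene, reference_genes):
    """Fit a Gaussian to all fold changes of a gold-standard gene set."""
    vals = np.concatenate([fold_changes_by_gene[g] for g in reference_genes
                           if g in fold_changes_by_gene])
    return vals.mean(), vals.std(ddof=1)

def essentiality_score(fold_changes, ess_params, noness_params):
    """Sum of per-measurement log-likelihood ratios; higher = more essential."""
    mu_e, sd_e = ess_params
    mu_n, sd_n = noness_params
    return float(np.sum(norm.logpdf(fold_changes, mu_e, sd_e)
                        - norm.logpdf(fold_changes, mu_n, sd_n)))

# fold_changes_by_gene: dict mapping gene -> array of log2 fold changes
# ess = fit_reference(fold_changes_by_gene, gold_standard_essentials)
# non = fit_reference(fold_changes_by_gene, gold_standard_nonessentials)
# score = essentiality_score(fold_changes_by_gene["SOME_GENE"], ess, non)
```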

  4. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R., IV; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
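
    A small Monte Carlo sketch in the spirit of these simulations is shown below: exponential trends with log-normal sampling error (CV = 40%), a linear-regression test on log counts, and rejection rates tallied over replicates. It is simplified relative to the paper (a single two-sided test, no autocorrelation).

```python
import numpy as np
from scipy import stats

def trend_power(trend=0.05, n_years=10, cv=0.40, alpha=0.05, n_sims=1000, seed=1):
    """Fraction of simulated series whose log-linear regression slope is
    significant at level alpha (two-sided), for a given trend in %/year."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + cv**2))       # log-normal sigma matching the CV
    years = np.arange(n_years)
    rejections = 0
    for _ in range(n_sims):
        expected = 100.0 * (1.0 + trend) ** years
        counts = expected * rng.lognormal(-sigma**2 / 2.0, sigma, n_years)
        result = stats.linregress(years, np.log(counts))
        rejections += result.pvalue < alpha
    return rejections / n_sims

print(trend_power(trend=0.05, n_years=10))   # low power with only 10 years
print(trend_power(trend=0.05, n_years=50))   # near 1.0 with 50 years
```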

  5. Accuracy of High-Rate GPS for Seismology

    NASA Technical Reports Server (NTRS)

    Elosegui, P.; Davis, J. L.; Oberlander, D.; Baena, R.; Ekstrom, G.

    2006-01-01

    We built a device for translating a GPS antenna on a positioning table to simulate the ground motions caused by an earthquake. The earthquake simulator is accurate to better than 0.1 mm in position, and provides the "ground truth" displacements for assessing the technique of high-rate GPS. We found that the root-mean-square error of the 1-Hz GPS position estimates over the 15-min duration of the simulated seismic event was 2.5 mm, with approximately 96% of the observations in error by less than 5 mm, and was independent of GPS antenna motion. The error spectrum of the GPS estimates is approximately flicker noise, with a 50% decorrelation time for the position error of approximately 1.6 s. We found that, for the particular event simulated, the spectrum of surface deformations exceeds the error spectrum of the GPS measurements within a finite band. More studies are required to determine whether a generally optimal bandwidth exists for a target group of seismic events.

  6. The testing of the aspheric mirror high-frequency band error

    NASA Astrophysics Data System (ADS)

    Wan, JinLong; Li, Bo; Li, XinNan

    2015-08-01

    In recent years, high-frequency surface errors of mirrors have received increasing attention, and the manufacture of advanced telescopes now carries explicit specifications for them. However, the off-axis aspheric sub-mirrors used in such telescopes are large, and full-aperture interferometric surface measurement would require a complex optical compensation device. We therefore propose a sub-aperture stitching method for measuring the high-frequency errors of aspheric mirrors. The method requires no compensation optics: only sub-aperture surface maps are measured. By analyzing the Zernike polynomial coefficients corresponding to the frequency content of the errors, removing the first 15 Zernike terms, and then stitching the sub-aperture maps, the high-frequency errors over the full aperture of the tested mirror are obtained. A 330 mm off-axis aspheric hexagonal mirror was measured with this method, yielding a complete map of its high-frequency surface errors and demonstrating the feasibility of the approach.
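
    A hedged illustration of the low-order-removal step follows: fit and subtract a low-order surface from a measured sub-aperture map so that only the high-frequency residual remains. The paper removes the first 15 Zernike polynomials; the sketch uses a generic 15-term polynomial basis as a stand-in to keep the example short.

```python
import numpy as np

def remove_low_order(x, y, z, max_order=4):
    """Least-squares removal of all monomials x**i * y**j with i + j <= max_order
    (15 terms for max_order = 4, standing in for the first 15 Zernike terms);
    returns the high-frequency residual."""
    cols = [x**i * y**j for i in range(max_order + 1)
                        for j in range(max_order + 1 - i)]
    basis = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(basis, z, rcond=None)
    return z - basis @ coeffs

# Toy sub-aperture map: smooth low-order shape plus a high-frequency ripple.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
z = 0.5 * x**2 + 0.2 * x * y + 2e-3 * np.sin(40 * x)
residual = remove_low_order(x.ravel(), y.ravel(), z.ravel())
print(residual.std())   # roughly the ripple RMS (~1.4e-3)
```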

  7. The effect of narrow-band digital processing and bit error rate on the intelligibility of ICAO spelling alphabet words

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, Astrid

    1987-08-01

    The recognition of ICAO spelling alphabet words (ALFA, BRAVO, CHARLIE, etc.) is compared with diagnostic rhyme test (DRT) scores for the same conditions. The voice conditions include unprocessed speech; speech processed through the DOD standard linear-predictive-coding algorithm operating at 2400 bit/s with random error rates of 0, 2, 5, 8, and 12 percent; and speech processed through an 800-bit/s pattern-matching algorithm. The results suggest that, with distinctive vocabularies, word intelligibility can be expected to remain high even when DRT scores fall into the poor range. However, once the DRT scores fall below 75 percent, the intelligibility can be expected to fall off rapidly; at DRT scores below 50, the recognition of a distinctive vocabulary should also fall below 50 percent.

  8. An intravenous medication safety system: preventing high-risk medication errors at the point of care.

    PubMed

    Hatcher, Irene; Sullivan, Mark; Hutchinson, James; Thurman, Susan; Gaffney, F Andrew

    2004-10-01

    Improving medication safety at the point of care--particularly for high-risk drugs--is a major concern of nursing administrators. The medication errors most likely to cause harm are administration errors related to infusion of high-risk medications. An intravenous medication safety system is designed to prevent high-risk infusion medication errors and to capture continuous quality improvement data for best practice improvement. Initial testing with 50 systems in 2 units at Vanderbilt University Medical Center revealed that, even in the presence of a fully mature computerized prescriber order-entry system, the new safety system averted 99 potential infusion errors in 8 months. PMID:15577664

  9. Performance analysis of content-addressable search and bit-error rate characteristics of a defocused volume holographic data storage system.

    PubMed

    Das, Bhargab; Joseph, Joby; Singh, Kehar

    2007-08-01

    One of the methods for smoothing the high intensity dc peak in the Fourier spectrum for reducing the reconstruction error in a Fourier transform volume holographic data storage system is to record holograms some distance away from or in front of the Fourier plane. We present the results of our investigation on the performance of such a defocused holographic data storage system in terms of bit-error rate and content search capability. We have evaluated the relevant recording geometry through numerical simulation, by obtaining the intensity distribution at the output detector plane. This has been done by studying the bit-error rate and the content search capability as a function of the aperture size and position of the recording material away from the Fourier plane. PMID:17676163

  10. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, W. S.; Burkhart, J. F.; Kylling, A.

    2015-08-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
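
    The dominant direct-beam part of this tilt error can be written as cos(theta_i)/cos(theta_z) - 1, with theta_i the incidence angle on the tilted sensor. The sketch below evaluates only that geometric term (no diffuse component or spectral integration), so its numbers only roughly track the percentages quoted in the abstract.

```python
import numpy as np

def direct_tilt_error(sza_deg, tilt_deg, sun_az_deg=0.0, tilt_az_deg=0.0):
    """Relative error in measured direct-beam irradiance for a sensor tilted
    by tilt_deg toward azimuth tilt_az_deg, with the sun at zenith angle
    sza_deg and azimuth sun_az_deg."""
    sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
    daz = np.radians(sun_az_deg - tilt_az_deg)
    cos_inc = np.cos(tilt) * np.cos(sza) + np.sin(tilt) * np.sin(sza) * np.cos(daz)
    return cos_inc / np.cos(sza) - 1.0

# Worst case: sensor tilted directly toward the sun at a 60 degree zenith angle.
for tilt in (1, 3, 5):
    print(tilt, f"{100 * direct_tilt_error(60, tilt):+.1f} %")
```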

  11. Breaking Up Large High Schools: Five Common (and Understandable) Errors of Execution. ERIC Digest.

    ERIC Educational Resources Information Center

    Gregory, Tom

    In the past 30 years, research has suggested the need for much smaller high schools. In response, some administrators have attempted to subdivide big high schools into smaller entities. This digest reviews recent research on the movement to break up large schools and discusses five types of error common to such attempts--errors of autonomy, size,…

  12. Effect of automated drug distribution systems on medication error rates in a short-stay geriatric unit

    PubMed Central

    Cousein, Etienne; Mareville, Julie; Lerooy, Alexandre; Caillau, Antoine; Labreuche, Julien; Dambre, Delphine; Odou, Pascal; Bonte, Jean-Paul; Puisieux, François; Decaudin, Bertrand; Coupé, Patrick

    2014-01-01

    Rationale, aims and objectives To assess the impact of an automated drug distribution system on medication errors (MEs). Methods Before-after observational study in a 40-bed short-stay geriatric unit within an 1800-bed general hospital in Valenciennes, France. Researchers attended nurse medication administration rounds and compared administered to prescribed drugs, before and after the drug distribution system changed from a ward stock system (WSS) to a unit dose dispensing system (UDDS), integrating a unit dose dispensing robot and automated medication dispensing cabinet (AMDC). Results A total of 615 opportunities of errors (OEs) were observed among 148 patients treated during the WSS period, and 783 OEs were observed among 166 patients treated during the UDDS period. ME [medication administration error (MAE)] rates were calculated and compared between the two periods. Secondary measures included type of errors, seriousness of errors and risk reduction for the patients. The implementation of an automated drug dispensing system resulted in a 53% reduction in MAEs. All error types were reduced in the UDDS period compared with the WSS period (P < 0.001). Wrong dose and wrong drug errors were reduced by 79.1% (2.4% versus 0.5%, P = 0.005) and 93.7% (1.9% versus 0.01%, P = 0.009), respectively. Conclusion An automated UDDS combining a unit dose dispensing robot and AMDCs could reduce discrepancies between ordered and administered drugs, thus improving medication safety among the elderly. PMID:24917185

  13. Prevalence of Refractive Errors among High School Students in Western Iran

    PubMed Central

    Hashemi, Hassan; Rezvan, Farhad; Beiranvand, Asghar; Papi, Omid-Ali; Hoseini Yazdi, Hosein; Ostadimoghaddam, Hadi; Yekta, Abbas Ali; Norouzirad, Reza; Khabazkhoob, Mehdi

    2014-01-01

    Purpose To determine the prevalence of refractive errors among high school students. Methods In a cross-sectional study, we applied stratified cluster sampling on high school students of Aligoudarz, Western Iran. Examinations included visual acuity, non-cycloplegic refraction by autorefraction and fine tuning with retinoscopy. Myopia and hyperopia were defined as spherical equivalent of -0.5/+0.5 diopter (D) or worse, respectively; astigmatism was defined as cylindrical error >0.5 D and anisometropia as an interocular difference in spherical equivalent exceeding 1 D. Results Of 451 selected students, 438 participated in the study (response rate, 97.0%). Data from 434 subjects with mean age of 16±1.3 (range, 14 to 21) years including 212 (48.8%) male subjects was analyzed. The prevalence of myopia, hyperopia and astigmatism was 29.3% [95% confidence interval (CI), 25-33.6%], 21.7% (95%CI, 17.8-25.5%), and 20.7% (95%CI, 16.9-24.6%), respectively. The prevalence of myopia increased significantly with age [odds ratio (OR)=1.30, P=0.003] and was higher among boys (OR=3.10, P<0.001). The prevalence of hyperopia was significantly higher in girls (OR=0.49, P=0.003). The prevalence of astigmatism was 25.9% in boys and 15.8% in girls (OR=2.13, P=0.002). The overall prevalence of high myopia and high hyperopia were 0.5% and 1.2%, respectively. The prevalence of with-the-rule, against-the-rule, and oblique astigmatism was 14.5%, 4.8% and 1.4%, respectively. Overall, 4.6% (95%CI, 2.6-6.6%) of subjects were anisometropic. Conclusion More than half of high school students in Aligoudarz had at least one type of refractive error. Compared to similar studies, the prevalence of refractive errors was high in this age group. PMID:25279126
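
    The classification rules stated in the abstract translate directly into code; the sketch below assumes the usual definition of spherical equivalent (sphere plus half the cylinder) and per-eye sphere/cylinder inputs.

```python
def spherical_equivalent(sphere_d: float, cylinder_d: float) -> float:
    """Spherical equivalent in diopters (sphere + cylinder / 2)."""
    return sphere_d + cylinder_d / 2.0

def classify_eye(sphere_d: float, cylinder_d: float) -> dict:
    """Apply the abstract's cutoffs to one eye's refraction."""
    se = spherical_equivalent(sphere_d, cylinder_d)
    return {
        "myopia": se <= -0.5,        # SE of -0.5 D or worse
        "hyperopia": se >= 0.5,      # SE of +0.5 D or worse
        "astigmatism": abs(cylinder_d) > 0.5,
    }

def anisometropia(se_right: float, se_left: float) -> bool:
    """Interocular spherical-equivalent difference exceeding 1 D."""
    return abs(se_right - se_left) > 1.0

print(classify_eye(-1.25, -0.75))      # myopic and astigmatic example
print(anisometropia(-1.625, -0.25))    # True
```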

  14. On the Power of Multiple Independent Tests when the Experimentwise Error Rate Is Controlled.

    ERIC Educational Resources Information Center

    Hsu, Louis M.

    1980-01-01

    The problem addressed is of assessing the loss of power which results from keeping the probability that at least one Type I error will occur in a family of N statistical tests at a tolerably low level. (Author/BW)

  15. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    SciTech Connect

    Chau, H.F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5-0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.

  16. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5-0.1√5≈27.6%, thereby making it the most error resistant scheme known to date.
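
    For reference, the threshold quoted in this record and the previous one can be written out explicitly:

```latex
e_{\max} \;=\; \tfrac{1}{2} - \tfrac{\sqrt{5}}{10} \;=\; 0.5 - 0.1\sqrt{5} \;\approx\; 0.2764 \;\approx\; 27.6\%
```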

  17. High rates of molecular evolution in hantaviruses.

    PubMed

    Ramsden, Cadhla; Melo, Fernando L; Figueiredo, Luiz M; Holmes, Edward C; Zanotto, Paolo M A

    2008-07-01

    Hantaviruses are rodent-borne Bunyaviruses that infect the Arvicolinae, Murinae, and Sigmodontinae subfamilies of Muridae. The rate of molecular evolution in the hantaviruses has been previously estimated at approximately 10(-7) nucleotide substitutions per site, per year (substitutions/site/year), based on the assumption of codivergence and hence shared divergence times with their rodent hosts. If substantiated, this would make the hantaviruses among the slowest evolving of all RNA viruses. However, as hantaviruses replicate with an RNA-dependent RNA polymerase, with error rates in the region of one mutation per genome replication, this low rate of nucleotide substitution is anomalous. Here, we use a Bayesian coalescent approach to estimate the rate of nucleotide substitution from serially sampled gene sequence data for hantaviruses known to infect each of the 3 rodent subfamilies: Araraquara virus (Sigmodontinae), Dobrava virus (Murinae), Puumala virus (Arvicolinae), and Tula virus (Arvicolinae). Our results reveal that hantaviruses exhibit short-term substitution rates of 10(-2) to 10(-4) substitutions/site/year and so are within the range exhibited by other RNA viruses. The disparity between this substitution rate and that estimated assuming rodent-hantavirus codivergence suggests that the codivergence hypothesis may need to be reevaluated. PMID:18417484

  18. Design of high-power aspherical ophthalmic lenses with a reduced error budget

    NASA Astrophysics Data System (ADS)

    Sun, Wen-Shing; Chang, Horng; Sun, Ching-Cherng; Chang, Ming-Wen; Lin, Ching-Huang; Tien, Chuen-Lin

    2002-02-01

    In the lens optimization process, ophthalmic lens designers have usually constructed error functions at only three oblique field points (0.5, 0.7, and 1.0 of the full field). This seems sufficient to achieve a balanced trade-off when the astigmatic error, the power error, and the distortion are all considered simultaneously. However, for high-power ophthalmic lenses, the aberration curves still show serious residual errors even when aspherical coefficients are included. The analytical results indicate that error suppression at up to 7 field points may be required in some cases. The suppression is effective, and design examples of both positive and negative lenses are presented.

  19. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels

    PubMed Central

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-01-01

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. PMID:26694878

  20. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:26694878

  1. The Effects of a Student Sampling Plan on Estimates of the Standard Errors for Student Passing Rates.

    ERIC Educational Resources Information Center

    Lee, Guemin; Fitzpatrick, Anne R.

    2003-01-01

    Studied three procedures for estimating the standard errors of school passing rates using a generalizability theory model and considered the effects of student sample size. Results show that procedures differ in terms of assumptions about the populations from which students were sampled, and student sample size was found to have a large effect on…

  2. Comparison of Self-Scoring Error Rate for SDS (Self Directed Search) (1970) and the Revised SDS (1977).

    ERIC Educational Resources Information Center

    Price, Gary E.; And Others

    A comparison of self-scoring error rates for the Self Directed Search (SDS) and the revised SDS is presented. The subjects were college freshmen and sophomores who participated in career planning as part of their orientation program and a career workshop. Subjects (N=190 in the first study and N=84 in the second study) were then randomly assigned to the SDS…

  3. People's Hypercorrection of High-Confidence Errors: Did They Know It All Along?

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2011-01-01

    This study investigated the "knew it all along" explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when people are given corrective feedback, errors that are committed with high confidence are easier to correct than low-confidence errors. Experiment 1 showed that people were more likely to claim that…

  4. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  5. Internal pressure gradient errors in σ-coordinate ocean models in high resolution fjord studies

    NASA Astrophysics Data System (ADS)

    Berntsen, Jarle; Thiem, Øyvind; Avlesen, Helge

    2015-08-01

    Terrain following ocean models are today applied in coastal areas and fjords where the topography may be very steep. Recent advances in high performance computing facilitate model studies with very high spatial resolution. In general, numerical discretization errors tend to zero with the grid size. However, in fjords and near the coast the slopes may be very steep, and the internal pressure gradient errors associated with σ-models may be significant even in high resolution studies. The internal pressure gradient errors are due to errors when estimating the density gradients in σ-models, and these errors are investigated for two idealized test cases and for the Hardanger fjord in Norway. The methods considered are the standard second order method and a recently proposed method that is balanced such that the density gradients are zero for the case ρ = ρ(z) where ρ is the density and z is the vertical coordinate. The results show that by using the balanced method, the errors may be reduced considerably also for slope parameters larger than the maximum suggested value of 0.2. For the Hardanger fjord case initialized with ρ = ρ(z) , the errors in the results produced with the balanced method are orders of magnitude smaller than the corresponding errors in the results produced with the second order method.
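
    The error discussed here stems from the near-cancellation of two terms in the horizontal density gradient when it is evaluated in terrain-following coordinates; a standard way of writing that gradient (a notational assumption, not a quotation from the paper) is:

```latex
\left(\frac{\partial \rho}{\partial x}\right)_{z}
  \;=\;
\left(\frac{\partial \rho}{\partial x}\right)_{\sigma}
  \;-\;
\frac{\left(\partial z/\partial x\right)_{\sigma}}{\partial z/\partial \sigma}\,
\frac{\partial \rho}{\partial \sigma}
```

    For ρ = ρ(z) the two right-hand terms cancel exactly in the continuum, but with the standard second-order discretization the cancellation is only approximate over steep slopes, and the residual acts as a spurious pressure gradient; the balanced method cited above is constructed so that this cancellation is exact in that case.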

  6. Compensation of spectral and RF errors in swept-source OCT for high extinction complex demodulation

    PubMed Central

    Siddiqui, Meena; Tozburun, Serhat; Zhang, Ellen Ziyi; Vakoc, Benjamin J.

    2015-01-01

    We provide a framework for compensating errors within passive optical quadrature demodulation circuits used in swept-source optical coherence tomography (OCT). Quadrature demodulation allows for detection of both the real and imaginary components of an interference fringe, and this information separates signals from positive and negative depth spaces. To achieve a high extinction (∼60 dB) between these positive and negative signals, the demodulation error must be less than 0.1% in amplitude and phase. It is difficult to construct a system that achieves this low error across the wide spectral and RF bandwidths of high-speed swept-source systems. In a prior work, post-processing methods for removing residual spectral errors were described. Here, we identify the importance of a second class of errors originating in the RF domain, and present a comprehensive framework for compensating both spectral and RF errors. Using this framework, extinctions >60 dB are demonstrated. A stability analysis shows that calibration parameters associated with RF errors are accurate for many days, while those associated with spectral errors must be updated prior to each imaging session. Empirical procedures to derive both RF and spectral calibration parameters simultaneously and to update spectral calibration parameters are presented. These algorithms provide the basis for using passive optical quadrature demodulation circuits with high speed and wide-bandwidth swept-source OCT systems. PMID:25836784

  7. Compensation of spectral and RF errors in swept-source OCT for high extinction complex demodulation.

    PubMed

    Siddiqui, Meena; Tozburun, Serhat; Zhang, Ellen Ziyi; Vakoc, Benjamin J

    2015-03-01

    We provide a framework for compensating errors within passive optical quadrature demodulation circuits used in swept-source optical coherence tomography (OCT). Quadrature demodulation allows for detection of both the real and imaginary components of an interference fringe, and this information separates signals from positive and negative depth spaces. To achieve a high extinction (∼60 dB) between these positive and negative signals, the demodulation error must be less than 0.1% in amplitude and phase. It is difficult to construct a system that achieves this low error across the wide spectral and RF bandwidths of high-speed swept-source systems. In a prior work, post-processing methods for removing residual spectral errors were described. Here, we identify the importance of a second class of errors originating in the RF domain, and present a comprehensive framework for compensating both spectral and RF errors. Using this framework, extinctions >60 dB are demonstrated. A stability analysis shows that calibration parameters associated with RF errors are accurate for many days, while those associated with spectral errors must be updated prior to each imaging session. Empirical procedures to derive both RF and spectral calibration parameters simultaneously and to update spectral calibration parameters are presented. These algorithms provide the basis for using passive optical quadrature demodulation circuits with high speed and wide-bandwidth swept-source OCT systems. PMID:25836784
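
    To make the role of quadrature errors concrete, here is a minimal sketch of correcting a fringe for known gain and phase imbalance between the I and Q channels. The error model and calibration values are hypothetical, and the paper's full spectral- and RF-domain compensation framework is considerably more involved.

```python
import numpy as np

def correct_iq(i_meas, q_meas, gain_error, phase_error_rad):
    """Assumed error model: i_meas = I and q_meas = g * (Q cos(phi) + I sin(phi)).
    Invert it to recover the ideal quadrature pair and return I + jQ."""
    i = i_meas
    q = (q_meas / gain_error - i * np.sin(phase_error_rad)) / np.cos(phase_error_rad)
    return i + 1j * q

# Toy fringe containing only a "+50" component (one depth sign); an imbalanced
# detection would leak energy into the conjugate (negative-depth) image, and
# the correction removes that leakage.
t = np.linspace(0.0, 1.0, 1000)
ideal = np.exp(2j * np.pi * 50 * t)
g, phi = 1.02, np.radians(1.0)                 # hypothetical calibration values
i_m = ideal.real
q_m = g * (ideal.imag * np.cos(phi) + ideal.real * np.sin(phi))
print(np.abs(correct_iq(i_m, q_m, g, phi) - ideal).max())   # ~1e-16
```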

  8. Analysis of 454 sequencing error rate, error sources, and artifact recombination for detection of Low-frequency drug resistance mutations in HIV-1 DNA

    PubMed Central

    2013-01-01

    Background 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results We constructed two HIV-1 RT clones. Clone A was a wild type sequence. Clone B was identical to clone A except it contained 13 introduced drug resistant mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Although less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. Conclusion

  9. Power and Type I Error Rates for Rank-Score MANOVA Techniques.

    ERIC Educational Resources Information Center

    Pavur, Robert; Nath, Ravinder

    1989-01-01

    A Monte Carlo simulation study compared the power and Type I errors of the Wilks lambda statistic and the statistic of M. L. Puri and P. K. Sen (1971) on transformed data in a one-way multivariate analysis of variance. Preferred test procedures, based on robustness and power, are discussed. (SLD)

  10. A Comparison of Type I Error Rates of Alpha-Max with Established Multiple Comparison Procedures.

    ERIC Educational Resources Information Center

    Barnette, J. Jackson; McLean, James E.

    J. Barnette and J. McLean (1996) proposed a method of controlling Type I error in pairwise multiple comparisons after a significant omnibus F test. This procedure, called Alpha-Max, is based on a sequential cumulative probability accounting procedure in line with Bonferroni inequality. A missing element in the discussion of Alpha-Max was the…

  11. People’s Hypercorrection of High Confidence Errors: Did They Know it All Along?

    PubMed Central

    Metcalfe, Janet; Finn, Bridgid

    2010-01-01

    This study investigated the ‘knew it all along’ explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when given corrective feedback, errors that are committed with high confidence are easier to correct than low confidence errors. Experiment 1 showed that people were more likely to claim that they ‘knew it all along,’ when they were given the answers to high confidence errors as compared to low confidence errors. Experiments 2 and 3 investigated whether people really did know the correct answers before being told, or whether the claim in Experiment 1 was mere hindsight bias. Experiment 2 showed that (1) participants were more likely to choose the correct answer in a second guess multiple-choice test when they had expressed an error with high rather than low confidence, and (2) that they were more likely to generate the correct answers to high confidence as compared to low confidence errors, after being told they were wrong and to try again. Experiment 3 showed that (3) people were more likely to produce the correct answer when given a two-letter cue to high rather than low confidence errors, and that (4) when feedback was scaffolded by presenting the target letters one by one, people needed fewer such letter prompts to reach the correct answers when they had committed high, rather than low confidence errors. These results converge on the conclusion that when people said that they ‘knew it all along’, they were right. This knowledge, no doubt, contributes to why they are able to correct those high confidence errors so easily. PMID:21355668

  12. A comparison of error detection rates between the reading aloud method and the double data entry method.

    PubMed

    Kawado, Miyuki; Hinotsu, Shiro; Matsuyama, Yutaka; Yamaguchi, Takuhiro; Hashimoto, Shuji; Ohashi, Yasuo

    2003-10-01

    Data entry and its verification are important steps in the process of data management in clinical studies. In Japan, a kind of visual comparison called the reading aloud (RA) method is often used as an alternative to or in addition to the double data entry (DDE) method. In a typical RA method, one operator reads previously keyed data aloud while looking at a printed sheet or computer screen, and another operator compares the voice with the corresponding data recorded on case report forms (CRFs) to confirm whether the data are the same. We compared the efficiency of the RA method with that of the DDE method in the data management system of the Japanese Registry of Renal Transplantation. Efficiency was evaluated in terms of error detection rate and expended time. Five hundred sixty CRFs were randomly allocated to two operators for single data entry. Two types of DDE and RA methods were performed. Single data entry errors were detected in 358 of 104,720 fields (per-field error rate=0.34%). Error detection rates were 88.3% for the DDE method performed by a different operator, 69.0% for the DDE method performed by the same operator, 59.5% for the RA method performed by a different operator, and 39.9% for the RA method performed by the same operator. The differences in these rates were significant (p<0.001) between the two verification methods as well as between the types of operator (same or different). The total expended times were 74.8 hours for the DDE method and 57.9 hours for the RA method. These results suggest that in detecting errors of single data entry, the RA method is inferior to the DDE method, while its time cost is lower. PMID:14500053

  13. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.
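
    The underlying relation (standard mechanics, not quoted from the patent) linking the drive motion and substrate rotation to the sense-axis signal is:

```latex
\vec{a}_{\mathrm{Coriolis}} \;=\; -2\,\vec{\Omega}\times\vec{v}_{\mathrm{drive}},
\qquad
\lvert a_{\mathrm{sense}}\rvert \;=\; 2\,\Omega_{z}\,v_{\mathrm{drive}}
```

    where Ω_z is the rotation rate about the substrate normal and v_drive is the in-plane drive velocity; the sense-mode signal therefore scales with both, which is why translational accelerations must be rejected mechanically and residual quadrature motion canceled electrostatically.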

  14. High burn rate solid composite propellants

    NASA Astrophysics Data System (ADS)

    Manship, Timothy D.

    High burn rate propellants help maintain high levels of thrust without requiring complex, high surface area grain geometries. Utilizing high burn rate propellants allows for simplified grain geometries that not only make production of the grains easier, but the simplified grains tend to have better mechanical strength, which is important in missiles undergoing high-g accelerations. Additionally, high burn rate propellants allow for a higher volumetric loading, which reduces the overall missile's size and weight. The purpose of this study is to present methods of achieving a high burn rate propellant and to develop a composite propellant formulation that burns at 1.5 inches per second at 1000 psia. In this study, several means of achieving a high burn rate propellant were presented. In addition, several candidate approaches were evaluated using the Kepner-Tregoe method, with hydroxyl terminated polybutadiene (HTPB)-based propellants using burn rate modifiers and dicyclopentadiene (DCPD)-based propellants being selected for further evaluation. Propellants with varying levels of nano-aluminum, nano-iron oxide, FeBTA, and overall solids loading were produced using the HTPB binder and evaluated in order to determine the effect the various ingredients have on the burn rate and to find a formulation that provides the burn rate desired. Experiments were conducted to compare the burn rates of propellants using the binders HTPB and DCPD. The DCPD formulation matched that of the baseline HTPB mix. Finally, an attempt was made to produce GAP-plasticized DCPD gumstock dogbones for mechanical evaluation. Results from the study show that nano-additives have a substantial effect on propellant burn rate, with nano-iron oxide having the largest influence. Of the formulations tested, the highest burn rate came from an 84% solids loading mix using nano-aluminum, nano-iron oxide, and ammonium perchlorate in a 3:1 (20 micron:200 micron) ratio, which achieved a burn rate of 1.2 inches per second at 1000 psia.

  15. Multichannel analyzers at high rates of input

    NASA Technical Reports Server (NTRS)

    Rudnick, S. J.; Strauss, M. G.

    1969-01-01

    Multichannel analyzer, used with a gating system incorporating pole-zero compensation, pile-up rejection, and baseline-restoration, achieves good resolution at high rates of input. It improves resolution, reduces tailing and rate-contributed continuum, and eliminates spectral shift.

  16. High-rate lithium thionyl chloride cells

    NASA Technical Reports Server (NTRS)

    Goebel, F.

    1982-01-01

    A high-rate C cell with disc electrodes was developed to demonstrate current rates which are comparable to other primary systems. The tests performed established the limits of abuse beyond which the cell becomes hazardous. Tests include: impact, shock, and vibration tests; temperature cycling; and salt water immersion of fresh cells.

  17. ISS Update: High Rate Communications System

    NASA Video Gallery

    ISS Update Commentator Pat Ryan interviews Diego Serna, Communications and Tracking Officer, about the High Rate Communications System. Questions? Ask us on Twitter @NASA_Johnson and include the ha...

  18. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low-frequency error is a key factor affecting the accuracy of uncontrolled geometric processing of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of on-orbit low-frequency error analysis and calibration, which includes detection of optical-axis angle variation of the star sensors, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Secondly, we use the optical-axis angle change detection method to analyze the behavior of the low-frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to unify the datum and obtain high-precision attitude output. Finally, we construct the low-frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from one type of satellite are used. Test results demonstrate that the calibration model can describe the low-frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is markedly improved after the step-wise calibration.

  19. Tracking in high-frame-rate imaging.

    PubMed

    Wu, Shih-Ying; Wang, Shun-Li; Li, Pai-Chi

    2010-01-01

    Speckle tracking has been used for motion estimation in ultrasound imaging. Unlike conventional Doppler techniques, which are angle-dependent, speckle tracking can be utilized to estimate velocity vectors. However, the accuracy of speckle-tracking methods is limited by speckle decorrelation, which is related to the displacement between two consecutive images, and, hence, combining high-frame-rate imaging and speckle tracking could potentially increase the accuracy of motion estimation. However, the lack of transmit focusing may also affect the tracking results and the high computational requirement may be problematic. This study therefore assessed the performance of high-frame-rate speckle tracking and compared it with conventional focusing. The effects of the signal-to-noise ratio (SNR), bulk motion, and velocity gradients were investigated in both experiments and simulations. The results show that high-frame-rate speckle tracking can achieve high accuracy if the SNR is sufficiently high. In addition, its computational complexity is acceptable because smaller search windows can be used due to the displacements between frames generally being smaller during high-frame-rate imaging. Speckle decorrelation resulting from velocity gradients within a sample volume is also not as significant during high-frame-rate imaging. PMID:20690428
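
    A minimal sketch of the block-matching step at the heart of speckle tracking follows: z-normalized cross-correlation of a kernel from one frame against shifted candidates in the next, over a small search window. Small displacements between frames are what allow the window, and hence the computation, to stay small at high frame rates.

```python
import numpy as np

def track_block(frame1, frame2, top, left, ksize=16, search=4):
    """Return (dy, dx, score): displacement of the kernel at (top, left) in
    frame1 that best matches frame2 under z-normalized cross-correlation."""
    kernel = frame1[top:top + ksize, left:left + ksize].astype(float)
    kernel = (kernel - kernel.mean()) / (kernel.std() + 1e-12)
    best_score, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame2[top + dy:top + dy + ksize,
                          left + dx:left + dx + ksize].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            score = float((kernel * cand).mean())
            if score > best_score:
                best_score, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best_score

# Toy check: frame2 is frame1 circularly shifted by (2, 1) pixels.
rng = np.random.default_rng(0)
f1 = rng.random((64, 64))
f2 = np.roll(np.roll(f1, 2, axis=0), 1, axis=1)
print(track_block(f1, f2, top=20, left=20))   # -> (2, 1, ~1.0)
```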

  20. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  1. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively introduce up to 2.7, 8.1, and 13.5 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.

  2. Improvement of bit error rate and page alignment in the holographic data storage system by using the structural similarity method.

    PubMed

    Chen, Yu-Ta; Ou-Yang, Mang; Lee, Cheng-Chung

    2012-06-01

    Although widely recognized as a promising candidate for the next generation of data storage devices, holographic data storage systems (HDSS) incur adverse effects such as noise, misalignment, and aberration. Therefore, based on the structural similarity (SSIM) concept, this work presents a more accurate locating approach than the gray level weighting method (GLWM). Three case studies demonstrate the effectiveness of the proposed approach. Case 1 focuses on achieving a high performance of a Fourier lens in HDSS, Cases 2 and 3 replace the Fourier lens with a normal lens to decrease the quality of the HDSS, and Case 3 demonstrates the feasibility of a defocus system in the worst-case scenario. Moreover, the bit error rate (BER) is evaluated in several average matrices extended from the located position. Experimental results demonstrate that the proposed SSIM method renders more accurate centering and a lower BER than the GLWM: the BER is improved by 2 dB in Cases 1 and 2 and by 1.5 dB in Case 3. PMID:22695607
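
    For reference, the structural similarity index that the locating approach builds on can be computed for a pair of equally sized patches as in the sketch below. This is a single-window SSIM with the usual constants, assuming 8-bit data; the paper's full alignment pipeline (sliding the template over candidate positions and picking the SSIM maximum) is not reproduced here.

      import numpy as np

      def ssim(x, y, data_range=255.0):
          """Single-window structural similarity index between two patches."""
          x, y = x.astype(float), y.astype(float)
          c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cov = ((x - mx) * (y - my)).mean()
          return ((2 * mx * my + c1) * (2 * cov + c2)) / \
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))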

  3. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency we are taking advantage of the CUDA texture memory using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net . PMID:20426693
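
    The core data structure, a Bloom filter holding the spectrum of trusted k-mers, is easy to sketch on the CPU. The Python outline below is illustrative only; the paper's implementation is CUDA code using texture memory, and the filter size, hash scheme, and k-mer length here are arbitrary assumptions. It shows how spectrum membership queries are used to flag suspicious positions in a read.

      import hashlib

      class BloomFilter:
          """Space-efficient set membership test: may return false positives,
          never false negatives."""
          def __init__(self, n_bits=1 << 20, n_hashes=4):
              self.n_bits, self.n_hashes = n_bits, n_hashes
              self.bits = bytearray(n_bits // 8)

          def _positions(self, item):
              for i in range(self.n_hashes):
                  h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                  yield int.from_bytes(h[:8], "big") % self.n_bits

          def add(self, item):
              for p in self._positions(item):
                  self.bits[p // 8] |= 1 << (p % 8)

          def __contains__(self, item):
              return all(self.bits[p // 8] & (1 << (p % 8))
                         for p in self._positions(item))

      def untrusted_positions(read, spectrum, k=21):
          """Indices of k-mers in the read that are absent from the trusted spectrum."""
          return [i for i in range(len(read) - k + 1) if read[i:i + k] not in spectrum]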

  4. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
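
    One generic way to turn a mean and a standard deviation into a bit error rate is through the Gaussian tail (Q) function, as in standard eye-diagram analysis. The sketch below shows only that generic relation, not the paper's specific mapping from measured S-parameter noise; the numeric values are placeholders.

      from math import erfc, sqrt

      def q_function(x):
          """Upper-tail probability of the standard normal distribution."""
          return 0.5 * erfc(x / sqrt(2))

      def ber_two_level(mu0, sigma0, mu1, sigma1):
          """Eye-diagram style BER estimate for a binary signal with Gaussian noise
          on each level: BER = Q(Qfactor), Qfactor = (mu1 - mu0) / (sigma1 + sigma0)."""
          return q_function((mu1 - mu0) / (sigma1 + sigma0))

      print(ber_two_level(0.0, 0.05, 1.0, 0.05))   # Q(10): a vanishingly small BER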

  5. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upset (MBU) are also discussed.

  6. An Error Model for High-Time Resolution Satellite Precipitation Products

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Sapiano, M.; Adler, R. F.; Huffman, G. J.; Tian, Y.

    2013-12-01

    A new error scheme (PUSH: Precipitation Uncertainties for Satellite Hydrology) is presented to provide global estimates of errors for high time resolution, merged precipitation products. Errors are estimated for the widely used Tropical Rainfall Monitoring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 product at daily/0.25° resolution, using the high quality NOAA CPC-UNI gauge analysis as the benchmark. Each of the following four scenarios is explored and explicitly modeled: correct no-precipitation detection (both satellite and gauges detect no precipitation), missed precipitation (satellite records a zero, but it is incorrect), false alarm (satellite detects precipitation, but the reference is zero), and hit (both satellite and gauges detect precipitation). Results over Oklahoma show that the estimated probability distributions are able to reproduce the probability density functions of the benchmark precipitation, in terms of both expected values and quantiles. PUSH adequately captures missed precipitation and false detection uncertainties, reproduces the spatial pattern of the error, and shows a good agreement between observed and estimated errors. The resulting error estimates could be attached to the standard products for the scientific community to use. Investigation is underway to: 1) test the approach in different regions of the world; 2) verify the ability of the model to discern the systematic and random components of the error; 3) and evaluate the model performance when higher time-resolution satellite products (i.e., 3-hourly) are employed.
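
    The four detection scenarios that the error scheme models separately amount to a simple contingency classification of satellite versus gauge values. A minimal sketch follows; the threshold and array layout are assumptions, and the subsequent step of fitting an error distribution to each class is not shown.

      import numpy as np

      def classify_detection(sat, ref, threshold=0.0):
          """Label each satellite/reference precipitation pair with one of the
          four scenarios used by the error scheme."""
          sat_wet, ref_wet = sat > threshold, ref > threshold
          out = np.empty(sat.shape, dtype=object)
          out[~sat_wet & ~ref_wet] = "correct no-precipitation"
          out[~sat_wet & ref_wet] = "missed precipitation"
          out[sat_wet & ~ref_wet] = "false alarm"
          out[sat_wet & ref_wet] = "hit"
          return out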

  7. A Framework for Interpreting Type I Error Rates from a Product‐Term Model of Interaction Applied to Quantitative Traits

    PubMed Central

    Province, Michael A.

    2015-01-01

    ABSTRACT Adequate control of type I error rates will be necessary in the increasing genome‐wide search for interactive effects on complex traits. After observing unexpected variability in type I error rates from SNP‐by‐genome interaction scans, we sought to characterize this variability and test the ability of heteroskedasticity‐consistent standard errors to correct it. We performed 81 SNP‐by‐genome interaction scans using a product‐term model on quantitative traits in a sample of 1,053 unrelated European Americans from the NHLBI Family Heart Study, and additional scans on five simulated datasets. We found that the interaction‐term genomic inflation factor (lambda) showed inflation and deflation that varied with sample size and allele frequency; that similar lambda variation occurred in the absence of population substructure; and that lambda was strongly related to heteroskedasticity but not to minor non‐normality of phenotypes. Heteroskedasticity‐consistent standard errors narrowed the range of lambda, with HC3 outperforming HC0, but in individual scans tended to create new P‐value outliers related to sparse two‐locus genotype classes. We explain the lambda variation as a result of non‐independence of test statistics coupled with stochastic biases in test statistics due to a failure of the test to reach asymptotic properties. We propose that one way to interpret lambda is by comparison to an empirical distribution generated from data simulated under the null hypothesis and without population substructure. We further conclude that the interaction‐term lambda should not be used to adjust test statistics and that heteroskedasticity‐consistent standard errors come with limitations that may outweigh their benefits in this setting. PMID:26659945
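
    The interaction-term genomic inflation factor discussed here is conventionally computed as the median observed 1-df chi-square statistic divided by its expected median under the null. A small sketch follows, using SciPy and simulated uniform p-values in place of a real scan.

      import numpy as np
      from scipy import stats

      def genomic_inflation(p_values):
          """Genomic inflation factor lambda from a vector of p-values."""
          chi2_obs = stats.chi2.isf(np.asarray(p_values), df=1)   # p -> 1-df statistic
          return np.median(chi2_obs) / stats.chi2.ppf(0.5, df=1)  # expected median ~0.4549

      # under the null, p-values are uniform and lambda should be close to 1
      print(genomic_inflation(np.random.uniform(size=100_000)))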

  8. Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results

    SciTech Connect

    Clark, E.L.

    1994-07-01

    Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M{infinity}, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
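
    The Taylor series model referred to here is the usual first-order propagation of independent uncertainties, sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2. The sketch below evaluates the sensitivity coefficients numerically for an arbitrary ratio; the pressure-coefficient example and its numbers are illustrative assumptions, not values from the report.

      import numpy as np

      def propagate(f, x, sigma, eps=1e-6):
          """First-order (Taylor series) uncertainty propagation with central-difference
          sensitivity coefficients, assuming independent inputs."""
          x = np.asarray(x, dtype=float)
          grads = np.empty_like(x)
          for i in range(x.size):
              dx = np.zeros_like(x)
              dx[i] = eps * max(abs(x[i]), 1.0)
              grads[i] = (f(x + dx) - f(x - dx)) / (2 * dx[i])
          return np.sqrt(np.sum((grads * np.asarray(sigma, dtype=float)) ** 2))

      # example: pressure coefficient Cp = (p - p_inf) / q_inf
      cp = lambda v: (v[0] - v[1]) / v[2]
      print(propagate(cp, x=[101000.0, 98000.0, 5000.0], sigma=[50.0, 50.0, 25.0]))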

  9. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  10. High Strain Rate Rheology of Polymer Melts

    NASA Astrophysics Data System (ADS)

    Kelly, Adrian; Gough, Tim; Whiteside, Ben; Coates, Phil D.

    2009-07-01

    A modified servo electric injection moulding machine has been used in air-shot mode with capillary dies fitted at the nozzle to examine the rheology of a number of commercial polymers at wall shear strain rates of up to 10^7 s^-1. Shear and extensional flow properties were obtained through the use of long and orifice (close to zero land length) dies of the same diameter. A range of polyethylene, polypropylene and polystyrene melts have been characterized; good agreement was found between the three techniques used in the ranges where strain rates overlapped. Shear viscosity of the polymers studied was found to exhibit a plateau above approximately 1×10^6 s^-1. A relationship between the measured high strain rate rheological behaviour and molecular structure was noted, with polymers containing larger side groups reaching the rate independent plateau at lower strain rates than those with simpler structures.

  11. Orifice-induced pressure error studies in Langley 7- by 10-foot high-speed tunnel

    NASA Technical Reports Server (NTRS)

    Plentovich, E. B.; Gloss, B. B.

    1986-01-01

    For some time it has been known that the presence of a static pressure measuring hole will disturb the local flow field in such a way that the sensed static pressure will be in error. The results of previous studies aimed at studying the error induced by the pressure orifice were for relatively low Reynolds number flows. Because of the advent of high Reynolds number transonic wind tunnels, a study was undertaken to assess the magnitude of this error at higher Reynolds numbers than previously published and to study a possible method of eliminating this pressure error. This study was conducted in the Langley 7- by 10-Foot High-Speed Tunnel on a flat plate. The model was tested at Mach numbers from 0.40 to 0.72 and at Reynolds numbers from 7.7 × 10^6 to 11 × 10^6 per meter (2.3 × 10^6 to 3.4 × 10^6 per foot), respectively. The results indicated that as orifice size increased, the pressure error also increased but that a porous metal (sintered metal) plug inserted in an orifice could greatly reduce the pressure error induced by the orifice.

  12. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.

  13. Children with High Functioning Autism show increased prefrontal and temporal cortex activity during error monitoring

    PubMed Central

    Goldberg, Melissa C.; Spinelli, Simona; Joel, Suresh; Pekar, James J.; Denckla, Martha B.; Mostofsky, Stewart H.

    2010-01-01

    Evidence exists for deficits in error monitoring in autism. These deficits may be particularly important because they may contribute to excessive perseveration and repetitive behavior in autism. We examined the neural correlates of error monitoring using fMRI in 8–12-year-old children with high-functioning autism (HFA, n=11) and typically developing children (TD, n=15) during performance of a Go/No-Go task by comparing the neural correlates of commission errors versus correct response inhibition trials. Compared to TD children, children with HFA showed increased BOLD fMRI signal in the anterior medial prefrontal cortex (amPFC) and the left superior temporal gyrus (STempG) during commission error (versus correct inhibition) trials. A follow-up region-of-interest analysis also showed increased BOLD signal in the right insula in HFA compared to TD controls. Our findings of increased amPFC and STempG activity in HFA, together with the increased activity in the insula, suggest a greater attention towards the internally-driven emotional state associated with making an error in children with HFA. Since error monitoring occurs across different cognitive tasks throughout daily life, an increased emotional reaction to errors may have important consequences for early learning processes. PMID:21151713

  14. A high-strain-rate superplastic ceramic.

    PubMed

    Kim, B N; Hiraga, K; Morita, K; Sakka, Y

    2001-09-20

    High-strain-rate superplasticity describes the ability of a material to sustain large plastic deformation in tension at high strain rates of the order of 10^-2 to 10^-1 s^-1 and is of great technological interest for the shape-forming of engineering materials. High-strain-rate superplasticity has been observed in aluminium-based and magnesium-based alloys. But for ceramic materials, superplastic deformation has been restricted to low strain rates of the order of 10^-5 to 10^-4 s^-1 for most oxides and nitrides with the presence of intergranular cavities leading to premature failure. Here we show that a composite ceramic material consisting of tetragonal zirconium oxide, magnesium aluminate spinel and alpha-alumina phases exhibits superplasticity at strain rates up to 1 s^-1. The composite also exhibits a large tensile elongation, exceeding 1,050 per cent for a strain rate of 0.4 s^-1. The tensile flow behaviour and deformed microstructure of the material indicate that superplasticity is due to a combination of limited grain growth in the constitutive phases and the intervention of dislocation-induced plasticity in the zirconium oxide phase. We suggest that the present results hold promise for the application of shape-forming technologies to ceramic materials. PMID:11565026

  15. The Effect of Administrative Boundaries and Geocoding Error on Cancer Rates in California

    PubMed Central

    Goldberg, Daniel W.; Cockburn, Myles G.

    2012-01-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. PMID:22469490
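
    A minimal sketch of one common areal-unit interpolation technique, simple areal weighting, is given below for illustration: each ZIP code's case count is split among counties in proportion to the fraction of the ZIP's area falling in each county. The abstract does not name the four specific methods compared, so this is a generic example only, and the data structures are assumptions.

      def areal_weighting(zip_counts, overlap_fraction):
          """Allocate ZIP-level case counts to counties by area overlap.

          zip_counts:       {zip_code: case_count}
          overlap_fraction: {(zip_code, county): fraction of the ZIP's area in that county}
          """
          county_counts = {}
          for (zip_code, county), frac in overlap_fraction.items():
              county_counts[county] = (county_counts.get(county, 0.0)
                                       + zip_counts.get(zip_code, 0) * frac)
          return county_counts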

  16. High Bit Rate Experiments Over ACTS

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A.; Gary, J. Patrick; Edelsen, Burt; Helm, Neil; Cohen, Judith; Shopbell, Patrick; Mechoso, C. Roberto; Chung-Chun; Farrara, M.; Spahr, Joseph

    1996-01-01

    This paper describes two high data rate experiments that are being developed for the gigabit NASA Advanced Communications Technology Satellite (ACTS). The first is a telescience experiment that remotely acquires image data at the Keck telescope from the Caltech campus. The second is a distributed global climate application that is run between two supercomputer centers interconnected by ACTS. The implementation approach for each is described along with the expected results. Also, the ACTS high data rate (HDR) ground station is described in detail.

  17. High Rate for Type IC Supernovae

    SciTech Connect

    Muller, R.A.; Marvin-Newberg, H.J.; Pennypacker, Carl R.; Perlmutter, S.; Sasseen, T.P.; Smith, C.K.

    1991-09-01

    Using an automated telescope we have detected 20 supernovae in carefully documented observations of nearby galaxies. The supernova rates for late spiral (Sbc, Sc, Scd, and Sd) galaxies, normalized to a blue luminosity of 10{sup 10} L{sub Bsun}, are 0.4 h{sup 2}, 1.6 h{sup 2}, and 1.1 h{sup 2} per 100 years for SNe type Ia, Ic, and II. The rate for type Ic supernovae is significantly higher than found in previous surveys. The rates are not corrected for detection inefficiencies, and do not take into account the indications that the Ic supernovae are fainter on the average than the previous estimates; therefore the true rates are probably higher. The rates are not strongly dependent on the galaxy inclination, in contradiction to previous compilations. If the Milky Way is a late spiral, then the rate of Galactic supernovae is greater than 1 per 30 {+-} 7 years, assuming h = 0.75. This high rate has encouraging consequences for future neutrino and gravitational wave observatories.

  18. Approximation and error estimation in high dimensional space for stochastic collocation methods on arbitrary sparse samples

    SciTech Connect

    Archibald, Richard K; Deiterding, Ralf; Hauck, Cory D; Jakeman, John D; Xiu, Dongbin

    2012-01-01

    We have developed a fast method that can capture piecewise smooth functions in high dimensions with high order and low computational cost. This method can be used for both approximation and error estimation of stochastic simulations where the computations can either be guided or come from a legacy database.

  19. Baltimore District Tackles High Suspension Rates

    ERIC Educational Resources Information Center

    Maxwell, Lesli A.

    2007-01-01

    This article reports on how the Baltimore District tackles its high suspension rates. Driven by an increasing belief that zero-tolerance disciplinary policies are ineffective, more educators are embracing strategies that do not exclude misbehaving students from school for offenses such as insubordination, disrespect, cutting class, tardiness, and…

  20. Understanding High School Graduation Rates in Illinois

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  1. Understanding High School Graduation Rates in Delaware

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  2. Assessing XCTD Fall Rate Errors using Concurrent XCTD and CTD Profiles in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Millar, J.; Gille, S. T.; Sprintall, J.; Frants, M.

    2010-12-01

    Refinements in the fall rate equation for XCTDs are not as well understood as those for XBTs, due in part to the paucity of concurrent and collocated XCTD and CTD profiles. During February and March 2010, the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES) conducted 31 collocated 1000-meter XCTD and CTD casts in the Drake Passage. These XCTD/CTD profile pairs are closely matched in space and time, with a mean distance between casts of 1.19 km and a mean lag time of 39 minutes. The profile pairs are well suited to address the XCTD fall rate problem specifically in higher latitude waters, where existing fall rate corrections have rarely been assessed. Many of these XCTD/CTD profile pairs reveal an observable depth offset in measurements of both temperature and conductivity. Here, the nature and extent of this depth offset is evaluated.

  3. Compensating inherent linear move water application errors using a variable rate irrigation system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Continuous move irrigation systems such as linear move and center pivot irrigate unevenly when applying conventional uniform water rates due to the towers/motors stop/advance pattern. The effect of the cart movement pattern on linear move water application is larger on the first two spans which intr...

  4. An approach for reducing the error rate in automated lung segmentation.

    PubMed

    Gill, Gurman; Beichel, Reinhard R

    2016-09-01

    Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855±0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
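
    The Dice coefficient used to score the fused segmentations, and the derived failure rate at a chosen accuracy level, can be computed as in the short sketch below. The 0.97 cut-off follows the abstract; the array layout and function names are illustrative assumptions.

      import numpy as np

      def dice(mask_a, mask_b):
          """Dice similarity coefficient between two binary segmentation masks."""
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      def failure_rate(dice_scores, cutoff=0.97):
          """Fraction of cases whose Dice score falls below the required accuracy."""
          scores = np.asarray(dice_scores, dtype=float)
          return float((scores < cutoff).mean())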

  5. Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction

    PubMed Central

    Laehnemann, David; Borkhardt, Arndt

    2016-01-01

    Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159

  6. High rate vacuum deposited silicon layers

    NASA Astrophysics Data System (ADS)

    Kipperman, A. H. M.; van Zolingen, R. J. C.

    1982-08-01

    Silicon layers were deposited in vacuum at high rates (up to 50 microns/min) on aluminum-, silicon oxide-, and silicon nitride-coated stainless steel, pyrex, and silicon substrates. The morphological, crystallographic, and electrical properties of the layers were studied in as-grown and annealed conditions. Layers as-grown on aluminum-coated substrates had unsatisfactory electrical properties and too high an aluminum concentration to be acceptable for solar cells. Thermal annealing of layers on SiO2- and on Si3N4-coated substrates markedly improved their crystallographic and electrical properties. In all cases, silicon layers deposited at about 550 C showed a columnar structure which, after prolonged etching, was found to be composed of fibrils of about 0.3 microns in diameter extending over the entire thickness of the layer. It is suggested that further tests should be carried out at a substrate temperature of about 800 C maintaining the high deposition rates.

  7. High strain rate behaviour of polypropylene microfoams

    NASA Astrophysics Data System (ADS)

    Gómez-del Río, T.; Garrido, M. A.; Rodríguez, J.; Arencón, D.; Martínez, A. B.

    2012-08-01

    Microcellular materials such as polypropylene foams are often used in protective applications and passive safety for packaging (electronic components, aeronautical structures, food, etc.) or personal safety (helmets, knee-pads, etc.). In such applications the foams which are used are often designed to absorb the maximum energy and are generally subjected to severe loadings involving high strain rates. The manufacture process to obtain polymeric microcellular foams is based on the polymer saturation with a supercritical gas, at high temperature and pressure. This method presents several advantages over the conventional injection moulding techniques which make it industrially feasible. However, the effect of processing conditions such as blowing agent, concentration and microfoaming time and/or temperature on the microstructure of the resulting microcellular polymer (density, cell size and geometry) has not yet been established. The compressive mechanical behaviour of several microcellular polypropylene foams has been investigated over a wide range of strain rates (0.001 to 3000 s^-1) in order to show the effects of the processing parameters and strain rate on the mechanical properties. High strain rate tests were performed using a Split Hopkinson Pressure Bar apparatus (SHPB). Polypropylene and polyethylene-ethylene block copolymer foams of various densities were considered.

  8. Trends and weekly and seasonal cycles in the rate of errors in the clinical management of hospitalized patients.

    PubMed

    Buckley, David; Bulger, David

    2012-08-01

    Studies on the rate of adverse events in hospitalized patients seldom examine temporal patterns. This study presents evidence of both weekly and annual cycles. The study is based on a large and diverse data set, with nearly 5 yrs of data from a voluntary staff-incident reporting system of a large public health care provider in rural southeastern Australia. The data of 63 health care facilities were included, ranging from large non-metropolitan hospitals to small community and aged health care facilities. Poisson regression incorporating an observation-driven autoregressive effect using the GLARMA framework was used to explain daily error counts with respect to long-term trend and weekly and annual effects, with procedural volume as an offset. The annual pattern was modeled using a first-order sinusoidal effect. The rate of errors reported demonstrated an increasing annual trend of 13.4% (95% confidence interval [CI] 10.6% to 16.3%); however, this trend was only significant for errors of minor or no harm to the patient. A strong "weekend effect" was observed. The incident rate ratio for the weekend versus weekdays was 2.74 (95% CI 2.55 to 2.93). The weekly pattern was consistent for incidents of all levels of severity, but it was more pronounced for less severe incidents. There was an annual cycle in the rate of incidents, the number of incidents peaking in October, on the 282nd day of the year (spring in Australia), with an incident rate ratio 1.09 (95% CI 1.05 to 1.14) compared to the annual mean. There was no so-called "killing season" or "July effect," as the peak in incident rate was not related to the commencement of work by new medical school graduates. The major finding of this study is the rate of adverse events is greater on weekends and during spring. The annual pattern appears to be unrelated to the commencement of new graduates and potentially results from seasonal variation in the case mix of patients or the health of the medical workforce that alters
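
    The fixed-effects part of the model described here, a long-term trend, day-of-week effects, and a first-order annual sinusoid with procedural volume as an offset, can be sketched with a standard Poisson GLM as below. This is only an approximation of the study's approach: the observation-driven autoregressive (GLARMA) component is omitted, and the column names are assumptions.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def fit_incident_model(df):
          """df is assumed to have columns: date (datetime), count, volume."""
          t = (df["date"] - df["date"].min()).dt.days.to_numpy()
          doy = df["date"].dt.dayofyear.to_numpy()
          X = pd.DataFrame({
              "trend": t / 365.25,                                 # long-term trend (per year)
              "sin_annual": np.sin(2 * np.pi * doy / 365.25),      # first-order annual cycle
              "cos_annual": np.cos(2 * np.pi * doy / 365.25),
          }, index=df.index)
          dow = pd.get_dummies(df["date"].dt.dayofweek, prefix="dow",
                               drop_first=True, dtype=float)       # weekly pattern
          X = sm.add_constant(pd.concat([X, dow], axis=1))
          model = sm.GLM(df["count"], X, family=sm.families.Poisson(),
                         offset=np.log(df["volume"]))              # volume as exposure
          return model.fit()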

  9. A survey of computational methods and error rate estimation procedures for peptide and protein identification in shotgun proteomics

    PubMed Central

    Nesvizhskii, Alexey I.

    2010-01-01

    This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide to spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. This review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from peptide to protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
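
    Among the global error-rate procedures surveyed, the simplest is the target-decoy estimate of the false discovery rate at a score threshold. A minimal sketch follows; this is one common estimator rather than the only one discussed in the review, and refinements such as the +1 correction or peptide-level aggregation are ignored.

      def decoy_fdr(scores, is_decoy, threshold):
          """Target-decoy FDR among peptide-spectrum matches scoring >= threshold:
          FDR is estimated as (# decoy hits) / (# target hits)."""
          targets = sum(1 for s, d in zip(scores, is_decoy) if s >= threshold and not d)
          decoys = sum(1 for s, d in zip(scores, is_decoy) if s >= threshold and d)
          return decoys / targets if targets else 0.0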

  10. Reliability of perceived neighborhood conditions and the effects of measurement error on self-rated health across urban and rural neighborhoods

    PubMed Central

    Pruitt, Sandi L.; Jeffe, Donna B.; Yan, Yan; Schootman, Mario

    2011-01-01

    Background Limited psychometric research has examined the reliability of self-reported measures of neighborhood conditions, the effect of measurement error on associations between neighborhood conditions and health, and potential differences in the reliabilities between neighborhood strata (urban vs. rural and low vs. high poverty). We assessed overall and stratified reliability of self-reported perceived neighborhood conditions using 5 scales (Social and Physical Disorder, Social Control, Social Cohesion, Fear) and 4 single items (Multidimensional Neighboring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Methods Using random-digit dialing, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2–3 weeks apart. We assessed test-retest (intraclass correlation coefficients [ICC]/weighted kappa [k]) and internal consistency reliability (Cronbach’s α). Differences in reliability across neighborhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. Results All measures demonstrated satisfactory internal consistency (α≥.70) and either moderate (ICC/k=.41–.60) or substantial (ICC/k=.61–.80) test-retest reliability in the full sample. Internal consistency did not differ by neighborhood strata. Test-retest reliability was significantly lower among rural (vs. urban) residents for 2 scales (Social Control, Physical Disorder) and 2 Multidimensional Neighboring items; test-retest reliability was higher for Physical Disorder and lower for 1 Multidimensional Neighboring item among the high (vs. low) poverty strata. After measurement error correction, the magnitude of associations between neighborhood conditions and self-rated health was larger, particularly in the rural population. Conclusion Research is needed to develop and test reliable measures of perceived neighborhood conditions relevant to the health

  11. The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals

    PubMed Central

    Westbrook, Johanna I; Baysari, Melissa T; Li, Ling; Burke, Rosemary; Richardson, Katrina L; Day, Richard O

    2013-01-01

    Objectives To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS). To determine if the rate of system-related prescribing errors is greater than the rate of errors prevented. Methods Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS. System related errors were classified by manifestation (eg, wrong dose), mechanism, and severity. A mechanism typology comprised errors made: selecting items from drop-down menus; constructing orders; editing orders; or failing to complete new e-PS tasks. Proportions and rates of errors by manifestation, mechanism, and e-PS were calculated. Results 42.4% (n=493) of 1164 prescribing errors were system-related (78/100 admissions). This result did not differ by e-PS (MedChart 42.6% (95% CI 39.1 to 46.1); Cerner 41.9% (37.1 to 46.8)). For 13.4% (n=66) of system-related errors there was evidence that the error was detected prior to study audit. 27.4% (n=135) of system-related errors manifested as timing errors and 22.5% (n=111) wrong drug strength errors. Selection errors accounted for 43.4% (34.2/100 admissions), editing errors 21.1% (16.5/100 admissions), and failure to complete new e-PS tasks 32.0% (32.0/100 admissions). MedChart generated more selection errors (OR=4.17; p=0.00002) but fewer new task failures (OR=0.37; p=0.003) relative to the Cerner e-PS. The two systems prevented significantly more errors than they generated (220/100 admissions (95% CI 180 to 261) vs 78 (95% CI 66 to 91)). Conclusions System-related errors are frequent, yet few are detected. e-PS require new tasks of prescribers, creating additional cognitive load and error opportunities. Dual classification, by manifestation and mechanism, allowed identification of design features which increase risk and potential solutions. e-PS designs with fewer drop-down menu selections may reduce error risk. PMID:23721982

  12. Highly stable high-rate discriminator for nuclear counting

    NASA Technical Reports Server (NTRS)

    English, J. J.; Howard, R. H.; Rudnick, S. J.

    1969-01-01

    Pulse amplitude discriminator is specially designed for nuclear counting applications. At very high rates, the threshold is stable. The output-pulse width and the dead time change negligibly. The unit incorporates a provision for automatic dead-time correction.

  13. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build intuitively and systematically an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  14. Optical system error analysis and calibration method of high-accuracy star trackers.

    PubMed

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build intuitively and systematically an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  15. High temperature electrochemical corrosion rate probes

    SciTech Connect

    Bullard, Sophie J.; Covino, Bernard S., Jr.; Holcomb, Gordon R.; Ziomek-Moroz, M.

    2005-09-01

    Corrosion occurs in the high temperature sections of energy production plants due to a number of factors: ash deposition, coal composition, thermal gradients, and low NOx conditions, among others. Electrochemical corrosion rate (ECR) probes have been shown to operate in high temperature gaseous environments that are similar to those found in fossil fuel combustors. ECR probes are rarely used in energy production plants at the present time, but if they were more fully understood, corrosion could become a process variable at the control of plant operators. Research is being conducted to understand the nature of these probes. Factors being considered are values selected for the Stern-Geary constant, the effect of internal corrosion, and the presence of conductive corrosion scales and ash deposits. The nature of ECR probes will be explored in a number of different atmospheres and with different electrolytes (ash and corrosion product). Corrosion rates measured using an electrochemical multi-technique capabilities instrument will be compared to those measured using the linear polarization resistance (LPR) technique. In future experiments, electrochemical corrosion rates will be compared to penetration corrosion rates determined using optical profilometry measurements.
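
    The linear polarization resistance technique mentioned for comparison converts a measured polarization resistance into a corrosion current via the Stern-Geary constant and then into a penetration rate. The sketch below follows the usual ASTM G102-style conversion; the Tafel slopes, equivalent weight, and density shown are illustrative values for iron, not parameters from this work.

      def stern_geary_constant(beta_a, beta_c):
          """Stern-Geary constant B (volts) from anodic/cathodic Tafel slopes (V/decade)."""
          return beta_a * beta_c / (2.303 * (beta_a + beta_c))

      def corrosion_rate_mm_per_year(r_p_ohm_cm2, beta_a=0.12, beta_c=0.12,
                                     equiv_weight=27.9, density=7.87):
          """LPR estimate: i_corr = B / Rp, then converted to a penetration rate."""
          i_corr_ua_cm2 = stern_geary_constant(beta_a, beta_c) / r_p_ohm_cm2 * 1e6
          return 3.27e-3 * i_corr_ua_cm2 * equiv_weight / density   # mm per year

      print(corrosion_rate_mm_per_year(r_p_ohm_cm2=5000.0))   # ~0.06 mm/yr for these inputs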

  16. High strain rate damage of Carrara marble

    NASA Astrophysics Data System (ADS)

    Doan, Mai-Linh; Billi, Andrea

    2011-10-01

    Several cases of rock pulverization have been observed along major active faults in granite and other crystalline rocks. They have been interpreted as due to coseismic pervasive microfracturing. In contrast, little is known about pulverization in carbonates. With the aim of understanding carbonate pulverization, we investigate the high strain rate (c. 100 s^-1) behavior of unconfined Carrara marble through a set of experiments with a Split Hopkinson Pressure Bar. Three final states were observed: (1) at low strain, the sample is kept intact, without apparent macrofractures; (2) failure is localized along a few fractures once stress is larger than 100 MPa, corresponding to a strain of 0.65%; (3) above 1.3% strain, the sample is pulverized. Contrary to granite, the transition to pulverization is controlled by strain rather than strain rate. Yet, at low strain rate, a sample from the same marble displayed only a few fractures. This suggests that the experiments were done above the strain rate transition to pulverization. Marble seems easier to pulverize than granite. This creates a paradox: finely pulverized rocks should be prevalent along any high strain zone near faults through carbonates, but this is not what is observed. A few alternatives are proposed to solve this paradox.

  17. Slow-growing cells within isogenic populations have increased RNA polymerase error rates and DNA damage.

    PubMed

    van Dijk, David; Dhar, Riddhiman; Missarova, Alsu M; Espinar, Lorena; Blevins, William R; Lehner, Ben; Carey, Lucas B

    2015-01-01

    Isogenic cells show a large degree of variability in growth rate, even when cultured in the same environment. Such cell-to-cell variability in growth can alter sensitivity to antibiotics, chemotherapy and environmental stress. To characterize transcriptional differences associated with this variability, we have developed a method--FitFlow--that enables the sorting of subpopulations by growth rate. The slow-growing subpopulation shows a transcriptional stress response, but, more surprisingly, these cells have reduced RNA polymerase fidelity and exhibit a DNA damage response. As DNA damage is often caused by oxidative stress, we test the addition of an antioxidant, and find that it reduces the size of the slow-growing population. More generally, we find a significantly altered transcriptome in the slow-growing subpopulation that only partially resembles that of cells growing slowly due to environmental and culture conditions. Slow-growing cells upregulate transposons and express more chromosomal, viral and plasmid-borne transcripts, and thus explore a larger genotypic--and so phenotypic--space. PMID:26268986

  18. Slow-growing cells within isogenic populations have increased RNA polymerase error rates and DNA damage

    PubMed Central

    van Dijk, David; Dhar, Riddhiman; Missarova, Alsu M.; Espinar, Lorena; Blevins, William R.; Lehner, Ben; Carey, Lucas B.

    2015-01-01

    Isogenic cells show a large degree of variability in growth rate, even when cultured in the same environment. Such cell-to-cell variability in growth can alter sensitivity to antibiotics, chemotherapy and environmental stress. To characterize transcriptional differences associated with this variability, we have developed a method—FitFlow—that enables the sorting of subpopulations by growth rate. The slow-growing subpopulation shows a transcriptional stress response, but, more surprisingly, these cells have reduced RNA polymerase fidelity and exhibit a DNA damage response. As DNA damage is often caused by oxidative stress, we test the addition of an antioxidant, and find that it reduces the size of the slow-growing population. More generally, we find a significantly altered transcriptome in the slow-growing subpopulation that only partially resembles that of cells growing slowly due to environmental and culture conditions. Slow-growing cells upregulate transposons and express more chromosomal, viral and plasmid-borne transcripts, and thus explore a larger genotypic—and so phenotypic—space. PMID:26268986

  19. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems is different from the low data rate systems. Three simulations were built during the development phase of Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling. The first was a model using SIMSCRIPT based upon the determination and processing of each event at each node. The second simulation was developed in C based upon isolating the distinct object that can be identified as the ring, the message, the node, and the set of critical events. The third model further identified the basic network functionality by creating a single object, the node which includes the set of critical events which occur at the node. The ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. It should be stated that the language used was mainly selected by the model developer because of his past familiarity. Further the models were not built with the intent to compare either structure or language but because the complexity of the problem and initial results contained obvious errors, so alternative models were built to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand modeling complexities. Each model is described along with its features and problems. The models are compared and concluding observations and remarks are presented.

  20. High Rate Data Delivery Thrust Area

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul

    2000-01-01

    In this paper, a brief description of the high rate data delivery (HRDD) thrust area, its focus and current technical activities being carried out by NASA centers including JPL, academia and industry under this program is provided. The processes and methods being used to achieve active participation in this program are presented. The developments in space communication technologies, which will shape NASA enterprise missions in the 21st century, are highlighted.

  1. HIGH ENERGY RATE EXTRUSION OF URANIUM

    DOEpatents

    Lewis, L.

    1963-07-23

    A method of extruding uranium at a high energy rate is described. Conditions during the extrusion are such that the temperature of the metal during extrusion reaches a point above the normal alpha to beta transition, but the metal nevertheless remains in the alpha phase in accordance with the Clausius-Clapeyron equation. Upon exiting from the die, the metal automatically enters the beta phase, after which the metal is permitted to cool. (AEC)

  2. Reserve, flowing electrolyte, high rate lithium battery

    NASA Astrophysics Data System (ADS)

    Puskar, M.; Harris, P.

    Flowing electrolyte Li/SOCl2 tests in single cell and multicell bipolar fixtures have been conducted, and measurements are presented for electrolyte flow rates, inlet and outlet temperatures, fixture temperatures at several points, and the pressure drop across the fixture. Reserve lithium batteries with flowing thionyl-chloride electrolytes are found to be capable of very high energy densities with usable voltages and capacities at current densities as high as 500 mA/sq cm. At this current density, a battery stack 10 inches in diameter is shown to produce over 60 kW of power while maintaining a safe operating temperature.

  3. Optimization of coplanar high rate supercapacitors

    NASA Astrophysics Data System (ADS)

    Sun, Leimeng; Wang, Xinghui; Liu, Wenwen; Zhang, Kang; Zou, Jianping; Zhang, Qing

    2016-05-01

    In this work, we describe two efficient methods to enhance the electrochemical performance of high-rate coplanar micro-supercapacitors (MSCs). By introducing MnO2 nanosheets on a vertically aligned carbon nanotube (VACNT) array, the areal capacitance and volumetric energy density are improved tremendously, increasing from 0.011 mF cm^-2 and 0.017 mWh cm^-3 to 0.479 mF cm^-2 and 0.426 mWh cm^-3, respectively, at an ultrahigh scan rate of 50,000 mV s^-1. Subsequently, by fabricating an asymmetric MSC, the energy density could be increased to 0.167 mWh cm^-3 as well. Moreover, as a result of applying MnO2/VACNT as the positive electrode and VACNT as the negative electrode, the cell operating voltage in aqueous electrolyte could be increased to as high as 2.0 V. Our advanced planar MSCs could operate well at different high scan rates and offer a promising integration potential with other in-plane devices on the same substrate.

  5. Civilian residential fire fatality rates: Six high-rate states versus six low-rate states

    NASA Astrophysics Data System (ADS)

    Hall, J. R., Jr.; Helzer, S. G.

    1983-08-01

    Results of an analysis of 1,600 fire fatalities occurring in six states with high fire-death rates and six states with low fire-death rates are presented. Reasons for the differences in rates are explored, with special attention to victim age, sex, race, and condition at time of ignition. Fire cause patterns are touched on only lightly but are addressed more extensively in the companion piece to this report, "Rural and Non-Rural Civilian Residential Fire Fatalities in Twelve States', NBSIR 82-2519.

  6. Senior High School Students' Errors on the Use of Relative Words

    ERIC Educational Resources Information Center

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  7. Error Analysis in High School Mathematics. Conceived as Information-Processing Pathology.

    ERIC Educational Resources Information Center

    Davis, Robert B.

    This paper, presented at the 1979 meeting of the American Educational Research Association (AERA), investigates student errors in high school mathematics. A conceptual framework of hypothetical information-handling processes such as procedures, frames, retrieval from memory, visually-moderated sequences (VMS sequences), the integrated sequence,…

  8. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    The various methods of high bit transition density encoding are presented, their relative performance is compared in so far as error propagation characteristics, transition properties and system constraints are concerned. A computer simulation of the system using the specific PN code recommended, is included.

  9. Movement error rate for evaluation of machine learning methods for sEMG-based hand movement classification.

    PubMed

    Gijsberts, Arjan; Atzori, Manfredo; Castellini, Claudio; Muller, Henning; Caputo, Barbara

    2014-07-01

    There has been increasing interest in applying learning algorithms to improve the dexterity of myoelectric prostheses. In this work, we present a large-scale benchmark evaluation on the second iteration of the publicly released NinaPro database, which contains surface electromyography data for 6 DOF force activations as well as for 40 discrete hand movements. The evaluation involves a modern kernel method and compares performance of three feature representations and three kernel functions. Both the force regression and movement classification problems can be learned successfully when using a nonlinear kernel function, while the exp-χ2 kernel outperforms the more popular radial basis function kernel in all cases. Furthermore, combining surface electromyography and accelerometry in a multimodal classifier results in significant increases in accuracy as compared to when either modality is used individually. Since window-based classification accuracy should not be considered in isolation to estimate prosthetic controllability, we also provide results in terms of classification mistakes and prediction delay. To this extent, we propose the movement error rate as an alternative to the standard window-based accuracy. This error rate is insensitive to prediction delays and it allows us therefore to quantify mistakes and delays as independent performance characteristics. This type of analysis confirms that the inclusion of accelerometry is superior, as it results in fewer mistakes while at the same time reducing prediction delay. PMID:24760932
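
    The exp-χ2 kernel reported to outperform the radial basis function kernel here is the exponential chi-square kernel commonly used with histogram-like, non-negative features. A minimal sketch of that kernel function follows; the gamma value and the epsilon guard are assumptions.

      import numpy as np

      def exp_chi2_kernel(x, y, gamma=1.0, eps=1e-12):
          """Exponential chi-square kernel for non-negative feature vectors:
          k(x, y) = exp(-gamma * sum_i (x_i - y_i)^2 / (x_i + y_i))."""
          x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
          chi2 = np.sum((x - y) ** 2 / (x + y + eps))
          return np.exp(-gamma * chi2)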

  10. Influence of nonhomogeneous earth on the rms phase error and beam-pointing errors of large, sparse high-frequency receiving arrays

    NASA Astrophysics Data System (ADS)

    Weiner, M. M.

    1994-01-01

    The performance of ground-based high-frequency (HF) receiving arrays is reduced when the array elements have electrically small ground planes. The array rms phase error and beam-pointing errors, caused by multipath rays reflected from a nonhomogeneous Earth, are determined for a sparse array of elements modeled as Hertzian dipoles in close proximity to Earth with no ground planes. Numerical results are presented for cases of randomly distributed and systematically distributed Earth nonhomogeneities, where one-half of the vertically polarized array elements are located in proximity to one type of Earth and the remaining half are located in proximity to a second type. The maximum rms phase errors, for the cases examined, are 18 deg and 9 deg for randomly distributed and systematically distributed nonhomogeneities, respectively. The maximum beam-pointing errors are 0 and 0.3 beamwidths for randomly distributed and systematically distributed nonhomogeneities, respectively.
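
    As a purely illustrative definition of the reported metric (the paper derives the per-element phase perturbations from an Earth-reflection multipath model, which is not reproduced here), the array rms phase error can be computed from the spread of per-element phase errors:

        import numpy as np

        # Illustrative rms phase error across array elements, in degrees.
        # Assumption: rms is taken about the mean phase perturbation over the elements.
        def rms_phase_error_deg(phase_errors_deg):
            phases = np.asarray(phase_errors_deg, dtype=float)
            return float(np.sqrt(np.mean((phases - phases.mean()) ** 2)))

        # Hypothetical example: half the elements over one Earth type, half over another.
        phases = np.array([0.0] * 8 + [18.0] * 8)
        print(rms_phase_error_deg(phases))  # 9.0 deg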

  11. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  12. Reducing systematic centroid errors induced by fiber optic faceplates in intensified high-accuracy star trackers.

    PubMed

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  13. Evaluation of soft error rates using nuclear probes in bulk and SOI SRAMs with a technology node of 90 nm

    NASA Astrophysics Data System (ADS)

    Abo, Satoshi; Masuda, Naoyuki; Wakaya, Fujio; Onoda, Shinobu; Hirao, Toshio; Ohshima, Takeshi; Iwamatsu, Toshiaki; Takai, Mikio

    2010-06-01

    The difference in soft error rates (SERs) between conventional bulk Si and silicon-on-insulator (SOI) static random access memories (SRAMs) at the 90 nm technology node was investigated using helium ion probes with energies from 0.8 to 6.0 MeV at a dose of 75 ions/μm². The SERs in the SOI SRAM were also investigated with oxygen ion probes at energies from 9.0 to 18.0 MeV and doses of 0.14-0.76 ions/μm². Soft errors in the bulk and SOI SRAMs appeared under helium ion irradiation at energies of 1.95 and 2.10 MeV and above, respectively. The SER in the bulk SRAM saturated at ion energies of 2.5 MeV and above. The SER in the SOI SRAM peaked under helium ion irradiation at 2.5 MeV and decreased sharply at higher energies; helium ions in this energy range generate the maximum amount of excess charge carriers in the SOI body. The soft errors caused by helium ions were induced by a floating-body effect arising from the excess charge carriers generated in the channel regions. In the SOI SRAM, soft errors occurred under oxygen ion irradiation at energies of 10.5 MeV and above. The SER gradually increased from 10.5 to 13.5 MeV and saturated at 18 MeV, tracking the gradual increase in the amount of charge carriers induced by the oxygen ions over this energy range. Computer calculations indicated that oxygen ions with energies above 13.0 MeV generate more excess charge than the critical charge of the 90 nm node SOI SRAM with the designed over-layer thickness. The soft errors caused by oxygen ions at energies of 12.5 MeV and below were induced by a floating-body effect due to excess charge carriers generated in the channel regions, whereas those at 13.0 MeV and above were induced by both the floating-body effect and the generated excess carriers. The difference in the threshold oxygen ion energy between the experiment and the computer calculation might

  14. High rate pulse processing algorithms for microcalorimeters

    SciTech Connect

    Rabin, Michael; Hoover, Andrew S; Bacrania, Mnesh K; Tan, Hui; Breus, Dimitry; Henning, Wolfgang; Sabourov, Konstantin; Collins, Jeff; Warburton, William K; Dorise, Bertrand; Ullom, Joel N

    2009-01-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Consequently, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. Large arrays, however, require as much pulse processing as possible to be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for processing. In this paper, the authors present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in the readout electronics they are also currently developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses, thus achieving much higher output count rates than existing algorithms currently achieve. Details of these algorithms are presented, and their performance is compared to that of the 'optimal filter', the dominant pulse processing algorithm in the cryogenic-detector community.
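
    For context on the 'optimal filter' baseline mentioned above, the sketch below shows a generic frequency-domain matched-filter amplitude estimate of the kind used in the cryogenic-detector community; it is a textbook illustration, not the authors' algorithm, and it assumes a single, non-overlapping pulse with a known template and noise spectrum.

        import numpy as np

        # Generic optimal-filter amplitude estimate:
        #   A = sum_f Re[conj(S_f) * D_f / N_f] / sum_f |S_f|^2 / N_f
        # where S is the pulse-template spectrum, D the data spectrum, N the noise power spectrum.
        def optimal_filter_amplitude(trace, template, noise_psd):
            D = np.fft.rfft(trace)
            S = np.fft.rfft(template)
            num = np.sum(np.real(np.conj(S) * D) / noise_psd)
            den = np.sum(np.abs(S) ** 2 / noise_psd)
            return num / den

        n = 1024
        t = np.arange(n)
        template = np.exp(-t / 200.0) - np.exp(-t / 20.0)   # toy pulse: fast rise, slow decay
        trace = 3.0 * template + 0.01 * np.random.randn(n)  # true amplitude 3.0 plus noise
        noise_psd = np.ones(n // 2 + 1)                     # flat (white) noise, an assumption
        print(optimal_filter_amplitude(trace, template, noise_psd))  # ~3.0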

  15. Effects of Two Commercial Electronic Prescribing Systems on Prescribing Error Rates in Hospital In-Patients: A Before and After Study

    PubMed Central

    Westbrook, Johanna I.; Reckmann, Margaret; Li, Ling; Runciman, William B.; Burke, Rosemary; Lo, Connie; Baysari, Melissa T.; Braithwaite, Jeffrey; Day, Richard O.

    2012-01-01

    Background Considerable investments are being made in commercial electronic prescribing systems (e-prescribing) in many countries. Few studies have measured or evaluated their effectiveness at reducing prescribing error rates, and interactions between system design and errors are not well understood, despite increasing concerns regarding new errors associated with system use. This study evaluated the effectiveness of two commercial e-prescribing systems in reducing prescribing error rates and their propensities for introducing new types of error. Methods and Results We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, those ≥3 were categorised as serious) by hospital and study period; and rates and categories of postintervention “system-related” errors (where system functionality or design contributed to the error) were calculated. Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards (respectively reductions of 66.1% [95% CI 53.9%–78.3%]; 57.5% [33.8%–81.2%]; and 60.5% [48.5%–72.4%]). The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission (95% CI 5.23–7.28) to 2.12 (95% CI 1.71–2.54; p<0.0001) and at Hospital B from 3.62 (95% CI 3.30–3.93) to 1.46 (95% CI 1.20–1.73; p<0
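
    The hospital-level decline quoted above corresponds to roughly a two-thirds relative reduction, as the simple arithmetic sketch below shows; the published per-ward percentages and confidence intervals come from the authors' statistical models rather than from this calculation.

        # Relative reduction in prescribing errors per admission, Hospital A figures quoted above.
        baseline_rate = 6.25   # errors per admission before e-prescribing
        post_rate = 2.12       # errors per admission after e-prescribing
        reduction = 100.0 * (baseline_rate - post_rate) / baseline_rate
        print(f"Hospital A relative reduction: {reduction:.1f}%")  # about 66%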

  16. High Strain Rate Behavior of Polyurea Compositions

    NASA Astrophysics Data System (ADS)

    Joshi, Vasant; Milby, Christopher

    2011-06-01

    Polyurea has been gaining importance in recent years due to its impact-resistance properties. The composition of this viscoelastic material must be tailored for each specific use, so it is important to study how variations in composition affect its properties. The high-strain-rate response of three polyurea compositions with varying molecular weights has been investigated using a Split Hopkinson Pressure Bar arrangement equipped with titanium bars. The polyurea compositions were synthesized from polyamines (Versalink, Air Products) with a multi-functional isocyanate (Isonate 143L, Dow Chemical). Amines with molecular weights of 1000, 650, and a blend of 250/1000 were used in the current investigation. The materials have been tested up to strain rates of 6000/s. Results from these tests show interesting trends in the high-rate behavior. While the higher-molecular-weight compositions show lower yield, they do not show dominant hardening behavior. The 250/1000 blend, on the other hand, shows higher load-bearing capability but weaker strain-hardening effects than the 650 and 1000 molecular weight amine-based materials. Refinements in experimental methods and a comparison of results using an aluminum Split Hopkinson Bar are presented.

  17. High strain rate behavior of polyurea compositions

    NASA Astrophysics Data System (ADS)

    Joshi, Vasant S.; Milby, Christopher

    2012-03-01

    The high-strain-rate response of three polyurea compositions with varying molecular weights has been investigated using a Split Hopkinson Pressure Bar arrangement equipped with aluminum bars. The three polyurea compositions were synthesized from polyamines (Versalink, Air Products) with a multi-functional isocyanate (Isonate 143L, Dow Chemical). Amines with molecular weights of 1000, 650, and a blend of 250/1000 were used in the current investigation. These materials have been tested to strain rates of over 6000/s. The high-strain-rate results show varying trends as a function of increasing strain. While the higher-molecular-weight compositions show lower yield, they do not show dominant hardening behavior at lower strain. The 250/1000 blend, on the other hand, shows higher load-bearing capability but weaker strain-hardening effects than the 650 and 1000 molecular weight amine-based materials. The results indicate that the initial increase in the modulus of the 250/1000 blend may lead to a loss of strain-hardening character as the material is compressed to 50% strain, compared with the 1000 molecular weight amine-based material.

  18. High strain-rate magnetoelasticity in Galfenol

    NASA Astrophysics Data System (ADS)

    Domann, J. P.; Loeffler, C. M.; Martin, B. E.; Carman, G. P.

    2015-09-01

    This paper presents experimental measurements of a highly magnetoelastic material (Galfenol) under impact loading. A Split-Hopkinson Pressure Bar was used to generate compressive stress up to 275 MPa at strain rates of either 20/s or 33/s while measuring the stress-strain response and the change in magnetic flux density due to magnetoelastic coupling. The average Young's modulus (44.85 GPa) was invariant to strain rate, with instantaneous stiffness ranging from 25 to 55 GPa. A lumped-parameter model simulated the measured pickup coil voltages in response to an applied stress pulse. Fitting the model to the experimental data provided the average piezomagnetic coefficient and relative permeability as functions of field strength. The model suggests magnetoelastic coupling is largely insensitive to strain rates as high as 33/s. Additionally, the lumped-parameter model was used to investigate magnetoelastic transducers as potential pulsed power sources. Results show that Galfenol can generate large quantities of instantaneous power (80 MW/m³), comparable to explosively driven ferromagnetic pulse generators (500 MW/m³). However, this process is much more efficient and can be carried out cyclically in the linear elastic range of the material, in stark contrast with explosively driven pulsed power generators.

  19. Optical and electronic error correction schemes for highly parallel access memories

    NASA Astrophysics Data System (ADS)

    Neifeld, Mark A.; Hayes, Jerry D.

    1993-11-01

    We have fabricated and tested an optically addressed, parallel electronic Reed-Solomon decoder for use with parallel access optical memories. A comparison with various serial implementations has demonstrated that for many instances of code block size and error correction capability, the parallel approach is superior from the perspectives of VLSI layout area and decoding latency. The Reed-Solomon parallel pipeline decoder operates on 60-bit input words and has been demonstrated at a clock rate of 5 MHz, yielding a data rate of 300 Mbps.
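
    The quoted data rate follows directly from the parallel word width and clock rate, as the short check below shows (a back-of-the-envelope calculation, not part of the paper):

        # Throughput of a parallel decoder: bits accepted per clock cycle times clock rate.
        word_width_bits = 60       # parallel input word width
        clock_rate_hz = 5e6        # demonstrated clock rate (5 MHz)
        throughput_bps = word_width_bits * clock_rate_hz
        print(f"{throughput_bps / 1e6:.0f} Mbps")  # 300 Mbps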

  20. High strain rate deformation of layered nanocomposites.

    PubMed

    Lee, Jae-Hwang; Veysset, David; Singer, Jonathan P; Retsch, Markus; Saini, Gagan; Pezeril, Thomas; Nelson, Keith A; Thomas, Edwin L

    2012-01-01

    Insight into the mechanical behaviour of nanomaterials under the extreme condition of very high deformation rates and to very large strains is needed to provide improved understanding for the development of new protective materials. Applications include protection against bullets for body armour, micrometeorites for satellites, and high-speed particle impact for jet engine turbine blades. Here we use a microscopic ballistic test to report the responses of periodic glassy-rubbery layered block-copolymer nanostructures to impact from hypervelocity micron-sized silica spheres. Entire deformation fields are experimentally visualized at an exceptionally high resolution (below 10 nm) and we discover how the microstructure dissipates the impact energy via layer kinking, layer compression, extreme chain conformational flattening, domain fragmentation and segmental mixing to form a liquid phase. Orientation-dependent experiments show that the dissipation can be enhanced by 30% by proper orientation of the layers. PMID:23132014

  1. High strain rate deformation of layered nanocomposites

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Hwang; Veysset, David; Singer, Jonathan P.; Retsch, Markus; Saini, Gagan; Pezeril, Thomas; Nelson, Keith A.; Thomas, Edwin L.

    2012-11-01

    Insight into the mechanical behaviour of nanomaterials under the extreme condition of very high deformation rates and to very large strains is needed to provide improved understanding for the development of new protective materials. Applications include protection against bullets for body armour, micrometeorites for satellites, and high-speed particle impact for jet engine turbine blades. Here we use a microscopic ballistic test to report the responses of periodic glassy-rubbery layered block-copolymer nanostructures to impact from hypervelocity micron-sized silica spheres. Entire deformation fields are experimentally visualized at an exceptionally high resolution (below 10 nm) and we discover how the microstructure dissipates the impact energy via layer kinking, layer compression, extreme chain conformational flattening, domain fragmentation and segmental mixing to form a liquid phase. Orientation-dependent experiments show that the dissipation can be enhanced by 30% by proper orientation of the layers.

  2. Assessment of error rates in acoustic monitoring with the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR’s automated detection process uses a ‘score cutoff’, which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cut-offs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were  for song event detection.
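
    A hedged sketch of how a score cutoff turns template-match scores into the four detection outcomes described above is given below; the function and data layout are illustrative only and do not reproduce monitoR's own interface.

        # Illustrative classification of (score, ground-truth) pairs against a score cutoff.
        def detection_rates(scores, truths, cutoff):
            tp = sum(s >= cutoff and t for s, t in zip(scores, truths))
            fp = sum(s >= cutoff and not t for s, t in zip(scores, truths))
            fn = sum(s < cutoff and t for s, t in zip(scores, truths))
            tn = sum(s < cutoff and not t for s, t in zip(scores, truths))
            recall = tp / (tp + fn) if (tp + fn) else float("nan")
            false_positive_rate = fp / (fp + tn) if (fp + tn) else float("nan")
            return {"tp": tp, "fp": fp, "fn": fn, "tn": tn,
                    "recall": recall, "false_positive_rate": false_positive_rate}

        # Hypothetical scores for five candidate song events and whether each was a true song.
        scores = [0.9, 0.4, 0.7, 0.2, 0.8]
        truths = [True, True, False, False, True]
        print(detection_rates(scores, truths, cutoff=0.6))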

  3. High frame-rate digital radiographic videography

    SciTech Connect

    King, N.S.P.; Cverna, F.H.; Albright, K.L.; Jaramillo, S.A.; Yates, G.J.; McDonald, T.E.; Flynn, M.J.; Tashman, S.

    1994-09-01

    High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an X-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM, demonstrated the system response to a high velocity/high contrast target. By gating the P-20 phosphor image from the X-ray image convertor with a second image intensifier (II) and using a 100-microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  4. High-frame-rate digital radiographic videography

    NASA Astrophysics Data System (ADS)

    King, Nicholas S. P.; Cverna, Frank H.; Albright, Kevin L.; Jaramillo, Steven A.; Yates, George J.; McDonald, Thomas E.; Flynn, Michael J.; Tashman, Scott

    1994-10-01

    High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an X-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM, demonstrated the system response to a high velocity/high contrast target. By gating the P-20 phosphor image from the X-ray image convertor with a second image intensifier (II) and using a 100 microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  5. Fuel droplet burning rates at high pressures.

    NASA Technical Reports Server (NTRS)

    Canada, G. S.; Faeth, G. M.

    1973-01-01

    Combustion of methanol, ethanol, propanol-1, n-pentane, n-heptane, and n-decane was observed in air under natural convection conditions, at pressures up to 100 atm. The droplets were simulated by porous spheres, with diameters in the range from 0.63 to 1.90 cm. The pressure levels of the tests were high enough so that near-critical combustion was observed for methanol and ethanol. Due to the high pressures, the phase-equilibrium models of the analysis included both the conventional low-pressure approach as well as high-pressure versions, allowing for real gas effects and the solubility of combustion-product gases in the liquid phase. The burning-rate predictions of the various theories were similar, and in fair agreement with the data. The high-pressure theory gave the best prediction for the liquid-surface temperatures of ethanol and propanol-1 at high pressure. The experiments indicated the approach of critical burning conditions for methanol and ethanol at pressures on the order of 80 to 100 atm, which was in good agreement with the predictions of both the low- and high-pressure analysis.

  6. Microalgal separation from high-rate ponds

    SciTech Connect

    Nurdogan, Y.

    1988-01-01

    High-rate ponding (HRP) processes are playing an increasing role in the treatment of organic wastewaters in sunbelt communities. Photosynthetic oxygenation by algae has proved to cost only one-seventh as much as mechanical aeration for activated sludge systems. In this study, an advanced HRP that produces an effluent equivalent to tertiary treatment was examined. It emphasizes not only waste oxidation but also algal separation and nutrient removal, and this new system is herein called advanced tertiary high-rate ponding (ATHRP). Phosphorus removal in HRP systems is normally low because algal uptake of phosphorus amounts to only about one percent of the algae's 200-300 mg/L dry weight. Precipitation of calcium phosphates by autoflocculation also occurs in HRP at high pH levels, but it is generally incomplete due to insufficient calcium concentration in the pond. In the case of Richmond, where the studies were conducted, the sewage is very low in calcium. Therefore, enhancement of natural autoflocculation was studied by adding small amounts of lime to the pond. Through this simple procedure, phosphorus and nitrogen removals were virtually complete, justifying the terminology ATHRP.

  7. The Influence of Relatives on the Efficiency and Error Rate of Familial Searching

    PubMed Central

    Rohlfs, Rori V.; Murphy, Erin; Song, Yun S.; Slatkin, Montgomery

    2013-01-01

    We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For Y-chromosome sharing first degree relatives, the Myers protocol has a high probability () of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype sharing relatives (half-siblings, first cousins, half-first cousins or second cousins) there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases. PMID:23967076

  8. High Rate Pulse Processing Algorithms for Microcalorimeters

    NASA Astrophysics Data System (ADS)

    Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeffrey W.; Warburton, William K.; Bertrand Doriese, W.; Ullom, Joel N.; Bacrania, Minesh K.; Hoover, Andrew S.; Rabin, Michael W.

    2009-12-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small in order to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Therefore, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. For large arrays, however, as much pulse processing as possible must be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for post-processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics that we are also currently developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses, thus achieving much higher output count rates than those achieved by existing algorithms. Details of our algorithms are presented, and their performance is compared to that of the "optimal filter" that is currently the predominantly used pulse processing algorithm in the cryogenic-detector community.

  9. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care than in hospital settings; on the other hand, inpatient errors are more severe than outpatient errors. PMID:26649954

  10. High Prevalence of Refractive Errors in 7 Year Old Children in Iran

    PubMed Central

    HASHEMI, Hassan; YEKTA, Abbasali; JAFARZADEHPUR, Ebrahim; OSTADIMOGHADDAM, Hadi; ETEMAD, Koorosh; ASHARLOUS, Amir; NABOVATI, Payam; KHABAZKHOOB, Mehdi

    2016-01-01

    Background: The latest WHO report indicates that refractive errors are the leading cause of visual impairment throughout the world. The aim of this study was to determine the prevalence of myopia, hyperopia, and astigmatism in 7-year-old children in Iran. Methods: In a cross-sectional study in 2013 with multistage cluster sampling, first graders were randomly selected from 8 cities in Iran. All children were tested by an optometrist for uncorrected and corrected vision, and non-cycloplegic and cycloplegic refraction. Refractive errors in this study were determined based on spherical equivalent (SE) cycloplegic refraction. Results: Of 4614 selected children, 89.0% participated in the study, and 4072 were eligible. The prevalence rates of myopia, hyperopia and astigmatism were 3.04% (95% CI: 2.30–3.78), 6.20% (95% CI: 5.27–7.14), and 17.43% (95% CI: 15.39–19.46), respectively. The prevalence of myopia (P=0.925) and astigmatism (P=0.056) did not differ significantly between the two genders, but the odds of hyperopia were 1.11 (95% CI: 1.01–2.05) times higher in girls (P=0.011). The prevalence of with-the-rule astigmatism was 12.59%, against-the-rule was 2.07%, and oblique 2.65%. Overall, 22.8% (95% CI: 19.7–24.9) of the schoolchildren in this study had at least one type of refractive error. Conclusion: One out of every 5 schoolchildren had some refractive error. Conducting multicenter studies throughout the Middle East can be very helpful in understanding the current distribution patterns and etiology of refractive errors compared to the previous decade. PMID:27114984
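
    For orientation only, the sketch below shows how a prevalence estimate and a simple binomial (Wald) 95% interval are computed; the study's published intervals account for the multistage cluster sampling design, so they will not match this naive calculation exactly, and the case count used here is hypothetical.

        import math

        # Simple prevalence estimate with an approximate (Wald) 95% confidence interval.
        # Assumption: independent observations; the cluster sampling design is ignored.
        def prevalence_ci(cases, n, z=1.96):
            p = cases / n
            se = math.sqrt(p * (1 - p) / n)
            return 100 * p, 100 * (p - z * se), 100 * (p + z * se)

        # Hypothetical count near the reported ~3% myopia prevalence among 4072 children.
        print(prevalence_ci(cases=124, n=4072))  # roughly (3.05, 2.52, 3.57) percent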