Science.gov

Sample records for high error rates

  1. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Technical Reports Server (NTRS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    1989-01-01

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
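
    A quick plausibility check (my arithmetic, not from the report): with code words of 32n data bits plus 32 overhead bits, the code rate is 32n/(32n + 32) = n/(n + 1), so the quoted rate of 7/8 or greater corresponds to n >= 7. A minimal sketch in Python:

      # Code rate implied by the codeword structure described above:
      # 32*n data bits followed by 32 overhead bits gives rate n/(n + 1).
      def code_rate(n: int) -> float:
          data_bits = 32 * n
          overhead_bits = 32
          return data_bits / (data_bits + overhead_bits)

      for n in (7, 8, 16):
          print(n, code_rate(n))  # n=7 -> 0.875 (exactly 7/8); larger n -> higher rate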

  2. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors

    PubMed Central

    Bányai, László; Patthy, László

    2016-01-01

    A recent analysis of the genomes of Chinese and Florida lancelets concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa, and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since Chinese and Florida lancelets are morphologically highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation. PMID:27476717

  3. High rates of phasing errors in highly polymorphic species with low levels of linkage disequilibrium.

    PubMed

    Bukowicki, Marek; Franssen, Susanne U; Schlötterer, Christian

    2016-07-01

    Short read sequencing of diploid individuals does not permit the direct inference of the sequence on each of the two homologous chromosomes. Although various phasing software packages exist, they were primarily tailored for and tested on human data, which differ from other species in factors that influence phasing, such as SNP density, amounts of linkage disequilibrium (LD) and sample sizes. Despite becoming increasingly popular for other species, the reliability of phasing in non-human data has not been evaluated to a sufficient extent. We scrutinized the phasing accuracy for Drosophila melanogaster, a species with high polymorphism levels and reduced LD relative to humans. We phased two D. melanogaster populations and compared the results to the known haplotypes. The performance increased with size of the reference panel and was highest when the reference panel and phased individuals were from the same population. Full genomic SNP data and inclusion of sequence read information also improved phasing. Despite humans and Drosophila having similar switch error rates between polymorphic sites, the distances between switch errors were much shorter in Drosophila with only fragments <300-1500 bp being correctly phased with ≥95% confidence. This suggests that the higher SNP density cannot compensate for the higher recombination rate in D. melanogaster. Furthermore, we show that populations that have gone through demographic events such as bottlenecks can be phased with higher accuracy. Our results highlight that statistically phased data are particularly error prone in species with large population sizes or populations lacking suitable reference panels. PMID:26929272
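
    Switch errors are conventionally counted as adjacent pairs of heterozygous sites whose relative phase flips between the inferred and true haplotypes. The sketch below is a generic illustration of that definition (the data layout and the helper are assumptions, not the authors' code):

      # Hypothetical example: haplotypes given as allele strings over heterozygous sites.
      def switch_error_rate(true_hap: str, inferred_hap: str) -> float:
          """Fraction of adjacent heterozygous-site pairs whose relative phase flips."""
          # phase[i] is True where the inferred allele matches true haplotype 1 at site i
          phase = [t == i for t, i in zip(true_hap, inferred_hap)]
          switches = sum(1 for a, b in zip(phase, phase[1:]) if a != b)
          return switches / (len(phase) - 1)

      print(switch_error_rate("AACCC", "AATTT"))  # one flip over 4 intervals -> 0.25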

  4. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders-of-magnitude increase in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high-rate error correction, which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416
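
    For context, the fraction of the ideal secure key rate delivered by a practical reconciliation step can be estimated from the standard BB84 key fraction r = 1 - f*h(Q) - h(Q), where Q is the quantum bit error rate, h is the binary entropy, and f >= 1 is the error-correction efficiency (f = 1 is ideal). This is a generic textbook estimate, not the authors' analysis, and the numbers below are illustrative:

      from math import log2

      def h(q: float) -> float:
          """Binary entropy of q."""
          return -q * log2(q) - (1 - q) * log2(1 - q)

      def key_fraction(qber: float, f_ec: float) -> float:
          """BB84 secret key fraction with reconciliation efficiency f_ec >= 1."""
          return 1 - f_ec * h(qber) - h(qber)

      qber = 0.03  # illustrative quantum bit error rate
      print(key_fraction(qber, 1.05) / key_fraction(qber, 1.0))  # fraction of ideal rate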

  5. High speed and adaptable error correction for megabit/s rate quantum key distribution

    NASA Astrophysics Data System (ADS)

    Dixon, A. R.; Sato, H.

    2014-12-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders-of-magnitude increase in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high-rate error correction, which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  6. High-speed communication detector characterization by bit error rate measurements

    NASA Technical Reports Server (NTRS)

    Green, S. I.

    1978-01-01

    Performance data taken on several candidate high data rate laser communications photodetectors are presented. Measurements of bit error rate versus signal level were made in both a 1064 nm system at 400 Mbps and a 532 nm system at 500 Mbps. RCA silicon avalanche photodiodes are superior at 1064 nm, but the Rockwell hybrid III-V avalanche photodiode preamplifiers offer potentially superior performance. Varian dynamic crossed field photomultipliers are superior at 532 nm; however, the RCA silicon avalanche photodiode is a close contender.

  7. Bit error rate performance of Image Processing Facility high density tape recorders

    NASA Technical Reports Server (NTRS)

    Heffner, P.

    1981-01-01

    The Image Processing Facility at the NASA/Goddard Space Flight Center uses High Density Tape Recorders (HDTR's) to transfer high volume image data and ancillary information from one system to another. For ancillary information, it is required that very low bit error rates (BER's) accompany the transfers. The facility processes about 10^11 bits of image data per day from many sensors, involving 15 independent processing systems requiring the use of HDTR's. When acquired, the 16 HDTR's offered state-of-the-art performance of 1 × 10^-6 BER as specified. The BER requirement was later upgraded in two steps: (1) incorporating data randomizing circuitry to yield a BER of 2 × 10^-7 and (2) further modifying to include a bit error correction capability to attain a BER of 2 × 10^-9. The total improvement factor was 500 to 1. Attention is given here to the background, technical approach, and final results of these modifications. Also discussed are the format of the data recorded by the HDTR, the magnetic tape format, the magnetic tape dropout characteristics as experienced in the Image Processing Facility, the head life history, and the reliability of the HDTR's.
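
    The quoted improvement factor follows directly from the before-and-after BERs; a one-line check (my arithmetic, not from the paper):

      ber_initial = 1e-6  # as-delivered HDTR bit error rate
      ber_final = 2e-9    # after randomizing plus error-correction modifications
      print(ber_initial / ber_final)  # ~500: the quoted 500-to-1 improvement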

  8. High-rate error-correction codes for the optical atmospheric channel

    NASA Astrophysics Data System (ADS)

    Anguita, Jaime A.; Djordjevic, Ivan B.; Neifeld, Mark A.; Vasic, Bane V.

    2005-08-01

    We evaluate two error correction systems based on low-density parity-check (LDPC) codes for free-space optical (FSO) communication channels subject to atmospheric turbulence. We simulate the effect of turbulence on the received signal by modeling the channel with a gamma-gamma distribution. We compare the bit-error rate performance of these codes with the performance of Reed-Solomon codes of similar rate and obtain coding gains from 3 to 14 dB depending on the turbulence conditions.
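
    The gamma-gamma irradiance model referred to here is the product of two unit-mean gamma random variables whose shape parameters (often written alpha and beta) are set by the turbulence strength. A minimal Monte Carlo sketch of such a channel (parameter values are illustrative, not the paper's):

      import numpy as np

      rng = np.random.default_rng(0)
      alpha, beta = 4.0, 2.0  # illustrative turbulence parameters
      n = 100_000

      # Unit-mean gamma-gamma irradiance samples: I = X * Y
      x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
      y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
      irradiance = x * y

      print(irradiance.mean())          # ~1.0 by construction
      print((irradiance < 0.1).mean())  # empirical probability of a deep fade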

  9. Design of bit error rate tester based on a high speed bit and sequence synchronization

    NASA Astrophysics Data System (ADS)

    Wang, Xuanmin; Zhao, Xiangmo; Zhang, Lichuan; Zhang, Yinglong

    2013-03-01

    In a traditional BER (Bit Error Rate) tester, bit synchronization is achieved with a digital PLL and sequence synchronization relies on the correlation properties of the test sequence, which makes both forms of synchronization slow. This paper presents new methods to realize bit and sequence synchronization: a bit-edge-tracking method and an immitting-sequence method. A BER tester based on an FPGA was designed, with added functions for inserting error bits and for removing false sequence synchronization. Debugging and simulation results show that the time to realize bit synchronization is less than one bit width, the lag of the tracking bit pulse is 1/8 of the code cycle, and only one M-sequence cycle is needed to realize sequence synchronization. The new BER tester has several advantages: a short time to realize bit and sequence synchronization, no false sequence synchronization, the ability to test the error-correcting capability of the receiving port, and simple hardware.

  10. “Missed” Mild Cognitive Impairment: High False-Negative Error Rate Based on Conventional Diagnostic Criteria

    PubMed Central

    Edmonds, Emily C.; Delano-Wood, Lisa; Jak, Amy J.; Galasko, Douglas R.; Salmon, David P.; Bondi, Mark W.

    2016-01-01

    Mild cognitive impairment (MCI) is typically diagnosed using subjective complaints, screening measures, clinical judgment, and a single memory score. Our prior work has shown that this method is highly susceptible to false-positive diagnostic errors. We examined whether the criteria also lead to "false-negative" errors by diagnostically reclassifying 520 participants using novel actuarial neuropsychological criteria. Results revealed a false-negative error rate of 7.1%. Participants' neuropsychological performance, cerebrospinal fluid biomarkers, and rate of decline provided evidence that an MCI diagnosis was warranted. The impact of "missed" cases of MCI has direct relevance to clinical practice, research studies, and clinical trials of prodromal Alzheimer's disease. PMID:27031477

  11. Unacceptably High Error Rates in Vitek 2 Testing of Cefepime Susceptibility in Extended-Spectrum-β-Lactamase-Producing Escherichia coli

    PubMed Central

    Rhodes, Nathaniel J.; Richardson, Chad L.; Heraty, Ryan; Liu, Jiajun; Malczynski, Michael; Qi, Chao

    2014-01-01

    While a lack of concordance is known between gold standard MIC determinations and Vitek 2, the magnitude of the discrepancy and its impact on treatment decisions for extended-spectrum-β-lactamase (ESBL)-producing Escherichia coli are not. Clinical isolates of ESBL-producing E. coli were collected from blood, tissue, and body fluid samples from January 2003 to July 2009. Resistance genotypes were identified by PCR. Primary analyses evaluated the discordance between Vitek 2 and gold standard methods using cefepime susceptibility breakpoint cutoff values of 8, 4, and 2 μg/ml. The discrepancies in MICs between the methods were classified per convention as very major, major, and minor errors. Sensitivity, specificity, and positive and negative predictive values for susceptibility classifications were calculated. A total of 304 isolates were identified; 59% (179) of the isolates carried blaCTX-M, 47% (143) carried blaTEM, and 4% (12) carried blaSHV. At a breakpoint MIC of 8 μg/ml, Vitek 2 produced a categorical agreement of 66.8% and exhibited very major, major, and minor error rates of 23% (20/87 isolates), 5.1% (8/157 isolates), and 24% (73/304), respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 8 μg/ml were 94.9%, 61.2%, 72.3%, and 91.8%, respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 2 μg/ml were 83.8%, 65.3%, 41%, and 93.3%, respectively. Vitek 2 results in unacceptably high error rates for cefepime compared to those of agar dilution for ESBL-producing E. coli. Clinicians should be wary of making treatment decisions on the basis of Vitek 2 susceptibility results for ESBL-producing E. coli. PMID:24752253
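
    The reported sensitivity, specificity, and predictive values all derive from a 2x2 table of Vitek 2 calls against the gold standard. A generic helper showing the arithmetic (the counts below are hypothetical, not the study's raw table):

      def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
          """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      # Hypothetical counts for illustration only
      print(diagnostic_metrics(tp=149, fp=57, fn=8, tn=90))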

  12. Improvement of Bit Error Rate in Holographic Data Storage Using the Extended High-Frequency Enhancement Filter

    NASA Astrophysics Data System (ADS)

    Kim, Do-Hyung; Cho, Janghyun; Moon, Hyungbae; Jeon, Sungbin; Park, No-Cheol; Yang, Hyunseok; Park, Kyoung-Su; Park, Young-Pil

    2013-09-01

    Optimized image restoration is suggested for angular-multiplexing-page-based holographic data storage. To improve the bit error rate (BER), an extended high-frequency enhancement filter is recalculated from the point spread function (PSF) and a Gaussian mask as the image restoration filter. Using the extended image restoration filter, the proposed system reduces the number of processing steps compared with the image upscaling method and provides better performance in BER and SNR. Numerical simulations and experiments were performed to verify the proposed method. The proposed system exhibited a marked improvement in BER from 0.02 to 0.002 for a Nyquist factor of 1.1, and from 0.006 to 0 for a Nyquist factor of 1.2. Moreover, calculation was more than 3 times faster than image restoration with PSF upscaling, owing to reductions in the number of system processes and the calculation load.

  13. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities

    PubMed Central

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-01-01

    Introduction: Sound is among the significant environmental factors for people's health; it has an important role in both physical and psychological injuries, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. Methods: This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person served as his or her own control to assess the effect of noise on performance at sound levels of 70, 90, and 110 dB, using two factors of physical features and the creation of different conditions of sound source, as well as applying the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the length of performance as well as the errors measured in the test. Results: We found a direct and significant association between the sound level and the length of performance. Moreover, the participants' performance was significantly different at different sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). Conclusion: This study found that a sound level of 110 dB had an important effect on the individuals' performances, i.e., the performances were decreased. PMID:27123216

  14. Bit-error-rate testing of high-power 30-GHz traveling wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Fujikawa, Gene

    1986-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30 GHz, 200 W, coupled-cavity traveling wave tubes (TWTs). The transmission effects of each TWT were investigated on a band-limited, 220 Mb/sec SMSK signal. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20 GHz technology development program. The approach taken to test the 30 GHz tubes is described and the resultant test data are discussed. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  15. Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.

    1987-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  16. Adaptation of bit error rate by coding

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Sorton, G.

    1984-07-01

    The use of coding in spacecraft wideband communication to reduce power transmission, save bandwidth, and lower antenna specifications was studied. The feasibility of a coder-decoder functioning at a bit rate of 10 Mb/sec with a raw bit error rate (BER) of 10^-3 and an output BER of 10^-9 is demonstrated. Single block code protection and two-level coding protection are examined. A single-level protection BCH code with a five-error correction capacity, 16% redundancy, and interleaving depth 4, giving a coded block of 1020 bits, is simple to implement, but has BER = 7 × 10^-9. A single-level BCH code with a seven-error correction capacity and 12% redundancy meets specifications, but is more difficult to implement. Two-level protection with 9% BCH outer and 10% BCH inner codes, both levels with a three-error correction capacity and 8% redundancy, for a coded block of 7050 bits, is the most complex, but offers performance advantages.
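
    For a t-error-correcting block code of length n on a channel with raw bit error probability p, the block failure probability is the binomial tail P(more than t errors). The interleaving depth of 4 over a 1020-bit coded block suggests 255-bit codewords, so the sketch below uses n = 255 and t = 5; this is a standard textbook estimate, not a calculation from the paper:

      from math import comb

      def block_failure_prob(n: int, t: int, p: float) -> float:
          """P(more than t bit errors in an n-bit block) on a binomial channel."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

      # 255-bit codeword, 5-error correction capacity, raw BER 1e-3
      print(block_failure_prob(255, 5, 1e-3))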

  17. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase.

    PubMed

    McInerney, Peter; Adams, Paul; Hadi, Masood Z

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572
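
    PCR fidelity is conventionally expressed as errors per base per template doubling: observed mutations divided by the product of bases screened and the number of doublings (log2 of the fold amplification). A generic calculation of this figure (the numbers are illustrative, not the paper's data):

      from math import log2

      def pcr_error_rate(mutations: int, bases_screened: int, fold_amplification: float) -> float:
          """Errors per base per template doubling."""
          doublings = log2(fold_amplification)
          return mutations / (bases_screened * doublings)

      # Hypothetical: 25 mutations in 500,000 sequenced bases after 10^6-fold amplification
      print(pcr_error_rate(25, 500_000, 1e6))  # ~2.5e-6 errors/base/doubling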

  18. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    PubMed Central

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572

  19. Forward error correction and spatial diversity techniques for high-data-rate MILSATCOM over a slow-fading, nuclear-disturbed channel

    NASA Astrophysics Data System (ADS)

    Paul, Heywood I.; Meader, Charles B.; Lyons, Daniel A.; Ayers, David R.

    Forward error correction (FEC) and spatial diversity techniques are considered for improving the reliability of high-data-rate military satellite communication (MILSATCOM) over a slow-fading, nuclear-disturbed channel. Slow fading, which occurs when the channel decorrelation time is much greater than the transmitted symbol interval, is characterized by deep fades and, without special precautions, long bursts of errors over high-data-rate communication links. Using the widely accepted Defense Nuclear Agency (DNA) nuclear-scintillated channel model, the authors derive performance tradeoffs among required interleaver storage, FEC, spatial diversity, and link signal-to-noise ratio for differential binary phase shift keying (DBPSK) in the slow-fading environment. Spatial diversity is found to yield impressive gains without the large memory storage and transmission relay requirements associated with interleaving.
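
    Spatial diversity helps on a slow-fading channel because independent branches rarely fade simultaneously. The Monte Carlo sketch below illustrates the effect for DBPSK over Rayleigh fading with L-branch selection diversity; this is a generic fading model for intuition only, not the DNA scintillation model used in the paper:

      import numpy as np

      rng = np.random.default_rng(1)

      def dbpsk_ber(mean_snr: float, branches: int, trials: int = 200_000) -> float:
          """Average DBPSK BER over Rayleigh fading with selection diversity."""
          # Per-branch instantaneous SNR is exponentially distributed with the given mean
          snr = rng.exponential(mean_snr, size=(trials, branches))
          best = snr.max(axis=1)               # selection combining
          return np.mean(0.5 * np.exp(-best))  # conditional DBPSK BER, averaged over fades

      for L in (1, 2, 4):
          print(L, dbpsk_ber(mean_snr=10.0, branches=L))  # BER falls rapidly with diversity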

  20. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGESBeta

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  1. Multicenter Assessment of Gram Stain Error Rates.

    PubMed

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. PMID:26888900

  2. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Content of Error Rate Reports. 98.102 Section 98... DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report—At a minimum, States, the District of Columbia and Puerto Rico shall submit an initial error...

  3. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  4. Improved coded optical communication error rates using joint detection receivers

    NASA Astrophysics Data System (ADS)

    Dutton, Zachary; Guha, Saikat; Chen, Jian; Habif, Jonathan; Lazarus, Richard

    2012-02-01

    It is now known that coherent state (laser light) modulation is sufficient to reach the ultimate quantum limit (the Holevo bound) for classical communication capacity. However, all current optical communication systems are fundamentally limited in capacity because they perform measurements on single symbols at a time. To reach the Holevo bound, joint quantum measurements over long symbol blocks will be required. We recently proposed and demonstrated the "conditional pulse nulling" (CPN) receiver, which acts jointly on the time slots of a pulse-position-modulation (PPM) codeword by employing pulse nulling and quantum feedforward, and demonstrated a 2.3 dB improvement in error rate over direct detection (DD). In a communication system, coded error rates are made arbitrarily small by employing an outer code (such as Reed-Solomon (RS)). Here we analyze RS coding of PPM errors with both DD and CPN receivers and calculate the outer code length requirements. We find that the improved PPM error rates with the CPN receiver translate into a >10 times improvement in the required outer code length at high rates. This advantage also translates into an increased range for a given coding complexity. In addition, we present results for outer-coded error rates of our recently proposed "Green Machine", which realizes a joint detection advantage for binary phase shift keyed (BPSK) modulation.

  5. Error Growth Rate in the MM5 Model

    NASA Astrophysics Data System (ADS)

    Ivanov, S.; Palamarchuk, J.

    2006-12-01

    The goal of this work is to estimate model error growth rates in simulations of the atmospheric circulation by the MM5 model, all the way from the short range to the medium range and beyond. The major topics addressed are: (i) searching for the optimal set of parameterization schemes; (ii) evaluating the spatial structure and scales of the model error for various atmospheric fields; (iii) determining geographical regions where model errors are largest; (iv) defining particular atmospheric patterns contributing to fast and significant model error growth. Results are presented for the geopotential, temperature, relative humidity, and horizontal wind component fields on standard surfaces over the Atlantic-European region during winter 2002. Various combinations of parameterization schemes for cumulus, PBL, moisture, and radiation are used to identify which one yields the smallest difference between the model state and the analysis. The model fields are compared against the ERA-40 reanalysis of the ECMWF. Results show that the rate at which the model error grows, as well as its magnitude, varies depending on the forecast range, atmospheric variable, and level. The typical spatial scale and structure of the model error also depend on the particular atmospheric variable. The distribution of the model error over the domain can be separated into two parts: steady and transient. The steady part is associated with a few high mountain regions, including Greenland, where the model error is larger. The transient model error mainly moves along with areas of high gradients in the atmospheric flow. Acknowledgement: This study has been supported by NATO Science for Peace grant #981044. The MM5 modelling system used in this study has been provided by UCAR. ERA-40 re-analysis data have been obtained from the ECMWF data server.

  6. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow and sea urchin fertilization tests showed that when the test x dose interaction term was not included in the calculations the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.

  7. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public... Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart apply to the fifty States, the District of Columbia and Puerto Rico. (b) Generally—States, the...

  8. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Error Rate Report. 98.100 Section 98.100 Public... Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart apply to the fifty States, the District of Columbia and Puerto Rico. (b) Generally—States, the...

  9. Defining Error Rates and Power for Detecting Answer Copying.

    ERIC Educational Resources Information Center

    Wollack, James A.; Cohen, Allan S.; Serlin, Ronald C.

    2001-01-01

    Developed a family-wise approach for evaluating the significance of copying indices designed to hold the Type I error rate constant for each examinee. Examined the Type I error rate and power of two indices under a variety of copying situations. Results indicate the superiority of a family-wise definition of Type I error rate over a pair-wise…
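
    Holding the Type I error rate constant per examinee means shrinking the per-comparison alpha as the number of comparisons grows; under independence the Sidak relation gives the per-pair level. A minimal illustration of the general idea (not the authors' specific indices):

      def sidak_pairwise_alpha(family_alpha: float, n_comparisons: int) -> float:
          """Per-comparison alpha that keeps the family-wise Type I rate at family_alpha."""
          return 1 - (1 - family_alpha) ** (1 / n_comparisons)

      # Screening one examinee against 30 potential sources at family-wise alpha 0.05
      print(sidak_pairwise_alpha(0.05, 30))  # ~0.0017 per pair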

  10. Neutron-induced soft error rate measurements in semiconductor memories

    NASA Astrophysics Data System (ADS)

    Ünlü, Kenan; Narayanan, Vijaykrishnan; Çetiner, Sacit M.; Degalahal, Vijay; Irwin, Mary J.

    2007-08-01

    Soft error rate (SER) testing of devices has been performed using the neutron beam at the Radiation Science and Engineering Center at Penn State University. The soft error susceptibility of different memory chips working at different technology nodes and operating voltages is determined. The effect of 10B on SER as an in situ excess charge source is observed. Results on the effect of higher-energy neutrons on circuit operation will be published later. The Penn State Breazeale Nuclear Reactor was used as the neutron source in the experiments. The high neutron flux allows for accelerated testing of the SER phenomenon. The experiments and analyses have been performed only on soft errors due to thermal neutrons. Various memory chips manufactured by different vendors were tested at various supply voltages and reactor power levels. The effect of the 10B reaction caused by thermal neutron absorption on SER is discussed.

  11. Logical error rate in the Pauli twirling approximation

    PubMed Central

    Katabarwa, Amara; Geller, Michael R.

    2015-01-01

    Assessing the performance of error correction protocols is necessary for understanding the operation of potential quantum computers, but doing so requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA's accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes. PMID:26419417

  12. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.
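
    One standard, easily checked relation converts average gate fidelity to process (entanglement) infidelity via F_avg = (d*F_pro + 1)/(d + 1) for a d-dimensional system; the paper's point is that the worst-case error rate can deviate much further from these figures. The sketch covers only the simple conversion (the tight worst-case bound is the paper's contribution and is not reproduced here):

      def process_infidelity(f_avg: float, d: int) -> float:
          """Process infidelity from average gate fidelity: F_avg = (d*F_pro + 1)/(d + 1)."""
          f_pro = ((d + 1) * f_avg - 1) / d
          return 1 - f_pro

      print(process_infidelity(0.999, d=2))  # single-qubit gate at 99.9% average fidelity
      print(process_infidelity(0.99, d=4))   # two-qubit gate at 99% average fidelity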

  13. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval; Wallman, Joel; Sanders, Barry

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli-distance as a measure of this deviation, and we show that knowledge of the Pauli-distance enables tighter estimates of the error rate of quantum gates.

  14. The Rate of Physicochemical Incompatibilities, Administration Errors. Factors Correlating with Nurses' Errors.

    PubMed

    Fahimi, Fanak; Sefidani Forough, Aida; Taghikhani, Sepideh; Saliminejad, Leila

    2015-01-01

    Medication errors are commonly encountered in the hospital setting. Intravenous medications pose particular risks because of their greater complexity and the multiple steps required in their preparation, administration and monitoring. We aimed to determine the rate of errors during the preparation and administration phase of intravenous medications and the correlation of these errors with the demographics of the nurses involved in the process. One hundred patients who were receiving IV medications were monitored by a trained pharmacist. The researcher accompanied the nurses during the preparation and administration process of IV medications. Collected data were compared with the accepted guidelines. A checklist was filled out for each IV medication. Demographic data of the nurses were collected as well. A total of 454 IV medications were recorded. Inappropriate administration constituted a large proportion of the errors in our study (35.3%). No significant or life-threatening drug interaction was recorded during the study. Evaluating the impact of the nurses' demographic characteristics on the incidence of medication errors showed a direct correlation between nurses' employment status and the rate of medication errors, while other characteristics did not show a significant impact on the rate of administration errors. Administration errors were significantly higher in the temporary one-year contract group than in the other groups (p-value < 0.0001). Study results show that there should be more vigilance over the administration of IV medications, especially by pharmacists, to prevent negative consequences. Optimizing the working conditions of nurses may play a crucial role. PMID:26185509

  15. The Rate of Physicochemical Incompatibilities, Administration Errors. Factors Correlating with Nurses' Errors

    PubMed Central

    Fahimi, Fanak; Sefidani Forough, Aida; Taghikhani, Sepideh; Saliminejad, Leila

    2015-01-01

    Medication errors are commonly encountered in the hospital setting. Intravenous medications pose particular risks because of their greater complexity and the multiple steps required in their preparation, administration and monitoring. We aimed to determine the rate of errors during the preparation and administration phase of intravenous medications and the correlation of these errors with the demographics of the nurses involved in the process. One hundred patients who were receiving IV medications were monitored by a trained pharmacist. The researcher accompanied the nurses during the preparation and administration process of IV medications. Collected data were compared with the accepted guidelines. A checklist was filled out for each IV medication. Demographic data of the nurses were collected as well. A total of 454 IV medications were recorded. Inappropriate administration constituted a large proportion of the errors in our study (35.3%). No significant or life-threatening drug interaction was recorded during the study. Evaluating the impact of the nurses' demographic characteristics on the incidence of medication errors showed a direct correlation between nurses' employment status and the rate of medication errors, while other characteristics did not show a significant impact on the rate of administration errors. Administration errors were significantly higher in the temporary one-year contract group than in the other groups (p-value < 0.0001). Study results show that there should be more vigilance over the administration of IV medications, especially by pharmacists, to prevent negative consequences. Optimizing the working conditions of nurses may play a crucial role. PMID:26185509

  16. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  17. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  18. Hypercorrection of High Confidence Errors in Children

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2012-01-01

    Three experiments investigated whether the hypercorrection effect--the finding that errors committed with high confidence are easier, rather than more difficult, to correct than are errors committed with low confidence--occurs in grade school children as it does in young adults. All three experiments showed that Grade 3-6 children hypercorrected…

  19. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
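
    For the zero-error case discussed here, the exact (Clopper-Pearson) upper confidence limit reduces to the familiar "rule of 3": after N error-free codewords, the 95% upper bound on the CWER is roughly 3/N. A sketch using the beta-distribution form of the exact one-sided interval (standard statistics, not the paper's derivation):

      from scipy.stats import beta

      def cwer_upper_limit(errors: int, trials: int, confidence: float = 0.95) -> float:
          """Exact (Clopper-Pearson) one-sided upper confidence limit on the CWER."""
          if errors == trials:
              return 1.0
          return beta.ppf(confidence, errors + 1, trials - errors)

      print(cwer_upper_limit(0, 100_000))  # ~3e-5: the "rule of 3" for zero errors
      print(cwer_upper_limit(3, 100_000))  # with a handful of observed codeword errors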

  20. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency

    PubMed Central

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2013-01-01

    This study explored the relationships of oral reading speed and error rate with comprehension in second and third grade students with identified reading risk. The study included 920 2nd graders and 974 3rd graders. Participants were assessed using the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Woodcock Reading Mastery Test (WRMT) Passage Comprehension subtest. Results from this study further illuminate the significant relationships between error rate, oral reading fluency, and reading comprehension performance, and suggest grade-specific guidelines for appropriate error rate levels. Low oral reading fluency and high error rates predict the level of passage comprehension performance. For second grade students below benchmark, a fall assessment error rate of 28% predicts that student comprehension performance will be below average. For third grade students below benchmark, the fall assessment cut point is 14%. Instructional implications of the findings are discussed. PMID:24319307

  1. Dose error from deviation of dwell time and source position for high dose-rate 192Ir in remote afterloading system

    PubMed Central

    Okamoto, Hiroyuki; Aikawa, Ako; Wakita, Akihisa; Yoshio, Kotaro; Murakami, Naoya; Nakamura, Satoshi; Hamada, Minoru; Abe, Yoshihisa; Itami, Jun

    2014-01-01

    The influence of deviations in dwell times and source positions for 192Ir HDR-RALS was investigated, and the potential dose errors for various kinds of brachytherapy procedures were evaluated. The deviations of dwell time ΔT of a 192Ir HDR source were measured for various dwell times with a well-type ionization chamber. The deviations of source position ΔP were measured with two methods: one measures the actual source position using a check-ruler device; the other analyzes peak distances from radiographic film irradiated with a 20 mm gap between dwell positions. The composite dose errors were calculated using a Gaussian distribution with ΔT and ΔP as 1σ of the measurements. Dose errors depend on the dwell time and the distance from the point of interest to the dwell position. To evaluate the dose error in clinical practice, dwell times and point-of-interest distances were obtained from actual treatment plans involving cylinder, tandem-ovoid, tandem-ovoid with interstitial needles, multiple interstitial needles, and surface-mold applicators. The ΔT and ΔP were 32 ms (maximum for the various dwell times) and 0.12 mm (ruler) or 0.11 mm (radiographic film). The multiple interstitial needles show the highest dose error of 2%, while the others show less than approximately 1%. The potential dose error due to dwell time and source position deviations can depend on the kind of brachytherapy technique; in all cases, multiple interstitial needles are the most susceptible. PMID:24566719
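
    A simple way to reproduce the flavor of this estimate: treat the dwell-time and position deviations as independent Gaussians (1σ = ΔT and ΔP) and propagate them through a point-source dose model, dose ∝ T/r², by Monte Carlo. The sigma values below echo the paper's measured deviations, but the dose model is a deliberately simplified stand-in, so the percentages are illustrative only:

      import numpy as np

      rng = np.random.default_rng(2)

      def dose_error_sigma(dwell_time_s: float, distance_mm: float,
                           sigma_t: float = 0.032, sigma_p: float = 0.12,
                           n: int = 100_000) -> float:
          """Relative 1-sigma dose error for a point source with dose ~ T / r**2."""
          t = rng.normal(dwell_time_s, sigma_t, n)
          r = rng.normal(distance_mm, sigma_p, n)
          dose = t / r**2
          nominal = dwell_time_s / distance_mm**2
          return dose.std() / nominal

      # Short dwell times close to the source are the most sensitive (cf. interstitial needles)
      print(dose_error_sigma(dwell_time_s=1.0, distance_mm=10.0))
      print(dose_error_sigma(dwell_time_s=0.5, distance_mm=5.0))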

  2. Experimental quantum error correction with high fidelity

    NASA Astrophysics Data System (ADS)

    Zhang, Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-01

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as the pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.
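
    The ε to ~ε² scaling is the hallmark of a code that corrects any single error: the logical failure rate becomes dominated by uncorrectable two-error events. A toy counting check (a generic independent-error model, not the NMR experiment):

      from math import comb

      def logical_error_rate(eps: float, n: int = 3) -> float:
          """P(2 or more of n qubits fail) for a code correcting any single error."""
          return sum(comb(n, k) * eps**k * (1 - eps)**(n - k) for k in range(2, n + 1))

      for eps in (0.1, 0.01):
          print(eps, logical_error_rate(eps))  # scales like ~3*eps**2 for n = 3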

  3. Experimental quantum error correction with high fidelity

    SciTech Connect

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-15

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as the pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  4. Theoretical Accuracy for ESTL Bit Error Rate Tests

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin

    1998-01-01

    "Bit error rate" [BER] for the purposes of this paper is the fraction of binary bits which are inverted by passage through a communication system. BER can be measured for a block of sample bits by comparing a received block with the transmitted block and counting the erroneous bits. Bit Error Rate [BER] tests are the most common type of test used by the ESTL for evaluating system-level performance. The resolution of the test is obvious: the measurement cannot be resolved more finely than 1/N, the number of bits tested. The tolerance is not. This paper examines the measurement accuracy of the bit error rate test. It is intended that this information will be useful in analyzing data taken in the ESTL. This paper is divided into four sections and follows a logically ordered presentation, with results developed before they are evaluated. However, first-time readers will derive the greatest benefit from this paper by skipping the lengthy section devoted to analysis, and treating it as reference material. The analysis performed in this paper is based on a Probability Density Function [PDF] which is developed with greater detail in a past paper, Theoretical Accuracy for ESTL Probability of Acquisition Tests, EV4-98-609.

  5. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  6. Empirical assessment of sequencing errors for high throughput pyrosequencing data

    PubMed Central

    2013-01-01

    Background: Sequencing-by-synthesis technologies significantly improve over the Sanger method in terms of speed and cost per base. However, they still usually fail to compete in terms of read length and quality. Current high-throughput implementations of the pyrosequencing technique yield reads whose length approaches that of the capillary electrophoresis method. A less obvious question is whether their quality is affected by platform-specific sequencing errors. Results: We present an empirical study aimed at assessing the quality and characterising sequencing errors for high throughput pyrosequencing data. We have developed a procedure for extracting sequencing error data from genome assemblies and studying their characteristics, in particular the length distribution of indel gaps and their relation to the sequence contexts where they occur. We used this procedure to analyse data from three prokaryotic genomes sequenced with the GS FLX technology. We also compared two models previously employed with success for peptide sequence alignment. Conclusions: We observed an overall very low error rate in the analysed data, with indel errors being much more abundant than substitutions. We also observed a dependence between the length of the gaps and that of the homopolymer context where they occur. As with protein alignments, a power-law model seems to approximate the indel errors more accurately, although the results are not so conclusive as to justify a departure from the commonly used affine gap penalty scheme. In whichever case, however, our procedure can be used to estimate more realistic error model parameters. PMID:23339526

  7. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAM's (UPD4216) and 1 Kb SRAM's (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  8. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field takes into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, the predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  9. Improving bit error rate through multipath differential demodulation

    NASA Astrophysics Data System (ADS)

    Lize, Yannick Keith; Christen, Louis; Nuccio, Scott; Willner, Alan E.; Kashyap, Raman

    2007-02-01

    Differential phase shift keyed transmission (DPSK) is currently under serious consideration as a deployable data modulation format for high-capacity optical communication systems, due mainly to its 3 dB OSNR advantage over intensity modulation. However, DPSK OSNR requirements are still 3 dB higher than those of its coherent counterpart, PSK. Some strategies have been proposed to reduce this penalty through multichip soft detection, but the improvement is limited to 0.3 dB at a BER of 10^-3. Better performance is expected from other soft-detection schemes using feedback control, but the implementation is not straightforward. We present here an optical multipath error correction technique for differentially encoded modulation formats such as differential-phase-shift-keying (DPSK) and differential polarization shift keying (DPolSK) for fiber-based and free-space communication. This multipath error correction method combines optical and electronic logic gates. The scheme can easily be implemented using commercially available interferometers and high-speed logic gates and does not require any data overhead, and therefore does not affect the effective bandwidth of the transmitted data. It is not merely compatible but also complementary to error correction codes commonly used in optical transmission systems, such as forward error correction (FEC). The technique consists of separating the demodulation at the receiver into multiple paths. Each path consists of a Mach-Zehnder interferometer with an integer bit delay, and a different delay is used in each path. Some basic logical operations follow, and the three paths are compared using a simple majority vote algorithm. Receiver sensitivity is improved by 0.35 dB in simulations and 1.5 dB experimentally at a BER of 10^-3.
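    A rough feel for the majority-vote idea can be had from a toy electrical-domain simulation: demodulate with 1-, 2-, and 3-bit delays, convert the longer delays back into data estimates via decision feedback, and vote. The model below ignores the optical implementation details and uses a simplified noise model, so the experimentally measured 1.5 dB gain should not be expected to reproduce exactly.

      import numpy as np

      rng = np.random.default_rng(1)
      n, snr = 50_000, 3.0          # bit count and linear amplitude SNR (assumptions)

      d = rng.integers(0, 2, n)                 # data bits
      a = np.cumsum(d) % 2                      # differentially encoded phase bits
      s = np.exp(1j * np.pi * a)                # DPSK symbols (phases 0 / pi)
      r = s + (rng.normal(size=n) + 1j * rng.normal(size=n)) / snr

      def delay_demod(r, k):
          # Hard decision from an interferometer with a k-bit delay: detects the
          # phase change over k symbols, i.e. d[n] ^ ... ^ d[n-k+1].
          out = np.zeros(len(r), dtype=int)
          out[k:] = (np.real(r[k:] * np.conj(r[:-k])) < 0).astype(int)
          return out

      y1, y2, y3 = delay_demod(r, 1), delay_demod(r, 2), delay_demod(r, 3)

      # Convert the 2- and 3-bit-delay outputs into estimates of d[n] using
      # decision feedback, then take a majority vote over the three paths.
      dhat = y1.copy()
      for i in range(3, n):
          e1 = y1[i]
          e2 = y2[i] ^ dhat[i - 1]
          e3 = y3[i] ^ dhat[i - 1] ^ dhat[i - 2]
          dhat[i] = 1 if e1 + e2 + e3 >= 2 else 0

      print(f"single-path BER   {np.mean(y1[3:] != d[3:]):.2e}")
      print(f"majority-vote BER {np.mean(dhat[3:] != d[3:]):.2e}")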

  10. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of the bit signal-to-noise ratio E_b/N_0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum-Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10^-8 and a BER of 1.4 x 10^-9. The (15,1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because these longer codes require higher bandwidth than the NASA (7,1/2) code, the gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.
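    Coding gain at a target BER, as quoted here, is simply the horizontal gap between two BER curves read at that BER. A small sketch with illustrative numbers (not the Galileo data):

      import numpy as np

      # BER-vs-Eb/N0 curves for two codes; the values are invented.
      ebn0_a = np.array([0.0, 0.5, 1.0, 1.5, 2.0])      # dB, coded system A
      ber_a  = np.array([3e-2, 1e-2, 2e-3, 3e-4, 2e-5])
      ebn0_b = np.array([1.0, 1.5, 2.0, 2.5, 3.0])      # dB, reference code B
      ber_b  = np.array([2e-2, 6e-3, 1e-3, 1e-4, 8e-6])

      def ebn0_at(ber_target, ebn0, ber):
          # Interpolate Eb/N0 at a target BER on a log(BER) scale;
          # the BER arrays must be monotonically decreasing.
          return np.interp(np.log10(ber_target), np.log10(ber[::-1]), ebn0[::-1])

      gain = ebn0_at(5e-3, ebn0_b, ber_b) - ebn0_at(5e-3, ebn0_a, ber_a)
      print(f"coding gain of A over B at BER 5e-3: {gain:.2f} dB")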

  11. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on the bit error rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  12. Error Rates and Channel Capacities in Multipulse PPM

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Moision, Bruce

    2007-01-01

    A method of computing channel capacities and error rates in multipulse pulse-position modulation (multipulse PPM) has been developed. The method makes it possible, when designing an optical PPM communication system, to determine whether and under what conditions a given multipulse PPM scheme would be more or less advantageous, relative to other candidate modulation schemes. In conventional M-ary PPM, each symbol is transmitted in a time frame that is divided into M time slots (where M is an integer >1), defining an M-symbol alphabet. A symbol is represented by transmitting a pulse (representing 1) during one of the time slots and no pulse (representing 0) during the other M - 1 time slots. Multipulse PPM is a generalization of PPM in which pulses are transmitted during two or more of the M time slots.
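    The combinatorics implied by this generalization are easy to make concrete: a k-pulse, M-slot symbol alphabet has C(M, k) symbols and carries log2 C(M, k) bits per symbol. A short sketch (the parameter choices are arbitrary):

      from math import comb, log2

      # In k-pulse PPM with M slots, a symbol is any choice of k pulsed slots
      # out of M, so the alphabet size is C(M, k).
      for M, k in [(16, 1), (16, 2), (64, 1), (64, 2)]:
          symbols = comb(M, k)
          print(f"M={M:3d} k={k}: {symbols:5d} symbols, "
                f"{log2(symbols):5.2f} bits/symbol, duty cycle {k / M:.3f}")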

  13. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between

  14. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to that of trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  15. Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers

    NASA Technical Reports Server (NTRS)

    Ha, Eunho; North, Gerald R.

    1995-01-01

    Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field-of-view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
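    The bias mechanism described here is Jensen's inequality at work: because brightness temperature is a concave function of rain rate, inserting the beam-averaged Tb into the inverse relation underestimates the mean rain rate. A minimal Monte Carlo illustration, with an assumed mixed-lognormal field and a toy saturating Tb(R) relation:

      import numpy as np

      rng = np.random.default_rng(0)

      # Mixed-lognormal rain field in one footprint: 60% of points have no rain,
      # the rest are lognormal (the parameters are assumptions for illustration).
      npix = 10_000
      rain = np.where(rng.random(npix) < 0.6, 0.0,
                      rng.lognormal(mean=1.0, sigma=1.0, size=npix))

      # Toy saturating brightness-temperature relation Tb(R).
      a, b, t0 = 120.0, 0.05, 150.0
      tb = t0 + a * (1 - np.exp(-b * rain))

      def invert(tb_val):
          # Invert Tb(R) for rain rate.
          return -np.log(1 - (tb_val - t0) / a) / b

      true_mean = rain.mean()
      retrieved = invert(tb.mean())   # inserting beam-averaged Tb into R(Tb)
      print(f"true mean rain {true_mean:.2f}, retrieved {retrieved:.2f}, "
            f"beam-filling bias {retrieved - true_mean:+.2f}")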

  16. High-dimensional bolstered error estimation

    PubMed Central

    Sima, Chao; Braga-Neto, Ulisses M.; Dougherty, Edward R.

    2011-01-01

    Motivation: In small-sample settings, bolstered error estimation has been shown to perform better than cross-validation and competitively with the bootstrap with regard to various criteria. The key issue for bolstering performance is the variance setting for the bolstering kernel. Heretofore, this variance has been determined in a non-parametric manner from the data. Although bolstering based on this variance setting works well for small feature sets, results can deteriorate for high-dimensional feature spaces. Results: This article computes an optimal kernel variance depending on the classification rule, sample size, model, and feature space (both the original number of features and the number remaining after feature selection). A key point is that the optimal variance is robust relative to the model. This allows us to develop a method for selecting a suitable variance to use in real-world applications where the model is not known, but the other factors in determining the optimal kernel are known. Availability: Companion website at http://compbio.tgen.org/paper_supp/high_dim_bolstering Contact: edward@mail.ece.tamu.edu PMID:21914630
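    A minimal sketch of bolstered resubstitution itself, using a nearest-centroid classifier and a hand-picked kernel variance (choosing that variance well is exactly what the paper addresses):

      import numpy as np

      rng = np.random.default_rng(2)

      # Tiny two-class sample; the nearest-centroid rule stands in for the
      # classification rules studied in the paper.
      X0 = rng.normal(0.0, 1.0, size=(10, 2))
      X1 = rng.normal(1.5, 1.0, size=(10, 2))
      X = np.vstack([X0, X1]); y = np.array([0] * 10 + [1] * 10)
      c0, c1 = X0.mean(axis=0), X1.mean(axis=0)

      def predict(pts):
          d0 = np.linalg.norm(pts - c0, axis=1)
          d1 = np.linalg.norm(pts - c1, axis=1)
          return (d1 < d0).astype(int)

      # Bolstered resubstitution: place a Gaussian kernel of std `sigma` on each
      # training point and estimate, by Monte Carlo, the error mass that falls
      # on the wrong side of the decision boundary.
      sigma, mc = 0.5, 2000
      err = 0.0
      for xi, yi in zip(X, y):
          samples = xi + sigma * rng.normal(size=(mc, 2))
          err += np.mean(predict(samples) != yi)
      print(f"bolstered resubstitution error: {err / len(X):.3f}")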

  17. Testing Theories of Transfer Using Error Rate Learning Curves.

    PubMed

    Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I

    2016-07-01

    We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and make a sacrifice to explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions. PMID:27230694

  18. High Dimensional Variable Selection with Error Control.

    PubMed

    Kim, Sangjin; Halabi, Susan

    2016-01-01

    Background. The iterative sure independence screening (ISIS) is a popular method for selecting important variables while maintaining most of the informative variables relevant to the outcome in high-throughput data. However, it is not only computationally intensive but may also produce a high false discovery rate (FDR). We propose to use the FDR as a screening method to reduce the high dimension to a lower dimension, as well as controlling the FDR, with three popular variable selection methods: LASSO, SCAD, and MCP. Method. The three methods with the proposed screenings were applied to prostate cancer data with the presence of metastasis as the outcome. Results. Simulations showed that the three variable selection methods with the proposed screenings controlled the predefined FDR and produced high area under the receiver operating characteristic curve (AUROC) scores. In applying these methods to the prostate cancer example, LASSO and MCP selected 12 and 8 genes and produced AUROC scores of 0.746 and 0.764, respectively. Conclusions. We demonstrated that the variable selection methods with the sequential use of FDR and ISIS not only controlled the predefined FDR in the final models but also had relatively high AUROC scores. PMID:27597974
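    A sketch of the two-stage idea, assuming Benjamini-Hochberg as the FDR screen and an L1-penalized logistic model as the LASSO step; the paper's exact pipeline and tuning are not reproduced here.

      import numpy as np
      from scipy import stats
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n, p, p_true = 100, 2000, 10        # samples, features, informative features
      X = rng.normal(size=(n, p))
      beta = np.zeros(p); beta[:p_true] = 1.0
      y = (X @ beta + rng.normal(size=n) > 0).astype(int)

      # Step 1: univariate two-sample t-tests, then Benjamini-Hochberg screening.
      t, pval = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
      q = 0.05
      order = np.argsort(pval)
      passed = pval[order] <= q * np.arange(1, p + 1) / p
      k = passed.nonzero()[0].max() + 1 if passed.any() else 0
      keep = order[:k]
      print(f"{k} features pass BH screening at FDR {q}")

      # Step 2: L1-penalized (LASSO-type) logistic model on the screened features.
      if k > 0:
          model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
          model.fit(X[:, keep], y)
          selected = keep[np.flatnonzero(model.coef_[0])]
          print(f"final model keeps {selected.size} features")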

  20. The effects of digitizing rate and phase distortion errors on the shock response spectrum

    NASA Technical Reports Server (NTRS)

    Wise, J. H.

    1983-01-01

    Some of the methods used for the acquisition and digitization of high-frequency transients in the analysis of pyrotechnic events, such as explosive bolts for spacecraft separation, are discussed with respect to the reduction of errors in the computed shock response spectrum. Equations are given for maximum error as a function of the sampling rate, phase distortion, and slew rate, and the effects of the characteristics of the filter used are analyzed. A filter that exhibits good passband amplitude response, phase response, and step response is a compromise between the flat passband of the elliptic filter and the phase response of the Bessel filter; it is suggested that it be used with a sampling rate of 10f (5 percent).

  1. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  2. An Examination of Negative Halo Error in Ratings.

    ERIC Educational Resources Information Center

    Lance, Charles E.; And Others

    1990-01-01

    A causal model of halo error (HE) is derived. Three hypotheses are formulated to explain findings of negative HE. It is suggested that apparent negative HE may have been misinferred from existing correlational measures of HE, and that positive HE is more prevalent than had previously been thought. (SLD)

  3. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... cases in the sample with an error compared to the total number of cases in the sample; (2) Percentage of... the sample with an improper payment compared to the total number of cases in the sample; (3... improper payments in the sample compared to the total dollar amount of payments made in the sample;...

  4. Reducing error rates in straintronic multiferroic nanomagnetic logic by pulse shaping

    NASA Astrophysics Data System (ADS)

    Munira, Kamaram; Xie, Yunkun; Nadri, Souheil; Forgues, Mark B.; Salehi Fashami, Mohammad; Atulasimha, Jayasimha; Bandyopadhyay, Supriyo; Ghosh, Avik W.

    2015-06-01

    Dipole-coupled nanomagnetic logic (NML), where nanomagnets (NMs) with bistable magnetization states act as binary switches and information is transferred between them via dipole-coupling and Bennett clocking, is a potential replacement for conventional transistor logic since magnets dissipate less energy than transistors when they switch in a logic circuit. Magnets are also ‘non-volatile’ and hence can store the results of a computation after the computation is over, thereby doubling as both logic and memory—a feat that transistors cannot achieve. However, dipole-coupled NML is much more error-prone than transistor logic at room temperature (>1%) because thermal noise can easily disrupt magnetization dynamics. Here, we study a particularly energy-efficient version of dipole-coupled NML known as straintronic multiferroic logic (SML) where magnets are clocked/switched with electrically generated mechanical strain. By appropriately ‘shaping’ the voltage pulse that generates strain, we show that the error rate in SML can be reduced to tolerable limits. We describe the error probabilities associated with various stress pulse shapes and discuss the trade-off between error rate and switching speed in SML. The lowest error probability is obtained when a ‘shaped’ high voltage pulse is applied to strain the output NM followed by a low voltage pulse. The high voltage pulse quickly rotates the output magnet’s magnetization by 90° and aligns it roughly along the minor (or hard) axis of the NM. Next, the low voltage pulse produces the critical strain to overcome the shape anisotropy energy barrier in the NM and produce a monostable potential energy profile in the presence of dipole coupling from the neighboring NM. The magnetization of the output NM then migrates to the global energy minimum in this monostable profile and completes a 180° rotation (magnetization flip) with high likelihood.

  5. A minimum-error, energy-constrained neural code is an instantaneous-rate code.

    PubMed

    Johnson, Erik C; Jones, Douglas L; Ratnam, Rama

    2016-04-01

    Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al., Frontiers in Computational Neuroscience, 9, 61, 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous spike rate approximates DC step inputs, matching both the average spike rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate coding and temporal coding, suggesting a possible unifying principle of neural encoding of sensory signals. PMID:26922680

  6. A Simple Approximation for the Symbol Error Rate of Triangular Quadrature Amplitude Modulation

    NASA Astrophysics Data System (ADS)

    Duy, Tran Trung; Kong, Hyung Yun

    In this paper, we consider the error performance of the regular triangular quadrature amplitude modulation (TQAM). In particular, using an accurate exponential bound of the complementary error function, we derive a simple approximation for the average symbol error rate (SER) of TQAM over Additive White Gaussian Noise (AWGN) and fading channels. The accuracy of our approach is verified by some simulation results.
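    The flavor of such exponential bounds is easy to demonstrate. The sketch below uses the familiar two-term exponential approximation of the Gaussian Q-function (equivalently, of erfc) due to Chiani et al.; it is a stand-in for the bound used in the paper, not a reproduction of its TQAM SER formula.

      import numpy as np
      from scipy.special import erfc

      def q_exact(x):
          # Gaussian Q-function via erfc.
          return 0.5 * erfc(x / np.sqrt(2.0))

      def q_approx(x):
          # Two-term exponential approximation: Q(x) ~ e^{-x^2/2}/12 + e^{-2x^2/3}/4.
          return np.exp(-x ** 2 / 2) / 12 + np.exp(-2 * x ** 2 / 3) / 4

      for x in [1.0, 2.0, 3.0, 4.0]:
          e, a = q_exact(x), q_approx(x)
          print(f"x={x}: Q={e:.3e}  approx={a:.3e}  ratio={a / e:.3f}")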

  7. Finding the right coverage: the impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates.

    PubMed

    Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah

    2016-07-01

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffman's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than sequence quality and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with a coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets when coverage was increased from ≥5 to ≥30 at quality score ≥30, respectively. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignments, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates be considered prior to building sequencing libraries, reporting genotyping error rates become standard practice, and that effects of genotyping errors on inference be evaluated in restriction-enzyme-based SNP studies. PMID:26946083
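    The core of the Mendelian-incompatibility approach can be sketched in a few lines: with genotypes coded as alternate-allele counts, a mother-offspring pair is impossible at an autosomal SNP only for the (0,2) and (2,0) combinations, so the observed incompatibility rate bounds the genotyping error rate from below. The toy data below stand in for real ddRAD calls.

      import numpy as np

      rng = np.random.default_rng(4)

      # Genotypes coded 0/1/2 (alternate-allele counts); one row per dyad,
      # one column per SNP locus.
      mother = rng.integers(0, 3, size=(50, 1000))
      offspring = mother.copy()
      flips = rng.random(mother.shape) < 0.02       # inject 2% genotyping errors
      offspring[flips] = rng.integers(0, 3, size=flips.sum())

      # An offspring must inherit one maternal allele, so (0,2) and (2,0)
      # mother-offspring combinations are Mendelian-incompatible.
      incompatible = ((mother == 0) & (offspring == 2)) | \
                     ((mother == 2) & (offspring == 0))
      print(f"observed incompatibility rate: {incompatible.mean():.4f}")
      # Only a subset of errors produces a visible incompatibility, so this
      # is a lower bound on the per-genotype error rate.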

  8. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  9. Controlling Type I Error Rate in Evaluating Differential Item Functioning for Four DIF Methods: Use of Three Procedures for Adjustment of Multiple Item Testing

    ERIC Educational Resources Information Center

    Kim, Jihye

    2010-01-01

    In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present and high possibility of making such an error can weaken the validity of the assessment.…

  10. High accuracy optical rate sensor

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, J.

    1990-01-01

    Optical rate sensors, in particular CCD arrays, will be used on Space Station Freedom to track stars in order to provide an inertial attitude reference. An algorithm that provides attitude rate information by directly manipulating the sensor pixel intensity output is presented. The star image produced by a sensor in the laboratory is modeled. Simulated, moving star images are generated, and the algorithm is applied to this data for a star moving at a constant rate. The algorithm produces an accurate derived rate from these data. A step rate change requires two frames for the output of the algorithm to accurately reflect the new rate. When zero-mean Gaussian noise with a standard deviation of 5 is added to the simulated data of a star image moving at a constant rate, the algorithm derives the rate with an error of 1.9 percent at a rate of 1.28 pixels per frame.
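    One simple way to derive rate directly from pixel intensities, consistent with the description above though not necessarily the paper's algorithm, is to difference intensity-weighted centroids between frames. A 1-D sketch with a Gaussian star image:

      import numpy as np

      def centroid(frame):
          # Intensity-weighted centroid of a star image (1-D cut for simplicity).
          x = np.arange(frame.size)
          return (x * frame).sum() / frame.sum()

      rng = np.random.default_rng(6)
      true_rate = 1.28                            # pixels per frame, as in the study
      positions = 20 + true_rate * np.arange(10)  # star position in each frame

      rates, prev = [], None
      for pos in positions:
          x = np.arange(64)
          frame = np.exp(-0.5 * ((x - pos) / 1.5) ** 2)       # Gaussian star image
          frame += rng.normal(0, 0.02, size=64).clip(min=0)   # non-negative noise floor
          c = centroid(frame)
          if prev is not None:
              rates.append(c - prev)
          prev = c
      print(f"derived rate: {np.mean(rates):.3f} pixels/frame (true {true_rate})")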

  11. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  12. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  13. Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles.

    PubMed

    Traverse, Charles C; Ochman, Howard

    2016-03-22

    Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10^-5 per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10^-5 per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10^-5 per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella. PMID:26884158

  14. Design and verification of a bit error rate tester in Altera FPGA for optical link developments

    NASA Astrophysics Data System (ADS)

    Cao, T.; Chang, J.; Gong, D.; Liu, C.; Liu, T.; Xiang, A.; Ye, J.

    2010-12-01

    This paper presents a custom bit error rate (BER) tester implementation in an Altera Stratix II GX signal integrity development kit. This BER tester deploys a parallel to serial pseudo random bit sequence (PRBS) generator, a bit and link status error detector and an error logging FIFO. The auto-correlation pattern enables receiver synchronization without specifying protocol at the physical layer. The error logging FIFO records both bit error data and link operation events. The tester's BER and data acquisition functions are utilized in a proton test of a 5 Gbps serializer. Experimental and data analysis results are discussed.
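    The PRBS generator at the heart of such a tester is a short linear-feedback shift register. Below is a software model of a PRBS-7 generator; the polynomial choice (x^7 + x^6 + 1) is illustrative, since the paper does not state which PRBS length was deployed.

      def prbs7(nbits, seed=0x7F):
          # PRBS-7 (x^7 + x^6 + 1) Fibonacci LFSR; hardware testers clock an
          # equivalent register at the line rate.
          state = seed & 0x7F
          out = []
          for _ in range(nbits):
              newbit = ((state >> 6) ^ (state >> 5)) & 1
              out.append(state & 1)
              state = ((state << 1) | newbit) & 0x7F
          return out

      seq = prbs7(254)
      # A maximal-length 7-bit LFSR repeats every 2^7 - 1 = 127 bits.
      assert seq[:127] == seq[127:254]
      print("PRBS-7 period check passed")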

  15. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    PubMed

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data. PMID:25540125
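    A minimal sketch of the flag-and-delete idea follows; the deviation-from-local-median rule and the 30% tolerance are assumptions, and the paper's algorithm-supported criterion differs in detail.

      import numpy as np

      def flag_outliers(rr_ms, window=5, tol=0.30):
          # Flag inter-beat intervals deviating from the local median by more
          # than `tol`; flagged values are deleted rather than replaced, since
          # inserting means would artificially reduce heart rate variability.
          rr = np.asarray(rr_ms, dtype=float)
          flags = np.zeros(rr.size, dtype=bool)
          for i in range(rr.size):
              lo, hi = max(0, i - window), min(rr.size, i + window + 1)
              local = np.median(np.delete(rr[lo:hi], i - lo))
              flags[i] = abs(rr[i] - local) > tol * local
          return flags

      rr = [612, 598, 620, 1230, 605, 615, 310, 600, 608]  # toy trace with artifacts
      flags = flag_outliers(rr)
      print("flagged:", [v for v, f in zip(rr, flags) if f])
      print("kept:   ", [v for v, f in zip(rr, flags) if not f])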

  16. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments. PMID:23270978

  17. Optimal GSTDN/TDRSS bit error rate evaluation using limited sample sizes

    NASA Technical Reports Server (NTRS)

    Coffey, R. E.; Lawrence, G. M.; Stuart, J. R.

    1982-01-01

    Statistical studies of telemetry errors were made on data from the Solar Mesosphere Explorer (SME). Examination of frame sync words, as received at the ground station, indicated a wide spread of bit error rates (BER) among stations. A study of the distribution of errors per station pass, however, showed that there was a tendency for the station software to add an even number of spurious errors to the count. A count of wild points in science data, rejecting drop-outs and other system errors, yielded an average random BER of 3.1 x 10^-6 with 99% confidence limits of 2.6 x 10^-6 and 3.8 x 10^-6. The system errors are typically 5 to 100 times more frequent than the truly random errors.
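    Confidence limits like these can be reproduced with an exact binomial (Clopper-Pearson) interval on the observed error count. The sketch below uses illustrative numbers, not the SME telemetry counts.

      from scipy import stats

      def ber_ci(errors, nbits, conf=0.99):
          # Exact (Clopper-Pearson) two-sided confidence interval for a BER
          # estimated from a limited number of observed bit errors.
          alpha = 1 - conf
          lo = stats.beta.ppf(alpha / 2, errors, nbits - errors + 1) if errors else 0.0
          hi = stats.beta.ppf(1 - alpha / 2, errors + 1, nbits - errors)
          return lo, hi

      errors, nbits = 31, 10_000_000      # e.g. 31 errors in 1e7 bits (illustrative)
      lo, hi = ber_ci(errors, nbits)
      print(f"BER = {errors / nbits:.2e}, 99% CI [{lo:.2e}, {hi:.2e}]")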

  18. Design and Verification of an FPGA-based Bit Error Rate Tester

    NASA Astrophysics Data System (ADS)

    Xiang, Annie; Gong, Datao; Hou, Suen; Liu, Chonghan; Liang, Futian; Liu, Tiankuan; Su, Da-Shung; Teng, Ping-Kun; Ye, Jingbo

    Bit error rate (BER) is the principle measure of performance of a data transmission link. With the integration of high-speed transceivers inside a field programmable gate array (FPGA), the BER testing can now be handled by transceiver-enabled FPGA hardware. This provides a cheaper alternative to dedicated table-top equipment and offers the flexibility of test customization and data analysis. This paper presents a BER tester implementation based on the Altera Stratix II GX and IV GT development boards. The architecture of the tester is described. Lab test results and field test data analysis are discussed. The Stratix II GX tester operates at up to 5 Gbps and the Stratix IV GT tester operates at up to 10 Gbps, both in 4 duplex channels. The tester deploys a pseudo random bit sequence (PRBS) generator and detector, a transceiver controller, and an error logger. It also includes a computer interface for data acquisition and user configuration. The tester's functionality was validated and its performance characterized in a point-to-point serial optical link setup. BER vs. optical receiver sensitivity was measured to emulate stressed link conditions. The Stratix II GX tester was also used in a proton test on a custom designed serializer chip to record and analyse radiation-induced errors.

  19. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  20. Bit-Error-Rate Performance of a Gigabit Ethernet O-CDMA Technology Demonstrator (TD)

    SciTech Connect

    Hernandez, V J; Mendez, A J; Bennett, C V; Lennon, W J

    2004-07-09

    An O-CDMA TD based on 2-D (wavelength/time) codes is described, with bit-error-rate (BER) and eye-diagram measurements given for eight users. Simulations indicate that the TD can support 32 asynchronous users.

  1. Exact error rate analysis of free-space optical communications with spatial diversity over Gamma-Gamma atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Li, Kangning; Tan, Liying; Yu, Siyuan; Cao, Yubin

    2016-02-01

    The error rate performances and outage probabilities of free-space optical (FSO) communications with spatial diversity are studied for Gamma-Gamma turbulent environments. Equal gain combining (EGC) and selection combining (SC) diversity are considered as practical schemes to mitigate turbulence. The exact bit-error rate (BER) expression and outage probability are derived for a direct-detection EGC multiple-aperture receiver system. BER performances and outage probabilities are analyzed and compared for different numbers of sub-apertures, each having aperture area A, with EGC and SC techniques. BER performances and outage probabilities of a single monolithic aperture and a multiple-aperture receiver system with the same total aperture area are compared under thermal-noise-limited and background-noise-limited conditions. It is shown that a multiple-aperture receiver system can greatly improve communication performance. These analytical tools are useful in providing highly accurate error rate estimates for FSO communication systems.
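    For a single-aperture baseline, the average BER over Gamma-Gamma turbulence is a one-dimensional integral of the conditional error rate against the irradiance pdf. A numerical sketch follows; the OOK/IM-DD conditional BER and the unit-mean normalization are assumptions, and the paper's EGC/SC analysis requires the joint statistics of several apertures.

      import numpy as np
      from scipy import integrate, special

      def gg_pdf(I, alpha, beta):
          # Gamma-Gamma irradiance pdf with unit mean irradiance.
          coef = 2 * (alpha * beta) ** ((alpha + beta) / 2) / (
              special.gamma(alpha) * special.gamma(beta))
          return (coef * I ** ((alpha + beta) / 2 - 1)
                  * special.kv(alpha - beta, 2 * np.sqrt(alpha * beta * I)))

      def qfunc(x):
          return 0.5 * special.erfc(x / np.sqrt(2.0))

      def ber_ook(snr, alpha, beta):
          # Average BER: integrate the conditional error rate Q(snr * I)
          # against the turbulence-induced irradiance distribution.
          integrand = lambda I: gg_pdf(I, alpha, beta) * qfunc(snr * I)
          val, _ = integrate.quad(integrand, 0, np.inf, limit=200)
          return val

      print(f"BER at snr=10, alpha=4, beta=2: {ber_ook(10.0, 4.0, 2.0):.3e}")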

  2. Error estimation for delta VLBI angle and angle rate measurements over baselines between a ground station and a geosynchronous orbiter

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1982-01-01

    Baselines between a ground station and a geosynchronous orbiter provide high resolution Delta VLBI data which is beyond the capability of ground-based interferometry. The effects of possible error sources on such Delta VLBI data for the determination of spacecraft angle and angle rate are investigated. For comparison, the effects on spacecraft-only VLBI are also studied.

  3. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors

    PubMed Central

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Introduction: Preanalytical errors, arising along the process from the initial test request to the admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples, with regard to their rates in certain test groups in our laboratory. Materials and methods: This preliminary study examined the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation, and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed or clotted specimens, insufficient specimen volume, and total request errors. Results: A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Hemolysis, clotted specimens, and insufficient specimen volume accounted for 8%, 24%, and 34% of rejections, respectively. Total request errors, particularly unintelligible requests, made up 32% of the total for inpatients. Conclusions: The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood drawing errors, especially insufficient specimen volume in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in sample rejection. PMID:25351356

  4. Bit error rate investigation of spin-transfer-switched magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Wang, Zihui; Zhou, Yuchen; Zhang, Jing; Huai, Yiming

    2012-10-01

    A method is developed to enable fast bit error rate (BER) characterization of spin-transfer-torque magnetic random access memory magnetic tunnel junction (MTJ) cells without integration with a complementary metal-oxide-semiconductor (CMOS) circuit. By utilizing the signal reflected from the devices under test, the measurement setup allows a fast measurement of bit error rates at >10^6 writing events per second. It is further shown that this method provides a time-domain capability to examine the MTJ resistance states during a switching event, which can assist write error analysis in great detail. The BER of a set of spin-transfer-torque MTJ cells has been evaluated using this method, and bit-error-free operation (down to 10^-8) for optimized in-plane MTJ cells has been demonstrated.

  5. Compensatory and Noncompensatory Information Integration and Halo Error in Performance Rating Judgments.

    ERIC Educational Resources Information Center

    Kishor, Nand

    1992-01-01

    The relationship between compensatory and noncompensatory information integration and the intensity of the halo effect in performance rating was studied. Seventy University of British Columbia (Canada) students rated 27 teacher profiles. The hypothesis that the way performance information is mentally integrated affects the intensity of halo error was supported.…

  6. A stochastic node-failure network with individual tolerable error rate at multiple sinks

    NASA Astrophysics Data System (ADS)

    Huang, Cheng-Fu; Lin, Yi-Kuei

    2014-05-01

    Many enterprises consider several criteria during data transmission, such as availability, delay, loss, and out-of-order packets, from the service level agreement (SLA) point of view. Hence internet service providers and customers are increasingly focusing on the tolerable error rate in the transmission process. The internet service provider should meet the specified demand and keep to a certain transmission error rate, per its SLA with each customer. This paper evaluates the system reliability, that is, the probability that the demand can be fulfilled under the tolerable error rate at all sinks, by modeling a stochastic node-failure network (SNFN) in which each component (edge or node) has several capacities and a transmission error rate. An efficient algorithm is first proposed to generate all lower boundary points, the minimal capacity vectors satisfying the demand and tolerable error rate for all sinks. Then the system reliability can be computed in terms of such points by applying a recursive sum of disjoint products. A benchmark network and a practical network in the United States are used to demonstrate the utility of the proposed algorithm. The computational complexity of the proposed algorithm is also analyzed.

  7. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Astrophysics Data System (ADS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-09-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  9. High Rate Digital Demodulator ASIC

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder; Sheikh, Salman; Koubek, Steve; Hoy, Scott; Gray, Andrew

    1998-01-01

    The architecture of a High Rate (600 megabits per second) Digital Demodulator (HRDD) ASIC capable of demodulating BPSK and QPSK modulated data is presented in this paper. The advantages of all-digital processing include increased flexibility and reliability with reduced reproduction costs. Conventional serial digital processing would require high processing rates, necessitating a hardware implementation in a technology other than CMOS, such as gallium arsenide (GaAs), which has high cost and power requirements. It is more desirable to use CMOS technology, with its lower power requirements and higher gate density. However, digital demodulation of high data rates in CMOS requires parallel algorithms to process the sampled data at a rate lower than the data rate. The parallel processing algorithms described here were developed jointly by NASA's Goddard Space Flight Center (GSFC) and the Jet Propulsion Laboratory (JPL). The resulting all-digital receiver has the capability to demodulate BPSK, QPSK, OQPSK, and DQPSK at data rates in excess of 300 megabits per second (Mbps) per channel. This paper provides an overview of the parallel architecture and features of the HRDD ASIC, and of the hardware architectures used to achieve flexibility over conventional high-rate analog or hybrid receivers. This flexibility includes a wide range of data rates, modulation schemes, and operating environments. In conclusion, it is shown how this high-rate digital demodulator can be used with an off-the-shelf A/D converter and a flexible analog front end, both numerically computer controlled, to produce a very flexible, low-cost, high-rate digital receiver.

  10. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power, defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly available to end users. Complexities of the formulas are presented to give insight into computation time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26914402
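    The r-power itself is straightforward to estimate by simulation, which is the kind of Monte Carlo strategy the authors compare against. A sketch for a deliberately simple setting (one-sample t-tests per endpoint, a Bonferroni single-step procedure, equal effects, independent endpoints), none of which reproduces the paper's exact formulas:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)

      def r_power(n, m, r, effect, alpha=0.05, sims=2000):
          # Monte Carlo r-power: probability that a Bonferroni single-step
          # procedure rejects at least r of m endpoints, all truly non-null.
          crit = alpha / m
          hits = 0
          for _ in range(sims):
              x = rng.normal(effect, 1.0, size=(n, m))
              t = x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(n))
              p = 2 * stats.t.sf(np.abs(t), df=n - 1)
              hits += (p < crit).sum() >= r
          return hits / sims

      for n in (30, 50, 80):
          print(f"n={n}: estimated r-power = {r_power(n, m=5, r=3, effect=0.5):.3f}")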

  11. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets.

    PubMed

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and had a trend of negative correlation with post-error miss rate increase in the subsequent Go/No-go task. These results suggested that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  12. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets

    PubMed Central

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W.; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and had a trend of negative correlation with post-error miss rate increase in the subsequent Go/No-go task. These results suggested that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  13. Methylphenidate improves diminished error and feedback sensitivity in ADHD: An evoked heart rate analysis.

    PubMed

    Groen, Yvonne; Mulder, Lambertus J M; Wijers, Albertus A; Minderaa, Ruud B; Althaus, Monika

    2009-09-01

    Attention Deficit Hyperactivity Disorder (ADHD) is a developmental disorder that has previously been related to a decreased sensitivity to errors and feedback. Supplementary to the traditional performance measures, this study uses autonomic measures to study this decreased sensitivity in ADHD and the modulating effects of medication. Children with ADHD, on and off Methylphenidate (Mph), and typically developing (TD) children performed a selective attention task with three feedback conditions: reward, punishment and no feedback. Evoked Heart Rate (EHR) responses were computed for correct and error trials. All groups performed more efficiently with performance feedback than without. EHR analyses, however, showed that enhanced EHR decelerations on error trials seen in TD children, were absent in the medication-free ADHD group for all feedback conditions. The Mph-treated ADHD group showed 'normalised' EHR decelerations to errors and error feedback, depending on the feedback condition. This study provides further evidence for a decreased physiological responsiveness to errors and error feedback in children with ADHD and for a modulating effect of Mph. PMID:19464338

  14. Estimation of the minimum mRNA splicing error rate in vertebrates.

    PubMed

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information, thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing, which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias from alternative splicing, we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1, in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci, at approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of transcription errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons. PMID:26811995

  15. Parallel Transmission Pulse Design with Explicit Control for the Specific Absorption Rate in the Presence of Radiofrequency Errors

    PubMed Central

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L.; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L.; Guerin, Bastien

    2016-01-01

    Purpose: A new framework for the design of parallel transmit (pTx) pulses is presented, introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. Methods: The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to recalculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Results: Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled "worst-case SAR" in the presence of errors of this magnitude at minor cost to the excitation profile quality. Conclusion: Our worst-case-SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. PMID:26147916
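
    The Methods describe an iterative re-design loop; a schematic sketch of that control flow is given below, with toy stand-ins for the two expensive steps (a real implementation solves a SAR-constrained pTx optimization and evaluates worst-case SAR from measured RF amplitude/phase errors; every function, limit, and error value here is a hypothetical placeholder).

        import numpy as np

        SAR_LIMIT = 10.0   # W/kg, illustrative safety limit

        def design_pulse(sar_constraint):
            # Stand-in: return the nominal SAR of a pulse designed under the
            # given constraint (the real step is a constrained optimization).
            return sar_constraint

        def worst_case_sar(nominal_sar, amp_err=0.08, phase_err_deg=3.0):
            # Stand-in: inflate nominal SAR by a factor reflecting the worst
            # RF amplitude/phase errors measured for the transmit chain.
            return nominal_sar * (1 + amp_err) ** 2 * (1 + np.deg2rad(phase_err_deg))

        constraint = SAR_LIMIT
        for iteration in range(20):
            nominal = design_pulse(constraint)
            wc = worst_case_sar(nominal)
            if wc <= SAR_LIMIT:
                break
            constraint *= SAR_LIMIT / wc   # tighten in proportion to the overshoot
        print(f"stopped after {iteration + 1} iterations, worst-case SAR = {wc:.2f} W/kg")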

  16. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However, in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case; there, case-specific probabilities of undetected errors are needed.

  17. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

    The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages, and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; identification also enables resequencing across differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first handles a virtual parallel circuit which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes one-third less time than equivalent end-to-end error correction under similar conditions.

  18. 20 CFR 602.43 - No incentives or sanctions based on specific error rates.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 20, Employees' Benefits, Vol. 3, revised as of 2010-04-01. Section 602.43: No incentives or sanctions based on specific error rates. Employment and Training Administration, Department of Labor; Quality Control in the Federal-State Unemployment Insurance System; Quality Control...

  19. 20 CFR 602.43 - No incentives or sanctions based on specific error rates.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    Title 20, Employees' Benefits, Vol. 3, revised as of 2012-04-01. Section 602.43: No incentives or sanctions based on specific error rates. Employment and Training Administration, Department of Labor; Quality Control in the Federal-State Unemployment Insurance System; Quality Control...

  20. The Impact of Statistically Adjusting for Rater Effects on Conditional Standard Errors of Performance Ratings

    ERIC Educational Resources Information Center

    Raymond, Mark R.; Harik, Polina; Clauser, Brian E.

    2011-01-01

    Prior research indicates that the overall reliability of performance ratings can be improved by using ordinary least squares (OLS) regression to adjust for rater effects. The present investigation extends previous work by evaluating the impact of OLS adjustment on standard errors of measurement ("SEM") at specific score levels. In addition, a…
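
    A minimal sketch of the kind of OLS adjustment evaluated in such studies (simulated data and a dummy-variable regression of our own devising, not the authors' models): regress ratings on rater indicators and subtract each rater's estimated deviation from the average rater.

        import numpy as np

        rng = np.random.default_rng(2)
        n_examinees, n_raters = 200, 10
        true_score = rng.normal(70, 8, n_examinees)
        rater_effect = rng.normal(0, 3, n_raters)      # systematic leniency/severity
        rater = rng.integers(0, n_raters, n_examinees)
        rating = true_score + rater_effect[rater] + rng.normal(0, 2, n_examinees)

        # OLS with one indicator column per rater (no separate intercept).
        X = np.zeros((n_examinees, n_raters))
        X[np.arange(n_examinees), rater] = 1.0
        beta, *_ = np.linalg.lstsq(X, rating, rcond=None)

        # Adjusted rating: remove the rater's estimated deviation from the mean rater.
        adjusted = rating - (beta[rater] - beta.mean())
        print("raw error SD:     ", round(float(np.std(rating - true_score)), 2))
        print("adjusted error SD:", round(float(np.std(adjusted - true_score)), 2))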

  1. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    2001-01-01

    Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…

  2. Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.

    2010-01-01

    We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.

  3. Error and Uncertainty in High-resolution Quantitative Sediment Budgets

    NASA Astrophysics Data System (ADS)

    Grams, P. E.; Schmidt, J. C.; Topping, D. J.; Yackulic, C. B.

    2012-12-01

    Sediment budgets are a fundamental tool in fluvial geomorphology. The power of the sediment budget is in the explicit coupling of sediment flux and sediment storage through the Exner equation for bed sediment conservation. Thus, sediment budgets may be calculated either from the divergence of the sediment flux or from measurements of morphologic change. Until recently, sediment budgets were typically calculated using just one of these methods, and often with sparse data. Recent advances in measurement methods for sediment transport have made it possible to measure sediment flux at much higher temporal resolution, while advanced methods for high-resolution topographic and bathymetric mapping have made it possible to measure morphologic change with much greater spatial resolution. Thus, it is now possible to measure all terms of a sediment budget and more thoroughly evaluate uncertainties in measurement methods and sampling strategies. However, measurements of sediment flux and morphologic change involve different types of uncertainty that are encountered over different time and space scales. Three major factors contribute uncertainty to sediment budgets computed from measurements of sediment flux. These are measurement error, the accumulation of error over time, and physical processes that cause systematic bias. In the absence of bias, uncertainty is proportional to measurement error and the ratio of fluxes at the two measurement stations. For example, if the ratio between measured sediment fluxes is more than 0.8, measurement uncertainty must be less than 10 percent in order to calculate a meaningful sediment budget. Systematic bias in measurements of flux can introduce much larger uncertainty. The uncertainties in sediment budgets computed from morphologic measurements fall into three similar categories. These are measurement error, the spatial and temporal propagation of error, and physical processes that cause bias when measurements are interpolated or
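
    One way to make the flux-ratio statement concrete (a standard error-propagation argument in our notation, not the authors'): writing the budget as the difference between influx and efflux,

    \[ \Delta S = Q_{\mathrm{in}} - Q_{\mathrm{out}}, \qquad \sigma_{\Delta S} = \sqrt{\sigma_{\mathrm{in}}^{2} + \sigma_{\mathrm{out}}^{2}}, \]

    then with a common relative measurement error \(\epsilon\) at both stations and flux ratio \(r = Q_{\mathrm{out}} / Q_{\mathrm{in}}\),

    \[ \frac{\sigma_{\Delta S}}{\lvert \Delta S \rvert} = \frac{\epsilon \sqrt{1 + r^{2}}}{\lvert 1 - r \rvert}, \]

    so for \(r = 0.8\) and \(\epsilon = 0.10\) the budget uncertainty is already about 64% of the computed storage change, which illustrates why measurement error must stay below roughly 10 percent for the budget to remain meaningful.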

  4. High Frequency of Imprinted Methylation Errors in Human Preimplantation Embryos

    PubMed Central

    White, Carlee R.; Denomme, Michelle M.; Tekpetey, Francis R.; Feyles, Valter; Power, Stephen G. A.; Mann, Mellissa R. W.

    2015-01-01

    Assisted reproductive technologies (ARTs) represent the best chance for infertile couples to conceive, although increased risks for morbidities exist, including imprinting disorders. This increased risk could arise from ARTs disrupting genomic imprints during gametogenesis or preimplantation. The few studies examining ART effects on genomic imprinting primarily assessed poor quality human embryos. Here, we examined day 3 and blastocyst stage, good to high quality, donated human embryos for imprinted SNRPN, KCNQ1OT1 and H19 methylation. Seventy-six percent of day 3 embryos and 50% of blastocysts exhibited perturbed imprinted methylation, demonstrating that extended culture did not pose a greater risk for imprinting errors than short culture. Comparison of embryos with normal and abnormal methylation did not reveal any confounding factors. Notably, two embryos from male factor infertility patients using donor sperm harboured aberrant methylation, suggesting errors in these embryos cannot be explained by infertility alone. Overall, these results indicate that ART human preimplantation embryos possess a high frequency of imprinted methylation errors. PMID:26626153

  5. The Impact of Sex of the Speaker, Sex of the Rater and Profanity Type of Language Trait Errors in Speech Evaluation: A Test of the Rating Error Paradigm.

    ERIC Educational Resources Information Center

    Bock, Douglas G.; And Others

    1984-01-01

    This study (1) demonstrates the negative impact of profanity in a public speech and (2) sheds light on the conceptualization of the term "rating error." Implications for classroom teaching are discussed. (PD)

  6. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10-3 (error/particle/cm2), while the MTTF is approximately 110.7 h. PMID:27583533
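
    On how the two quoted figures may relate (an illustrative reading only; the paper's multi-signal flow graph model is not reproduced here): if the SFER \(\sigma\) is expressed as errors per unit particle fluence, then in an environment with particle flux \(\varphi\) the functional failure rate is \(\lambda = \sigma \varphi\) and

    \[ \mathrm{MTTF} = \frac{1}{\lambda} = \frac{1}{\sigma \varphi}, \]

    so an MTTF of about 110.7 h would correspond to the quoted SFER of roughly 10-3 error/(particle/cm2) evaluated at some particular assumed flux.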

  7. High Data Rate Instrument Study

    NASA Technical Reports Server (NTRS)

    Schober, Wayne; Lansing, Faiza; Wilson, Keith; Webb, Evan

    1999-01-01

    The High Data Rate Instrument Study was a joint effort between the Jet Propulsion Laboratory (JPL) and the Goddard Space Flight Center (GSFC). The objectives were to assess the characteristics of future high data rate Earth observing science instruments and then to assess the feasibility of developing data processing systems and communications systems required to meet those data rates. Instruments and technology were assessed for technology readiness dates of 2000, 2003, and 2006. The highest data rate instruments are hyperspectral and synthetic aperture radar instruments, which are capable of generating 3.2 Gigabits per second (Gbps) and 1.3 Gbps, respectively, with a technology readiness date of 2003. These instruments would require storage of 16.2 Terabits (Tb) of information (RF communications case of two orbits of data) or 40.5 Tb of information (optical communications case of five orbits of data) with a technology readiness date of 2003. Onboard storage capability in 2003 is estimated at 4 Tb; therefore, not all of the data created can be stored without processing or compression. Of the 4 Tb of stored data, RF communications can only send about one third to the ground, while optical communications capacity is estimated at 6.4 Tb across all three technology readiness dates of 2000, 2003, and 2006 used in the study. The study includes analysis of the onboard processing and communications technologies at these three dates and potential systems to meet the high data rate requirements. In the 2003 case, 7.8% of the data can be stored and downlinked by RF communications, while 10% of the data can be stored and downlinked with optical communications. The study conclusion is that only 1 to 10% of the data generated by high data rate instruments will be sent to the ground from now through 2006 unless revolutionary changes in spacecraft design and operations, such as intelligent data extraction, are developed.
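
    The quoted downlink fractions can be roughly back-calculated from the figures in the abstract (an illustrative reconstruction, not a computation from the study itself):

        # Figures taken from the abstract above (2003 technology readiness case).
        generated_rf_tb = 16.2                 # two orbits of data, RF case
        generated_opt_tb = 40.5                # five orbits of data, optical case
        storage_tb = 4.0                       # onboard storage estimate for 2003
        rf_downlink_tb = storage_tb / 3.0      # RF sends about one third of storage
        opt_downlink_tb = min(6.4, storage_tb) # optical capacity 6.4 Tb, storage-limited

        print(f"RF:      {100 * rf_downlink_tb / generated_rf_tb:.1f}% of generated data")
        print(f"Optical: {100 * opt_downlink_tb / generated_opt_tb:.1f}% of generated data")
        # Prints roughly 8% and 10%, in line with the 7.8% and 10% quoted above.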

  8. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k).sup.th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
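
    The decimation-matrix idea can be demonstrated in a few lines: over GF(2) the LFSR state update is multiplication by the companion matrix M, so raising M to the (n*k)-th power gives a matrix that advances the state n*k steps at once, which is what each parallel generator's XOR feedback network implements. A toy Python sketch using the primitive polynomial x^4 + x + 1 (illustrative only, not the patented circuit):

        import numpy as np

        # Companion matrix over GF(2) of the primitive polynomial x^4 + x + 1;
        # one multiplication by M advances the 4-bit LFSR state one step.
        M = np.array([[0, 0, 0, 1],
                      [1, 0, 0, 1],
                      [0, 1, 0, 0],
                      [0, 0, 1, 0]], dtype=np.uint8)

        def matpow_gf2(A, e):
            """Square-and-multiply matrix exponentiation over GF(2)."""
            R = np.eye(A.shape[0], dtype=np.uint8)
            while e:
                if e & 1:
                    R = (R @ A) % 2
                A = (A @ A) % 2
                e >>= 1
            return R

        k, n = 4, 2                        # 4 parallel generators, 2 bits per cycle each
        D = matpow_gf2(M, n * k)           # decimation matrix: jumps n*k = 8 steps

        s = np.array([1, 0, 0, 1], dtype=np.uint8)   # any nonzero seed
        serial = s.copy()
        for _ in range(n * k):             # step the LFSR 8 times, one step at a time
            serial = (M @ serial) % 2
        print(np.array_equal((D @ s) % 2, serial))   # True: one multiply == 8 steps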

  9. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    SciTech Connect

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.

  10. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  11. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  12. Automatic generation control of a hydrothermal system with new area control error considering generation rate constraint

    SciTech Connect

    Das, D.; Nanda, J.; Kothari, M.L.; Kothari, D.P. )

    1990-01-01

    The paper presents an analysis of the automatic generation control based on a new area control error strategy for an interconnected hydrothermal system in the discrete-mode considering generation rate constraints (GRCs). The investigations reveal that the system dynamic performances following a step load perturbation in either of the areas with constrained optimum gain settings and unconstrained optimum gain settings are not much different, hence optimum controller settings can be achieved without considering GRCs in the mathematical model.

  13. Safety Aspects of Pulsed Dose Rate Brachytherapy: Analysis of Errors in 1,300 Treatment Sessions

    SciTech Connect

    Koedooder, Kees Wieringen, Niek van; Grient, Hans N.B. van der; Herten, Yvonne R.J. van; Pieters, Bradley R.; Blank, Leo

    2008-03-01

    Purpose: To determine the safety of pulsed-dose-rate (PDR) brachytherapy by analyzing errors and technical failures during treatment. Methods and Materials: More than 1,300 patients underwent treatment with PDR brachytherapy, using five PDR remote afterloaders. Most patients were treated with consecutive pulse schemes, also outside regular office hours. Tumors were located in the breast, esophagus, prostate, bladder, gynecology, anus/rectum, orbit, and head/neck, with a miscellaneous group of small numbers, such as the lip, nose, and bile duct. Errors and technical failures were analyzed for 1,300 treatment sessions, in which nearly 20,000 pulses were delivered. For each tumor localization, the number and type of occurring errors were determined, as well as which localizations were more error prone than others. Results: By routinely using the built-in dummy check source, only 0.2% of all pulses showed an error during the phase of the pulse when the active source was outside the afterloader. Localizations treated using flexible catheters had greater error frequencies than those treated with straight needles or rigid applicators. Disturbed pulse frequencies were in the range of 0.6% for the anus/rectum on a classic version 1 afterloader to 14.9% for orbital tumors using a version 2 afterloader. Exceeding the planned overall treatment time by >10% was observed in only 1% of all treatments. Patients received their dose as originally planned in 98% of all treatments. Conclusions: Based on our institute's experience with 1,300 PDR treatments, we found that PDR is a safe brachytherapy treatment modality, both during and outside of office hours.

  14. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, and meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the specified Manchester encoded bits. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) requirements of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.
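
    To illustrate the integrate-and-dump detection of Manchester-encoded bits (a generic textbook sketch, not the brassboard design; all parameter values assumed): each bit interval is correlated against the Manchester chip pattern, the accumulator is dumped once per bit, and the sign of the integral gives the decision.

        import numpy as np

        rng = np.random.default_rng(3)
        N = 16                                                # samples per bit (assumed)
        template = np.r_[np.ones(N // 2), -np.ones(N // 2)]   # Manchester chip shape

        bits = rng.integers(0, 2, 500)
        signal = np.concatenate([template if b else -template for b in bits])
        noisy = signal + 1.5 * rng.standard_normal(signal.size)

        # Integrate-and-dump matched filter: one correlation per bit interval.
        corr = noisy.reshape(-1, N) @ template
        decisions = (corr > 0).astype(int)
        print("BER:", float(np.mean(decisions != bits)))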

  15. Error rates for nanopore discrimination among cytosine, methylcytosine, and hydroxymethylcytosine along individual DNA strands.

    PubMed

    Schreiber, Jacob; Wescoe, Zachary L; Abu-Shumays, Robin; Vivian, John T; Baatar, Baldandorj; Karplus, Kevin; Akeson, Mark

    2013-11-19

    Cytosine, 5-methylcytosine, and 5-hydroxymethylcytosine were identified during translocation of single DNA template strands through a modified Mycobacterium smegmatis porin A (M2MspA) nanopore under control of phi29 DNA polymerase. This identification was based on three consecutive ionic current states that correspond to passage of modified or unmodified CG dinucleotides and their immediate neighbors through the nanopore limiting aperture. To establish quality scores for these calls, we examined ~3,300 translocation events for 48 distinct DNA constructs. Each experiment analyzed a mixture of cytosine-, 5-methylcytosine-, and 5-hydroxymethylcytosine-bearing DNA strands that contained a marker that independently established the correct cytosine methylation status at the target CG of each molecule tested. To calculate error rates for these calls, we established decision boundaries using a variety of machine-learning methods. These error rates depended upon the identity of the bases immediately 5' and 3' of the targeted CG dinucleotide, and ranged from 1.7% to 12.2% for a single-pass read. We estimate that Q40 values (0.01% error rates) for methylation status calls could be achieved by reading single molecules 5-19 times depending upon sequence context. PMID:24167260
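
    For readers unfamiliar with quality scores: \(Q = -10 \log_{10} p\), so Q40 corresponds to \(p = 10^{-4} = 0.01\%\). One way to see how 5-19 reads can reach Q40 from single-pass error rates of 1.7-12.2% is a simple majority vote over n independent reads with per-read error \(\epsilon\) (a simplification; the authors' actual call-combination scheme is not specified here):

    \[ P_{\mathrm{maj}}(n, \epsilon) = \sum_{k > n/2} \binom{n}{k} \epsilon^{k} (1 - \epsilon)^{n - k}, \]

    which falls below \(10^{-4}\) at roughly n = 5 for \(\epsilon = 0.017\) and roughly n = 17-19 for \(\epsilon = 0.122\), consistent with the range quoted above.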

  16. Evaluating the Type II error rate in a sediment toxicity classification using the Reference Condition Approach.

    PubMed

    Rodriguez, Pilar; Maestre, Zuriñe; Martinez-Madrid, Maite; Reynoldson, Trefor B

    2011-01-17

    Sediments from 71 river sites in Northern Spain were tested using the oligochaete Tubifex tubifex (Annelida, Clitellata) chronic bioassay. 47 sediments were identified as reference primarily from macroinvertebrate community characteristics. The data for the toxicological endpoints were examined using non-metric MDS. Probability ellipses were constructed around the reference sites in multidimensional space to establish a classification for assessing test-sediments into one of three categories (Non Toxic, Potentially Toxic, and Toxic). The construction of such probability ellipses sets the Type I error rate. However, we also wished to include in the decision process for identifying pass-fail boundaries the degree of disturbance required to be detected, and the likelihood of being wrong in detecting that disturbance (i.e. the Type II error). Setting the ellipse size to use based on Type I error does not include any consideration of the probability of Type II error. To do this, the toxicological response observed in the reference sediments was manipulated by simulating different degrees of disturbance (simpacted sediments), and measuring the Type II error rate for each set of the simpacted sediments. From this procedure, the frequency at each probability ellipse of identifying impairment using sediments with known level of disturbance is quantified. Thirteen levels of disturbance and seven probability ellipses were tested. Based on the results the decision boundary for Non Toxic and Potentially Toxic was set at the 80% probability ellipse, and the boundary for Potentially Toxic and Toxic at the 95% probability ellipse. Using this approach, 9 test sediments were classified as Toxic, 2 as Potentially Toxic, and 13 as Non Toxic. PMID:20980065

  17. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05~mm, with maximum position errors of 0.1~mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5~mm, with maximum position errors of 10~mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10~m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2--3 as the GPS constellation changes throughout the day, with an average value of 3.5~mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400~km baseline of 9~mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  18. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  19. Influence of wave-front aberrations on bit error rate in inter-satellite laser communications

    NASA Astrophysics Data System (ADS)

    Yang, Yuqiang; Han, Qiqi; Tan, Liying; Ma, Jing; Yu, Siyuan; Yan, Zhibin; Yu, Jianjie; Zhao, Sheng

    2011-06-01

    We derive the bit error rate (BER) of inter-satellite laser communication (lasercom) links with on-off-keying systems in the presence of both wave-front aberrations and pointing error, but without considering the noise of the detector. Wave-front aberrations induced by receiver terminal have no influence on the BER, while wave-front aberrations induced by transmitter terminal will increase the BER. The BER depends on the area S which is truncated out by the threshold intensity of the detector (such as APD) on the intensity function in the receiver plane, and changes with root mean square (RMS) of wave-front aberrations. Numerical results show that the BER rises with the increasing of RMS value. The influences of Astigmatism, Coma, Curvature and Spherical aberration on the BER are compared. This work can benefit the design of lasercom system.

  20. Preliminary error budget for an optical ranging system: Range, range rate, and differenced range observables

    NASA Technical Reports Server (NTRS)

    Folkner, W. M.; Finger, M. H.

    1990-01-01

    Future missions to the outer solar system or human exploration of Mars may use telemetry systems based on optical rather than radio transmitters. Pulsed laser transmission can be used to deliver telemetry rates of about 100 kbits/sec with an efficiency of several bits for each detected photon. Navigational observables that can be derived from timing pulsed laser signals are discussed. Error budgets are presented based on nominal ground stations and spacecraft-transceiver designs. Assuming a pulsed optical uplink signal, two-way range accuracy may approach the few centimeter level imposed by the troposphere uncertainty. Angular information can be achieved from differenced one-way range using two ground stations with the accuracy limited by the length of the available baseline and by clock synchronization and troposphere errors. A method of synchronizing the ground station clocks using optical ranging measurements is presented. This could allow differenced range accuracy to reach the few centimeter troposphere limit.

  1. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
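
    The cover-sequence approach XORs the data with a pseudo-random bit stream so that long runs of identical bits become transition-rich, and the receiver removes the cover by XORing with the same synchronized sequence. A minimal Python sketch (register width, taps, and seed are illustrative choices, not the study's design):

        def lfsr_bits(seed, taps, width, n):
            """Fibonacci LFSR: emit n cover bits; feedback is the XOR of the
            tapped state bits (bit 0 is the output end)."""
            state, out = seed, []
            for _ in range(n):
                out.append(state & 1)
                fb = 0
                for t in taps:
                    fb ^= (state >> t) & 1
                state = (state >> 1) | (fb << (width - 1))
            return out

        data = [0] * 32                    # pathological payload: no transitions at all
        cover = lfsr_bits(seed=0b1010011, taps=(0, 1), width=7, n=len(data))
        scrambled = [d ^ c for d, c in zip(data, cover)]
        recovered = [s ^ c for s, c in zip(scrambled, cover)]

        transitions = lambda b: sum(x != y for x, y in zip(b, b[1:]))
        print("transitions before:", transitions(data), "after:", transitions(scrambled))
        print("descrambled intact:", recovered == data)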

  2. Performance monitoring following total sleep deprivation: effects of task type and error rate.

    PubMed

    Renn, Ryan P; Cote, Kimberly A

    2013-04-01

    There is a need to understand the neural basis of performance deficits that result from sleep deprivation. Performance monitoring tasks generate response-locked event-related potentials (ERPs), generated from the anterior cingulate cortex (ACC) located in the medial surface of the frontal lobe that reflect error processing. The outcome of previous research on performance monitoring during sleepiness has been mixed. The purpose of this study was to evaluate performance monitoring in a controlled study of experimental sleep deprivation using a traditional Flanker task, and to broaden this examination using a response inhibition task. Forty-nine young adults (24 male) were randomly assigned to a total sleep deprivation or rested control group. The sleep deprivation group was slower on the Flanker task and less accurate on a Go/NoGo task compared to controls. General attentional impairments were evident in stimulus-locked ERPs for the sleep deprived group: P300 was delayed on Flanker trials and smaller to Go-stimuli. Further, N2 was smaller to NoGo stimuli, and the response-locked ERN was smaller on both tasks, reflecting neurocognitive impairment during performance monitoring. In the Flanker task, higher error rate was associated with smaller ERN amplitudes for both groups. Examination of ERN amplitude over time showed that it attenuated in the rested control group as error rate increased, but such habituation was not apparent in the sleep deprived group. Poor performing sleep deprived individuals had a larger Pe response than controls, possibly indicating perseveration of errors. These data provide insight into the neural underpinnings of performance failure during sleepiness and have implications for workplace and driving safety. PMID:23384887

  3. High rate manure supernatant digestion.

    PubMed

    Bergland, Wenche Hennie; Dinamarca, Carlos; Toradzadegan, Mehrdad; Nordgård, Anna Synnøve Røstad; Bakke, Ingrid; Bakke, Rune

    2015-06-01

    The study shows that high rate anaerobic digestion may be an efficient way to obtain sustainable energy recovery from slurries such as pig manure. High process capacity and robustness to 5% daily load increases are observed in the 370 mL sludge bed AD reactors investigated. The supernatant from partly settled, stored pig manure was fed at rates giving hydraulic retention times, HRT, gradually decreased from 42 to 1.7 h imposing a maximum organic load of 400 g COD L(-1) reactor d(-1). The reactors reached a biogas production rate of 97 g COD L(-1) reactor d(-1) at the highest load at which process stress signs were apparent. The yield was ∼0.47 g COD methane g(-1) CODT feed at HRT above 17 h, gradually decreasing to 0.24 at the lowest HRT (0.166 NL CH4 g(-1) CODT feed decreasing to 0.086). Reactor pH was innately stable at 8.0 ± 0.1 at all HRTs with alkalinity between 9 and 11 g L(-1). The first stress symptom occurred as reduced methane yield when HRT dropped below 17 h. When HRT dropped below 4 h the propionate removal stopped. The yield from acetate removal was constant at 0.17 g COD acetate removed per g CODT substrate. This robust methanogenesis implies that pig manure supernatant, and probably other similar slurries, can be digested for methane production in compact and effective sludge bed reactors. Denaturing gradient gel electrophoresis (DGGE) analysis indicated a relatively fast adaptation of the microbial communities to manure and implies that non-adapted granular sludge can be used to start such sludge bed bioreactors. PMID:25776915

  4. Assessment of type I error rate associated with dose-group switching in a longitudinal Alzheimer trial.

    PubMed

    Habteab Ghebretinsae, Aklilu; Molenberghs, Geert; Dmitrienko, Alex; Offen, Walt; Sethuraman, Gopalan

    2014-01-01

    In clinical trials, there is always the possibility of using data-driven adaptation at the end of a study. There is, however, concern that the type I error rate of the trial could be inflated with such a design, thus necessitating multiplicity adjustment. In this project, a simulation experiment was set up to assess the type I error rate inflation associated with switching dose group as a function of dropout rate at the end of the study, where the primary analysis is in terms of a longitudinal outcome. This simulation is inspired by a clinical trial in Alzheimer's disease. The type I error rate was assessed under a number of scenarios, in terms of differing correlations between efficacy and tolerance, different missingness mechanisms, and different probabilities of switching. A collection of parameter values was used to assess the sensitivity of the analysis. Results from ignorable likelihood analysis show that the type I error rate with and without switching was approximately the posited error rate for the various scenarios. Under last observation carried forward (LOCF), the type I error rate blew up both with and without switching. The type I error inflation is clearly connected to the criterion used for switching. While in general switching in a way related to the primary endpoint may impact the type I error, this was not the case for most scenarios in the longitudinal Alzheimer trial setting under consideration, where patients are expected to worsen over time. PMID:24697817

  5. Phase error compensation methods for high-accuracy profile measurement

    NASA Astrophysics Data System (ADS)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Zhang, Zonghua; Jiang, Hao; Yin, Yongkai; Huang, Shujun

    2016-04-01

    In phase-shifting-algorithm-based fringe projection profilometry, the nonlinear intensity response of the projector-camera setup, called the gamma effect, is a major source of error in phase retrieval. This paper proposes two novel, accurate approaches to realize both active and passive phase error compensation based on a universal phase error model which is suitable for an arbitrary phase-shifting step. The experimental results on phase error compensation and profile measurement of standard components verified the validity and accuracy of the two proposed approaches, which remain robust under changing measurement conditions.
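
    For context, these are the standard N-step phase-shifting relations that such an error model builds on (textbook background in our notation, not the paper's own equations): with recorded fringe intensities

    \[ I_n(x, y) = A(x, y) + B(x, y) \cos\bigl(\phi(x, y) + 2\pi n / N\bigr), \qquad n = 0, \dots, N - 1, \]

    the wrapped phase is recovered as

    \[ \phi(x, y) = -\arctan\!\left( \frac{\sum_{n=0}^{N-1} I_n \sin(2\pi n / N)}{\sum_{n=0}^{N-1} I_n \cos(2\pi n / N)} \right). \]

    A nonlinear (gamma) projector-camera response distorts the sinusoidal fringes into a sum of harmonics, which propagates through this estimator as a periodic phase error; the proposed approaches estimate and compensate that error for an arbitrary phase-shifting step.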

  6. Symbol error rate bound of DPSK modulation system in directional wave propagation

    NASA Astrophysics Data System (ADS)

    Hua, Jingyu; Zhuang, Changfei; Zhao, Xiaomin; Li, Gang; Meng, Qingmin

    This paper presents a new approach to determine the symbol error rate (SER) bound of differential phase shift keying (DPSK) systems in a directional fading channel, where the von Mises distribution is used to model the non-isotropic angle of arrival (AOA). Our approach relies on the closed-form expression of the phase-difference probability density function (pdf) in coherent fading channels and leads to expressions for the DPSK SER bound involving a single finite-range integral which can be readily evaluated numerically. Moreover, simulation yields results consistent with the numerical computation.

  7. Digitally modulated bit error rate measurement system for microwave component evaluation

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo W.; Budinger, James M.

    1989-01-01

    The NASA Lewis Research Center has developed a unique capability for evaluation of the microwave components of a digital communication system. This digitally modulated bit-error-rate (BER) measurement system (DMBERMS) features a continuous data digital BER test set, a data processor, a serial minimum shift keying (SMSK) modem, noise generation, and computer automation. Application of the DMBERMS has provided useful information for the evaluation of existing microwave components and of design goals for future components. The design and applications of this system for digitally modulated BER measurements are discussed.

  8. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  9. High Resolution, High Frame Rate Video Technology

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. The HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) state of the art in video system performance; (2) development plan for the HHV system; (3) advanced technology for image gathering, coding, and processing; (4) data compression applied to HHV; (5) data transmission networks; and (6) results of the users' requirements survey conducted by NASA.

  10. The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.

    PubMed

    Fadaee, Shannon B; Migliaccio, Americo A

    2016-04-01

    The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation. PMID:26715411

  11. Investigation on the bit error rate performance of 40Gb/s space optical communication system based on BPSK scheme

    NASA Astrophysics Data System (ADS)

    Li, Mi; Li, Bowen; Zhang, Xuping; Song, Yuejiang; Liu, Jia; Tu, Guojie

    2015-08-01

    Space optical communication is attracting increasing attention because it offers advantages over microwave communication, such as high security and high communication quality. Space optical links have already achieved data rates on the order of Gb/s, and next-generation systems target the higher data rate of 40 Gb/s. Traditional optical communication systems, however, cannot operate at such high data rates. This paper introduces a ground-based optical communication system operating at 40 Gb/s as a step toward high-data-rate space optical communication. At 40 Gb/s, a waveguide modulator must be used to modulate the optical signal, which is then amplified by a laser amplifier. Moreover, a more sensitive avalanche photodiode (APD) serves as the detector to improve communication quality. Based on this system, we analyze the downlink communication quality of a space optical communication system at a data rate of 40 Gb/s. The bit error rate (BER) performance, a key measure of communication quality, is discussed as a function of several parameter ratios. The results show an optimum ratio of gain factor to divergence angle that yields the best BER performance, and that increasing the ratio of receiving diameter to divergence angle improves communication quality. These results help characterize optical communication systems at high data rates and can contribute to system design.

  12. Analytical Evaluation of Bit Error Rate Performance of a Free-Space Optical Communication System with Receive Diversity Impaired by Pointing Error

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2015-06-01

    Analysis is carried out to evaluate the conditional bit error rate, conditioned on a given value of pointing error, for a Free Space Optical (FSO) link with multiple receivers using Equal Gain Combining (EGC). The probability density function (pdf) of the output signal-to-noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BER of SISO and SIMO FSO links is analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance is evaluated for several values of the pointing jitter parameters and the number of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement in receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10-10. It is also observed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.
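
    The averaging step stated above has the standard form (generic notation, not the paper's):

    \[ \overline{P}_e = \int_{0}^{\infty} P_e(\gamma) \, f_{\gamma}(\gamma) \, d\gamma, \]

    where \(P_e(\gamma)\) is the BER conditioned on the combined output SNR \(\gamma\) (itself conditioned on the pointing error) and \(f_{\gamma}\) is the derived pdf of the EGC output SNR.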

  13. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    NASA Astrophysics Data System (ADS)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space will double every two years. The most widely available memory architectures today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half-pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a microelectronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or soft error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  14. Patch diameter limitation due to high chirp rates in focused SAR images

    NASA Astrophysics Data System (ADS)

    Doerry, Armin W.

    1994-10-01

    Polar-format processed synthetic aperture radar (SAR) images have a limited focused patch diameter that results from unmitigated phase errors. Very high chirp rates, encountered with fine-resolution short-pulse radars, exacerbate the problem via a residual video phase error term. This letter modifies the traditional maximum patch diameter expression to include the effects of very high chirp rates.

  15. Peat Accumulation in the Everglades (USA) during the Past 4000 Years: Rates, Drivers, and Sources of Error

    NASA Astrophysics Data System (ADS)

    Glaser, P. H.; Volin, J. C.; Givnish, T. J.; Hansen, B. C.; Stricker, C. A.

    2012-12-01

    Tropical and sub-tropical wetlands are considered to be globally important sources for greenhouse gases but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida, to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. AMS-14C dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  16. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: Rates, drivers, and sources of error

    NASA Astrophysics Data System (ADS)

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-09-01

    Tropical and subtropical wetlands are considered to be globally important sources of greenhouse gases, but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida in order to assess these problems and determine the factors that could govern carbon accumulation in this large subtropical wetland. Accelerator mass spectroscopy dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  17. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: rates, drivers, and sources of error

    USGS Publications Warehouse

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-01-01

    Tropical and sub-tropical wetlands are considered to be globally important sources of greenhouse gases, but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. Accelerator mass spectrometry dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.
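
    For readers who want the arithmetic made concrete, here is a minimal sketch of the two calculations this record describes. The hard-water reservoir offset, raw shell age, and cumulative carbon mass below are invented placeholders; only the 12.1 g m-2 yr-1 rate and the roughly 4000-year record length come from the abstract.

```python
# Hypothetical sketch: (1) subtract a hard-water reservoir offset from a
# measured radiocarbon age, (2) convert cumulative carbon mass and basal age
# into a long-term accumulation rate. Input values are illustrative only.

hard_water_offset_yr = 500            # assumed reservoir error (14C yr)
measured_shell_age_yr = 4500          # assumed raw AMS age of a gastropod shell
corrected_age_yr = measured_shell_age_yr - hard_water_offset_yr

cumulative_carbon_g_m2 = 48_400       # chosen so the rate matches the reported value
rate_g_m2_yr = cumulative_carbon_g_m2 / corrected_age_yr
print(f"corrected basal age: {corrected_age_yr} yr")
print(f"long-term C accumulation: {rate_g_m2_yr:.1f} g m-2 yr-1")  # ~12.1
```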

  18. Anti-saccade error rates as a measure of attentional bias in cocaine dependent subjects.

    PubMed

    Dias, Nadeeka R; Schmitz, Joy M; Rathnayaka, Nuvan; Red, Stuart D; Sereno, Anne B; Moeller, F Gerard; Lane, Scott D

    2015-10-01

    Cocaine-dependent (CD) subjects show attentional bias toward cocaine-related cues, and this form of cue-reactivity may be predictive of craving and relapse. Attentional bias has previously been assessed by models that present drug-relevant stimuli and measure physiological and behavioral reactivity (often reaction time). Studies of several CNS diseases outside of substance use disorders consistently report anti-saccade deficits, suggesting a compromise in the interplay between higher-order cortical processes in voluntary eye control (i.e., anti-saccades) and reflexive saccades driven more by involuntary midbrain perceptual input (i.e., pro-saccades). Here, we describe a novel attentional-bias task developed by using measurements of saccadic eye movements in the presence of cocaine-specific stimuli, combining previously distinct research domains to capitalize on their respective experimental and conceptual strengths. CD subjects (N = 46) and healthy controls (N = 41) were tested on blocks of pro-saccade and anti-saccade trials featuring cocaine and neutral stimuli (pictures). Analyses of eye-movement data indicated (1) greater overall anti-saccade errors in the CD group; (2) greater attentional bias in CD subjects as measured by anti-saccade errors to cocaine-specific (relative to neutral) stimuli; and (3) no differences in pro-saccade error rates. Attentional bias was correlated with scores on the obsessive-compulsive cocaine scale. The results demonstrate increased salience of and differential attention to cocaine cues in the CD group. The assay provides a sensitive index of saccadic (visual inhibitory) control, a specific index of attentional bias to drug-relevant cues, and preliminary insight into the visual circuitry that may contribute to drug-specific cue reactivity. PMID:26164486
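
    A hedged sketch of how the bias score described above might be computed; the trial and error counts are invented for illustration, and the authors' actual scoring may differ.

```python
# Attentional bias as the difference between anti-saccade error rates on
# cocaine-specific and neutral trials. Counts below are hypothetical.

def error_rate(errors: int, trials: int) -> float:
    return errors / trials

bias = error_rate(errors=18, trials=40) - error_rate(errors=11, trials=40)
print(f"attentional bias (cocaine - neutral anti-saccade error rate): {bias:+.3f}")
```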

  19. Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates

    SciTech Connect

    Zamanali, J.H.; Hubbard, F.R.; Mosleh, A.; Waller, M.A.

    1992-01-01

    The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition.
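
    The following is an illustrative sketch of the core SLIM calculation that the enhanced methodology builds on; the authors' specific improvements are not reproduced here, and the PSF names, weights, ratings, and calibration anchors are all assumed.

```python
# Core SLIM idea: a success likelihood index (SLI) as a weighted sum of
# performance-shaping-factor ratings, converted to a human error rate via
# log-linear calibration against two anchor tasks with known error rates.
import math

weights = {"procedural difficulty": 0.5, "configuration": 0.2, "time available": 0.3}
ratings = {"procedural difficulty": 0.6, "configuration": 0.8, "time available": 0.4}

sli = sum(weights[k] * ratings[k] for k in weights)

# Hypothetical calibration anchors: SLI=0.9 -> HEP 1e-4, SLI=0.2 -> HEP 1e-1
a = (math.log10(1e-4) - math.log10(1e-1)) / (0.9 - 0.2)
b = math.log10(1e-1) - a * 0.2
her = 10 ** (a * sli + b)
print(f"SLI = {sli:.2f}, estimated HER = {her:.2e}")
```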

  20. Serialized Quantum Error Correction Protocol for High-Bandwidth Quantum Repeaters

    NASA Astrophysics Data System (ADS)

    Glaudell, Andrew; Waks, Edo; Taylor, Jacob

    Advances in single-photon creation, transmission, and detection suggest that sending quantum information over optical fibers may have low enough losses to be overcome using quantum error correction. Such error-corrected communication is equivalent to a novel quantum repeater scheme, but crucial questions regarding implementation and system requirements remain open. In this talk, I will show that long-range entangled bit generation with rates approaching 10^8 entangled bits per second may be possible using a completely serialized protocol, in which photons are generated, entangled, and error corrected via sequential, one-way interactions with as few matter qubits as possible. Provided loss and error rates of the required elements are below the threshold for quantum error correction, this scheme demonstrates improved performance over transmission of single photons. We find improvement in entangled bit rates at large distances using this serial protocol and various quantum error correcting codes.

  1. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…

  2. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy

    SciTech Connect

    Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-15

    Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with 192Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from ±5 to ±15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when

  3. Measuring error rates in genomic perturbation screens: gold standards for human functional genomics

    PubMed Central

    Hart, Traver; Brown, Kevin R; Sircoulomb, Fabrice; Rottapel, Robert; Moffat, Jason

    2014-01-01

    Technological advancement has opened the door to systematic genetics in mammalian cells. Genome-scale loss-of-function screens can assay fitness defects induced by partial gene knockdown, using RNA interference, or complete gene knockout, using new CRISPR techniques. These screens can reveal the basic blueprint required for cellular proliferation. Moreover, comparing healthy to cancerous tissue can uncover genes that are essential only in the tumor; these genes are targets for the development of specific anticancer therapies. Unfortunately, progress in this field has been hampered by off-target effects of perturbation reagents and poorly quantified error rates in large-scale screens. To improve the quality of information derived from these screens, and to provide a framework for understanding the capabilities and limitations of CRISPR technology, we derive gold-standard reference sets of essential and nonessential genes, and provide a Bayesian classifier of gene essentiality that outperforms current methods on both RNAi and CRISPR screens. Our results indicate that CRISPR technology is more sensitive than RNAi and that both techniques have nontrivial false discovery rates that can be mitigated by rigorous analytical methods. PMID:24987113

  4. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R., IV; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
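
    A compact reimplementation of the style of simulation described above (not the authors' code): power of an upper-tailed linear-regression t-test on log counts with lognormal sampling error at a 40% coefficient of variation.

```python
# Monte Carlo power of a trend test on simulated raptor counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(trend_pct, n_years, n_rep=1000, cv=0.40, alpha=0.05):
    sigma = np.sqrt(np.log(1 + cv**2))   # lognormal sigma giving the target CV
    years = np.arange(n_years)
    rejections = 0
    for _ in range(n_rep):
        expected = 100.0 * (1 + trend_pct / 100.0) ** years
        counts = expected * rng.lognormal(-sigma**2 / 2, sigma, n_years)
        res = stats.linregress(years, np.log(counts))
        if res.slope > 0 and res.pvalue / 2 < alpha:   # upper-tailed test
            rejections += 1
    return rejections / n_rep

print(power(trend_pct=5, n_years=10))   # modest power with 10 years of data
print(power(trend_pct=5, n_years=50))   # much higher power with 50 years
```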

  5. Accuracy of High-Rate GPS for Seismology

    NASA Technical Reports Server (NTRS)

    Elosegui, P.; Davis, J. L.; Oberlander, D.; Baena, R.; Ekstrom, G.

    2006-01-01

    We built a device for translating a GPS antenna on a positioning table to simulate the ground motions caused by an earthquake. The earthquake simulator is accurate to better than 0.1 mm in position, and provides the "ground truth" displacements for assessing the technique of high-rate GPS. We found that the root-mean-square error of the 1-Hz GPS position estimates over the 15-min duration of the simulated seismic event was 2.5 mm, with approximately 96% of the observations in error by less than 5 mm, and is independent of GPS antenna motion. The error spectrum of the GPS estimates is approximately flicker noise, with a 50% decorrelation time for the position error of approximately 1.6 s. We also found that, for the particular event simulated, the spectrum of surface deformations exceeds the GPS error spectrum within a finite band. More studies are required to determine whether a generally optimal bandwidth exists for a target group of seismic events.
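
    A minimal sketch of the quoted error statistics. The synthetic Gaussian error series (sigma = 2.5 mm) is an assumption chosen to match the reported RMS; under that assumption it also reproduces the roughly 96% of epochs within 5 mm.

```python
# RMS of 1-Hz position errors over a 15-min record, and the fraction of
# epochs within 5 mm, for an assumed Gaussian error series.
import numpy as np

rng = np.random.default_rng(0)
err_mm = rng.normal(0.0, 2.5, size=15 * 60)   # 900 one-second epochs

rms = np.sqrt(np.mean(err_mm**2))
frac_under_5mm = np.mean(np.abs(err_mm) < 5.0)
print(f"RMS = {rms:.2f} mm, |error| < 5 mm for {frac_under_5mm:.1%} of epochs")
```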

  6. The testing of the aspheric mirror high-frequency band error

    NASA Astrophysics Data System (ADS)

    Wan, JinLong; Li, Bo; Li, XinNan

    2015-08-01

    In recent years, high-frequency errors of mirror surfaces have received increasing attention, and the manufacture of advanced telescopes now carries explicit specifications for them. However, the off-axis aspheric sub-mirrors used in such telescopes are large, and measuring the full-aperture surface shape with an interferometer requires a complex optical compensation device. We therefore propose a sub-aperture stitching method for measuring the high-frequency errors of aspheric mirrors. The method uses no compensation components; only the sub-aperture surface shapes are measured. By analyzing the Zernike polynomial coefficients corresponding to the frequency errors, removing the first 15 Zernike terms, and then stitching the surface shapes, the high-frequency errors of the tested mirror are obtained over the full aperture. A 330 mm off-axis aspheric hexagonal mirror was measured with this method, yielding a complete map of its high-frequency surface errors and demonstrating the feasibility of the approach.

  7. The effect of narrow-band digital processing and bit error rate on the intelligibility of ICAO spelling alphabet words

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, Astrid

    1987-08-01

    The recognition of ICAO spelling alphabet words (ALFA, BRAVO, CHARLIE, etc.) is compared with diagnostic rhyme test (DRT) scores for the same conditions. The voice conditions include unprocessed speech; speech processed through the DOD standard linear-predictive-coding algorithm operating at 2400 bit/s with random error rates of 0, 2, 5, 8, and 12 percent; and speech processed through an 800-bit/s pattern-matching algorithm. The results suggest that, with distinctive vocabularies, word intelligibility can be expected to remain high even when DRT scores fall into the poor range. However, once the DRT scores fall below 75 percent, the intelligibility can be expected to fall off rapidly; at DRT scores below 50, the recognition of a distinctive vocabulary should also fall below 50 percent.

  8. An intravenous medication safety system: preventing high-risk medication errors at the point of care.

    PubMed

    Hatcher, Irene; Sullivan, Mark; Hutchinson, James; Thurman, Susan; Gaffney, F Andrew

    2004-10-01

    Improving medication safety at the point of care--particularly for high-risk drugs--is a major concern of nursing administrators. The medication errors most likely to cause harm are administration errors related to infusion of high-risk medications. An intravenous medication safety system is designed to prevent high-risk infusion medication errors and to capture continuous quality improvement data for best practice improvement. Initial testing with 50 systems in 2 units at Vanderbilt University Medical Center revealed that, even in the presence of a fully mature computerized prescriber order-entry system, the new safety system averted 99 potential infusion errors in 8 months. PMID:15577664

  9. Performance analysis of content-addressable search and bit-error rate characteristics of a defocused volume holographic data storage system.

    PubMed

    Das, Bhargab; Joseph, Joby; Singh, Kehar

    2007-08-01

    One of the methods for smoothing the high intensity dc peak in the Fourier spectrum for reducing the reconstruction error in a Fourier transform volume holographic data storage system is to record holograms some distance away from or in front of the Fourier plane. We present the results of our investigation on the performance of such a defocused holographic data storage system in terms of bit-error rate and content search capability. We have evaluated the relevant recording geometry through numerical simulation, by obtaining the intensity distribution at the output detector plane. This has been done by studying the bit-error rate and the content search capability as a function of the aperture size and position of the recording material away from the Fourier plane. PMID:17676163
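
    The record above evaluates BER numerically from simulated detector-plane intensities. A common shortcut for such estimates, not necessarily the authors' exact procedure, is the Gaussian Q-factor model sketched below; all intensity statistics are assumed values.

```python
# Estimate BER by modeling detected ON/OFF pixel intensities as Gaussians
# and computing the Q-factor. Means and standard deviations are illustrative.
from math import erfc, sqrt

mu1, sigma1 = 0.82, 0.07   # assumed mean/std of ON pixels
mu0, sigma0 = 0.21, 0.05   # assumed mean/std of OFF pixels

q = (mu1 - mu0) / (sigma1 + sigma0)
ber = 0.5 * erfc(q / sqrt(2))
print(f"Q = {q:.2f}, estimated BER = {ber:.2e}")
```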

  10. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, W. S.; Burkhart, J. F.; Kylling, A.

    2015-08-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
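
    Because the direct-beam component dominates, a worst-case estimate follows from geometry alone. The sketch below assumes the sensor is tilted directly toward the sun and neglects the diffuse component, so it slightly overstates the percentages quoted above.

```python
# Worst-case direct-beam tilt error: a sensor tilted toward the sun by delta
# at solar zenith angle sza over-measures the direct irradiance by
# cos(sza - delta)/cos(sza) - 1.
import numpy as np

sza = np.radians(60.0)
for delta_deg in (1, 3, 5):
    delta = np.radians(delta_deg)
    err = np.cos(sza - delta) / np.cos(sza) - 1
    print(f"tilt {delta_deg} deg -> {err:+.1%} direct-beam error")
```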

  11. Breaking Up Large High Schools: Five Common (and Understandable) Errors of Execution. ERIC Digest.

    ERIC Educational Resources Information Center

    Gregory, Tom

    In the past 30 years, research has suggested the need for much smaller high schools. In response, some administrators have attempted to subdivide big high schools into smaller entities. This digest reviews recent research on the movement to break up large schools and discusses five types of error common to such attempts--errors of autonomy, size,…

  12. Effect of automated drug distribution systems on medication error rates in a short-stay geriatric unit

    PubMed Central

    Cousein, Etienne; Mareville, Julie; Lerooy, Alexandre; Caillau, Antoine; Labreuche, Julien; Dambre, Delphine; Odou, Pascal; Bonte, Jean-Paul; Puisieux, François; Decaudin, Bertrand; Coupé, Patrick

    2014-01-01

    Rationale, aims and objectives To assess the impact of an automated drug distribution system on medication errors (MEs). Methods Before-after observational study in a 40-bed short stay geriatric unit within a 1800 bed general hospital in Valenciennes, France. Researchers attended nurse medication administration rounds and compared administered to prescribed drugs, before and after the drug distribution system changed from a ward stock system (WSS) to a unit dose dispensing system (UDDS), integrating a unit dose dispensing robot and automated medication dispensing cabinet (AMDC). Results A total of 615 opportunities of errors (OEs) were observed among 148 patients treated during the WSS period, and 783 OEs were observed among 166 patients treated during the UDDS period. ME [medication administration error (MAE)] rates were calculated and compared between the two periods. Secondary measures included type of errors, seriousness of errors and risk reduction for the patients. The implementation of an automated drug dispensing system resulted in a 53% reduction in MAEs. All error types were reduced in the UDDS period compared with the WSS period (P < 0.001). Wrong dose and wrong drug errors were reduced by 79.1% (2.4% versus 0.5%, P = 0.005) and 93.7% (1.9% versus 0.01%, P = 0.009), respectively. Conclusion An automated UDDS combining a unit dose dispensing robot and AMDCs could reduce discrepancies between ordered and administered drugs, thus improving medication safety among the elderly. PMID:24917185

  13. Prevalence of Refractive Errors among High School Students in Western Iran

    PubMed Central

    Hashemi, Hassan; Rezvan, Farhad; Beiranvand, Asghar; Papi, Omid-Ali; Hoseini Yazdi, Hosein; Ostadimoghaddam, Hadi; Yekta, Abbas Ali; Norouzirad, Reza; Khabazkhoob, Mehdi

    2014-01-01

    Purpose To determine the prevalence of refractive errors among high school students. Methods In a cross-sectional study, we applied stratified cluster sampling to high school students of Aligoudarz, Western Iran. Examinations included visual acuity, non-cycloplegic refraction by autorefraction, and fine tuning with retinoscopy. Myopia and hyperopia were defined as a spherical equivalent of -0.5 diopter (D) or worse and +0.5 D or worse, respectively; astigmatism was defined as cylindrical error >0.5 D and anisometropia as an interocular difference in spherical equivalent exceeding 1 D. Results Of 451 selected students, 438 participated in the study (response rate, 97.0%). Data from 434 subjects with a mean age of 16±1.3 (range, 14 to 21) years, including 212 (48.8%) male subjects, were analyzed. The prevalence of myopia, hyperopia and astigmatism was 29.3% [95% confidence interval (CI), 25-33.6%], 21.7% (95%CI, 17.8-25.5%), and 20.7% (95%CI, 16.9-24.6%), respectively. The prevalence of myopia increased significantly with age [odds ratio (OR)=1.30, P=0.003] and was higher among boys (OR=3.10, P<0.001). The prevalence of hyperopia was significantly higher in girls (OR=0.49, P=0.003). The prevalence of astigmatism was 25.9% in boys and 15.8% in girls (OR=2.13, P=0.002). The overall prevalence of high myopia and high hyperopia were 0.5% and 1.2%, respectively. The prevalence of with-the-rule, against-the-rule, and oblique astigmatism was 14.5%, 4.8% and 1.4%, respectively. Overall, 4.6% (95%CI, 2.6-6.6%) of subjects were anisometropic. Conclusion More than half of the high school students in Aligoudarz had at least one type of refractive error. Compared to similar studies, the prevalence of refractive errors was high in this age group. PMID:25279126
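
    As a check on the reported intervals, the usual normal-approximation confidence interval reproduces the myopia figure; this is a generic calculation, not the authors' code.

```python
# Normal-approximation 95% CI for the myopia prevalence (29.3% of 434 students).
from math import sqrt

p, n = 0.293, 434
half_width = 1.96 * sqrt(p * (1 - p) / n)
print(f"95% CI: {p - half_width:.3f} to {p + half_width:.3f}")  # ~0.250 to 0.336
```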

  14. On the Power of Multiple Independent Tests when the Experimentwise Error Rate Is Controlled.

    ERIC Educational Resources Information Center

    Hsu, Louis M.

    1980-01-01

    The problem addressed is of assessing the loss of power which results from keeping the probability that at least one Type I error will occur in a family of N statistical tests at a tolerably low level. (Author/BW)
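
    A small illustration of the trade-off being analyzed, using the Šidák adjustment for N independent tests and a two-sided z-test power calculation; the effect size and sample size are assumed.

```python
# Holding the experimentwise type I error at alpha_fw across N independent
# tests shrinks the per-test alpha, which costs power.
from scipy import stats

alpha_fw, N = 0.05, 10
alpha_per_test = 1 - (1 - alpha_fw) ** (1 / N)      # Sidak-adjusted level

# Power of one two-sided z-test, effect size d, n per group (assumed values)
d, n = 0.5, 50
z_crit = stats.norm.ppf(1 - alpha_per_test / 2)
ncp = d * (n / 2) ** 0.5
power = 1 - stats.norm.cdf(z_crit - ncp) + stats.norm.cdf(-z_crit - ncp)
print(f"per-test alpha = {alpha_per_test:.4f}, power = {power:.2f}")
```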

  15. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    SciTech Connect

    Chau, H.F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 - 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.

  16. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 - 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.
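
    Written out, the threshold quoted in both copies of this record is:

```latex
p_{\text{th}} \;=\; \frac{1}{2} - \frac{1}{10}\sqrt{5}
            \;\approx\; 0.5 - 0.2236 \;=\; 0.2764 \;\approx\; 27.6\%
```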

  17. High rates of molecular evolution in hantaviruses.

    PubMed

    Ramsden, Cadhla; Melo, Fernando L; Figueiredo, Luiz M; Holmes, Edward C; Zanotto, Paolo M A

    2008-07-01

    Hantaviruses are rodent-borne Bunyaviruses that infect the Arvicolinae, Murinae, and Sigmodontinae subfamilies of Muridae. The rate of molecular evolution in the hantaviruses has been previously estimated at approximately 10^-7 nucleotide substitutions per site, per year (substitutions/site/year), based on the assumption of codivergence and hence shared divergence times with their rodent hosts. If substantiated, this would make the hantaviruses among the slowest evolving of all RNA viruses. However, as hantaviruses replicate with an RNA-dependent RNA polymerase, with error rates in the region of one mutation per genome replication, this low rate of nucleotide substitution is anomalous. Here, we use a Bayesian coalescent approach to estimate the rate of nucleotide substitution from serially sampled gene sequence data for hantaviruses known to infect each of the 3 rodent subfamilies: Araraquara virus (Sigmodontinae), Dobrava virus (Murinae), Puumala virus (Arvicolinae), and Tula virus (Arvicolinae). Our results reveal that hantaviruses exhibit short-term substitution rates of 10^-2 to 10^-4 substitutions/site/year and so are within the range exhibited by other RNA viruses. The disparity between this substitution rate and that estimated assuming rodent-hantavirus codivergence suggests that the codivergence hypothesis may need to be reevaluated. PMID:18417484

  18. Design of high-power aspherical ophthalmic lenses with a reduced error budget

    NASA Astrophysics Data System (ADS)

    Sun, Wen-Shing; Chang, Horng; Sun, Ching-Cherng; Chang, Ming-Wen; Lin, Ching-Huang; Tien, Chuen-Lin

    2002-02-01

    In the lens optimization process, ophthalmic lens designers have usually constructed error functions at only three oblique fields (0.5, 0.7, and 1.0 of the full field). This seems sufficient to achieve a balanced trade-off when the astigmatic error, the power error, and the distortion are all considered simultaneously. However, for high-power ophthalmic lenses the aberration curves show serious violations even when aspherical coefficients are included. The analytical results indicate that error suppression at up to seven field points may be required in some cases. The suppression effects are excellent, and design examples of both positive and negative lenses are given.

  19. The Effects of a Student Sampling Plan on Estimates of the Standard Errors for Student Passing Rates.

    ERIC Educational Resources Information Center

    Lee, Guemin; Fitzpatrick, Anne R.

    2003-01-01

    Studied three procedures for estimating the standard errors of school passing rates using a generalizability theory model and considered the effects of student sample size. Results show that procedures differ in terms of assumptions about the populations from which students were sampled, and student sample size was found to have a large effect on…

  20. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels

    PubMed Central

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-01-01

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. PMID:26694878

  1. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:26694878

  2. Comparison of Self-Scoring Error Rate for SDS (Self Directed Search) (1970) and the Revised SDS (1977).

    ERIC Educational Resources Information Center

    Price, Gary E.; And Others

    A comparison of Self-Scoring Error Rate for Self Directed Search (SDS) and the revised SDS is presented. The subjects were college freshmen and sophomores who participated in career planning as a part of their orientation program, and a career workshop. Subjects, N=190 on first study and N=84 on second study, were then randomly assigned to the SDS…

  3. People's Hypercorrection of High-Confidence Errors: Did They Know It All Along?

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2011-01-01

    This study investigated the "knew it all along" explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when people are given corrective feedback, errors that are committed with high confidence are easier to correct than low-confidence errors. Experiment 1 showed that people were more likely to claim that…

  4. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  5. Internal pressure gradient errors in σ-coordinate ocean models in high resolution fjord studies

    NASA Astrophysics Data System (ADS)

    Berntsen, Jarle; Thiem, Øyvind; Avlesen, Helge

    2015-08-01

    Terrain following ocean models are today applied in coastal areas and fjords where the topography may be very steep. Recent advances in high performance computing facilitate model studies with very high spatial resolution. In general, numerical discretization errors tend to zero with the grid size. However, in fjords and near the coast the slopes may be very steep, and the internal pressure gradient errors associated with σ-models may be significant even in high resolution studies. The internal pressure gradient errors are due to errors when estimating the density gradients in σ-models, and these errors are investigated for two idealized test cases and for the Hardanger fjord in Norway. The methods considered are the standard second order method and a recently proposed method that is balanced such that the density gradients are zero for the case ρ = ρ(z) where ρ is the density and z is the vertical coordinate. The results show that by using the balanced method, the errors may be reduced considerably also for slope parameters larger than the maximum suggested value of 0.2. For the Hardanger fjord case initialized with ρ = ρ(z) , the errors in the results produced with the balanced method are orders of magnitude smaller than the corresponding errors in the results produced with the second order method.
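
    For context, the cancellation at the root of these errors can be seen in the standard σ-coordinate identity (textbook form for σ = z/H; the paper's actual discretization is not reproduced here):

```latex
\left.\frac{\partial p}{\partial x}\right|_{z}
  = \left.\frac{\partial p}{\partial x}\right|_{\sigma}
  - \frac{\sigma}{H}\,\frac{\partial H}{\partial x}\,
    \frac{\partial p}{\partial \sigma}
```

    Over steep slopes the two terms on the right are individually large and nearly cancel, so small truncation errors in either leave a spurious residual pressure force; the balanced method makes the residual vanish identically when ρ = ρ(z).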

  6. Compensation of spectral and RF errors in swept-source OCT for high extinction complex demodulation.

    PubMed

    Siddiqui, Meena; Tozburun, Serhat; Zhang, Ellen Ziyi; Vakoc, Benjamin J

    2015-03-01

    We provide a framework for compensating errors within passive optical quadrature demodulation circuits used in swept-source optical coherence tomography (OCT). Quadrature demodulation allows for detection of both the real and imaginary components of an interference fringe, and this information separates signals from positive and negative depth spaces. To achieve a high extinction (∼60 dB) between these positive and negative signals, the demodulation error must be less than 0.1% in amplitude and phase. It is difficult to construct a system that achieves this low error across the wide spectral and RF bandwidths of high-speed swept-source systems. In a prior work, post-processing methods for removing residual spectral errors were described. Here, we identify the importance of a second class of errors originating in the RF domain, and present a comprehensive framework for compensating both spectral and RF errors. Using this framework, extinctions >60 dB are demonstrated. A stability analysis shows that calibration parameters associated with RF errors are accurate for many days, while those associated with spectral errors must be updated prior to each imaging session. Empirical procedures to derive both RF and spectral calibration parameters simultaneously and to update spectral calibration parameters are presented. These algorithms provide the basis for using passive optical quadrature demodulation circuits with high speed and wide-bandwidth swept-source OCT systems. PMID:25836784

  7. Compensation of spectral and RF errors in swept-source OCT for high extinction complex demodulation

    PubMed Central

    Siddiqui, Meena; Tozburun, Serhat; Zhang, Ellen Ziyi; Vakoc, Benjamin J.

    2015-01-01

    We provide a framework for compensating errors within passive optical quadrature demodulation circuits used in swept-source optical coherence tomography (OCT). Quadrature demodulation allows for detection of both the real and imaginary components of an interference fringe, and this information separates signals from positive and negative depth spaces. To achieve a high extinction (∼60 dB) between these positive and negative signals, the demodulation error must be less than 0.1% in amplitude and phase. It is difficult to construct a system that achieves this low error across the wide spectral and RF bandwidths of high-speed swept-source systems. In a prior work, post-processing methods for removing residual spectral errors were described. Here, we identify the importance of a second class of errors originating in the RF domain, and present a comprehensive framework for compensating both spectral and RF errors. Using this framework, extinctions >60 dB are demonstrated. A stability analysis shows that calibration parameters associated with RF errors are accurate for many days, while those associated with spectral errors must be updated prior to each imaging session. Empirical procedures to derive both RF and spectral calibration parameters simultaneously and to update spectral calibration parameters are presented. These algorithms provide the basis for using passive optical quadrature demodulation circuits with high speed and wide-bandwidth swept-source OCT systems. PMID:25836784
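
    A sketch of the error budget stated in this record: for a quadrature pair with relative amplitude error g and phase error phi, the image (negative-depth) rejection follows the standard image-rejection-ratio formula, and errors near 0.1% give roughly 60 dB. The formula is the generic one for quadrature receivers, not the paper's specific calibration model.

```python
# Extinction between positive- and negative-depth signals as a function of
# quadrature amplitude and phase errors.
import numpy as np

def extinction_db(g: float, phi_rad: float) -> float:
    irr = (1 + 2 * g * np.cos(phi_rad) + g**2) / (1 - 2 * g * np.cos(phi_rad) + g**2)
    return 10 * np.log10(irr)

# ~0.1% amplitude error (g = 1.001) and ~0.001 rad phase error -> ~60 dB
print(f"{extinction_db(1.001, 0.001):.1f} dB")
```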

  8. Analysis of 454 sequencing error rate, error sources, and artifact recombination for detection of Low-frequency drug resistance mutations in HIV-1 DNA

    PubMed Central

    2013-01-01

    Background 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results We constructed two HIV-1 RT clones. Clone A was a wild type sequence. Clone B was identical to clone A except it contained 13 introduced drug resistant mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Although less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. Conclusion

  9. Power and Type I Error Rates for Rank-Score MANOVA Techniques.

    ERIC Educational Resources Information Center

    Pavur, Robert; Nath, Ravinder

    1989-01-01

    A Monte Carlo simulation study compared the power and Type I errors of the Wilks lambda statistic and the statistic of M. L. Puri and P. K. Sen (1971) on transformed data in a one-way multivariate analysis of variance. Preferred test procedures, based on robustness and power, are discussed. (SLD)

  10. A Comparison of Type I Error Rates of Alpha-Max with Established Multiple Comparison Procedures.

    ERIC Educational Resources Information Center

    Barnette, J. Jackson; McLean, James E.

    J. Barnette and J. McLean (1996) proposed a method of controlling Type I error in pairwise multiple comparisons after a significant omnibus F test. This procedure, called Alpha-Max, is based on a sequential cumulative probability accounting procedure in line with Bonferroni inequality. A missing element in the discussion of Alpha-Max was the…

  11. People’s Hypercorrection of High Confidence Errors: Did They Know it All Along?

    PubMed Central

    Metcalfe, Janet; Finn, Bridgid

    2010-01-01

    This study investigated the ‘knew it all along’ explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when given corrective feedback, errors that are committed with high confidence are easier to correct than low confidence errors. Experiment 1 showed that people were more likely to claim that they ‘knew it all along,’ when they were given the answers to high confidence errors as compared to low confidence errors. Experiments 2 and 3 investigated whether people really did know the correct answers before being told, or whether the claim in Experiment 1 was mere hindsight bias. Experiment 2 showed that (1) participants were more likely to choose the correct answer in a second guess multiple-choice test when they had expressed an error with high rather than low confidence, and (2) that they were more likely to generate the correct answers to high confidence as compared to low confidence errors, after being told they were wrong and to try again. Experiment 3 showed that (3) people were more likely to produce the correct answer when given a two-letter cue to high rather than low confidence errors, and that (4) when feedback was scaffolded by presenting the target letters one by one, people needed fewer such letter prompts to reach the correct answers when they had committed high, rather than low confidence errors. These results converge on the conclusion that when people said that they ‘knew it all along’, they were right. This knowledge, no doubt, contributes to why they are able to correct those high confidence errors so easily. PMID:21355668

  12. A comparison of error detection rates between the reading aloud method and the double data entry method.

    PubMed

    Kawado, Miyuki; Hinotsu, Shiro; Matsuyama, Yutaka; Yamaguchi, Takuhiro; Hashimoto, Shuji; Ohashi, Yasuo

    2003-10-01

    Data entry and its verification are important steps in the process of data management in clinical studies. In Japan, a kind of visual comparison called the reading aloud (RA) method is often used as an alternative to or in addition to the double data entry (DDE) method. In a typical RA method, one operator reads previously keyed data aloud while looking at a printed sheet or computer screen, and another operator compares the voice with the corresponding data recorded on case report forms (CRFs) to confirm whether the data are the same. We compared the efficiency of the RA method with that of the DDE method in the data management system of the Japanese Registry of Renal Transplantation. Efficiency was evaluated in terms of error detection rate and expended time. Five hundred sixty CRFs were randomly allocated to two operators for single data entry. Two types of DDE and RA methods were performed. Single data entry errors were detected in 358 of 104,720 fields (per-field error rate=0.34%). Error detection rates were 88.3% for the DDE method performed by a different operator, 69.0% for the DDE method performed by the same operator, 59.5% for the RA method performed by a different operator, and 39.9% for the RA method performed by the same operator. The differences in these rates were significant (p<0.001) between the two verification methods as well as between the types of operator (same or different). The total expended times were 74.8 hours for the DDE method and 57.9 hours for the RA method. These results suggest that in detecting errors of single data entry, the RA method is inferior to the DDE method, while its time cost is lower. PMID:14500053
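
    The arithmetic behind the rates quoted above, for readers who want to reproduce them; the detected-error counts are back-calculated from the reported percentages, so treat them as approximate.

```python
# Per-field single-entry error rate and detection rates of each verification
# scheme. Detected counts are inferred from the published percentages.
total_fields, entry_errors = 104_720, 358
print(f"single-entry error rate: {entry_errors / total_fields:.2%}")   # ~0.34%

detected = {"DDE, different operator": 316, "DDE, same operator": 247,
            "RA, different operator": 213, "RA, same operator": 143}
for method, n in detected.items():
    print(f"{method}: {n / entry_errors:.1%}")
```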

  13. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.
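
    The sensing principle described above, in its standard form (illustrative notation, not the patent's): driving a proof-mass at velocity v while the substrate rotates at rate Ω_z produces a Coriolis acceleration along the orthogonal sense mode,

```latex
a_{\text{Coriolis}} = 2\,\Omega_z\,v_{\text{drive}}(t),
\qquad
v_{\text{drive}}(t) = \omega_d X_d \cos(\omega_d t)
```

    Because the lever forces the two proof-masses to drive in anti-phase, their Coriolis terms have opposite signs while a common translational acceleration pushes them the same way, which is why the differential sense mode rejects it.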

  14. High burn rate solid composite propellants

    NASA Astrophysics Data System (ADS)

    Manship, Timothy D.

    High burn rate propellants help maintain high levels of thrust without requiring complex, high surface area grain geometries. Utilizing high burn rate propellants allows for simplified grain geometries that not only make production of the grains easier but also tend to have better mechanical strength, which is important in missiles undergoing high-g accelerations. Additionally, high burn rate propellants allow for a higher volumetric loading, which reduces the overall missile's size and weight. The purpose of this study is to present methods of achieving a high burn rate propellant and to develop a composite propellant formulation that burns at 1.5 inches per second at 1000 psia. In this study, several means of achieving a high burn rate propellant were presented. In addition, several candidate approaches were evaluated using the Kepner-Tregoe method, with hydroxyl terminated polybutadiene (HTPB)-based propellants using burn rate modifiers and dicyclopentadiene (DCPD)-based propellants being selected for further evaluation. Propellants with varying levels of nano-aluminum, nano-iron oxide, FeBTA, and overall solids loading were produced using the HTPB binder and evaluated in order to determine the effect the various ingredients have on the burn rate and to find a formulation that provides the burn rate desired. Experiments were conducted to compare the burn rates of propellants using the binders HTPB and DCPD. The DCPD formulation matched that of the baseline HTPB mix. Finally, an attempt was made to produce GAP-plasticized DCPD gumstock dogbones for mechanical evaluation. Results from the study show that nano-additives have a substantial effect on propellant burn rate, with nano-iron oxide having the largest influence. Of the formulations tested, the highest burn rate was an 84% solids loading mix using nano-aluminum, nano-iron oxide, and ammonium perchlorate in a 3:1 (20 micron:200 micron) ratio, which achieved a burn rate of 1.2 inches per second at 1000 psia.

  15. Multichannel analyzers at high rates of input

    NASA Technical Reports Server (NTRS)

    Rudnick, S. J.; Strauss, M. G.

    1969-01-01

    Multichannel analyzer, used with a gating system incorporating pole-zero compensation, pile-up rejection, and baseline-restoration, achieves good resolution at high rates of input. It improves resolution, reduces tailing and rate-contributed continuum, and eliminates spectral shift.

  16. High-rate lithium thionyl chloride cells

    NASA Technical Reports Server (NTRS)

    Goebel, F.

    1982-01-01

    A high-rate C cell with disc electrodes was developed to demonstrate current rates which are comparable to other primary systems. The tests performed established the limits of abuse beyond which the cell becomes hazardous. Tests include: impact, shock, and vibration tests; temperature cycling; and salt water immersion of fresh cells.

  17. ISS Update: High Rate Communications System

    NASA Video Gallery

    ISS Update Commentator Pat Ryan interviews Diego Serna, Communications and Tracking Officer, about the High Rate Communications System. Questions? Ask us on Twitter @NASA_Johnson and include the ha...

  18. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    Low-frequency error is a key factor affecting the uncontrolled geometric positioning accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. First, we introduce the overall flow of low-frequency error on-orbit analysis and calibration, which includes detection of optical-axis angle variation of the star sensors, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Second, we use the optical-axis angle variation detection method to analyze how the low-frequency error varies. Third, we use relative calibration and information fusion among star sensors to unify the datum and obtain high-precision attitude output. Finally, we construct the low-frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model presented here describes the variation of the low-frequency error well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is markedly improved after the step-wise calibration.

  19. Tracking in high-frame-rate imaging.

    PubMed

    Wu, Shih-Ying; Wang, Shun-Li; Li, Pai-Chi

    2010-01-01

    Speckle tracking has been used for motion estimation in ultrasound imaging. Unlike conventional Doppler techniques, which are angle-dependent, speckle tracking can be utilized to estimate velocity vectors. However, the accuracy of speckle-tracking methods is limited by speckle decorrelation, which is related to the displacement between two consecutive images, and, hence, combining high-frame-rate imaging and speckle tracking could potentially increase the accuracy of motion estimation. However, the lack of transmit focusing may also affect the tracking results and the high computational requirement may be problematic. This study therefore assessed the performance of high-frame-rate speckle tracking and compared it with conventional focusing. The effects of the signal-to-noise ratio (SNR), bulk motion, and velocity gradients were investigated in both experiments and simulations. The results show that high-frame-rate speckle tracking can achieve high accuracy if the SNR is sufficiently high. In addition, its computational complexity is acceptable because smaller search windows can be used due to the displacements between frames generally being smaller during high-frame-rate imaging. Speckle decorrelation resulting from velocity gradients within a sample volume is also not as significant during high-frame-rate imaging. PMID:20690428
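
    A minimal block-matching speckle tracker of the kind discussed above: find the displacement that maximizes normalized cross-correlation between a kernel in one frame and candidate windows in the next. This is a generic illustration, not the authors' implementation; note how a small search range suffices when inter-frame displacements are small, as at high frame rates.

```python
# Block-matching speckle tracking via normalized cross-correlation.
import numpy as np

def track(frame1, frame2, y0, x0, k=16, search=4):
    kernel = frame1[y0:y0 + k, x0:x0 + k]
    kn = (kernel - kernel.mean()) / kernel.std()
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = frame2[y0 + dy:y0 + dy + k, x0 + dx:x0 + dx + k]
            wn = (win - win.mean()) / win.std()
            ncc = np.mean(kn * wn)
            if ncc > best:
                best, best_dy, best_dx = ncc, dy, dx
    return best_dy, best_dx, best

rng = np.random.default_rng(2)
f1 = rng.random((64, 64))
f2 = np.roll(f1, shift=(1, 2), axis=(0, 1))   # known 1-px / 2-px motion
print(track(f1, f2, y0=24, x0=24))            # -> (1, 2, ~1.0)
```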

  20. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  1. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively introduce up to 2.7, 8.1, and 13.5 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.

  2. Improvement of bit error rate and page alignment in the holographic data storage system by using the structural similarity method.

    PubMed

    Chen, Yu-Ta; Ou-Yang, Mang; Lee, Cheng-Chung

    2012-06-01

    Although widely recognized as a promising candidate for the next generation of data storage devices, holographic data storage systems (HDSS) incur adverse effects such as noise, misalignment, and aberration. Therefore, based on the structural similarity (SSIM) concept, this work presents a more accurate locating approach than the gray level weighting method (GLWM). Three case studies demonstrate the effectiveness of the proposed approach. Case 1 focuses on achieving a high performance of a Fourier lens in HDSS, Cases 2 and 3 replace the Fourier lens with a normal lens to decrease the quality of the HDSS, and Case 3 demonstrates the feasibility of a defocus system in the worst-case scenario. Moreover, the bit error rate (BER) is evaluated in several average matrices extended from the located position. Experimental results demonstrate that the proposed SSIM method renders more accurate centering and a lower BER than the GLWM: the BER is about 2 dB lower in Cases 1 and 2, and about 1.5 dB lower in Case 3. PMID:22695607
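
    A Python sketch of the SSIM index that such a locating approach builds on, computed globally over two equally sized patches; the stabilizing constants follow the common convention for 8-bit images and are assumptions here, not the paper's parameters:

      import numpy as np

      def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
          # Global structural similarity of two equally sized patches.
          x, y = x.astype(float), y.astype(float)
          mx, my = x.mean(), y.mean()
          cov = ((x - mx) * (y - my)).mean()
          return ((2 * mx * my + c1) * (2 * cov + c2)) / \
                 ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

      # Page alignment: slide a stored template over the captured page and
      # keep the offset with the highest SSIM score.
      rng = np.random.default_rng(0)
      a = rng.integers(0, 256, (32, 32))
      print(ssim(a, a), ssim(a, 255 - a))   # 1.0 vs. a much lower score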

  3. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency, we take advantage of CUDA texture memory, using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net . PMID:20426693
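
    The spectral-alignment idea relies on fast k-mer membership queries; a minimal pure-Python sketch of the Bloom filter part (hash count, filter size, and the toy read are illustrative, and the original stores the filter in CUDA texture memory rather than host memory):

      import hashlib

      class BloomFilter:
          # Space-efficient set membership: false positives are possible,
          # false negatives are not, which is acceptable for spectrum queries.
          def __init__(self, size_bits=1 << 20, num_hashes=4):
              self.size, self.k = size_bits, num_hashes
              self.bits = bytearray(size_bits // 8)

          def _positions(self, item):
              for i in range(self.k):
                  h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                  yield int.from_bytes(h[:8], "big") % self.size

          def add(self, item):
              for p in self._positions(item):
                  self.bits[p // 8] |= 1 << (p % 8)

          def __contains__(self, item):
              return all(self.bits[p // 8] >> (p % 8) & 1
                         for p in self._positions(item))

      # Build the k-mer spectrum, then flag k-mers that fall outside it.
      spectrum, k = BloomFilter(), 4
      read = "ACGTACGTAGGT"
      for i in range(len(read) - k + 1):
          spectrum.add(read[i:i + k])
      print("ACGT" in spectrum, "TTTT" in spectrum)   # True False (w.h.p.)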

  4. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
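
    The abstract does not give the exact mapping; under the standard assumptions of additive Gaussian noise and a decision threshold midway across the eye, the relation is the Gaussian Q-function, sketched here in Python:

      import math

      def ber(eye_opening, noise_std):
          # BER = Q(q), with q = (half the eye opening) / (noise sigma).
          q = eye_opening / (2 * noise_std)
          return 0.5 * math.erfc(q / math.sqrt(2))

      print(ber(1.0, 0.1))   # q = 5 -> ~2.9e-7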

  5. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upset (MBU) are also discussed.
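
    A minimal Python sketch of the TMR mitigation itself: bitwise majority voting across three module outputs masks any single-module upset, so (roughly, and ignoring voter failures) the system error rate scales with the probability of two modules upsetting within one repair interval:

      def tmr_vote(a, b, c):
          # Bitwise majority of three redundant outputs; a single upset
          # module is out-voted by the two agreeing copies.
          return (a & b) | (a & c) | (b & c)

      good, upset = 0b1011, 0b1111        # one module takes a bit flip
      assert tmr_vote(good, good, upset) == good
      print(bin(tmr_vote(good, good, upset)))   # 0b1011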

  6. An Error Model for High-Time Resolution Satellite Precipitation Products

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Sapiano, M.; Adler, R. F.; Huffman, G. J.; Tian, Y.

    2013-12-01

    A new error scheme (PUSH: Precipitation Uncertainties for Satellite Hydrology) is presented to provide global estimates of errors for high time resolution, merged precipitation products. Errors are estimated for the widely used Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 product at daily/0.25° resolution, using the high quality NOAA CPC-UNI gauge analysis as the benchmark. Each of the following four scenarios is explored and explicitly modeled: correct no-precipitation detection (both satellite and gauges detect no precipitation), missed precipitation (satellite records a zero, but it is incorrect), false alarm (satellite detects precipitation, but the reference is zero), and hit (both satellite and gauges detect precipitation). Results over Oklahoma show that the estimated probability distributions are able to reproduce the probability density functions of the benchmark precipitation, in terms of both expected values and quantiles. PUSH adequately captures missed precipitation and false detection uncertainties, reproduces the spatial pattern of the error, and shows a good agreement between observed and estimated errors. The resulting error estimates could be attached to the standard products for the scientific community to use. Investigation is underway to: 1) test the approach in different regions of the world; 2) verify the ability of the model to discern the systematic and random components of the error; and 3) evaluate the model performance when higher time-resolution satellite products (i.e., 3-hourly) are employed.
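
    The four scenarios form a simple detection contingency split; a Python sketch of the bookkeeping, with a hypothetical rain/no-rain threshold (the scheme's per-scenario error models are not reproduced here):

      import numpy as np

      def classify(sat, gauge, thresh=0.1):
          # Split satellite/gauge pairs (e.g. mm/day) into the four
          # scenarios using a rain/no-rain threshold.
          s, g = sat >= thresh, gauge >= thresh
          return {"correct_no_rain": int(np.sum(~s & ~g)),
                  "missed":          int(np.sum(~s & g)),
                  "false_alarm":     int(np.sum(s & ~g)),
                  "hit":             int(np.sum(s & g))}

      sat   = np.array([0.0, 0.0, 2.4, 5.1])
      gauge = np.array([0.0, 1.2, 0.0, 4.7])
      print(classify(sat, gauge))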

  7. A Framework for Interpreting Type I Error Rates from a Product‐Term Model of Interaction Applied to Quantitative Traits

    PubMed Central

    Province, Michael A.

    2015-01-01

    ABSTRACT Adequate control of type I error rates will be necessary in the increasing genome‐wide search for interactive effects on complex traits. After observing unexpected variability in type I error rates from SNP‐by‐genome interaction scans, we sought to characterize this variability and test the ability of heteroskedasticity‐consistent standard errors to correct it. We performed 81 SNP‐by‐genome interaction scans using a product‐term model on quantitative traits in a sample of 1,053 unrelated European Americans from the NHLBI Family Heart Study, and additional scans on five simulated datasets. We found that the interaction‐term genomic inflation factor (lambda) showed inflation and deflation that varied with sample size and allele frequency; that similar lambda variation occurred in the absence of population substructure; and that lambda was strongly related to heteroskedasticity but not to minor non‐normality of phenotypes. Heteroskedasticity‐consistent standard errors narrowed the range of lambda, with HC3 outperforming HC0, but in individual scans tended to create new P‐value outliers related to sparse two‐locus genotype classes. We explain the lambda variation as a result of non‐independence of test statistics coupled with stochastic biases in test statistics due to a failure of the test to reach asymptotic properties. We propose that one way to interpret lambda is by comparison to an empirical distribution generated from data simulated under the null hypothesis and without population substructure. We further conclude that the interaction‐term lambda should not be used to adjust test statistics and that heteroskedasticity‐consistent standard errors come with limitations that may outweigh their benefits in this setting. PMID:26659945
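
    For reference, the genomic inflation factor is conventionally the median observed 1-df chi-square statistic divided by the theoretical null median; a Python sketch that also includes the empirical-null comparison the authors propose (all data simulated):

      import numpy as np

      NULL_MEDIAN_CHI2_1DF = 0.4549364   # median of a 1-df chi-square

      def genomic_lambda(chi2_stats):
          return np.median(chi2_stats) / NULL_MEDIAN_CHI2_1DF

      rng = np.random.default_rng(1)
      observed = rng.chisquare(df=1, size=50_000)        # stand-in for a scan
      null_lambdas = [genomic_lambda(rng.chisquare(df=1, size=50_000))
                      for _ in range(200)]               # empirical null of lambda
      lam = genomic_lambda(observed)
      print(lam, np.mean(np.array(null_lambdas) >= lam)) # lambda and its rank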

  8. Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results

    SciTech Connect

    Clark, E.L.

    1994-07-01

    Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
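
    A Python sketch of the underlying first-order Taylor-series propagation, with sensitivity coefficients evaluated numerically and a pressure ratio as a stand-in example (independent inputs assumed; the report derives the derivatives analytically):

      import math

      def propagate(f, x, sigma, h=1e-6):
          # sigma_f^2 = sum_i (df/dx_i * sigma_i)^2, with the sensitivity
          # coefficients df/dx_i taken by forward differences.
          var = 0.0
          for i in range(len(x)):
              xp = list(x)
              xp[i] += h
              var += (((f(xp) - f(x)) / h) * sigma[i]) ** 2
          return math.sqrt(var)

      ratio = lambda v: v[0] / v[1]                  # e.g. p / p_inf
      print(propagate(ratio, [150.0, 100.0], [0.5, 0.4]))   # ~0.0078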

  9. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  10. High Strain Rate Rheology of Polymer Melts

    NASA Astrophysics Data System (ADS)

    Kelly, Adrian; Gough, Tim; Whiteside, Ben; Coates, Phil D.

    2009-07-01

    A modified servo electric injection moulding machine has been used in air-shot mode with capillary dies fitted at the nozzle to examine the rheology of a number of commercial polymers at wall shear strain rates of up to 10⁷ s⁻¹. Shear and extensional flow properties were obtained through the use of long and orifice (close to zero land length) dies of the same diameter. A range of polyethylene, polypropylene and polystyrene melts have been characterized; good agreement was found between the three techniques used in the ranges where strain rates overlapped. Shear viscosity of the polymers studied was found to exhibit a plateau above approximately 1×10⁶ s⁻¹. A relationship between the measured high strain rate rheological behaviour and molecular structure was noted, with polymers containing larger side groups reaching the rate independent plateau at lower strain rates than those with simpler structures.

  11. Orifice-induced pressure error studies in Langley 7- by 10-foot high-speed tunnel

    NASA Technical Reports Server (NTRS)

    Plentovich, E. B.; Gloss, B. B.

    1986-01-01

    For some time it has been known that the presence of a static pressure measuring hole will disturb the local flow field in such a way that the sensed static pressure will be in error. The results of previous studies aimed at studying the error induced by the pressure orifice were for relatively low Reynolds number flows. Because of the advent of high Reynolds number transonic wind tunnels, a study was undertaken to assess the magnitude of this error at higher Reynolds numbers than previously published and to study a possible method of eliminating this pressure error. This study was conducted in the Langley 7- by 10-Foot High-Speed Tunnel on a flat plate. The model was tested at Mach numbers from 0.40 to 0.72 and at Reynolds numbers from 7.7 × 10⁶ to 11 × 10⁶ per meter (2.3 × 10⁶ to 3.4 × 10⁶ per foot), respectively. The results indicated that as orifice size increased, the pressure error also increased, but that a porous metal (sintered metal) plug inserted in an orifice could greatly reduce the pressure error induced by the orifice.

  12. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.

  13. Children with High Functioning Autism show increased prefrontal and temporal cortex activity during error monitoring

    PubMed Central

    Goldberg, Melissa C.; Spinelli, Simona; Joel, Suresh; Pekar, James J.; Denckla, Martha B.; Mostofsky, Stewart H.

    2010-01-01

    Evidence exists for deficits in error monitoring in autism. These deficits may be particularly important because they may contribute to excessive perseveration and repetitive behavior in autism. We examined the neural correlates of error monitoring using fMRI in 8–12-year-old children with high-functioning autism (HFA, n=11) and typically developing children (TD, n=15) during performance of a Go/No-Go task by comparing the neural correlates of commission errors versus correct response inhibition trials. Compared to TD children, children with HFA showed increased BOLD fMRI signal in the anterior medial prefrontal cortex (amPFC) and the left superior temporal gyrus (STempG) during commission error (versus correct inhibition) trials. A follow-up region-of-interest analysis also showed increased BOLD signal in the right insula in HFA compared to TD controls. Our findings of increased amPFC and STempG activity in HFA, together with the increased activity in the insula, suggest a greater attention towards the internally-driven emotional state associated with making an error in children with HFA. Since error monitoring occurs across different cognitive tasks throughout daily life, an increased emotional reaction to errors may have important consequences for early learning processes. PMID:21151713

  14. A high-strain-rate superplastic ceramic.

    PubMed

    Kim, B N; Hiraga, K; Morita, K; Sakka, Y

    2001-09-20

    High-strain-rate superplasticity describes the ability of a material to sustain large plastic deformation in tension at high strain rates of the order of 10⁻² to 10⁻¹ s⁻¹ and is of great technological interest for the shape-forming of engineering materials. High-strain-rate superplasticity has been observed in aluminium-based and magnesium-based alloys. But for ceramic materials, superplastic deformation has been restricted to low strain rates of the order of 10⁻⁵ to 10⁻⁴ s⁻¹ for most oxides and nitrides, with the presence of intergranular cavities leading to premature failure. Here we show that a composite ceramic material consisting of tetragonal zirconium oxide, magnesium aluminate spinel and alpha-alumina phases exhibits superplasticity at strain rates up to 1 s⁻¹. The composite also exhibits a large tensile elongation, exceeding 1,050 per cent for a strain rate of 0.4 s⁻¹. The tensile flow behaviour and deformed microstructure of the material indicate that superplasticity is due to a combination of limited grain growth in the constitutive phases and the intervention of dislocation-induced plasticity in the zirconium oxide phase. We suggest that the present results hold promise for the application of shape-forming technologies to ceramic materials. PMID:11565026

  15. The Effect of Administrative Boundaries and Geocoding Error on Cancer Rates in California

    PubMed Central

    Goldberg, Daniel W.; Cockburn, Myles G.

    2012-01-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. PMID:22469490

  16. High Bit Rate Experiments Over ACTS

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A.; Gary, J. Patrick; Edelsen, Burt; Helm, Neil; Cohen, Judith; Shopbell, Patrick; Mechoso, C. Roberto; Chung-Chun; Farrara, M.; Spahr, Joseph

    1996-01-01

    This paper describes two high data rate experiments that are being developed for the gigabit NASA Advanced Communications Technology Satellite (ACTS). The first is a telescience experiment that remotely acquires image data at the Keck telescope from the Caltech campus. The second is a distributed global climate application that is run between two supercomputer centers interconnected by ACTS. The implementation approach for each is described along with the expected results. The ACTS high data rate (HDR) ground station is also described in detail.

  17. High Rate for Type IC Supernovae

    SciTech Connect

    Muller, R.A.; Marvin-Newberg, H.J.; Pennypacker, Carl R.; Perlmutter, S.; Sasseen, T.P.; Smith, C.K.

    1991-09-01

    Using an automated telescope we have detected 20 supernovae in carefully documented observations of nearby galaxies. The supernova rates for late spiral (Sbc, Sc, Scd, and Sd) galaxies, normalized to a blue luminosity of 10¹⁰ L_B☉, are 0.4 h², 1.6 h², and 1.1 h² per 100 years for SNe type Ia, Ic, and II. The rate for type Ic supernovae is significantly higher than found in previous surveys. The rates are not corrected for detection inefficiencies, and do not take into account the indications that the Ic supernovae are fainter on the average than the previous estimates; therefore the true rates are probably higher. The rates are not strongly dependent on the galaxy inclination, in contradiction to previous compilations. If the Milky Way is a late spiral, then the rate of Galactic supernovae is greater than 1 per 30 ± 7 years, assuming h = 0.75. This high rate has encouraging consequences for future neutrino and gravitational wave observatories.

  18. Approximation and error estimation in high dimensional space for stochastic collocation methods on arbitrary sparse samples

    SciTech Connect

    Archibald, Richard K; Deiterding, Ralf; Hauck, Cory D; Jakeman, John D; Xiu, Dongbin

    2012-01-01

    We have developed a fast method that can capture piecewise smooth functions in high dimensions with high order and low computational cost. This method can be used for both approximation and error estimation of stochastic simulations where the computations can either be guided or come from a legacy database.

  19. Understanding High School Graduation Rates in Illinois

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  20. Baltimore District Tackles High Suspension Rates

    ERIC Educational Resources Information Center

    Maxwell, Lesli A.

    2007-01-01

    This article reports on how the Baltimore District tackles its high suspension rates. Driven by an increasing belief that zero-tolerance disciplinary policies are ineffective, more educators are embracing strategies that do not exclude misbehaving students from school for offenses such as insubordination, disrespect, cutting class, tardiness, and…

  1. Understanding High School Graduation Rates in Delaware

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  2. Assessing XCTD Fall Rate Errors using Concurrent XCTD and CTD Profiles in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Millar, J.; Gille, S. T.; Sprintall, J.; Frants, M.

    2010-12-01

    Refinements in the fall rate equation for XCTDs are not as well understood as those for XBTs, due in part to the paucity of concurrent and collocated XCTD and CTD profiles. During February and March 2010, the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES) conducted 31 collocated 1000-meter XCTD and CTD casts in the Drake Passage. These XCTD/CTD profile pairs are closely matched in space and time, with a mean distance between casts of 1.19 km and a mean lag time of 39 minutes. The profile pairs are well suited to address the XCTD fall rate problem specifically in higher latitude waters, where existing fall rate corrections have rarely been assessed. Many of these XCTD/CTD profile pairs reveal an observable depth offset in measurements of both temperature and conductivity. Here, the nature and extent of this depth offset is evaluated.
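
    Expendable probes convert elapsed time to depth with a quadratic fall-rate equation; a Python sketch showing how a small coefficient error produces the kind of depth offset the profile pairs reveal (the coefficients here are illustrative, of the order published for XBT probes, not XCTD-specific values):

      def depth(t, a=6.691, b=2.25e-3):
          # z(t) = a*t - b*t**2, depth in metres for elapsed time t in s.
          return a * t - b * t * t

      for t in (30, 60, 120):
          z_true, z_biased = depth(t), depth(t, a=6.691 * 1.01)
          print(t, round(z_true, 1), round(z_biased - z_true, 1))
      # a 1% fall-rate error grows into a multi-metre offset at depth,
      # visible as vertically shifted temperature/conductivity profiles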

  3. Compensating inherent linear move water application errors using a variable rate irrigation system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Continuous move irrigation systems such as linear move and center pivot irrigate unevenly when applying conventional uniform water rates due to the towers/motors stop/advance pattern. The effect of the cart movement pattern on linear move water application is larger on the first two spans which intr...

  4. An approach for reducing the error rate in automated lung segmentation.

    PubMed

    Gill, Gurman; Beichel, Reinhard R

    2016-09-01

    Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855±0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
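
    A Python sketch of the Dice overlap used both for evaluation and for the failure-rate criterion (the masks are toy data and the fusion classifier itself is not reproduced; the 0.97 threshold mirrors the text):

      import numpy as np

      def dice(a, b):
          # Dice coefficient of two binary masks: 2|A n B| / (|A| + |B|).
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      # Failure rate at an accuracy level: fraction of cases with Dice < 0.97.
      cases = [(np.ones((4, 4)), np.ones((4, 4))),
               (np.ones((4, 4)), np.tril(np.ones((4, 4))))]
      scores = [dice(gt, seg) for gt, seg in cases]
      print(scores, np.mean([s < 0.97 for s in scores]))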

  5. Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction

    PubMed Central

    Laehnemann, David; Borkhardt, Arndt

    2016-01-01

    Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159

  6. High rate vacuum deposited silicon layers

    NASA Astrophysics Data System (ADS)

    Kipperman, A. H. M.; van Zolingen, R. J. C.

    1982-08-01

    Silicon layers were deposited in vacuum at high rates (up to 50 microns/min) on aluminum-, silicon oxide-, and silicon nitride-coated stainless steel, pyrex, and silicon substrates. The morphological, crystallographic, and electrical properties of the layers were studied in as-grown and annealed conditions. Layers as-grown on aluminum-coated substrates had unsatisfactory electrical properties and too high an aluminum concentration to be acceptable for solar cells. Thermal annealing of layers on SiO2- and on Si3N4-coated substrates markedly improved their crystallographic and electrical properties. In all cases, silicon layers deposited at about 550 C showed a columnar structure which, after prolonged etching, was found to be composed of fibrils of about 0.3 microns in diameter extending over the entire thickness of the layer. It is suggested that further tests should be carried out at a substrate temperature of about 800 C maintaining the high deposition rates.

  7. Trends and weekly and seasonal cycles in the rate of errors in the clinical management of hospitalized patients.

    PubMed

    Buckley, David; Bulger, David

    2012-08-01

    Studies on the rate of adverse events in hospitalized patients seldom examine temporal patterns. This study presents evidence of both weekly and annual cycles. The study is based on a large and diverse data set, with nearly 5 yrs of data from a voluntary staff-incident reporting system of a large public health care provider in rural southeastern Australia. The data of 63 health care facilities were included, ranging from large non-metropolitan hospitals to small community and aged health care facilities. Poisson regression incorporating an observation-driven autoregressive effect using the GLARMA framework was used to explain daily error counts with respect to long-term trend and weekly and annual effects, with procedural volume as an offset. The annual pattern was modeled using a first-order sinusoidal effect. The rate of errors reported demonstrated an increasing annual trend of 13.4% (95% confidence interval [CI] 10.6% to 16.3%); however, this trend was only significant for errors of minor or no harm to the patient. A strong "weekend effect" was observed. The incident rate ratio for the weekend versus weekdays was 2.74 (95% CI 2.55 to 2.93). The weekly pattern was consistent for incidents of all levels of severity, but it was more pronounced for less severe incidents. There was an annual cycle in the rate of incidents, the number of incidents peaking in October, on the 282nd day of the year (spring in Australia), with an incident rate ratio of 1.09 (95% CI 1.05 to 1.14) compared to the annual mean. There was no so-called "killing season" or "July effect," as the peak in incident rate was not related to the commencement of work by new medical school graduates. The major finding of this study is that the rate of adverse events is greater on weekends and during spring. The annual pattern appears to be unrelated to the commencement of new graduates and potentially results from seasonal variation in the case mix of patients or the health of the medical workforce that alters
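
    A Python sketch of the non-autoregressive core of such a model: a Poisson GLM with a linear trend, a weekend indicator, a first-order sinusoidal annual term, and log procedural volume as the offset. The paper additionally fits GLARMA's observation-driven autoregressive component, which is omitted here; the data are simulated to mirror the reported weekend IRR:

      import numpy as np
      import statsmodels.api as sm

      days = np.arange(365 * 4)
      weekend = (days % 7 >= 5).astype(float)
      s = np.sin(2 * np.pi * days / 365.25)
      c = np.cos(2 * np.pi * days / 365.25)
      X = sm.add_constant(np.column_stack([days, weekend, s, c]))
      volume = np.full(days.shape, 200.0)          # procedural volume
      rng = np.random.default_rng(2)
      rate = np.exp(-4 + 2e-4 * days + np.log(2.74) * weekend + 0.09 * s)
      y = rng.poisson(rate * volume)               # simulated daily counts

      fit = sm.GLM(y, X, family=sm.families.Poisson(),
                   offset=np.log(volume)).fit()
      print(np.exp(fit.params[2]))   # weekend incident rate ratio, ~2.74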

  8. High strain rate behaviour of polypropylene microfoams

    NASA Astrophysics Data System (ADS)

    Gómez-del Río, T.; Garrido, M. A.; Rodríguez, J.; Arencón, D.; Martínez, A. B.

    2012-08-01

    Microcellular materials such as polypropylene foams are often used in protective applications and passive safety for packaging (electronic components, aeronautical structures, food, etc.) or personal safety (helmets, knee-pads, etc.). In such applications the foams are often designed to absorb the maximum energy and are generally subjected to severe loadings involving high strain rates. The manufacturing process used to obtain polymeric microcellular foams is based on saturating the polymer with a supercritical gas at high temperature and pressure. This method presents several advantages over conventional injection moulding techniques which make it industrially feasible. However, the effect of processing conditions such as blowing agent, concentration and microfoaming time and/or temperature on the microstructure of the resulting microcellular polymer (density, cell size and geometry) has not yet been established. The compressive mechanical behaviour of several microcellular polypropylene foams has been investigated over a wide range of strain rates (0.001 to 3000 s⁻¹) in order to show the effects of the processing parameters and strain rate on the mechanical properties. High strain rate tests were performed using a Split Hopkinson Pressure Bar apparatus (SHPB). Polypropylene and polyethylene-ethylene block copolymer foams of various densities were considered.

  9. A survey of computational methods and error rate estimation procedures for peptide and protein identification in shotgun proteomics

    PubMed Central

    Nesvizhskii, Alexey I.

    2010-01-01

    This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide-to-spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. This review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from the peptide to the protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
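
    Among the global error-rate procedures of the kind this review surveys, target-decoy estimation of the false discovery rate is the simplest to sketch in Python (the PSM scores and threshold below are invented for illustration):

      def target_decoy_fdr(psms, threshold):
          # FDR at a score threshold ~= (# decoy hits) / (# target hits),
          # assuming decoys model the incorrect-match score distribution.
          decoys = sum(1 for s, is_decoy in psms if is_decoy and s >= threshold)
          targets = sum(1 for s, is_decoy in psms
                        if not is_decoy and s >= threshold)
          return decoys / targets if targets else 0.0

      psms = [(9.1, False), (8.7, False), (8.2, True),
              (7.9, False), (7.5, True), (6.8, False)]
      print(target_decoy_fdr(psms, threshold=7.0))  # 2 decoys / 3 targets ~ 0.67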

  10. Reliability of perceived neighborhood conditions and the effects of measurement error on self-rated health across urban and rural neighborhoods

    PubMed Central

    Pruitt, Sandi L.; Jeffe, Donna B.; Yan, Yan; Schootman, Mario

    2011-01-01

    Background Limited psychometric research has examined the reliability of self-reported measures of neighborhood conditions, the effect of measurement error on associations between neighborhood conditions and health, and potential differences in the reliabilities between neighborhood strata (urban vs. rural and low vs. high poverty). We assessed overall and stratified reliability of self-reported perceived neighborhood conditions using 5 scales (Social and Physical Disorder, Social Control, Social Cohesion, Fear) and 4 single items (Multidimensional Neighboring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Methods Using random-digit dialing, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2–3 weeks apart. We assessed test-retest (intraclass correlation coefficients [ICC]/weighted kappa [k]) and internal consistency reliability (Cronbach's α). Differences in reliability across neighborhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. Results All measures demonstrated satisfactory internal consistency (α≥.70) and either moderate (ICC/k=.41–.60) or substantial (ICC/k=.61–.80) test-retest reliability in the full sample. Internal consistency did not differ by neighborhood strata. Test-retest reliability was significantly lower among rural (vs. urban) residents for 2 scales (Social Control, Physical Disorder) and 2 Multidimensional Neighboring items; test-retest reliability was higher for Physical Disorder and lower for 1 Multidimensional Neighboring item among the high (vs. low) poverty strata. After measurement error correction, the magnitude of associations between neighborhood conditions and self-rated health were larger, particularly in the rural population. Conclusion Research is needed to develop and test reliable measures of perceived neighborhood conditions relevant to the health

  11. The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals

    PubMed Central

    Westbrook, Johanna I; Baysari, Melissa T; Li, Ling; Burke, Rosemary; Richardson, Katrina L; Day, Richard O

    2013-01-01

    Objectives To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS). To determine if the rate of system-related prescribing errors is greater than the rate of errors prevented. Methods Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS. System related errors were classified by manifestation (eg, wrong dose), mechanism, and severity. A mechanism typology comprised errors made: selecting items from drop-down menus; constructing orders; editing orders; or failing to complete new e-PS tasks. Proportions and rates of errors by manifestation, mechanism, and e-PS were calculated. Results 42.4% (n=493) of 1164 prescribing errors were system-related (78/100 admissions). This result did not differ by e-PS (MedChart 42.6% (95% CI 39.1 to 46.1); Cerner 41.9% (37.1 to 46.8)). For 13.4% (n=66) of system-related errors there was evidence that the error was detected prior to study audit. 27.4% (n=135) of system-related errors manifested as timing errors and 22.5% (n=111) wrong drug strength errors. Selection errors accounted for 43.4% (34.2/100 admissions), editing errors 21.1% (16.5/100 admissions), and failure to complete new e-PS tasks 32.0% (32.0/100 admissions). MedChart generated more selection errors (OR=4.17; p=0.00002) but fewer new task failures (OR=0.37; p=0.003) relative to the Cerner e-PS. The two systems prevented significantly more errors than they generated (220/100 admissions (95% CI 180 to 261) vs 78 (95% CI 66 to 91)). Conclusions System-related errors are frequent, yet few are detected. e-PS require new tasks of prescribers, creating additional cognitive load and error opportunities. Dual classification, by manifestation and mechanism, allowed identification of design features which increase risk and potential solutions. e-PS designs with fewer drop-down menu selections may reduce error risk. PMID:23721982

  12. Highly stable high-rate discriminator for nuclear counting

    NASA Technical Reports Server (NTRS)

    English, J. J.; Howard, R. H.; Rudnick, S. J.

    1969-01-01

    Pulse amplitude discriminator is specially designed for nuclear counting applications. At very high rates, the threshold is stable. The output-pulse width and the dead time change negligibly. The unit incorporates a provision for automatic dead-time correction.

  13. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. To date, however, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build intuitively and systematically an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  14. Optical system error analysis and calibration method of high-accuracy star trackers.

    PubMed

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. To date, however, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build intuitively and systematically an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  15. High strain rate damage of Carrara marble

    NASA Astrophysics Data System (ADS)

    Doan, Mai-Linh; Billi, Andrea

    2011-10-01

    Several cases of rock pulverization have been observed along major active faults in granite and other crystalline rocks. They have been interpreted as due to coseismic pervasive microfracturing. In contrast, little is known about pulverization in carbonates. With the aim of understanding carbonate pulverization, we investigate the high strain rate (c. 100 s⁻¹) behavior of unconfined Carrara marble through a set of experiments with a Split Hopkinson Pressure Bar. Three final states were observed: (1) at low strain, the sample is kept intact, without apparent macrofractures; (2) failure is localized along a few fractures once stress is larger than 100 MPa, corresponding to a strain of 0.65%; (3) above 1.3% strain, the sample is pulverized. Contrary to granite, the transition to pulverization is controlled by strain rather than strain rate. Yet, at low strain rate, a sample from the same marble displayed only a few fractures. This suggests that the experiments were done above the strain rate transition to pulverization. Marble seems easier to pulverize than granite. This creates a paradox: finely pulverized rocks should be prevalent along any high strain zone near faults through carbonates, but this is not what is observed. A few alternatives are proposed to solve this paradox.

  16. High temperature electrochemical corrosion rate probes

    SciTech Connect

    Bullard, Sophie J.; Covino, Bernard S., Jr.; Holcomb, Gordon R.; Ziomek-Moroz, M.

    2005-09-01

    Corrosion occurs in the high temperature sections of energy production plants due to a number of factors: ash deposition, coal composition, thermal gradients, and low NOx conditions, among others. Electrochemical corrosion rate (ECR) probes have been shown to operate in high temperature gaseous environments that are similar to those found in fossil fuel combustors. ECR probes are rarely used in energy production plants at the present time, but if they were more fully understood, corrosion could become a process variable at the control of plant operators. Research is being conducted to understand the nature of these probes. Factors being considered are values selected for the Stern-Geary constant, the effect of internal corrosion, and the presence of conductive corrosion scales and ash deposits. The nature of ECR probes will be explored in a number of different atmospheres and with different electrolytes (ash and corrosion product). Corrosion rates measured using an electrochemical multi-technique capabilities instrument will be compared to those measured using the linear polarization resistance (LPR) technique. In future experiments, electrochemical corrosion rates will be compared to penetration corrosion rates determined using optical profilometry measurements.

  17. Slow-growing cells within isogenic populations have increased RNA polymerase error rates and DNA damage.

    PubMed

    van Dijk, David; Dhar, Riddhiman; Missarova, Alsu M; Espinar, Lorena; Blevins, William R; Lehner, Ben; Carey, Lucas B

    2015-01-01

    Isogenic cells show a large degree of variability in growth rate, even when cultured in the same environment. Such cell-to-cell variability in growth can alter sensitivity to antibiotics, chemotherapy and environmental stress. To characterize transcriptional differences associated with this variability, we have developed a method--FitFlow--that enables the sorting of subpopulations by growth rate. The slow-growing subpopulation shows a transcriptional stress response, but, more surprisingly, these cells have reduced RNA polymerase fidelity and exhibit a DNA damage response. As DNA damage is often caused by oxidative stress, we test the addition of an antioxidant, and find that it reduces the size of the slow-growing population. More generally, we find a significantly altered transcriptome in the slow-growing subpopulation that only partially resembles that of cells growing slowly due to environmental and culture conditions. Slow-growing cells upregulate transposons and express more chromosomal, viral and plasmid-borne transcripts, and thus explore a larger genotypic--and so phenotypic--space. PMID:26268986

  18. Slow-growing cells within isogenic populations have increased RNA polymerase error rates and DNA damage

    PubMed Central

    van Dijk, David; Dhar, Riddhiman; Missarova, Alsu M.; Espinar, Lorena; Blevins, William R.; Lehner, Ben; Carey, Lucas B.

    2015-01-01

    Isogenic cells show a large degree of variability in growth rate, even when cultured in the same environment. Such cell-to-cell variability in growth can alter sensitivity to antibiotics, chemotherapy and environmental stress. To characterize transcriptional differences associated with this variability, we have developed a method—FitFlow—that enables the sorting of subpopulations by growth rate. The slow-growing subpopulation shows a transcriptional stress response, but, more surprisingly, these cells have reduced RNA polymerase fidelity and exhibit a DNA damage response. As DNA damage is often caused by oxidative stress, we test the addition of an antioxidant, and find that it reduces the size of the slow-growing population. More generally, we find a significantly altered transcriptome in the slow-growing subpopulation that only partially resembles that of cells growing slowly due to environmental and culture conditions. Slow-growing cells upregulate transposons and express more chromosomal, viral and plasmid-borne transcripts, and thus explore a larger genotypic—and so phenotypic — space. PMID:26268986

  19. HIGH ENERGY RATE EXTRUSION OF URANIUM

    DOEpatents

    Lewis, L.

    1963-07-23

    A method of extruding uranium at a high energy rate is described. Conditions during the extrusion are such that the temperature of the metal during extrusion reaches a point above the normal alpha to beta transition, but the metal nevertheless remains in the alpha phase in accordance with the Clausius- Clapeyron equation. Upon exiting from the die, the metal automatically enters the beta phase, after which the metal is permitted to cool. (AEC)

  20. High Rate Data Delivery Thrust Area

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul

    2000-01-01

    In this paper, a brief description of the high rate data delivery (HRDD) thrust area, its focus, and the current technical activities being carried out under this program by NASA centers (including JPL), academia, and industry is provided. The processes and methods being used to achieve active participation in this program are presented. The developments in space communication technologies, which will shape NASA enterprise missions in the 21st century, are highlighted.

  1. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems is different from that of low data rate systems. Three simulations were built during the development phase of Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling. The first was a model using SIMSCRIPT based upon the determination and processing of each event at each node. The second simulation was developed in C based upon isolating the distinct objects that can be identified: the ring, the message, the node, and the set of critical events. The third model further identified the basic network functionality by creating a single object, the node, which includes the set of critical events that occur at the node. The ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. It should be stated that the language used was mainly selected by the model developer because of his past familiarity. Further, the models were not built with the intent to compare either structure or language, but because the problem was complex and initial results contained obvious errors, alternative models were built to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand modeling complexities. Each model is described along with its features and problems. The models are compared, and concluding observations and remarks are presented.

  2. Reserve, flowing electrolyte, high rate lithium battery

    NASA Astrophysics Data System (ADS)

    Puskar, M.; Harris, P.

    Flowing electrolyte Li/SOCl2 tests in single cell and multicell bipolar fixtures have been conducted, and measurements are presented for electrolyte flow rates, inlet and outlet temperatures, fixture temperatures at several points, and the pressure drop across the fixture. Reserve lithium batteries with flowing thionyl-chloride electrolytes are found to be capable of very high energy densities with usable voltages and capacities at current densities as high as 500 mA/sq cm. At this current density, a battery stack 10 inches in diameter is shown to produce over 60 kW of power while maintaining a safe operating temperature.

  3. Optimization of coplanar high rate supercapacitors

    NASA Astrophysics Data System (ADS)

    Sun, Leimeng; Wang, Xinghui; Liu, Wenwen; Zhang, Kang; Zou, Jianping; Zhang, Qing

    2016-05-01

    In this work, we describe two efficient methods to enhance the electrochemical performance of high-rate coplanar micro-supercapacitors (MSCs). Through introducing MnO2 nanosheets on a vertically aligned carbon nanotube (VACNT) array, the areal capacitance and volumetric energy density exhibit tremendous improvements, increasing from 0.011 mF cm⁻² and 0.017 mWh cm⁻³ to 0.479 mF cm⁻² and 0.426 mWh cm⁻³, respectively, at an ultrahigh scan rate of 50,000 mV s⁻¹. Subsequently, by fabricating an asymmetric MSC, the energy density could be increased to 0.167 mWh cm⁻³ as well. Moreover, as a result of applying MnO2/VACNT as the positive electrode and VACNT as the negative electrode, the cell operating voltage in aqueous electrolyte could be increased to as high as 2.0 V. Our advanced planar MSCs could operate well at different high scan rates and offer a promising integration potential with other in-plane devices on the same substrate.

  4. Optimization of coplanar high rate supercapacitors

    NASA Astrophysics Data System (ADS)

    Sun, Leimeng; Wang, Xinghui; Liu, Wenwen; Zhang, Kang; Zou, Jianping; Zhang, Qing

    2016-05-01

    In this work, we describe two efficient methods to enhance the electrochemical performance of high-rate coplanar micro-supercapacitors (MSCs). Through introducing MnO2 nanosheets on a vertically aligned carbon nanotube (VACNT) array, the areal capacitance and volumetric energy density exhibit tremendous improvements, increasing from 0.011 mF cm⁻² and 0.017 mWh cm⁻³ to 0.479 mF cm⁻² and 0.426 mWh cm⁻³, respectively, at an ultrahigh scan rate of 50,000 mV s⁻¹. Subsequently, by fabricating an asymmetric MSC, the energy density could be increased to 0.167 mWh cm⁻³ as well. Moreover, as a result of applying MnO2/VACNT as the positive electrode and VACNT as the negative electrode, the cell operating voltage in aqueous electrolyte could be increased to as high as 2.0 V. Our advanced planar MSCs could operate well at different high scan rates and offer a promising integration potential with other in-plane devices on the same substrate.

  5. Civilian residential fire fatality rates: Six high-rate states versus six low-rate states

    NASA Astrophysics Data System (ADS)

    Hall, J. R., Jr.; Helzer, S. G.

    1983-08-01

    Results of an analysis of 1,600 fire fatalities occurring in six states with high fire-death rates and six states with low fire-death rates are presented. Reasons for the differences in rates are explored, with special attention to victim age, sex, race, and condition at time of ignition. Fire cause patterns are touched on only lightly but are addressed more extensively in the companion piece to this report, "Rural and Non-Rural Civilian Residential Fire Fatalities in Twelve States', NBSIR 82-2519.

  6. Error Analysis in High School Mathematics. Conceived as Information-Processing Pathology.

    ERIC Educational Resources Information Center

    Davis, Robert B.

    This paper, presented at the 1979 meeting of the American Educational Research Association (AERA), investigates student errors in high school mathematics. A conceptual framework of hypothetical information-handling processes such as procedures, frames, retrieval from memory, visually-moderated sequences (VMS sequences), the integrated sequence,…

  7. Senior High School Students' Errors on the Use of Relative Words

    ERIC Educational Resources Information Center

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  8. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    The various methods of high bit transition density encoding are presented, and their relative performance is compared with respect to error propagation characteristics, transition properties, and system constraints. A computer simulation of the system using the specific PN code recommended is included.

  9. Movement error rate for evaluation of machine learning methods for sEMG-based hand movement classification.

    PubMed

    Gijsberts, Arjan; Atzori, Manfredo; Castellini, Claudio; Muller, Henning; Caputo, Barbara

    2014-07-01

    There has been increasing interest in applying learning algorithms to improve the dexterity of myoelectric prostheses. In this work, we present a large-scale benchmark evaluation on the second iteration of the publicly released NinaPro database, which contains surface electromyography data for 6 DOF force activations as well as for 40 discrete hand movements. The evaluation involves a modern kernel method and compares performance of three feature representations and three kernel functions. Both the force regression and movement classification problems can be learned successfully when using a nonlinear kernel function, while the exp-χ² kernel outperforms the more popular radial basis function kernel in all cases. Furthermore, combining surface electromyography and accelerometry in a multimodal classifier results in significant increases in accuracy as compared to when either modality is used individually. Since window-based classification accuracy should not be considered in isolation to estimate prosthetic controllability, we also provide results in terms of classification mistakes and prediction delay. To this extent, we propose the movement error rate as an alternative to the standard window-based accuracy. This error rate is insensitive to prediction delays and it therefore allows us to quantify mistakes and delays as independent performance characteristics. This type of analysis confirms that the inclusion of accelerometry is superior, as it results in fewer mistakes while at the same time reducing prediction delay. PMID:24760932
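
    The exp-χ² kernel referenced above, in one common parameterization for nonnegative (histogram-like) feature vectors; the γ value and the toy vectors are assumptions, and the paper's exact normalization may differ. A small Python sketch:

      import numpy as np

      def exp_chi2_kernel(x, y, gamma=1.0):
          # k(x, y) = exp(-gamma * sum_i (x_i - y_i)^2 / (x_i + y_i)),
          # with zero-denominator terms treated as contributing zero.
          x, y = np.asarray(x, float), np.asarray(y, float)
          d = x + y
          chi2 = np.sum(np.where(d > 0,
                                 (x - y) ** 2 / np.where(d > 0, d, 1), 0.0))
          return np.exp(-gamma * chi2)

      print(exp_chi2_kernel([0.2, 0.5, 0.3], [0.3, 0.4, 0.3]))   # ~0.97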

  10. Influence of nonhomogeneous earth on the rms phase error and beam-pointing errors of large, sparse high-frequency receiving arrays

    NASA Astrophysics Data System (ADS)

    Weiner, M. M.

    1994-01-01

    The performance of ground-based high-frequency (HF) receiving arrays is reduced when the array elements have electrically small ground planes. The array rms phase error and beam-pointing errors, caused by multipath rays reflected from a nonhomogeneous Earth, are determined for a sparse array of elements that are modeled as Hertzian dipoles in close proximity to Earth with no ground planes. Numerical results are presented for cases of randomly distributed and systematically distributed Earth nonhomogeneities where one-half of the vertically polarized array elements are located in proximity to one type of Earth and the remaining half are located in proximity to a second type of Earth. The maximum rms phase errors, for the cases examined, are 18 deg and 9 deg for randomly distributed and systematically distributed nonhomogeneities, respectively. The maximum beam-pointing errors are 0 and 0.3 beam widths for randomly distributed and systematically distributed nonhomogeneities, respectively.

  11. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reductions of the systematic error between the optical lens and the input FOFP of the intensifier, among the multiple FOFPs, and between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920
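
    The systematic errors analyzed in this record enter through the centroid estimator itself. For reference, the sketch below shows the generic intensity-weighted (center-of-mass) centroiding that such star trackers apply to a star-spot window; it is the estimator whose bias the paper studies, not the authors' frequency-domain error expression, and the toy Gaussian spot is an assumption.

```python
import numpy as np

def centroid(window):
    # intensity-weighted centroid (center of mass) of a star-spot window;
    # FOFP sampling upstream of this estimator is what biases the result
    w = np.asarray(window, dtype=float)
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# a symmetric Gaussian spot should centroid at the window centre
y, x = np.mgrid[0:9, 0:9]
spot = np.exp(-((x - 4.0) ** 2 + (y - 4.0) ** 2) / 2.0)
print(centroid(spot))  # -> (4.0, 4.0)
```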

  13. Evaluation of soft error rates using nuclear probes in bulk and SOI SRAMs with a technology node of 90 nm

    NASA Astrophysics Data System (ADS)

    Abo, Satoshi; Masuda, Naoyuki; Wakaya, Fujio; Onoda, Shinobu; Hirao, Toshio; Ohshima, Takeshi; Iwamatsu, Toshiaki; Takai, Mikio

    2010-06-01

    The difference in soft error rates (SERs) between conventional bulk Si and silicon-on-insulator (SOI) static random access memories (SRAMs) with a technology node of 90 nm has been investigated using helium ion probes with energies ranging from 0.8 to 6.0 MeV at a dose of 75 ions/μm². The SERs in the SOI SRAM were also investigated with oxygen ion probes with energies ranging from 9.0 to 18.0 MeV and doses of 0.14-0.76 ions/μm². Soft errors in the bulk and SOI SRAMs occurred under helium ion irradiation at energies at and above 1.95 and 2.10 MeV, respectively. The SER in the bulk SRAM saturated at ion energies at and above 2.5 MeV. The SER in the SOI SRAM peaked under helium ion irradiation at 2.5 MeV and decreased drastically with increasing ion energy above 2.5 MeV; helium ions at this energy generate the maximum amount of excess charge carriers in the SOI body. The soft errors caused by helium ions were induced by a floating-body effect due to excess charge carriers generated in the channel regions. In the SOI SRAM, soft errors occurred under oxygen ion irradiation at energies at and above 10.5 MeV. The SER in the SOI SRAM gradually increased with energy from 10.5 to 13.5 MeV and saturated at 18 MeV, reflecting the gradually increasing amount of charge carriers induced by oxygen ions in this energy range. Computer calculation indicated that oxygen ions with energies above 13.0 MeV generate more excess charge carriers than the critical charge of the 90 nm node SOI SRAM with the designed over-layer thickness. The soft errors caused by oxygen ions with energies at and below 12.5 MeV were induced by a floating-body effect due to the excess charge carriers generated in the channel regions, and those at and above 13.0 MeV were induced by both the floating-body effect and the generated excess carriers. The difference in the threshold energy of the oxygen ions between the experiment and the computer calculation might

  14. High rate pulse processing algorithms for microcalorimeters

    SciTech Connect

    Rabin, Michael; Hoover, Andrew S; Bacrania, Minesh K; Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeff; Warburton, William K; Doriese, Bertrand; Ullom, Joel N

    2009-01-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Consequently, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. Large arrays, however, require as much pulse processing as possible to be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for processing. In this paper, the authors present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in the readout electronics that the authors are also developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses, thus achieving much higher output count rates than existing algorithms. Details of these algorithms are presented, and their performance is compared to that of the 'optimal filter' that is the dominant pulse processing algorithm in the cryogenic-detector community.
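
    As an illustration of the kind of real-time shaping such front-end electronics can perform, here is a minimal trapezoidal (moving-average difference) filter sketch. It is a generic shaper of the sort used for high-rate, pile-up-tolerant pulse-height estimation, not the authors' algorithm; the rise/gap lengths, decay constant, and toy pulse train are assumptions.

```python
import numpy as np

def trapezoidal_filter(samples, rise=32, gap=16):
    # moving-average difference: the flat-top output tracks pulse amplitude
    # with a short time footprint, which helps with overlapping pulses
    box = np.ones(rise) / rise
    avg = np.convolve(samples, box, mode="full")[: len(samples)]
    delayed = np.concatenate([np.zeros(rise + gap), avg[: -(rise + gap)]])
    return avg - delayed

# toy trace: two overlapping exponential pulses plus noise
n = np.arange(4000)
pulse = lambda t0, a: a * np.exp(-(n - t0) / 600.0) * (n >= t0)
trace = pulse(500, 1.0) + pulse(900, 0.7) + 0.01 * np.random.randn(n.size)
shaped = trapezoidal_filter(trace)   # peaks near samples 500+ and 900+
```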

  15. Effects of Two Commercial Electronic Prescribing Systems on Prescribing Error Rates in Hospital In-Patients: A Before and After Study

    PubMed Central

    Westbrook, Johanna I.; Reckmann, Margaret; Li, Ling; Runciman, William B.; Burke, Rosemary; Lo, Connie; Baysari, Melissa T.; Braithwaite, Jeffrey; Day, Richard O.

    2012-01-01

    Background Considerable investments are being made in commercial electronic prescribing systems (e-prescribing) in many countries. Few studies have measured or evaluated their effectiveness at reducing prescribing error rates, and interactions between system design and errors are not well understood, despite increasing concerns regarding new errors associated with system use. This study evaluated the effectiveness of two commercial e-prescribing systems in reducing prescribing error rates and their propensities for introducing new types of error. Methods and Results We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, those ≥3 were categorised as serious) by hospital and study period; and rates and categories of postintervention “system-related” errors (where system functionality or design contributed to the error) were calculated. Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards (respectively reductions of 66.1% [95% CI 53.9%–78.3%]; 57.5% [33.8%–81.2%]; and 60.5% [48.5%–72.4%]). The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission (95% CI 5.23–7.28) to 2.12 (95% CI 1.71–2.54; p<0.0001) and at Hospital B from 3.62 (95% CI 3.30–3.93) to 1.46 (95% CI 1.20–1.73; p<0

  16. High strain-rate magnetoelasticity in Galfenol

    NASA Astrophysics Data System (ADS)

    Domann, J. P.; Loeffler, C. M.; Martin, B. E.; Carman, G. P.

    2015-09-01

    This paper presents the experimental measurements of a highly magnetoelastic material (Galfenol) under impact loading. A Split-Hopkinson Pressure Bar was used to generate compressive stress up to 275 MPa at strain rates of either 20/s or 33/s while measuring the stress-strain response and the change in magnetic flux density due to magnetoelastic coupling. The average Young's modulus (44.85 GPa) was invariant to strain rate, with instantaneous stiffness ranging from 25 to 55 GPa. A lumped-parameter model simulated the measured pickup coil voltages in response to an applied stress pulse. Fitting the model to the experimental data provided the average piezomagnetic coefficient and relative permeability as functions of field strength. The model suggests magnetoelastic coupling is largely insensitive to strain rates as high as 33/s. Additionally, the lumped-parameter model was used to investigate magnetoelastic transducers as potential pulsed power sources. Results show that Galfenol can generate large quantities of instantaneous power (80 MW/m³), comparable to explosively driven ferromagnetic pulse generators (500 MW/m³). However, this process is much more efficient and can be cyclically carried out in the linear elastic range of the material, in stark contrast with explosively driven pulsed power generators.

  17. High Strain Rate Behavior of Polyurea Compositions

    NASA Astrophysics Data System (ADS)

    Joshi, Vasant; Milby, Christopher

    2011-06-01

    Polyurea has been gaining importance in recent years due to its impact resistance properties. The actual compositions of this viscoelastic material must be tailored for specific use. It is therefore imperative to study the effect of variations in composition on the properties of the material. The high-strain-rate response of three polyurea compositions with varying molecular weights has been investigated using a Split Hopkinson Pressure Bar arrangement equipped with titanium bars. The polyurea compositions were synthesized from polyamines (Versalink, Air Products) with a multi-functional isocyanate (Isonate 143L, Dow Chemical). Amines with molecular weights of 1000, 650, and a blend of 250/1000 have been used in the current investigation. The materials have been tested up to strain rates of 6000/s. Results from these tests have shown interesting trends in the high-rate behavior. While the higher molecular weight composition shows a lower yield, it does not show dominant hardening behavior. On the other hand, the blend of 250/1000 shows higher load-bearing capability but weaker strain hardening than the 650 and 1000 molecular weight amine-based materials. Refinement in experimental methods and a comparison of results using an aluminum Split Hopkinson Bar are presented.

  18. High strain rate behavior of polyurea compositions

    NASA Astrophysics Data System (ADS)

    Joshi, Vasant S.; Milby, Christopher

    2012-03-01

    The high-strain-rate response of three polyurea compositions with varying molecular weights has been investigated using a Split Hopkinson Pressure Bar arrangement equipped with aluminum bars. Three polyurea compositions were synthesized from polyamines (Versalink, Air Products) with a multi-functional isocyanate (Isonate 143L, Dow Chemical). Amines with molecular weights of 1000, 650, and a blend of 250/1000 have been used in the current investigation. These materials have been tested to strain rates of over 6000/s. High strain rate results from these tests have shown varying trends as a function of increasing strain. While the higher molecular weight composition shows a lower yield, it does not show dominant hardening behavior at lower strain. On the other hand, the blend of 250/1000 shows higher load-bearing capability but weaker strain hardening than the 650 and 1000 molecular weight amine-based materials. Results indicate that the initial increase in the modulus of the 250/1000 blend may lead to the loss of strain hardening characteristics as the material is compressed to 50% strain, compared to the 1000 molecular weight amine-based material.

  19. Optical and electronic error correction schemes for highly parallel access memories

    NASA Astrophysics Data System (ADS)

    Neifeld, Mark A.; Hayes, Jerry D.

    1993-11-01

    We have fabricated and tested an optically addressed, parallel electronic Reed-Solomon decoder for use with parallel access optical memories. A comparison with various serial implementations has demonstrated that for many instances of code block size and error correction capability, the parallel approach is superior from the perspectives of VLSI layout area and decoding latency. The Reed-Solomon parallel pipeline decoder operates on 60 bit input words and has been demonstrated at a clock rate of 5 MHz, yielding a data rate of 300 Mbps.

  20. Assessment of error rates in acoustic monitoring with the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR's automated detection process uses a 'score cutoff', which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cutoffs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were  for song event detection.
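
    monitoR is an R package; the sketch below restates its score-cutoff bookkeeping in Python for illustration. Candidate detections whose template-match scores reach the cutoff are tallied against human-verified labels, giving the four outcome counts named above; the event scores and cutoffs are hypothetical.

```python
def confusion_counts(events, cutoff):
    # events: (score, human_verified) pairs for candidate detections in one
    # survey; cutoff mirrors monitoR's 'score cutoff' (minimum match needed
    # for an event to count as a detection)
    tp = sum(s >= cutoff and v for s, v in events)
    fp = sum(s >= cutoff and not v for s, v in events)
    fn = sum(s < cutoff and v for s, v in events)
    tn = sum(s < cutoff and not v for s, v in events)
    return tp, fp, fn, tn

# raising the cutoff trades false positives for missed songs
events = [(0.81, True), (0.64, True), (0.42, False), (0.58, False), (0.71, True)]
for cutoff in (0.5, 0.6, 0.7):
    print(cutoff, confusion_counts(events, cutoff))
```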

  1. High strain rate deformation of layered nanocomposites

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Hwang; Veysset, David; Singer, Jonathan P.; Retsch, Markus; Saini, Gagan; Pezeril, Thomas; Nelson, Keith A.; Thomas, Edwin L.

    2012-11-01

    Insight into the mechanical behaviour of nanomaterials under the extreme condition of very high deformation rates and to very large strains is needed to provide improved understanding for the development of new protective materials. Applications include protection against bullets for body armour, micrometeorites for satellites, and high-speed particle impact for jet engine turbine blades. Here we use a microscopic ballistic test to report the responses of periodic glassy-rubbery layered block-copolymer nanostructures to impact from hypervelocity micron-sized silica spheres. Entire deformation fields are experimentally visualized at an exceptionally high resolution (below 10 nm) and we discover how the microstructure dissipates the impact energy via layer kinking, layer compression, extreme chain conformational flattening, domain fragmentation and segmental mixing to form a liquid phase. Orientation-dependent experiments show that the dissipation can be enhanced by 30% by proper orientation of the layers.

  3. High frame-rate digital radiographic videography

    SciTech Connect

    King, N.S.P.; Cverna, F.H.; Albright, K.L.; Jaramillo, S.A.; Yates, G.J.; McDonald, T.E.; Flynn, M.J.; Tashman, S.

    1994-09-01

    High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an X-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM demonstrated the system response to a high velocity/high contrast target. By gating the P-20 phosphor image from the X-ray image convertor with a second image intensifier (II) and using a 100-microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  4. High-frame-rate digital radiographic videography

    NASA Astrophysics Data System (ADS)

    King, Nicholas S. P.; Cverna, Frank H.; Albright, Kevin L.; Jaramillo, Steven A.; Yates, George J.; McDonald, Thomas E.; Flynn, Michael J.; Tashman, Scott

    1994-10-01

    High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an X-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM demonstrated the system response to a high velocity/high contrast target. By gating the P-20 phosphor image from the X-ray image convertor with a second image intensifier (II) and using a 100 microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  5. Fuel droplet burning rates at high pressures.

    NASA Technical Reports Server (NTRS)

    Canada, G. S.; Faeth, G. M.

    1973-01-01

    Combustion of methanol, ethanol, propanol-1, n-pentane, n-heptane, and n-decane was observed in air under natural convection conditions, at pressures up to 100 atm. The droplets were simulated by porous spheres, with diameters in the range from 0.63 to 1.90 cm. The pressure levels of the tests were high enough so that near-critical combustion was observed for methanol and ethanol. Due to the high pressures, the phase-equilibrium models of the analysis included both the conventional low-pressure approach as well as high-pressure versions, allowing for real gas effects and the solubility of combustion-product gases in the liquid phase. The burning-rate predictions of the various theories were similar, and in fair agreement with the data. The high-pressure theory gave the best prediction for the liquid-surface temperatures of ethanol and propanol-1 at high pressure. The experiments indicated the approach of critical burning conditions for methanol and ethanol at pressures on the order of 80 to 100 atm, which was in good agreement with the predictions of both the low- and high-pressure analysis.

  6. Microalgal separation from high-rate ponds

    SciTech Connect

    Nurdogan, Y.

    1988-01-01

    High rate ponding (HRP) processes are playing an increasing role in the treatment of organic wastewaters in sunbelt communities. Photosynthetic oxygenation by algae has proved to cost only one-seventh as much as mechanical aeration for activated sludge systems. During this study, an advanced HRP, which produces an effluent equivalent to tertiary treatment, has been studied; it emphasizes not only waste oxidation but also algal separation and nutrient removal. This new system is herein called advanced tertiary high rate ponding (ATHRP). Phosphorus removal in HRP systems is normally low because algal uptake of phosphorus is about one percent of their 200-300 mg/L dry weights. Precipitation of calcium phosphates by autoflocculation also occurs in HRP at high pH levels, but it is generally not complete due to insufficient calcium concentration in the pond. In the case of Richmond, where the studies were conducted, the sewage is very low in calcium. Therefore, enhancement of natural autoflocculation was studied by adding small amounts of lime to the pond. Through this simple procedure, phosphorus and nitrogen removals were virtually complete, justifying the terminology ATHRP.

  7. The Influence of Relatives on the Efficiency and Error Rate of Familial Searching

    PubMed Central

    Rohlfs, Rori V.; Murphy, Erin; Song, Yun S.; Slatkin, Montgomery

    2013-01-01

    We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For Y-chromosome sharing first-degree relatives, the Myers protocol has a high probability of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype sharing relatives (half-siblings, first cousins, half-first cousins or second cousins) there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a substantial probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first-degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases. PMID:23967076
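
    A minimal Monte Carlo sketch of the kind of simulation study described above: CODIS-sized profiles are drawn for full-sibling and unrelated pairs from a single hypothetical allele-frequency table, and allele sharing is compared. It uses a simple identity-by-state count rather than the likelihood-ratio criteria of the Myers protocol, so it only illustrates why related and unrelated sharing distributions overlap.

```python
import random

random.seed(7)
FREQS = [0.3, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05, 0.05]  # hypothetical locus
N_LOCI = 13                                            # CODIS-sized profile

def genotype():
    return tuple(random.choices(range(len(FREQS)), weights=FREQS, k=2))

def child(mom, dad):
    return (random.choice(mom), random.choice(dad))

def profile_pair(related):
    a, b = [], []
    for _ in range(N_LOCI):
        mom, dad = genotype(), genotype()
        if related:                     # full siblings share both parents
            a.append(child(mom, dad)); b.append(child(mom, dad))
        else:                           # unrelated: independent draws
            a.append(genotype()); b.append(genotype())
    return a, b

def shared_loci(a, b):
    # loci where the profiles share at least one allele (identity by state)
    return sum(bool(set(x) & set(y)) for x, y in zip(a, b))

for label, related in (("siblings", True), ("unrelated", False)):
    mean = sum(shared_loci(*profile_pair(related)) for _ in range(5000)) / 5000
    print(label, round(mean, 2))
```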

  8. High Rate Pulse Processing Algorithms for Microcalorimeters

    NASA Astrophysics Data System (ADS)

    Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeffrey W.; Warburton, William K.; Bertrand Doriese, W.; Ullom, Joel N.; Bacrania, Minesh K.; Hoover, Andrew S.; Rabin, Michael W.

    2009-12-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small in order to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Therefore, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. For large arrays, however, as much pulse processing as possible must be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for post-processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics that we are also currently developing, is to achieve sufficiently good energy resolution for most applications while being: a) simple enough to be implemented in the readout electronics; and, b) capable of processing overlapping pulses, and thus achieving much higher output count rates than those achieved by existing algorithms. Details of our algorithms are presented, and their performance is compared to that of the "optimal filter" that is currently the predominantly used pulse processing algorithm in the cryogenic-detector community.

  9. High Prevalence of Refractive Errors in 7 Year Old Children in Iran

    PubMed Central

    HASHEMI, Hassan; YEKTA, Abbasali; JAFARZADEHPUR, Ebrahim; OSTADIMOGHADDAM, Hadi; ETEMAD, Koorosh; ASHARLOUS, Amir; NABOVATI, Payam; KHABAZKHOOB, Mehdi

    2016-01-01

    Background: The latest WHO report indicates that refractive errors are the leading cause of visual impairment throughout the world. The aim of this study was to determine the prevalence of myopia, hyperopia, and astigmatism in 7 yr old children in Iran. Methods: In a cross-sectional study in 2013 with multistage cluster sampling, first graders were randomly selected from 8 cities in Iran. All children were tested by an optometrist for uncorrected and corrected vision, and non-cycloplegic and cycloplegic refraction. Refractive errors in this study were determined based on spherical equivalent (SE) cycloplegic refraction. Results: From 4614 selected children, 89.0% participated in the study, and 4072 were eligible. The prevalence rates of myopia, hyperopia and astigmatism were 3.04% (95% CI: 2.30–3.78), 6.20% (95% CI: 5.27–7.14), and 17.43% (95% CI: 15.39–19.46), respectively. Prevalence of myopia (P=0.925) and astigmatism (P=0.056) were not statistically significantly different between the two genders, but the odds of hyperopia were 1.11 (95% CI: 1.01–2.05) times higher in girls (P=0.011). The prevalence of with-the-rule astigmatism was 12.59%, against-the-rule was 2.07%, and oblique 2.65%. Overall, 22.8% (95% CI: 19.7–24.9) of the schoolchildren in this study had at least one type of refractive error. Conclusion: One out of every 5 schoolchildren had some refractive error. Conducting multicenter studies throughout the Middle East can be very helpful in understanding the current distribution patterns and etiology of refractive errors compared to the previous decade. PMID:27114984

  10. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care than in hospital settings; on the other hand, inpatient errors are more severe than outpatient errors. PMID:26649954

  11. High-Rate Digital Receiver Board

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder; Bialas, Thomas; Brambora, Clifford; Fisher, David

    2004-01-01

    A high-rate digital receiver (HRDR) implemented as a peripheral component interconnect (PCI) board has been developed as a prototype of compact, general-purpose, inexpensive, potentially mass-producible data-acquisition interfaces between telemetry systems and personal computers. The installation of this board in a personal computer together with an analog preprocessor enables the computer to function as a versatile, high-rate telemetry-data-acquisition and demodulator system. The prototype HRDR PCI board can handle data at rates as high as 600 megabits per second, in a variety of telemetry formats, transmitted by diverse phase-modulation schemes that include binary phase-shift keying and various forms of quadrature phase-shift keying. Costing less than $25,000 (as of year 2003), the prototype HRDR PCI board supplants multiple racks of older equipment that, when new, cost over $500,000. Just as the development of standard network-interface chips has contributed to the proliferation of networked computers, it is anticipated that the development of standard chips based on the HRDR could contribute to reductions in size and cost and increases in performance of telemetry systems.

  12. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    NASA Astrophysics Data System (ADS)

    Li, Zexian; Latva-aho, Matti

    2004-12-01

    Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Gaussian Q-function, the characteristic function, and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite range integral and an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
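
    The paper's value is the semi-analytic single-integral BER expression; a brute-force check that such results are typically validated against is Monte Carlo averaging over the fading. The sketch below does this for the simplest stand-in case, coherent BPSK with i.i.d. unit-mean Nakagami-m branches and MRC, rather than the paper's multiuser MC-CDMA setting; the m value, branch count, and SNR are assumptions.

```python
import numpy as np
from math import erfc, sqrt

def ber_bpsk_mrc_nakagami(m=2.0, branches=2, snr_db=10.0, trials=200_000):
    # Monte Carlo average BER of coherent BPSK with L-branch MRC, each branch
    # seeing independent Nakagami-m fading (power gain ~ Gamma(m, 1/m), unit mean)
    rng = np.random.default_rng(0)
    snr = 10.0 ** (snr_db / 10.0)                 # mean SNR per branch
    gains = rng.gamma(m, 1.0 / m, size=(trials, branches))
    post = snr * gains.sum(axis=1)                # MRC adds branch SNRs
    q = lambda x: 0.5 * erfc(x / sqrt(2.0))       # Gaussian Q-function
    return float(np.mean([q(sqrt(2.0 * g)) for g in post]))

print(ber_bpsk_mrc_nakagami())   # value depends on m, branches and SNR
```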

  13. Effect of media property variations on shingled magnetic recording channel bit error rate and signal to noise ratio performance

    NASA Astrophysics Data System (ADS)

    Lin, Maria Yu; Teo, Kim Keng; Chan, Kheong Sann

    2015-05-01

    Shingled Magnetic Recording (SMR) is an upcoming technology intended to tide the hard disk drive industry over until heat-assisted magnetic recording or another technology matures. In this work, we study the impact of variations in media parameters on the raw channel bit error rate (BER) through micromagnetic simulations and the grain flipping probability channel model in the SMR setting. This study aims to provide feedback to media designers on how media property variations influence SMR channel performance. In particular, we analyse the effects of variations in the anisotropy constant (Ku), saturation magnetization (Ms), easy axis (ez), grain size (gs), and exchange coupling (Ax) on the written micromagnetic output and the ensuing hysteresis loop. We also compare these analyses with the channel performance in terms of signal to noise ratio (SNR) and the raw channel BER.

  14. Outage Performance and Average Symbol Error Rate of M-QAM for Maximum Ratio Combining with Multiple Interferers

    NASA Astrophysics Data System (ADS)

    Ahn, Kyung Seung

    In this paper, we investigate the performance of maximum ratio combining (MRC) in the presence of multiple cochannel interferers over a flat Rayleigh fading channel. Closed-form expressions for the signal-to-interference-plus-noise ratio (SINR), outage probability, and average symbol error rate (SER) of quadrature amplitude modulation (QAM) with M-ary signaling are obtained for unequal-power interference-to-noise ratios (INRs). We also provide an upper bound for the average SER using the moment generating function (MGF) of the SINR. Moreover, we quantify the array gain loss between pure MRC (an MRC system in the absence of cochannel interference, CCI) and an MRC system in the presence of CCI. Finally, we verify our analytical results by numerical simulations.

  15. Bit-error-rate performance of non-line-of-sight UV transmission with spatial diversity reception.

    PubMed

    Xiao, Houfei; Zuo, Yong; Wu, Jian; Li, Yan; Lin, Jintong

    2012-10-01

    In non-line-of-sight (NLOS) UV communication links using intensity modulation with direct detection, atmospheric turbulence-induced intensity fluctuations can significantly impair link performance. To mitigate turbulence-induced fading and, therefore, to improve the bit error rate (BER) performance, spatial diversity reception can be used over NLOS UV links, which involves the deployment of multiple receivers. The maximum-likelihood (ML) spatial diversity scheme is derived for spatially correlated NLOS UV links, and the influence of varying fading correlations at the different receivers on the BER performance is investigated. For the dual-receiver case, ML diversity detection is compared with equal gain combining and optimal combining schemes under different turbulence intensity conditions. PMID:23027306

  16. Advanced Communications Technology Satellite (ACTS) Fade Compensation Protocol Impact on Very Small-Aperture Terminal Bit Error Rate Performance

    NASA Technical Reports Server (NTRS)

    Cox, Christina B.; Coney, Thom A.

    1999-01-01

    The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol.
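
    BER availability here reduces to the fraction of measurement intervals that meet the specification, which makes the metric easy to state concretely. A minimal sketch, assuming a hypothetical per-interval BER log:

```python
def ber_availability(ber_samples, threshold=5e-7):
    # fraction of measurement intervals meeting the spec; the design
    # requirement quoted above is 99.5% availability at BER 5E-7 or better
    ok = sum(1 for b in ber_samples if b <= threshold)
    return 100.0 * ok / len(ber_samples)

# hypothetical minute-by-minute BER log spanning a short rain fade
log = [1e-9] * 990 + [3e-7] * 5 + [2e-4] * 5
print(f"{ber_availability(log):.1f}% availability")  # 99.5%
```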

  17. Choice of Reference Sequence and Assembler for Alignment of Listeria monocytogenes Short-Read Sequence Data Greatly Influences Rates of Error in SNP Analyses

    PubMed Central

    Pightling, Arthur W.; Petronella, Nicholas; Pagotto, Franco

    2014-01-01

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should
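
    The error accounting used in such benchmarks reduces to comparing a call set against the known set of true differences. A minimal sketch with hypothetical positions and bases, tallying true positives, false positives, ambiguously called sites, and missed true SNPs in the spirit of the categories above:

```python
def snp_call_errors(truth, calls):
    # truth, calls: position -> alternate base; "N" marks an ambiguous call,
    # and positions absent from `calls` were not called at all
    true_pos = {p for p, b in calls.items() if b != "N" and truth.get(p) == b}
    false_pos = {p for p, b in calls.items() if b != "N" and truth.get(p) != b}
    ambiguous = {p for p, b in calls.items() if b == "N"}
    missed = set(truth) - set(calls)
    return len(true_pos), len(false_pos), len(ambiguous), len(missed)

# toy example: three real SNPs, one spurious call, one ambiguous site
truth = {101: "A", 2450: "T", 9911: "G"}
calls = {101: "A", 2450: "N", 7003: "C"}
print(snp_call_errors(truth, calls))  # -> (1, 1, 1, 1)
```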

  19. The margin for error when releasing the high bar for dismounts.

    PubMed

    Hiley, M J; Yeadon, M R

    2003-03-01

    In Men's Artistic Gymnastics the current trend in elite high bar dismounts is to perform two somersaults in an extended body shape with a number of twists. Two techniques have been identified in the backward giant circles leading up to release for these dismounts (J. Biomech. 32 (1999) 811). At the Sydney 2000 Olympic Games 95% of gymnasts used the "scooped" backward giant circle technique rather than the "traditional" technique. It was speculated that the advantage gained from the scooped technique was an increased margin for error when releasing the high bar. A four segment planar simulation model of the gymnast and high bar was used to determine the margin for error when releasing the bar in performances at the Sydney 2000 Olympic Games. The eight high bar finalists and the three gymnasts who used the traditional backward giant circle technique were chosen for analysis. Model parameters were optimised to obtain a close match between simulated and actual performances in terms of rotation angle (1.2 degrees), bar displacements (0.014 m) and release velocities (2%). Each matching simulation was used to determine the time window around the actual point of release for which the model had appropriate release parameters to complete the dismount successfully. The scooped backward giant circle technique resulted in a greater margin for error (release window 88-157 ms) when releasing the bar compared to the traditional technique (release window 73-84 ms). PMID:12594979

  20. Estimation of chromatic errors from broadband images for high contrast imaging

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2015-09-01

    Usage of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission to follow the James Webb Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped-pupil coronagraph with a conjugate DM and compute the resulting estimation error.

  1. Study on high rate MRPC for high luminosity experiments

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Huang, X.; Lv, P.; Zhu, W.; Shi, L.; Xie, B.; Cheng, J.; Li, Y.

    2014-08-01

    Multi-gap Resistive Plate Chambers (MRPCs) have been used to construct time-of-flight systems in nuclear and particle physics, due to their high-precision timing, high efficiency, reliability and large-area coverage. With the increase of accelerator luminosity, MRPCs have to withstand particle fluxes up to several tens of kHz/cm² in next-generation physics experiments, such as SIS-100/300 at FAIR-CBM, SoLID at JLab and NICA at JINR. However, an MRPC assembled with float glass has a very low rate capability, not exceeding a few hundred Hz/cm². There are two possible solutions for increasing rate capability: one is to reduce the bulk resistivity of the glass, and the other is to reduce the electrode thickness. Tsinghua University has conducted R&D on high rate MRPCs for many years. A special low-resistivity glass with bulk resistivity around 10¹⁰ Ω·cm was developed. We also studied how rate capability changes with glass thickness. In this paper we describe the performance of the low-resistivity glass and of two kinds of high rate MRPC (pad readout and strip readout) tested with deuteron beams. The results show that the tolerable particle flux can reach 70 kHz/cm². Meanwhile, MRPCs assembled with three thicknesses (0.7 mm, 0.5 mm and 0.35 mm) of float glass were also tested with deuteron beams; the results show that the three detectors can tolerate particle rates up to 500 Hz/cm², 0.75 kHz/cm² and 3 kHz/cm², respectively.

  2. A Comparative Study of Heavy Ion and Proton Induced Bit Error Sensitivity and Complex Burst Error Modes in Commercially Available High Speed SiGe BiCMOS

    NASA Technical Reports Server (NTRS)

    Marshall, Paul; Carts, Marty; Campbell, Art; Reed, Robert; Ladbury, Ray; Seidleck, Christina; Currie, Steve; Riggs, Pam; Fritz, Karl; Randall, Barb

    2004-01-01

    A viewgraph presentation that reviews recent SiGe bit error test data for different commercially available high speed SiGe BiCMOS chips that were subjected to various levels of heavy ion and proton radiation. Results for the tested chips at different operating speeds are displayed in line graphs.

  3. Detecting Glaucoma Progression From Localized Rates of Retinal Changes in Parametric and Nonparametric Statistical Framework With Type I Error Control

    PubMed Central

    Balasubramanian, Madhusudhanan; Arias-Castro, Ery; Medeiros, Felipe A.; Kriegman, David J.; Bowd, Christopher; Weinreb, Robert N.; Holst, Michael; Sample, Pamela A.; Zangwill, Linda M.

    2014-01-01

    Purpose. We evaluated three new pixelwise rates of retinal height changes (PixR) strategies to reduce false-positive errors while detecting glaucomatous progression. Methods. The diagnostic accuracy of the nonparametric PixR-NP cluster test (CT), the PixR-NP single threshold test (STT), and the parametric PixR-P STT was compared to statistic image mapping (SIM) using the Heidelberg Retina Tomograph. We included 36 progressing eyes, 210 nonprogressing patient eyes, and 21 longitudinal normal eyes from the University of California, San Diego (UCSD) Diagnostic Innovations in Glaucoma Study. The multiple comparison problem due to simultaneous testing of retinal locations was addressed in PixR-NP CT by controlling the family-wise error rate (FWER) and in the STT methods by Lehmann-Romano's k-FWER. For the STT methods, progression was defined as an observed progression rate (ratio of the number of pixels with a significant rate of decrease, i.e., red pixels, to disk size) > 2.5%. The progression criterion for the CT and SIM methods was the presence of one or more significant (P < 1%) red-pixel clusters within the disk. Results. Specificity in normals: CT = 81% (90%), PixR-NP STT = 90%, PixR-P STT = 90%, SIM = 90%. Sensitivity in progressing eyes: CT = 86% (86%), PixR-NP STT = 75%, PixR-P STT = 81%, SIM = 39%. Specificity in nonprogressing patient eyes: CT = 49% (55%), PixR-NP STT = 56%, PixR-P STT = 50%, SIM = 79%. Progression detected by PixR in nonprogressing patient eyes was associated with early signs of visual field change that did not yet meet our definition of glaucomatous progression. Conclusions. The PixR strategies provided higher sensitivity in progressing eyes and similar specificity in normals compared with SIM, suggesting that they can improve our ability to detect glaucomatous progression. Longer follow-up is necessary to determine whether nonprogressing eyes identified as progressing by these methods will develop glaucomatous progression. (ClinicalTrials.gov number, NCT00221897.) PMID:24519427
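
    The STT criterion above combines per-pixel significance testing under k-FWER control with the 2.5% red-pixel rule. The sketch below uses the single-step generalized Bonferroni threshold p ≤ kα/n, which controls the probability of k or more false rejections, rather than the step-down Lehmann-Romano procedure of the paper; the image size, k, α, and the simulated thinning patch are assumptions.

```python
import numpy as np

def flag_progression(pvals, disk_mask, k=10, alpha=0.05, criterion=0.025):
    # single-step generalized Bonferroni: rejecting p <= k*alpha/n keeps the
    # chance of >= k false rejections below alpha; pixels flagged inside the
    # disk are "red pixels", and the eye is called progressing when they
    # exceed 2.5% of the disk area
    p = np.asarray(pvals)
    red = (p <= k * alpha / p.size) & disk_mask
    rate = red.sum() / disk_mask.sum()
    return rate > criterion, rate

rng = np.random.default_rng(1)
pvals = rng.uniform(size=(200, 200))           # null p-values everywhere
yy, xx = np.mgrid[0:200, 0:200]
disk = (yy - 100) ** 2 + (xx - 100) ** 2 < 80 ** 2
pvals[85:115, 85:115] = 1e-9                   # a patch of thinning pixels
print(flag_progression(pvals, disk))           # flags progression
```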

  4. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range rate tests were conducted in two configurations. In the communication configuration with a 622 Mb/s data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 × 10⁻¹⁵ with 10 second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 × 10⁻¹⁵ with 10 second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance for both operating modes.
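
    The modified Allan deviation quoted above can be computed from time-error samples with the standard second-difference estimator; the sketch below follows that textbook form (cf. NIST SP 1065) and is not the authors' processing chain. As a rough cross-check of the abstract's figures, a steady two-way range-rate error of 2 μm/s corresponds to a fractional frequency offset near (2 × 10⁻⁶ m/s)/c ≈ 7 × 10⁻¹⁵, the same order as the quoted 9 × 10⁻¹⁵.

```python
import numpy as np

def mod_adev(x, tau0, m):
    # modified Allan deviation from time-error samples x (seconds), sample
    # spacing tau0, averaging factor m (tau = m * tau0); standard
    # second-difference estimator
    x = np.asarray(x, dtype=float)
    if x.size < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 samples")
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]        # second differences
    inner = np.convolve(d2, np.ones(m), mode="valid")  # m-term running sums
    tau = m * tau0
    return np.sqrt(np.mean(inner ** 2) / (2.0 * m ** 2 * tau ** 2))

# white-phase-noise demo: 1 s sampling, averaging factor 10
rng = np.random.default_rng(3)
print(mod_adev(1e-12 * rng.standard_normal(20_000), tau0=1.0, m=10))
```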

  5. Application of high-rate cutting tools

    NASA Astrophysics Data System (ADS)

    Moriarty, John L., Jr.

    1989-03-01

    Widespread application of the newest high-rate cutting tools to the most appropriate jobs is slowed by the sheer magnitude of developments in tool types, materials, workpiece applications, and by the rapid pace of change. Therefore, a study of finishing and roughing sizes of coated carbide inserts having a variety of geometries for single point turning was completed. The cutting tools were tested for tool life, chip quality, and workpiece surface finish at various cutting conditions with medium alloy steel. An empirical wear-life data base was established, and a computer program was developed to facilitate technology transfer, assist selection of carbide insert grades, and provide machine operating parameters. A follow-on test program was implemented suitable for next generation coated carbides, rotary cutting tools, cutting fluids, and ceramic tool materials.

  6. A high data rate recorder for astronomy

    NASA Technical Reports Server (NTRS)

    Hinteregger, H. F.; Rogers, A. E. E.; Cappallo, R. J.; Webber, J. C.; Petrachenko, W. T.

    1991-01-01

    A magnetic tape recorder developed for the special requirements of radio astronomy and geodesy is described. These requirements include a high bit packing density and long record times. The current version of this longitudinal recorder used by the Very Long Baseline Array (VLBA) records 5.5 Terabits on a 14-in diameter reel of inch-wide tape. A maximum record rate of 256 Mb/s is achieved in the VLBA configuration with one recorder operating at 4 ms and utilizing 32 of the heads in a single stack. The VLBA recorders have been tested using a longitudinal density of 2.25 fr/micron; 448 data + 56 system tracks are recorded in 14 passes, each lasting 50 min, for a total record time (at 128 Mb/s) of 12 h on a 14-in diameter reel of inch-wide 13-micron-thick D1-equivalent tape.

  7. Minimizing high spatial frequency residual error in active space telescope mirrors

    NASA Astrophysics Data System (ADS)

    Gray, Thomas L.; Smith, Matthew W.; Cohan, Lucy E.; Miller, David W.

    2009-08-01

    The trend in future space telescopes is towards larger apertures, which provide increased sensitivity and improved angular resolution. Lightweight, segmented, rib-stiffened, actively controlled primary mirrors are an enabling technology, permitting large aperture telescopes to meet the mass and volume restrictions imposed by launch vehicles. Such mirrors, however, are limited in the extent to which their discrete surface-parallel electrostrictive actuators can command global prescription changes. Inevitably some amount of high spatial frequency residual error is added to the wavefront due to the discrete nature of the actuators. A parameterized finite element mirror model is used to simulate this phenomenon and determine designs that mitigate high spatial frequency residual errors in the mirror surface figure. Two predominant residual components are considered: dimpling induced by embedded actuators and print-through induced by facesheet polishing. A gradient descent algorithm is combined with the parameterized mirror model to allow rapid trade space navigation and optimization of the mirror design, yielding advanced design heuristics formulated in terms of minimum machinable rib thickness. These relationships produce mirrors that satisfy manufacturing constraints and minimize uncorrectable high spatial frequency error.

  8. Consideration of wear rates at high velocity

    NASA Astrophysics Data System (ADS)

    Hale, Chad S.

    The research presented here considers high-velocity relative sliding motion between two bodies in contact. The wear environment is truly three-dimensional, but characterizing three-dimensional wear directly was not economically feasible, since it would have to be analyzed at the micro-mechanical level; an engineering approximation was therefore carried out. This approximation was based on a metallographic study that identified the need to include viscoplastic constitutive material models, the coefficient of friction, relationships between normal load and velocity, and an understanding of wave propagation. A sled test run at the Holloman High Speed Test Track (HHSTT) was considered for the determination of high-velocity wear rates. In order to adequately characterize high-velocity wear, it was necessary to formulate a numerical model that contained all of the physical events present. The experimental results of a VascoMax 300 maraging steel slipper sliding on an AISI 1080 steel rail during a January 2008 sled test mission were analyzed. During this rocket sled test, the slipper traveled 5,816 meters in 8.14 seconds and reached a maximum velocity of 1,530 m/s. This type of environment had not previously been considered in terms of wear evaluation. Each feature of the metallography was obtained through micro-mechanical experimental techniques. The byproduct of this analysis is that it is now possible to formulate a model that contains viscoplasticity, asperity collisions, and temperature and frictional features. Based on the observations of the metallographic analysis, these necessary features have been included in the numerical model, which makes use of a time-dynamic program that follows the movement of a slipper during its experimental test run. The resulting velocity and pressure functions of time have been implemented in the explicit finite element code, ABAQUS. Two-dimensional, plane strain models

  9. Talc lubrication at high strain rate

    NASA Astrophysics Data System (ADS)

    Doan, M.; Hirose, T.; Andreani, M.; Boullier, A.; Calugaru, D.; Boutareaud, S.

    2012-12-01

    Talc is a very soft material that has been found in small quantities in active fault zones. Its presence, even in small amounts, has been demonstrated in numerous weak faults where microseismic activity may also occur. Although talc properties have been investigated at low slip rates, its effects at coseismic rates have not. Here we show that a few weight percent of talc is enough to significantly alter the frictional behavior of natural serpentinite gouge at seismic slip rates. We performed high-velocity friction experiments on wet powders mixing talc and serpentinite in varying proportions. At 1.3 m/s, pure natural serpentinite starts sliding with a high friction peak of 0.5 that falls exponentially to a steady-state value of ~0.2 over slip greater than 5 m. Introducing only 5 wt% of talc cuts off the initial friction peak of serpentinite: friction levels off at 0.35 within 2 m of displacement before merging with the exponential decay curve observed for pure serpentinite. With larger amounts of talc, the friction curve approaches the behavior of pure talc, which exhibits a friction of 0.2 regardless of displacement. Increasing the amount of talc not only alters the mechanical properties of the mixture, it also changes the deformation mechanism and the resulting microstructure. Below 5 wt% talc, deformation is accommodated by cataclastic comminution of serpentine grains, without any thermal decomposition. When talc is present in larger proportions, it accommodates slip by intense delamination. The principal slip zone is composed of serpentine grains smaller than 0.5 μm, 40 times smaller than the initial serpentine grains. Talc grains inserted within the mixture show extensive delamination after only 3 m of displacement. Talc lamellae are observed along the microscopic shear planes that pervade the principal slip zone and the remaining gouge. We infer that easy delamination of talc multiplies the number of talc grains and increases its

  10. Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1998-01-01

    In this paper, we revisit the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences never repeat but rather lie in a chaotic region; however, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data in order to predict future data under limited weight quantization constraints. This helps predict future information that provides better timely estimation for an intelligent control system. In our earlier work, it was shown that CEP can sufficiently learn the 5- to 8-bit parity problem with 4 or more bits of weight quantization, and the color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as little as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more bits of weight quantization become available, and that error surfaces obtained with the round-off technique are more symmetric around zero than those obtained with truncation. This study suggests that CEP is an implementable learning technique for hardware consideration.

  11. Separable and Error-Free Reversible Data Hiding in Encrypted Image with High Payload

    PubMed Central

    Yin, Zhaoxia; Luo, Bin; Hong, Wien

    2014-01-01

    This paper proposes a separable reversible data-hiding scheme in encrypted images which offers high payload and error-free data extraction. The cover image is partitioned into nonoverlapping blocks and multigranularity encryption is applied to obtain the encrypted image. The data hider preprocesses the encrypted image and randomly selects two basic pixels in each block to estimate the block smoothness and indicate peak points. Additional data are embedded into blocks in the sorted order of block smoothness by using local histogram shifting under the guidance of the peak points. At the receiver side, image decryption and data extraction are separable and their order can be freely chosen. Compared to previous approaches, the proposed method is simpler in calculation while offering better performance: larger payload, better embedding quality, and error-free data extraction, as well as image recovery. PMID:24977214
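
    As a minimal, hedged sketch of the primitive behind the scheme, local histogram shifting embeds bits into pixels at a peak value; the actual method applies this per block, on encrypted data, under estimated peak points (this standalone simplification ignores overflow handling):

        import numpy as np

        def hs_embed(pixels: np.ndarray, peak: int, bits) -> np.ndarray:
            """Embed bits at the histogram peak: peak stays (0) or shifts to peak+1 (1)."""
            out = pixels.copy()
            out[out > peak] += 1            # free the bin adjacent to the peak
            it = iter(bits)
            for idx in zip(*np.where(pixels == peak)):
                out[idx] += next(it, 0)     # reversible: subtracting recovers the image
            return out

        img = np.array([[3, 3, 4], [5, 3, 6]])
        print(hs_embed(img, peak=3, bits=[1, 0, 1]))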

  12. Triangle network motifs predict complexes by complementing high-error interactomes with structural information

    PubMed Central

    Andreopoulos, Bill; Winter, Christof; Labudde, Dirk; Schroeder, Michael

    2009-01-01

    Background Many high-throughput studies produce protein-protein interaction networks (PPINs) with many errors and missing information. Even for genome-wide approaches, there is often a low overlap between PPINs produced by different studies. Second-level neighbors separated by two protein-protein interactions (PPIs) were previously used for predicting protein function and finding complexes in high-error PPINs. We retrieve second-level neighbors in PPINs, and complement these with structural domain-domain interactions (SDDIs) representing binding evidence on proteins, forming PPI-SDDI-PPI triangles. Results We find low overlap between PPINs, SDDIs and known complexes, all well below 10%. We evaluate the overlap of PPI-SDDI-PPI triangles with known complexes from the Munich Information center for Protein Sequences (MIPS). PPI-SDDI-PPI triangles have ~20 times higher overlap with MIPS complexes than second-level neighbors in PPINs without SDDIs. The biological interpretation for triangles is that an SDDI causes two proteins to be observed with common interaction partners in high-throughput experiments. The relatively few SDDIs overlapping with PPINs are part of highly connected SDDI components, and are more likely to be detected in experimental studies. We demonstrate the utility of PPI-SDDI-PPI triangles by reconstructing myosin-actin processes in the nucleus, cytoplasm, and cytoskeleton, which were not obvious in the original PPIN. Using other complementary datatypes in place of SDDIs to form triangles, such as PubMed co-occurrences or threading information, results in a similar ability to find protein complexes. Conclusion Given high-error PPINs with missing information, triangles of mixed datatypes are a promising direction for finding protein complexes. Integrating PPINs with SDDIs improves finding complexes. Structural SDDIs partially explain the high functional similarity of second-level neighbors in PPINs. We estimate that relatively little structural
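
    A hedged sketch of the motif itself: two proteins that share a common PPI partner (second-level neighbours) and whose direct link is supported by an SDDI close a PPI-SDDI-PPI triangle (toy data, not the study's):

        from collections import defaultdict
        from itertools import combinations

        def ppi_sddi_ppi_triangles(ppis, sddis):
            """ppis, sddis: iterables of protein pairs; returns (a, hub, c) triangles."""
            neighbours = defaultdict(set)
            for a, b in ppis:
                neighbours[a].add(b)
                neighbours[b].add(a)
            sddi_set = {frozenset(p) for p in sddis}
            triangles = set()
            for hub, nbrs in neighbours.items():
                for a, c in combinations(sorted(nbrs), 2):
                    if frozenset((a, c)) in sddi_set:
                        triangles.add((a, hub, c))
            return triangles

        ppis = [("actin", "myosin"), ("actin", "tropomyosin")]
        sddis = [("myosin", "tropomyosin")]
        print(ppi_sddi_ppi_triangles(ppis, sddis))  # {('myosin', 'actin', 'tropomyosin')}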

  13. Quantifying the Representation Error of Land Biosphere Models using High Resolution Footprint Analyses and UAS Observations

    NASA Astrophysics Data System (ADS)

    Hanson, C. V.; Schmidt, A.; Law, B. E.; Moore, W.

    2015-12-01

    The validity of land biosphere model outputs relies on accurate representations of ecosystem processes within the model. Typically, a vegetation or land cover type for a given area (at a resolution of several square kilometers or larger) is assumed to have uniform properties. The limited spatial and temporal resolution of models prevents resolving finer-scale heterogeneous flux patterns that arise from variations in vegetation. This representation error must be quantified carefully if models are informed through data assimilation, in order to assign appropriate weighting to model outputs and measurement data. The representation error is usually only estimated, or ignored entirely, due to the difficulty of determining reasonable values. UAS-based gas sensors allow measurements of atmospheric CO2 concentrations with unprecedented spatial resolution, providing a means of determining the representation error for CO2 fluxes empirically. In this study we use three-dimensional CO2 concentration data in combination with high-resolution footprint analyses in order to quantify the representation error for modelled CO2 fluxes at typical resolutions of regional land biosphere models. CO2 concentration data were collected using an Atlatl X6A hexa-copter carrying a highly calibrated closed-path infrared gas analyzer based sampling system with an uncertainty of ≤ ±0.2 ppm CO2. Gas concentration data were mapped in three dimensions using the UAS on-board position data and compared to footprints generated using WRF 3.61.
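
    A hedged sketch of the quantity being estimated: if fine-scale fluxes inside one coarse model cell were known, the representation error would be the spread that the single cell value cannot resolve (synthetic numbers below; the study derives this empirically from UAS CO2 data and footprint weights instead):

        import numpy as np

        rng = np.random.default_rng(1)
        # hypothetical fine-scale CO2 fluxes (umol m-2 s-1) inside one model cell
        fine_fluxes = rng.normal(loc=-4.0, scale=1.5, size=(100, 100))
        cell_value = fine_fluxes.mean()          # what the coarse model represents
        repr_error = fine_fluxes.std(ddof=1)     # sub-grid spread the model misses
        print(cell_value, repr_error)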

  14. High rate PLD of diamond-like-carbon utilizing high repetition rate visible lasers

    SciTech Connect

    McLean, W. II; Fehring, E.J.; Dragon, E.P.; Warner, B.E.

    1994-09-15

    Pulsed Laser Deposition (PLD) has been shown to be an effective method for producing a wide variety of thin films of high-value-added materials. The high average powers and high pulse repetition frequencies of lasers under development at LLNL make it possible to scale up PLD processes that have been demonstrated in small systems in a number of university, government, and private laboratories to industrially meaningful, economically feasible technologies. A copper vapor laser system at LLNL has been utilized to demonstrate high-rate PLD of high-quality diamond-like carbon (DLC) from graphite targets. The deposition rates for PLD obtained with a 100 W laser were ~2000 μm·cm²/h, or roughly 100 times larger than those reported for chemical vapor deposition (CVD) or physical vapor deposition (PVD) methods. Good adhesion of thin (up to 2 μm) films has been achieved on a small number of substrates that include SiO₂ and single-crystal Si. Present results indicate that the best quality DLC films can be produced at optimum rates at power levels and wavelengths compatible with fiber optic delivery systems. If this is also true of other desirable coating systems, this PLD technology could become an extremely attractive industrial tool for high-value-added coatings.

  15. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Lin, S.

    1985-01-01

    A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.

  16. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    ERIC Educational Resources Information Center

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided by the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206, 96 and 35 errors per 1,000 words, respectively. The following section…

  17. Estimation of chromatic errors from broadband images for high contrast imaging: sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2016-01-01

    Many concepts have been proposed to enable direct imaging of planets around nearby stars, which would enable spectroscopic observations of their atmospheres and the potential discovery of biomarkers. The main technical challenge associated with direct imaging of exoplanets is to effectively control both the diffraction and the scattered light from the star so that the dim planetary companion can be seen. Use of an internal coronagraph with an adaptive optical system for wavefront correction is one of the most mature methods and is being developed as an instrument addition to the WFIRST-AFTA space mission. In addition, instruments such as GPI and SPHERE are already being used on the ground and are yielding spectra of giant planets. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, mid-spatial-frequency wavefront errors must be estimated. To date, most broadband lab demonstrations use narrowband filters to obtain an estimate of the chromaticity of the wavefront error, and this can consume a large percentage of the total integration time. Previously, we proposed a method to estimate the chromaticity of wavefront errors using only broadband images; we demonstrated that under idealized conditions wavefront errors can be estimated from images composed of discrete wavelengths. This is achieved by using DM probes with sufficient spatially localized chromatic diversity. Here we report on the results of a study of the performance of this method with respect to realistic broadband images including noise. Additionally, we study optimal probe patterns that enable a reduction in the number of probes used, and compare the integration time with narrowband and IFS estimation methods.

  18. Control System for Suppressing Tracking Error Offset and Multiharmonic Disturbance in High-Speed Optical Disk Systems

    NASA Astrophysics Data System (ADS)

    Nabata, Yuta; Nakazaki, Tatsuya; Ogata, Tokoku; Ohishi, Kiyoshi; Miyazaki, Toshimasa; Sazawa, Masaki; Koide, Daiichi; Takano, Yoshimichi; Tokumaru, Haruki

    This paper proposes a control system for suppressing tracking error offset and multiharmonic disturbance in high-speed optical disk systems. Residual tracking error consists of primary harmonics, high-order harmonics, and offset. Therefore, this paper proposes a tracking control system for suppressing residual tracking error, including primary harmonics, high-order harmonic disturbance, and offset. The cause of the offset included in the residual tracking error is found to be operation error in the fixed-point DSP (digital signal processor) and the phase lag of the LPF (low-pass filter). Moreover, the proposed control system is designed for two types of high-speed optical disk system. The experimental results show that the proposed system enables an optical disk system to achieve fine tracking performance.

  19. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PMID:24773354
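
    A hedged re-creation of the core simulation idea (not the authors' exact design): draw two samples from the same skewed population so the null hypothesis is true, remove |Z| > 2 outliers within each group, and count how often the t test rejects:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        alpha, reps, n = 0.05, 5000, 30
        false_pos = 0
        for _ in range(reps):
            a = rng.poisson(2, n).astype(float)   # sum-score-like, nonnormal data
            b = rng.poisson(2, n).astype(float)   # same distribution: H0 is true
            a = a[np.abs(stats.zscore(a)) <= 2]   # the problematic outlier removal
            b = b[np.abs(stats.zscore(b)) <= 2]
            if stats.ttest_ind(a, b).pvalue < alpha:
                false_pos += 1
        print(false_pos / reps)  # tends to exceed the nominal 0.05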

  20. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    NASA Astrophysics Data System (ADS)

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-06-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.

  1. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media.

    PubMed

    Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  2. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  3. Bit error rate performance of pi/4-DQPSK in a frequency-selective fast Rayleigh fading channel

    NASA Technical Reports Server (NTRS)

    Liu, Chia-Liang; Feher, Kamilo

    1991-01-01

    The bit error rate (BER) performance of pi/4-differential quadrature phase shift keying (DQPSK) modems in cellular mobile communication systems is derived and analyzed. The system is modeled as a frequency-selective fast Rayleigh fading channel corrupted by additive white Gaussian noise (AWGN) and co-channel interference (CCI). The probability density function of the phase difference between two consecutive symbols of M-ary differential phase shift keying (DPSK) signals is first derived. In M-ary DPSK systems, the information is completely contained in this phase difference. For pi/4-DQPSK, the BER is derived in a closed form and calculated directly. Numerical results show that for the 24 kBd (48 kb/s) pi/4-DQPSK operated at a carrier frequency of 850 MHz and C/I less than 20 dB, the BER will be dominated by CCI if the vehicular speed is below 100 mi/h. In this derivation, frequency-selective fading is modeled by two independent Rayleigh signal paths. Only one co-channel is assumed in this derivation. The results obtained are also shown to be valid for discriminator detection of M-ary DPSK signals.

  4. High-pressure, High-strain-rate Materials Effects

    SciTech Connect

    Kalantar, D; Belak, J; Bringa, E; Budil, K; Colvin, J; Kumar, M; Meyers, M; Rosolankova, K; Rudd, R; Schneider, M; Stolken, J; Wark, J

    2004-03-04

    A 3-year LDRD-ER project to study the response of shocked materials at high pressure and high strain rate has concluded. This project involved a coordinated effort to study single crystal samples that were shock loaded by direct laser irradiation, in-situ and post-recovery measurements, and molecular dynamics and continuum modeling. Laser-based shock experiments have been conducted to study the dynamic response of materials under shock loading at high strain rates. Experiments were conducted at pressures above the published Hugoniot Elastic Limit (HEL). The residual deformation present in recovered samples was characterized by transmission electron microscopy, and the response of the shocked lattice during shock loading was measured by in-situ x-ray diffraction. Static film and x-ray streak cameras recorded x-rays diffracted from lattice planes of Cu and Si both parallel and perpendicular to the shock direction. Experiments were also conducted using a wide-angle detector to record x-rays diffracted from multiple lattice planes simultaneously. These data showed uniaxial compression of Si (100) along the shock direction and 3-dimensional compression of Cu (100). In the case of the Si diffraction, a multiple wave structure was observed. We present results of shocked Si and Cu obtained with a new large-angle diffraction diagnostic, and discuss the results in the context of detailed molecular dynamics simulations and post-processing.

  5. On verifying a high-level design. [cost and error analysis

    NASA Technical Reports Server (NTRS)

    Mathew, Ben; Wehbeh, Jalal A.; Saab, Daniel G.

    1993-01-01

    An overview of design verification techniques is presented, and some of the current research in high-level design verification is described. Formal hardware description languages capable of adequately expressing design specifications have been developed, but some time will be required before they achieve the expressive power needed for use in real applications. Simulation-based approaches are more useful for finding errors in designs than for proving the correctness of a design. Hybrid approaches that combine simulation with other formal design verification techniques are argued to be the most promising over the short term.

  6. A High-Precision Instrument for Mapping of Rotational Errors in Rotary Stages

    DOE PAGESBeta

    Xu, W.; Lauer, K.; Chu, Y.; Nazaretski, E.

    2014-11-02

    A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g. circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle of confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.

  7. Estimation of sampling errors in a high-resolution TV microscope image-processing system.

    PubMed

    Harms, H; Aus, H M

    1984-05-01

    The basic postulate of this paper is that the commonly accepted sampling density of 2-4 pixels/micron in a high-resolution TV microscope system is too low to digitize exactly and analyze the complex cellular detail found in stained cell images. Depending on the specific microscope system, the required sampling density is much higher, lying between 15 and 30 pixels/micron. This sampling density is derived from the aliasing error, the resolution loss, and computational limitations. The mathematical and optical methods and equipment used to obtain these results are described in detail. PMID:6375997
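
    The claim can be related to the incoherent imaging cutoff: the optical transfer function vanishes at f_c = 2 NA / λ, so Nyquist sampling already needs at least 2 f_c pixels per micron, and margins against aliasing push the practical figure higher (a sketch with illustrative values, not the paper's system):

        NA, wavelength_um = 1.3, 0.5        # oil-immersion objective, green light
        f_cutoff = 2 * NA / wavelength_um   # optical cutoff, cycles per micron
        nyquist = 2 * f_cutoff              # minimum sampling, pixels per micron
        print(f_cutoff, nyquist)            # 5.2 cycles/um -> >= 10.4 pixels/um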

  8. Accurate human microsatellite genotypes from high-throughput resequencing data using informed error profiles.

    PubMed

    Highnam, Gareth; Franck, Christopher; Martin, Andy; Stephens, Calvin; Puthige, Ashwin; Mittelman, David

    2013-01-01

    Repetitive sequences are biologically and clinically important because they can influence traits and disease, but repeats are challenging to analyse using short-read sequencing technology. We present a tool for genotyping microsatellite repeats called RepeatSeq, which uses Bayesian model selection guided by an empirically derived error model that incorporates sequence and read properties. Next, we apply RepeatSeq to high-coverage genomes from the 1000 Genomes Project to evaluate performance and accuracy. The software uses common formats, such as VCF, for compatibility with existing genome analysis pipelines. Source code and binaries are available at http://github.com/adaptivegenome/repeatseq. PMID:23090981

  9. Effects of diffraction and static wavefront errors on high-contrast imaging from the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Troy, Mitchell; Chanan, Gary; Crossfield, Ian; Dumont, Philip; Green, Joseph J.; Macintosh, Bruce

    2006-01-01

    High-contrast imaging, particularly direct detection of extrasolar planets, is a major science driver for the next generation of extremely large telescopes such as the segmented Thirty Meter Telescope. This goal requires more than merely diffraction-limited imaging, but also attention to residual scattered light from wavefront errors and diffraction effects at contrast levels of 10^-8 to 10^-9. Using a wave-optics simulation of adaptive optics and a diffraction suppression system, we investigate diffraction from the segmentation geometry, intersegment gaps, and obscuration by the secondary mirror and its supports. We find that the large obscurations pose a greater challenge than the much smaller segment gaps. In addition, the impact of wavefront errors from the primary mirror, including segment alignment and figure errors, is analyzed. Segment-to-segment reflectivity variations and residual segment figure error will be the dominant error contributors from the primary mirror. Strategies to mitigate these errors are discussed.

  10. Effect of mid- and high-spatial frequencies on optical performance. [surface error effects on reflecting telescopes

    NASA Technical Reports Server (NTRS)

    Noll, R. J.

    1979-01-01

    In many of today's telescopes the effects of surface errors on image quality and scattered light are very important. The influence of optical fabrication surface errors on the performance of an optical system is discussed. The methods developed by Hopkins (1957) for aberration tolerancing and Barakat (1972) for random wavefront errors are extended to the examination of mid- and high-spatial frequency surface errors. The discussion covers a review of the basic concepts of image quality, an examination of manufacturing errors as a function of image quality performance, a demonstration of mirror scattering effects in relation to surface errors, and some comments on the nature of the correlation functions. Illustrative examples are included.

  11. High voltage high repetition rate pulse using Marx topology

    NASA Astrophysics Data System (ADS)

    Hakki, A.; Kashapov, N.

    2015-06-01

    The paper describes a Marx topology using MOSFET transistors. A Marx circuit with 10 stages has been built to obtain pulses of about 5.5 kV amplitude and about 30 μs width at a high repetition rate (PPS > 100); Vdc = 535 V DC is the input voltage supplying the Marx circuit. Two ferrite ring core transformers were used to control the MOSFET transistors of the Marx circuit (the first transformer to control the charging MOSFETs, the second to control the discharging MOSFETs).
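
    A quick consistency check: an ideal N-stage Marx generator erects to N times the charging voltage (losses ignored; a sketch, not from the paper):

        stages, v_dc = 10, 535.0    # stage count and charging voltage quoted above
        v_out_ideal = stages * v_dc
        print(v_out_ideal)          # 5350 V, matching the reported ~5.5 kV pulses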

  12. Dislocation Mechanics of High-Rate Deformations

    NASA Astrophysics Data System (ADS)

    Armstrong, Ronald W.; Li, Qizhen

    2015-10-01

    Four topics associated with constitutive equation descriptions of rate-dependent metal plastic deformation behavior are reviewed in honor of previous research accomplished on the same issues by Professor Marc Meyers along with colleagues and students, as follows: (1) increasing strength levels attributed to thermally activated dislocation migration at higher loading rates; (2) inhomogeneous adiabatic shear banding; (3) controlling mechanisms of deformation in shock as compared with shock-less isentropic compression experiments; and (4) Hall-Petch-based grain-size-dependent strain rate sensitivities exhibited by nanopolycrystalline materials. Experimental results are reviewed on these topics for a wide range of metals.

  13. High Rate X-ray Fluorescence Detector

    SciTech Connect

    Grudberg, Peter Matthew

    2013-04-30

    The purpose of this project was to develop a compact, modular multi-channel x-ray detector with integrated electronics. This detector, based upon emerging silicon drift detector (SDD) technology, will be capable of high-data-rate operation superior to the current state of the art offered by high-purity germanium (HPGe) detectors, without the need for liquid nitrogen. In addition, by integrating the processing electronics inside the detector housing, the detector performance will be much less affected by the typically noisy electrical environment of a synchrotron hutch, and the system will be much more compact than current setups, which can include a detector with a large LN2 dewar and multiple racks of electronics. The combined detector/processor system is designed to match or exceed the performance and features of currently available detector systems, at a lower cost and with more ease of use due to the small size of the detector. In addition, the detector system is designed to be modular: a small system might have just one detector module, while a larger system can have many, so you can start with one detector module and add more as needs grow and budget allows. The modular nature also serves to simplify repair. In large part, we were successful in achieving our goals. We developed a very high performance, large-area multi-channel SDD detector, packaged with all associated electronics, which is easy to use and requires minimal external support (a simple power supply module and a closed-loop water cooling system). However, we did fall short of some of our stated goals. We had intended to base the detector on modular, large-area detectors from Ketek GmbH in Munich, Germany; however, these were not available in a suitable time frame for this project, so we worked instead with pnDetector GmbH (also located in Munich). They were able to provide a front-end detector module with six 100 mm² SDD detectors (two monolithic arrays of three elements each) along with

  14. High data rate systems for the future

    NASA Technical Reports Server (NTRS)

    Chitwood, John

    1991-01-01

    Information systems in the next century will transfer data at rates that are much greater than those in use today. Satellite based communication systems will play an important role in networking users. Typical data rates; use of microwave, millimeter wave, or optical systems; millimeter wave communication technology; modulators/exciters; solid state power amplifiers; beam waveguide transmission systems; low noise receiver technology; optical communication technology; and the potential commercial applications of these technologies are discussed.

  15. Bipolar high-repetition-rate high-voltage nanosecond pulser

    SciTech Connect

    Tian Fuqiang; Wang Yi; Shi Hongsheng; Lei Qingquan

    2008-06-15

    The pulser designed is mainly used for producing corona plasma in a wastewater treatment system; its application to the study of dielectric electrical properties is also discussed. The pulser consists of a variable dc power source for the high-voltage supply, two graded capacitors for energy storage, and a rotating spark gap switch. The key part is the multielectrode rotating spark gap switch (MER-SGS), which allows wide-range modulation of the pulse repetition rate, longer pulse width, shorter pulse rise time, and remarkable electrical field distortion, and greatly favors recovery of the gap insulation strength, insulation design, the life of the switch, etc. The voltage of the output pulses switched by the MER-SGS is on the order of 3-50 kV, with a pulse rise time of less than 10 ns and a pulse repetition rate of 1-3 kHz. An energy of 1.25-125 J per pulse and an average power of up to 10-50 kW are attainable. The highest pulse repetition rate is determined by the drive motor revolution and the electrode number of the MER-SGS. Even higher voltage and energy can be switched by adjusting the gas pressure, employing N2 as the insulation gas, or enlarging the size of the MER-SGS to guarantee a sufficient insulation level.

  16. High strain rate behavior of alloy 800H at high temperatures

    NASA Astrophysics Data System (ADS)

    Shafiei, E.

    2016-05-01

    In this paper, a new model using a linear estimation of strain hardening rate vs. stress has been developed to predict the dynamic behavior of alloy 800H at high temperatures. To assess the accuracy of the presented model, it was compared against the Johnson-Cook model for flow stress curves. Evaluation of the mean error of flow stress at deformation temperatures from 850 °C to 1050 °C and at strain rates of 5 s⁻¹ to 20 s⁻¹ indicates that the predicted results are in good agreement with experimentally measured ones. This analysis has been done for stress-strain curves under hot working conditions for alloy 800H. The model is not dependent on the type of material, however, and can be extended to any similar conditions.
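
    One plausible reading of "linear estimation of strain hardening rate vs. stress" is the Voce form, which integrates in closed form (a sketch of the functional shape only; the paper's fitted coefficients are not reproduced here):

        \frac{d\sigma}{d\varepsilon} = \theta_0 \left( 1 - \frac{\sigma}{\sigma_s} \right)
        \quad \Longrightarrow \quad
        \sigma(\varepsilon) = \sigma_s - (\sigma_s - \sigma_0) \exp\!\left( -\frac{\theta_0 \varepsilon}{\sigma_s} \right)

    so the flow stress rises from the initial stress \sigma_0 and saturates at \sigma_s, with \theta_0 the initial hardening rate.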

  17. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 × 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different subwindows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to stream the data directly to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
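
    A back-of-envelope check of the stated figures, with an assumed ADC bit depth (not given above):

        w, h, fps = 640, 512, 1700            # full-frame format and frame rate
        bits_per_px = 14                      # assumed ADC depth, hypothetical
        rate_gbps = w * h * fps * bits_per_px / 1e9
        buffer_s = 16 * 8 / rate_gbps         # 16 GB on-board memory, in gigabits
        print(rate_gbps, buffer_s)            # ~7.8 Gb/s -> roughly 16 s of recording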

  18. Automated measurement of the bit-error rate as a function of signal-to-noise ratio for microwave communications systems

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Daugherty, Elaine S.; Kramarchuk, Ihor

    1987-01-01

    The performance of microwave systems and components for digital data transmission can be characterized by a plot of the bit-error rate as a function of the signal-to-noise ratio (or E_b/N_0). Methods for the efficient automated measurement of bit-error rates and signal-to-noise ratios, developed at NASA Lewis Research Center, are described. Noise measurement considerations and time requirements for measurement accuracy, as well as computer control and data processing methods, are discussed.
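
    For reference, a typical theoretical curve that such automated measurements are compared against is the coherent BPSK result in AWGN, P_b = Q(sqrt(2 E_b/N_0)); a minimal sketch of that reference (not the measurement system itself):

        import math

        def q_function(x: float) -> float:
            """Gaussian tail probability Q(x)."""
            return 0.5 * math.erfc(x / math.sqrt(2.0))

        for ebn0_db in range(0, 11, 2):
            ebn0 = 10 ** (ebn0_db / 10)
            ber = q_function(math.sqrt(2 * ebn0))
            print(ebn0_db, ber)   # BER falls steeply as Eb/N0 (dB) increases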

  19. Estimation of hominoid ancestral population sizes under bayesian coalescent models incorporating mutation rate variation and sequencing errors.

    PubMed

    Burgess, Ralph; Yang, Ziheng

    2008-09-01

    Estimation of population parameters for the common ancestors of humans and the great apes is important in understanding our evolutionary history. In particular, inference of population size for the human-chimpanzee common ancestor may shed light on the process by which the 2 species separated and on whether the human population experienced a severe size reduction in its early evolutionary history. In this study, the Bayesian method of ancestral inference of Rannala and Yang (2003. Bayes estimation of species divergence times and ancestral population sizes using DNA sequences from multiple loci. Genetics. 164:1645-1656) was extended to accommodate variable mutation rates among loci and random species-specific sequencing errors. The model was applied to analyze a genome-wide data set of approximately 15,000 neutral loci (7.4 Mb) aligned for human, chimpanzee, gorilla, orangutan, and macaque. We obtained robust and precise estimates for effective population sizes along the hominoid lineage extending back approximately 30 Myr to the cercopithecoid divergence. The results showed that ancestral populations were 5-10 times larger than modern humans along the entire hominoid lineage. The estimates were robust to the priors used and to model assumptions about recombination. The unusually low X chromosome divergence between human and chimpanzee could not be explained by variation in the male mutation bias or by current models of hybridization and introgression. Instead, our parameter estimates were consistent with a simple instantaneous process for human-chimpanzee speciation but showed a major reduction in X chromosome effective population size peculiar to the human-chimpanzee common ancestor, possibly due to selective sweeps on the X prior to separation of the 2 species. PMID:18603620

  20. Primer ID Validates Template Sampling Depth and Greatly Reduces the Error Rate of Next-Generation Sequencing of HIV-1 Genomic RNA Populations

    PubMed Central

    Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr

    2015-01-01

    ABSTRACT Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population such as HIV-1, due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize
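
    A hedged sketch of the Poisson cutoff idea: with a residual error rate near 1e-4 per nucleotide, the count of template consensus sequences erroneously carrying a given mutation is roughly Poisson with mean N * 1e-4, and a mutation is called only when its count is improbably high under that model (parameters below are illustrative, not the study's):

        from math import exp, factorial

        def poisson_sf(k: int, lam: float) -> float:
            """P(X >= k) for X ~ Poisson(lam)."""
            return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

        n_templates, err_rate, alpha = 5000, 1e-4, 0.001
        lam = n_templates * err_rate
        k = 0
        while poisson_sf(k, lam) >= alpha:
            k += 1
        print(k)  # minimum count needed to call a low-abundance mutation real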

  1. Low-Power, High Data Rate Transceiver System for Implantable Prostheses

    PubMed Central

    Kahn, A. R.; Chow, E. Y.; Abdel-Latief, O.; Irazoqui, P. P.

    2010-01-01

    Wireless telemetry is crucial for long-term implantable neural recording systems. RF-encoded neurological signals often require high data-rates to transmit information from multiple electrodes with a sufficient sampling frequency and resolution. In this work, we quantify the effects of interferers and tissue attenuation on a wireless link for optimal design of future systems. The wireless link consists of an external receiver capable of demodulating FSK/OOK transmission at speeds up to 8 Mbps, with <1e-5 bit-error rate (BER) without error correction, and a fully implanted transmitter consuming about 1.05 mW. The external receiver is tested with the transmitter in vivo to show demodulation efficacy of the transcutaneous link at high data-rates. Transmitter/Receiver link BER is quantified in typical and controlled RF environments for ex vivo and in vivo performance. PMID:21317982

  2. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Astrophysics Data System (ADS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-02-01

    An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.

  3. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.

  4. Correcting for Sequencing Error in Maximum Likelihood Phylogeny Inference

    PubMed Central

    Kuhner, Mary K.; McGill, James

    2014-01-01

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. PMID:25378476

  5. Smoking Rates Still High in Some Racial Groups, CDC Reports

    MedlinePlus

    Despite a lot of progress in getting Americans to stop smoking, some groups still have high smoking rates, a … (https://medlineplus.gov/news/fullstory_160256.html)

  6. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-07-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing.

  7. High frame rate fluorescence lifetime imaging

    NASA Astrophysics Data System (ADS)

    Agronskaia, A. V.; Tertoolen, L.; Gerritsen, H. C.

    2003-07-01

    A fast time-domain based fluorescence lifetime imaging (FLIM) microscope is presented that can operate at frame rates of hundreds of frames per second. A beam splitter in the detection path of a wide-field fluorescence microscope divides the fluorescence in two parts. One part is optically delayed with respect to the other. Both parts are viewed with a single time-gated intensified CCD camera with a gate width of 5 ns. The fluorescence lifetime image is obtained from the ratio of these two images. The fluorescence lifetime resolution of the FLIM microscope is verified both with dye solutions and fluorescent latex beads. The fluorescence lifetimes obtained from the reference specimens are in good agreement with values obtained from time correlated single photon counting measurements on the same specimens. The acquisition speed of the FLIM system is evaluated with a measurement of the calcium fluxes in neonatal rat myocytes stained with the calcium probe Oregon Green 488-Bapta. Fluorescence lifetime images of the calcium fluxes related to the beating of the myocytes are acquired with frame rates of up to 100 Hz.
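
    For a mono-exponential decay sampled by two equal-width gates, the second delayed by Δt relative to the first, the lifetime follows directly from the ratio of the two images (the standard rapid-lifetime-determination relation, consistent with the ratio method described above):

        \tau = \frac{\Delta t}{\ln(I_1 / I_2)}

    where I_1 is the prompt gated image and I_2 the delayed one.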

  8. Infant mortality rates declining, but still high.

    PubMed

    Hoffman, M

    1992-10-01

    Family planning can improve infant survival. Specifically, use of family planning methods can minimize family size, increase birth spacing, and reduce the likelihood of pregnancy for teenagers and women aged 40 or older. Immunizations and oral rehydration are responsible for the falling infant mortality rates since 1977 in developing countries, especially among 1-12 month old infants. Yet neonatal mortality in developing countries has not changed. WHO intends to step up efforts to improve newborn survival. Accurate data are needed, however. Even in developed countries which keep good statistics, infant mortality bias exists. For example, in Japan, some infant deaths are recorded as fetal deaths. In developing countries, much of the data come from hospitals, yet most births do not occur in hospitals. Even in surveys, bias exists, such as problems with recall. Many researchers use traditional birth attendants (TBAs) to follow up on all births in an area, which may eliminate some biases. Such a prospective and longitudinal study in Trairi county in northeastern Brazil shows the infant mortality rate to be less than half of the official rate (65 vs. 142). The major causes of infant death in developed countries, where death tends to occur in the neonatal period, are low birth weight, prematurity, birth complications, and congenital defects; in developing countries, they are vaccine-preventable infectious diseases, diarrhea and dehydration, and respiratory illnesses, all complicated by malnutrition. To make further strides in reducing infant mortality, public health workers must concentrate on the neonatal period. Training TBAs in sterile techniques, appropriate technology, resuscitation of infants, and identification of potential problems is a positive step. Yet unpredictable conditions (e.g., AIDS) exist or will arise which erode improvements. For example, in Nicaragua, within 1 year after the new government introduced health budget cuts which resulted in the poor paying for

  9. High Count Rate Electron Probe Microanalysis

    PubMed Central

    Geller, Joseph D.; Herrington, Charles

    2002-01-01

    Reducing the measurement uncertainty of quantitative analyses made using electron probe microanalyzers (EPMA) requires a careful study of the individual uncertainties from each definable step of the measurement. Those steps include measuring the incident electron beam current and voltage, knowing the angle between the electron beam and the sample (takeoff angle), collecting the emitted x rays from the sample, comparing the emitted x-ray flux to known standards (to determine the k-ratio), and transforming the k-ratio to concentration using algorithms which include, as a minimum, the atomic number, absorption, and fluorescence corrections. This paper discusses the collection and counting of the emitted x rays, which are diffracted into gas-flow or sealed proportional x-ray detectors. The uncertainty in the number of collected x rays decreases as the number of counts increases, and is fully described by Poisson statistics. Increasing the number of x rays collected involves either counting longer or counting at a higher rate. Counting longer means the analysis time increases and may become excessive before the desired uncertainty is reached; instrument drift also becomes an issue. Counting at higher rates has its limitations, which are a function of the detector physics and the detection electronics. Since the beginning of EPMA analysis, analog electronics have been used to amplify and discriminate the x-ray-induced ionizations within the proportional counter. This paper will discuss the use of digital electronics for this purpose. These electronics are similar to those used for energy-dispersive analysis of x rays with either Si(Li) or Ge(Li) detectors, except that the shaping time constants are much smaller. PMID:27446749
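
    Because the counts obey Poisson statistics, the relative uncertainty of N collected x rays is \sigma_N / N = \sqrt{N} / N = 1 / \sqrt{N}; reaching, say, 0.5% relative precision therefore requires N = (1 / 0.005)^2 = 40,000 counts, and at a fixed count rate R the live time needed scales as t = N / R, which is why higher counting rates translate directly into shorter analyses (a worked illustration, not figures from the paper).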

  10. Heritability and molecular genetic basis of antisaccade eye tracking error rate: a genome-wide association study.

    PubMed

    Vaidyanathan, Uma; Malone, Stephen M; Donnelly, Jennifer M; Hammer, Micah A; Miller, Michael B; McGue, Matt; Iacono, William G

    2014-12-01

    Antisaccade deficits reflect abnormalities in executive function linked to various disorders including schizophrenia, externalizing psychopathology, and neurological conditions. We examined the genetic bases of antisaccade error in a sample of community-based twins and parents (N = 4,469). Biometric models showed that about half of the variance in the antisaccade response was due to genetic factors and half due to nonshared environmental factors. Molecular genetic analyses supported these results, showing that the heritability accounted for by common molecular genetic variants approximated biometric estimates. Genome-wide analyses revealed several SNPs, as well as two genes, B3GNT7 and NCL, on chromosome 2, associated with antisaccade error. SNPs and genes hypothesized to be associated with antisaccade error based on prior work, although generating some suggestive findings for MIR137, GRM8, and CACNG2, could not be confirmed. PMID:25387707

  11. High-rate counting efficiency of VLPC

    SciTech Connect

    Hogue, H.H.

    1998-11-01

    A simple model is applied to describe dependencies of Visible Light Photon Counter (VLPC) characteristics on temperature and operating voltage. Observed counting efficiency losses at high illumination, improved by operating at higher temperature, are seen to be a consequence of de-biasing within the VLPC structure. A design improvement to minimize internal de-biasing for future VLPC generations is considered. © 1998 American Institute of Physics.

  12. Assessment of high-rate GPS using a single-axis shake table

    NASA Astrophysics Data System (ADS)

    Häberling, S.; Rothacher, M.; Zhang, Y.; Clinton, J. F.; Geiger, A.

    2015-07-01

    The developments in GNSS receiver and antenna technologies, especially increased sampling rates of up to 100 sps, open up the possibility of measuring high-rate earthquake ground motions with GNSS. In this paper we focus on the GPS errors in the frequency band above 1 Hz. The dominant error sources are mainly the carrier phase jitter caused by thermal noise and the stress error caused by the dynamics, e.g. antenna motions. To generate a large set of different motions, we used a single-axis shake table, on which a GNSS antenna and a strong-motion seismometer were mounted with a well-known ground truth. The generated motions were recorded with three different GNSS receivers with sampling rates up to 100 sps and different receiver baseband parameters. The baseband parameters directly dictate the carrier phase jitter and the correlations between subsequent epochs. A narrow loop filter bandwidth keeps the carrier phase jitter at a low level, but has an extreme impact on the receiver response for motions above 1 Hz. The amplitudes above 3 Hz are overestimated by up to 50 % or reduced by well over half. The corresponding phase errors are between 30 and 90 degrees. Compared to the GNSS receiver response, the strong-motion seismometer measurements do not show any amplitude or phase variations in the frequency range from 1 to 20 Hz. Due to the large errors for dynamic GNSS measurements, it is essential to account for the baseband parameters of the GNSS receivers if high-rate GNSS is to become a valuable tool for seismic displacement measurements above 1 Hz. Fortunately, the receiver response can be corrected by an inverse filter if the baseband parameters are known.
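
    A minimal sketch of the inverse-filter correction mentioned above: if the receiver's baseband frequency response H(f) is known, the recorded displacement can be deconvolved in the frequency domain. The one-pole low-pass H(f) below is a hypothetical stand-in for a real tracking-loop response, not a measured curve:

      import numpy as np

      fs = 100.0                                 # sampling rate, sps
      t = np.arange(0, 10, 1 / fs)
      true_motion = np.sin(2 * np.pi * 5 * t)    # 5 Hz test motion

      f = np.fft.rfftfreq(t.size, 1 / fs)
      H = 1.0 / (1.0 + 1j * f / 2.0)             # assumed loop response, ~2 Hz corner

      # simulate the distorted recording, then invert with mild regularization
      recorded = np.fft.irfft(np.fft.rfft(true_motion) * H, n=t.size)
      eps = 1e-3                                 # guards against division by ~0
      corrected = np.fft.irfft(np.fft.rfft(recorded) * np.conj(H)
                               / (np.abs(H)**2 + eps), n=t.size)
      print("rms error before:", np.std(recorded - true_motion))
      print("rms error after: ", np.std(corrected - true_motion))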

  13. High-deposition-rate ceramics synthesis

    SciTech Connect

    Allendorf, M.D.; Osterheld, T.H.; Outka, D.A.

    1995-05-01

    Parallel experimental and computational investigations are conducted in this project to develop validated numerical models of ceramic synthesis processes. Experiments are conducted in the High-Temperature Materials Synthesis Laboratory in Sandia's Combustion Research Facility. A high-temperature flow reactor that can accommodate small preforms (1-3 cm diameter) generates conditions under which deposition can be observed, with flexibility to vary both deposition temperature (up to 1500 K) and pressure (as low as 10 torr). Both mass spectrometric and laser diagnostic probes are available to provide measurements of gas-phase compositions. Experiments using surface analytical techniques are also applied to characterize important processes occurring on the deposit surface. Computational tools developed through extensive research in the combustion field are employed to simulate the chemically reacting flows present in typical industrial reactors. These include the CHEMKIN and Surface-CHEMKIN suites of codes, which permit facile development of complex reaction mechanisms and vastly simplify the implementation of multi-component transport and thermodynamics. Quantum chemistry codes are also used to estimate thermodynamic and kinetic data for species and reactions for which this information is unavailable.

  14. Resident Physicians' Clinical Training and Error Rate: The Roles of Autonomy, Consultation, and Familiarity with the Literature

    ERIC Educational Resources Information Center

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-01-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores…

  15. Solar Cell Short Circuit Current Errors and Uncertainties During High Altitude Calibrations

    NASA Technical Reports Server (NTRS)

    Snyder, David D.

    2012-01-01

    High-altitude balloon-based facilities can make solar cell calibration measurements above 99.5% of the atmosphere for use in adjusting laboratory solar simulators. While close to on-orbit illumination, the small attenuation of the spectrum may result in under-measurement of solar cell parameters. Variations in stratospheric weather may produce flight-to-flight measurement variations. To support the NSCAP effort, this work quantifies some of the effects on solar cell short circuit current (Isc) measurements on triple junction sub-cells. This work looks at several types of high altitude methods: direct high altitude measurements near 120 kft, and lower stratospheric Langley plots from aircraft. It also looks at Langley extrapolation from altitudes above most of the ozone, for potential small balloon payloads. A convolution of the sub-cell spectral response with the standard solar spectrum, modified by several absorption processes, is used to determine the relative change from AM0, Isc/Isc(AM0). Rayleigh scattering, molecular scattering from uniformly mixed gases, ozone, and water vapor are included in this analysis. A range of atmospheric pressures is examined, from 0.05 to 0.25 atm, to cover the range of atmospheric altitudes where solar cell calibrations are performed. Generally these errors and uncertainties are less than 0.2%.
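
    The convolution step described above reduces to a ratio of two integrals: the sub-cell spectral response SR weighted by the attenuated spectrum versus by the unattenuated AM0 spectrum. The sketch below uses made-up placeholder curves (not real spectra or sub-cell responses) just to show the computation:

      import numpy as np
      from scipy.integrate import trapezoid

      wl = np.linspace(300, 1800, 500)                  # wavelength, nm
      E_am0 = np.exp(-((wl - 500) / 400.0)**2)          # stand-in AM0 irradiance
      T_atm = np.exp(-0.1 * (wl / 1000.0)**-4)          # toy Rayleigh-like transmission
      SR = np.where((wl > 350) & (wl < 900), 0.5, 0.0)  # stand-in top sub-cell response

      # relative change from AM0: Isc/Isc(AM0)
      isc_ratio = trapezoid(SR * E_am0 * T_atm, wl) / trapezoid(SR * E_am0, wl)
      print(f"Isc/Isc(AM0) = {isc_ratio:.4f}")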

  16. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  17. The Effect of Minimum Wage Rates on High School Completion

    ERIC Educational Resources Information Center

    Warren, John Robert; Hamrock, Caitlin

    2010-01-01

    Does increasing the minimum wage reduce the high school completion rate? Previous research has suffered from (1) narrow time horizons, (2) potentially inadequate measures of states' high school completion rates, and (3) potentially inadequate measures of minimum wage rates. Overcoming each of these limitations, we analyze the impact of changes in…

  18. High rate fabrication of compression molded components

    DOEpatents

    Matsen, Marc R.; Negley, Mark A.; Dykstra, William C.; Smith, Glen L.; Miller, Robert J.

    2016-04-19

    A method for fabricating a thermoplastic composite component comprises inductively heating a thermoplastic pre-form with a first induction coil by inducing current to flow in susceptor wires disposed throughout the pre-form, inductively heating smart susceptors in a molding tool to a leveling temperature with a second induction coil by applying a high-strength magnetic field having a magnetic flux that passes through surfaces of the smart susceptors, shaping the magnetic flux that passes through surfaces of the smart susceptors to flow substantially parallel to a molding surface of the smart susceptors, placing the heated pre-form between the heated smart susceptors; and applying molding pressure to the pre-form to form the composite component.

  19. Dose rate in brachytherapy using after-loading machine: pulsed or high-dose rate?

    PubMed

    Hannoun-Lévi, J-M; Peiffert, D

    2014-10-01

    Since February 2014, it is no longer possible to use low-dose-rate iridium-192 wires due to the end of industrial production of IRF1 and IRF2 sources. The Brachytherapy Group of the French Society of Radiation Oncology (GC-SFRO) has recommended switching from iridium wires to after-loading machines. Two types of after-loading machines are currently available, based on the dose rate used: pulsed-dose rate or high-dose rate. In this article, we propose a comparative analysis between pulsed-dose-rate and high-dose-rate brachytherapy, based on biological, technological, organizational and financial considerations. PMID:25195117

  20. High data rate optical transceiver terminal

    NASA Technical Reports Server (NTRS)

    Clarke, E. S.

    1973-01-01

    The objectives of this study were: (1) to design a 400 Mbps optical transceiver terminal to operate from a high-altitude balloon-borne platform in order to permit the quantitative evaluation of a space-qualifiable optical communications system design, (2) to design an atmospheric propagation experiment to operate in conjunction with the terminal to measure the degrading effects of the atmosphere on the links, and (3) to design typical optical communications experiments for space-borne laboratories in the 1980-1990 time frame. As a result of the study, a transceiver package has been configured for demonstration flights during late 1974. The transceiver contains a 400 Mbps transmitter, a 400 Mbps receiver, and acquisition and tracking receivers. The transmitter is a Nd:YAG, 200 MHz, mode-locked, CW, diode-pumped laser operating at 1.06 μm, requiring 50 mW for 6 dB margin. It will be designed to implement Pulse Quaternary Modulation (PQM). The 400 Mbps receiver utilizes a Dynamic Crossed-Field Photomultiplier (DCFP) detector. The acquisition receiver is a Quadrant Photomultiplier Tube (QPMT) and receives a 400 Mbps signal chopped at 0.1 MHz.

  1. High HIV Rates for Gay Men in Some Southern Cities

    MedlinePlus

    In Jackson, Miss., ... 2016 (HealthDay News) -- Rates of HIV infection among gay and bisexual men are approaching 30 percent to ...

  2. High-Rate Strong-Signal Quantum Cryptography

    NASA Technical Reports Server (NTRS)

    Yuen, Horace P.

    1996-01-01

    Several quantum cryptosystems utilizing different kinds of nonclassical lights, which can accommodate high intensity fields and high data rate, are described. However, they are all sensitive to loss and both the high rate and the strong-signal character rapidly disappear. A squeezed light homodyne detection scheme is proposed which, with present-day technology, leads to more than two orders of magnitude data rate improvement over other current experimental systems for moderate loss.

  3. Correlation of anomalous write error rates and ferromagnetic resonance spectrum in spin-transfer-torque-magnetic-random-access-memory devices containing in-plane free layers

    SciTech Connect

    Evarts, Eric R.; Rippard, William H.; Pufall, Matthew R.; Heindl, Ranko

    2014-05-26

    In a small fraction of magnetic-tunnel-junction-based magnetic random-access memory devices with in-plane free layers, the write-error rates (WERs) are higher than expected on the basis of the macrospin or quasi-uniform magnetization reversal models. In devices with increased WERs, the product of effective resistance and area, tunneling magnetoresistance, and coercivity do not deviate from typical device properties. However, the field-swept, spin-torque, ferromagnetic resonance (FS-ST-FMR) spectra with an applied DC bias current deviate significantly for such devices. With a DC bias of 300 mV (producing 9.9 × 10^6 A/cm^2) or greater, these anomalous devices show an increase in the fraction of the power present in FS-ST-FMR modes corresponding to higher-order excitations of the free-layer magnetization. As much as 70% of the power is contained in higher-order modes compared to ≈20% in typical devices. Additionally, a shift in the uniform-mode resonant field that is correlated with the magnitude of the WER anomaly is detected at DC biases greater than 300 mV. These differences in the anomalous devices indicate a change in the micromagnetic resonant mode structure at high applied bias.

  4. Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu

    2015-07-01

    Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, is not able to obtain a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from the two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs with a short baseline. The ephemeris error and atmospheric delay are the possible main error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD during a period of quiet ionospheric activity, the main error sources decreasing positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integration of satellite velocity error, which increase linearly with DLTTD. The cycle slip of the asynchronous double-differenced carrier phase is detected by the TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations as well, since the SOM is only a specific case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but also supports the mobile phone network data link transfer mode for the data of the reference receiver. This method avoids the data synchronization process apart from the ambiguity initialization step, which is very convenient for real-time navigation of vehicles. The static and kinematic experiment results show that this method achieves 20 Hz or even higher rate output in

  5. Tradeoff between no-call reduction in genotyping error rate and loss of sample size for genetic case/control association studies.

    PubMed

    Kang, S J; Gordon, D; Brown, A M; Ott, J; Finch, S J

    2004-01-01

    Single nucleotide polymorphisms (SNP) may be genotyped for use in case-control designs to test for association between a SNP marker and a disease using a 2 x 3 chi-squared test of independence. Genotyping is often based on underlying continuous measurements, which are classified into genotypes. A "no-call" procedure is sometimes used in which borderline observations are not classified. This procedure has the simultaneous effect of reducing the genotype error rate and the expected number of genotypes observed. Both quantities affect the power of the statistic. We develop methods for calculating the genotype error rate, the expected number of genotypes observed, and the expected power of the resulting test as a function of the no-call procedure. We examine the statistical properties of the chi-squared test using a no-call procedure when the underlying continuous measure of genotype classification is a three-component mixture of univariate normal distributions under a range of parameter specifications. The genotype error rate decreases as the no-call region is increased. The expected number of observations genotyped also decreases. Our key finding is that the expected power of the chi-squared test is not sensitive to the no-call procedure. That is, the benefits of reduced genotype error rate are almost exactly balanced by the losses due to reduced genotype observations. For an underlying univariate normal mixture of genotype classification to be analyzed with a 2 x 3 chi-squared test, there is little, if any, increase in power using a no-call procedure. PMID:14992497
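
    The power computation described above can be reproduced in outline with the noncentral chi-squared distribution: the 2 x 3 test has 2 degrees of freedom and noncentrality N·w², where w² sums the squared cell-probability deviations. The cell probabilities and the 10% no-call fraction below are illustrative assumptions, not the paper's values:

      import numpy as np
      from scipy.stats import chi2, ncx2

      # six cells (case/control x three genotypes), each row summing to 1/2
      p_null = np.array([0.25, 0.50, 0.25, 0.25, 0.50, 0.25]) / 2
      p_alt  = np.array([0.20, 0.50, 0.30, 0.30, 0.50, 0.20]) / 2

      def power(n_genotyped, alpha=0.05, df=2):
          w2 = np.sum((p_alt - p_null)**2 / p_null)   # Cohen's w^2
          crit = chi2.ppf(1 - alpha, df)
          return ncx2.sf(crit, df, n_genotyped * w2)

      # trade-off: a no-call rule shrinks the genotyped sample size
      for n in (500, 1000, 2000):
          print(n, f"power = {power(n):.3f}, with 10% no-calls: {power(0.9 * n):.3f}")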

  6. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case.

    PubMed

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol where one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing. PMID:27452275

  7. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    PubMed Central

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol where one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing. PMID:27452275

  8. Bit error rate optimization of an acousto-optic tracking system for free-space laser communications

    NASA Astrophysics Data System (ADS)

    Sofka, J.; Nikulin, V.

    2006-02-01

    Optical communications systems have been gaining momentum with the increasing demand for transmission bandwidth in the last several years. Optical-cable-based solutions have become an attractive alternative to copper-based systems in the most bandwidth-demanding applications due to increased bandwidth and longer inter-repeater distances. The promise of similar benefits over radio communications systems is driving the research into free-space laser communications. Along with increased communications bandwidth, a free-space laser communications system offers lower power consumption and the possibility of covert data links due to the concentration of the energy of the laser into a narrow beam. A narrow beam, however, results in a requirement for much more accurate and agile steering, so that a data link can be maintained in a scenario of communication platforms in relative motion or in the presence of vibrations. This paper presents a laser beam tracking system employing an acousto-optic cell capable of deflecting a laser beam at a very high rate (on the order of tens of kHz). The tracking system is subjected to vibrations to simulate a realistic implementation, resulting in an increased bit error rate (BER). The performance of the system can be significantly improved through digital control. A constant-gain controller is complemented by a Kalman filter whose parameters are optimized to achieve the lowest possible BER for a given vibration spectrum.
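
    A minimal sketch of the constant-gain-plus-Kalman idea, with a scalar random-walk model for the beam offset; the noise variances are illustrative, not the paper's vibration spectrum:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 2000
      q, r = 1e-4, 1e-2                                # process and measurement noise variances
      truth = np.cumsum(rng.normal(0, np.sqrt(q), n))  # random-walk beam jitter
      meas = truth + rng.normal(0, np.sqrt(r), n)      # noisy position sensor

      x, p = 0.0, 1.0                                  # state estimate and its variance
      est = np.empty(n)
      for k in range(n):
          p += q                                       # predict (random-walk model)
          k_gain = p / (p + r)                         # Kalman gain
          x += k_gain * (meas[k] - x)                  # update with the innovation
          p *= 1 - k_gain
          est[k] = x

      print("raw rms pointing error:     ", np.std(meas - truth))
      print("filtered rms pointing error:", np.std(est - truth))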

  9. Managing Errors to Reduce Accidents in High Consequence Networked Information Systems

    SciTech Connect

    Ganter, J.H.

    1999-02-01

    Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.

  10. Combinatorial FSK modulation for power-efficient high-rate communications

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Budinger, James M.; Vanderaar, Mark J.

    1991-01-01

    Deep-space and satellite communications systems must be capable of conveying high-rate data accurately with low transmitter power, often through dispersive channels. A class of noncoherent Combinatorial Frequency Shift Keying (CFSK) modulation schemes is investigated which address these needs. The bit error rate performance of this class of modulation formats is analyzed and compared to the more traditional modulation types. Candidate modulator, demodulator, and digital signal processing (DSP) hardware structures are examined in detail. System-level issues are also discussed.
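
    As a baseline for such comparisons, the symbol- and bit-error rates of classical noncoherent orthogonal M-ary FSK follow a standard closed form; the sketch below evaluates that textbook formula (it does not model the combinatorial CFSK scheme itself):

      import math

      def mfsk_pb(es_n0, m):
          """Bit error probability of noncoherent orthogonal M-FSK."""
          ps = sum((-1)**(k + 1) * math.comb(m - 1, k) / (k + 1)
                   * math.exp(-k * es_n0 / (k + 1))
                   for k in range(1, m))
          return ps * (m / 2) / (m - 1)      # symbol-to-bit error conversion

      eb_n0_db = 10.0
      eb_n0 = 10**(eb_n0_db / 10)
      for m in (2, 4, 8):
          es_n0 = eb_n0 * math.log2(m)       # Es = Eb * log2(M)
          print(f"M={m}: Pb = {mfsk_pb(es_n0, m):.3e} at Eb/N0 = {eb_n0_db} dB")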

  11. The Rate of Return to the High/Scope Perry Preschool Program

    PubMed Central

    Heckman, James J.; Moon, Seong Hyeok; Pinto, Rodrigo; Savelyev, Peter A.; Yavitz, Adam

    2010-01-01

    This paper estimates the rate of return to the High/Scope Perry Preschool Program, an early intervention program targeted toward disadvantaged African-American youth. Estimates of the rate of return to the Perry program are widely cited to support the claim of substantial economic benefits from preschool education programs. Previous studies of the rate of return to this program ignore the compromises that occurred in the randomization protocol. They do not report standard errors. The rates of return estimated in this paper account for these factors. We conduct an extensive analysis of sensitivity to alternative plausible assumptions. Estimated annual social rates of return generally fall between 7 and 10 percent, with most estimates substantially lower than those previously reported in the literature. However, returns are generally statistically significantly different from zero for both males and females and are above the historical return on equity. Estimated benefit-to-cost ratios support this conclusion. PMID:21804653

  12. High-shear-rate capillary viscometer for inkjet inks

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Carr, Wallace W.; Bucknall, David G.; Morris, Jeffrey F.

    2010-06-01

    A capillary viscometer developed to measure the apparent shear viscosity of inkjet inks at high apparent shear rates encountered during inkjet printing is described. By using the Weissenberg-Rabinowitsch equation, true shear viscosity versus true shear rate is obtained. The device is comprised of a constant-flow generator, a static pressure monitoring device, a high precision submillimeter capillary die, and a high stiffness flow path. The system, which is calibrated using standard Newtonian low-viscosity silicone oil, can be easily operated and maintained. Results for measurement of the shear-rate-dependent viscosity of carbon-black pigmented water-based inkjet inks at shear rates up to 2×10^5 s^-1 are discussed. The Cross model was found to closely fit the experimental data. Inkjet ink samples with similar low-shear-rate viscosities exhibited significantly different shear viscosities at high shear rates depending on particle loading.
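
    The Weissenberg-Rabinowitsch step works as follows: with apparent wall shear rate gamma_a = 4Q/(pi R^3) and wall shear stress tau_w = dP R/(2L), the true wall shear rate is (gamma_a/4)(3 + 1/n'), where n' = d ln(tau_w)/d ln(gamma_a). The sketch below applies this numerically to synthetic power-law-like data (not the paper's ink measurements) and then fits the Cross model eta = eta_inf + (eta_0 - eta_inf)/(1 + (lambda*gamma)^m):

      import numpy as np
      from scipy.optimize import curve_fit

      gdot_a = np.logspace(2, 5.3, 30)       # apparent shear rate, 1/s
      tau_w = 2.0 * gdot_a**0.8              # synthetic wall shear stress, Pa

      n_prime = np.gradient(np.log(tau_w), np.log(gdot_a))
      gdot_true = (gdot_a / 4.0) * (3.0 + 1.0 / n_prime)   # WR correction
      eta_true = tau_w / gdot_true                         # true shear viscosity

      def cross(gdot, eta0, eta_inf, lam, m):
          return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * gdot)**m)

      popt, _ = curve_fit(cross, gdot_true, eta_true,
                          p0=[eta_true[0], eta_true[-1], 1e-3, 0.5],
                          bounds=(0, np.inf))
      print("Cross parameters (eta0, eta_inf, lambda, m):", popt)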

  13. High-shear-rate capillary viscometer for inkjet inks

    SciTech Connect

    Wang Xi; Carr, Wallace W.; Bucknall, David G.; Morris, Jeffrey F.

    2010-06-15

    A capillary viscometer developed to measure the apparent shear viscosity of inkjet inks at high apparent shear rates encountered during inkjet printing is described. By using the Weissenberg-Rabinowitsch equation, true shear viscosity versus true shear rate is obtained. The device is comprised of a constant-flow generator, a static pressure monitoring device, a high precision submillimeter capillary die, and a high stiffness flow path. The system, which is calibrated using standard Newtonian low-viscosity silicone oil, can be easily operated and maintained. Results for measurement of the shear-rate-dependent viscosity of carbon-black pigmented water-based inkjet inks at shear rates up to 2×10^5 s^-1 are discussed. The Cross model was found to closely fit the experimental data. Inkjet ink samples with similar low-shear-rate viscosities exhibited significantly different shear viscosities at high shear rates depending on particle loading.

  14. High School Graduation Rates: Alternative Methods and Implications

    ERIC Educational Resources Information Center

    Miao, Jing; Haney, Walt

    2004-01-01

    The No Child Left Behind Act has brought great attention to the high school graduation rate as one of the mandatory accountability measures for public school systems. However, there is no consensus on how to calculate the high school graduation rate given the lack of longitudinal databases that track individual students. This study reviews…

  15. HIGH-RATE DISINFECTION TECHNIQUES FOR COMBINED SEWER OVERFLOW

    EPA Science Inventory

    This paper presents high-rate disinfection technologies for combined sewer overflow (CSO). The high-rate disinfection technologies of interest are: chlorination/dechlorination, ultraviolet light irradiation (UV), chlorine dioxide (ClO2), ozone (O3), peracetic acid (CH3COOOH)...

  16. High Graduate Unemployment Rate and Taiwanese Undergraduate Education

    ERIC Educational Resources Information Center

    Wu, Chih-Chun

    2011-01-01

    An expansion in higher education in combination with the recent global economic recession has resulted in a high college graduate unemployment rate in Taiwan. This study investigates how the high unemployment rate and financial constraints caused by economic cutbacks have shaped undergraduates' class choices, job needs, and future income…

  17. Continuous operation of high bit rate quantum key distribution

    NASA Astrophysics Data System (ADS)

    Dixon, A. R.; Yuan, Z. L.; Dynes, J. F.; Sharpe, A. W.; Shields, A. J.

    2010-04-01

    We demonstrate quantum key distribution with a secure bit rate exceeding 1 Mbit/s over 50 km of fiber, averaged over a continuous 36 h period. Continuous operation at high bit rates is achieved using feedback systems to control path length difference and polarization in the interferometer, as well as the timing of the detection windows. High bit rates and continuous operation allow finite-key-size effects to be strongly reduced, achieving a key extraction efficiency of 96% compared to keys of infinite length.

  18. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  19. High speed imaging for material parameters calibration at high strain rate

    NASA Astrophysics Data System (ADS)

    Sasso, M.; Fardmoshiri, M.; Mancini, E.; Rossi, M.; Cortese, L.

    2016-05-01

    To describe material behaviour at high strain rates, dynamic experimental tests are necessary, and appropriate constitutive models are to be calibrated accordingly. A way to achieve this is through an inverse procedure, based on the minimization of an error function calculated as the difference between experimental data and numerical data coming from finite element analysis. This approach, widely used in the literature, has a heavy computational cost associated with the minimization process, which requires, for each variation of the material model parameters, the execution of FE calculations. In this work, a faster yet effective calibration procedure is studied. Experimental tests were performed on an aluminium alloy, AA6061-T6, by means of a direct tension-compression split Hopkinson bar. A fast camera with a resolution of 192 × 128 pixels and capable of a sample rate of 100,000 fps captured images of the deformation process undergone by the samples during the tests. The profile of the sample, obtained after image binarization and processing, was post-processed to derive the deformation history; afterwards it was possible to calculate the true stress and strain, and carry out the inverse calibration by analytical computations. The results of this method were compared with those coming from the finite element approach.
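
    The analytical reduction described above is compact enough to show directly: assuming plastic incompressibility, the instantaneous cross-section A follows from the imaged diameter, true strain is ln(A/A0), and true stress is F/A. The diameter and force histories below are placeholders, not test data:

      import numpy as np

      d0 = 5.0e-3                                 # initial specimen diameter, m
      d = np.array([5.0e-3, 5.3e-3, 5.8e-3])      # diameters from the imaged profile, m
      F = np.array([0.0, 8.0e3, 1.5e4])           # synchronized force history, N

      A0 = np.pi * d0**2 / 4.0
      A = np.pi * d**2 / 4.0
      eps_true = np.log(A / A0)                   # = -ln(L/L0) at constant volume
      sigma_true = F / A                          # true (Cauchy) stress, Pa
      print(eps_true, sigma_true / 1e6, "MPa")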

  20. GPS-based Real-Time and High-Rate Estimation of Earth Orientation Parameters

    NASA Astrophysics Data System (ADS)

    Bertiger, W. I.; Bar-Sever, Y. E.; Gross, R. S.

    2014-12-01

    Accurate real-time values of Earth orientation parameters (EOPs) (X and Y polar motion and rates, and UT1-UTC rate) are desirable for a number of real-time applications, including GNSS orbit determination, positioning and timing, and deep space navigation. We will demonstrate a new capability to estimate EOPs from GPS data alone, in real time and at high rate. Using RTGx, a new GNSS modeling and data analysis software package developed at JPL to replace GIPSY and Real Time GIPSY (RTG), we explore estimation of EOPs within a real-time GPS orbit determination operation. We first characterize the errors in the conventional 1-2 day EOP predictions available through the IERS Bulletin A, and assess the impact of these errors on GPS orbit determination accuracy and on other key performance metrics. We then evaluate a variety of EOP stochastic estimation schemes, and demonstrate the ability to recover, in real time, accurate EOP values with very high temporal granularity.

  1. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

    2006-01-01

    We present a flow-down error analysis, from the radar system to topographic height errors, for bi-static single-pass SAR interferometry with a satellite tandem pair. Because orbital dynamics cause the baseline length and baseline orientation to evolve spatially and temporally, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations for height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, and slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and an X-band SAR. Results from our model indicate that global DTED Level 3 accuracy can be achieved.
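
    For orientation, the familiar single-baseline InSAR sensitivity relation sigma_h ~ (lambda * rho * sin(theta) / (2 pi B_perp)) * sigma_phi gives the scale of the height error per unit of interferometric phase noise. The look angle, slant range and phase noise below are assumed round numbers consistent with the quoted geometry, not values from the paper:

      import numpy as np

      lam = 0.031                   # X-band wavelength, m
      rho = 600e3                   # slant range, m (assumed for ~514 km altitude)
      theta = np.deg2rad(35.0)      # look angle, assumed
      b_perp = 300.0                # perpendicular baseline, m
      sigma_phi = np.deg2rad(10.0)  # assumed phase noise

      sigma_h = lam * rho * np.sin(theta) / (2 * np.pi * b_perp) * sigma_phi
      print(f"height error ~ {sigma_h:.2f} m per {np.rad2deg(sigma_phi):.0f} deg phase noise")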

  2. High-resolution error detection in the capture process of a single-electron pump

    NASA Astrophysics Data System (ADS)

    Giblin, S. P.; See, P.; Petrie, A.; Janssen, T. J. B. M.; Farrer, I.; Griffiths, J. P.; Jones, G. A. C.; Ritchie, D. A.; Kataoka, M.

    2016-01-01

    The dynamic capture of electrons in a semiconductor quantum dot (QD) by raising a potential barrier is a crucial stage in metrological quantized charge pumping. In this work, we use a quantum point contact (QPC) charge sensor to study errors in the electron capture process of a QD formed in a GaAs heterostructure. Using a two-step measurement protocol to compensate for 1/f noise in the QPC current, and repeating the protocol more than 10^6 times, we are able to resolve errors with probabilities of order 10^-6. For the studied sample, one-electron capture is affected by errors in ~30 out of every million cycles, while two-electron capture was performed more than 10^6 times with only one error. For errors in one-electron capture, we detect both failure to capture an electron and capture of two electrons. Electron counting measurements are a valuable tool for investigating non-equilibrium charge capture dynamics, and necessary for validating the metrological accuracy of semiconductor electron pumps.
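
    Error probabilities this small can be bounded from repeated cycles with a simple binomial confidence interval. A sketch using the Clopper-Pearson construction (the counts echo the figures quoted above; the interval itself is our illustration, not the paper's analysis):

      from scipy.stats import beta

      def clopper_pearson(k, n, conf=0.95):
          """Exact two-sided CI for an error probability, k errors in n trials."""
          a = (1 - conf) / 2
          lo = 0.0 if k == 0 else beta.ppf(a, k, n - k + 1)
          hi = 1.0 if k == n else beta.ppf(1 - a, k + 1, n - k)
          return lo, hi

      print(clopper_pearson(30, 1_000_000))   # ~30 errors per 10^6 one-electron captures
      print(clopper_pearson(1, 1_000_000))    # 1 error in 10^6 two-electron captures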

  3. Authoritative school climate and high school dropout rates.

    PubMed

    Jia, Yuane; Konold, Timothy R; Cornell, Dewey

    2016-06-01

    This study tested the association between school-wide measures of an authoritative school climate and high school dropout rates in a statewide sample of 315 high schools. Regression models at the school level of analysis used teacher and student measures of disciplinary structure, student support, and academic expectations to predict overall high school dropout rates. Analyses controlled for school demographics of school enrollment size, percentage of low-income students, percentage of minority students, and urbanicity. Consistent with authoritative school climate theory, moderation analyses found that when students perceive their teachers as supportive, high academic expectations are associated with lower dropout rates. (PsycINFO Database Record) PMID:26641957

  4. A software solution to estimate the SEU-induced soft error rate for systems implemented on SRAM-based FPGAs

    NASA Astrophysics Data System (ADS)

    Zhongming, Wang; Zhibin, Yao; Hongxia, Guo; Min, Lu

    2011-05-01

    SRAM-based FPGAs are very susceptible to radiation-induced Single-Event Upsets (SEUs) in space applications. The failure mechanisms in an FPGA's configuration memory differ from those in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect. Fault injection appears to meet such a requirement. In this paper, we propose a new methodology to analyze the soft errors in SRAM-based FPGAs. This method is based on an in-depth understanding of the device architecture and of the failure mechanisms induced by configuration upsets. The developed programs read in the placed-and-routed netlist, search for critical logic nodes and paths that may destroy the circuit's topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to get the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach.

  5. Determination and Modeling of Error Densities in Ephemeris Prediction

    SciTech Connect

    Jones, J.P.; Beckerman, M.

    1999-02-07

    The authors determined error densities of ephemeris predictions for 14 LEO satellites. The empirical distributions are not inconsistent with the hypothesis of a Gaussian distribution. The growth rate of radial errors is most highly correlated with eccentricity (|r| = 0.63, α < 0.05). The growth rate of along-track errors is most highly correlated with the decay rate of the semimajor axis (|r| = 0.97; α < 0.01).

  6. HIGH-RATE FORMABILITY OF HIGH-STRENGTH ALUMINUM ALLOYS: A STUDY ON OBJECTIVITY OF MEASURED STRAIN AND STRAIN RATE

    SciTech Connect

    Upadhyay, Piyush; Rohatgi, Aashish; Stephens, Elizabeth V.; Davies, Richard W.; Catalini, David

    2015-02-18

    Al alloy AA7075 sheets were deformed at room temperature at strain rates exceeding 1000 s^-1 using the electrohydraulic forming (EHF) technique. A method that combines high-speed imaging and the digital image correlation technique, developed at Pacific Northwest National Laboratory, is used to investigate the high-strain-rate deformation behavior of AA7075. For strain-rate-sensitive materials, the ability to accurately model their high-rate deformation behavior depends on the ability to accurately quantify the strain rate that the material is subjected to. This work investigates the objectivity of software-calculated strain and strain rate by varying different parameters within commonly used, commercially available digital image correlation software. Except very close to the time of crack opening, the calculated strain and strain rates are very consistent and independent of the adjustable parameters of the software.

  7. Experimental investigation on the high chip rate of 2D incoherent optical CDMA system

    NASA Astrophysics Data System (ADS)

    Su, Guorui; Wang, Rong; Pu, Tao; Fang, Tao; Zheng, Jilin; Zhu, Huatao; Wu, Weijiang

    2015-08-01

    An innovative approach to realising a high chip rate in an OCDMA transmission system is proposed and experimentally investigated; the high chip rate is achieved through a 2-D wavelength-hopping time-spreading en/decoder based on a supercontinuum light source. The source used in the experiment is generated by a highly nonlinear optical fiber (HNLF), an erbium-doped fiber amplifier (EDFA) with an output power of 26 dBm, and a distributed-feedback laser diode operated in the gain-switched state. The span and the flatness of the light source are 20 nm and 3 dB, respectively, after equalization by a wavelength selective switch (WSS). The wavelength-hopping time-spreading coder, consisting of the WSS and delay lines, can be tuned over 20 nm in wavelength and 400 ps in time. The experimental results show that a chip rate of 500 Gchip/s can be achieved at a data rate of 2.5 Gbit/s, while keeping the bit error rate below the forward-error-correction limit after 40 km of transmission.

  8. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Burr, Tom; Favalli, Andrea; Nicholson, Andrew

    2016-03-01

    The declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar - Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
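
    The two fitting routes can be contrasted on a generic rational (Pade-like) calibration form D(m) = a·m/(1 + b·m), relating a doubles rate D to the 235U linear density m; the functional form, parameter values and noise level here are assumptions for illustration, not the paper's data:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(1)
      m = np.linspace(5, 80, 10)                 # linear density, illustrative units
      d = 2.0 * m / (1 + 0.01 * m)               # "true" doubles rate
      d_noisy = d + rng.normal(0, 0.02 * d)      # noise sits in the predictor D

      # Route 1: direct nonlinear fit, m as response (inverting D = a m / (1 + b m))
      def inv_model(d_rate, a, b):
          return d_rate / (a - b * d_rate)
      popt, _ = curve_fit(inv_model, d_noisy, m, p0=[1.0, 0.001])

      # Route 2: linearized fit m/D = 1/a + (b/a) m; simple, but the transform
      # distorts the error structure when D itself carries the noise
      slope, intercept = np.polyfit(m, m / d_noisy, 1)
      a_lin = 1.0 / intercept
      b_lin = slope * a_lin
      print("nonlinear:", popt, "linearized:", (a_lin, b_lin))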

  9. High Capacity Reversible Watermarking for Audio by Histogram Shifting and Predicted Error Expansion

    PubMed Central

    Wang, Fei; Chen, Zuo

    2014-01-01

    With reversible watermarking, the information embedded in an audio signal can be extracted while the original audio data are recovered losslessly. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedding-location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the location map bit length, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by our proposed scheme. Experiments show that this algorithm improves the SNR of embedded audio signals and the embedding capacity, drastically reduces the location map bit length, and enhances capacity control capability. PMID:25097883
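
    The core embedding step can be sketched in a few lines. This is a minimal prediction-error-expansion (PEE) scheme with histogram shifting on integer samples; the fixed two-tap predictor and threshold are assumptions (the paper instead optimizes the predictor with differential evolution), and overflow protection is omitted:

      import numpy as np

      T = 2                                             # expansion threshold, assumed

      def pred(a, i):
          return (int(a[i - 1]) + int(a[i - 2])) // 2   # fixed causal predictor

      def embed(x, bits):
          y, j = x.astype(np.int64).copy(), 0
          for i in range(2, len(y)):
              e = int(y[i]) - pred(y, i)
              if -T <= e < T:                  # expandable bin: e' = 2e + b
                  b = bits[j] if j < len(bits) else 0   # zero-pad once payload is out
                  j = min(j + 1, len(bits))
                  e = 2 * e + b
              elif e >= T:
                  e += T                       # histogram shift, right side
              else:
                  e -= T                       # histogram shift, left side
              y[i] = pred(y, i) + e
          return y, j                          # stego signal, payload bits used

      def extract(y):
          x, bits = y.astype(np.int64).copy(), []
          for i in range(2, len(y)):
              e = int(y[i]) - pred(y, i)       # same stego context as in embed
              if -2 * T <= e < 2 * T:
                  bits.append(e & 1)
                  e = (e - (e & 1)) // 2
              elif e >= 2 * T:
                  e -= T
              else:
                  e += T
              x[i] = pred(y, i) + e
          return x, bits

      x = np.array([100, 102, 101, 105, 103, 104, 108, 107])
      y, n = embed(x, [1, 0, 1])
      x_rec, bits = extract(y)
      assert np.array_equal(x_rec, x) and bits[:n] == [1, 0, 1]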

  10. Miniature high stability high temperature space rated blackbody radiance source

    NASA Technical Reports Server (NTRS)

    Jones, J. A.; Beswick, A. G.

    1987-01-01

    This paper presents the design and test performance of a conical cavity type blackbody radiance source that will meet the requirements of the Halogen Occultation Experiment on the NASA Upper Atmospheric Research Satellite program. The thrust of this design effort was to minimize the heat losses, in order to keep the power usage under 7.5 watts, and to minimize the amount of silica in the materials. Silica in the presence of the platinum heater winding used in this design would cause the platinum to erode, changing the operating temperature set-point. The design required the development of fabrication techniques which would provide very small, close tolerance parts from extremely difficult-to-machine materials. Also, a space rated ceramic core and unique, low thermal conductance, ceramic-to-metal joint was developed, tested and incorporated in this design. The completed flight qualification hardware has undergone performance, environmental and life testing. The design configuration and test results are discussed in detail.

  11. Correction of beam errors in high power laser diode bars and stacks

    NASA Astrophysics Data System (ADS)

    Monjardin, J. F.; Nowak, K. M.; Baker, H. J.; Hall, D. R.

    2006-09-01

    The beam errors of an 11 bar laser diode stack fitted with fast-axis collimator lenses have been corrected by a single refractive plate, produced by laser cutting and polishing. The so-called smile effect is virtually eliminated and collimator aberration greatly reduced, improving the fast-axis beam quality of each bar by a factor of up to 5. The single corrector plate for the whole stack ensures that the radiation from all the laser emitters is parallel to a common axis. Beam-pointing errors of the bars have been reduced to below 0.7 mrad.

  12. Line-Bisecting Performance in Highly Skilled Athletes: Does Preponderance of Rightward Error Reflect Unique Cortical Organization and Functioning?

    ERIC Educational Resources Information Center

    Carlstedt, Roland A.

    2004-01-01

    A line-bisecting test was administered to 250 highly skilled right-handed athletes and a control group of 60 right-handed age matched non-athletes. Results revealed that athletes made overwhelmingly more rightward errors than non-athletes, who predominantly bisected lines to the left of the veridical center. These findings were interpreted in the…

  13. High-Velocity Angular Vestibulo-Ocular Reflex Adaptation to Position Error Signals

    PubMed Central

    Scherer, Matthew; Schubert, Michael C.

    2010-01-01

    Background and Purpose Vestibular rehabilitation strategies including gaze stabilization exercises have been shown to increase gain of the angular vestibulo-ocular reflex (aVOR) using a retinal slip error signal (ES). The identification of additional ESs capable of promoting substitution strategies or aVOR adaptation is an important goal in the management of vestibular hypofunction. Position ESs have been shown to increase both aVOR gain and recruitment of compensatory saccades (CSs) during passive whole body rotation. This may be a useful compensatory strategy for gaze instability during active head rotation as well. In vestibular rehabilitation, the imaginary target exercise is often prescribed to improve gaze stability. This exercise uses a position ES; however, the mechanism for its effect has not been investigated. We compared aVOR gain adaptation using 2 types of small position ES: constant versus incremental. Methods Ten subjects with normal vestibular function were assessed with unpredictable and active head rotations before and after a 20-minute training session. Subjects performed 9 epochs of 40 active, high-velocity head impulses using a position ES stimulus to increase aVOR gain. Results Five subjects demonstrated significant aVOR gain increases with the constant-position ES (mean, 2%; range, −18% to 12%) compared with another 5 subjects showing significant aVOR gain increases to the incremental-position ES (mean, 3.7%; range, −2% to 22.6%). There was no difference in aVOR gain adaptation or CS recruitment between the 2 paradigms. Discussion and Conclusion These findings suggest that some subjects can increase their aVOR gain in response to high-velocity active head movement training using a position ES. The primary mechanism for this seems to be aVOR gain adaptation because CS use was not modified. The overall low change in aVOR gain adaptation with position ES suggests that retinal slip is a more powerful aVOR gain modifier. PMID:20588093

  14. Prediction of error rates in dose-imprinted memories on board CRRES by two different methods. [Combined Release and Radiation Effects Satellite

    NASA Technical Reports Server (NTRS)

    Brucker, G. J.; Stassinopoulos, E. G.

    1991-01-01

    An analysis of the expected space radiation effects on the single event upset (SEU) properties of CMOS/bulk memories onboard the Combined Release and Radiation Effects Satellite (CRRES) is presented. Dose-imprint data from ground test irradiations of identical devices are applied to the predictions of cosmic-ray-induced space upset rates in the memories onboard the spacecraft. The calculations take into account the effect of total dose on the SEU sensitivity of the devices as the dose accumulates in orbit. Estimates of error rates, which involved an arbitrary selection of a single pair of threshold linear energy transfer (LET) and asymptotic cross-section values, were compared to the results of an integration over the cross-section curves versus LET. The integration gave lower upset rates than the use of the selected values of the SEU parameters. Since the integration approach is more accurate and eliminates the need for an arbitrary definition of threshold LET and asymptotic cross section, it is recommended for all error rate predictions where experimental sigma-versus-LET curves are available.
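
    The integral approach amounts to folding the measured cross-section curve into the differential LET spectrum instead of collapsing it to one threshold/asymptote pair. A schematic one-dimensional version follows (a real calculation also folds in chord-length distributions; the Weibull cross-section and power-law flux below are hypothetical, not CRRES data):

      import numpy as np
      from scipy.integrate import trapezoid

      let = np.logspace(0, 2, 200)             # LET, MeV*cm^2/mg
      l0, w, s = 3.0, 20.0, 1.5                # Weibull onset, width, shape (assumed)
      sigma = 1e-8 * (1 - np.exp(-(np.clip(let - l0, 0, None) / w)**s))  # cm^2/bit
      flux = 1e2 * let**-3.0                   # differential flux per unit LET (assumed)

      rate_integral = trapezoid(sigma * flux, let)             # upsets/bit/day
      # "single pair" shortcut: step function at threshold LET, asymptotic sigma
      mask = let >= l0
      rate_step = 1e-8 * trapezoid(flux[mask], let[mask])
      print(f"integral: {rate_integral:.2e}, step approximation: {rate_step:.2e}")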

  15. In vivo TLD dose measurements in catheter-based high-dose-rate brachytherapy.

    PubMed

    Adlienė, Diana; Jakštas, Karolis; Urbonavičius, Benas Gabrielis

    2015-07-01

    Routine in vivo dosimetry is well established in external beam radiotherapy; however, in high-dose-rate (HDR) brachytherapy it is restricted mainly to the detection of gross errors, due to the complexity of measurements in the steep dose gradients in the vicinity of the radioactive source and the resulting high uncertainties. The results of in vivo dose measurements using TLD 100 mini rods and TLD 'pin worms' in catheter-based HDR brachytherapy are provided in this paper, along with a comparison with the corresponding dose values obtained using the calculation algorithm of the treatment planning system. The possibility of performing independent verification of treatment delivery in HDR brachytherapy using TLDs is discussed. PMID:25809111

  16. High repetition rate optical switch using an electroabsorption modulator in TOAD configuration

    NASA Astrophysics Data System (ADS)

    Huo, Li; Yang, Yanfu; Lou, Caiyun; Gao, Yizhi

    2007-07-01

    A novel optical switch featured with high repetition rate, short switching window width, and high contrast ratio is proposed and demonstrated for the first time by placing an electroabsorption modulator (EAM) in a terahertz optical asymmetric demultiplexer (TOAD) configuration. The feasibility and main characteristics of the switch are investigated by numerical simulations and experiments. With this EAM-based TOAD, an error-free return-to-zero signal wavelength conversion with 0.62 dB power penalty at 20 Gbit/s is demonstrated.

  17. Error propagation equations and tables for estimating the uncertainty in high-speed wind tunnel test results

    SciTech Connect

    Clark, E.L.

    1993-08-01

    Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, calibration Mach number and Reynolds number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for nine fundamental aerodynamic ratios, most of which relate free-stream test conditions (pressure, temperature, density or velocity) to a reference condition. Tables of the ratios, R, absolute sensitivity coefficients, ∂R/∂M∞, and relative sensitivity coefficients, (M∞/R)(∂R/∂M∞), are provided as functions of M∞.
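
    Sensitivity coefficients of this kind can be regenerated symbolically. As an example (our illustration using the standard isentropic pressure ratio, not any specific table entry from the report):

      import sympy as sp

      M, g = sp.symbols("M gamma", positive=True)
      R = (1 + (g - 1) / 2 * M**2) ** (-g / (g - 1))   # p/p0, isentropic flow

      dRdM = sp.simplify(sp.diff(R, M))                # absolute sensitivity dR/dM
      rel = sp.simplify(M / R * dRdM)                  # relative sensitivity (M/R)(dR/dM)
      print(rel)   # equivalent to -gamma*M**2 / (1 + (gamma-1)*M**2/2)

      # numeric check at M = 2, gamma = 1.4
      print(float(rel.subs({M: 2, g: sp.Rational(7, 5)})))   # -> about -3.11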

  18. High data-rate atom interferometer for measuring acceleration

    SciTech Connect

    McGuinness, Hayden J.; Rakholia, Akash V.; Biedermann, Grant W.

    2012-01-02

    We demonstrate a high-data-rate light-pulse atom interferometer for measuring acceleration. The device is optimized to operate at rates between 50 Hz and 330 Hz, with sensitivities of 0.57 μg/√Hz to 36.7 μg/√Hz, respectively. Our method offers a dramatic increase in data rate and demonstrates a path to applications in highly dynamic environments. The performance of the device can largely be attributed to the high recapture efficiency of atoms from one interferometer measurement cycle to another.

  19. Quantum data locking for high-rate private communication

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Lloyd, Seth

    2015-03-01

    We show that, if the accessible information is used as a security quantifier, quantum channels with a certain symmetry can convey private messages at a tremendously high rate, within one bit of the rate of non-private classical communication. This result is obtained by exploiting the quantum data locking effect. The price to pay to achieve such a high private communication rate is that accessible-information security is in general not composable. However, composable security holds against an eavesdropper who is forced to measure her share of the quantum system within a finite time after she receives it.

  20. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    SciTech Connect

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-08-14

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.

  1. Uncovering high-strain rate protection mechanism in nacre

    NASA Astrophysics Data System (ADS)

    Huang, Zaiwang; Li, Haoze; Pan, Zhiliang; Wei, Qiuming; Chao, Yuh J.; Li, Xiaodong

    2011-11-01

    Under high-strain-rate compression (strain rate ~10^3 s^-1), nacre (mother-of-pearl) exhibits surprisingly high fracture strength vis-à-vis under quasi-static loading (strain rate 10^-3 s^-1). Nevertheless, the underlying mechanism responsible for such sharply different behaviors in these two loading modes remains completely unknown. Here we report a new deformation mechanism, adopted by nacre, the best-ever natural armor material, to protect itself against predatory penetrating impacts. It involves the emission of partial dislocations and the onset of deformation twinning that operate in a well-concerted manner to contribute to the increased high-strain-rate fracture strength of nacre. Our findings unveil that Mother Nature delicately uses an ingenious strain-rate-dependent stiffening mechanism with a purpose to fight against foreign attacks. These findings should serve as critical design guidelines for developing engineered body armor materials.

  4. Coexistence of High-Bit-Rate Quantum Key Distribution and Data on Optical Fiber

    NASA Astrophysics Data System (ADS)

    Patel, K. A.; Dynes, J. F.; Choi, I.; Sharpe, A. W.; Dixon, A. R.; Yuan, Z. L.; Penty, R. V.; Shields, A. J.

    2012-10-01

    Quantum key distribution (QKD) uniquely allows the distribution of cryptographic keys with security verified by quantum mechanical limits. Both protocol execution and subsequent applications require the assistance of classical data communication channels. While using separate fibers is one option, it is economically more viable if data and quantum signals are simultaneously transmitted through a single fiber. However, noise-photon contamination arising from the intense data signal has severely restricted both QKD distances and secure key rates. Here, we exploit a novel temporal-filtering effect for noise-photon rejection. This allows high-bit-rate QKD over fibers up to 90 km in length that are populated with error-free bidirectional Gb/s data communications. With a bit rate and range sufficient for important information infrastructures, such as smart cities and 10-Gbit Ethernet, QKD is a significant step closer toward wide-scale deployment in fiber networks.
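
    A back-of-envelope sketch of why temporal filtering helps, with invented numbers (none of the rates below are from the record): broadband Raman noise from the data channel arrives roughly uniformly in time, while the QKD photons arrive in narrow clocked slots, so a short detector gate rejects noise in proportion to its duty cycle.

        # All parameters are assumed values for illustration only.
        clock_rate  = 1e9      # QKD pulse rate, Hz
        gate_width  = 100e-12  # accepted detection window per pulse, s
        raman_rate  = 1e4      # noise photons/s reaching the detector
        signal_rate = 1e4      # sifted signal detections/s
        e_optical   = 0.01     # intrinsic optical error rate

        def qber(noise_cps):
            # random noise clicks produce the wrong bit half the time
            return (0.5 * noise_cps + e_optical * signal_rate) / (signal_rate + noise_cps)

        accepted = raman_rate * gate_width * clock_rate   # duty-cycle rejection
        print(f"QBER ungated {qber(raman_rate):.1%}, gated {qber(accepted):.1%}")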

  5. FAST TRACK COMMUNICATION High rate straining of tantalum and copper

    NASA Astrophysics Data System (ADS)

    Armstrong, R. W.; Zerilli, F. J.

    2010-12-01

    High strain rate measurements reported recently for several tantalum and copper crystal/polycrystal materials are shown to follow dislocation mechanics-based constitutive relations: first, at lower strain rates, dislocation velocity controls the imposed plastic deformation; then, at higher rates, control transitions to nano-scale dislocation generation by twinning or slip. For copper, additional slip displacements from the newly generated dislocations may also need to be accounted for.

  6. INVESTIGATION OF FLOW RATE CALIBRATION PROCEDURES ASSOCIATED WITH THE HIGH VOLUME METHOD FOR DETERMINATION OF SUSPENDED PARTICULATES

    EPA Science Inventory

    Determination of total suspended particulate (TSP) in the ambient air by the high-volume method requires three independent measurements, mass of particulate collected, sampling flow rate, and sampling time. Several potential sources of error in each of the three above measurement...

  7. Impact of Surface Curvature on Dose Delivery in Intraoperative High-Dose-Rate Brachytherapy

    SciTech Connect

    Oh, Moonseong; Wang, Zhou; Malhotra, Harish K.; Jaggernauth, Wainwright; Podgorsak, Matthew B.

    2009-04-01

    In intraoperative high-dose-rate (IOHDR) brachytherapy, a 2-dimensional (2D) geometry is typically used for treatment planning. The assumption of planar geometry may cause serious errors in dose delivery for target surfaces that are, in reality, curved. A study to evaluate the magnitude of these errors in clinical practice was undertaken. Cylindrical phantoms with 6 radii (range: 1.35-12.5 cm) were used to simulate curved treatment geometries. Treatment plans were developed for various planar geometries and were delivered to the cylindrical phantoms using catheters inserted into Freiburg applicators of varying dimension. Dose distributions were measured using radiographic film. In comparison to the treatment plan (for a planar geometry), the doses delivered to prescription points were higher on the concave side of the geometry, up to 15% for the phantom with the smallest radius. On the convex side of the applicator, delivered doses were up to 10% lower for small treated areas (≤5 catheters) but, interestingly, the dose error was negligible for large treated areas (>5 catheters). Our measurements have shown inaccuracy in dose delivery when the original planar treatment plan is delivered with a curved applicator. Dose delivery errors arising from the use of planar treatment plans with curved applicators may be significant.
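
    The sense of these errors can be reproduced with a toy inverse-square model (a sketch only, not a clinical dose calculation; the source spacing, prescription depth, and equal dwell weights below are invented): wrapping a planar source array onto a cylinder moves neighboring sources closer to points on the concave side and farther from points on the convex side.

        import numpy as np

        n_src, spacing, depth = 7, 1.0, 0.5          # sources, cm spacing, cm depth (assumed)

        def dose_flat():
            xs = (np.arange(n_src) - n_src // 2) * spacing
            return np.sum(1.0 / (xs**2 + depth**2))  # point under the centre source

        def dose_curved(radius, concave=True):
            # same sources wrapped on a cylinder; point sits 'depth' inside
            # (concave side) or outside (convex side) the source surface
            angles = (np.arange(n_src) - n_src // 2) * (spacing / radius)
            y_pt = radius - depth if concave else radius + depth
            sx, sy = radius * np.sin(angles), radius * np.cos(angles)
            return np.sum(1.0 / (sx**2 + (sy - y_pt)**2))

        flat = dose_flat()
        for R in (2.5, 5.0, 12.5):                   # three of the phantom radii, cm
            print(f"R={R:5.1f} cm: concave {dose_curved(R, True)/flat:.3f}x, "
                  f"convex {dose_curved(R, False)/flat:.3f}x")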

  8. Rural and Urban High School Dropout Rates: Are They Different?

    ERIC Educational Resources Information Center

    Jordan, Jeffrey L.; Kostandini, Genti; Mykerezi, Elton

    2012-01-01

    This study estimates the high school dropout rate in rural and urban areas, the determinants of dropping out, and whether the differences in graduation rates have changed over time. We use geocoded data from two nationally representative panel household surveys (NLSY 97 and NLSY 79) and a novel methodology that corrects for biases in graduation…

  9. Effects of spectral discrimination in high-spectral-resolution lidar on the retrieval errors for atmospheric aerosol optical properties.

    PubMed

    Cheng, Zhongtao; Liu, Dong; Luo, Jing; Yang, Yongying; Su, Lin; Yang, Liming; Huang, Hanlu; Shen, Yibing

    2014-07-10

    This paper presents a detailed analysis of the effects of spectral discrimination on the retrieval errors for atmospheric aerosol optical properties in high-spectral-resolution lidar (HSRL). To the best of our knowledge, this is the first study to focus on this topic comprehensively, and our goal is to provide some heuristic guidelines for the design of the spectral discrimination filter in HSRL. We first introduce a theoretical model for retrieval error evaluation of an HSRL instrument with a general three-channel configuration. The model takes into account only the error sources related to the spectral discrimination parameters; other error sources not associated with these parameters are excluded on purpose. Monte Carlo (MC) simulations are performed to validate the correctness of the theoretical model. Results from the model and MC simulations agree very well, and they illustrate one important, though not widely appreciated, fact: a large molecular transmittance and a large spectral discrimination ratio (SDR, i.e., the ratio of the molecular transmittance to the aerosol transmittance) are beneficial to retrieval accuracy. More specifically, we find that a large SDR can reduce retrieval errors conspicuously for the atmosphere at low altitudes, while its effect on the retrieval at high altitudes is very limited. A large molecular transmittance contributes to good retrieval accuracy everywhere, particularly at high altitudes, where the signal-to-noise ratio is small. Since molecular transmittance and SDR are often trade-offs, we suggest choosing a suitable SDR in favor of higher molecular transmittance rather than an unnecessarily high SDR when designing the spectral discrimination filter. These conclusions are expected to be applicable to most HSRL instruments with configurations similar to the one discussed here. PMID:25090057
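
    A minimal Monte Carlo sketch of the effect described above, assuming an idealized two-channel HSRL in which the combined channel sees all light and the molecular channel applies filter transmittances Tm and Ta to the molecular and aerosol components (photon counts and transmittance values are invented):

        import numpy as np
        rng = np.random.default_rng(0)

        def retrieval_error(Tm, Ta, M=2000.0, A=500.0, trials=20000):
            """Relative RMS error of the retrieved aerosol signal in one range bin."""
            Nt = rng.poisson(M + A, trials)              # combined channel counts
            Nm = rng.poisson(Tm * M + Ta * A, trials)    # molecular channel counts
            M_hat = (Nm - Ta * Nt) / (Tm - Ta)           # linear two-channel inversion
            A_hat = Nt - M_hat
            return np.sqrt(np.mean((A_hat - A) ** 2)) / A

        # same SDR with higher Tm helps; higher SDR at fixed Tm helps less
        for Tm, Ta in [(0.3, 0.01), (0.6, 0.02), (0.6, 0.002)]:
            print(f"Tm={Tm}, SDR={Tm/Ta:.0f}: rel. error {retrieval_error(Tm, Ta):.3f}")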

  10. Completely automated, highly error-tolerant macromolecular structure determination from multidimensional nuclear overhauser enhancement spectra and chemical shift assignments.

    PubMed

    Kuszewski, John; Schwieters, Charles D; Garrett, Daniel S; Byrd, R Andrew; Tjandra, Nico; Clore, G Marius

    2004-05-26

    The major rate-limiting step in high-throughput NMR protein structure determination involves the calculation of a reliable initial fold, the elimination of incorrect nuclear Overhauser enhancement (NOE) assignments, and the resolution of NOE assignment ambiguities. We present a robust approach to automatically calculate structures with a backbone coordinate accuracy of 1.0-1.5 Å from datasets in which as much as 80% of the long-range NOE information (i.e., between residues separated by more than five positions in the sequence) is incorrect. The current algorithm differs from previously published methods in that it has been expressly designed to ensure that the results from successive cycles are not biased by the global fold of structures generated in preceding cycles. Consequently, the method is highly error tolerant and is not easily funnelled down an incorrect path in either three-dimensional structure or NOE assignment space. The algorithm incorporates three main features: a linear energy function representation of the NOE restraints to allow maximization of the number of simultaneously satisfied restraints during the course of simulated annealing; a method for handling the presence of multiple possible assignments for each NOE cross-peak which avoids local minima by treating each possible assignment as if it were an independent restraint; and a probabilistic method to permit both inactivation and reactivation of all NOE restraints on the fly during the course of simulated annealing. NOE restraints are never removed permanently, thereby significantly reducing the likelihood of becoming trapped in a false minimum of NOE assignment space. The effectiveness of the algorithm is demonstrated using completely automatically peak-picked experimental NOE data from two proteins: interleukin-4 (136 residues) and cyanovirin-N (101 residues). The limits of the method are explored using simulated data on the 56-residue B1 domain of Streptococcal protein G. PMID:15149223

  11. High density bit transition requirements versus the effects on BCH error correcting code. [bit synchronization

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contained a recommended circuit approach, specified the pseudo-random (PN) sequence to be used, and detailed the properties of the sequence. Calculations showing the probability of failing to meet the required transition density were included. A computer simulation of the data stream and PN cover sequence was provided. All worst-case situations were simulated, and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) encoder/decoder design was constructed and demonstrated. The demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.
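
    As an illustration of the cover-sequence idea (a generic sketch; the specific PN sequence and circuit from the report are not reproduced here), XORing a worst-case transition-free data stream with an LFSR-generated PN sequence forces transitions, and the receiver strips the cover by XORing with the same sequence:

        def lfsr_pn(seed=0xACE1, length=64):
            """16-bit Fibonacci LFSR, taps for x^16 + x^14 + x^13 + x^11 + 1
            (illustrative only -- not the PN sequence specified in the report)."""
            state, out = seed, []
            for _ in range(length):
                bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                state = (state >> 1) | (bit << 15)
                out.append(bit)
            return out

        data = [0] * 64                              # worst case: no transitions at all
        pn = lfsr_pn()
        covered = [d ^ p for d, p in zip(data, pn)]  # transmitted (covered) stream
        uncovered = [c ^ p for c, p in zip(covered, pn)]
        assert uncovered == data                     # receiver removes the cover
        transitions = sum(a != b for a, b in zip(covered, covered[1:]))
        print(f"{transitions} transitions in {len(covered)} covered bits")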

  12. General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.

    2011-01-01

    The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model
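
    In spirit, the final rollup reduces to multiplying sensitivities by allocated motions and combining the resulting contrast terms; a toy version follows (all labels, sensitivities, and allocations are invented, and the squared-term combination is one common convention rather than the tool's documented formula):

        import numpy as np

        labels  = ["M2 decenter", "M3 tilt", "LOS pointing", "FSM residual"]
        sens    = np.array([3e-7, 8e-8, 5e-7, 1e-7])   # contrast amplitude per unit motion
        motions = np.array([0.5,  1.2,  0.3,  0.8])    # allocated thermal/jitter motions

        terms = (sens * motions) ** 2                  # incoherent (RSS-style) combination
        for lab, t in zip(labels, terms):
            print(f"{lab:14s} contrast term {t:.2e}")
        print(f"total contrast  {terms.sum():.2e}")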

  13. High Heating Rates Affect Greatly the Inactivation Rate of Escherichia coli

    PubMed Central

    Huertas, Juan-Pablo; Aznar, Arantxa; Esnoz, Arturo; Fernández, Pablo S.; Iguaz, Asunción; Periago, Paula M.; Palop, Alfredo

    2016-01-01

    Heat resistance of microorganisms can be affected by different influencing factors. Although the effect of heating rate has scarcely been explored by the scientific community, recent research has revealed its important effect on the thermal resistance of different species of vegetative bacteria. Heating rates described in the literature typically range from 1 to 20°C/min, but the impact of much higher heating rates is unclear. The aim of this research was to explore the effect of different heating rates, such as those currently achieved in the heat exchangers used in the food industry, on the heat resistance of Escherichia coli. A pilot-plant tubular heat exchanger and a Mastia thermoresistometer were used for this purpose. Results showed that fast heating rates had a profound impact on the thermal resistance of E. coli. Heating rates between 20 and 50°C/min were achieved in the heat exchanger, much slower than the roughly 20°C/s achieved in the thermoresistometer. In all cases, these high heating rates led to higher inactivation than expected: in the heat exchanger, for all the experiments performed, when the observed inactivation had reached about seven log cycles, the predictions estimated about 1 log cycle of inactivation; in the thermoresistometer these differences between observed and predicted values were more than 10-fold, from 4.07 log cycles observed to 0.34 predicted at a flow rate of 70 mL/min and a maximum heating rate of 14.7°C/s. A quantification of the impact of the heating rate on the level of inactivation achieved was established. These results point out the important effect that the heating rate has on the thermal resistance of E. coli, with high heating rates resulting in additional sensitization to heat, and therefore an effective food safety strategy in terms of food processing. PMID:27563300
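
    The "expected" inactivation values that the fast ramps outperformed are typically computed from a log-linear (Bigelow-type) model integrated over the temperature ramp; a sketch with illustrative D and z values (not the paper's parameters):

        import numpy as np

        D_ref, T_ref, z = 0.5, 58.0, 5.0     # D = 0.5 min at 58 C, z = 5 C (assumed)

        def predicted_log_reduction(T0, T_end, ramp_C_per_min, dt=1e-3):
            """Predicted log10 cycles of inactivation during a linear heating ramp."""
            t = np.arange(0.0, (T_end - T0) / ramp_C_per_min, dt)   # time, min
            T = T0 + ramp_C_per_min * t
            D = D_ref * 10 ** ((T_ref - T) / z)  # Bigelow model: D shrinks as T rises
            return np.trapz(1.0 / D, t)          # integrate the momentary kill rate

        for rate in (1, 20, 50, 14.7 * 60):      # C/min; the last is 14.7 C/s
            print(f"{rate:7.1f} C/min -> {predicted_log_reduction(20, 65, rate):6.2f} log10")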

  15. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    NASA Astrophysics Data System (ADS)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on space diversity reception, a binary phase-shift keying (BPSK) modulated free space optical (FSO) system over Málaga (M) fading channels is investigated in detail. For independently and identically distributed and independently and non-identically distributed dual branches, analytical average bit error rate (ABER) expressions in terms of the Fox H-function are derived for the maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.
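
    The combining logic itself is easy to check by Monte Carlo. In this sketch, lognormal fading stands in for the Málaga (M) model purely to keep the code short (all parameters invented), so the numbers are not comparable to the paper's, but MRC's advantage over EGC is visible:

        import numpy as np
        rng = np.random.default_rng(1)

        N, sigma_x, snr_db = 200_000, 0.3, 14.0
        snr = 10 ** (snr_db / 10)
        # lognormal channel gains with unit mean, two independent branches
        h = np.exp(2 * sigma_x * rng.standard_normal((N, 2)) - 2 * sigma_x**2)

        bits = rng.integers(0, 2, N) * 2 - 1                 # +/-1 BPSK symbols
        noise = rng.standard_normal((N, 2)) / np.sqrt(snr)
        r = h * bits[:, None] + noise                        # per-branch received signal

        mrc = np.sum(h * r, axis=1)                          # weight by channel gain
        egc = np.sum(r, axis=1)                              # equal-gain co-phased sum
        for name, dec in (("MRC", mrc), ("EGC", egc)):
            print(f"{name}: ABER ~ {np.mean(np.sign(dec) != bits):.2e}")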

  16. High strain rate loading of polymeric foams and solid plastics

    NASA Astrophysics Data System (ADS)

    Dick, Richard D.; Chang, Peter C.; Fourney, William L.

    2000-04-01

    The split-Hopkinson pressure bar (SHPB) provided a technique to determine the high strain rate response of low-density foams and of solid ABS and polypropylene plastics. These materials are used in the interior safety panels of automobiles and in crash test dummies. Because the foams have a very low impedance, polycarbonate bars were used to acquire the strain rate data in the 100 to 1600 1/s range. An aluminum SHPB setup was used to obtain the solid plastics data, which covered strain rates of 1000 to 4000 1/s. The curves of peak strain rate versus peak stress for the foams over the test range studied indicate only a slight strain rate dependence. Peak strain rate versus peak stress curves for polypropylene show a strain rate dependence up to about 1500 1/s; at that rate the solid polypropylene indicates no strain rate dependence. The ABS plastics are strain rate dependent up to 3500 1/s and then are independent at larger strain rates.

  17. Phase errors in high line density CGH used for aspheric testing: beyond scalar approximation.

    PubMed

    Peterhänsel, S; Pruss, C; Osten, W

    2013-05-20

    One common way to measure asphere and freeform surfaces is the interferometric Null test, where a computer generated hologram (CGH) is placed in the object path of the interferometer. If undetected phase errors are present in the CGH, the measurement will show systematic errors. Therefore the absolute phase of this element has to be known. This phase is often calculated using scalar diffraction theory. In this paper we discuss the limitations of this theory for the prediction of the absolute phase generated by different implementations of CGH. Furthermore, for regions where scalar approximation is no longer valid, rigorous simulations are performed to identify phase sensitive structure parameters and evaluate fabrication tolerances for typical gratings. PMID:23736387

  18. Performance of high flow rate samplers for respirable particle collection.

    PubMed

    Lee, Taekhee; Kim, Seung Won; Chisholm, William P; Slaven, James; Harper, Martin

    2010-08-01

    The American Conference of Governmental Industrial Hygienists (ACGIH) lowered the threshold limit value (TLV) for respirable crystalline silica (RCS) exposure from 0.05 to 0.025 mg m⁻³ in 2006. For a working environment with an airborne dust concentration near this lowered TLV, the sample collected with current standard respirable aerosol samplers might not provide enough RCS for quantitative analysis. Adopting high flow rate sampling devices for respirable dust containing silica may provide a sufficient amount of RCS to be above the limit of quantification even for samples collected for less than a full shift. The performances of three high flow rate respirable samplers (CIP10-R, GK2.69, and FSP10) have been evaluated in this study. Eleven different sizes of monodisperse aerosols of ammonium fluorescein were generated with a vibrating orifice aerosol generator in a calm air chamber in order to determine the sampling efficiency of each sampler. Aluminum oxide particles generated by a fluidized bed aerosol generator were used to test (i) the uniformity of a modified calm air chamber, (ii) the effect of loading on the sampling efficiency, and (iii) the performance of dust collection compared to lower flow rate cyclones in common use in the USA (10-mm nylon and Higgins-Dewell cyclones). The coefficient of variation for eight simultaneous samples in the modified calm air chamber ranged from 1.9 to 6.1% for triplicate measures of three different aerosols. The 50% cutoff sizes (d50) of the high flow rate samplers operated at the flow rates recommended by the manufacturers were determined as 4.7, 4.1, and 4.8 μm for the CIP10-R, GK2.69, and FSP10, respectively. The mass concentration ratio of the high flow rate samplers to the low flow rate cyclones decreased with decreasing mass median aerodynamic diameter (MMAD), and the high flow rate samplers collected 2-11 times more dust than the low flow rate samplers based on gravimetric analysis. Dust loading inside the

  20. A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.; Shaklan, Stuart B.

    2009-01-01

    This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.

  1. Slow rate of molecular evolution in high-elevation hummingbirds.

    PubMed

    Bleiweiss, R

    1998-01-20

    Estimates of relative rates of molecular evolution from a DNA-hybridization phylogeny for 26 hummingbird species provide evidence for a negative association between elevation and rate of single-copy genome evolution. This effect of elevation on rate remains significant even after taking into account a significant negative association between body mass and molecular rate. Population-level processes do not appear to account for these patterns because (i) all hummingbirds breed within their first year and (ii) the more extensive subdivision and speciation of bird populations living at high elevations predicts a positive association between elevation and rate. The negative association between body mass and molecular rate in other organisms has been attributed to higher mutation rates in forms with higher oxidative metabolism. As ambient oxygen tensions and temperature decrease with elevation, the slow rate of molecular evolution in high-elevation hummingbirds also may have a metabolic basis. A slower rate of single-copy DNA change at higher elevations suggests that the dynamics of molecular evolution cannot be separated from the environmental context. PMID:9435240

  2. Numerical Simulation of Coherent Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, Mark

    A major goal in quantum computation is the implementation of error correction to produce a logical qubit with an error rate lower than that of the underlying physical qubits. Recent experimental progress demonstrates that physical qubits can achieve error rates sufficiently low for error correction, particularly for codes with relatively high thresholds such as the surface code and color code. Motivated by the experimental capabilities of neutral atom systems, we use numerical simulation to investigate whether coherent error correction can be effectively used with the 7-qubit color code. The results indicate that coherent error correction does not work at the 10-qubit level in neutral atom array quantum computers. By adding more qubits there is a possibility of making the encoding circuits fault-tolerant, which could improve performance.

  3. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  4. Solidification at the High and Low Rate Extreme

    SciTech Connect

    Halim Meco

    2004-12-19

    The microstructures formed upon solidification are strongly influenced by the growth rates imposed on an alloy system. Depending on the characteristics of the solidification process, a wide range of growth rates is accessible. The prevailing solidification mechanisms, and thus the final microstructure of the alloy, are governed by these imposed growth rates. At the high-rate extreme, for instance, one has access to novel microstructures that are unattainable at low growth rates, while low growth rates can be utilized to study the intrinsic growth behavior of a given phase growing from the melt. Although the length scales associated with certain processes, such as capillarity and the diffusion of heat and solute, differ between the low and high rate extremes, the phenomena that govern the selection of a certain microstructural length scale or growth mode are the same. Consequently, one can analyze solidification phenomena at both high and low rates using the same governing principles. In this study, we examined microstructural control at both extremes. For the high-rate extreme, the formation of crystalline products and the factors that control the microstructure during rapid solidification by free-jet melt spinning were examined in the Fe-Si-B system. Particular attention was given to the behavior of the melt pool at different quench-wheel speeds. Since the solidification process takes place within the melt pool that forms on the rotating quench wheel, we examined the influence of melt-pool dynamics on the nucleation and growth of crystalline solidification products and on glass formation. High-speed imaging of the melt pool, analysis of ribbon microstructure, and measurement of ribbon geometry and surface character all indicate upper and lower limits for melt-spinning rates for which nucleation can be avoided and fully amorphous ribbons can be achieved. Comparison of the relevant time scales reveals that surface-controlled melt

  5. The pathophysiology of medication errors: how and where they arise

    PubMed Central

    McDowell, Sarah E; Ferner, Harriet S; Ferner, Robin E

    2009-01-01

    Errors arise when an action is intended but not performed; errors that arise from poor planning or inadequate knowledge are characterized as mistakes; those that arise from imperfect execution of well-formulated plans are called slips when an erroneous act is committed and lapses when a correct act is omitted. Some tasks are intrinsically prone to error. Examples are tasks that are unfamiliar to the operator or performed under pressure. Tasks that require the calculation of a dosage or dilution are especially susceptible to error. The tasks of prescribing, preparation, and administration of medicines are complex, and are carried out within a complex system; errors can occur at each of many steps and the error rate for the overall process is therefore high. The error rate increases when health-care professionals are inexperienced, inattentive, rushed, distracted, fatigued, or depressed; orthopaedic surgeons and nurses may be more likely than other health-care professionals to make medication errors. Medication error rates in hospital are higher in paediatric departments and intensive care units than elsewhere. Rates of medication errors may be higher in very young or very old patients. Intravenous antibiotics are the drugs most commonly involved in medication errors in hospital; antiplatelet agents, diuretics, and non-steroidal anti-inflammatory drugs are most likely to account for ‘preventable admissions’. Computers effectively reduce the rates of easily counted errors. It is not clear whether they can save lives lost through rare but dangerous errors in the medication process. PMID:19594527

  6. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Astrophysics Data System (ADS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-05-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
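
    The benefit of passing erasure flags from the inner decoder to the outer decoder follows from the standard errors-and-erasures bound for Reed-Solomon codes; a minimal check, using RS(255,223) as an example parameterization (the abstract does not state the code parameters):

        def rs_decodable(n, k, n_errors, n_erasures):
            """Errors-and-erasures bound for an RS(n, k) code:
            correctable iff 2*errors + erasures <= n - k."""
            return 2 * n_errors + n_erasures <= n - k

        # RS(255, 223): 16 errors with no side information, but up to 32
        # erased symbols when the inner decoder flags unreliable positions.
        print(rs_decodable(255, 223, 16, 0))   # True  (errors only)
        print(rs_decodable(255, 223, 0, 32))   # True  (erasures only)
        print(rs_decodable(255, 223, 10, 12))  # True  (mixed)
        print(rs_decodable(255, 223, 17, 0))   # False (beyond the errors-only bound)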

  7. Authoritative School Climate and High School Dropout Rates

    ERIC Educational Resources Information Center

    Jia, Yuane; Konold, Timothy R.; Cornell, Dewey

    2016-01-01

    This study tested the association between school-wide measures of an authoritative school climate and high school dropout rates in a statewide sample of 315 high schools. Regression models at the school level of analysis used teacher and student measures of disciplinary structure, student support, and academic expectations to predict overall high…

  8. High strain-rate plastic flow in Fe and Al

    NASA Astrophysics Data System (ADS)

    Smith, Raymond; Eggert, Jon; Rudd, Robert; Bolme, Cynthia; Collins, Gilbert

    2011-06-01

    Understanding the nature and time-dependence of material deformation at high strain rates is an important goal in condensed matter physics. Under dynamic loading, the rate of plastic strain is determined by the flow of dislocations through the crystal lattice and is a complex function of time, distance, sample purity, temperature, internal stresses, microstructure, and strain rate. Under shock compression, time-dependent plasticity is typically inferred by fitting elastic precursor stresses as a function of propagation distance with a phenomenologically based dislocation kinetics model. We employ a laser-driven ramp-wave loading technique to compress 6-70 micron thick samples of bcc-Fe and fcc-Al over a strain rate range of 10⁶ to 10⁸ 1/s. Our data show that, for fixed sample thickness, stresses associated with the onset of plasticity are highly dependent on the strain rate of compression and do not readily fit into the elastic stress-distance evolution descriptive of instantaneous shock loading. We find that the elastic stress at the onset of plasticity is well correlated with the strain rate at the onset of plastic flow for both shock- and ramp-wave experiments. Our data, combined with data from other dynamic compression platforms, reveal a sharp increase in the peak elastic stress at high strain rates, consistent with a transition to dislocation flow dominated by phonon drag.

  9. High-repetition-rate short-pulse gas discharge.

    PubMed

    Tulip, J; Seguin, H; Mace, P N

    1979-09-01

    A high-average-power short-pulse gas discharge is described. This consists of a volume-preionized transverse discharge of the type used in gas lasers driven by a Blumlein energy storage circuit. The Blumlein circuit is fabricated from coaxial cable, is pulse-charged from a high-repetition-rate Marx-bank generator, and is switched by a high-repetition-rate segmented rail gap. The operation of this discharge under conditions typical of rare-gas halide lasers is described. A maximum of 900 pps was obtained, giving a power flow into the discharge of 30 kW. PMID:18699678

  10. Blast furnace coal injection system design for high rates

    SciTech Connect

    Snowden, B.

    1994-12-31

    Coal injection into blast furnaces is now well established as a basic technology. However, high rates of coal injection between 300 and 500 lb/thm (160 to 250 kg/thm) are a rarity. Special consideration must be given to the overall concept regarding strategic coal storage, expected equipment reliability, and back-up available to prevent furnace problems, should any of the coal feeding systems fail. British Steel and Simon Macawber now have considerable operational experience at high rates for sustained periods. The paper will discuss the points to be considered and describe the ATSI-Simon Macawber approach to providing a high level of confidence in the coal injection system.

  11. Evolution of High Tooth Replacement Rates in Sauropod Dinosaurs

    PubMed Central

    Smith, Kathlyn M.; Fisher, Daniel C.; Wilson, Jeffrey A.

    2013-01-01

    Background: Tooth replacement rate can be calculated in extinct animals by counting incremental lines of deposition in tooth dentin. Calculating this rate in several taxa allows for the study of the evolution of tooth replacement rate. Sauropod dinosaurs, the largest terrestrial animals that ever evolved, exhibited a diversity of tooth sizes and shapes, but little is known about their tooth replacement rates.

    Methodology/Principal Findings: We present tooth replacement rate, formation time, crown volume, total dentition volume, and enamel thickness for two coexisting but distantly related and morphologically disparate sauropod dinosaurs, Camarasaurus and Diplodocus. Individual tooth formation time was determined by counting daily incremental lines in dentin. Tooth replacement rate is calculated as the difference between the number of days recorded in successive replacement teeth. Each tooth family in Camarasaurus has a maximum of three replacement teeth, whereas each Diplodocus tooth family has up to five. Tooth formation times are about 1.7 times longer in Camarasaurus than in Diplodocus (315 vs. 185 days). Average tooth replacement rate in Camarasaurus is about one tooth every 62 days versus about one tooth every 35 days in Diplodocus. Despite slower tooth replacement rates in Camarasaurus, the volumetric rate of Camarasaurus tooth replacement is 10 times faster than in Diplodocus because of its substantially greater tooth volumes. A novel method to estimate replacement rate was developed and applied to several other sauropodomorphs that we were not able to thin section.

    Conclusions/Significance: Differences in tooth replacement rate among sauropodomorphs likely reflect disparate feeding strategies and/or food choices, which would have facilitated the coexistence of these gigantic herbivores in one ecosystem. Early neosauropods are characterized by high tooth replacement rates (despite their large tooth size), and derived titanosaurs and diplodocoids independently
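
    The replacement-rate calculation described above is simple enough to state in code; a sketch with invented incremental-line counts anchored to the paper's headline numbers (315/185-day formation times, ~62- and ~35-day rates):

        def replacement_rates(line_counts):
            """Days between successive teeth in a tooth family: the difference
            in daily incremental line counts of adjacent replacement teeth."""
            return [older - younger for older, younger in zip(line_counts, line_counts[1:])]

        camarasaurus = [315, 253, 191]          # days recorded per tooth (illustrative)
        diplodocus   = [185, 150, 115, 80, 45]
        print(replacement_rates(camarasaurus))  # ~62-day spacing
        print(replacement_rates(diplodocus))    # ~35-day spacing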

  12. High-performance micromachined vibratory rate- and rate-integrating gyroscopes

    NASA Astrophysics Data System (ADS)

    Cho, Jae Yoong

    The performance of vibratory micromachined gyroscopes has been continuously improving for the past two decades. However, to further improve performance of MEMS gyroscopes in harsh environments, it is necessary to reduce their sensitivity to environmental parameters, including vibration and temperature change. In addition, conventional rate-mode MEMS gyroscopes have limited performance due to the tradeoff between resolution, bandwidth, and full-scale range. In this research, we aim to reduce vibration sensitivity by developing gyros that operate in the balanced mode. The balanced mode creates zero net momentum and reduces energy loss through an anchor. The gyro can differentially cancel measurement errors from external vibration along both sensor axes. The vibration sensitivity of the balanced-mode gyroscope, including structural imbalance from microfabrication, decreases as the absolute difference between the in-phase parasitic mode and operating mode frequencies increases. The parasitic sensing mode frequency is designed to be larger than the operating mode frequency to achieve both improved vibration insensitivity and shock resistance. A single anchor is used in order to minimize thermoresidual stress change. We developed two gyroscopes based on these design principles. The Balanced Oscillating Gyro (BOG) is a quad-mass tuning-fork rate gyroscope. The relationship between gyro design and modal characteristics is studied extensively using the finite element method (FEM). The gyro is fabricated using the planar Si-on-glass (SOG) process with a device thickness of 100 μm. The BOG is evaluated using first-generation analog interface circuitry. Under a frequency mismatch of 5 Hz between the drive and sense modes, the angle random walk (ARW) is measured to be 0.44°/sec/√Hz. The performance is limited by quadrature error and low-frequency noise in the circuit. The Cylindrical Rate-Integrating Gyroscope (CING) operates in whole-angle mode. The gyro is completely

  13. A method for the prevention of high-risk medication errors

    NASA Astrophysics Data System (ADS)

    Allgeyer, Dean

    2007-02-01

    A device and process for preventing medical errors due to the improper administration of an intravenously delivered medication includes the spectroscopic analysis of intravenous fluid components. An emission source and detector are placed adjacent to the intravenous tubing of an administration set to generate signals for spectroscopic analysis. The signals are processed to identify the medication and, in certain embodiments of the invention, can determine the medication's concentration. In a preferred embodiment, the emission source, detector, and hardware and software for the spectroscopic analysis are placed in an infusion pump.

  14. High Rate and Stable Cycling of Lithium Metal Anode

    SciTech Connect

    Qian, Jiangfeng; Henderson, Wesley A.; Xu, Wu; Bhattacharya, Priyanka; Engelhard, Mark H.; Borodin, Oleg; Zhang, Jiguang

    2015-02-20

    Lithium (Li) metal is an ideal anode material for rechargeable batteries. However, dendritic Li growth and limited Coulombic efficiency (CE) during repeated Li deposition/stripping processes have prevented the application of this anode in rechargeable Li metal batteries, especially for use at high current densities. Herein, we report that the use of highly concentrated electrolytes composed of ether solvents and the lithium bis(fluorosulfonyl)imide (LiFSI) salt enables the high rate cycling of a Li metal anode at high CE (up to 99.1%) without dendrite growth. With 4 M LiFSI in 1,2-dimethoxyethane (DME) as the electrolyte, a Li|Li cell can be cycled at high rates (10 mA cm⁻²) for more than 6000 cycles with no increase in the cell impedance, and a Cu|Li cell can be cycled at 4 mA cm⁻² for more than 1000 cycles with an average CE of 98.4%. These excellent high rate performances can be attributed to the increased solvent coordination and increased availability of Li⁺ concentration in the electrolyte. Further development of this electrolyte may lead to practical applications for Li metal anodes in rechargeable batteries. The fundamental mechanisms behind the high rate ion exchange and stability of the electrolytes also shed light on the stability of other electrochemical systems.

  17. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
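
    A sketch of the modular idea in miniature (an illustration of modular low-order-bit embedding generally, not the patented algorithm; the key-driven permutation step is omitted and all values are invented): instead of overwriting k low-order bits, move each host value to the nearest value congruent to the payload modulo 2^k, reducing the worst-case distortion relative to bit replacement.

        def embed_modular(value, payload, k=2, vmax=255):
            """Embed k payload bits by moving to the nearest value that is
            congruent to the payload mod 2**k (max error 2**(k-1), vs
            2**k - 1 for plain low-order-bit replacement)."""
            m = 1 << k
            candidates = [value - (value % m) + payload + off for off in (-m, 0, m)]
            in_range = [c for c in candidates if 0 <= c <= vmax]
            return min(in_range, key=lambda c: abs(c - value))

        def extract_modular(value, k=2):
            return value % (1 << k)

        stego = embed_modular(200, payload=3)   # -> 199: error of 1 instead of up to 3
        assert extract_modular(stego) == 3
        print(stego, extract_modular(stego))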

  18. Lithographically encoded polymer microtaggant using high-capacity and error-correctable QR code for anti-counterfeiting of drugs.

    PubMed

    Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook

    2012-11-20

    A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. PMID:22930454
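
    For readers who want to reproduce the encoding side, the widely used Python "qrcode" package (an assumption here, not the paper's toolchain) exposes the error-correction levels directly; level H tolerates roughly 30% codeword damage, which is what makes partially obscured taggants readable:

        import qrcode

        qr = qrcode.QRCode(
            version=None,   # let the library pick the smallest size that fits
            error_correction=qrcode.constants.ERROR_CORRECT_H,
            box_size=4,
            border=2,
        )
        qr.add_data("DRUG-LOT-2012-11-20-0001")   # invented payload string
        qr.make(fit=True)
        img = qr.make_image()                      # PIL image of the code
        img.save("taggant_qr.png")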

  19. High Strain Rate Behavior of Polymer Matrix Composites Analyzed

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Roberts, Gary D.

    2001-01-01

    Procedures for modeling the high-speed impact of composite materials are needed for designing reliable composite engine cases that are lighter than the metal cases in current use. The types of polymer matrix composites that are likely to be used in such an application have a deformation response that is nonlinear and that varies with strain rate. To characterize and validate material models that could be used in the design of impact-resistant engine cases, researchers must obtain material data over a wide variety of strain rates. An experimental program has been carried out through a university grant with the Ohio State University to obtain deformation data for a representative polymer matrix composite for strain rates ranging from quasi-static to high rates of several hundred per second. This information has been used to characterize and validate a constitutive model that was developed at the NASA Glenn Research Center.

  20. Study of High Strain Rate Response of Composites

    NASA Technical Reports Server (NTRS)

    Gilat, Amos

    2003-01-01

    The objective of the research was to continue the experimental study of the effect of strain rate on the mechanical response (deformation and failure) of epoxy resins and carbon fiber/epoxy matrix composites, and to initiate a study of the effects of temperature by developing an elevated-temperature test. The experimental data provide the information needed by NASA scientists for the development of nonlinear, rate-dependent deformation and strength models for composites that can subsequently be used in design. This year's effort was directed at testing the epoxy resins. Three types of epoxy resin were tested in tension and shear at strain rates ranging from 5 × 10⁻⁵ to 1000 per second. Pilot shear experiments were done at a high strain rate and an elevated temperature of 80 °C. The results show that strain rate, loading mode, and temperature all significantly affect the response of epoxy.

  1. Voigt profile introduces optical depth dependent systematic errors - Detected in high resolution laboratory spectra of water

    NASA Astrophysics Data System (ADS)

    Birk, Manfred; Wagner, Georg

    2016-02-01

    The Voigt profile commonly used in radiative transfer modeling of Earth's and planets' atmospheres for remote sensing and climate modeling produces systematic errors so far not accounted for. Saturated lines are systematically too narrow when calculated from pressure broadening parameters based on the analysis of laboratory data with the Voigt profile. This is caused by line-narrowing effects, which lead to systematically too-small fitted broadening parameters when the Voigt profile is applied. These effective values are still valid for modeling non-saturated lines with sufficient accuracy. Saturated lines, dominated by the wings of the line profile, are sufficiently accurately modeled with a Voigt profile with the correct broadening parameters and are thus systematically too narrow when calculated with the effective values. The systematic error was quantified by mid-infrared laboratory spectroscopy of the water ν2 fundamental. Correct Voigt-profile-based pressure broadening parameters for saturated lines were 3-4% larger than the effective ones in the spectroscopic database. Impacts on remote sensing and climate modeling are expected. Combining saturated and non-saturated lines in the spectroscopic analysis will quantify line narrowing with unprecedented precision.
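
    The Voigt profile itself is conveniently evaluated through the Faddeeva function; the sketch below (with invented line parameters) shows the direction of the reported bias by computing the same saturated line with a broadening parameter 3.5% too small, the middle of the paper's 3-4% range:

        import numpy as np
        from scipy.special import wofz

        def voigt(x, sigma, gamma):
            """Voigt profile: Gaussian (Doppler) width sigma convolved with a
            Lorentzian (pressure) half-width gamma, via the Faddeeva function."""
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

        nu = np.linspace(-1.0, 1.0, 4001)     # wavenumber offset, cm^-1 (assumed)
        sigma, gamma_true, S = 0.02, 0.08, 5.0  # widths and line intensity (assumed)
        for gamma in (gamma_true, 0.965 * gamma_true):
            tau = S * voigt(nu, sigma, gamma)           # optical depth, fixed intensity
            dark = nu[np.exp(-tau) < 0.5]               # saturated (opaque) core
            print(f"gamma={gamma:.4f} cm^-1 -> saturated width {dark[-1]-dark[0]:.3f} cm^-1")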

  2. THE AMERICAN HIGH SCHOOL GRADUATION RATE: TRENDS AND LEVELS*

    PubMed Central

    Heckman, James J.; LaFontaine, Paul A.

    2009-01-01

    This paper applies a unified methodology to multiple data sets to estimate both the levels and trends in U.S. high school graduation rates. We establish that (a) the true rate is substantially lower than widely used measures; (b) it peaked in the early 1970s; (c) majority/minority differentials are substantial and have not converged for 35 years; (d) lower post-1970 rates are not solely due to increasing immigrant and minority populations; (e) our findings explain part of the slowdown in college attendance and rising college wage premiums; and (f) widening graduation differentials by gender help explain increasing male-female college attendance gaps. PMID:20625528

  3. High-Strain-Rate Compression Testing of Ice

    NASA Technical Reports Server (NTRS)

    Shazly, Mostafa; Prakash, Vikas; Lerch, Bradley A.

    2006-01-01

    In the present study a modified split Hopkinson pressure bar (SHPB) was employed to study the effect of strain rate on the dynamic material response of ice. Disk-shaped ice specimens with flat, parallel end faces were either provided by Dartmouth College (Hanover, NH) or grown at Case Western Reserve University (Cleveland, OH). The SHPB was adapted to perform tests at high strain rates in the range 60 to 1400 1/s at test temperatures of −10 and −30 °C. Experimental results showed that the strength of ice increases with increasing strain rate, and this occurs over a change in strain rate of five orders of magnitude. Under these strain rate conditions the ice microstructure has a slight influence on the strength, but it is much less than the influence it has under quasi-static loading conditions. End constraint and frictional effects do not influence the compression tests as they do at slower strain rates, and therefore the diameter/thickness ratio of the samples is not as critical. The strength of ice at high strain rates was found to increase with decreasing test temperature. Ice has been identified as a potential source of debris that may impact the shuttle; data presented in this report can be used to validate and/or develop material models for ice impact analyses for shuttle Return to Flight efforts.
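
    The quantities quoted above come from standard one-dimensional SHPB reduction relations; a sketch with placeholder gauge signals and assumed bar properties (not the paper's setup):

        import numpy as np

        c0, E = 5000.0, 200e9                # bar wave speed (m/s), modulus (Pa), assumed
        Ab, As, Ls = 1.27e-4, 1.0e-4, 5e-3   # bar/specimen areas (m^2), thickness (m)

        t = np.linspace(0.0, 100e-6, 1000)             # 100 us test window
        eps_r = -4e-4 * np.sin(np.pi * t / t[-1])      # reflected strain pulse (placeholder)
        eps_t =  1e-4 * np.sin(np.pi * t / t[-1])      # transmitted strain pulse (placeholder)

        strain_rate = -2.0 * c0 * eps_r / Ls           # specimen strain rate, 1/s
        strain = np.cumsum(strain_rate) * (t[1] - t[0])
        stress = E * (Ab / As) * eps_t                 # one-wave specimen stress, Pa
        print(f"peak strain rate {strain_rate.max():.0f} 1/s, "
              f"peak stress {stress.max()/1e6:.1f} MPa")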

  4. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding.

    SciTech Connect

    Loughry, Thomas A.

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
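
    For reference, the core of Rice decoding is a unary-coded quotient followed by k remainder bits; a minimal pure-Python decoder of the basic scheme underlying CCSDS-style Rice compression (the adaptive option selection and preprocessing that the flight standard adds are omitted):

        def rice_decode(bits, k):
            """Decode a list of 0/1 bits into Rice-coded non-negative integers
            with parameter k: unary quotient, '0' terminator, k remainder bits."""
            values, i = [], 0
            while i < len(bits):
                q = 0
                while bits[i] == 1:          # unary-coded quotient
                    q += 1
                    i += 1
                i += 1                       # skip the terminating 0
                r = 0
                for _ in range(k):           # k-bit binary remainder
                    r = (r << 1) | bits[i]
                    i += 1
                values.append((q << k) | r)
            return values

        # 13 with k=2: 13 = 3*4 + 1 -> unary '111', terminator '0', remainder '01'
        print(rice_decode([1, 1, 1, 0, 0, 1], k=2))   # [13]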

  5. Study on ameliorating the FEC coding techniques in current high-rate optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jianguo; Ye, Wenwei; Jiang, Ze; Mao, Youju; Wang, Wei

    2007-01-01

    In this paper, three improved coding schemes for Super-FEC (Forward Error Correction) concatenated codes (namely, the inner-outer concatenated code, the parallel concatenated code, and the successive concatenated code with interleaving) are proposed, after analyzing the development trend of high-rate optical transmission systems and the shortcomings of the FEC codes used in current systems. A system simulation of the inner-outer concatenated codes is implemented, and encoding/decoding schemes for the parallel concatenated codes are proposed. Furthermore, two successive concatenated codes with interleaving, the RS(255,239)+RS(255,239) code and the RS(255,239)+RS(255,223) code, are simulated; analysis of the simulation results shows that, compared with the classic RS(255,239) code and other codes, these two codes form a superior code type offering better error correction, moderate redundancy, and easy implementation. Their net coding gains (NCG) are respectively 1.5 dB and 2.5 dB higher than that of the classic RS(255,239) code at a BER (Bit Error Rate) of 10^-12. Finally, based on ITU-T G.709, a frame format for applying the new concatenated code in high-rate optical transmission systems is proposed and designed; this lays a firm foundation for future hardware design and points a direction for its practical application.
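
    The net coding gain figures quoted above follow standard bookkeeping: the Q-factor gain at a reference output BER minus the rate penalty 10*log10(R). A small sketch (the pre-FEC BER used below is illustrative, not the paper's simulation data):

      import numpy as np
      from scipy.special import erfcinv

      def q_db(ber):
          # Q-factor in dB for the Gaussian-noise relation BER = 0.5*erfc(Q/sqrt(2))
          return 20 * np.log10(np.sqrt(2) * erfcinv(2 * ber))

      def net_coding_gain(ber_in, ber_ref, code_rate):
          return q_db(ber_ref) - q_db(ber_in) + 10 * np.log10(code_rate)

      # e.g. a rate-239/255 code that corrects a pre-FEC BER of 1e-3 down to 1e-12
      print(f"NCG = {net_coding_gain(1e-3, 1e-12, 239 / 255):.2f} dB")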

  6. Resonator fiber optic gyro with high backscatter-error suppression using two independent phase-locked lasers

    NASA Astrophysics Data System (ADS)

    Wu, Jiangfeng; Smiciklas, Marc; Strandjord, Lee K.; Qiu, Tiequn; Ho, Waymon; Sanders, Glen A.

    2015-09-01

    A resonator fiber optic gyro was constructed using separate lasers for the counter-rotating waves to overcome interference between optical backscatter and signal light, which causes dead-zone behavior and scale factor nonlinearity. This approach enabled a 2 MHz frequency separation between the waves in the resonator, eliminating the backscatter-induced error. The two lasers were phase-locked to prevent increased gyro noise due to laser frequency noise. Dead-band-free operation near zero rate, scale factor linearity of 25 ppm, and scale factor stability of 11 ppm were demonstrated, the closest results to navigation-grade performance reported to date. The approach is also free of impractical frequency shifter technology.

  7. High strain rate compression testing of glass fibre reinforced polypropylene

    NASA Astrophysics Data System (ADS)

    Govender, R. A.; Langdon, G. S.; Cloete, T. J.; Nurick, G. N.

    2012-08-01

    This paper details an investigation of the high strain rate compression testing of glass fibre reinforced polypropylene (GFPP) with the Split Hopkinson Pressure Bar (SHPB) in the through-thickness and in-plane directions. GFPP posed challenges to SHPB testing because it fails at relatively high stresses while having relatively low moduli and hence low mechanical impedance. The modifications to specimen geometry and incident pulse shaping that were required to obtain valid test results, in which specimen equilibrium was achieved, are presented. In addition to conventional SHPB tests to failure, SHPB experiments were designed to achieve specimen equilibration at small strains, which permitted the capture of high strain rate elastic modulus data. The strain rate dependency of GFPP's failure strength in the in-plane and through-thickness directions is modelled using a logarithmic law.

  8. Online aging study of a high rate MRPC

    NASA Astrophysics Data System (ADS)

    Jie, Wang; Yi, Wang; Q. Feng, S.; Bo, Xie; Pengfei, Lv; Fuyue, Wang; Baohong, Guo; Dong, Han; Yuanjing, Li

    2016-05-01

    With the constant increase of accelerator luminosity, the rate capability of multigap resistive plate chamber (MRPC) detectors has become very important, and the aging characteristics of the detector have to be studied meticulously. An online aging test system has been set up in our lab; in this paper the setup of the system is described and the performance stability of a high-rate MRPC is studied over a long running time in a high-luminosity environment. The high-rate MRPC was irradiated by X-rays for 36 days and the accumulated charge density reached 0.1 C/cm². No obvious performance degradation was observed for the detector. Supported by the National Natural Science Foundation of China (11420101004, 11461141011, 11275108) and the Ministry of Science and Technology (2015CB856905)

  9. Semi-solid electrodes having high rate capability

    DOEpatents

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2016-07-05

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode, a semi-solid cathode that includes a suspension of an active material and a conductive material in a liquid electrolyte, and an ion permeable membrane disposed between the anode and the cathode. The semi-solid cathode has a thickness in the range of about 250 µm-2,500 µm, and the electrochemical cell has an area specific capacity of at least 5 mAh/cm² at a C-rate of C/2.

  10. High strain rate superplasticity in metals and composites

    SciTech Connect

    Nieh, T.G.; Wadsworth, J.; Higashi, K.

    1993-07-01

    Superplastic behavior at very high strain rates (at or above 1 s⁻¹) in metallic-based materials is an area of increasing interest. The phenomenon has been observed quite extensively in metal alloys, metal-matrix composites (MMC), and mechanically-alloyed (MA) materials. In the present paper, experimental results on high strain rate behavior in 2124 Al-based materials, including Zr-modified 2124, SiC-reinforced 2124, MA 2124, and MA 2124 MMC, are presented. Except for the required fine grain size, details of the structural requirements for this phenomenon are not yet understood. Despite this, a systematic approach to producing high strain rate superplasticity (HSRS) in metallic materials is given in this paper. Evidence indicates that the presence of a liquid phase, or a low-melting-point region, at boundary interfaces is responsible for HSRS.

  11. Semi-solid electrodes having high rate capability

    SciTech Connect

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2015-11-10

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode, a semi-solid cathode that includes a suspension of an active material and a conductive material in a liquid electrolyte, and an ion permeable membrane disposed between the anode and the cathode. The semi-solid cathode has a thickness in the range of about 250 µm-2,500 µm, and the electrochemical cell has an area specific capacity of at least 5 mAh/cm² at a C-rate of C/2.

  12. Glow discharge deposition at high rates using disilane

    SciTech Connect

    Rajeswaran, G.; Corderman, R.R.; Kampas, F.J.; Vanier, P.E.

    1985-01-01

    The research program reported here makes use of the fact that amorphous silicon films can be grown faster from disilane in a glow discharge than from the traditional silane. The goal is to find a method to grow films at a high rate and with sufficiently high quality to be used in an efficient solar cell. It must also be demonstrated that the appropriate device structure can be successfully fabricated under conditions which give high deposition rates. High quality intrinsic films have been deposited at 20 A/s. Efficiencies of 5.6% on steel substrates and 5.3% on glass substrates were achieved using disilane i-layers deposited at 15 A/s in a basic structure, without wide-gap doped layers or light trapping. Wide-gap p-layers were deposited using disilane. Results were compared with those obtained at Vactronic using high-power discharges of silane-hydrogen mixtures.

  13. Flexible high-repetition-rate ultrafast fiber laser

    PubMed Central

    Mao, Dong; Liu, Xueming; Sun, Zhipei; Lu, Hua; Han, Dongdong; Wang, Guoxi; Wang, Fengqiu

    2013-01-01

    High-repetition-rate pulses have widespread applications in the fields of fiber communications, frequency comb, and optical sensing. Here, we have demonstrated high-repetition-rate ultrashort pulses in an all-fiber laser by exploiting an intracavity Mach-Zehnder interferometer (MZI) as a comb filter. The repetition rate of the laser can be tuned flexibly from about 7 to 1100 GHz by controlling the optical path difference between the two arms of the MZI. The pulse duration can be reduced continuously from about 10.1 to 0.55 ps with the spectral width tunable from about 0.35 to 5.7 nm by manipulating the intracavity polarization controller. Numerical simulations well confirm the experimental observations and show that filter-driven four-wave mixing effect, induced by the MZI, is the main mechanism that governs the formation of the high-repetition-rate pulses. This all-fiber-based laser is a simple and low-cost source for various applications where high-repetition-rate pulses are necessary. PMID:24226153
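
    The quoted tuning range is consistent with the standard comb-filter relation assumed in this back-of-envelope sketch: the free spectral range, and hence the pulse repetition rate, equals c divided by the optical path difference between the MZI arms (the path differences below are assumptions chosen to match the reported endpoints):

      c = 2.99792458e8  # speed of light (m/s)

      def rep_rate_GHz(dL_m):
          # repetition rate ~ filter FSR = c / dL for optical path difference dL
          return c / dL_m / 1e9

      for dL in (43e-3, 0.27e-3):  # ~4.3 cm and ~0.27 mm path differences, assumed
          print(f"dL = {dL * 1e3:6.2f} mm  ->  ~{rep_rate_GHz(dL):7.1f} GHz")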

  14. Low Primary Cesarean Rate and High VBAC Rate With Good Outcomes in an Amish Birthing Center

    PubMed Central

    Deline, James; Varnes-Epstein, Lisa; Dresang, Lee T.; Gideonsen, Mark; Lynch, Laura; Frey, John J.

    2012-01-01

    PURPOSE Recent national guidelines encourage a trial of labor after cesarean (TOLAC) as a means of increasing vaginal births after cesarean (VBACs) and decreasing the high US cesarean birth rate and its consequences (2010 National Institutes of Health Consensus Statement and American College of Obstetricians and Gynecologists revised guideline). A birthing center serving Amish women in Southwestern Wisconsin offered an opportunity to look at the effects of local culture and practices that support vaginal birth and TOLAC. This study describes childbirth and perinatal outcomes during a 17-year period in LaFarge, Wisconsin. METHODS We undertook a retrospective analysis of the records of all women admitted to the birth center in labor. Main outcome measures include rates of cesarean deliveries, TOLAC and VBAC deliveries, and perinatal outcomes for 927 deliveries between 1993 and 2010. RESULTS The cesarean rate was 4% (35 of 927), the TOLAC rate was 100%, and the VBAC rate was 95% (88 of 92). There were no cases of uterine rupture and no maternal deaths. The neonatal death rate of 5.4 per 1,000 was comparable to that of Wisconsin (4.6 per 1,000) and the United States (4.5 per 1,000). CONCLUSIONS Both the culture of the population served and a number of factors relating to the management of labor at the birthing center have affected the rates of cesarean delivery and TOLAC. The results of the LaFarge Amish study support a low-technology approach to delivery in which good outcomes are achieved with low cesarean and high VBAC rates. PMID:23149530

  15. High removal rate laser-based coating removal system

    DOEpatents

    Matthews, Dennis L.; Celliers, Peter M.; Hackel, Lloyd; Da Silva, Luiz B.; Dane, C. Brent; Mrowka, Stanley

    1999-11-16

    A compact laser system that removes surface coatings (such as paint, dirt, etc.) at a removal rate as high as 1000 ft²/hr or more without damaging the surface. A high repetition rate laser with multiple amplification passes propagating through at least one optical amplifier is used, along with a delivery system consisting of a telescoping and articulating tube which also contains an evacuation system for simultaneously sweeping up the debris produced in the process. The amplified beam can be converted to an output beam by passively switching the polarization of at least one amplified beam. The system also has a personal safety system which protects against accidental exposures.

  16. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...

  17. Sensitivity to Envelope Interaural Time Differences at High Modulation Rates

    PubMed Central

    Bleeck, Stefan; McAlpine, David

    2015-01-01

    Sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure of low-frequency tones and the modulated envelopes of high-frequency sounds are considered comparable, particularly for envelopes shaped to transmit similar fidelity of temporal information normally present for low-frequency sounds. Nevertheless, discrimination performance for envelope modulation rates above a few hundred Hertz is reported to be poor—to the point of discrimination thresholds being unattainable—compared with the much higher (>1,000 Hz) limit for low-frequency ITD sensitivity, suggesting the presence of a low-pass filter in the envelope domain. Further, performance for identical modulation rates appears to decline with increasing carrier frequency, supporting the view that the low-pass characteristics observed for envelope ITD processing is carrier-frequency dependent. Here, we assessed listeners’ sensitivity to ITDs conveyed in pure tones and in the modulated envelopes of high-frequency tones. ITD discrimination for the modulated high-frequency tones was measured as a function of both modulation rate and carrier frequency. Some well-trained listeners appear able to discriminate ITDs extremely well, even at modulation rates well beyond 500 Hz, for 4-kHz carriers. For one listener, thresholds were even obtained for a modulation rate of 800 Hz. The highest modulation rate for which thresholds could be obtained declined with increasing carrier frequency for all listeners. At 10 kHz, the highest modulation rate at which thresholds could be obtained was 600 Hz. The upper limit of sensitivity to ITDs conveyed in the envelope of high-frequency modulated sounds appears to be higher than previously considered. PMID:26721926

  18. Medication errors during hospital drug rounds.

    PubMed Central

    Ridge, K W; Jenkins, D B; Noyce, P R; Barber, N D

    1995-01-01

    Objective--To determine the nature and rate of drug administration errors in one National Health Service hospital. Design--Covert observational survey between January and April 1993 of drug rounds with intervention to stop drug administration errors reaching the patient. Setting--Two medical, two surgical, and two medicine for the elderly wards in a former district general hospital, now a NHS trust hospital. Subjects--37 nurses performing routine single nurse drug rounds. Main measures--Drug administration errors recorded by trained observers. Results--Seventy-four drug rounds were observed in which 115 errors occurred during 3312 drug administrations. The overall error rate was 3.5% (95% confidence interval 2.9% to 4.1%). Errors owing to omissions, because the drug had not been supplied or located or the prescription had not been seen, accounted for most (68%, 78) of the errors. Wrong doses accounted for 15% (17) of errors, four of which were greater than the prescribed dose. The dose was given within two hours of the time indicated by the prescriber in 98.2% of cases. Conclusion--The observed rate of drug administration errors is too high. It might be reduced by a multidisciplinary review of practices in prescribing, supply, and administration of drugs. PMID:10156392
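
    The headline numbers can be reproduced directly from the counts in the abstract; a quick sketch using the normal approximation (the paper's exact interval method is not stated, so agreement is to rounding):

      import math

      errors, administrations = 115, 3312
      p = errors / administrations                   # observed error rate
      se = math.sqrt(p * (1 - p) / administrations)  # standard error of the proportion
      lo, hi = p - 1.96 * se, p + 1.96 * se          # 95% confidence interval
      print(f"rate = {p:.1%}, 95% CI = ({lo:.1%}, {hi:.1%})")  # ~3.5% (2.9%, 4.1%)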

  19. Statistical Approach to Decreasing the Error Rate of Noninvasive Prenatal Aneuploid Detection caused by Maternal Copy Number Variation.

    PubMed

    Zhang, Han; Zhao, Yang-Yu; Song, Jing; Zhu, Qi-Ying; Yang, Hua; Zheng, Mei-Ling; Xuan, Zhao-Ling; Wei, Yuan; Chen, Yang; Yuan, Peng-Bo; Yu, Yang; Li, Da-Wei; Liang, Jun-Bin; Fan, Ling; Chen, Chong-Jian; Qiao, Jie

    2015-01-01

    Analyses of cell-free fetal DNA (cff-DNA) from maternal plasma using massively parallel sequencing enable the noninvasive detection of feto-placental chromosome aneuploidy; this technique has been widely used in clinics worldwide. Noninvasive prenatal tests (NIPT) based on cff-DNA have achieved very high accuracy; however, they suffer from maternal copy-number variations (CNV) that may cause false positives and false negatives. In this study, we developed an algorithm to exclude the effect of maternal CNV and refined the Z-score that is used to determine fetal aneuploidy. The simulation results showed that the algorithm is robust against variations of fetal concentration and maternal CNV size. We also introduced a method based on the discrepancy between feto-placental concentrations to help reduce the false-positive ratio. A total of 6615 pregnant women were enrolled in a prospective study to validate the accuracy of our method. All 106 fetuses with T21, 20 with T18, and three with T13 were tested using our method, with sensitivity of 100% and specificity of 99.97%. In the results, two cases with maternal duplications in chromosome 21, which were falsely predicted as T21 by the previous NIPT method, were correctly classified as normal by our algorithm, which demonstrated the effectiveness of our approach. PMID:26534864
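
    The statistic being refined is the familiar NIPT z-score; a minimal sketch of the unrefined baseline (simulated reference fractions and the z > 3 cutoff are illustrative assumptions; the paper's CNV-aware refinement is not reproduced here):

      import numpy as np

      rng = np.random.default_rng(0)
      ref = rng.normal(0.0130, 0.0002, size=200)  # euploid chr21 read fractions, simulated
      test_fraction = 0.0139                      # hypothetical test sample

      z = (test_fraction - ref.mean()) / ref.std(ddof=1)
      print(f"z = {z:.1f} ->", "aneuploidy call" if z > 3 else "no call")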

  1. Metasurface-based broadband hologram with high tolerance to fabrication errors

    PubMed Central

    Zhang, Xiaohu; Jin, Jinjin; Wang, Yanqin; Pu, Mingbo; Li, Xiong; Zhao, Zeyu; Gao, Ping; Wang, Changtao; Luo, Xiangang

    2016-01-01

    With new degrees of freedom to achieve full control of the optical wavefront, metasurfaces can overcome the fabrication difficulties faced by metamaterials. In this paper, a broadband hologram based on a metasurface consisting of an array of elongated nanoapertures with different orientations is experimentally demonstrated. Owing to the broadband characteristic of the polarization-dependent scattering, the performance is verified at working wavelengths ranging from 405 nm to 914 nm. Furthermore, the tolerance to fabrication errors, including the length and width of the elongated apertures, shape deformation, and phase noise, is theoretically investigated and found to be as large as 10% relative to the original hologram. We believe the method proposed here is promising for emerging applications such as holographic display, optical information processing, and lithography. PMID:26818130
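
    Orientation-controlled nanoaperture holograms of this kind typically encode phase geometrically (the Pancharatnam-Berry phase): rotating an aperture by theta imparts a phase of 2*theta on the cross-polarized light. A minimal sketch of that mapping, stated as an assumption about the encoding rather than a detail taken from the paper:

      import numpy as np

      rng = np.random.default_rng(0)
      target_phase = rng.uniform(0, 2 * np.pi, size=(4, 4))  # desired hologram phase
      orientation = target_phase / 2.0                        # aperture rotation angles
      imparted = (2.0 * orientation) % (2 * np.pi)            # phase actually imparted

      assert np.allclose(imparted, target_phase % (2 * np.pi))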

  4. Ultra High-Rate Germanium (UHRGe) Modeling Status Report

    SciTech Connect

    Warren, Glen A.; Rodriguez, Douglas C.

    2012-06-07

    The Ultra-High Rate Germanium (UHRGe) project at Pacific Northwest National Laboratory (PNNL) is conducting research to develop a high-purity germanium (HPGe) detector that can provide both the high resolution typical of germanium and high signal throughput. Such detectors may be beneficial for a variety of potential applications ranging from safeguards measurements of used fuel to material detection and verification using active interrogation techniques. This report describes some of the initial radiation transport modeling efforts that have been conducted to help guide the design of the detector as well as a description of the process used to generate the source spectrum for the used fuel application evaluation.

  5. Quality Control of High-Dose-Rate Brachytherapy: Treatment Delivery Analysis Using Statistical Process Control

    SciTech Connect

    Able, Charles M.; Bright, Megan; Frizzell, Bart

    2013-03-01

    Purpose: Statistical process control (SPC) is a quality control method used to ensure that a process is well controlled and operates with little variation. This study determined whether SPC was a viable technique for evaluating the proper operation of a high-dose-rate (HDR) brachytherapy treatment delivery system. Methods and Materials: A surrogate prostate patient was developed using Vyse ordnance gelatin. A total of 10 metal oxide semiconductor field-effect transistors (MOSFETs) were placed from prostate base to apex. Computed tomography guidance was used to accurately position the first detector in each train at the base. The plan consisted of 12 needles with 129 dwell positions delivering a prescribed peripheral dose of 200 cGy. Sixteen accurate treatment trials were delivered as planned. Subsequently, a number of treatments were delivered with errors introduced, including wrong patient, wrong source calibration, wrong connection sequence, single needle displaced inferiorly 5 mm, and entire implant displaced 2 mm and 4 mm inferiorly. Two process behavior charts (PBC), an individual and a moving range chart, were developed for each dosimeter location. Results: There were 4 false positives resulting from 160 measurements from 16 accurately delivered treatments. For the inaccurately delivered treatments, the PBC indicated that measurements made at the periphery and apex (regions of high-dose gradient) were much more sensitive to treatment delivery errors. All errors introduced were correctly identified by either the individual or the moving range PBC in the apex region. Measurements at the urethra and base were less sensitive to errors. Conclusions: SPC is a viable method for assessing the quality of HDR treatment delivery. Further development is necessary to determine the most effective dose sampling, to ensure reproducible evaluation of treatment delivery accuracy.
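
    For readers unfamiliar with the two chart types, this minimal sketch computes individual (I) and moving-range (MR) control limits with the standard Shewhart constants for subgroup size 2 (3/d2 = 2.66, D4 = 3.267); the dose readings are synthetic placeholders, not the study's MOSFET data:

      import numpy as np

      doses = np.array([201, 199, 202, 198, 200, 201, 197, 203,
                        199, 200, 202, 198, 201, 200, 199, 202], float)  # cGy

      mr = np.abs(np.diff(doses))  # moving ranges between consecutive trials
      mr_bar = mr.mean()

      i_ucl = doses.mean() + 2.66 * mr_bar  # individual-chart upper limit
      i_lcl = doses.mean() - 2.66 * mr_bar  # individual-chart lower limit
      mr_ucl = 3.267 * mr_bar               # MR-chart upper limit (lower limit is 0)

      flagged = np.where((doses > i_ucl) | (doses < i_lcl))[0]
      print(f"I chart: LCL={i_lcl:.1f}, UCL={i_ucl:.1f} cGy; MR UCL={mr_ucl:.1f} cGy")
      print("out-of-control trials:", flagged)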

  6. High frame rate CCD camera with fast optical shutter

    SciTech Connect

    Yates, G.J.; McDonald, T.E. Jr.; Turko, B.T.

    1998-09-01

    A high frame rate CCD camera coupled with a fast optical shutter has been designed for high repetition rate imaging applications. The design uses state-of-the-art microchannel plate image intensifier (MCPII) technology fostered/developed by Los Alamos National Laboratory to support nuclear, military, and medical research requiring high-speed imagery. Key design features include asynchronous resetting of the camera to acquire random transient images, patented real-time analog signal processing with 10-bit digitization at 40-75 MHz pixel rates, synchronized shutter exposures as short as 200 ps, and sustained continuous readout of 512 x 512 pixels per frame at 1-5 Hz rates via parallel multiport (16-port CCD) data transfer. Salient characterization/performance test data for the prototype camera are presented; temporally and spatially resolved images obtained from range-gated LADAR field testing are included; and an alternative system configuration using several cameras sequenced to deliver discrete numbers of consecutive frames at effective burst rates up to 5 GHz (accomplished by time-phasing of consecutive MCPII shutter gates without overlap) is discussed. Potential applications, including dynamic radiography and optical correlation, are presented.

  7. Machining and grinding: High rate deformation in practice

    SciTech Connect

    Follansbee, P.S.

    1993-04-01

    Machining and grinding are well-established material-working operations involving highly non-uniform deformation and failure processes. A typical machining operation is characterized by uncertain boundary conditions (e.g., surface interactions), three-dimensional stress states, large strains, high strain rates, non-uniform temperatures, highly localized deformations, and failure by both nominally ductile and brittle mechanisms. While machining and grinding are thought to be dominated by empiricism, even a cursory inspection leads one to the conclusion that this results more from necessity arising out of the complicated and highly interdisciplinary nature of the processes than from the lack thereof. With these conditions in mind, the purpose of this paper is to outline the current understanding of strain rate effects in metals.

  9. Method and Apparatus for High Data Rate Demodulation

    NASA Technical Reports Server (NTRS)

    Grebowsky, Gerald J. (Inventor); Gray, Andrew A. (Inventor); Srinivasan, Meera (Inventor)

    2001-01-01

    A method to demodulate BPSK or QPSK data using clock rates for the receiver demodulator of one-fourth the data rate is presented. This is accomplished through multirate digital signal processing techniques. The data is sampled with an analog-to-digital converter and then converted from a serial data stream to a parallel data stream. Once converted into a parallel data stream, the demodulation operations including complex baseband mixing, lowpass filtering, detection filtering, symbol-timing recovery, and carrier recovery are all accomplished at a rate one-fourth the data rate. The clock cycle required is one-sixteenth that required by a traditional serial receiver based on straight convolution. The high rate data demodulator will demodulate BPSK, QPSK, UQPSK, and DQPSK with data rates ranging from 10 megasymbols to more than 300 megasymbols per second. This method requires fewer clock cycles per symbol than traditional serial convolution techniques.
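
    A toy sketch of the serial-to-parallel idea (invented pulse shape, rates, and noise level; not the patented architecture): with four samples per BPSK symbol, reshaping the sample stream into rows of four lets the detection filter run once per row, i.e. at one-quarter of the sample clock:

      import numpy as np

      sps = 4                                     # samples per symbol
      rng = np.random.default_rng(1)
      symbols = rng.choice([-1.0, 1.0], size=64)  # BPSK data
      pulse = np.ones(sps)                        # rectangular detection filter
      samples = np.repeat(symbols, sps) + 0.2 * rng.standard_normal(64 * sps)

      parallel = samples.reshape(-1, sps)  # serial-to-parallel: one row per slow clock
      soft = parallel @ pulse              # one filter operation per row
      decisions = np.sign(soft)

      print("symbol errors:", int(np.sum(decisions != symbols)))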

  10. Evaluation of advanced high rate Li-SOCl2 cells

    NASA Technical Reports Server (NTRS)

    Deligiannis, F.; Ang, V.; Dawson, S.; Frank, H.; Subbarao, S.

    1986-01-01

    Under NASA sponsorship, JPL is developing advanced, high rate Li-SOCl2 cells for future space missions. As part of this effort, Li-SOCl2 cells of various designs were examined for performance and safety. The cells differed from one another in several aspects, such as: nature of the carbon cathode, catalysts, cell configuration, case polarity, and safety devices. Performance evaluation included constant-current discharge over a range of currents and temperatures. Abuse testing consisted of short-circuiting, charging, and over-discharge. Energy densities greater than 300 Wh/kg at the C/2 rate were found for some designs. A cell design featuring a high-surface-area carbon cathode was found to deliver nearly 500 Wh/kg at moderate discharge rates. Temperature influenced the performance significantly.

  11. Dosimetric investigation of high dose rate, gated IMRT

    SciTech Connect

    Lin, Teh; Chen Yan; Hossain, Murshed; Li, Jinsheng; Ma, C.-M.

    2008-11-15

    Increasing the dose rate offers time savings for IMRT delivery, but the dosimetric accuracy is a concern, especially when treating a moving target. The objective of this work is to determine the effect of dose rate associated with organ motion and gated treatment using step-and-shoot IMRT delivery. Both measurements and analytical simulations on clinical plans are performed to study the dosimetric differences between high dose rate and low dose rate gated IMRT step-and-shoot delivery. IMRT plans for various sites, including liver, lung, pancreas, and breast cancers, were delivered to a custom-made motorized phantom, which simulated sinusoidal movement. Repeated measurements were taken for gated and nongated delivery with different gating settings and three dose rates, 100, 500, and 1000 MU/min, using ion chambers and extended dose range films. To study the residual motion effect for individual segment doses and the composite dose of IMRT plans, measurements with 30%-60% phase gating and without gating were compared for the various dose rates. A small but clinically acceptable difference in delivered dose was observed between 1000, 500, and 100 MU/min at 30%-60% phase gating. A simulation is presented that can be used to predict dose profiles for patient cases in the presence of motion and gating, confirming that IMRT step-and-shoot delivery with gating at 1000 MU/min is not much different from 500 MU/min. Based on the authors' sample plan analyses, the preliminary results suggest that a 1000 MU/min dose rate is dosimetrically accurate and efficient for IMRT treatment delivery with gating. Nonetheless, for patient care and safety, patient-specific QA should be performed as usual for IMRT plans delivered at high dose rates.

  12. Childhood Onset Schizophrenia: High Rate of Visual Hallucinations

    ERIC Educational Resources Information Center

    David, Christopher N.; Greenstein, Deanna; Clasen, Liv; Gochman, Pete; Miller, Rachel; Tossell, Julia W.; Mattai, Anand A.; Gogtay, Nitin; Rapoport, Judith L.

    2011-01-01

    Objective: To document high rates and clinical correlates of nonauditory hallucinations in childhood onset schizophrenia (COS). Method: Within a sample of 117 pediatric patients (mean age 13.6 years), diagnosed with COS, the presence of auditory, visual, somatic/tactile, and olfactory hallucinations was examined using the Scale for the Assessment…

  13. Cassini High Rate Detector V16.0

    NASA Astrophysics Data System (ADS)

    Economou, T.; DiDonna, P.

    2016-05-01

    The High Rate Detector (HRD) from the University of Chicago is an independent part of the CDA instrument on the Cassini Orbiter that measures the dust flux and particle mass distribution of dust particles hitting the HRD detectors. This data set includes all data from the HRD through December 31, 2015. Please refer to Srama et al. (2004) for a detailed HRD description.

  14. Measuring High School Graduation Rates: A Review of the Literature

    ERIC Educational Resources Information Center

    Savich, Carl

    2007-01-01

    This paper reviewed the research literature on graduation rates in U.S. high schools to evaluate and assess the findings. The methodology employed was to determine the measuring method that researchers used in reaching their findings. The strengths and weaknesses of the method employed were then analyzed. Flaws and inaccuracies were examined and…

  15. Statistical Profiles of Highly-Rated Learning Objects

    ERIC Educational Resources Information Center

    Cechinel, Cristian; Sanchez-Alonso, Salvador; Garcia-Barriocanal, Elena

    2011-01-01

    The continuous growth of learning resources available in on-line repositories has increased interest in the development of automated methods for quality assessment. The current existence of on-line evaluations in such repositories has opened the possibility of searching for statistical profiles of highly-rated resources that can be used as…

  16. Binary interactions with high accretion rates onto main sequence stars

    NASA Astrophysics Data System (ADS)

    Shiber, Sagiv; Schreier, Ron; Soker, Noam

    2016-07-01

    Energetic outflows from main sequence stars accreting mass at very high rates might account for the powering of some eruptive objects, such as merging main sequence stars, major eruptions of luminous blue variables, e.g., the Great Eruption of Eta Carinae, and other intermediate luminosity optical transients (ILOTs; red novae; red transients). These powerful outflows could potentially also supply the extra energy required in the common envelope process and in the grazing envelope evolution of binary systems. We propose that a massive outflow/jets mediated by magnetic fields might remove energy and angular momentum from the accretion disk to allow such high accretion rate flows. By examining the possible activity of the magnetic fields of accretion disks, we conclude that indeed main sequence stars might accrete mass at very high rates, up to ≈10⁻² M⊙ yr⁻¹ for solar-type stars, and up to ≈1 M⊙ yr⁻¹ for very massive stars. We speculate that magnetic fields amplified in such extreme conditions might lead to the formation of massive bipolar outflows that can remove most of the disk's energy and angular momentum. It is this energy and angular momentum removal that allows the very high mass accretion rate onto main sequence stars.

  17. Understanding High School Graduation Rates in North Carolina

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  18. Trends in High School Graduation Rates. Research Brief. Volume 0710

    ERIC Educational Resources Information Center

    Romanik, Dale; Froman, Terry

    2008-01-01

    This Research Brief addresses an outcome measure that is of paramount importance to senior high schools--graduation rate. Nationwide a student drops out of school approximately every nine seconds. The significance of this issue locally is exemplified by a recent American Civil Liberties Union filing of a class action law suit against the Palm…

  19. Distance Education: Why Are the Attrition Rates so High?

    ERIC Educational Resources Information Center

    Moody, Johnette

    2004-01-01

    Distance education is being hailed as the next best thing to sliced bread. But is it really? Many problems exist with distance-delivered courses. Everything from course development and management to students not being adequately prepared is problematic and results in high attrition rates in distance-delivered courses. Students initially…

  20. Plant respirometer enables high resolution of oxygen consumption rates

    NASA Technical Reports Server (NTRS)

    Foster, D. L.

    1966-01-01

    Plant respirometer permits high resolution of relatively small changes in the rate of oxygen consumed by plant organisms undergoing oxidative metabolism in a nonphotosynthetic state. The two stage supply and monitoring system operates by a differential pressure transducer and provides a calibrated output by digital or analog signals.

  1. Corrected High-Frame Rate Anchored Ultrasound with Software Alignment

    ERIC Educational Resources Information Center

    Miller, Amanda L.; Finch, Kenneth B.

    2011-01-01

    Purpose: To improve lingual ultrasound imaging with the Corrected High Frame Rate Anchored Ultrasound with Software Alignment (CHAUSA; Miller, 2008) method. Method: A production study of the IsiXhosa alveolar click is presented. Articulatory-to-acoustic alignment is demonstrated using a Tri-Modal 3-ms pulse generator. Images from 2 simultaneous…

  2. Reducing the High School Dropout Rate. KIDS COUNT Indicator Brief

    ERIC Educational Resources Information Center

    Shore, Rima; Shore, Barbara

    2009-01-01

    Researchers use many different methods to calculate the high school dropout rate, and depending on the approach, the numbers can look very different. But, no matter which method is used, the key finding is the same: too many students are leaving school without the knowledge and skills they need to meet the demands of twenty-first century…

  3. Predicting the College Attendance Rate of Graduating High School Classes.

    ERIC Educational Resources Information Center

    Hoover, Donald R.

    1990-01-01

    An important element of school counseling is providing assessments on the collective future needs and activities of a graduating school class. The College Attendance Rate (CAR) is defined here as the proportion of seniors graduating from a given high school, during a given year, that will enroll full-time at an academic college sometime during the…

  4. High Precision Measurements of Temperature Dependence of Creep Rate of Polycrystalline Forsterite

    NASA Astrophysics Data System (ADS)

    Nakakoji, T.; Hiraga, T.

    2014-12-01

    Obtaining the temperature dependence of creep rate, that is, the activation energy for creep, is critical in geophysics, since its value can indicate the deformation mechanism and also allows creep rates measured in laboratory experiments to be extrapolated to geological conditions when the creep mechanism is identical in both cases. Although numerous experimental results have been obtained so far, the reported activation energies often carry errors of >50 kJ/mol, which causes large uncertainties in strain rate at the geological conditions of interest. To minimize this error, it is important to collect strain rates at many different temperatures with high accuracy. We conducted high temperature compression experiments on synthetic forsterite (90 vol%) and enstatite (10 vol%) aggregates under increasing and decreasing temperatures. We applied a constant load of ~20 MPa using a uniaxial testing machine (Shimadzu AG-X 50 kN). The temperature was changed from 1360 °C to 1240 °C by a furnace attached to the machine. Prior to applying the load to the samples, the grain size was saturated at 1360 °C for 24 hours to minimize grain growth during the test. The rate of temperature decrease was 0.11 min/°C in the range 1360-1300 °C and 0.02 min/°C in the range 1300-1240 °C; the rate of temperature increase was the same. Strain rates were obtained successfully at every 1 °C step. After the experiment, we analyzed the microstructure of the sample with scanning electron microscopy to measure the grain diameter. Arrhenius plots of strain rate are very linear at >1300 °C, giving an activation energy of 649 ± 14 kJ/mol, whereas a weak transition to a lower activation energy of 550 ± 23 kJ/mol was observed below 1300 °C. Tasaka et al. (2013) obtained an activation energy of 370 ± 50 kJ/mol over similar temperature ranges but with finer-grained samples. Combining these results, we interpret our results of high activation
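
    The activation-energy extraction described here is a linear Arrhenius fit, ln(strain rate) vs 1/T with slope -Q/R; a minimal sketch on synthetic data generated at the abstract's high-temperature value (the prefactor is an arbitrary assumption):

      import numpy as np

      R = 8.314   # gas constant (J/mol/K)
      Q_true = 649e3                                      # activation energy (J/mol), from the abstract
      T = np.linspace(1300 + 273.15, 1360 + 273.15, 30)   # temperatures (K)
      rate = 1e12 * np.exp(-Q_true / (R * T))             # synthetic strain rates (1/s)

      slope, _ = np.polyfit(1.0 / T, np.log(rate), 1)     # slope = -Q/R
      print(f"fitted Q = {-slope * R / 1e3:.0f} kJ/mol")  # recovers ~649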

  5. Attenuation and bit error rate for four co-propagating spatially multiplexed optical communication channels of exactly same wavelength in step index multimode fibers

    NASA Astrophysics Data System (ADS)

    Murshid, Syed H.; Chakravarty, Abhijit

    2011-06-01

    Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric circular donut shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut shaped independent channels can be separated either with the help of bulk optics or integrated concentric optical detectors. This paper presents the experimental setup and results for a four channel SDM system. The attenuation and bit error rate for the individual channels of such a system are also presented.

  6. Optimization of high-throughput sequencing kinetics for determining enzymatic rate constants of thousands of RNA substrates.

    PubMed

    Niland, Courtney N; Jankowsky, Eckhard; Harris, Michael E

    2016-10-01

    Quantification of the specificity of RNA binding proteins and RNA processing enzymes is essential to understanding their fundamental roles in biological processes. High-throughput sequencing kinetics (HTS-Kin) uses high-throughput sequencing and internal competition kinetics to simultaneously monitor the processing rate constants of thousands of substrates by RNA processing enzymes. This technique has provided unprecedented insight into the substrate specificity of the tRNA processing endonuclease ribonuclease P. Here, we investigated the accuracy and robustness of measurements associated with each step of the HTS-Kin procedure. We examine the effect of substrate concentration on the observed rate constant, determine the optimal kinetic parameters, and provide guidelines for reducing error in amplification of the substrate population. Importantly, we found that high-throughput sequencing and experimental reproducibility contribute to error, and these are the main sources of imprecision in the quantified results when otherwise optimized guidelines are followed. PMID:27296633
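
    A minimal sketch of the internal-competition bookkeeping underlying this kind of analysis (made-up read counts; assumes counts are normalized to an unreacted spike-in so they track absolute substrate amounts; not the authors' pipeline): relative rate constants follow from the fraction of each substrate left unreacted, k_i/k_ref = ln(remaining_i)/ln(remaining_ref).

      import math

      reads_t0 = {"sub_A": 10000, "sub_B": 10000, "ref": 10000}  # before reaction
      reads_t1 = {"sub_A": 6000, "sub_B": 9000, "ref": 8000}     # residual substrate at time t

      for s in ("sub_A", "sub_B"):
          # ratio of log fractions remaining gives the rate constant relative to "ref"
          k_rel = math.log(reads_t1[s] / reads_t0[s]) / math.log(reads_t1["ref"] / reads_t0["ref"])
          print(f"k({s}) / k(ref) = {k_rel:.2f}")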

  7. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands

    PubMed Central

    Sánchez-Durán, José A.; Hidalgo-López, José A.; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando

    2015-01-01

    Tactile sensors suffer from many types of interference and errors, like crosstalk, non-linearity, drift or hysteresis; therefore, calibration should be carried out to compensate for these deviations. However, this procedure is difficult in sensors mounted on artificial hands for robots or prosthetics, for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. Furthermore, this intensive and complex calibration could be less determinant, or at least simpler. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters could be changed less by common errors and interferences, or at least their variations could be of the order of those caused by accepted limitations, like reduced spatial resolution. This paper shows results from experiments that support this idea. The experiments are carried out with a high performance commercial sensor as well as with a low-cost error-prone sensor built with a common procedure in robotics. PMID:26295393
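
    The image moments the abstract refers to can be computed directly from a 2-D pressure map; a minimal sketch with a synthetic contact patch on a hypothetical 16x16 taxel array:

      import numpy as np

      img = np.zeros((16, 16))
      img[5:9, 4:12] = 1.0   # fake elongated contact patch (pressure map)

      ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
      m00 = img.sum()                                          # total load (zeroth moment)
      cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00  # contact centroid

      mu20 = ((xs - cx) ** 2 * img).sum() / m00                # second central moments
      mu02 = ((ys - cy) ** 2 * img).sum() / m00
      mu11 = ((xs - cx) * (ys - cy) * img).sum() / m00
      theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)          # contact orientation

      print(f"centroid = ({cx:.1f}, {cy:.1f}), orientation = {np.degrees(theta):.1f} deg")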

  9. Human PrimPol is a highly error-prone polymerase regulated by single-stranded DNA binding proteins

    PubMed Central

    Guilliam, Thomas A.; Jozwiakowski, Stanislaw K.; Ehlinger, Aaron; Barnes, Ryan P.; Rudd, Sean G.; Bailey, Laura J.; Skehel, J. Mark; Eckert, Kristin A.; Chazin, Walter J.; Doherty, Aidan J.

    2015-01-01

    PrimPol is a recently identified polymerase involved in eukaryotic DNA damage tolerance, employed in both re-priming and translesion synthesis mechanisms to bypass nuclear and mitochondrial DNA lesions. In this report, we investigate how the enzymatic activities of human PrimPol are regulated. We show that, unlike other TLS polymerases, PrimPol is not stimulated by PCNA and does not interact with it in vivo. We identify that PrimPol interacts with both of the major single-strand binding proteins, RPA and mtSSB in vivo. Using NMR spectroscopy, we characterize the domains responsible for the PrimPol-RPA interaction, revealing that PrimPol binds directly to the N-terminal domain of RPA70. In contrast to the established role of SSBs in stimulating replicative polymerases, we find that SSBs significantly limit the primase and polymerase activities of PrimPol. To identify the requirement for this regulation, we employed two forward mutation assays to characterize PrimPol's replication fidelity. We find that PrimPol is a mutagenic polymerase, with a unique error specificity that is highly biased towards insertion-deletion errors. Given the error-prone disposition of PrimPol, we propose a mechanism whereby SSBs greatly restrict the contribution of this enzyme to DNA replication at stalled forks, thus reducing the mutagenic potential of PrimPol during genome replication. PMID:25550423

  10. High frame rate photoacoustic imaging using clinical ultrasound system

    NASA Astrophysics Data System (ADS)

    Sivasubramanian, Kathyayini; Pramanik, Manojit

    2016-03-01

    Photoacoustic tomography (PAT) is a potential hybrid imaging modality which is gaining attention in the field of medical imaging. Typically a Q-switched Nd:YAG laser is used to excite the tissue and generate photoacoustic signals, but such lasers are not suitable for clinical applications owing to their high cost and large size. Also, their low pulse repetition rate (PRR) of a few tens of hertz prevents them from being used in real-time PAT. So, there is a growing need for an imaging system capable of real-time imaging for various clinical applications. In this work, we use a nanosecond pulsed laser diode as the excitation source and a clinical ultrasound imaging system to obtain photoacoustic images. The excitation laser has a wavelength of ~803 nm and an energy of ~1.4 mJ per pulse. So far, the reported frame rates for photoacoustic imaging are only a few hundred hertz. We have demonstrated photoacoustic (B-mode) imaging at up to 7,000 frames per second and measured the flow rate of a fast-moving object. Phantom experiments were performed to test the fast imaging capability and to measure the flow rate of an ink solution inside a tube. This fast photoacoustic imaging can be used for various clinical applications, including cardiac-related problems, where the blood flow rate is quite high, or other dynamic studies.

  11. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and, hence, on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations. PMID:26560913
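
    The recipe described, averaging a conditional Gaussian-noise error rate over a log-normal intensity distribution whose variance is set by the scintillation index, can be sketched as follows (an OOK-style conditional BER and an illustrative SNR are assumed; this is not the paper's exact PCFT formulation):

      import numpy as np
      from scipy.special import erfc
      from scipy.integrate import quad

      def mean_ber(snr, si):
          # snr: mean electrical SNR; si: scintillation index sigma_I^2
          s2 = np.log(1.0 + si)  # log-intensity variance
          mu = -0.5 * s2         # chosen so that <I> = 1
          def integrand(I):
              # log-normal intensity PDF times the conditional OOK error rate
              p = np.exp(-(np.log(I) - mu) ** 2 / (2 * s2)) / (I * np.sqrt(2 * np.pi * s2))
              return p * 0.5 * erfc(snr * I / (2 * np.sqrt(2)))
          return quad(integrand, 1e-6, 20.0)[0]

      for si in (0.05, 0.2, 0.5):
          print(f"sigma_I^2 = {si:.2f}: BER ~ {mean_ber(10.0, si):.2e}")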

  12. COSMIC ERROR CAUSED BY THE GRAVITATIONAL MICROLENSING EFFECT IN HIGH-PRECISION ASTROMETRY

    SciTech Connect

    Yano, Taihei

    2012-10-01

    We have investigated the expected deviation of the positions and proper motions of stars, a cosmic error caused by the gravitational microlensing effect. In observations of stars in the Galactic bulge region, the expected deviation of star positions due to the gravitational microlensing effect is about 7 µas. We have also estimated the expected deviation of the proper motions of stars in the Galactic bulge caused by the gravitational microlensing effect. The expected deviation of the proper motions is mainly caused by the lens object located at the nearest angular distance from the source star. Each deviation of the proper motion has a value of less than 0.02 µas yr⁻¹ for 99% of the sources. We have investigated the correlation of the deviations of Galactic bulge stars caused by the gravitational microlensing effect. The correlation angle of the positional deviation is estimated to be about 1 arcmin. In the same way, we have estimated the correlation angle of the deviation of the proper motions; this angle is estimated to be about 1 arcsec. The following difference distinguishes the deviation of the position from that of the proper motion: the positional deviation is affected not only by lenses near the source but also by lenses far from the source, whereas the deviation of the proper motion by microlensing is mainly caused by the lens nearest to the source. This difference explains the difference in correlation angles.

  13. High repetition rate plasma mirror device for attosecond science

    SciTech Connect

    Borot, A.; Douillet, D.; Iaquaniello, G.; Lefrou, T.; Lopez-Martens, R.; Audebert, P.; Geindre, J.-P.

    2014-01-15

    This report describes an active solid target positioning device for driving plasma mirrors with high repetition rate ultra-high intensity lasers. The position of the solid target surface with respect to the laser focus is optically monitored and mechanically controlled on the nm scale to ensure reproducible interaction conditions for each shot at arbitrary repetition rate. We demonstrate the target capabilities by driving high-order harmonic generation from plasma mirrors produced on glass targets with a near-relativistic intensity few-cycle pulse laser system operating at 1 kHz. During experiments, residual target surface motion can be actively stabilized down to 47 nm (root mean square), which ensures sub-300-as relative temporal stability of the plasma mirror as a secondary source of coherent attosecond extreme ultraviolet radiation in pump-probe experiments.

  14. High repetition rate plasma mirror device for attosecond science

    NASA Astrophysics Data System (ADS)

    Borot, A.; Douillet, D.; Iaquaniello, G.; Lefrou, T.; Audebert, P.; Geindre, J.-P.; Lopez-Martens, R.

    2014-01-01

    This report describes an active solid target positioning device for driving plasma mirrors with high repetition rate ultra-high intensity lasers. The position of the solid target surface with respect to the laser focus is optically monitored and mechanically controlled on the nm scale to ensure reproducible interaction conditions for each shot at arbitrary repetition rate. We demonstrate the target capabilities by driving high-order harmonic generation from plasma mirrors produced on glass targets with a near-relativistic intensity few-cycle pulse laser system operating at 1 kHz. During experiments, residual target surface motion can be actively stabilized down to 47 nm (root mean square), which ensures sub-300-as relative temporal stability of the plasma mirror as a secondary source of coherent attosecond extreme ultraviolet radiation in pump-probe experiments.

  15. High repetition rate plasma mirror device for attosecond science.

    PubMed

    Borot, A; Douillet, D; Iaquaniello, G; Lefrou, T; Audebert, P; Geindre, J-P; Lopez-Martens, R

    2014-01-01

    This report describes an active solid target positioning device for driving plasma mirrors with high repetition rate ultra-high intensity lasers. The position of the solid target surface with respect to the laser focus is optically monitored and mechanically controlled on the nm scale to ensure reproducible interaction conditions for each shot at arbitrary repetition rate. We demonstrate the target capabilities by driving high-order harmonic generation from plasma mirrors produced on glass targets with a near-relativistic intensity few-cycle pulse laser system operating at 1 kHz. During experiments, residual target surface motion can be actively stabilized down to 47 nm (root mean square), which ensures sub-300-as relative temporal stability of the plasma mirror as a secondary source of coherent attosecond extreme ultraviolet radiation in pump-probe experiments. PMID:24517742
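
    As a quick plausibility check (our own back-of-envelope calculation, not taken from the paper), the link between the 47 nm RMS surface jitter and the quoted attosecond-scale temporal stability can be sketched in a few lines of Python: a surface displacement dz changes the reflected optical path by roughly 2·dz·cos(θ), i.e. a delay of 2·dz·cos(θ)/c, where the incidence angle θ is our assumption.

      import math

      c = 2.998e8    # speed of light (m/s)
      dz = 47e-9     # RMS residual surface motion (m)
      for theta_deg in (0.0, 45.0):   # normal vs oblique incidence (assumed)
          dt = 2 * dz * math.cos(math.radians(theta_deg)) / c
          print(f"theta = {theta_deg:4.1f} deg -> timing jitter ~ {dt * 1e18:4.0f} as")

    At oblique incidence this lands comfortably below 300 as, consistent with the stability the authors quote.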

  16. High strain rate behavior of pure metals at elevated temperature

    NASA Astrophysics Data System (ADS)

    Testa, Gabriel; Bonora, Nicola; Ruggiero, Andrew; Iannitti, Gianluca; Gentile, Domenico

    2013-06-01

    In many applications and technological processes, such as stamping, forging and hot working, metals and alloys are subjected to deformation at elevated temperature and high strain rate. Characterization tests, such as quasistatic and dynamic tension or compression tests, and validation tests, such as Taylor impact and dynamic tensile extrusion (DTE), provide the experimental database for constitutive model validation and material parameter identification. Testing materials at high strain rate and temperature requires dedicated equipment. In this work, both a tensile Hopkinson bar and a light gas gun were modified to allow material testing under sample-controlled temperature conditions. Dynamic tension tests and Taylor impact tests at different temperatures were performed on high-purity copper (99.98%), tungsten (99.95%) and 316L stainless steel. The accuracy of several constitutive models (Johnson-Cook, Zerilli-Armstrong, etc.) in predicting the observed material response was verified by means of extensive finite element analysis (FEA).

  17. Magnetic Implosion for Novel Strength Measurements at High Strain Rates

    SciTech Connect

    Lee, H.; Preston, D.L.; Bartsch, R.R.; Bowers, R.L.; Holtkamp, D.; Wright, B.L.

    1998-10-19

    Recently Lee and Preston have proposed magnetic implosions as a new method for measuring material strength in a regime of large strains and high strain rates inaccessible to previously established techniques. By its shockless nature, this method avoids the intrinsic difficulties associated with an earlier approach using high explosives. The authors illustrate how the stress-strain relation for an imploding liner can be obtained by measuring the velocity and temperature history of its inner surface. They discuss the physical requirements that lead them to a composite liner design applicable to different test materials, and compare code-simulated predictions with the measured data for the high-strain-rate experiments conducted recently at LANL. Finally, they present a novel diagnostic scheme that enables removal of the background in the pyrometric measurement through data reduction.

  18. High-rate mechanical properties of energetic materials

    NASA Astrophysics Data System (ADS)

    Walley, S. M.; Siviour, C. R.; Drodge, D. R.; Williamson, D. M.

    2010-01-01

    Compared to the many thousands of studies that have been performed on the energy release mechanisms of high energy materials, relatively few studies have been performed (a few hundred) into their mechanical properties. Since it is increasingly desired to model the high rate deformation of such materials, it is of great importance to gather data on their response so that predictive constitutive models can be constructed. This paper reviews the state of the art concerning what is known about the mechanical response of high energy materials. Examples of such materials are polymer bonded explosives (used in munitions), propellants (used to propel rockets), and pyrotechnics (used to initiate munitions and also in flares).

  19. Characterisation of human diaphragm at high strain rate loading.

    PubMed

    Gaur, Piyush; Chawla, Anoop; Verma, Khyati; Mukherjee, Sudipto; Lalvani, Sanjeev; Malhotra, Rajesh; Mayer, Christian

    2016-07-01

    Motor vehicle crashes (MVCs) commonly result in life-threatening thoracic and abdominal injuries. Finite element models are becoming an important tool in analyzing automotive-related injuries to soft tissues. Establishment of accurate material models, including tissue tolerance limits, is critical for accurate injury evaluation. The diaphragm is the most important skeletal muscle for respiration; it has a bi-domed structure separating the thoracic cavity from the abdominal cavity. Traumatic rupture of the diaphragm is a potentially serious injury which presents in different forms depending on the mechanism of the causative trauma. A major step toward understanding the mechanism of traumatic rupture of the diaphragm is to characterize the high-rate failure properties of diaphragm tissue. Thus, the main objective of this study was to estimate the mechanical and failure properties of the human diaphragm at strain rates associated with blunt thoracic and abdominal trauma. A total of 23 uniaxial tensile tests were performed at strain rates ranging from 0.001 to 200 s⁻¹ in order to characterize the mechanical and failure properties of human diaphragm tissue. Each specimen was tested to failure at one of four strain rates (0.001, 65, 130, or 190 s⁻¹) to investigate strain-rate dependency. High-speed video and markers placed on the grippers were used to measure the gripper-to-gripper displacement. Engineering stress reported in this study is calculated as the ratio of the measured force to the initial cross-sectional area, whereas engineering strain is calculated as the ratio of the elongation to the undeformed (gauge) length of the specimen. The results show that diaphragm tissue is rate-dependent, with higher strain rates giving higher failure stresses and higher failure strains. The failure stress for all tests ranged from 1.17 MPa to 4.1 MPa and the failure strain ranged from 12.15% to 24.62%. PMID:27062242
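
    The stress and strain definitions used above are the elementary engineering ones; a minimal sketch (with hypothetical numbers, not data from the study) makes the bookkeeping explicit:

      def engineering_stress_strain(force_n, elongation_mm, area0_mm2, gauge0_mm):
          """Engineering stress = force / initial cross-sectional area (N/mm^2 = MPa);
          engineering strain = elongation / undeformed gauge length."""
          return force_n / area0_mm2, elongation_mm / gauge0_mm

      # Hypothetical specimen, for illustration only:
      stress, strain = engineering_stress_strain(force_n=8.2, elongation_mm=1.1,
                                                 area0_mm2=4.0, gauge0_mm=10.0)
      print(f"stress = {stress:.2f} MPa, strain = {strain:.1%}")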

  20. High rate constitutive modeling of aluminium alloy tube

    NASA Astrophysics Data System (ADS)

    Salisbury, C. P.; Worswick, M. J.; Mayer, R.

    2006-08-01

    As the need for fuel efficient automobiles increases, car designers are investigating light-weight materials for automotive bodies that will reduce the overall automobile weight. Aluminium alloy tube is a desirable material to use in automotive bodies due to its light weight. However, aluminium suffers from lower formability than steel and its energy absorption ability in a crash event after a forming operation is largely unknown. As part of a larger study on the relationship between crashworthiness and forming processes, constitutive models for 3 mm AA5754 aluminium tube were developed. A nominal strain rate of 100/s is often used to characterize overall automobile crash events, whereas strain rates on the order of 1000/s can occur locally. Therefore, tests were performed at quasi-static rates using an Instron test fixture and at strain rates of 500/s to 1500/s using a tensile split Hopkinson bar. High rate testing was then conducted at rates of 500/s, 1000/s and 1500/s at 21 °C, 150 °C and 300 °C. The generated data was then used to determine the constitutive parameters for the Johnson-Cook and Zerilli-Armstrong material models.
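
    For readers unfamiliar with the Johnson-Cook model named above, a short sketch of its flow-stress form may help; the parameter values below are placeholders, not the fitted AA5754 constants from the study:

      import math

      def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                              eps_rate0=1.0, T_room=294.0, T_melt=875.0):
          """sigma = (A + B*eps^n)(1 + C*ln(eps_rate/eps_rate0))(1 - T*^m),
          with homologous temperature T* = (T - T_room)/(T_melt - T_room)."""
          T_star = (T - T_room) / (T_melt - T_room)
          return ((A + B * eps**n)
                  * (1.0 + C * math.log(eps_rate / eps_rate0))
                  * (1.0 - T_star**m))

      # Illustrative evaluation near one tested condition (1000/s, 150 degC = 423 K):
      print(johnson_cook_stress(eps=0.1, eps_rate=1000.0, T=423.0,
                                A=100.0, B=300.0, n=0.3, C=0.015, m=1.0))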

  1. CW Interference Effects on High Data Rate Transmission Through the ACTS Wideband Channel

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Ngo, Duc H.; Tran, Quang K.; Tran, Diepchi T.; Yu, John; Kachmar, Brian A.; Svoboda, James S.

    1996-01-01

    Satellite communications channels are susceptible to various sources of interference. Wideband channels have a proportionally greater probability of receiving interference than narrowband channels. NASA's Advanced Communications Technology Satellite (ACTS) includes a 900 MHz bandwidth hardlimiting transponder which has provided an opportunity for the study of interference effects of wideband channels. A series of interference tests using two independent ACTS ground terminals measured the effects of continuous-wave (CW) uplink interference on the bit-error rate of a 220 Mbps digitally modulated carrier. These results indicate the susceptibility of high data rate transmissions to CW interference and are compared to results obtained with a laboratory hardware-based system simulation and a computer simulation.

  2. A software control system for the ACTS high-burst-rate link evaluation terminal

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Daugherty, Elaine S.

    1991-01-01

    Control and performance monitoring of NASA's High Burst Rate Link Evaluation Terminal (HBR-LET) is accomplished by using several software control modules. Different software modules are responsible for controlling remote radio frequency (RF) instrumentation, supporting communication between a host and a remote computer, controlling the output power of the Link Evaluation Terminal and data display. Remote commanding of microwave RF instrumentation and the LET digital ground terminal allows computer control of various experiments, including bit error rate measurements. Computer communication allows system operators to transmit and receive from the Advanced Communications Technology Satellite (ACTS). Finally, the output power control software dynamically controls the uplink output power of the terminal to compensate for signal loss due to rain fade. Included is a discussion of each software module and its applications.

  3. Short-cavity high-repetition-rate CO2 laser

    NASA Astrophysics Data System (ADS)

    Klopper, Wouter; Bagrova, Kalina; du Pisanie, Johan; Ronander, Einar; Meyer, Jan A.; von Bergmann, Hubertus M.

    1994-09-01

    We report on the construction and optimization of a TEA CO2 laser with a discharge volume of 15 cm³ and a cavity length of 20 cm. Such a short cavity facilitates single-longitudinal-mode operation. A Roots blower is employed to achieve the necessary gas flow rate for high-repetition-frequency operation in a compact design. Output has been obtained at 1 kHz, and a stable discharge at repetition rates of up to 2 kHz has been demonstrated. The laser is part of a program aimed at the development of an efficient laser system for molecular laser isotope separation. Additional applications in materials processing are envisioned.

  4. Hispanic High School Graduates Pass Whites in Rate of College Enrollment: High School Drop-out Rate at Record Low

    ERIC Educational Resources Information Center

    Fry, Richard; Taylor, Paul

    2013-01-01

    A record seven-in-ten (69%) Hispanic high school graduates in the class of 2012 enrolled in college that fall, two percentage points higher than the rate (67%) among their white counterparts, according to a Pew Research Center analysis of new data from the U.S. Census Bureau. This milestone is the result of a long-term increase in Hispanic…

  5. Vitreous bond CBN high speed and high material removal rate grinding of ceramics

    SciTech Connect

    Shih, A.J.; Grant, M.B.; Yonushonis, T.M.; Morris, T.O.; McSpadden, S.B.

    1998-08-01

    High speed (up to 127 m/s) and high material removal rate (up to 10 mm³/s/mm) grinding experiments using a vitreous bond CBN wheel were conducted to investigate the effects of material removal rate, wheel speed, dwell time and truing speed ratio on cylindrical grinding of silicon nitride and zirconia. Experimental results show that the high grinding wheel surface speed can reduce the effective chip thickness, lower grinding forces, enable high material removal rate grinding and achieve a higher G-ratio. The radial feed rate was increased to as high as 0.34 µm/s for zirconia and 0.25 µm/s for silicon nitride grinding to explore the advantage of using high wheel speed for cost-effective high material removal rate grinding of ceramics.

  6. High Pressure Burn Rate Measurements on an Ammonium Perchlorate Propellant

    SciTech Connect

    Glascoe, E A; Tan, N

    2010-04-21

    High pressure deflagration rate measurements of a unique ammonium perchlorate (AP) based propellant are required to design the base burn motor for a Raytheon weapon system. The results of these deflagration rate measurements will be key in assessing the safety and performance of the system. In particular, the system may experience transient pressures on the order of hundreds of MPa (tens of kPSI). Previous studies on similar AP-based materials demonstrate that low-pressure (e.g., P < 10 MPa or 1500 PSI) burn rates can be quite different from elevated-pressure deflagration rate measurements (see references and HPP results discussed herein); hence elevated-pressure measurements are necessary in order to understand the deflagration behavior under relevant conditions. Previous work on explosives has shown that at hundreds of MPa some explosives transition from a laminar burn mechanism to a convective burn mechanism in a process termed deconsolidative burning, with resulting burn rates that are orders of magnitude faster than the laminar rates. Materials that transition to the deconsolidative-convective burn mechanism at elevated pressures have been shown to be considerably more violent in confined heating experiments (i.e., cook-off scenarios). The mechanisms of propellant and explosive deflagration are extremely complex and include both chemical and mechanical processes; hence predicting the behavior and rate of a novel material or formulation is difficult if not impossible. In this work, the AP/HTPB-based material TAL-1503 (B-2049) was burned in a constant-volume apparatus in argon up to 300 MPa (ca. 44 kPSI). The burn rate and pressure were measured in situ and used to calculate a pressure-dependent burn rate. In general, the material appears to burn in a laminar fashion at these elevated pressures. The experiment was reproduced multiple times, and the burn rate law from the best data is B = (0.6 ± 0.1) × P^(1.05 ± 0.02), where B is the burn rate in mm/s and
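
    The reported burn rate law is easy to evaluate directly; the snippet below assumes P is in MPa, which the truncated abstract does not state explicitly:

      def burn_rate_mm_per_s(P_mpa, coeff=0.6, exponent=1.05):
          # B = 0.6 * P^1.05, the central values of the fit reported above
          return coeff * P_mpa ** exponent

      for P in (10.0, 100.0, 300.0):   # up to the 300 MPa tested
          print(f"P = {P:5.0f} MPa -> B = {burn_rate_mm_per_s(P):7.1f} mm/s")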

  7. A high-rate PCI-based telemetry processor system

    NASA Astrophysics Data System (ADS)

    Turri, R.

    2002-07-01

    The high performance reached by satellite on-board telemetry generation and transmission will consequently require the design of ground facilities with higher processing capabilities at low cost, to allow wide deployment of such ground stations. The equipment normally used is based on complex, proprietary bus and computing architectures that prevent the systems from exploiting the continuous and rapid increase in computing power available on the market. PCI bus systems now allow processing of high-rate data streams in a standard PC system. At the same time, the Windows NT operating system supports multitasking and symmetric multiprocessing, giving the capability to process high-data-rate signals. In addition, high-speed networking, 64-bit PCI-bus technologies and the increase in processor power and software allow creating a system based on COTS products (which in future may be easily and inexpensively upgraded). In the frame of the EUCLID RTP 9.8 project, a specific work element was dedicated to developing the architecture of a system able to acquire telemetry data at up to 600 Mbps. Laben S.p.A. - a Finmeccanica Company - entrusted with this work, has designed a PCI-based telemetry system making possible the communication between a satellite down-link and a wide area network at the required rate.

  8. Modeling Large-Strain, High-Rate Deformation in Metals

    SciTech Connect

    Lesuer, D R; Kay, G J; LeBlanc, M M

    2001-07-20

    The large strain deformation response of 6061-T6 and Ti-6Al-4V has been evaluated over a range in strain rates from 10⁻⁴ s⁻¹ to over 10⁴ s⁻¹. The results have been used to critically evaluate the strength and damage components of the Johnson-Cook (JC) material model. A new model that addresses the shortcomings of the JC model was then developed and evaluated. The model is derived from the rate equations that represent deformation mechanisms active during moderate and high rate loading. Another model that accounts for the influence of void formation on yield and flow behavior of a ductile metal (the Gurson model) was also evaluated. The characteristics and predictive capabilities of these models are reviewed.

  9. Devolatilization of bituminous coals at medium to high heating rates

    SciTech Connect

    Jamaluddin, A.S.; Truelove, J.S.; Wall, T.F.

    1986-03-01

    A high-volatile and a medium-volatile bituminous coal, size-graded between 53 and 63 µm, were devolatilized in a laboratory-scale laminar-flow furnace at 800-1400 °C at heating rates of 1 × 10⁴ to 5 × 10⁴ °C/s. The weight loss was determined by both gravimetric and ash-tracer techniques. The experimental results were well correlated by a two-competing-reactions devolatilization model. The model was also evaluated against data from captive-sample experiments at moderate heating rates of 250-1000 °C/s. Heating rate was found to affect substantially the devolatilization weight loss.

  10. Sample size and sampling errors as the source of dispersion in chemical analyses. [for high-Ti lunar basalt

    NASA Technical Reports Server (NTRS)

    Clanton, U. S.; Fletcher, C. R.

    1976-01-01

    The paper describes a Monte Carlo model for simulation of two-dimensional representations of thin sections of some of the more common igneous rock textures. These representations are extrapolated to three dimensions to develop a volume of 'rock'. The model (here applied to a medium-grained high-Ti basalt) can be used to determine a statistically significant sample for a lunar rock or to predict the probable errors in the oxide contents that can occur during the analysis of a sample that is not representative of the parent rock.

  11. The incidence of diagnostic error in medicine.

    PubMed

    Graber, Mark L

    2013-10-01

    A wide variety of research studies suggest that breakdowns in the diagnostic process result in a staggering toll of harm and patient deaths. These include autopsy studies, case reviews, surveys of patients and physicians, voluntary reporting systems, studies using standardised patients, second reviews, diagnostic testing audits and closed claims reviews. Although these different approaches provide important information and unique insights regarding diagnostic errors, each has limitations and none is well suited to establishing the incidence of diagnostic error in actual practice, or the aggregate rate of error and harm. We argue that being able to measure the incidence of diagnostic error is essential to enable research studies on diagnostic error, and to initiate quality improvement projects aimed at reducing the risk of error and harm. Three approaches appear most promising in this regard: (1) using 'trigger tools' to identify from electronic health records cases at high risk for diagnostic error; (2) using standardised patients (secret shoppers) to study the rate of error in practice; (3) encouraging both patients and physicians to voluntarily report errors they encounter, and facilitating this process. PMID:23771902

  12. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids, including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heat of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  13. Dynamic high-temperature characterization of an iridium alloy in compression at high strain rates.

    SciTech Connect

    Song, Bo; Nelson, Kevin; Lipinski, Ronald J.; Bignell, John L.; Ulrich, G. B.; George, E. P.

    2014-06-01

    Iridium alloys have superior strength and ductility at elevated temperatures, making them useful as structural materials for certain high-temperature applications. However, experimental data on their high-temperature high-strain-rate performance are needed for understanding high-speed impacts in severe elevated-temperature environments. Kolsky bars (also called split Hopkinson bars) have been extensively employed for high-strain-rate characterization of materials at room temperature, but it has been challenging to adapt them for the measurement of dynamic properties at high temperatures. Current high-temperature Kolsky compression bar techniques are not capable of obtaining satisfactory high-temperature high-strain-rate stress-strain response of the thin iridium specimens investigated in this study. We analyzed the difficulties encountered in high-temperature Kolsky compression bar testing of thin iridium alloy specimens. Appropriate modifications were made to the current high-temperature Kolsky compression bar technique to obtain reliable compressive stress-strain response of an iridium alloy at high strain rates (300-10,000 s⁻¹) and temperatures (750 °C and 1030 °C). Uncertainties in such high-temperature high-strain-rate experiments on thin iridium specimens were also analyzed. The compressive stress-strain response of the iridium alloy showed significant sensitivity to strain rate and temperature.
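
    As context for the technique, the classical one-wave Kolsky-bar data reduction can be sketched as follows; this is a generic formulation, not the authors' modified high-temperature procedure, and all symbols are generic stand-ins:

      import numpy as np

      def kolsky_specimen_response(eps_reflected, eps_transmitted, dt,
                                   c0_bar, E_bar, A_bar, L_spec, A_spec):
          """Strain rate from the reflected pulse, stress from the transmitted one:
          strain_rate(t) = -2*c0*eps_r(t)/L_s,  sigma(t) = E*(A_b/A_s)*eps_t(t)."""
          strain_rate = -2.0 * c0_bar * eps_reflected / L_spec
          strain = np.cumsum(strain_rate) * dt     # integrate to get strain history
          stress = E_bar * (A_bar / A_spec) * eps_transmitted
          return strain, strain_rate, stress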

  14. High rates of evolution preceded the origin of birds.

    PubMed

    Puttick, Mark N; Thomas, Gavin H; Benton, Michael J

    2014-05-01

    The origin of birds (Aves) is one of the great evolutionary transitions. Fossils show that many unique morphological features of modern birds, such as feathers, reduction in body size, and the semilunate carpal, long preceded the origin of clade Aves, but some may be unique to Aves, such as relative elongation of the forelimb. We study the evolution of body size and forelimb length across the phylogeny of coelurosaurian theropods and Mesozoic Aves. Using recently developed phylogenetic comparative methods, we find an increase in rates of body size and body size dependent forelimb evolution leading to small body size relative to forelimb length in Paraves, the wider clade comprising Aves and Deinonychosauria. The high evolutionary rates arose primarily from a reduction in body size, as there were no increased rates of forelimb evolution. In line with a recent study, we find evidence that Aves appear to have a unique relationship between body size and forelimb dimensions. Traits associated with Aves evolved before their origin, at high rates, and support the notion that numerous lineages of paravians were experimenting with different modes of flight through the Late Jurassic and Early Cretaceous. PMID:24471891

  15. High rates of organic carbon burial in fjord sediments globally

    NASA Astrophysics Data System (ADS)

    Smith, Richard W.; Bianchi, Thomas S.; Allison, Mead; Savage, Candida; Galy, Valier

    2015-06-01

    The deposition and long-term burial of organic carbon in marine sediments has played a key role in controlling atmospheric O2 and CO2 concentrations over the past 500 million years. Marine carbon burial represents the dominant natural mechanism of long-term organic carbon sequestration. Fjords--deep, glacially carved estuaries at high latitudes--have been hypothesized to be hotspots of organic carbon burial, because they receive high rates of organic material fluxes from the watershed. Here we compile organic carbon concentrations from 573 fjord surface sediment samples and 124 sediment cores from nearly all fjord systems globally. We use sediment organic carbon content and sediment delivery rates to calculate rates of organic carbon burial in fjord systems across the globe. We estimate that about 18 Mt of organic carbon are buried in fjord sediments each year, equivalent to 11% of annual marine carbon burial globally. Per unit area, fjord organic carbon burial rates are one hundred times as large as the global ocean average, and fjord sediments contain twice as much organic carbon as biogenous sediments underlying the upwelling regions of the ocean. We conclude that fjords may play an important role in climate regulation on glacial-interglacial timescales.
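
    The burial-rate bookkeeping described above reduces to a product of sediment delivery and organic carbon content; the numbers in the snippet are hypothetical stand-ins chosen only to reproduce the reported totals:

      def oc_burial_mt_per_yr(sediment_delivery_mt_yr, oc_fraction):
          # burial flux = sediment mass delivered per year x OC mass fraction
          return sediment_delivery_mt_yr * oc_fraction

      # e.g. 900 Mt/yr of fjord sediment at 2% OC gives the reported 18 Mt OC/yr;
      # if that is 11% of marine burial, the implied global total is ~164 Mt/yr.
      fjord_total = oc_burial_mt_per_yr(900.0, 0.02)
      print(f"fjord burial ~ {fjord_total:.0f} Mt OC/yr; "
            f"implied global ~ {fjord_total / 0.11:.0f} Mt OC/yr")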

  16. Investigation of high-rate lithium-thionyl chloride cells

    NASA Astrophysics Data System (ADS)

    Hayes, Catherine A.; Gust, Steven; Farrington, Michael D.; Lockwood, Judith A.; Donaldson, George J.

    Chemical analysis of a commercially produced high-rate D-size lithium-thionyl chloride cell was carried out as a function of rate of discharge (1 ohm and 5 ohms), depth of discharge, and temperature (25 C and -40 C), using specially developed methods for identifying suspected minor cell products or impurities which may affect cell performance. These methods include a product-retrieval system which involves solvent extraction to enhance the recovery of suspected semivolatile minor chemicals, and methods of quantitative GC analysis of volatile and semivolatile products. The nonvolatile products were analyzed by wet chemical methods. The results of the analyses indicate that the predominant discharge reaction in this cell is 4Li + 2SOCl₂ → 4LiCl + S + SO₂, with SO₂ formation decreasing towards the end of cell life (7 to 12 Ah). The rate of discharge had no effect on the product distribution. Upon discharge of the high-rate cell at -40 C, one cell exploded, and all others exhibited overheating and rapid internal pressure rise when allowed to warm up to room temperature.

  17. High Rate Proton Irradiation of 15mm Muon Drifttubes

    NASA Astrophysics Data System (ADS)

    Zibell, A.; Biebel, O.; Hertenberger, R.; Ruschke, A.; Schmitt, Ch.; Kroha, H.; Bittner, B.; Schwegler, P.; Dubbert, J.; Ott, S.

    2012-08-01

    Future LHC luminosity upgrades will significantly increase the amount of background hits from photons, neutrons and protons in the detectors of the ATLAS muon spectrometer. At the proposed LHC peak luminosity of 5 × 10³⁴ cm⁻² s⁻¹, background hit rates of more than 10 kHz/cm² are expected in the innermost forward region, leading to a loss of performance of the current tracking chambers. Based on the ATLAS Monitored Drift Tube chambers, a new high-rate-capable drift tube detector using tubes with a reduced diameter of 15 mm was developed. To test the response to highly ionizing particles, a prototype chamber of 46 drift tubes of 15 mm diameter was irradiated with a 20 MeV proton beam at the tandem accelerator at the Maier-Leibnitz Laboratory, Munich. Three tubes in a planar layer were irradiated while all other tubes were used for reconstruction of cosmic muon tracks through irradiated and non-irradiated parts of the chamber. To determine the rate capability of the 15 mm drift tubes we investigated the effect of the proton hit rate on the pulse height, efficiency and spatial resolution of the cosmic muon signals.

  18. High removal rate laser-based coating removal system

    SciTech Connect

    Matthews, D.L.; Celliers, P.M.; Hackel, L.; Da Silva, L.B.; Dane, C.B.; Mrowka, S.

    1999-11-16

    A compact laser system is disclosed that removes surface coatings (such as paint, dirt, etc.) at a removal rate as high as 1,000 ft²/hr or more without damaging the surface. A high repetition rate laser with multiple amplification passes propagating through at least one optical amplifier is used, along with a delivery system consisting of a telescoping and articulating tube which also contains an evacuation system for simultaneously sweeping up the debris produced in the process. The amplified beam can be converted to an output beam by passively switching the polarization of at least one amplified beam. The system also has a personal safety system which protects against accidental exposures.

  19. Failure Rate Data Analysis for High Technology Components

    SciTech Connect

    L. C. Cadwallader

    2007-07-01

    Understanding component reliability helps designers create more robust future designs and supports efficient and cost-effective operations of existing machines. The accelerator community can leverage the commonality of its high-vacuum and high-power systems with those of the magnetic fusion community to gain access to a larger database of reliability data. Reliability studies performed under the auspices of the International Energy Agency are the result of an international working group, which has generated a component failure rate database for fusion experiment components. The initial database work harvested published data and now analyzes operating experience data. This paper discusses the usefulness of reliability data, describes the failure rate data collection and analysis effort, discusses reliability for components with scarce data, and points out some of the intersections between magnetic fusion experiments and accelerators.

  20. Deconvolution of evoked responses obtained at high stimulus rates

    NASA Astrophysics Data System (ADS)

    Delgado, Rafael E.; Ozdamar, Ozcan

    2004-03-01

    Continuous loop averaging deconvolution (CLAD) is a new general mathematical theory and method developed to deconvolve overlapping auditory evoked responses obtained at high stimulation rates. Using CLAD, arbitrary stimulus sequences are generated and averaged responses deconvolved. Until now, only a few special stimulus series such as maximum length sequences (MLS) and Legendre sequences (LGS) were capable of performing this task. A CLAD computer algorithm is developed and implemented in an evoked potential averaging system. Computer simulations are used to verify the theory and methodology. Auditory brainstem responses (ABR) and middle latency responses (MLR) are acquired from subjects with normal hearing at high stimulation rates to validate and show the feasibility of the CLAD technique.
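
    The core idea behind CLAD can be sketched compactly: the averaged sweep is the circular convolution of the stimulus sequence with the overlapping response, so the response can be recovered by division in the frequency domain. The snippet below is our own illustration of that idea, not the paper's algorithm; in practice the stimulus sequence must be chosen so its spectrum has no near-zero bins.

      import numpy as np

      def clad_deconvolve(v, s, eps=1e-12):
          """Recover response h from averaged sweep v and stimulus sequence s,
          assuming v = (s circularly convolved with h)."""
          S = np.fft.rfft(s)
          if np.any(np.abs(S) < eps):
              raise ValueError("stimulus spectrum has near-zero bins")
          return np.fft.irfft(np.fft.rfft(v) / S, n=len(v))

      # Synthetic check: convolve a known response with a sequence, recover it.
      rng = np.random.default_rng(0)
      s = (rng.random(512) < 0.05).astype(float)   # sparse jittered click train
      h = np.exp(-np.arange(512) / 20.0)           # toy overlapping response
      v = np.fft.irfft(np.fft.rfft(h) * np.fft.rfft(s), n=512)
      print(np.allclose(clad_deconvolve(v, s), h, atol=1e-8))   # True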

  1. The use of high-resolution atmospheric simulations over mountainous terrain for deriving error correction functions of satellite precipitation products

    NASA Astrophysics Data System (ADS)

    Bartsotas, Nikolaos S.; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Kallos, George

    2015-04-01

    Mountainous regions account for a significant part of the Earth's surface. Such areas are persistently affected by heavy precipitation episodes, which induce flash floods and landslides. The lack of adequate in-situ observations has made remote sensing rainfall estimates central to the analysis of these events, as in many mountainous regions worldwide they serve as the only available data source. However, well-known issues of remote sensing techniques over mountainous areas, such as the strong underestimation of precipitation associated with low-level orographic enhancement, limit the way these estimates can accommodate operational needs. Even locations that fall within the range of weather radars suffer from strong biases in precipitation estimates due to terrain blockage and vertical rainfall profile issues. A novel approach towards the reduction of error in quantitative precipitation estimates lies in the utilization of high-resolution numerical simulations to derive error correction functions for corresponding satellite precipitation data. The correction functions examined consist of (1) mean field bias adjustment and (2) pdf matching, two procedures that are simple and have been widely used in gauge-based adjustment techniques; a generic sketch of both follows this abstract. For the needs of this study, more than 15 selected storms over the mountainous Upper Adige region of Northern Italy were simulated at 1-km resolution with a state-of-the-art atmospheric model (RAMS/ICLAMS), benefiting from its explicit cloud microphysical scheme, prognostic treatment of natural pollutants such as dust and sea salt, and the detailed SRTM90 topography implemented in the model. The proposed error correction approach is applied to three quasi-global and widely used satellite precipitation datasets (CMORPH, TRMM 3B42 V7 and PERSIANN), and the evaluation of the error model is based on independent in-situ precipitation measurements from a dense rain gauge network (1 gauge / 70 km2
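
    The two correction functions named above are simple enough to sketch generically; the implementations below are our own illustrations on paired satellite/reference rain-rate samples, not the study's code:

      import numpy as np

      def mean_field_bias_adjust(sat, ref):
          """Scale the satellite field by the ratio of mean reference rain rate
          to mean satellite rain rate over the calibration sample."""
          return sat * (ref.mean() / sat.mean())

      def pdf_match(sat, sat_cal, ref_cal):
          """Quantile (pdf) matching: map each satellite value to the reference
          value at the same empirical non-exceedance probability."""
          q = np.searchsorted(np.sort(sat_cal), sat) / len(sat_cal)
          return np.quantile(ref_cal, np.clip(q, 0.0, 1.0))

      sat = np.array([0.0, 2.0, 5.0, 9.0])    # satellite rain rates (mm/h)
      ref = np.array([0.0, 3.0, 7.0, 12.0])   # reference rain rates (mm/h)
      print(mean_field_bias_adjust(sat, ref))          # scaled by 5.5/4.0
      print(pdf_match(sat, sat_cal=sat, ref_cal=ref))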

  2. Pre-Compensation for Continuous-Path Running Trajectory Error in High-Speed Machining of Parts with Varied Curvature Features

    NASA Astrophysics Data System (ADS)

    Jia, Zhenyuan; Song, Dening; Ma, Jianwei; Gao, Yuanyuan

    2016-04-01

    Parts with varied curvature features play increasingly critical roles in engineering and are often machined in high-speed continuous-path running mode to ensure machining efficiency. However, the continuous-path running trajectory error is significant during high-feed-speed machining, which seriously restricts the machining precision for such parts. In order to reduce the continuous-path running trajectory error without sacrificing machining efficiency, a pre-compensation method for the trajectory error is proposed. Based on an analysis of the formation mechanism of the continuous-path running trajectory error, this error is estimated in advance by approximating the desired toolpath with spline curves. An iterative error pre-compensation method is then presented: by machining with the toolpath regenerated after pre-compensation instead of the uncompensated toolpath, the continuous-path running trajectory error can be effectively decreased without reducing the feed speed. To demonstrate the feasibility of the proposed method, a heart-curve toolpath that possesses varied curvature features is employed. Experimental results indicate that, compared with the uncompensated processing trajectory, the maximum and average machining errors for the pre-compensated processing trajectory are reduced by 67.19% and 82.30%, respectively. This provides an easy-to-implement solution for high-efficiency and high-precision machining of parts with varied curvature features.
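
    The iterative pre-compensation idea lends itself to a short sketch: offset the commanded path against the predicted trajectory error and repeat. Here `predict_actual` is a stand-in for the paper's spline-based error estimate, and the low-pass "machine" model in the demo is purely an assumption for illustration:

      def precompensate(desired, predict_actual, n_iter=25):
          """Repeat c <- c - (predict_actual(c) - desired); converges when the
          prediction error shrinks under iteration (true for the toy model)."""
          commanded = list(desired)
          for _ in range(n_iter):
              residual = [a - d for a, d in zip(predict_actual(commanded), desired)]
              commanded = [c - r for c, r in zip(commanded, residual)]
          return commanded

      # Toy demo: the "machine" lags the command like a simple low-pass filter.
      smooth = lambda p: [p[0]] + [0.5 * (p[i - 1] + p[i]) for i in range(1, len(p))]
      desired = [0.0, 1.0, 0.0, -1.0, 0.0]
      traced = smooth(precompensate(desired, smooth))
      print([round(v, 2) for v in traced])   # ~ desired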

  3. Dynamic Recrystallization During High-Strain-Rate Tension of Copper

    NASA Astrophysics Data System (ADS)

    Mortazavi, Nooshin; Bonora, Nicola; Ruggiero, Andrew; Hörnqvist Colliander, Magnus

    2016-06-01

    Discontinuous dynamic recrystallization can occur during dynamic tensile extrusion of copper, which is subjected to uniaxial tensile strains of ~5 and strain rates up to 10⁶ s⁻¹ in the extruded section. Through high-resolution transmission Kikuchi diffraction, we show that nucleation occurs through subgrain rotation and grain boundary bulging at boundaries between <001> and <111> oriented grains. The observed nuclei consist of subgrains with a size of approximately 200 to 400 nm.

  4. Electrochemical cell with high discharge/charge rate capability

    DOEpatents

    Redey, Laszlo

    1988-01-01

    A fully charged positive electrode composition for an electrochemical cell includes FeS₂ and NiS₂ in about equal molar amounts along with about 2-20 mole percent of the reaction product Li₂S. Through selection of appropriate electrolyte compositions, high power output or low operating temperatures can be obtained. The cell includes a substantially constant electrode impedance through most of its charge and discharge range. Exceptionally high discharge rates and overcharge protection are obtainable through use of the inventive electrode composition.

  5. Semi-solid electrodes having high rate capability

    DOEpatents

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2016-06-07

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode and a semi-solid cathode. The semi-solid cathode includes a suspension of about 35% to about 75% by volume of an active material and about 0.5% to about 8% by volume of a conductive material in a non-aqueous liquid electrolyte. An ion-permeable membrane is disposed between the anode and the semi-solid cathode. The semi-solid cathode has a thickness of about 250 µm to about 2,000 µm, and the electrochemical cell has an area specific capacity of at least about 7 mAh/cm² at a C-rate of C/4. In some embodiments, the semi-solid cathode slurry has a mixing index of at least about 0.9.

  6. Application of thermal spray coatings using high deposition rate equipment

    SciTech Connect

    Novak, H.L.

    1995-12-01

    Reusable launch vehicles located by the ocean are subject to harsh seacoast environments before launch and immersion after splashdown at sea and towback to the refurbishment facility. High strength aluminum and non-corrosion resistant steel alloys are prone to general corrosion and pitting due to galvanic couples and protective coating damage. Additional protection of structural materials with thermally sprayed pure aluminum coatings was evaluated for plasma, arc spray and high velocity oxy-fuel (HVOF) processes. Comparisons are made for corrosion rates of various coated aluminum alloy and steel substrates when exposed to ASTM B-117 neutral salt fog testing and also to beach exposure tests performed at Kennedy Space Center, Florida. Recent development work involved the use of high deposition rate thermal arc-spray equipment. The use of an inverter power supply reduced powdering and enhanced operator visibility. Deposition rates of 45.36-68.04 kilograms/hour are obtainable using 4.76-6.35 millimeter diameter wire electrodes.

  7. Comparison on the sensitivity of fiber optic SONET OC-48 PIN-TIA receivers measured by using synchronous modulation intermixing technique and bit-error-rate tester

    NASA Astrophysics Data System (ADS)

    Lin, Gong-Ru; Liao, Yu-Sheng

    2004-04-01

    The sensitivity of SONET p-i-n photodiode receivers with transimpedance amplifier (PIN-TIA) at data rates from OC-3 to OC-48, measured using a standard bit-error-rate tester (BERT) and a novel synchronous-modulation inter-mixing (SMIM) technique, is compared. A threshold inter-mixed voltage of below 15.8 mV obtained by the SMIM method, corresponding to a PIN-TIA receiver sensitivity beyond -32 dBm determined by BERT, is reported for SONET OC-48 PIN-TIA receivers with a required BER better than 10⁻¹⁰. The analysis indicates that to improve the PIN-TIA receiver sensitivity from -31 dBm to -33 dBm, the inter-mixed voltage has to be increased from 12.5 mV to 20.4 mV. Compared to the BERT, SMIM is a relatively simple and low-cost technique for on-line mass-production diagnostics, for measuring the sensitivity and evaluating the BER performance of PIN-TIA receivers.

  8. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.

  9. Error coding simulations

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1993-11-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
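
    Since both records above mention the 16-bit CRC recommended by CCSDS for error detection, a compact bit-by-bit implementation may be useful. The parameterization below (CCITT polynomial 0x1021, all-ones preset, no reflection) is our assumption for the CCSDS variant; the recommendation itself is the normative reference:

      def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
          """CRC-16 with generator x^16 + x^12 + x^5 + 1 (0x1021), MSB first."""
          for byte in data:
              crc ^= byte << 8                 # fold next byte into the register
              for _ in range(8):
                  if crc & 0x8000:
                      crc = ((crc << 1) ^ 0x1021) & 0xFFFF
                  else:
                      crc = (crc << 1) & 0xFFFF
          return crc

      print(hex(crc16_ccitt(b"123456789")))   # 0x29b1 for this parameterization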

  10. An error control system with multiple-stage forward error corrections

    NASA Technical Reports Server (NTRS)

    Takata, Toyoo; Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1990-01-01

    A robust error-control coding system is presented. This system is a cascaded FEC (forward error control) scheme supported by parity retransmissions for further error correction in the erroneous data words. The error performance and throughput efficiency of the system are analyzed. Two specific examples of the error-control system are studied. The first example does not use an inner code, and the outer code, which is not interleaved, is a shortened code of the NASA standard RS code over GF(2⁸). The second example, as proposed for NASA, uses the same shortened RS code as the base outer code C2, except that it is interleaved to a depth of 2. It is shown that both examples provide high reliability and throughput efficiency even for high channel bit-error rates in the range of 0.01.
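
    The depth-2 interleaving mentioned above has a simple mechanism worth illustrating: symbols from two RS codewords are transmitted alternately, so a burst of channel errors is split between the two codewords and each decoder sees only half of it. A toy sketch (ours, not the paper's implementation):

      def interleave(cw_a, cw_b):
          """Alternate symbols of two codewords (interleaving depth 2)."""
          return [s for pair in zip(cw_a, cw_b) for s in pair]

      def deinterleave(stream):
          return stream[0::2], stream[1::2]

      tx = interleave(list("AAAAAA"), list("BBBBBB"))
      print("".join(tx))                     # ABABABABABAB
      a, b = deinterleave(tx)
      print("".join(a), "".join(b))          # AAAAAA BBBBBB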

  11. Calibration of high flow rate thoracic-size selective samplers.

    PubMed

    Lee, Taekhee; Thorpe, Andrew; Cauda, Emanuele; Harper, Martin

    2016-01-01

    High flow rate respirable size selective samplers, the GK4.126 and FSP10 cyclones, were calibrated for thoracic-size selective sampling in two different laboratories. The National Institute for Occupational Safety and Health (NIOSH) utilized monodisperse ammonium fluorescein particles and scanning electron microscopy to determine the aerodynamic particle size of the monodisperse aerosol; fluorescein intensity was measured to determine the sampling efficiencies of the cyclones. The Health and Safety Laboratory (HSL) utilized a real-time particle sizing instrument (Aerodynamic Particle Sizer) with polydisperse glass sphere particles, and particle size distributions between the cyclone and a reference sampler were compared. Sampling efficiencies of the cyclones were compared to the thoracic convention defined by the American Conference of Governmental Industrial Hygienists (ACGIH)/Comité Européen de Normalisation (CEN)/International Standards Organization (ISO). The GK4.126 cyclone showed minimum bias compared to the thoracic convention at flow rates of 3.5 l min⁻¹ (NIOSH) and 2.7-3.3 l min⁻¹ (HSL); the difference may stem from the use of different test systems. In order to collect the most dust and reduce the limit of detection, HSL suggested using the upper end of its range (3.3 l min⁻¹). A flow rate of 3.4 l min⁻¹ would be a reasonable compromise, pending confirmation in other laboratories. The FSP10 cyclone showed minimum bias at a flow rate of 4.0 l min⁻¹ in the NIOSH laboratory test. The high flow rate thoracic-size selective samplers might be used for higher sample mass collection in order to meet analytical limits of quantification. PMID:26891196
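
    The thoracic convention used as the reference curve in both laboratories is a closed-form function of aerodynamic diameter; the sketch below encodes our reading of the ISO 7708/ACGIH definition (inhalable fraction times one minus a cumulative lognormal with median 11.64 µm and GSD 1.5), and those constants should be checked against the standard:

      import math

      def inhalable_fraction(d_um):
          return 0.5 * (1.0 + math.exp(-0.06 * d_um))

      def thoracic_fraction(d_um, median_um=11.64, gsd=1.5):
          x = math.log(d_um / median_um) / math.log(gsd)
          F = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # cumulative lognormal
          return inhalable_fraction(d_um) * (1.0 - F)

      for d in (5.0, 10.0, 15.0):   # ~0.85, ~0.50, ~0.19 by this formulation
          print(f"{d:4.1f} um -> thoracic fraction {thoracic_fraction(d):.2f}")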

  12. Calibration of high flow rate thoracic-size selective samplers

    PubMed Central

    Lee, Taekhee; Thorpe, Andrew; Cauda, Emanuele; Harper, Martin

    2016-01-01

    High flow rate respirable size selective samplers, the GK4.126 and FSP10 cyclones, were calibrated for thoracic-size selective sampling in two different laboratories. The National Institute for Occupational Safety and Health (NIOSH) utilized monodisperse ammonium fluorescein particles and scanning electron microscopy to determine the aerodynamic particle size of the monodisperse aerosol; fluorescein intensity was measured to determine the sampling efficiencies of the cyclones. The Health and Safety Laboratory (HSL) utilized a real-time particle sizing instrument (Aerodynamic Particle Sizer) with polydisperse glass sphere particles, and particle size distributions between the cyclone and a reference sampler were compared. Sampling efficiencies of the cyclones were compared to the thoracic convention defined by the American Conference of Governmental Industrial Hygienists (ACGIH)/Comité Européen de Normalisation (CEN)/International Standards Organization (ISO). The GK4.126 cyclone showed minimum bias compared to the thoracic convention at flow rates of 3.5 l min⁻¹ (NIOSH) and 2.7-3.3 l min⁻¹ (HSL); the difference may stem from the use of different test systems. In order to collect the most dust and reduce the limit of detection, HSL suggested using the upper end of its range (3.3 l min⁻¹). A flow rate of 3.4 l min⁻¹ would be a reasonable compromise, pending confirmation in other laboratories. The FSP10 cyclone showed minimum bias at a flow rate of 4.0 l min⁻¹ in the NIOSH laboratory test. The high flow rate thoracic-size selective samplers might be used for higher sample mass collection in order to meet analytical limits of quantification. PMID:26891196

  13. Mechanical Solder Characterisation Under High Strain Rate Conditions

    NASA Astrophysics Data System (ADS)

    Meier, Karsten; Roellig, Mike; Wiese, Steffen; Wolter, Klaus-Juergen

    2010-11-01

    Using a setup for high-strain-rate tensile experiments, the mechanical behavior of two lead-free tin-based solders is investigated. The first alloy is SnAg1.3Cu0.5Ni; the second alloy has a higher silver content but no addition of Ni. Solder joints are the main electrical, thermal and mechanical interconnection technology on the first and second interconnection levels. With the recent rise of 3D packaging technologies, many novel interconnection concepts of innovative or visionary nature have been proposed: copper pillar, stud bump, intermetallic (SLID) and even spring-like joints are presented in a number of projects. However, soldering will remain one of the important interconnect technologies. Knowing the mechanical properties of solder joints is important for any reliability assessment, especially when it comes to vibration and mechanical shock associated with mobile applications. Taking into account the ongoing miniaturization and the linked changes in solder joint microstructure and mechanical behavior, the need for experimental work on this issue is not yet satisfied. The tests are accomplished using miniature bulk specimens to match the microstructure of real solder joints as closely as possible. The dogbone-shaped bulk specimens have a gauge-section diameter of 1 mm, which is close to that of BGA solder joints. Experiments were done in the strain rate range from 20 s⁻¹ to 600 s⁻¹. Solder strengthening with increasing strain rate has been observed for both SAC solder alloys: the yield stress increases by about 100% over the investigated strain-rate range, while the yield levels of the two alloys differ strongly. A high-speed camera system was used to assist the evaluation of the stress and strain data. Besides the stress and strain data extracted from the experiment, the ultimate fracture strain is determined and the fracture surfaces are evaluated using SEM techniques, considering rate dependency.

  14. Method for generating high-energy and high repetition rate laser pulses from CW amplifiers

    DOEpatents

    Zhang, Shukui

    2013-06-18

    A method for obtaining high-energy, high repetition rate laser pulses simultaneously using continuous wave (CW) amplifiers is described. The method provides for generating microjoule-level energy in picosecond laser pulses at megahertz repetition rates.

  15. Characterization of an infrared detector for high frame rate thermography

    NASA Astrophysics Data System (ADS)

    Fruehmann, R. K.; Crump, D. A.; Dulieu-Barton, J. M.

    2013-10-01

    The use of a commercially available photodetector based infrared thermography system, operating in the 2-5 µm range, for high frame rate imaging of temperature evolutions in solid materials is investigated. Infrared photodetectors provide a very fast and precise means of obtaining temperature evolutions over a wide range of science and engineering applications. A typical indium antimonide detector will have a thermal resolution of around 4 mK for room temperature measurements, with a noise threshold around 15 to 20 mK. However the precision of the measurement is dependent on the integration time (akin to exposure time in conventional photography). For temperature evolutions that occur at a moderate rate the integration time can be relatively long, enabling a large signal to noise ratio. A matter of increasing importance in engineering is the behaviour of materials at high strain rates, such as those experienced in impact, shock and ballistic loading. The rapid strain evolution in the material is usually accompanied by a temperature change. The temperature change will affect the material constitutive properties and hence it is important to capture both the temperature and the strain evolutions to provide a proper constitutive law for the material behaviour. The present paper concentrates on the capture of the temperature evolutions, which occur at such rates that rule out the use of contact sensors such as thermocouples and electrical resistance thermometers, as their response times are too slow. Furthermore it is desirable to have an indication of the temperature distribution over a test specimen, hence the full-field approach of IRT is investigated. The paper explores the many hitherto unaddressed challenges of IRT when employed at high speed. Firstly the images must be captured at high speeds, which means reduced integration times and hence a reduction in the signal to noise ratio. Furthermore, to achieve the high image capture rates the detector array must be

  16. High Dose-Rate Versus Low Dose-Rate Brachytherapy for Lip Cancer

    SciTech Connect

    Ghadjar, Pirus; Bojaxhiu, Beat; Simcock, Mathew; Terribilini, Dario; Isaak, Bernhard; Gut, Philipp; Wolfensberger, Patrick; Broemme, Jens O.; Geretschlaeger, Andreas; Behrensmeier, Frank; Pica, Alessia; Aebersold, Daniel M.

    2012-07-15

    Purpose: To analyze the outcome after low-dose-rate (LDR) or high-dose-rate (HDR) brachytherapy for lip cancer. Methods and Materials: One hundred and three patients with newly diagnosed squamous cell carcinoma of the lip were treated between March 1985 and June 2009 either by HDR (n = 33) or LDR brachytherapy (n = 70). Sixty-eight patients received brachytherapy alone, and 35 received tumor excision followed by brachytherapy because of positive resection margins. Acute and late toxicity was assessed according to the Common Terminology Criteria for Adverse Events 3.0. Results: Median follow-up was 3.1 years (range, 0.3-23 years). Clinical and pathological variables did not differ significantly between groups. At 5 years, local recurrence-free survival, regional recurrence-free survival, and overall survival rates were 93%, 90%, and 77%. There was no significant difference for these endpoints when HDR was compared with LDR brachytherapy. Forty-two of 103 patients (41%) experienced acute Grade 2 and 57 of 103 patients (55%) experienced acute Grade 3 toxicity. Late Grade 1 toxicity was experienced by 34 of 103 patients (33%), and 5 of 103 patients (5%) experienced late Grade 2 toxicity; no Grade 3 late toxicity was observed. Acute and late toxicity rates were not significantly different between HDR and LDR brachytherapy. Conclusions: As treatment for lip cancer, HDR and LDR brachytherapy have comparable locoregional control and acute and late toxicity rates. HDR brachytherapy for lip cancer seems to be an effective treatment with acceptable toxicity.

  17. Scale dependence of rock friction at high work rate.

    PubMed

    Yamashita, Futoshi; Fukuyama, Eiichi; Mizoguchi, Kazuo; Takizawa, Shigeru; Xu, Shiqing; Kawakata, Hironori

    2015-12-10

    Determination of the frictional properties of rocks is crucial for an understanding of earthquake mechanics, because most earthquakes are caused by frictional sliding along faults. Prior studies using rotary shear apparatus revealed a marked decrease in frictional strength, which can cause a large stress drop and strong shaking, with increasing slip rate and increasing work rate. (The mechanical work rate per unit area equals the product of the shear stress and the slip rate.) However, those important findings were obtained in experiments using rock specimens with dimensions of only several centimetres, which are much smaller than the dimensions of a natural fault (of the order of 1,000 metres). Here we use a large-scale biaxial friction apparatus with metre-sized rock specimens to investigate scale-dependent rock friction. The experiments show that rock friction in metre-sized rock specimens starts to decrease at a work rate that is one order of magnitude smaller than that in centimetre-sized rock specimens. Mechanical, visual and material observations suggest that slip-evolved stress heterogeneity on the fault accounts for the difference. On the basis of these observations, we propose that stress-concentrated areas exist in which frictional slip produces more wear materials (gouge) than in areas outside, resulting in further stress concentrations at these areas. Shear stress on the fault is primarily sustained by stress-concentrated areas that undergo a high work rate, so those areas should weaken rapidly and cause the macroscopic frictional strength to decrease abruptly. To verify this idea, we conducted numerical simulations assuming that local friction follows the frictional properties observed on centimetre-sized rock specimens. The simulations reproduced the macroscopic frictional properties observed on the metre-sized rock specimens. Given that localized stress concentrations commonly occur naturally, our results suggest that a natural fault may lose its

  18. Scale dependence of rock friction at high work rate

    NASA Astrophysics Data System (ADS)

    Yamashita, Futoshi; Fukuyama, Eiichi; Mizoguchi, Kazuo; Takizawa, Shigeru; Xu, Shiqing; Kawakata, Hironori

    2015-12-01

    Determination of the frictional properties of rocks is crucial for an understanding of earthquake mechanics, because most earthquakes are caused by frictional sliding along faults. Prior studies using rotary shear apparatus revealed a marked decrease in frictional strength, which can cause a large stress drop and strong shaking, with increasing slip rate and increasing work rate. (The mechanical work rate per unit area equals the product of the shear stress and the slip rate.) However, those important findings were obtained in experiments using rock specimens with dimensions of only several centimetres, which are much smaller than the dimensions of a natural fault (of the order of 1,000 metres). Here we use a large-scale biaxial friction apparatus with metre-sized rock specimens to investigate scale-dependent rock friction. The experiments show that rock friction in metre-sized rock specimens starts to decrease at a work rate that is one order of magnitude smaller than that in centimetre-sized rock specimens. Mechanical, visual and material observations suggest that slip-evolved stress heterogeneity on the fault accounts for the difference. On the basis of these observations, we propose that stress-concentrated areas exist in which frictional slip produces more wear materials (gouge) than in areas outside, resulting in further stress concentrations at these areas. Shear stress on the fault is primarily sustained by stress-concentrated areas that undergo a high work rate, so those areas should weaken rapidly and cause the macroscopic frictional strength to decrease abruptly. To verify this idea, we conducted numerical simulations assuming that local friction follows the frictional properties observed on centimetre-sized rock specimens. The simulations reproduced the macroscopic frictional properties observed on the metre-sized rock specimens. Given that localized stress concentrations commonly occur naturally, our results suggest that a natural fault may lose its

  19. Error Estimation in an Optimal Interpolation Scheme for High Spatial and Temporal Resolution SST Analyses

    NASA Technical Reports Server (NTRS)

    Rigney, Matt; Jedlovec, Gary; LaFontaine, Frank; Shafer, Jaclyn

    2010-01-01

    Heat and moisture exchange between the ocean surface and the atmosphere plays an integral role in short-term, regional NWP. Current SST products lack the spatial and temporal resolution to accurately capture small-scale features that affect heat and moisture fluxes. NASA satellite data are used to produce a high spatial and temporal resolution SST analysis using an optimal interpolation (OI) technique.
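
    The OI analysis step referred to above has a standard closed form, sketched generically here (small stand-in matrices, not the actual SST system):

      import numpy as np

      def oi_analysis(xb, y, H, B, R):
          """x_a = x_b + K (y - H x_b), with gain K = B H^T (H B H^T + R)^-1."""
          K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
          return xb + K @ (y - H @ xb)

      # Tiny example: 3 background grid points, 2 observations of points 0 and 2.
      xb = np.array([300.0, 301.0, 302.0])                    # background SST (K)
      H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])        # observation operator
      y = np.array([300.4, 301.5])                            # observed SST (K)
      d = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
      B = 0.25 * np.exp(-d)                                   # background covariance
      R = 0.04 * np.eye(2)                                    # observation covariance
      print(oi_analysis(xb, y, H, B, R))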

  20. The Development of an African-Centered Urban High School by Trial and Error

    ERIC Educational Resources Information Center

    Robinson, Theresa Y.; Jeremiah, Maxine

    2011-01-01

    As part of the Small Schools movement in Chicago Public Schools, a high school dedicated to African-centered education was chartered. The virtues of Ma'at and the Nguzo Saba, otherwise known as the seven principles of Kwanza, were the foundational principles of the school and were to be integrated into all of the practices and policies of the…