Science.gov

Sample records for high error rates

  1. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors.

    PubMed

    Bányai, László; Patthy, László

    2016-08-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation.

  4. Estimating genotype error rates from high-coverage next-generation sequence data.

    PubMed

    Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil

    2014-11-01

    Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods.
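
    Where a study like this uses replicate runs of the same samples, the lower-bound logic is simple enough to sketch. Below is a minimal, hypothetical Python illustration (the names and two-replicate setup are ours, not the authors' pipeline): any site where two replicates disagree contains at least one wrong call, so the discordance rate bounds the per-call error rate from below.

    ```python
    # Lower-bound genotype error rate from replicate calls of the same sample.
    # Hypothetical inputs: dicts mapping site -> genotype call ("AA", "AG", ...).

    def discordance_lower_bound(calls_rep1, calls_rep2):
        shared = set(calls_rep1) & set(calls_rep2)   # sites called in both runs
        if not shared:
            raise ValueError("no sites called in both replicates")
        discordant = sum(calls_rep1[s] != calls_rep2[s] for s in shared)
        # Each discordant pair contains at least one error, so the per-call
        # error rate is at least discordant / (2 * number of shared sites).
        return discordant / (2 * len(shared))

    rep1 = {"chr1:1000": "AG", "chr1:2000": "AA", "chr1:3000": "GG"}
    rep2 = {"chr1:1000": "AG", "chr1:2000": "AG", "chr1:3000": "GG"}
    print(f"error rate >= {discordance_lower_bound(rep1, rep2):.3f}")
    ```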

  5. Bursty channel errors and the Viterbi decoder [for high rate digital data channels]

    NASA Technical Reports Server (NTRS)

    Ingels, F.

    1978-01-01

    Recent applications have been developed for spread spectrum communications, hardware data transfer, high rate digital systems, etc., that use channels in which errors tend to occur in short bursts in addition to those occurring at random, i.e., compound channels. Viterbi decoding algorithms are generally very good for random-error channels but are not as efficient for burst errors or for compound channels. This paper presents the results of a computer simulation study of the performance of various Viterbi decoders when receiving data corrupted with burst and random errors on the same channel. Simulations were performed using hard-decision CPSK.
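
    A common way to model the compound channel this record describes is the two-state Gilbert-Elliott model; the sketch below illustrates that general idea, not the paper's own simulation, and all probabilities are made-up parameters.

    ```python
    import random

    def gilbert_elliott(n_bits, p_gb=0.001, p_bg=0.1, e_good=1e-5, e_bad=0.1, seed=1):
        """Yield per-bit error indicators from a two-state burst-error channel.

        p_gb / p_bg: good->bad and bad->good transition probabilities;
        e_good / e_bad: per-bit error probability within each state.
        """
        rng = random.Random(seed)
        bad = False
        for _ in range(n_bits):
            # Stay in the bad state with probability 1 - p_bg; enter it with p_gb.
            bad = rng.random() < (1 - p_bg if bad else p_gb)
            yield rng.random() < (e_bad if bad else e_good)

    errors = sum(gilbert_elliott(1_000_000))
    print(f"simulated BER: {errors / 1e6:.2e}")   # errors arrive in bursts
    ```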

  6. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416
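
    For context on why reconciliation speed and efficiency matter here: in the standard asymptotic BB84 analysis, the secure key fraction shrinks with both the quantum bit error rate (QBER) and the error-correction inefficiency f >= 1. The sketch below is that textbook formula, not the authors' exact rate equation.

    ```python
    from math import log2

    def h2(p):
        """Binary entropy in bits."""
        return 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)

    def bb84_key_fraction(qber, f_ec=1.05):
        # 1 - f_ec*h2(e) bits survive error correction; privacy amplification
        # removes a further h2(e). f_ec = 1.0 is the Shannon limit.
        return max(0.0, 1.0 - f_ec * h2(qber) - h2(qber))

    for e in (0.01, 0.03, 0.05):
        print(f"QBER {e:.0%}: secure key fraction {bb84_key_fraction(e):.3f}")
    ```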

  7. Bit error rate performance of Image Processing Facility high density tape recorders

    NASA Technical Reports Server (NTRS)

    Heffner, P.

    1981-01-01

    The Image Processing Facility at the NASA/Goddard Space Flight Center uses High Density Tape Recorders (HDTRs) to transfer high volume image data and ancillary information from one system to another. For ancillary information, it is required that very low bit error rates (BERs) accompany the transfers. The facility processes about 10^11 bits of image data per day from many sensors, involving 15 independent processing systems requiring the use of HDTRs. When acquired, the 16 HDTRs offered state-of-the-art performance of 1 × 10^-6 BER as specified. The BER requirement was later upgraded in two steps: (1) incorporating data randomizing circuitry to yield a BER of 2 × 10^-7 and (2) further modifying to include a bit error correction capability to attain a BER of 2 × 10^-9. The total improvement factor was 500 to 1. Attention is given here to the background, technical approach, and final results of these modifications. Also discussed are the format of the data recorded by the HDTR, the magnetic tape format, the magnetic tape dropout characteristics as experienced in the Image Processing Facility, the head life history, and the reliability of the HDTRs.

  8. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities

    PubMed Central

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-01-01

    Introduction: Sound is among the significant environmental factors affecting people's health; it plays an important role in both physical and psychological injury and also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on performance and the rate of error in manual activities. Methods: This was an interventional study of 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each participant served as his or her own control. The effect of noise on performance was assessed at sound levels of 70, 90, and 110 dB, using two factors (the sound's physical features and different sound-source conditions) and the Two-Arm Coordination Test. The data were analyzed using SPSS version 16; repeated-measures comparisons were used for the performance times and the errors measured in the test. Results: We found a direct and significant association between sound level and performance time, and participants' performance differed significantly across sound levels (110 dB as opposed to 70 and 90 dB; p < 0.05 and p < 0.001, respectively). Conclusion: This study found that a sound level of 110 dB had an important effect on individuals' performance, i.e., performance decreased. PMID:27123216

  9. Instantaneous bit-error-rate meter

    NASA Astrophysics Data System (ADS)

    Slack, Robert A.

    1995-06-01

    An instantaneous bit error rate meter provides an instantaneous, real time reading of bit error rate for digital communications data. Bit error pulses are input into the meter and are first filtered in a buffer stage to provide input impedance matching and desensitization to pulse variations in amplitude, rise time and pulse width. The bit error pulses are transformed into trigger signals for a timing pulse generator. The timing pulse generator generates timing pulses for each transformed bit error pulse, and is calibrated to generate timing pulses having a preselected pulse width corresponding to the baud rate of the communications data. An integrator generates a voltage from the timing pulses that is representative of the bit error rate as a function of the data transmission rate. The integrated voltage is then displayed on a meter to indicate the bit error rate.
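
    The meter's principle reduces to a small calculation: each error pulse is stretched to one bit period (1/baud), so the integrator's duty cycle, and hence its output voltage, is proportional to errors per second divided by bits per second, which is the BER. A sketch with illustrative numbers:

    ```python
    def instantaneous_ber(error_pulses_per_second, baud_rate):
        # Each timing pulse has width 1/baud_rate, so the fraction of time the
        # integrator input is high is (pulses/s) * (1/baud) = bit error rate.
        return error_pulses_per_second / baud_rate

    # 50 error pulses per second on a 1 Mbaud link -> BER of 5e-5.
    print(f"{instantaneous_ber(50, 1_000_000):.1e}")
    ```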

  10. Adaptive planning strategy for high dose rate prostate brachytherapy—a simulation study on needle positioning errors.

    PubMed

    Borot de Battisti, M; Denis de Senneville, B; Maenhout, M; Hautvast, G; Binnekamp, D; Lagendijk, J J W; van Vulpen, M; Moerland, M A

    2016-03-01

    The development of magnetic resonance (MR) guided high dose rate (HDR) brachytherapy for prostate cancer has gained increasing interest for delivering a high tumor dose safely in a single fraction. To support needle placement in the limited workspace inside the closed-bore MRI, a single-needle MR-compatible robot is currently under development at the University Medical Center Utrecht (UMCU). This robotic device taps the needle in a divergent way from a single rotation point into the prostate. With this setup, the irradiation dose must be delivered by successive insertions of the needle. Although robot-assisted needle placement is expected to be more accurate than manual template-guided insertion, needle positioning errors may occur and are likely to modify the pre-planned dose distribution. In this paper, we propose a dose plan adaptation strategy for HDR prostate brachytherapy with feedback on the needle position: a dose plan is made at the beginning of the interventional procedure and updated after each needle insertion in order to compensate for possible needle positioning errors. The introduced procedure can be used with the single-needle MR-compatible robot developed at the UMCU. The proposed feedback strategy was tested by simulating complete HDR procedures with and without feedback on eight patients with different numbers of needle insertions (varying from 4 to 12). In the cases tested, the number of clinically acceptable plans obtained at the end of the procedure was larger with feedback than without. Furthermore, the computation time of the feedback between each insertion was below 100 s, which makes it eligible for intra-operative use.

  11. Irreducible error rate in aeronautical satellite channels

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1988-01-01

    The irreducible error rate in aeronautical satellite systems is experimentally investigated. It is shown that the introduction of a delay in the multipath component of a Rician channel increases the channel irreducible error rate. However, since the carrier/multipath ratio is usually large for aeronautical applications, this rise in the irreducible error rate should not be interpreted as a practical limitation of aeronautical satellite communications.

  12. Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.

    1987-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  13. Bit-error-rate testing of high-power 30-GHz traveling wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Fujikawa, Gene

    1986-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30 GHz, 200 W, coupled-cavity traveling wave tubes (TWTs). The transmission effects of each TWT were investigated on a band-limited, 220 Mb/sec SMSK signal. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20 GHz technology development program. The approach taken to test the 30 GHz tubes is described and the resultant test data are discussed. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  14. High-Speed Tracking Method Using Zero Phase Error Tracking-Feed-Forward (ZPET-FF) Control for High-Data-Transfer-Rate Optical Disk Drives

    NASA Astrophysics Data System (ADS)

    Koide, Daiichi; Yanagisawa, Hitoshi; Tokumaru, Haruki; Nakamura, Shoichi; Ohishi, Kiyoshi; Inomata, Koichi; Miyazaki, Toshimasa

    2004-07-01

    We describe the effectiveness of feed-forward control using the zero phase error tracking method (ZPET-FF control) for the tracking servo of high-data-transfer-rate optical disk drives, as we are developing an optical disk system to replace the conventional professional videotape recorder for recording high-definition television signals for news gathering and broadcast content production. The optical disk system requires a data transfer rate of more than 200 Mbps and a large recording capacity, so fast and precise track-following control is indispensable. Here, we compare the characteristics of ZPET-FF control with those of conventional feedback control and repetitive control. Experimental results show that ZPET-FF control is more precise than feedback control, and the residual tracking error is kept within a 10 nm tolerance at a linear velocity of 26 m/s in an experimental setup using a blue-violet laser optical head and high-density media. The feasibility of achieving precise ZPET-FF control at 15,000 rpm is also presented.

  15. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase.

    PubMed

    McInerney, Peter; Adams, Paul; Hadi, Masood Z

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572
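
    Fidelity figures like these are conventionally normalized per base per template doubling, since PCR errors accumulate over d = log2(fold amplification) duplications. A sketch of that standard normalization, with made-up counts (not the authors' exact accounting):

    ```python
    from math import log2

    def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
        """Errors per base per template doubling."""
        doublings = log2(fold_amplification)
        return mutations / (bases_sequenced * doublings)

    # e.g. 25 mutations found in 1.2e6 sequenced bases after 1e6-fold amplification
    print(f"{pcr_error_rate(25, 1.2e6, 1e6):.2e} errors/base/doubling")
    ```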

  16. Forward error correction and spatial diversity techniques for high-data-rate MILSATCOM over a slow-fading, nuclear-disturbed channel

    NASA Astrophysics Data System (ADS)

    Paul, Heywood I.; Meader, Charles B.; Lyons, Daniel A.; Ayers, David R.

    Forward error correction (FEC) and spatial diversity techniques are considered for improving the reliability of high-data-rate military satellite communication (MILSATCOM) over a slow-fading, nuclear-disturbed channel. Slow fading, which occurs when the channel decorrelation time is much greater than the transmitted symbol interval, is characterized by deep fades and, without special precautions, long bursts of errors over high-data-rate communication links. Using the widely accepted Defense Nuclear Agency (DNA) nuclear-scintillated channel model, the authors derive performance tradeoffs among required interleaver storage, FEC, spatial diversity, and link signal-to-noise ratio for differential binary phase shift keying (DBPSK) in the slow-fading environment. Spatial diversity is found to yield impressive gains without the large memory storage and transmission relay requirements associated with interleaving.

  18. Monitoring Error Rates In Illumina Sequencing

    PubMed Central

    Manley, Leigh J.; Ma, Duanduan; Levine, Stuart S.

    2016-01-01

    Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted. PMID:27672352
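
    A bare-bones version of a percent-perfect-reads statistic can be computed from any aligned run, independent of the published PPR tool, by counting alignments whose reported edit distance (the SAM NM tag) is zero. The file name below is hypothetical, and unmapped/clipped-read handling is deliberately omitted.

    ```python
    def percent_perfect_reads(sam_path):
        total = perfect = 0
        with open(sam_path) as sam:
            for line in sam:
                if line.startswith("@"):        # skip SAM header lines
                    continue
                fields = line.rstrip("\n").split("\t")
                tags = {t.split(":", 1)[0]: t.rsplit(":", 1)[1] for t in fields[11:]}
                if "NM" in tags:                # edit distance to the reference
                    total += 1
                    perfect += tags["NM"] == "0"
        return 100.0 * perfect / total if total else 0.0

    print(f"PPR: {percent_perfect_reads('run42.sam'):.1f}%")   # hypothetical file
    ```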

  20. Multicenter Assessment of Gram Stain Error Rates.

    PubMed

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. PMID:26888900
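
    The headline numbers here are simple ratios, reproduced below from the reported counts (the 306 discrepant-specimen count is back-calculated from "5% of 6,115" and is approximate):

    ```python
    total_specimens = 6115
    discrepant = 306                   # ~5% of specimens, approximate
    reviewed, reader_errors = 263, 63  # discrepant slides available for review

    print(f"discrepancy rate: {discrepant / total_specimens:.1%}")            # ~5%
    print(f"reader-error share of reviewed: {reader_errors / reviewed:.0%}")  # 24%
    ```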

  2. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  3. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow and sea urchin fertilization tests showed that when the test x dose interaction term was not included in the calculations the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.
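
    The inflation mechanism is easy to reproduce in a toy simulation: give replicates within a test a shared dose x test effect, then analyze them as if independent. All variance components and sample sizes below are illustrative assumptions, not values from the study.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_tests, n_reps = 2000, 8, 10  # tests per study, organisms per test
    sigma_e, sigma_int = 1.0, 0.7          # residual and dose-x-test interaction SDs

    false_pos = 0
    for _ in range(n_sims):
        # No true dose effect: treatment differs from control only by noise plus
        # a per-test interaction term shared by all replicates within a test.
        ctrl = rng.normal(0.0, sigma_e, (n_tests, n_reps))
        trt = (rng.normal(0.0, sigma_e, (n_tests, n_reps))
               + rng.normal(0.0, sigma_int, (n_tests, 1)))
        # Naive analysis pools all replicates and ignores the interaction term.
        _, p = stats.ttest_ind(ctrl.ravel(), trt.ravel())
        false_pos += p < 0.05

    print(f"empirical type-1 error: {false_pos / n_sims:.1%} (nominal 5%)")
    ```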

  4. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  5. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  6. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  7. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  8. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart apply to the fifty States, the District...

  9. Logical error rate in the Pauli twirling approximation.

    PubMed

    Katabarwa, Amara; Geller, Michael R

    2015-09-30

    Understanding the performance of error correction protocols is necessary for understanding the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA's accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes.

  10. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.

  12. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  13. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  14. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  15. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  16. Error Rates of Multiple F Tests in Factorial ANOVA Designs.

    ERIC Educational Resources Information Center

    Halderson, Judith S.; Glasnapp, Douglas R.

    The primary purpose of the present study was to investigate empirically the effect of multiple hypothesis testing on error rates in factorial ANOVA designs under a variety of controlled conditions. The per comparison, per experiment, and experimentwise error rates were investigated for three hypothesis testing procedures. The specific conditions…
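
    The arithmetic behind experimentwise rates is standard: with m independent tests each run at per-comparison level alpha, the chance of at least one false positive is 1 - (1 - alpha)^m. A quick illustration (m = 7 is an arbitrary example, e.g. the F tests of a three-factor design):

    ```python
    alpha, m = 0.05, 7
    experimentwise = 1 - (1 - alpha) ** m
    print(f"{experimentwise:.2f}")   # ~0.30: a 30% chance of >= 1 false positive
    ```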

  17. Technological Advancements and Error Rates in Radiation Therapy Delivery

    SciTech Connect

    Margalit, Danielle N.

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique.

  18. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  19. Hypercorrection of High Confidence Errors in Children

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2012-01-01

    Three experiments investigated whether the hypercorrection effect--the finding that errors committed with high confidence are easier, rather than more difficult, to correct than are errors committed with low confidence--occurs in grade school children as it does in young adults. All three experiments showed that Grade 3-6 children hypercorrected…

  20. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  1. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful of codeword errors, if any, can be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser-studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
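
    Two of the quantities this abstract mentions are easy to reproduce under standard assumptions. The length of an error-free run needed to certify a CWER is the classical "rule of three" calculation, and an exact two-sided interval after k observed errors follows from beta quantiles (Clopper-Pearson). The sketch below illustrates those textbook methods, not necessarily the paper's exact procedures.

    ```python
    from math import log
    from scipy.stats import beta

    def error_free_trials_needed(cwer_requirement, confidence=0.95):
        """Codewords to simulate, error-free, to certify CWER <= requirement."""
        return int(log(1 - confidence) / log(1 - cwer_requirement)) + 1

    def clopper_pearson(k, n, confidence=0.95):
        """Exact two-sided confidence interval for an error probability."""
        a = (1 - confidence) / 2
        lo = 0.0 if k == 0 else beta.ppf(a, k, n - k + 1)
        hi = 1.0 if k == n else beta.ppf(1 - a, k + 1, n - k)
        return lo, hi

    print(error_free_trials_needed(1e-5))   # ~3e5 codewords ("rule of three")
    print(clopper_pearson(3, 1_000_000))    # interval after 3 observed errors
    ```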

  2. Experimental quantum error correction with high fidelity

    SciTech Connect

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-15

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  3. Aid to determining freeway metering rates and detecting loop errors

    SciTech Connect

    Nihan, N.L.

    1997-11-01

    A recent freeway congestion prediction study for the Washington Department of Transportation (WSDOT) found that the sum of storage rates over time, SumSR(t), for a freeway section was a better variable for determining the best upstream ramp metering rates than the storage rate for time interval t, SR(t), which is the current WSDOT criterion. (Use of the SumSR(t) variable for this purpose requires that the summation be started during a period of low density flows.) Another finding was that the SumSR(t) variable was a better detector of loop chattering errors than WSDOT's current criterion, which misses chattering errors that occur at normal traffic volume levels. Since calculation of SumSR(t) is easily incorporated in the current WSDOT ramp metering algorithm, the writer recommends its use in future WSDOT freeway metering schemes.

  4. The nearest neighbor and the Bayes error rates.

    PubMed

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and dE*(λ) are equal. PMID:21869395
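
    For orientation, the classical single-neighbor result that (k, l) rules generalize is the Cover-Hart bound, which for M classes pins the asymptotic nearest-neighbor risk E between the Bayes risk E* and a quadratic function of it:

    ```latex
    E^{*} \le E \le E^{*}\left(2 - \frac{M}{M-1}\,E^{*}\right)
    ```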

  5. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  6. CMOS RAM cosmic-ray-induced-error-rate analysis

    NASA Technical Reports Server (NTRS)

    Pickel, J. C.; Blandford, J. T., Jr.

    1981-01-01

    A significant number of spacecraft operational anomalies are believed to be associated with cosmic-ray-induced soft errors in the LSI memories. Test programs using a cyclotron to simulate cosmic rays have established conclusively that many common commercial memory types are vulnerable to heavy-ion upset. A description is given of the methodology and the results of a detailed analysis for predicting the bit-error rate in an assumed space environment for CMOS memory devices. Results are presented for three types of commercially available CMOS 1,024-bit RAMs. It was found that the HM6508 is susceptible to single-ion induced latchup from argon and krypton ions. The HS6508 and HS6508RH and the CDP1821 apparently are not susceptible to single-ion induced latchup.

  7. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAMs (UPD4216) and 1 Kb SRAMs (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  8. Controlling Rater Stringency Error in Clinical Performance Rating: Further Validation of a Performance Rating Theory.

    ERIC Educational Resources Information Center

    Cason, Gerald J.; And Others

    Prior research in a single clinical training setting has shown Cason and Cason's (1981) simplified model of their performance rating theory can improve rating reliability and validity through statistical control of rater stringency error. Here, the model was applied to clinical performance ratings of 14 cohorts (about 250 students and 200 raters)…

  9. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil's characteristics. Most often, spatial variability in the soil's fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil's fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre.

  10. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal to noise ratio E sub b/N sub 0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,233) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10(exp -8) and a BER of 1.4 x 10(exp -9). The (15, 1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the (15,1/4) code requires a higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.
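
    Coding gains such as the 1.5 dB figure are read off BER-versus-Eb/N0 curves; the usual uncoded coherent BPSK reference curve is Pb = Q(sqrt(2*Eb/N0)). The sketch below computes that standard baseline (a generic reference, not the BVD measurement itself):

    ```python
    from math import erfc, sqrt

    def q_function(x):
        return 0.5 * erfc(x / sqrt(2))

    def uncoded_bpsk_ber(ebno_db):
        ebno = 10 ** (ebno_db / 10)       # convert dB to a linear ratio
        return q_function(sqrt(2 * ebno))

    for db in (4, 6, 8, 10):
        print(f"Eb/N0 = {db:2d} dB -> BER = {uncoded_bpsk_ber(db):.2e}")
    ```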

  11. Optimized filtering reduces the error rate in detecting genomic variants by short-read sequencing.

    PubMed

    Reumers, Joke; De Rijk, Peter; Zhao, Hui; Liekens, Anthony; Smeets, Dominiek; Cleary, John; Van Loo, Peter; Van Den Bossche, Maarten; Catthoor, Kirsten; Sabbe, Bernard; Despierre, Evelyn; Vergote, Ignace; Hilbush, Brian; Lambrechts, Diether; Del-Favero, Jurgen

    2012-01-01

    Distinguishing single-nucleotide variants (SNVs) from errors in whole-genome sequences remains challenging. Here we describe a set of filters, together with a freely accessible software tool, that selectively reduce error rates and thereby facilitate variant detection in data from two short-read sequencing technologies, Complete Genomics and Illumina. By sequencing the nearly identical genomes from monozygotic twins and considering shared SNVs as 'true variants' and discordant SNVs as 'errors', we optimized thresholds for 12 individual filters and assessed which of the 1,048 filter combinations were effective in terms of sensitivity and specificity. Cumulative application of all effective filters reduced the error rate by 290-fold, facilitating the identification of genetic differences between monozygotic twins. We also applied an adapted, less stringent set of filters to reliably identify somatic mutations in a highly rearranged tumor and to identify variants in the NA19240 HapMap genome relative to a reference set of SNVs. PMID:22178994

  12. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  13. Error Rates and Channel Capacities in Multipulse PPM

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Moision, Bruce

    2007-01-01

    A method of computing channel capacities and error rates in multipulse pulse-position modulation (multipulse PPM) has been developed. The method makes it possible, when designing an optical PPM communication system, to determine whether and under what conditions a given multipulse PPM scheme would be more or less advantageous, relative to other candidate modulation schemes. In conventional M-ary PPM, each symbol is transmitted in a time frame that is divided into M time slots (where M is an integer >1), defining an M-symbol alphabet. A symbol is represented by transmitting a pulse (representing 1) during one of the time slots and no pulse (representing 0) during the other M - 1 time slots. Multipulse PPM is a generalization of PPM in which pulses are transmitted during two or more of the M time slots.
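
    The advantage of multipulse PPM comes from its larger alphabet: with w pulses in M slots there are C(M, w) distinct symbols rather than M. A small illustrative sketch (ours) of that count in Python:

        from math import comb, log2

        def mppm_alphabet_size(M, w):
            # number of multipulse-PPM symbols: choose w pulsed slots out of M
            return comb(M, w)

        for M, w in [(16, 1), (16, 2), (64, 1), (64, 2)]:
            size = mppm_alphabet_size(M, w)
            print(f"M={M}, w={w}: {size:5d} symbols, {log2(size):5.2f} bits/symbol")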

  14. Optical refractive synchronization: bit error rate analysis and measurement

    NASA Astrophysics Data System (ADS)

    Palmer, James R.

    1999-11-01

    This paper describes the analytical tools and measurement techniques used at SilkRoad to evaluate the optical and electrical signals used in Optical Refractive Synchronization for transporting SONET signals across the transmission fiber. Fundamentally, it outlines how SilkRoad, Inc., transports a multiplicity of SONET signals across a distance of fiber > 100 km without amplification or regeneration of the optical signal, i.e., one laser over one fiber. Test and measurement data are presented to show how the SilkRoad technique of Optical Refractive Synchronization is employed to provide a zero bit error rate for transmission of multiple OC-12 and OC-48 SONET signals sent over a fiber optic cable of > 100 km. The recovery and transformation modules used for the modification and transportation of these SONET signals are also described.

  15. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to that of trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  16. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between per-beam, planar IMRT QA passing rates and clinically relevant, anatomy-based patient dose errors.
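
    The study's central statistic is the Pearson correlation between per-beam Gamma passing rates and induced errors in anatomy dose metrics; a minimal sketch of that computation on invented numbers (ours, not the study's data):

        import numpy as np
        from scipy.stats import pearsonr

        # hypothetical per-plan results: Gamma passing rate (%, 3%/3 mm) and the
        # induced error in a clinical metric (e.g., parotid mean dose, %)
        passing_rate = np.array([99.2, 97.8, 95.1, 92.4, 98.6, 90.3])
        metric_error = np.array([1.8, 4.2, 0.9, 3.1, 5.0, 2.2])

        r, p = pearsonr(passing_rate, metric_error)
        print(f"Pearson r = {r:+.3f} (p = {p:.3f})")
        # a weak (or even positive) r means high passing rates do not
        # guarantee small clinically relevant dose errors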

  17. Error Rate Reduction of Super-Resolution Near-Field Structure Disc

    NASA Astrophysics Data System (ADS)

    Kim, Jooho; Bae, Jaecheol; Hwang, Inoh; Lee, Jinkyung; Park, Hyunsoo; Chung, Chongsam; Kim, Hyunki; Park, Insik; Tominaga, Junji

    2007-06-01

    We report the error rate improvement of super-resolution near-field structure (super-RENS) write-once read-many (WORM) and read-only-memory (ROM) discs in a blue laser optical system [laser wavelength (λ), 405 nm; numerical aperture (NA), 0.85]. We prepared samples of higher carrier level WORM discs and wider pit width ROM discs. Using controlled equalization (EQ) characteristics, an adaptive write strategy, and an advanced adaptive partial response maximum likelihood (PRML) technique, we obtained a bit error rate (bER) at the 10^-4 level. This result shows the high feasibility of super-RENS technology for practical use.

  18. A New Method for the Statistical Control of Rating Error in Performance Ratings.

    ERIC Educational Resources Information Center

    Bannister, Brendan D.; And Others

    1987-01-01

    To control for response bias in student ratings of college teachers, an index of rater error was used that was theoretically independent of actual performance. Partialing out the effects of this extraneous response bias enhanced validity, but partialing out overall effectiveness resulted in reduced convergent and discriminant validities.…

  19. High population increase rates.

    PubMed

    1991-09-01

    In addition to its economic and ethnic difficulties, the USSR faces several pressing demographic problems, including high population increase rates in several of its constituent republics. It has now become clear that although the country's rigid centralized planning succeeded in covering the basic needs of people, it did not lead to welfare growth. Since the 1970s, the Soviet economy has remained sluggish, which has led to increases in the death and birth rates. Furthermore, the ideology that held that demography could be entirely controlled by the country's political and economic system is contradicted by current Soviet reality, which shows that religion and ethnicity also play a significant role in demographic dynamics. Currently, Soviet republics fall under 2 categories--areas with high or low natural population increase rates. Republics with low rates consist of Christian populations (Armenia, Moldavia, Georgia, Byelorussia, Russia, Lithuania, Estonia, Latvia, Ukraine), while republics with high rates are Muslim (Tadzhikistan, Uzbekistan, Turkmenistan, Kirgizia, Azerbaijan, Kazakhstan). The latter group has natural increase rates as high as 3.3%. Although the USSR as a whole is not considered a developing country, the latter group of republics fits the description of the UNFPA's priority list. Another serious demographic issue facing the USSR is its extremely high rate of abortion. This is especially true in the republics with low birth rates, where up to 60% of all pregnancies are terminated by induced abortions. Up to 1/5 of the USSR's annual health care budget is spent on clinical abortions--money which could be better spent on the production of contraceptives. Along with the recent political and economic changes, the USSR is now eager to deal with its demographic problems. PMID:12284289

  20. Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers

    NASA Technical Reports Server (NTRS)

    Ha, Eunho; North, Gerald R.

    1995-01-01

    Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
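
    Because the brightness temperature is a nonlinear function of rain rate, retrieving rain rate from the footprint-averaged brightness temperature differs from averaging pointwise retrievals (Jensen's inequality). The Monte Carlo sketch below (ours) reproduces the effect for a mixed lognormal field, using an invented saturating Tb(R) relation purely as a stand-in:

        import numpy as np

        rng = np.random.default_rng(0)

        # hypothetical saturating brightness-temperature relation Tb(R), a toy
        # stand-in for the real radiative transfer relation, and its inverse
        def tb_of_rain(r):
            return 280.0 - 120.0 * np.exp(-0.2 * r)

        def rain_of_tb(tb):
            return -5.0 * np.log((280.0 - tb) / 120.0)

        # mixed lognormal footprint: 70% chance of no rain at a point
        n = 100_000
        raining = rng.random(n) < 0.3
        rates = np.where(raining,
                         rng.lognormal(mean=1.0, sigma=1.0, size=n), 0.0)

        true_mean = rates.mean()
        # retrieval applied to the footprint-averaged brightness temperature
        retrieved = rain_of_tb(tb_of_rain(rates).mean())
        print(f"true mean rain rate    : {true_mean:.3f}")
        print(f"beam-averaged retrieval: {retrieved:.3f}")
        print(f"beam-filling bias      : {retrieved - true_mean:+.3f}")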

  1. Error rate performance of pulse position modulation schemes for indoor wireless optical communication

    NASA Astrophysics Data System (ADS)

    Azzam, Nazmy; Aly, Moustafa H.; AboulSeoud, A. K.

    2009-06-01

    The error rate performance of pulse position modulation (PPM) schemes for indoor wireless optical communication (WOC) applications is investigated. These schemes include traditional PPM and multiple PPM (MPPM). The study is unique in presenting and evaluating symbol error behaviour under a wide range of design parameters, such as symbol length (L), number of chips per symbol (n), and number of chips forming the optical pulse (w). The effects of signal-to-noise ratio level and operating bit rate on symbol error performance are also discussed, and the studied modulation schemes are compared. The relation to IrDA and IEEE 802.11 indoor WOC standardization is also investigated. Results indicate that PPM achieves good symbol error performance at reasonable signal-to-noise ratio and high bit rates with large symbol length.

  2. Testing Theories of Transfer Using Error Rate Learning Curves.

    PubMed

    Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I

    2016-07-01

    We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and make a sacrifice to explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions.

  3. Testing Theories of Transfer Using Error Rate Learning Curves.

    PubMed

    Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I

    2016-07-01

    We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and make a sacrifice to explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions. PMID:27230694

  4. High Dimensional Variable Selection with Error Control

    PubMed Central

    2016-01-01

    Background. The iterative sure independence screening (ISIS) is a popular method in selecting important variables while maintaining most of the informative variables relevant to the outcome in high throughput data. However, it not only is computationally intensive but also may cause high false discovery rate (FDR). We propose to use the FDR as a screening method to reduce the high dimension to a lower dimension as well as controlling the FDR with three popular variable selection methods: LASSO, SCAD, and MCP. Method. The three methods with the proposed screenings were applied to prostate cancer data with presence of metastasis as the outcome. Results. Simulations showed that the three variable selection methods with the proposed screenings controlled the predefined FDR and produced high area under the receiver operating characteristic curve (AUROC) scores. In applying these methods to the prostate cancer example, LASSO and MCP selected 12 and 8 genes and produced AUROC scores of 0.746 and 0.764, respectively. Conclusions. We demonstrated that the variable selection methods with the sequential use of FDR and ISIS not only controlled the predefined FDR in the final models but also had relatively high AUROC scores. PMID:27597974

  5. High Dimensional Variable Selection with Error Control.

    PubMed

    Kim, Sangjin; Halabi, Susan

    2016-01-01

    Background. The iterative sure independence screening (ISIS) is a popular method in selecting important variables while maintaining most of the informative variables relevant to the outcome in high throughput data. However, it not only is computationally intensive but also may cause high false discovery rate (FDR). We propose to use the FDR as a screening method to reduce the high dimension to a lower dimension as well as controlling the FDR with three popular variable selection methods: LASSO, SCAD, and MCP. Method. The three methods with the proposed screenings were applied to prostate cancer data with presence of metastasis as the outcome. Results. Simulations showed that the three variable selection methods with the proposed screenings controlled the predefined FDR and produced high area under the receiver operating characteristic curve (AUROC) scores. In applying these methods to the prostate cancer example, LASSO and MCP selected 12 and 8 genes and produced AUROC scores of 0.746 and 0.764, respectively. Conclusions. We demonstrated that the variable selection methods with the sequential use of FDR and ISIS not only controlled the predefined FDR in the final models but also had relatively high AUROC scores. PMID:27597974
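
    A minimal sketch of the two-stage workflow on synthetic data, assuming scikit-learn and statsmodels are available: univariate screening p-values are thresholded with Benjamini-Hochberg FDR, then an L1-penalized (LASSO) logistic regression selects among the survivors. This is our illustration of the idea, not the authors' code.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import f_classif
        from sklearn.linear_model import LogisticRegression
        from statsmodels.stats.multitest import multipletests

        # synthetic stand-in for a high-dimensional genomic data set
        X, y = make_classification(n_samples=200, n_features=2000,
                                   n_informative=10, random_state=0)

        # step 1: FDR screening, keeping features that pass Benjamini-Hochberg
        # at q = 0.05 on univariate ANOVA F-test p-values
        _, pvals = f_classif(X, y)
        keep, _, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
        X_screened = X[:, keep]
        print(f"{keep.sum()} of {X.shape[1]} features survive FDR screening")

        # step 2: LASSO (L1-penalized logistic regression) on the reduced set
        lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        lasso.fit(X_screened, y)
        print(f"{np.sum(lasso.coef_ != 0)} features selected by LASSO")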

  6. Experimental error filtration for quantum communication over highly noisy channels.

    PubMed

    Lamoureux, L-P; Brainis, E; Cerf, N J; Emplit, Ph; Haelterman, M; Massar, S

    2005-06-17

    Error filtration is a method for encoding the quantum state of a single particle into a higher dimensional Hilbert space in such a way that it becomes less sensitive to noise. We have realized a fiber optics demonstration of this method and illustrated its potentialities by carrying out the optical part of a quantum key distribution scheme over a line whose phase noise is too high for a standard implementation of BB84 to be secure. By filtering out the noise, a bit error rate of 15.3% +/- 0.1%, which is beyond the security limit, can be reduced to 10.6% +/- 0.1%, thereby guaranteeing the cryptographic security. PMID:16090449

  7. [High dose rate brachytherapy].

    PubMed

    Aisen, S; Carvalho, H A; Chavantes, M C; Esteves, S C; Haddad, C M; Permonian, A C; Taier, M do C; Marinheiro, R C; Feriancic, C V

    1992-01-01

    High dose rate brachytherapy uses a single source of 192Ir with 10 Ci of nominal activity in a remote afterloading machine. This technique allows outpatient treatment, without the inconveniences of conventional low dose rate brachytherapy such as general anesthesia, spinal anesthesia, prolonged immobilization, and personnel exposure to radiation. The radiotherapy department is now studying 5 basic treatment schemes concerning carcinomas of the uterine cervix, endometrium, lung, and esophagus, and central nervous system tumors. With the MicroSelectron HDR, 257 treatment sessions were performed in 90 patients. Most were treated with weekly fractions, receiving a total of three to four treatments each. No complications were observed either during or after the procedure. Doses, fractionation, and ideal associations still have to be studied so that a higher therapeutic ratio can be reached.

  8. The effects of digitizing rate and phase distortion errors on the shock response spectrum

    NASA Technical Reports Server (NTRS)

    Wise, J. H.

    1983-01-01

    Some of the methods used for acquisition and digitization of high-frequency transients in the analysis of pyrotechnic events, such as explosive bolts for spacecraft separation, are discussed with respect to the reduction of errors in the computed shock response spectrum. Equations are given for the maximum error as a function of the sampling rate, phase distortion, and slew rate, and the effects of the characteristics of the filter used are analyzed. A filter that exhibits good passband amplitude, good phase response, and a good response to a step function is a compromise between the flat passband of the elliptic filter and the phase response of the Bessel filter; it is suggested that it be used with a sampling rate of 10f (for 5 percent maximum error).

  9. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  10. Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates

    PubMed Central

    Bartroff, Jay; Song, Jinlin

    2014-01-01

    This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
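
    For reference, the fixed-sample procedure the method generalizes: Holm's step-down test compares the ordered p-values against increasingly lenient thresholds alpha/(m - rank) and stops at the first failure. A minimal sketch (ours):

        def holm(pvalues, alpha=0.05):
            # Holm (1979) step-down procedure: returns a boolean rejection list
            # controlling the familywise error rate at level alpha
            m = len(pvalues)
            order = sorted(range(m), key=lambda i: pvalues[i])
            reject = [False] * m
            for rank, i in enumerate(order):
                if pvalues[i] <= alpha / (m - rank):
                    reject[i] = True
                else:
                    break  # step-down: once one test fails, all later ones fail
            return reject

        print(holm([0.001, 0.01, 0.04, 0.30]))  # -> [True, True, False, False]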

  11. Tissue pattern recognition error rates and tumor heterogeneity in gastric cancer.

    PubMed

    Potts, Steven J; Huff, Sarah E; Lange, Holger; Zakharov, Vladislav; Eberhard, David A; Krueger, Joseph S; Hicks, David G; Young, George David; Johnson, Trevor; Whitney-Miller, Christa L

    2013-01-01

    The anatomic pathology discipline is slowly moving toward a digital workflow, where pathologists will evaluate whole-slide images on a computer monitor rather than glass slides through a microscope. One of the driving factors in this workflow is computer-assisted scoring, which depends on appropriate selection of regions of interest. With advances in tissue pattern recognition techniques, a more precise region of the tissue can be evaluated, no longer bound by the pathologist's patience in manually outlining target tissue areas. Pathologists use entire tissues from which to determine a score in a region of interest when making manual immunohistochemistry assessments. Tissue pattern recognition theoretically offers this same advantage; however, error rates exist in any tissue pattern recognition program, and these error rates contribute to errors in the overall score. To provide a real-world example of tissue pattern recognition, 11 HER2-stained upper gastrointestinal malignancies with high heterogeneity were evaluated. HER2 scoring of gastric cancer was chosen due to its increasing importance in gastrointestinal disease. A method is introduced for quantifying the error rates of tissue pattern recognition. The trade-off between fully sampling the tumor with a given tissue pattern recognition error rate versus randomly sampling a limited number of fields of view with higher target accuracy was modeled with a Monte Carlo simulation. Under most scenarios, stereological methods of sampling limited fields of view outperformed whole-slide tissue pattern recognition approaches for accurate immunohistochemistry analysis. The importance of educating pathologists in the use of statistical sampling is discussed, along with the emerging role of hybrid whole-tissue imaging and stereological approaches.

  12. Study of bit error rate (BER) for multicarrier OFDM

    NASA Astrophysics Data System (ADS)

    Alshammari, Ahmed; Albdran, Saleh; Matin, Mohammad

    2012-10-01

    Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technique that is being used more and more in recent wideband digital communications. It is known for its ability to handle severe channel conditions, its efficient spectral usage, and its high data rate; therefore, it has been used in many wired and wireless communication systems such as DSL, wireless networks, and 4G mobile communications. Data streams are modulated and sent over multiple subcarriers using either M-QAM or M-PSK. OFDM has lower inter-symbol interference (ISI) levels because the low data rates of the carriers result in long symbol periods. In this paper, the BER performance of OFDM with respect to signal-to-noise ratio (SNR) is evaluated. BPSK modulation is used in a simulation-based system in order to obtain the BER over different wireless channels. These channels include additive white Gaussian noise (AWGN) and fading channels based on Doppler spread and delay spread. Plots of the results are compared with each other after varying key parameters of the system such as the IFFT size, the number of carriers, and the SNR. The results of the simulation give a visualization of what kind of BER to expect when the signal goes through those channels.
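
    A minimal end-to-end sketch of the kind of simulation described, assuming NumPy: BPSK bits on 64 subcarriers, a unitary IFFT/FFT pair, and AWGN calibrated to a given Eb/N0 (the fading channels are omitted; all names and parameters are ours):

        import numpy as np

        rng = np.random.default_rng(1)
        n_carriers, n_symbols = 64, 2000
        ebn0_db = 6.0

        # BPSK-modulate random bits onto the subcarriers
        bits = rng.integers(0, 2, size=(n_symbols, n_carriers))
        freq = 2.0 * bits - 1.0

        # OFDM modulation: a unitary IFFT keeps the per-symbol energy at 1
        time = np.fft.ifft(freq, norm="ortho", axis=1)

        # AWGN: for unit-energy BPSK, N0 = 1 / (Eb/N0) per complex sample
        n0 = 10 ** (-ebn0_db / 10)
        noise = np.sqrt(n0 / 2) * (rng.standard_normal(time.shape)
                                   + 1j * rng.standard_normal(time.shape))

        # demodulate with the matching unitary FFT, hard-decide each subcarrier
        rx = np.fft.fft(time + noise, norm="ortho", axis=1)
        ber = np.mean((rx.real > 0) != (bits == 1))
        print(f"simulated BER at Eb/N0 = {ebn0_db} dB: {ber:.4f}")
        # theory: 0.5 * erfc(sqrt(Eb/N0)) is about 0.0023 at 6 dB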

  13. Reducing error rates in straintronic multiferroic nanomagnetic logic by pulse shaping

    NASA Astrophysics Data System (ADS)

    Munira, Kamaram; Xie, Yunkun; Nadri, Souheil; Forgues, Mark B.; Salehi Fashami, Mohammad; Atulasimha, Jayasimha; Bandyopadhyay, Supriyo; Ghosh, Avik W.

    2015-06-01

    Dipole-coupled nanomagnetic logic (NML), where nanomagnets (NMs) with bistable magnetization states act as binary switches and information is transferred between them via dipole-coupling and Bennett clocking, is a potential replacement for conventional transistor logic since magnets dissipate less energy than transistors when they switch in a logic circuit. Magnets are also ‘non-volatile’ and hence can store the results of a computation after the computation is over, thereby doubling as both logic and memory—a feat that transistors cannot achieve. However, dipole-coupled NML is much more error-prone than transistor logic at room temperature (>1%) because thermal noise can easily disrupt magnetization dynamics. Here, we study a particularly energy-efficient version of dipole-coupled NML known as straintronic multiferroic logic (SML) where magnets are clocked/switched with electrically generated mechanical strain. By appropriately ‘shaping’ the voltage pulse that generates strain, we show that the error rate in SML can be reduced to tolerable limits. We describe the error probabilities associated with various stress pulse shapes and discuss the trade-off between error rate and switching speed in SML. The lowest error probability is obtained when a ‘shaped’ high voltage pulse is applied to strain the output NM followed by a low voltage pulse. The high voltage pulse quickly rotates the output magnet’s magnetization by 90° and aligns it roughly along the minor (or hard) axis of the NM. Next, the low voltage pulse produces the critical strain to overcome the shape anisotropy energy barrier in the NM and produce a monostable potential energy profile in the presence of dipole coupling from the neighboring NM. The magnetization of the output NM then migrates to the global energy minimum in this monostable profile and completes a 180° rotation (magnetization flip) with high likelihood.

  14. Finding the right coverage: the impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates.

    PubMed

    Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah

    2016-07-01

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffman's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than sequence quality and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with a coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets when coverage was increased from ≥5 to ≥30 at quality score ≥30, respectively. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignments, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates be considered prior to building sequencing libraries, reporting genotyping error rates become standard practice, and that effects of genotyping errors on inference be evaluated in restriction-enzyme-based SNP studies.
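
    The estimation idea can be sketched directly: any locus at which mother and offspring share no allele must contain at least one genotyping error, so the incompatibility fraction gives a crude lower bound on the error rate. A hypothetical Python illustration (ours; genotypes as allele tuples):

        def mendelian_incompatible(mother, offspring):
            # True if the offspring genotype shares no allele with the mother's;
            # genotypes are tuples such as ("A", "G")
            return not (set(mother) & set(offspring))

        def error_rate_lower_bound(dyads):
            # fraction of loci showing a mother-offspring incompatibility: a
            # crude lower bound on the per-genotype error rate (minimal sketch)
            checked = incompatible = 0
            for mother_gt, offspring_gt in dyads:
                if mother_gt is None or offspring_gt is None:
                    continue  # skip loci with missing genotypes
                checked += 1
                incompatible += mendelian_incompatible(mother_gt, offspring_gt)
            return incompatible / checked if checked else float("nan")

        dyads = [(("A", "A"), ("A", "G")),
                 (("C", "C"), ("T", "T")),   # incompatible: likely an error
                 (("G", "T"), ("G", "G"))]
        print(error_rate_lower_bound(dyads))  # 1 of 3 loci incompatible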

  15. High accuracy optical rate sensor

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, J.

    1990-01-01

    Optical rate sensors, in particular CCD arrays, will be used on Space Station Freedom to track stars in order to provide an inertial attitude reference. An algorithm that provides attitude rate information by directly manipulating the sensor pixel intensity output is presented. The star image produced by a sensor in the laboratory is modeled. Simulated moving star images are generated, and the algorithm is applied to these data for a star moving at a constant rate. The algorithm produces an accurate derived rate from these data. A step rate change requires two frames for the output of the algorithm to accurately reflect the new rate. When zero-mean Gaussian noise with a standard deviation of 5 is added to the simulated data of a star image moving at a constant rate, the algorithm derives the rate with an error of 1.9 percent at a rate of 1.28 pixels per frame.

  16. Controlling Type I Error Rate in Evaluating Differential Item Functioning for Four DIF Methods: Use of Three Procedures for Adjustment of Multiple Item Testing

    ERIC Educational Resources Information Center

    Kim, Jihye

    2010-01-01

    In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present, and a high possibility of making such an error can weaken the validity of the assessment.…

  17. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  18. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency

    ERIC Educational Resources Information Center

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 second and 974 third graders. Results found a significant relationship between error rate, oral reading fluency, and reading comprehension performance, and…

  19. Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles.

    PubMed

    Traverse, Charles C; Ochman, Howard

    2016-03-22

    Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10^-5 per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10^-5 per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10^-5 per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella.

  20. Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles

    PubMed Central

    Traverse, Charles C.; Ochman, Howard

    2016-01-01

    Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10−5 per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10−5 per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10−5 per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella. PMID:26884158

  1. Error rates in a clinical data repository: lessons from the transition to electronic data transfer—a descriptive study

    PubMed Central

    Hong, Matthew K H; Yao, Henry H I; Pedersen, John S; Peters, Justin S; Costello, Anthony J; Murphy, Declan G; Hovens, Christopher M; Corcoran, Niall M

    2013-01-01

    Objective: Data errors are a well-documented part of clinical datasets, as is their potential to confound downstream analysis. In this study, we explore the reliability of manually transcribed data across different pathology fields in a prostate cancer database and also measure error rates attributable to the source data. Design: Descriptive study. Setting: Specialist urology service at a single centre in metropolitan Victoria in Australia. Participants: Between 2004 and 2011, 1471 patients underwent radical prostatectomy at our institution. In a large proportion of these cases, clinicopathological variables were recorded by manual data entry. In 2011, we obtained electronic versions of the same printed pathology reports for our cohort. The data were electronically imported in parallel to any existing manual entry record, enabling direct comparison between them. Outcome measures: Error rates of manually entered data compared with electronically imported data across clinicopathological fields. Results: 421 patients had at least 10 comparable pathology fields between the electronic import and manual records and were selected for study. 320 patients had concordant data between manually entered and electronically populated fields in a median of 12 pathology fields (range 10–13), indicating outright accuracy of manually entered pathology data in 76% of patients. Across all fields, the error rate was 2.8%, while individual field error rates ranged from 0.5% to 6.4%. Fields in text formats were significantly more error-prone than those with direct measurements or involving numerical figures (p<0.001). 971 cases were available for review of error within the source data, with figures of 0.1–0.9%. Conclusions: While the overall rate of error was low in manually entered data, individual pathology fields were variably prone to error. High-quality pathology data can be obtained for both prospective and retrospective parts of our data repository, aided by the electronic checking of source data.

  2. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    PubMed

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  3. Topological quantum computing with a very noisy network and local error rates approaching one percent

    PubMed Central

    Nickerson, Naomi H.; Li, Ying; Benjamin, Simon C.

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems. PMID:23612297

  4. Increasing Redundancy Exponentially Reduces Error Rates during Algorithmic Self-Assembly.

    PubMed

    Schulman, Rebecca; Wright, Christina; Winfree, Erik

    2015-06-23

    While biology demonstrates that molecules can reliably transfer information and compute, design principles for implementing complex molecular computations in vitro are still being developed. In electronic computers, large-scale computation is made possible by redundancy, which allows errors to be detected and corrected. Increasing the amount of redundancy can exponentially reduce errors. Here, we use algorithmic self-assembly, a generalization of crystal growth in which the self-assembly process executes a program for growing an object, to examine experimentally whether redundancy can analogously reduce the rate at which errors occur during molecular self-assembly. We designed DNA double-crossover molecules to algorithmically self-assemble ribbon crystals that repeatedly copy a short bitstring, and we measured the error rate when each bit is encoded by 1 molecule, or redundantly encoded by 2, 3, or 4 molecules. Under our experimental conditions, each additional level of redundancy decreases the bitwise error rate by a factor of roughly 3, with the 4-redundant encoding yielding an error rate less than 0.1%. While theory and simulation predict that larger improvements in error rates are possible, our results already suggest that by using sufficient redundancy it may be possible to algorithmically self-assemble micrometer-sized objects with programmable, nanometer-scale features.
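
    The reported scaling is easy to state as a model: if each added redundancy level divides the bitwise error rate by a roughly constant factor, the rate falls geometrically, i.e., exponentially in the redundancy. A sketch with an assumed 1-redundant baseline (our numbers, for illustration only):

        def predicted_error_rate(p1, factor, redundancy):
            # per-bit error rate if each extra redundancy level divides the
            # error rate by a constant factor (the roughly 3x observed here)
            return p1 * factor ** -(redundancy - 1)

        # assumed 1-redundant baseline of 2.6% and the observed ~3x per level
        for k in range(1, 5):
            print(f"{k}-redundant: {predicted_error_rate(0.026, 3.0, k):.4%}")
        # the 4-redundant prediction falls below 0.1%, as in the experiments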

  5. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  6. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion that will give a sampling rate for adequate performance of linear, time-invariant, closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior and the determination of an absolute error definition for the performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established, along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  7. Sensitivity to Error Fields in NSTX High Beta Plasmas

    SciTech Connect

    Park, Jong-Kyu; Menard, Jonathan E.; Gerhardt, Stefan P.; Buttery, Richard J.; Sabbagh, Steve A.; Bell, Steve E.; LeBlanc, Benoit P.

    2011-11-07

    It was found that the error field threshold decreases at high β in NSTX, although the density correlation in conventional threshold scaling implies that the threshold should increase, since the higher-β plasmas in our study have higher plasma density. This greater sensitivity to error fields in higher-β plasmas is due to error field amplification by the plasma. When the effect of amplification is included with ideal plasma response calculations, the conventional density correlation can be restored and the threshold scaling becomes more consistent with low-β plasmas. However, it was also found that the threshold can change significantly depending on plasma rotation. When plasma rotation was reduced by non-resonant magnetic braking, a further increase in sensitivity to error fields was observed.

  8. Error rate of the Kane quantum computer controlled-NOT gate in the presence of dephasing

    SciTech Connect

    Fowler, Austin G.; Wellard, Cameron J.; Hollenberg, Lloyd C. L.

    2003-01-01

    We study the error rate of controlled-NOT (CNOT) operations in the Kane solid-state quantum computer architecture [B. Kane, Nature 393, 133 (1998)]. A spin Hamiltonian is used to describe the system. Dephasing is included as exponential decay of the off-diagonal elements of the system's density matrix. Using available spin-echo decay data, the CNOT error rate is estimated at ≈ 10^-3.

  9. Post-error adaptation in adults with high functioning autism.

    PubMed

    Bogte, Hans; Flamma, Bert; van der Meere, Jaap; van Engeland, Herman

    2007-04-01

    Deficits in executive function (EF), i.e. function of the prefrontal cortex, may be central in the etiology of autism. One of the various aspects of EF is error detection and adjusting behavior after an error. In cognitive tests, adults normally slow down their responding on the next trial after making an error, a compensatory mechanism geared toward improving performance on subsequent trials, and a faculty critically associated with activity in the anterior cingulate cortex (ACC). The current study evaluated post-error slowing in people with high functioning autism (HFA) (n=36), taking symptom severity into account, compared to the performance of a normal control group (n=32). Symptom severity in the HFA group was defined in terms of level of adaptation: living independently (outpatients; n=12) and living residentially (inpatients; n=24). Half the group of inpatients was on medication; the results of their performance were analyzed separately. A computerized version of a memory search task was used with two response probability conditions. The subjects in the control group adjusted their reaction time (RT) substantially after an error, while the group of participants with HFA appeared to be overall slow, with no significant adjustment of RT after an error. This finding remained significant if the medication factor was taken into account, and was independent of the degree of severity of the autistic disorder, as defined by the dichotomy 'inpatient versus outpatient'. Possible causes and implications of the finding are discussed.

  10. Total dose effect on soft error rate for dynamic metal-oxide-semiconductor memory cells

    NASA Technical Reports Server (NTRS)

    Benumof, Reuben

    1989-01-01

    A simple model for the soft error rate for dynamic metal-oxide-semiconductor random access memories due to normal galactic radiation was devised and then used to calculate the rate of decrease of the single-event-upset rate with total radiation dose. The computation shows that the decrease in the soft error rate is less than 10 percent per day if the shielding is 0.5 g/sq cm and the spacecraft is in a geosynchronous orbit. The decrease is considerably less in a polar orbiting device.

  11. Exact error rate analysis of free-space optical communications with spatial diversity over Gamma-Gamma atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Li, Kangning; Tan, Liying; Yu, Siyuan; Cao, Yubin

    2016-02-01

    The error rate performances and outage probabilities of free-space optical (FSO) communications with spatial diversity are studied for Gamma-Gamma turbulent environments. Equal gain combining (EGC) and selection combining (SC) diversity are considered as practical schemes to mitigate turbulence. The exact bit error rate (BER) expression and outage probability are derived for a direct-detection EGC multiple-aperture receiver system. BER performances and outage probabilities are analyzed and compared for different numbers of sub-apertures, each having aperture area A, with EGC and SC techniques. The BER performances and outage probabilities of a single monolithic aperture and a multiple-aperture receiver system with the same total aperture area are compared under thermal-noise-limited and background-noise-limited conditions. It is shown that a multiple-aperture receiver system can greatly improve communication performance, and these analytical tools are useful in providing highly accurate error rate estimation for FSO communication systems.
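
    A sketch of the single-aperture baseline such analyses start from, assuming SciPy: the unit-mean Gamma-Gamma irradiance pdf with parameters α and β is integrated against a conditional bit error rate, here taken as Q(√SNR·I) for intensity-modulated on-off keying as a simplifying assumption of ours (the paper's exact detection model differs):

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import erfc, gamma, kv

        def gamma_gamma_pdf(I, a, b):
            # unit-mean Gamma-Gamma irradiance pdf with parameters a, b
            return (2.0 * (a * b) ** ((a + b) / 2.0) / (gamma(a) * gamma(b))
                    * I ** ((a + b) / 2.0 - 1.0)
                    * kv(a - b, 2.0 * np.sqrt(a * b * I)))

        def average_ber(snr_db, a, b):
            # average the conditional BER Q(sqrt(SNR) * I) over the fading pdf
            snr = 10.0 ** (snr_db / 10.0)
            def integrand(I):
                cond_ber = 0.5 * erfc(np.sqrt(snr) * I / np.sqrt(2.0))
                return cond_ber * gamma_gamma_pdf(I, a, b)
            # start just above 0 to avoid the singular Bessel endpoint
            ber, _ = quad(integrand, 1e-12, np.inf, limit=200)
            return ber

        # moderate turbulence, e.g. a = 4.0, b = 1.9 (illustrative values)
        for snr_db in (10, 20, 30):
            print(f"SNR {snr_db:2d} dB: avg BER = {average_ber(snr_db, 4.0, 1.9):.3e}")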

  12. Mean and Random Errors of Visual Roll Rate Perception from Central and Peripheral Visual Displays

    NASA Technical Reports Server (NTRS)

    Vandervaart, J. C.; Hosman, R. J. A. W.

    1984-01-01

    A large number of roll rate stimuli, covering rates from zero to plus or minus 25 deg/sec, were presented to subjects in random order at 2 sec intervals. Subjects were to estimate the magnitude of perceived roll rate stimuli presented on either a central display, on displays in the peripheral field of vision, or on all displays simultaneously. Responses were made by way of a digital keyboard device, and stimulus exposure times were varied. The present experiment differs from earlier perception tasks by the same authors in that the mean rate perception error (and standard deviation) was obtained as a function of rate stimulus magnitude, whereas the earlier experiments only yielded mean absolute error magnitude. Moreover, in the present experiment, all stimulus rates had an equal probability of occurrence, whereas the earlier tests featured a Gaussian stimulus probability density function. The results yield a good illustration of the nonlinear functions relating the rate presented to the rate perceived by human observers or operators.

  13. Step angles to reduce the north-finding error caused by rate random walk with fiber optic gyroscope.

    PubMed

    Wang, Qin; Xie, Jun; Yang, Chuanchuan; He, Changhong; Wang, Xinyue; Wang, Ziyu

    2015-10-20

    We study the relationship between the step angles and the accuracy of north finding with fiber optic gyroscopes. A north-finding method with optimized step angles is proposed to reduce the errors caused by rate random walk (RRW). Based on this method, the errors caused by both angle random walk and RRW are reduced by increasing the number of positions. When the number of positions is even, we propose a north-finding method with symmetric step angles that can reduce the error caused by RRW and is not affected by the azimuth angle. Experimental results show that, compared with the traditional north-finding method, the proposed methods with optimized step angles and symmetric step angles can reduce the north-finding errors by 67.5% and 62.5%, respectively. The method with symmetric step angles is not affected by the azimuth angle and can offer consistently high accuracy for any azimuth angle.

  14. Asymptotic error-rate analysis of FSO links using transmit laser selection over gamma-gamma atmospheric turbulence channels with pointing errors.

    PubMed

    García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen

    2012-01-30

    Since free-space optical (FSO) systems are usually installed on high buildings and building sway may cause vibrations in the transmitted beam, an unsuitable alignment between transmitter and receiver together with fluctuations in the irradiance of the transmitted optical beam due to the atmospheric turbulence can severely degrade the performance of optical wireless communication systems. In this paper, asymptotic bit error-rate (BER) performance for FSO communication systems using transmit laser selection over atmospheric turbulence channels with pointing errors is analyzed. Novel closed-form asymptotic expressions are derived when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions (weak to strong), following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results provide significant insight into the impact of various system and channel parameters, showing that the diversity order is independent of the pointing error when the equivalent beam radius at the receiver is at least 2(min{α,β})^(1/2) times the value of the pointing error displacement standard deviation at the receiver. Moreover, since proper FSO transmission requires transmitters with accurate control of their beamwidth, asymptotic expressions are used to find the optimum beamwidth that minimizes the BER at different turbulence conditions. Simulation results are further demonstrated to confirm the accuracy and usefulness of the derived results, showing that asymptotic expressions here obtained lead to simple bounds on the bit error probability that get tighter over a wider range of signal-to-noise ratio (SNR) as the turbulence strength increases.

  15. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    PubMed

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  16. Optimal joint power-rate adaptation for error resilient video coding

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Gürses, Eren; Kim, Anna N.; Perkis, Andrew

    2008-01-01

    In recent years digital imaging devices have become an integral part of our daily lives due to advancements in imaging, storage and wireless communication technologies. Power-Rate-Distortion (P-R-D) efficiency is the key factor common to all resource-constrained portable devices. In addition, especially in real-time wireless multimedia applications, channel-adaptive and error-resilient source coding techniques should be considered in conjunction with P-R-D efficiency, since most of the time Automatic Repeat-reQuest (ARQ) and Forward Error Correction (FEC) are either not feasible or costly in terms of bandwidth efficiency and delay. In this work, we focus on scenarios of real-time video communication for resource-constrained devices over bandwidth-limited and lossy channels, and propose an analytic Power-channel Error-Rate-Distortion (P-E-R-D) model. In particular, probabilities of macroblock coding modes are intelligently controlled through an optimization process according to their distinct rate-distortion-complexity performance for a given channel error rate. The framework provides theoretical guidelines for the joint analysis of error-resilient source coding and resource allocation. Experimental results show that our optimal framework provides consistent rate-distortion performance gain under different power constraints.

  17. A stochastic node-failure network with individual tolerable error rate at multiple sinks

    NASA Astrophysics Data System (ADS)

    Huang, Cheng-Fu; Lin, Yi-Kuei

    2014-05-01

    Many enterprises consider several criteria during data transmission, such as availability, delay, loss, and out-of-order packets, from the service level agreement (SLA) point of view. Hence internet service providers and customers are increasingly focusing on the tolerable error rate of the transmission process. The internet service provider should meet the specific demand and maintain a certain transmission error rate according to its SLA with each customer. This paper mainly evaluates the system reliability with which the demand can be fulfilled under the tolerable error rate at all sinks by addressing a stochastic node-failure network (SNFN), in which each component (edge or node) has several capacities and a transmission error rate. An efficient algorithm is first proposed to generate all lower boundary points, the minimal capacity vectors satisfying demand and tolerable error rate for all sinks. Then the system reliability can be computed in terms of such points by applying a recursive sum of disjoint products. A benchmark network and a practical network in the United States are used to demonstrate the utility of the proposed algorithm. The computational complexity of the proposed algorithm is also analyzed.

  18. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-01-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  19. Reducing bit-error rate with optical phase regeneration in multilevel modulation formats.

    PubMed

    Hesketh, Graham; Horak, Peter

    2013-12-15

    We investigate theoretically the benefits of using all-optical phase regeneration in a long-haul fiber optic link. We also introduce a design for a device capable of phase regeneration without phase-to-amplitude noise conversion. We simulate numerically the bit-error rate of a wavelength division multiplexed optical communication system over many fiber spans with periodic reamplification and compare the results obtained with and without phase regeneration at half the transmission distance when using the new design or an existing design. Depending on the modulation format, our results suggest that all-optical phase regeneration can reduce the bit-error rate by up to two orders of magnitude and that the amplitude preserving design offers a 50% reduction in bit-error rate relative to existing technology.

  20. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power, defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly available to end users. The computational complexity of the formulas is discussed to give insight into computation time; a comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26914402
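
    A compact restatement of the quantities involved may help; the notation below is ours, and the complement relationship in the last line is an inference from the abstract rather than a quoted definition.

    ```latex
    % Among m simultaneously tested hypotheses, let
    %   V = number of rejected true null hypotheses (false rejections),
    %   S = number of rejected false null hypotheses (true rejections).
    \[
    \text{q-gFWER} = \Pr(V \ge q), \qquad
    r\text{-power} = \Pr(S \ge r),
    \]
    % presumably, the type-II r-generalized FWER is the complement:
    \[
    \text{type-II } r\text{-gFWER} = \Pr(S < r) = 1 - r\text{-power}.
    \]
    ```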

  1. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets.

    PubMed

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and had a trend of negative correlation with post-error miss rate increase in the subsequent Go/No-go task. These results suggested that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  2. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets

    PubMed Central

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W.; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and had a trend of negative correlation with post-error miss rate increase in the subsequent Go/No-go task. These results suggested that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  3. Estimation of the minimum mRNA splicing error rate in vertebrates.

    PubMed

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons. PMID:26811995

  4. Minimum attainable RMS attitude error using co-located rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1989-01-01

    A closed form analytical expression for the minimum attainable attitude error (as well as the error rate) in a flexible beam by feedback control using co-located rate sensors is announced. For simplicity, researchers consider a beam clamped at one end with an offset mass (antenna) at the other end where the controls and sensors are located. Both control moment generators and force actuators are provided. The results apply to any beam-like lattice-type truss, and provide the kind of performance criteria needed under CSI - Controls-Structures-Integrated optimization.

  5. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    SciTech Connect

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-11-15

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is used in the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss a possible generalization of the homodyne detection scheme.

  6. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

    The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc., can be supported in the bridge/gateway; identification enables resequencing across differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first handles a virtual parallel circuit which can scramble channel bytes. The second is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes one-third less time than equivalent end-to-end error correction under similar conditions.

  7. Rate of Medical Errors in Affiliated Hospitals of Mazandaran University of Medical Sciences

    PubMed Central

    Saravi, Benyamin Mohseni; Mardanshahi, Alireza; Ranjbar, Mansour; Siamian, Hasan; Azar, Masoud Shayeste; Asghari, Zolikah; Motamed, Nima

    2015-01-01

    Introduction: Health care organizations are highly specialized and complex, so we may expect that adverse events will inevitably occur. Building a medical error reporting system to analyze reported preventable adverse events and learn from their results can help prevent such events from recurring. The medical errors reported to the Clinical Governance office of Mazandaran University of Medical Sciences (MazUMS) in 2011-2012 were analyzed. Methods and Materials: This is a descriptive retrospective study in which 18 public hospitals participated. The instrument of data collection was a checklist designed by the Ministry of Health of Iran. Variables were type of hospital, hospital unit, season, severity of event, and type of error. The data were analyzed with SPSS software. Results: Of 317,966 admissions, 182 medical errors (about 0.06%) were reported, most of them (51.6%) from non-teaching hospitals. Among hospital units, the highest frequency of medical error was in the surgical unit (42.3%). The frequency of medical error was also evaluated by type of error: the highest frequencies were inappropriate or absent care (37% in total) and medication error (28%). We also analyzed the effect of the error on the patient; the most frequent outcome was a minor effect (44.5%). Conclusion: The results showed a wide variety of errors. Encouraging reporting and revising the reporting process will yield more data for the prevention of such errors. PMID:25870528

  8. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed.

  9. Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.

    2010-01-01

    We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.

  10. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    2001-01-01

    Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…

  11. Bit error rate testing of a proof-of-concept model baseband processor

    NASA Technical Reports Server (NTRS)

    Stover, J. B.; Fujikawa, G.

    1986-01-01

    Bit-error-rate tests were performed on a proof-of-concept baseband processor. The BBP, which operates at an intermediate frequency in the C-Band, demodulates, demultiplexes, routes, remultiplexes, and remodulates digital message segments received from one ground station for retransmission to another. Test methods are discussed and test results are compared with the Contractor's test results.

  12. Error-rate prediction for programmable circuits: methodology, tools and studied cases

    NASA Astrophysics Data System (ADS)

    Velazco, Raoul

    2013-05-01

    This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing with the results of fault-injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted for embedding in the payload of a NASA scientific satellite. The accuracy of predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measurements from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) at Louvain-la-Neuve (Belgium).
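
    The prediction strategy combines two measured quantities, which the hedged sketch below makes explicit: a static cross-section from beam testing and a per-upset error probability from off-beam fault injection. The function name and all numbers are illustrative assumptions, not the paper's data.

    ```python
    def predicted_error_rate(sigma_static_cm2: float,
                             flux_particles_per_cm2_s: float,
                             n_errors: int, n_injected: int) -> float:
        """Predicted application error rate (errors/s).

        sigma_static_cm2: device SEU cross-section from radiation ground testing
        flux_particles_per_cm2_s: particle flux of the target environment
        n_errors / n_injected: fraction of injected SEUs that caused an
        application-level error in the fault-injection campaign
        """
        upsets_per_second = sigma_static_cm2 * flux_particles_per_cm2_s
        per_upset_error_probability = n_errors / n_injected
        return upsets_per_second * per_upset_error_probability

    # Illustrative numbers: 1e-7 cm^2 cross-section, 10 particles/(cm^2*s),
    # and 1200 application errors out of 100000 injected SEUs.
    print(predicted_error_rate(1e-7, 10.0, 1200, 100_000), "errors/s")
    ```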

  13. The Impact of Statistically Adjusting for Rater Effects on Conditional Standard Errors of Performance Ratings

    ERIC Educational Resources Information Center

    Raymond, Mark R.; Harik, Polina; Clauser, Brian E.

    2011-01-01

    Prior research indicates that the overall reliability of performance ratings can be improved by using ordinary least squares (OLS) regression to adjust for rater effects. The present investigation extends previous work by evaluating the impact of OLS adjustment on standard errors of measurement ("SEM") at specific score levels. In addition, a…

  14. POWER-ENHANCED MULTIPLE DECISION FUNCTIONS CONTROLLING FAMILY-WISE ERROR AND FALSE DISCOVERY RATES

    PubMed Central

    Peña, Edsel A.; Habiger, Joshua D.; Wu, Wensong

    2014-01-01

    Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures such as the Šidák procedure for FWER control and the Benjamini–Hochberg (BH) procedure for FDR control is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p-values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures could be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p-value based procedures whose theoretical validity is contingent on each of these p-value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. Proposed procedures are relevant in the analysis of high-dimensional “large M, small n” data sets arising in the natural, physical, medical, economic and social sciences, whose generation and creation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology. PMID:25018568
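
    For reference, the two baseline procedures named above are sketched below in their standard textbook forms; this is deliberately not the paper's power-enhanced method, only the comparison point it improves upon.

    ```python
    import numpy as np

    def sidak(pvals: np.ndarray, alpha: float = 0.05) -> np.ndarray:
        """Sidak FWER control: reject p_i <= 1 - (1 - alpha)**(1/m)."""
        return pvals <= 1.0 - (1.0 - alpha) ** (1.0 / len(pvals))

    def benjamini_hochberg(pvals: np.ndarray, alpha: float = 0.05) -> np.ndarray:
        """BH FDR control: find the largest k with p_(k) <= k*alpha/m."""
        m = len(pvals)
        order = np.argsort(pvals)
        passes = np.nonzero(pvals[order] <= alpha * np.arange(1, m + 1) / m)[0]
        reject = np.zeros(m, dtype=bool)
        if passes.size:
            reject[order[:passes[-1] + 1]] = True
        return reject

    p = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205])
    print("Sidak rejections:", int(sidak(p).sum()))               # 1
    print("BH rejections:   ", int(benjamini_hochberg(p).sum()))  # 2
    ```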

  15. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    NASA Astrophysics Data System (ADS)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10^-12) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10 log10(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10^-3 BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10^-3 BER.
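
    The conventional complexity penalty and the abstract's two quoted corrections at BER 10^-3 can be tabulated directly; the sketch below does only that (the paper's full BER-dependent formulas are not reproduced).

    ```python
    import math

    def conventional_penalty_db(L: int) -> float:
        """Conventional PAM-L power penalty relative to PAM-2: 10*log10(L-1)."""
        return 10.0 * math.log10(L - 1)

    # Overestimates quoted in the abstract at BER = 1e-3.
    for L, overestimate_db in ((4, 0.10), (8, 0.25)):
        conv = conventional_penalty_db(L)
        print(f"PAM-{L}: conventional {conv:.2f} dB, "
              f"~{conv - overestimate_db:.2f} dB at BER 1e-3")
    ```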

  16. High-rate artificial lift

    SciTech Connect

    Clegg, J.D.

    1988-03-01

    This paper summarizes the major considerations in the selection, design, installation, operation, or repair of high-rate artificial-lift systems. The major types of artificial lift - sucker-rod pumps, gas-lift systems, electrical submersible pumps, hydraulic pumps and jets, and hydraulic turbine-driven pumps - will be discussed. An extensive bibliography of artificial-lift papers is included.

  17. High Data Rate Instrument Study

    NASA Technical Reports Server (NTRS)

    Schober, Wayne; Lansing, Faiza; Wilson, Keith; Webb, Evan

    1999-01-01

    The High Data Rate Instrument Study was a joint effort between the Jet Propulsion Laboratory (JPL) and the Goddard Space Flight Center (GSFC). The objectives were to assess the characteristics of future high data rate Earth observing science instruments and then to assess the feasibility of developing data processing systems and communications systems required to meet those data rates. Instruments and technology were assessed for technology readiness dates of 2000, 2003, and 2006. The highest data rate instruments are hyperspectral and synthetic aperture radar instruments, which are capable of generating 3.2 Gigabits per second (Gbps) and 1.3 Gbps, respectively, with a technology readiness date of 2003. These instruments would require storage of 16.2 Terabits (Tb) of information (RF communications case of two orbits of data) or 40.5 Tb of information (optical communications case of five orbits of data) with a technology readiness date of 2003. Onboard storage capability in 2003 is estimated at 4 Tb; therefore, not all the data created can be stored without processing or compression. Of the 4 Tb of stored data, RF communications can only send about one third of the data to the ground, while optical communications is estimated at 6.4 Tb across all three technology readiness dates used in the study (2000, 2003, and 2006). The study includes analysis of the onboard processing and communications technologies at these three dates and potential systems to meet the high data rate requirements. In the 2003 case, 7.8% of the data can be stored and downlinked by RF communications, while 10% of the data can be stored and downlinked with optical communications. The study conclusion is that only 1 to 10% of the data generated by high data rate instruments will be sent to the ground from now through 2006 unless revolutionary changes in spacecraft design and operations, such as intelligent data extraction, are developed.

  18. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    Methods of evaluating the single-event-effect soft-error vulnerability of space instruments before launch have been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (errors/particle/cm^2), while the MTTF is approximately 110.7 h. PMID:27583533

  19. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    Methods of evaluating the single-event-effect soft-error vulnerability of space instruments before launch have been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (errors/particle/cm^2), while the MTTF is approximately 110.7 h. PMID:27583533

  20. Hypercorrection of high confidence errors in lexical representations.

    PubMed

    Iwaki, Nobuyoshi; Matsushima, Hiroko; Kodaira, Kazumasa

    2013-08-01

    Memory errors associated with higher confidence are more likely to be corrected than errors made with lower confidence, a phenomenon called the hypercorrection effect. This study investigated whether the hypercorrection effect occurs with phonological information of lexical representations. In Experiment 1, 15 participants performed a Japanese Kanji word-reading task, in which the words had several possible pronunciations. In the initial task, participants were required to read aloud each word and indicate their confidence in their response; this was followed by receipt of visual feedback of the correct response. A hypercorrection effect was observed, indicating generality of this effect beyond previous observations in memories based upon semantic or episodic representations. This effect was replicated in Experiment 2, in which 40 participants performed the same task as in Experiment 1. When the participant's ratings of the practical value of the words were controlled, a partial correlation between confidence and likelihood of later correcting the initial mistaken response was reduced. This suggests that the hypercorrection effect may be partially caused by an individual's recognition of the practical value of reading the words correctly. PMID:24422352

  1. Hypercorrection of high confidence errors in lexical representations.

    PubMed

    Iwaki, Nobuyoshi; Matsushima, Hiroko; Kodaira, Kazumasa

    2013-08-01

    Memory errors associated with higher confidence are more likely to be corrected than errors made with lower confidence, a phenomenon called the hypercorrection effect. This study investigated whether the hypercorrection effect occurs with phonological information of lexical representations. In Experiment 1, 15 participants performed a Japanese Kanji word-reading task, in which the words had several possible pronunciations. In the initial task, participants were required to read aloud each word and indicate their confidence in their response; this was followed by receipt of visual feedback of the correct response. A hypercorrection effect was observed, indicating generality of this effect beyond previous observations in memories based upon semantic or episodic representations. This effect was replicated in Experiment 2, in which 40 participants performed the same task as in Experiment 1. When the participant's ratings of the practical value of the words were controlled, a partial correlation between confidence and likelihood of later correcting the initial mistaken response was reduced. This suggests that the hypercorrection effect may be partially caused by an individual's recognition of the practical value of reading the words correctly.

  2. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
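
    The decimation step is the heart of the scheme, and it reduces to one matrix power over GF(2). The toy sketch below uses the primitive polynomial x^4 + x + 1 (period 15) purely for illustration; a practical BER-test generator would use a much longer register, and the patent's feedback-circuit minimization is not modeled.

    ```python
    import numpy as np

    # Companion matrix of x^4 + x + 1 over GF(2); state update s' = C s mod 2.
    C = np.array([[0, 0, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0]], dtype=np.uint8)

    def matpow_gf2(M: np.ndarray, e: int) -> np.ndarray:
        """Square-and-multiply matrix exponentiation over GF(2)."""
        R = np.eye(M.shape[0], dtype=np.uint8)
        while e:
            if e & 1:
                R = (R @ M) % 2
            M = (M @ M) % 2
            e >>= 1
        return R

    n, k = 2, 3                          # bits per cycle per LRSG, LRSG count
    D = matpow_gf2(C, n * k)             # decimation matrix: jump n*k steps

    seed = np.array([1, 0, 0, 0], dtype=np.uint8)
    serial, state = [], seed.copy()
    for _ in range(30):                  # two periods of the serial sequence
        serial.append(int(state[0]))
        state = (C @ state) % 2

    decimated, state = [], seed.copy()
    for _ in range(5):                   # one LRSG stepping n*k at a time
        decimated.append(int(state[0]))
        state = (D @ state) % 2

    assert decimated == serial[::n * k][:5]   # D reproduces the decimation
    print("decimated output:", decimated)
    ```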

  3. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  4. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.

  5. High spin rate magnetic controller for nanosatellites

    NASA Astrophysics Data System (ADS)

    Slavinskis, A.; Kvell, U.; Kulu, E.; Sünter, I.; Kuuste, H.; Lätt, S.; Voormansik, K.; Noorma, M.

    2014-02-01

    This paper presents a study of a high rate closed-loop spin controller that uses only electromagnetic coils as actuators. The controller is able to perform spin rate control and simultaneously align the spin axis with the Earth's inertial reference frame. It is implemented, optimised and simulated for the 1-unit CubeSat ESTCube-1 to fulfil its mission requirements: spin the satellite up to 360 deg/s around the z-axis and align its spin axis with the Earth's polar axis with a pointing error of less than 3°. The attitude of the satellite is determined using a magnetic field vector, a Sun vector and angular velocity. It is estimated using an Unscented Kalman Filter and controlled using three electromagnetic coils. The algorithm is tested in a simulation environment that includes models of the space environment and environmental disturbances, sensor and actuator emulation, attitude estimation, and a model to simulate the time delay caused by on-board calculations. In addition to the normal operation mode, analyses of reduced satellite functionality are performed: significant errors of attitude estimation due to non-operational Sun sensors; and limited actuator functionality due to two non-operational coils. A hardware-in-the-loop test is also performed to verify on-board software.

  6. The examination of commercial printing defects to assess common origin, batch variation, and error rate.

    PubMed

    LaPorte, Gerald M; Stephens, Joseph C; Beuchel, Amanda K

    2010-01-01

    The examination of printing defects, or imperfections, found on printed or copied documents has been recognized as a generally accepted approach for linking questioned documents to a common source. This research paper will highlight the results from two mutually exclusive studies. The first involved the examination and characterization of printing defects found in a controlled production run of 500,000 envelopes bearing text and images. It was concluded that printing defects are random occurrences and that morphological differences can be used to identify variations within the same production batch. The second part incorporated a blind study to assess the error rate of associating randomly selected envelopes from different retail locations to a known source. The examination was based on the comparison of printing defects in the security patterns found in some envelopes. The results demonstrated that it is possible to associate envelopes to a common origin with a 0% error rate.

  7. Safety Aspects of Pulsed Dose Rate Brachytherapy: Analysis of Errors in 1,300 Treatment Sessions

    SciTech Connect

    Koedooder, Kees; Wieringen, Niek van; Grient, Hans N.B. van der; Herten, Yvonne R.J. van; Pieters, Bradley R.; Blank, Leo

    2008-03-01

    Purpose: To determine the safety of pulsed-dose-rate (PDR) brachytherapy by analyzing errors and technical failures during treatment. Methods and Materials: More than 1,300 patients underwent treatment with PDR brachytherapy, using five PDR remote afterloaders. Most patients were treated with consecutive pulse schemes, also outside regular office hours. Tumors were located in the breast, esophagus, prostate, bladder, gynecology, anus/rectum, orbit, head/neck, with a miscellaneous group of small numbers, such as the lip, nose, and bile duct. Errors and technical failures were analyzed for 1,300 treatment sessions, for which nearly 20,000 pulses were delivered. For each tumor localization, the number and type of occurring errors were determined, as were which localizations were more error prone than others. Results: By routinely using the built-in dummy check source, only 0.2% of all pulses showed an error during the phase of the pulse when the active source was outside the afterloader. Localizations treated using flexible catheters had greater error frequencies than those treated with straight needles or rigid applicators. Disturbed pulse frequencies were in the range of 0.6% for the anus/rectum on a classic version 1 afterloader to 14.9% for orbital tumors using a version 2 afterloader. Exceeding the planned overall treatment time by >10% was observed in only 1% of all treatments. Patients received their dose as originally planned in 98% of all treatments. Conclusions: According to the experience in our institute with 1,300 PDR treatments, we found that PDR is a safe brachytherapy treatment modality, both during and outside of office hours.

  8. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  9. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  10. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation is proposed.
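
    To give a flavor of the selected approach, the sketch below XORs a data stream with a maximal-length PN cover sequence to raise the transition density. The 7-stage register and its taps (x^7 + x + 1) are an illustrative assumption, not the shuttle design; descrambling is the same XOR with a synchronized copy of the sequence.

    ```python
    def pn_sequence(length: int, state: int = 0b1111111) -> list:
        """Fibonacci LFSR output for x^7 + x + 1 (maximal length 127)."""
        out = []
        for _ in range(length):
            out.append(state & 1)
            feedback = ((state >> 0) ^ (state >> 1)) & 1
            state = (state >> 1) | (feedback << 6)
        return out

    def transitions(bits):
        return sum(a != b for a, b in zip(bits, bits[1:]))

    data = [0] * 16 + [1] * 16                 # worst case: one transition
    cover = pn_sequence(len(data))
    scrambled = [d ^ c for d, c in zip(data, cover)]

    print("transitions before:", transitions(data))       # 1
    print("transitions after: ", transitions(scrambled))  # roughly half the bits
    ```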

  11. Analysis of bit error rate for modified T-APPM under weak atmospheric turbulence channel

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Zhang, Qi; Wang, Yong-jun; Liu, Bo; Zhang, Li-jia; Wang, Kai-min; Xiao, Fei; Deng, Chao-gong

    2013-12-01

    T-APPM combines trellis-coded modulation (TCM) with amplitude pulse-position modulation (APPM) and has broad application prospects in space optical communication. Set partitioning in the standard T-APPM algorithm has optimal performance in a multi-carrier system, but whether this mapping is also optimal in APPM, a single-carrier system, is unknown. To address this question, we first study the atmospheric channel model under weak turbulence; we then propose a modified T-APPM algorithm that uses Gray code mapping instead of set-partitioning mapping; finally, we simulate both algorithms with the Monte Carlo method. Simulation results show that, at a bit error rate of 10^-4, the modified T-APPM algorithm achieves a 0.4 dB SNR gain, effectively improving the system's error performance.
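
    The mapping swap at the core of the modification is simple to state; the sketch below compares natural binary and Gray labels for 2^k symbol slots (the TCM trellis and the APPM channel model are omitted, and k = 3 is an arbitrary illustration).

    ```python
    def gray(n: int) -> int:
        """n-th reflected Gray code."""
        return n ^ (n >> 1)

    k = 3                                     # bits per symbol, 8 slots
    for n in range(2 ** k):
        print(f"slot {n}: natural {n:0{k}b}  gray {gray(n):0{k}b}")
    # Under Gray mapping, adjacent slots differ in exactly one bit, so the
    # dominant adjacent-slot detection errors cost one bit each rather than
    # several, which is the intuition behind the reported gain.
    ```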

  12. Effect of Vertical Rate Error on Recovery from Loss of Well Clear Between UAS and Non-Cooperative Intruders

    NASA Technical Reports Server (NTRS)

    Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2016-01-01

    are suppressed, for all vertical error rate thresholds examined. However, results also show that in roughly 35% of the encounters where a vertical maneuver was selected, forcing the UAS to do a horizontal maneuver instead increased the severity of the loss of well-clear for that encounter. Finally, results showed a small reduction in the number of severe losses of well-clear when the high performance UAS (2000 fpm climb and descent rate) was allowed to maneuver vertically and the vertical rate error was below 500 fpm. Overall, the results show that using a single vertical rate threshold is not advisable, and that limiting a UAS to horizontal maneuvers when vertical rate errors are above 175 fpm can make a UAS less safe about a third of the time. It is suggested that the hard limit be removed, and system manufacturers instructed to account for their own UAS performance, as well as vertical rate error and encounter geometry, when determining whether or not to provide vertical guidance to regain well-clear.

  13. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  14. Performance monitoring following total sleep deprivation: effects of task type and error rate.

    PubMed

    Renn, Ryan P; Cote, Kimberly A

    2013-04-01

    There is a need to understand the neural basis of performance deficits that result from sleep deprivation. Performance monitoring tasks generate response-locked event-related potentials (ERPs), generated from the anterior cingulate cortex (ACC) located in the medial surface of the frontal lobe that reflect error processing. The outcome of previous research on performance monitoring during sleepiness has been mixed. The purpose of this study was to evaluate performance monitoring in a controlled study of experimental sleep deprivation using a traditional Flanker task, and to broaden this examination using a response inhibition task. Forty-nine young adults (24 male) were randomly assigned to a total sleep deprivation or rested control group. The sleep deprivation group was slower on the Flanker task and less accurate on a Go/NoGo task compared to controls. General attentional impairments were evident in stimulus-locked ERPs for the sleep deprived group: P300 was delayed on Flanker trials and smaller to Go-stimuli. Further, N2 was smaller to NoGo stimuli, and the response-locked ERN was smaller on both tasks, reflecting neurocognitive impairment during performance monitoring. In the Flanker task, higher error rate was associated with smaller ERN amplitudes for both groups. Examination of ERN amplitude over time showed that it attenuated in the rested control group as error rate increased, but such habituation was not apparent in the sleep deprived group. Poor performing sleep deprived individuals had a larger Pe response than controls, possibly indicating perseveration of errors. These data provide insight into the neural underpinnings of performance failure during sleepiness and have implications for workplace and driving safety.

  15. High Resolution, High Frame Rate Video Technology

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The Workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) State of the art in the video system performance; (2) Development plan for the HHV system; (3) Advanced technology for image gathering, coding, and processing; (4) Data compression applied to HHV; (5) Data transmission networks; and (6) Results of the users' requirements survey conducted by NASA.

  16. Type I error rates for testing genetic drift with phenotypic covariance matrices: a simulation study.

    PubMed

    Prôa, Miguel; O'Higgins, Paul; Monteiro, Leandro R

    2013-01-01

    Studies of evolutionary divergence using quantitative genetic methods are centered on the additive genetic variance-covariance matrix (G) of correlated traits. However, estimating G properly requires large samples and complicated experimental designs. Multivariate tests for neutral evolution commonly replace average G by the pooled phenotypic within-group variance-covariance matrix (W) for evolutionary inferences, but this approach has been criticized due to the lack of exact proportionality between genetic and phenotypic matrices. In this study, we examined the consequence, in terms of type I error rates, of replacing average G by W in a test of neutral evolution that measures the regression slope between among-population variances and within-population eigenvalues (the Ackermann and Cheverud [AC] test) using a simulation approach to generate random observations under genetic drift. Our results indicate that the type I error rates for the genetic drift test are acceptable when using W instead of average G when the matrix correlation between the ancestral G and P is higher than 0.6, the average character heritability is above 0.7, and the matrices share principal components. For less-similar G and P matrices, the type I error rates would still be acceptable if the ratio between the number of generations since divergence and the effective population size (t/N(e)) is smaller than 0.01 (large populations that diverged recently). When G is not known in real data, a simulation approach to estimate expected slopes for the AC test under genetic drift is discussed.

  17. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  18. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Laboratory experiments performed at NASA Lewis measured the bit-error-rate (BER) degradation resulting from several types of amplitude response distortions. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory-simulated satellite channel. This paper presents the results of these experiments.

  19. Creation and implementation of department-wide structured reports: an analysis of the impact on error rate in radiology reports.

    PubMed

    Hawkins, C Matthew; Hall, Seth; Zhang, Bin; Towbin, Alexander J

    2014-10-01

    The purpose of this study was to evaluate and compare textual error rates and subtypes in radiology reports before and after implementation of department-wide structured reports. Randomly selected radiology reports that were generated following the implementation of department-wide structured reports were evaluated for textual errors by two radiologists. For each report, the text was compared to the corresponding audio file. Errors in each report were tabulated and classified. Error rates were compared to results from a prior study performed before the implementation of structured reports. Calculated error rates included the average number of errors per report, the average number of nongrammatical errors per report, the percentage of reports with an error, and the percentage of reports with a nongrammatical error. Identical versions of voice-recognition software were used for both studies. A total of 644 radiology reports were randomly evaluated as part of this study. There was a statistically significant reduction in the percentage of reports with nongrammatical errors (33% to 26%; p = 0.024). The likelihood of at least one missense omission error (omission errors that changed the meaning of a phrase or sentence) occurring in a report was significantly reduced from 3.5% to 1.2% (p = 0.0175). A statistically significant reduction in the likelihood of at least one commission error (retained statements from a standardized report that contradict the dictated findings or impression) occurring in a report was also observed (3.9% to 0.8%; p = 0.0007). Carefully constructed structured reports can help to reduce certain error types in radiology reports.

  20. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

    PubMed

    Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

    2014-06-01

    Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with a possibly strong effect on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of the time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
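
    The window-size trade-off at the center of the proposed algorithm can be seen in a few lines. In the sketch below, a white-noise "stimulus" drives a delayed, noisy "response", and a coherence-based Shannon rate R = ∫ log2(1 + SNR(f)) df is estimated with several window sizes; the system, noise level, and delay are assumptions used only to expose the effect, not the authors' data or their equilibration rule.

    ```python
    import numpy as np
    from scipy.signal import coherence

    fs, n, delay = 1000.0, 2 ** 15, 20        # Hz, samples, delay in samples
    rng = np.random.default_rng(2)
    x = rng.standard_normal(n)                              # stimulus
    y = np.roll(x, delay) + 0.5 * rng.standard_normal(n)    # delayed response

    for nperseg in (256, 1024, 4096):
        f, g2 = coherence(x, y, fs=fs, nperseg=nperseg)
        snr = g2 / (1.0 - g2)                 # SNR implied by coherence
        df = f[1] - f[0]
        rate = np.sum(np.log2(1.0 + snr)) * df    # bits/s
        print(f"nperseg={nperseg:5d}: {rate:8.1f} bits/s")
    # Short windows smear the delayed correlation (time-delay bias lowers the
    # estimate); long windows leave fewer averages (random error inflates
    # coherence and the estimate). The paper's algorithm searches for the
    # window size at which the two effects cancel.
    ```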

  1. The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.

    PubMed

    Fadaee, Shannon B; Migliaccio, Americo A

    2016-04-01

    The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation.

  3. Reproduced waveform and bit error rate analysis of a patterned perpendicular medium R/W channel

    NASA Astrophysics Data System (ADS)

    Suzuki, Y.; Saito, H.; Aoi, H.; Muraoka, H.; Nakamura, Y.

    2005-05-01

    Patterned media were investigated as candidates for 1 Tb/in² recording. In the case of recording with a patterned medium, the noise due to the irregularity of the pattern has to be taken into account instead of the medium noise due to grains. The bit error rate was studied for both continuous and patterned media to evaluate the advantages of patterning. The bit aspect ratio (BPI/TPI) was set to two for the patterned media and four for the continuous medium. The bit error rate (BER), calculated with a PR(1,1) channel simulator, indicated that for both double-layered and single-layered patterned media an improvement of the BER over conventional continuous media is expected when the patterning jitter is controlled to within 8%. When the system noise is large, the BER of single-layered patterned media deteriorates more rapidly than that of double-layered media, due to the higher boost in the PR(1,1) channel. It was found that making the land-length to bit-length ratio large was quite effective at improving the BER.

  4. Analytical Evaluation of Bit Error Rate Performance of a Free-Space Optical Communication System with Receive Diversity Impaired by Pointing Error

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2015-06-01

    Analysis is carried out to evaluate the conditional bit error rate, conditioned on a given value of pointing error, for a free-space optical (FSO) link with multiple receivers using equal gain combining (EGC). The probability density function (pdf) of the output signal-to-noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BER of SISO and SIMO FSO links is analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance results are evaluated for several values of pointing jitter parameters and numbers of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement in receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10^-10. It is also observed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.
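
    The averaging step described above can be sketched numerically. The conditional BER model (OOK with Q(√SNR)) and the gamma pdf below are stand-ins chosen for illustration; the paper derives the actual pdf of the EGC output SNR under pointing error.

      # Hedged sketch: average BER = integral of conditional BER over an
      # assumed pdf of the combiner output SNR (gamma pdf as a stand-in).
      import numpy as np
      from scipy import integrate, special, stats

      def conditional_ber(snr):
          return 0.5 * special.erfc(np.sqrt(snr) / np.sqrt(2))   # Q(sqrt(snr))

      def average_ber(mean_snr, shape=2.0):
          pdf = stats.gamma(a=shape, scale=mean_snr / shape).pdf
          val, _ = integrate.quad(lambda g: conditional_ber(g) * pdf(g),
                                  0, np.inf)
          return val

      for db in (10, 15, 20):
          print(f"mean SNR {db} dB -> average BER {average_ber(10**(db/10)):.3e}")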

  5. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    NASA Astrophysics Data System (ADS)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies, as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half-pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period 2008-2012. Decreases in the dimensions satisfy producer and consumer requirements for low power consumption, more data storage in a given space, faster clock speeds, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a microelectronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or soft error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  6. [Can new technologies reduce the rate of medication errors in adult intensive care?].

    PubMed

    Benoit, E; Beney, J

    2011-09-01

    In the intensive care environment, technology is omnipresent to ensure the monitoring and the administration of critical drugs to unstable patients. Since the early 2000s, computerized physician order entry (CPOE), bar-code-assisted medication administration (BCMA), "smart" infusion pumps (SIP), electronic medication administration records (eMAR), and automated dispensing systems (ADS) have been recommended to reduce medication errors. About ten years later, their adoption is rising but remains modest. The objective of this study is to determine the impact of these technologies on the rate of medication errors (ME) in adult intensive care. CPOE allows a strong and significant reduction in ME, especially the least critical ones. Only when a clinical decision support system (CDSS) is added can CPOE reduce serious errors; used alone, it could even increase them. The available studies do not have sufficient power to demonstrate the benefits of SIP or BCMA on ME. These devices do, however, reveal practices such as the overriding of alerts. Power or methodology problems and conflicting results make it impossible to determine the ability of ADS to reduce the incidence of ME in intensive care. The studies investigating these technologies are not very recent, are few in number, and have methodological weaknesses, which does not allow one to determine whether these technologies can reduce the incidence of ME in adult intensive care. Currently, the benefits appear to be limited, which may be explained by the complexity of integrating them into the care process. Special attention should be given to communication between caregivers, the human-computer interface, and caregivers' training.

  7. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: rates, drivers, and sources of error

    USGS Publications Warehouse

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-01-01

    Tropical and sub-tropical wetlands are considered to be globally important sources of greenhouse gases, but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. Accelerator mass spectrometry dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells nevertheless seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m⁻² yr⁻¹ for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (~0.2 mm yr⁻¹) tracked the correspondingly slow rise in sea level (0.35 mm yr⁻¹) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  8. Ancient documents bleed-through evaluation and its application for predicting OCR error rates

    NASA Astrophysics Data System (ADS)

    Rabeux, V.; Journet, N.; Domenger, J. P.

    2011-01-01

    This article presents a way to evaluate the bleed-through defect on very old document images. We design measures to quantify and evaluate the verso ink bleeding through the paper onto the recto side. Measuring the bleed-through defect allows us to perform statistical analyses that can predict the feasibility of different post-scan tasks. In this article we illustrate our measures by creating two OCR error rate prediction models based on bleed-through evaluation. Two models are proposed: one for Abbyy FineReader, a very powerful commercial OCR engine, and one for OCRopus, which is sponsored by Google. Both prediction models appear to be very accurate when calculating various statistical indicators.
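
    A minimal sketch of this kind of prediction model, assuming a single scalar bleed-through measure and ordinary least squares; the measure values and error rates below are invented for illustration.

      # Hedged sketch: fit a linear OCR error-rate predictor from a
      # bleed-through measure.  All data points are hypothetical.
      import numpy as np

      bleed = np.array([0.02, 0.05, 0.08, 0.12, 0.18, 0.25, 0.31])
      ocr_err = np.array([0.01, 0.02, 0.04, 0.07, 0.12, 0.19, 0.24])

      A = np.vstack([np.ones_like(bleed), bleed]).T
      (b0, b1), *_ = np.linalg.lstsq(A, ocr_err, rcond=None)
      pred = b0 + b1 * bleed
      r2 = 1 - np.sum((ocr_err - pred)**2) / np.sum((ocr_err - ocr_err.mean())**2)
      print(f"err_rate ~= {b0:.3f} + {b1:.3f} * bleed   (R^2 = {r2:.3f})")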

  9. Bit Error Rate Performance of Partially Coherent Dual-Branch SSC Receiver over Composite Fading Channels

    NASA Astrophysics Data System (ADS)

    Milić, Dejan N.; Đorđević, Goran T.

    2013-01-01

    In this paper, we study the effects of imperfect reference signal recovery on the bit error rate (BER) performance of a dual-branch switch-and-stay combining receiver over Nakagami-m fading/gamma shadowing channels with arbitrary parameters. The average BER of quaternary phase shift keying is evaluated under the assumption that the reference carrier signal is extracted from the received modulated signal. We compute numerical results illustrating the simultaneous influence of the average signal-to-noise ratio per bit, fading severity, shadowing, the phase-locked loop bandwidth-bit duration (BLTb) product, and the switching threshold on BER performance. The effects of BLTb on receiver performance under different channel conditions are emphasized. The optimal switching threshold, which minimizes the BER under given channel and receiver parameters, is determined.
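
    A sketch of the threshold-optimization step, assuming gamma-distributed branch SNRs (Nakagami-m power fading), a simplified per-decision SSC rule, and an ideal coherent conditional BER Q(√SNR) in place of the paper's partially coherent receiver model.

      # Hedged sketch: Monte-Carlo average BER of dual-branch SSC vs.
      # switching threshold, then a scan for the minimizing threshold.
      import numpy as np
      from scipy import special

      rng = np.random.default_rng(1)
      m, mean_snr, n = 2.0, 10.0, 200_000

      g1 = rng.gamma(m, mean_snr / m, n)       # branch SNRs (Nakagami-m power)
      g2 = rng.gamma(m, mean_snr / m, n)

      def avg_ber(threshold):
          out = np.where(g1 >= threshold, g1, g2)   # simplified SSC rule
          return np.mean(0.5 * special.erfc(np.sqrt(out / 2)))

      thresholds = np.linspace(0.1, 30, 60)
      bers = [avg_ber(t) for t in thresholds]
      best = thresholds[int(np.argmin(bers))]
      print(f"optimal threshold ~= {best:.1f}  (BER {min(bers):.3e})")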

  10. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were (1) to determine whether medical students' information-gathering patterns formed clusters of similar strategies, and, if so, (2) to calculate the…

  11. The Visual Motor Integration Test: High Interjudge Reliability, High Potential For Diagnostic Error.

    ERIC Educational Resources Information Center

    Snyder, Peggy P.; And Others

    1981-01-01

    Investigated scoring agreement among three different training levels of Visual Motor Integration Test (VMI) diagnosticians. Correlational data demonstrated high interexaminer reliabilities; however, there were gross errors in precision after raw scores had been converted into VMI age equivalent scores. (Author/RC)

  12. High-order averaging schemes with error bounds for thermodynamical properties calculations by molecular dynamics simulations.

    PubMed

    Cancès, Eric; Castella, François; Chartier, Philippe; Faou, Erwan; Le Bris, Claude; Legoll, Frédéric; Turinici, Gabriel

    2004-12-01

    We introduce high-order formulas for the computation of statistical averages based on the long-time simulation of molecular dynamics trajectories. In some cases, this allows us to significantly improve the convergence rate of time averages toward ensemble averages. We provide some numerical examples that show the efficiency of our scheme. When trajectories are approximated using symplectic integration schemes (such as velocity Verlet), we give some error bounds that allow one to fix the parameters of the computation in order to reach a given desired accuracy in the most efficient manner. PMID:15549912
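
    A minimal sketch of the underlying idea, assuming a velocity-Verlet harmonic oscillator and a Hann weight chosen only for illustration: a smoothly weighted time average of an observable converges to its long-time average faster than the plain average. The paper's specific high-order weight functions and error bounds are not reproduced here.

      # Hedged sketch: plain vs. smoothly weighted time averages of x^2
      # along a velocity-Verlet trajectory of a unit harmonic oscillator.
      import numpy as np

      def verlet_x2_average(T, dt=1e-3, weighted=False):
          steps = int(T / dt)
          x, v = 1.0, 0.0
          xs = np.empty(steps)
          for i in range(steps):
              v += -0.5 * dt * x            # velocity Verlet, force = -x
              x += dt * v
              v += -0.5 * dt * x
              xs[i] = x * x
          w = np.hanning(steps) if weighted else np.ones(steps)
          return np.sum(w * xs) / np.sum(w)

      exact = 0.5                           # long-time average of x^2
      for T in (10.0, 40.0, 160.0):
          plain = abs(verlet_x2_average(T) - exact)
          hann = abs(verlet_x2_average(T, weighted=True) - exact)
          print(f"T={T:6.1f}  plain err {plain:.2e}   weighted err {hann:.2e}")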

  14. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy

    SciTech Connect

    Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-15

    Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with ¹⁹²Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from ±5 to ±15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when

  15. Measuring error rates in genomic perturbation screens: gold standards for human functional genomics.

    PubMed

    Hart, Traver; Brown, Kevin R; Sircoulomb, Fabrice; Rottapel, Robert; Moffat, Jason

    2014-01-01

    Technological advancement has opened the door to systematic genetics in mammalian cells. Genome-scale loss-of-function screens can assay fitness defects induced by partial gene knockdown, using RNA interference, or complete gene knockout, using new CRISPR techniques. These screens can reveal the basic blueprint required for cellular proliferation. Moreover, comparing healthy to cancerous tissue can uncover genes that are essential only in the tumor; these genes are targets for the development of specific anticancer therapies. Unfortunately, progress in this field has been hampered by off-target effects of perturbation reagents and poorly quantified error rates in large-scale screens. To improve the quality of information derived from these screens, and to provide a framework for understanding the capabilities and limitations of CRISPR technology, we derive gold-standard reference sets of essential and nonessential genes, and provide a Bayesian classifier of gene essentiality that outperforms current methods on both RNAi and CRISPR screens. Our results indicate that CRISPR technology is more sensitive than RNAi and that both techniques have nontrivial false discovery rates that can be mitigated by rigorous analytical methods.
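
    A minimal sketch of a Bayes-factor essentiality score in the spirit described above, with simulated fold-change data and kernel-density likelihoods standing in for the trained reference distributions.

      # Hedged sketch: score genes by a log Bayes factor computed from
      # likelihoods trained on gold-standard essential / nonessential genes.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(2)
      ess_train = rng.normal(-3.0, 1.0, 300)   # dropout fold changes, essentials
      non_train = rng.normal(0.0, 1.0, 300)    # nonessentials

      like_ess = gaussian_kde(ess_train)
      like_non = gaussian_kde(non_train)

      def log_bayes_factor(fold_changes):
          """Sum of per-reagent log-likelihood ratios for one gene."""
          fc = np.asarray(fold_changes)
          return float(np.sum(np.log(like_ess(fc)) - np.log(like_non(fc))))

      print(log_bayes_factor([-2.8, -3.5, -2.1]))   # strongly essential-like
      print(log_bayes_factor([0.2, -0.4, 0.1]))     # nonessential-like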

  16. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
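
    A sketch of the simulation design, assuming AR(1) log-normal errors with a 40% coefficient of variation and a regression t-test on the log scale; the rejection rate under zero trend estimates the realized type I error and illustrates the inflation under positive autocorrelation. Setting trend to a nonzero value in the same function estimates power.

      # Hedged sketch: realized type I error of a trend test on simulated
      # log-normal count data with optional lag-1 autocorrelation.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)

      def rejection_rate(n_years=10, trend=0.0, rho=0.0, cv=0.4, reps=2000):
          sigma = np.sqrt(np.log(1 + cv**2))   # lognormal sigma from CV
          t = np.arange(n_years)
          rejections = 0
          for _ in range(reps):
              e = np.empty(n_years)
              e[0] = rng.normal(0, sigma)
              for i in range(1, n_years):      # stationary AR(1) errors
                  e[i] = rho * e[i-1] + rng.normal(0, sigma * np.sqrt(1 - rho**2))
              y = np.log(100.0) + np.log(1 + trend) * t + e
              rejections += stats.linregress(t, y).pvalue < 0.05
          return rejections / reps

      print("alpha, rho=0.0:", rejection_rate(rho=0.0))
      print("alpha, rho=0.5:", rejection_rate(rho=0.5))   # inflated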

  17. Accuracy of High-Rate GPS for Seismology

    NASA Technical Reports Server (NTRS)

    Elosegui, P.; Davis, J. L.; Oberlander, D.; Baena, R.; Ekstrom, G.

    2006-01-01

    We built a device for translating a GPS antenna on a positioning table to simulate the ground motions caused by an earthquake. The earthquake simulator is accurate to better than 0.1 mm in position, and provides the "ground truth" displacements for assessing the technique of high-rate GPS. We found that the root-mean-square error of the 1-Hz GPS position estimates over the 15-min duration of the simulated seismic event was 2.5 mm, with approximately 96% of the observations in error by less than 5 mm, independent of GPS antenna motion. The error spectrum of the GPS estimates is approximately flicker noise, with a 50% decorrelation time for the position error of approximately 1.6 s. We found that, for the particular event simulated, the spectrum of surface deformations exceeds the spectrum of time-dependent error in the GPS measurements within a finite band. More studies are required to determine whether a generally optimal bandwidth exists for a target group of seismic events.

  18. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  19. Error in estimation of rate and time inferred from the early amniote fossil record and avian molecular clocks.

    PubMed

    van Tuinen, Marcel; Hadly, Elizabeth A

    2004-08-01

    The best reconstructions of the history of life will use both molecular time estimates and fossil data. Errors in molecular rate estimation typically are unaccounted for, and no attempts have been made to quantify this uncertainty comprehensively. Here, focus is primarily on fossil calibration error because this error is least well understood and nearly universally disregarded. Our quantification of errors in the synapsid-diapsid calibration illustrates that although some error can derive from geological dating of sedimentary rocks, the absence of good stem fossils makes phylogenetic error the most critical. We therefore propose the use of calibration ages that are based on the first undisputed synapsid and diapsid. This approach yields minimum age estimates and standard errors of 306.1 ± 8.5 MYR for the divergence leading to birds and mammals. Because this upper bound overlaps with the recent use of 310 MYR, we do not support the notion that several metazoan divergence times are significantly overestimated because of serious miscalibration (sensu Lee 1999). However, the propagation of relevant errors reduces the statistical significance of the pre-K-T boundary diversification of many bird lineages despite retaining similar point time estimates.

  20. An intravenous medication safety system: preventing high-risk medication errors at the point of care.

    PubMed

    Hatcher, Irene; Sullivan, Mark; Hutchinson, James; Thurman, Susan; Gaffney, F Andrew

    2004-10-01

    Improving medication safety at the point of care--particularly for high-risk drugs--is a major concern of nursing administrators. The medication errors most likely to cause harm are administration errors related to infusion of high-risk medications. An intravenous medication safety system is designed to prevent high-risk infusion medication errors and to capture continuous quality improvement data for best practice improvement. Initial testing with 50 systems in 2 units at Vanderbilt University Medical Center revealed that, even in the presence of a fully mature computerized prescriber order-entry system, the new safety system averted 99 potential infusion errors in 8 months.

  1. Bit error rate analysis of free-space optical system with spatial diversity over strong atmospheric turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Krishnan, Prabu; Sriram Kumar, D.

    2014-12-01

    Free-space optical (FSO) communication is emerging as a captivating alternative for working around the obstacles of connectivity problems. It can be used for transmitting signals over common lands and properties that the sender or receiver may not own. The performance of an FSO system depends on random environmental conditions. The bit error rate (BER) performance of a differential phase shift keying FSO system is investigated. A distributed strong atmospheric turbulence channel with pointing error is considered for the BER analysis. Here, system models are developed for single-input single-output (SISO-FSO) and single-input multiple-output (SIMO-FSO) systems. Closed-form mathematical expressions are derived for the average BER with various combining schemes in terms of Meijer's G-function.
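
    Closed forms stated in terms of Meijer's G-function can be evaluated numerically; for example, mpmath exposes it as mpmath.meijerg. The snippet below only sanity-checks the identity G^{1,0}_{0,1}(x | -; 0) = exp(-x); a BER expression built from G-functions would be assembled and evaluated the same way.

      # Hedged sketch: evaluating a Meijer G-function with mpmath and
      # checking it against a known elementary identity.
      import mpmath as mp

      x = mp.mpf("2.5")
      g = mp.meijerg([[], []], [[0], []], x)   # G^{1,0}_{0,1}(x | -; 0)
      print(g, mp.exp(-x))                     # identical values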

  2. The effect of narrow-band digital processing and bit error rate on the intelligibility of ICAO spelling alphabet words

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, Astrid

    1987-08-01

    The recognition of ICAO spelling alphabet words (ALFA, BRAVO, CHARLIE, etc.) is compared with diagnostic rhyme test (DRT) scores for the same conditions. The voice conditions include unprocessed speech; speech processed through the DOD standard linear-predictive-coding algorithm operating at 2400 bit/s with random error rates of 0, 2, 5, 8, and 12 percent; and speech processed through an 800-bit/s pattern-matching algorithm. The results suggest that, with distinctive vocabularies, word intelligibility can be expected to remain high even when DRT scores fall into the poor range. However, once the DRT scores fall below 75 percent, the intelligibility can be expected to fall off rapidly; at DRT scores below 50, the recognition of a distinctive vocabulary should also fall below 50 percent.

  3. Scintillation index and bit error rate of hollow Gaussian beams in atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Qiao, Na; Zhang, Bin; Pan, Pingping; Dan, Youquan

    2011-06-01

    Based on the Huygens-Fresnel principle and the Rytov method, the on-axis scintillation index is derived for hollow Gaussian beams (HGBs) in weak turbulence. The relationship between bit error rate (BER) and scintillation index is found by considering only the effect of atmospheric turbulence, based on the probability distribution of intensity fluctuations, and the expression for the BER is obtained. Furthermore, the scintillation and BER properties of HGBs in turbulence are discussed in detail. The results show that the scintillation index and BER of HGBs depend on the propagation length, the structure constant of the refractive index fluctuations of the turbulence, the wavelength, the beam order, and the waist width of the fundamental Gaussian beam. The scintillation index increases with propagation length in turbulence, and it increases more slowly for HGBs of higher beam order. The BER of HGBs increases rapidly with propagation length in turbulence. For the same propagation distance, the BER of the fundamental Gaussian beam is the greatest, and that of higher-order HGBs is smaller.
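
    A sketch of the BER-from-scintillation-index step, assuming unit-mean lognormal intensity fluctuations and one common OOK convention for the conditional BER, Q(SNR·I/2); the paper's exact expressions may differ.

      # Hedged sketch: average BER as a lognormal-weighted quadrature of a
      # conditional OOK BER; s2 is the scintillation index, snr0 the mean SNR.
      import numpy as np
      from scipy import integrate, special

      def avg_ber(snr0, s2):
          sig2 = np.log(1 + s2)                 # lognormal variance, unit mean
          mu, sig = -0.5 * sig2, np.sqrt(sig2)
          def integrand(i):
              pdf = np.exp(-(np.log(i) - mu)**2 / (2 * sig2)) / (
                  i * sig * np.sqrt(2 * np.pi))
              return 0.5 * special.erfc(snr0 * i / (2 * np.sqrt(2))) * pdf
          val, _ = integrate.quad(integrand, 0, np.inf, epsabs=1e-15, limit=200)
          return val

      for s2 in (0.05, 0.2, 0.5):
          print(f"scintillation index {s2}: average BER {avg_ber(10.0, s2):.3e}")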

  4. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, W. S.; Burkhart, J. F.; Kylling, A.

    2015-08-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
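
    The direct-beam part of this error can be reproduced with one line of trigonometry: for a sensor tilted by τ in the solar azimuth plane, the fractional error is cos(θz − τ)/cos(θz) − 1. The values below come out slightly above the paper's totals because the diffuse component, which dilutes the error, is ignored here.

      # Worked check of the direct-beam tilt error at 60 degrees solar zenith.
      import math

      sza = 60.0
      for tilt in (1.0, 3.0, 5.0):
          err = math.cos(math.radians(sza - tilt)) / math.cos(math.radians(sza)) - 1
          print(f"tilt {tilt} deg: direct-beam error {100 * err:+.1f} %")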

  5. Serialized quantum error correction protocol for high-bandwidth quantum repeaters

    NASA Astrophysics Data System (ADS)

    Glaudell, A. N.; Waks, E.; Taylor, J. M.

    2016-09-01

    Advances in single-photon creation, transmission, and detection suggest that sending quantum information over optical fibers may have losses low enough to be correctable using a quantum error correcting code (QECC). Such error-corrected communication is equivalent to a novel quantum repeater scheme, but crucial questions regarding implementation and system requirements remain open. Here we show that long-range entangled bit generation with rates approaching 10^8 entangled bits per second may be possible using a completely serialized protocol, in which photons are generated, entangled, and error corrected via sequential, one-way interactions with as few matter qubits as possible. Provided loss and error rates of the required elements are below the threshold for quantum error correction, this scheme demonstrates improved performance over transmission of single photons. We find improvement in entangled bit rates at large distances using this serial protocol and various QECCs. In particular, at a total distance of 500 km with fiber loss rates of 0.3 dB km^-1, logical gate failure probabilities of 10^-5, photon creation and measurement error rates of 10^-5, and a gate speed of 80 ps, we find the maximum single repeater chain entangled bit rates of 51 Hz at a 20 m node spacing and 190 000 Hz at a 43 m node spacing for the {[[3,1,2

  7. Effect of audio bandwidth and bit error rate on PCM, ADPCM and LPC speech coding algorithm intelligibility

    NASA Astrophysics Data System (ADS)

    McKinley, Richard L.; Moore, Thomas J.

    1987-02-01

    The effects of audio bandwidth and bit error rate on the speech intelligibility of voice coders in noise are described and quantified. Three different speech coding techniques were investigated: pulse code modulation (PCM), adaptive differential pulse code modulation (ADPCM), and linear predictive coding (LPC). Speech intelligibility was measured in realistic acoustic noise environments by a panel of 10 subjects performing the Modified Rhyme Test. Summary data are presented, along with planned future research on optimizing the audio bandwidth versus bit error rate tradeoff for best speech intelligibility.

  8. Estimation of genotyping error rate from repeat genotyping, unintentional recaptures and known parent-offspring comparisons in 16 microsatellite loci for brown rockfish (Sebastes auriculatus).

    PubMed

    Hess, Maureen A; Rhydderch, James G; LeClair, Larry L; Buckley, Raymond M; Kawase, Mitsuhiro; Hauser, Lorenz

    2012-11-01

    Genotyping errors are present in almost all genetic data and can affect biological conclusions of a study, particularly for studies based on individual identification and parentage. Many statistical approaches can incorporate genotyping errors, but usually need accurate estimates of error rates. Here, we used a new microsatellite data set developed for brown rockfish (Sebastes auriculatus) to estimate genotyping error using three approaches: (i) repeat genotyping 5% of samples, (ii) comparing unintentionally recaptured individuals and (iii) Mendelian inheritance error checking for known parent-offspring pairs. In each data set, we quantified genotyping error rate per allele due to allele drop-out and false alleles. Genotyping error rate per locus revealed an average overall genotyping error rate by direct count of 0.3%, 1.5% and 1.7% (0.002, 0.007 and 0.008 per allele error rate) from replicate genotypes, known parent-offspring pairs and unintentionally recaptured individuals, respectively. By direct-count error estimates, the recapture and known parent-offspring data sets revealed an error rate four times greater than estimated using repeat genotypes. There was no evidence of correlation between error rates and locus variability for all three data sets, and errors appeared to occur randomly over loci in the repeat genotypes, but not in recaptures and parent-offspring comparisons. Furthermore, there was no correlation in locus-specific error rates between any two of the three data sets. Our data suggest that repeat genotyping may underestimate true error rates and may not estimate locus-specific error rates accurately. We therefore suggest using methods for error estimation that correspond to the overall aim of the study (e.g. known parent-offspring comparisons in parentage studies).
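
    A minimal sketch of the direct-count estimate from repeat genotyping, assuming genotypes stored as allele pairs per locus; alleles are compared positionally after sorting, and the error rate is mismatching alleles over allele comparisons.

      # Hedged sketch: per-allele genotyping error rate by direct count
      # between two replicate genotypings of the same individuals/loci.
      def per_allele_error_rate(rep1, rep2):
          mismatches = comparisons = 0
          for g1, g2 in zip(rep1, rep2):
              a1, a2 = sorted(g1), sorted(g2)
              comparisons += 2
              mismatches += (a1[0] != a2[0]) + (a1[1] != a2[1])
          return mismatches / comparisons

      rep1 = [(120, 124), (98, 98), (150, 154), (110, 112)]
      rep2 = [(120, 124), (98, 102), (150, 154), (110, 112)]
      print(per_allele_error_rate(rep1, rep2))   # 1 mismatch / 8 alleles = 0.125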

  9. Highly porous thermal protection materials: Modelling and prediction of the methodical experimental errors

    NASA Astrophysics Data System (ADS)

    Cherepanov, Valery V.; Alifanov, Oleg M.; Morzhukhina, Alena V.; Budnik, Sergey A.

    2016-11-01

    The formation mechanisms and the main factors affecting the systematic error of thermocouples were investigated. According to the results of experimental studies and mathematical modelling, it was established that in highly porous heat-resistant materials for aerospace applications the thermocouple errors are determined by two competing mechanisms, with a correlation between the errors and the difference between the radiative and conductive heat fluxes. A comparative analysis was carried out, and some features of the methodical error formation related to the distance from the heated surface were established.

  10. People's Hypercorrection of High-Confidence Errors: Did They Know It All Along?

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2011-01-01

    This study investigated the "knew it all along" explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when people are given corrective feedback, errors that are committed with high confidence are easier to correct than low-confidence errors. Experiment 1 showed that people were more likely to claim that…

  11. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  12. GaAlAs laser temperature effects on the BER performance of a gigabit PCM fiber system. [Bit Error Rate

    NASA Technical Reports Server (NTRS)

    Eng, S. T.; Bergman, L. A.

    1982-01-01

    The performance of a gigabit pulse-code modulation fiber system has been investigated as a function of laser temperature. The bit error rate shows an improvement for temperature in the range of -15 C to -35 C. A tradeoff seems possible between relaxation oscillation, rise time, and signal-to-noise ratio.

  13. Comparison of Self-Scoring Error Rate for SDS (Self Directed Search) (1970) and the Revised SDS (1977).

    ERIC Educational Resources Information Center

    Price, Gary E.; And Others

    A comparison of the self-scoring error rate for the Self Directed Search (SDS) and the revised SDS is presented. The subjects were college freshmen and sophomores who participated in career planning as a part of their orientation program, and a career workshop. Subjects (N=190 in the first study and N=84 in the second study) were then randomly assigned to the SDS…

  14. General closed-form bit-error rate expressions for coded M-distributed atmospheric optical communications.

    PubMed

    Balsells, José M Garrido; López-González, Francisco J; Jurado-Navas, Antonio; Castillo-Vázquez, Miguel; Notario, Antonio Puerta

    2015-07-01

    In this Letter, general closed-form expressions for the average bit error rate in atmospheric optical links employing rate-adaptive channel coding are derived. To characterize the irradiance fluctuations caused by atmospheric turbulence, the Málaga or M distribution is employed. The proposed expressions allow us to evaluate the performance of atmospheric optical links employing channel coding schemes such as OOK-GSc, OOK-GScc, HHH(1,13), or vw-MPPM with different coding rates and under all regimes of turbulence strength. A hyper-exponential fitting technique applied to the conditional bit error rate is used in all cases. The proposed closed-form expressions are validated by Monte-Carlo simulations.
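
    A sketch of the hyper-exponential fitting step, using the Gaussian OOK curve Q(√γ) as a stand-in for the conditional BER; a two-term fit of this form is what makes a subsequent average over the turbulence-distributed irradiance analytically tractable.

      # Hedged sketch: fit BER(g) ~= a1*exp(-b1*g) + a2*exp(-b2*g) to a
      # conditional BER curve; p0 follows the Chiani-style Q approximation.
      import numpy as np
      from scipy import optimize, special

      g = np.linspace(0.01, 20, 400)
      ber = 0.5 * special.erfc(np.sqrt(g / 2))           # Q(sqrt(g))

      def hyperexp(g, a1, b1, a2, b2):
          return a1 * np.exp(-b1 * g) + a2 * np.exp(-b2 * g)

      popt, _ = optimize.curve_fit(hyperexp, g, ber,
                                   p0=[1/12, 0.5, 0.25, 2/3])
      print("fit params:", np.round(popt, 4))
      print("max abs fit error:", np.abs(hyperexp(g, *popt) - ber).max())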

  16. Errors in the estimation of arterial wall shear rates that result from curve fitting of velocity profiles.

    PubMed

    Lou, Z; Yang, W J; Stein, P D

    1993-01-01

    An analysis was performed to determine the error that results from the estimation of the wall shear rates based on linear and quadratic curve-fittings of the measured velocity profiles. For steady, fully developed flow in a straight vessel, the error for the linear method is linearly related to the distance between the probe and the wall, dr1, and the error for the quadratic method is zero. With pulsatile flow, especially a physiological pulsatile flow in a large artery, the thickness of the velocity boundary layer, delta is small, and the error in the estimation of wall shear based on curve fitting is much higher than that with steady flow. In addition, there is a phase lag between the actual shear rate and the measured one. In oscillatory flow, the error increases with the distance ratio dr1/delta and, for a quadratic method, also with the distance ratio dr2/dr1, where dr2 is the distance of the second probe from the wall. The quadratic method has a distinct advantage in accuracy over the linear method when dr1/delta < 1, i.e. when the first velocity point is well within the boundary layer. The use of this analysis in arterial flow involves many simplifications, including Newtonian fluid, rigid walls, and the linear summation of the harmonic components, and can provide more qualitative than quantitative guidance. PMID:8478343
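
    A worked example for the steady-flow case, assuming a Poiseuille profile: the linear estimate through the wall and the first probe has an error that grows linearly with dr1, while a quadratic fit through the wall and two probes recovers the wall shear rate exactly, as stated above.

      # Worked example: wall shear rate of u(r) = 2U(1 - (r/R)^2),
      # exact |du/dr| at the wall = 4U/R; illustrative values for R, U.
      import numpy as np

      R, U = 0.5, 10.0
      exact = 4 * U / R
      u = lambda r: 2 * U * (1 - (r / R)**2)

      for dr1 in (0.01, 0.05, 0.10):        # distance of first probe from wall
          r1, r2 = R - dr1, R - 2 * dr1
          linear = u(r1) / dr1              # line through wall point and probe 1
          coeffs = np.polyfit([R, r1, r2], [0.0, u(r1), u(r2)], 2)
          quad = abs(np.polyval(np.polyder(coeffs), R))
          print(f"dr1={dr1:4.2f}: linear err {abs(linear - exact)/exact:6.1%}, "
                f"quadratic err {abs(quad - exact)/exact:.2e}")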

  17. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  18. Internal pressure gradient errors in σ-coordinate ocean models in high resolution fjord studies

    NASA Astrophysics Data System (ADS)

    Berntsen, Jarle; Thiem, Øyvind; Avlesen, Helge

    2015-08-01

    Terrain following ocean models are today applied in coastal areas and fjords where the topography may be very steep. Recent advances in high performance computing facilitate model studies with very high spatial resolution. In general, numerical discretization errors tend to zero with the grid size. However, in fjords and near the coast the slopes may be very steep, and the internal pressure gradient errors associated with σ-models may be significant even in high resolution studies. The internal pressure gradient errors are due to errors when estimating the density gradients in σ-models, and these errors are investigated for two idealized test cases and for the Hardanger fjord in Norway. The methods considered are the standard second order method and a recently proposed method that is balanced such that the density gradients are zero for the case ρ = ρ(z) where ρ is the density and z is the vertical coordinate. The results show that by using the balanced method, the errors may be reduced considerably also for slope parameters larger than the maximum suggested value of 0.2. For the Hardanger fjord case initialized with ρ = ρ(z) , the errors in the results produced with the balanced method are orders of magnitude smaller than the corresponding errors in the results produced with the second order method.

  19. Compensation of spectral and RF errors in swept-source OCT for high extinction complex demodulation.

    PubMed

    Siddiqui, Meena; Tozburun, Serhat; Zhang, Ellen Ziyi; Vakoc, Benjamin J

    2015-03-01

    We provide a framework for compensating errors within passive optical quadrature demodulation circuits used in swept-source optical coherence tomography (OCT). Quadrature demodulation allows for detection of both the real and imaginary components of an interference fringe, and this information separates signals from positive and negative depth spaces. To achieve a high extinction (∼60 dB) between these positive and negative signals, the demodulation error must be less than 0.1% in amplitude and phase. It is difficult to construct a system that achieves this low error across the wide spectral and RF bandwidths of high-speed swept-source systems. In a prior work, post-processing methods for removing residual spectral errors were described. Here, we identify the importance of a second class of errors originating in the RF domain, and present a comprehensive framework for compensating both spectral and RF errors. Using this framework, extinctions >60 dB are demonstrated. A stability analysis shows that calibration parameters associated with RF errors are accurate for many days, while those associated with spectral errors must be updated prior to each imaging session. Empirical procedures to derive both RF and spectral calibration parameters simultaneously and to update spectral calibration parameters are presented. These algorithms provide the basis for using passive optical quadrature demodulation circuits with high speed and wide-bandwidth swept-source OCT systems.

  1. Statistics-based reconstruction method with high random-error tolerance for integral imaging.

    PubMed

    Zhang, Juan; Zhou, Liqiu; Jiao, Xiaoxue; Zhang, Lei; Song, Lipei; Zhang, Bo; Zheng, Yi; Zhang, Zan; Zhao, Xing

    2015-10-01

    A three-dimensional (3D) digital reconstruction method for integral imaging with high random-error tolerance based on statistics is proposed. By statistically analyzing the points reconstructed by triangulation from all corresponding image points in an elemental image array, 3D reconstruction with high random-error tolerance can be realized. To simulate the impact of random errors, random offsets with different error levels are added to different numbers of elemental images in simulation and optical experiments. The results of the simulation and optical experiments showed that the proposed statistics-based reconstruction method has more stable and better reconstruction accuracy than the conventional reconstruction method. It is verified that the proposed method can effectively reduce the impact of random errors on 3D reconstruction in integral imaging. The method is simple and very helpful to the development of integral imaging technology.
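
    A minimal sketch of the statistics-based idea, with the median standing in for whatever robust statistic the authors use: triangulated points from many elemental images are pooled, and the robust estimate resists the images corrupted by large random offsets.

      # Hedged sketch: robust pooling of per-elemental-image triangulations.
      import numpy as np

      rng = np.random.default_rng(4)
      true_point = np.array([1.0, 2.0, 30.0])

      # Simulated triangulated points: Gaussian noise everywhere, plus
      # large random offsets in 20% of the elemental images.
      pts = true_point + rng.normal(0, 0.05, (50, 3))
      bad = rng.choice(50, 10, replace=False)
      pts[bad] += rng.normal(0, 2.0, (10, 3))

      for name, est in (("mean", pts.mean(axis=0)),
                        ("median", np.median(pts, axis=0))):
          print(name, np.round(est, 3),
                "error", np.linalg.norm(est - true_point))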

  2. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  4. Analysis of 454 sequencing error rate, error sources, and artifact recombination for detection of low-frequency drug resistance mutations in HIV-1 DNA

    PubMed Central

    2013-01-01

    Background: 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results: We constructed two HIV-1 RT clones. Clone A was a wild type sequence. Clone B was identical to clone A except it contained 13 introduced drug resistant mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Although less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. Conclusion

  5. Rater Stringency Error in Performance Rating: A Contrast of Three Models.

    ERIC Educational Resources Information Center

    Cason, Gerald J.; Cason, Carolyn L.

    The use of three remedies for errors in the measurement of ability that arise from differences in rater stringency is discussed. Models contrasted are: (1) Conventional; (2) Handicap; and (3) deterministic Rater Response Theory (RRT). General model requirements, power, bias of measures, computing cost, and complexity are contrasted. Contrasts are…

  6. A Comparison of Type I Error Rates of Alpha-Max with Established Multiple Comparison Procedures.

    ERIC Educational Resources Information Center

    Barnette, J. Jackson; McLean, James E.

    J. Barnette and J. McLean (1996) proposed a method of controlling Type I error in pairwise multiple comparisons after a significant omnibus F test. This procedure, called Alpha-Max, is based on a sequential cumulative probability accounting procedure in line with Bonferroni inequality. A missing element in the discussion of Alpha-Max was the…

  7. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.

  8. Multichannel analyzers at high rates of input

    NASA Technical Reports Server (NTRS)

    Rudnick, S. J.; Strauss, M. G.

    1969-01-01

    Multichannel analyzer, used with a gating system incorporating pole-zero compensation, pile-up rejection, and baseline-restoration, achieves good resolution at high rates of input. It improves resolution, reduces tailing and rate-contributed continuum, and eliminates spectral shift.

  9. High-rate lithium thionyl chloride cells

    NASA Technical Reports Server (NTRS)

    Goebel, F.

    1982-01-01

    A high-rate C cell with disc electrodes was developed to demonstrate current rates which are comparable to other primary systems. The tests performed established the limits of abuse beyond which the cell becomes hazardous. Tests include: impact, shock, and vibration tests; temperature cycling; and salt water immersion of fresh cells.

  10. ISS Update: High Rate Communications System

    NASA Video Gallery

    ISS Update Commentator Pat Ryan interviews Diego Serna, Communications and Tracking Officer, about the High Rate Communications System. Questions? Ask us on Twitter @NASA_Johnson and include the ha...

  11. Lithium thionyl chloride high rate discharge

    NASA Technical Reports Server (NTRS)

    Klinedinst, K. A.

    1980-01-01

    Improvements in high rate lithium thionyl chloride power technology achieved by varying the electrolyte composition, operating temperature, cathode design, and cathode composition are discussed. Discharge capacities are plotted as a function of current density, cell voltage, and temperature.

  12. Rates of assay success and genotyping error when single nucleotide polymorphism genotyping in non-model organisms: a case study in the Antarctic fur seal.

    PubMed

    Hoffman, J I; Tucker, R; Bridgett, S J; Clark, M S; Forcada, J; Slate, J

    2012-09-01

    Although single nucleotide polymorphisms (SNPs) are increasingly being recognized as powerful molecular markers, their application to non-model organisms can bring significant challenges. Among these are imperfect conversion rates of assays designed from in silico resources and the enhanced potential for genotyping error relative to pre-validated, highly optimized human SNPs. To explore these issues, we used Illumina's GoldenGate assay to genotype 480 Antarctic fur seal (Arctocephalus gazella) individuals at 144 putative SNPs derived from a 454 transcriptome assembly. One hundred and thirty-five polymorphic SNPs (93.8%) were automatically validated by the program GenomeStudio, and the initial genotyping error rate, estimated from nine replicate samples, was 0.004 per reaction. However, an almost tenfold further reduction in the error rate was achieved by excluding 31 loci (21.5%) that exhibited unclear clustering patterns, manually editing clusters to allow rescoring of ambiguous or incorrect genotypes, and excluding 18 samples (3.8%) with unreliable genotypes. After stringent quality filtering, we also found a counter-intuitive negative relationship between in silico minor allele frequency and the conversion rate, suggesting that some of our assays may have been designed from paralogous loci. Nevertheless, we obtained over 45 000 individual SNP genotypes with a final error rate of 0.0005, indicating that the GoldenGate assay is eminently capable of generating large, high-quality data sets for non-model organisms. This has positive implications for future studies of the evolutionary, behavioural and conservation genetics of natural populations.
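    As a minimal illustration of the replicate-based estimate described above (a sketch, not the study's code; the genotype calls and sample pairing are hypothetical), the error rate can be tallied from discordant calls between repeat runs of the same individuals:

        # Sketch: per-reaction genotyping error rate from replicate samples.
        from itertools import combinations

        def error_rate(replicate_calls):
            """replicate_calls: list of genotype-call lists, one per replicate
            of the same individual (None = missing call)."""
            mismatches, comparisons = 0, 0
            for a, b in combinations(replicate_calls, 2):
                for g1, g2 in zip(a, b):
                    if g1 is None or g2 is None:
                        continue  # skip loci without a call in both replicates
                    comparisons += 1
                    if g1 != g2:
                        mismatches += 1
            return mismatches / comparisons if comparisons else float("nan")

        rep1 = ["AA", "AG", "GG", "AA", None]
        rep2 = ["AA", "AG", "GG", "AG", "CC"]
        print(error_rate([rep1, rep2]))  # 0.25 here; ~0.004 in the study above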

  13. Research on machining error compensation in high-precision surface grinding machine for optical aspheric elements

    NASA Astrophysics Data System (ADS)

    Ke, Xiaolong; Guo, Yinbiao; Zhang, Shihan; Huang, Hao

    2010-10-01

    Using aspheric components in an optical system can correct optical aberrations, achieve high imaging quality, improve optical performance, simplify the system structure, and reduce system volume. High-precision surface grinding is currently an important approach to machining optical aspheric elements. However, because of the characteristics of optical aspheric elements, the machining process places high demands on the overall performance of the grinding machine, and an ideal machining result is difficult to achieve. Taking generality and efficiency into account, this paper presents a machining-error compensation method for a high-precision surface grinding machine, applicable to all kinds of optical aspheric elements. After compensation, the machining accuracy of the grinding machine can reach 2 μm over a 200×200 mm area. The research is based on a self-developed NC surface grinding machine. First, the paper introduces the machining principle for optical aspheric elements on the grinding machine, and the error sources are analyzed. Then, using contacting and non-contacting measurement sensors, self-designed measurement software performs on-machine measurement of the ground workpiece and fits the surface accuracy and machining errors. Self-designed compensation software implements the compensation algorithm and translates the compensation data into G-code, which the grinding machine executes to achieve compensation machining. Finally, machining errors before and after compensation are compared experimentally to validate the accuracy of compensation machining.
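    The compensation step described above amounts to pre-distorting the tool path by the fitted error before emitting machine code. A minimal sketch of that idea (not the authors' software; the profile, feed rate, and axis words are hypothetical):

        # Sketch: subtract a fitted error profile from the target profile
        # and emit corrected G-code moves.
        import numpy as np

        x = np.linspace(0.0, 200.0, 9)             # mm, measurement positions
        target_z = 0.001 * (x - 100.0) ** 2 / 10   # nominal aspheric depth (mm)
        measured_err = 0.002 * np.sin(x / 30.0)    # fitted machining error (mm)

        compensated_z = target_z - measured_err    # pre-distort the tool path
        for xi, zi in zip(x, compensated_z):
            print(f"G01 X{xi:.3f} Z{zi:.4f} F500")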

  14. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of on-orbit low frequency error analysis and calibration, which includes detection of optical axis angle variation of the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze how the low frequency error varies. Thirdly, we use relative calibration and information fusion among star sensors to unify the attitude datum and obtain high precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model describes the low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is clearly improved after the step-wise calibration.

  15. Tracking in high-frame-rate imaging.

    PubMed

    Wu, Shih-Ying; Wang, Shun-Li; Li, Pai-Chi

    2010-01-01

    Speckle tracking has been used for motion estimation in ultrasound imaging. Unlike conventional Doppler techniques, which are angle-dependent, speckle tracking can be utilized to estimate velocity vectors. However, the accuracy of speckle-tracking methods is limited by speckle decorrelation, which is related to the displacement between two consecutive images, and, hence, combining high-frame-rate imaging and speckle tracking could potentially increase the accuracy of motion estimation. However, the lack of transmit focusing may also affect the tracking results and the high computational requirement may be problematic. This study therefore assessed the performance of high-frame-rate speckle tracking and compared it with conventional focusing. The effects of the signal-to-noise ratio (SNR), bulk motion, and velocity gradients were investigated in both experiments and simulations. The results show that high-frame-rate speckle tracking can achieve high accuracy if the SNR is sufficiently high. In addition, its computational complexity is acceptable because smaller search windows can be used due to the displacements between frames generally being smaller during high-frame-rate imaging. Speckle decorrelation resulting from velocity gradients within a sample volume is also not as significant during high-frame-rate imaging. PMID:20690428

  16. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
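    A minimal sketch of fitting a GAM of development percent on length and stage, assuming the third-party pygam package; the data below are fabricated and far simpler than the 2559-individual data set used in the study:

        # Sketch: a generalized additive model with a smooth term per input.
        import numpy as np
        from pygam import LinearGAM, s

        rng = np.random.default_rng(0)
        length = rng.uniform(2, 20, 300)               # larval length (mm)
        stage = rng.integers(1, 6, 300).astype(float)  # coded developmental stage
        dev_pct = 4 * length + 8 * stage + rng.normal(0, 5, 300)

        X = np.column_stack([length, stage])
        gam = LinearGAM(s(0) + s(1)).fit(X, dev_pct)   # smooth in each input
        pred = gam.predict(X)
        print("mean abs error:", np.mean(np.abs(pred - dev_pct)))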

  17. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.7, 8.1, and 13.5% error, respectively, into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
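    The direct-beam part of this tilt error follows from simple geometry: a tilted sensor sees the sun at a different effective zenith angle. A minimal sketch (diffuse light ignored, so the numbers are only illustrative, not the paper's totals):

        # Sketch: worst-case direct-beam error for a sensor tilted toward the sun.
        import math

        def direct_tilt_error(sza_deg, tilt_deg):
            """Relative error of a sensor tilted toward the sun vs. a level one."""
            level = math.cos(math.radians(sza_deg))
            tilted = math.cos(math.radians(sza_deg - tilt_deg))
            return tilted / level - 1.0

        for tilt in (1, 3, 5):
            print(tilt, f"{100 * direct_tilt_error(60, tilt):.1f}%")
        # ~3.0%, 8.9%, 14.7%: same order as the 2.7/8.1/13.5% totals quoted above.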

  18. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  19. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency we are taking advantage of the CUDA texture memory using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net .
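    The spectrum-membership query at the heart of spectral alignment can be served by a Bloom filter. A minimal CPU sketch of that data structure (the paper's version lives in CUDA texture memory; the parameters here are arbitrary):

        # Sketch: Bloom filter answering "is this k-mer in the trusted spectrum?"
        import hashlib

        class Bloom:
            def __init__(self, n_bits=1 << 20, n_hashes=4):
                self.n_bits, self.n_hashes = n_bits, n_hashes
                self.bits = bytearray(n_bits // 8)

            def _positions(self, kmer):
                for i in range(self.n_hashes):
                    h = hashlib.sha256(f"{i}:{kmer}".encode()).digest()
                    yield int.from_bytes(h[:8], "big") % self.n_bits

            def add(self, kmer):
                for p in self._positions(kmer):
                    self.bits[p // 8] |= 1 << (p % 8)

            def __contains__(self, kmer):
                return all(self.bits[p // 8] & (1 << (p % 8))
                           for p in self._positions(kmer))

        trusted = Bloom()
        trusted.add("ACGTACGTACGTACGTACGTA")
        print("ACGTACGTACGTACGTACGTA" in trusted)  # True
        print("TTTTTTTTTTTTTTTTTTTTT" in trusted)  # False (with high probability)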

  20. An Error Model for High-Time Resolution Satellite Precipitation Products

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Sapiano, M.; Adler, R. F.; Huffman, G. J.; Tian, Y.

    2013-12-01

    A new error scheme (PUSH: Precipitation Uncertainties for Satellite Hydrology) is presented to provide global estimates of errors for high time resolution, merged precipitation products. Errors are estimated for the widely used Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 product at daily/0.25° resolution, using the high quality NOAA CPC-UNI gauge analysis as the benchmark. Each of the following four scenarios is explored and explicitly modeled: correct no-precipitation detection (both satellite and gauges detect no precipitation), missed precipitation (satellite records a zero, but it is incorrect), false alarm (satellite detects precipitation, but the reference is zero), and hit (both satellite and gauges detect precipitation). Results over Oklahoma show that the estimated probability distributions are able to reproduce the probability density functions of the benchmark precipitation, in terms of both expected values and quantiles. PUSH adequately captures missed precipitation and false detection uncertainties, reproduces the spatial pattern of the error, and shows a good agreement between observed and estimated errors. The resulting error estimates could be attached to the standard products for the scientific community to use. Investigation is underway to: 1) test the approach in different regions of the world; 2) verify the ability of the model to discern the systematic and random components of the error; and 3) evaluate the model performance when higher time-resolution satellite products (i.e., 3-hourly) are employed.
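    A minimal sketch of the four detection scenarios the scheme separates, tallied from paired satellite/gauge daily values (the values below are made up; PUSH itself fits a statistical model to each case rather than just counting):

        # Sketch: classify paired satellite/gauge days into the four scenarios.
        import numpy as np

        sat = np.array([0.0, 0.0, 5.2, 3.1, 0.0, 12.0])    # satellite (mm/day)
        gauge = np.array([0.0, 1.4, 0.0, 2.8, 0.0, 10.5])  # benchmark (mm/day)

        hit = (sat > 0) & (gauge > 0)
        miss = (sat == 0) & (gauge > 0)
        false_alarm = (sat > 0) & (gauge == 0)
        correct_no = (sat == 0) & (gauge == 0)
        for name, mask in [("hit", hit), ("missed", miss),
                           ("false alarm", false_alarm),
                           ("correct no-precipitation", correct_no)]:
            print(name, int(mask.sum()))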

  1. Indirect measurement of a laser communications bit-error-rate reduction with low-order adaptive optics.

    PubMed

    Tyson, Robert K; Canning, Douglas E

    2003-07-20

    In experimental measurements of the bit-error rate for a laser communication system, we show improved performance with the implementation of low-order (tip/tilt) adaptive optics in a free-space link. With simulated atmospheric tilt injected by a conventional piezoelectric tilt mirror, an adaptive optics system with a Xinetics tilt mirror was used in a closed loop. The laboratory experiment replicated a monostatic propagation with a cooperative wave front beacon at the receiver. Owing to constraints in the speed of the processing hardware, the data is scaled to represent an actual propagation of a few kilometers under moderate scintillation conditions. We compare the experimental data and indirect measurement of the bit-error rate before correction and after correction, with a theoretical prediction.

  2. Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results

    SciTech Connect

    Clark, E.L.

    1994-07-01

    Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
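    A minimal sketch of the Taylor-series model for a single ratio, r = p/p∞: the sensitivity coefficients are the partial derivatives, and the variances add in quadrature (the input values and uncertainties are hypothetical):

        # Sketch: first-order error propagation for a pressure ratio.
        import math

        p, sigma_p = 48.0, 0.25          # measured pressure and its 1-sigma
        p_inf, sigma_pinf = 12.0, 0.10   # free-stream reference pressure

        r = p / p_inf
        dr_dp = 1.0 / p_inf              # sensitivity coefficients
        dr_dpinf = -p / p_inf**2
        sigma_r = math.sqrt((dr_dp * sigma_p) ** 2 + (dr_dpinf * sigma_pinf) ** 2)
        print(f"r = {r:.3f} +/- {sigma_r:.3f}")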

  3. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
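    A minimal sketch of the calculation implied above: with the mean level separation and the noise standard deviation extracted from S-parameter measurements, a Gaussian noise model gives the BER through the Q-function (the numbers are hypothetical):

        # Sketch: BER from a Gaussian noise model, BER = Q(separation / (2*sigma)).
        import math

        def q_function(x):
            return 0.5 * math.erfc(x / math.sqrt(2.0))

        separation = 1.0   # mean level difference inferred from |S21|
        sigma = 0.12       # noise std deviation from repeated S-parameter sweeps
        print(f"BER ~ {q_function(separation / (2 * sigma)):.2e}")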

  4. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.

  5. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  6. The Effect of Administrative Boundaries and Geocoding Error on Cancer Rates in California

    PubMed Central

    Goldberg, Daniel W.; Cockburn, Myles G.

    2012-01-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. PMID:22469490
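    A minimal sketch of one of the simpler interpolation choices, areal weighting: each ZIP-level count is split among counties in proportion to overlap (the weights and counts below are made up; real weights come from a GIS overlay):

        # Sketch: areal-weighted interpolation of ZIP-level counts to counties.
        zip_cases = {"90001": 40, "90002": 25}
        overlap_weight = {                  # fraction of each ZIP in a county
            ("90001", "Los Angeles"): 1.0,
            ("90002", "Los Angeles"): 0.7,
            ("90002", "Orange"): 0.3,
        }

        county_cases = {}
        for (zipc, county), w in overlap_weight.items():
            county_cases[county] = county_cases.get(county, 0.0) + w * zip_cases[zipc]
        print(county_cases)  # {'Los Angeles': 57.5, 'Orange': 7.5}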

  7. Theoretical computation of trace gases retrieval random error from measurements of high spectral resolution infrared sounder

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Smith, William L.; Woolf, Harold M.; Theriault, J. M.

    1991-01-01

    The purpose of this paper is to demonstrate the trace gas profiling capabilities of future passive high spectral resolution (1 cm⁻¹ or better) infrared (600 to 2700 cm⁻¹) satellite tropospheric sounders. These sounders, such as the grating spectrometer Atmospheric Infrared Sounder (AIRS) (Chahine et al., 1990) and the interferometer GOES High Resolution Interferometer Sounder (GHIS) (Smith et al., 1991), can provide the unique infrared spectra which enable us to conduct this analysis. In this calculation only the total random retrieval error component is presented. The systematic error components contributed by the forward and inverse model error are not considered (subject of further studies). The total random errors, which are composed of null space error (vertical resolution component error) and measurement error (instrument noise component error), are computed by assuming one-wavenumber spectral resolution with a wavenumber span from 1100 cm⁻¹ to 2300 cm⁻¹ (the band 600 cm⁻¹ to 1100 cm⁻¹ is not used since there is no major absorption of our three gases there) and measurement noise of 0.25 K at a reference temperature of 260 K. Temperature, water vapor, ozone and mixing ratio profiles of nitrous oxide, carbon monoxide and methane are taken from 1976 US Standard Atmosphere conditions (a FASCODE model). Covariance matrices of the gases are 'subjectively' generated by assuming 50 percent standard deviation of Gaussian perturbation with respect to their US Standard model profiles. Minimum information and maximum likelihood retrieval solutions are used.
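    A minimal sketch of the random-error computation in an optimal-estimation retrieval: the posterior covariance combines the measurement-noise and a priori covariances through the weighting functions (K, the noise level, and the 50 percent prior below are toy stand-ins for the real Jacobians):

        # Sketch: posterior (retrieval) error covariance in optimal estimation.
        import numpy as np

        n_levels, n_channels = 5, 20
        rng = np.random.default_rng(1)
        K = rng.normal(size=(n_channels, n_levels))   # weighting functions
        S_e = (0.25 ** 2) * np.eye(n_channels)        # instrument noise covariance
        S_a = (0.5 ** 2) * np.eye(n_levels)           # 50% a priori std deviation

        S_post = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
        print("retrieval error (1-sigma) per level:", np.sqrt(np.diag(S_post)))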

  8. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    PubMed

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27264206
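    A minimal sketch of a SIMEX-style correction for a binomially mismeasured proportion, loosely in the spirit described above (not the authors' estimator): inflate the binomial noise at several levels, track the naive slope, and extrapolate back to zero measurement error:

        # Sketch: simulation extrapolation with extra binomial noise.
        import numpy as np

        rng = np.random.default_rng(2)
        n, m = 500, 30                        # subjects; reads per subject
        x = rng.uniform(0.1, 0.9, n)          # true (unobserved) proportions
        y = 2.0 + 3.0 * x + rng.normal(0, 0.5, n)
        w = rng.binomial(m, x) / m            # error-prone observed proportions

        def naive_slope(wobs):
            return np.polyfit(wobs, y, 1)[0]  # attenuated by measurement error

        # Total measurement-error variance as a multiple t of the base level:
        # t = 1 is the data as observed; resampling with m/(t-1) reads adds
        # roughly (t-1) extra units of binomial noise on top.
        totals = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
        slopes = [naive_slope(w)]
        for t in totals[1:]:
            m_eff = int(round(m / (t - 1.0)))
            reps = [naive_slope(rng.binomial(m_eff, w) / m_eff) for _ in range(50)]
            slopes.append(np.mean(reps))

        coef = np.polyfit(totals, np.array(slopes), 2)   # quadratic extrapolant
        print("naive:", slopes[0], "SIMEX-corrected:", np.polyval(coef, 0.0))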

  9. High Rate for Type IC Supernovae

    SciTech Connect

    Muller, R.A.; Marvin-Newberg, H.J.; Pennypacker, Carl R.; Perlmutter, S.; Sasseen, T.P.; Smith, C.K.

    1991-09-01

    Using an automated telescope we have detected 20 supernovae in carefully documented observations of nearby galaxies. The supernova rates for late spiral (Sbc, Sc, Scd, and Sd) galaxies, normalized to a blue luminosity of 10¹⁰ L_B☉, are 0.4 h², 1.6 h², and 1.1 h² per 100 years for SNe of types Ia, Ic, and II. The rate for type Ic supernovae is significantly higher than found in previous surveys. The rates are not corrected for detection inefficiencies, and do not take into account the indications that the Ic supernovae are fainter on the average than the previous estimates; therefore the true rates are probably higher. The rates are not strongly dependent on the galaxy inclination, in contradiction to previous compilations. If the Milky Way is a late spiral, then the rate of Galactic supernovae is greater than 1 per 30 ± 7 years, assuming h = 0.75. This high rate has encouraging consequences for future neutrino and gravitational wave observatories.

  10. Approximation and error estimation in high dimensional space for stochastic collocation methods on arbitrary sparse samples

    SciTech Connect

    Archibald, Richard K; Deiterding, Ralf; Hauck, Cory D; Jakeman, John D; Xiu, Dongbin

    2012-01-01

    We have developed a fast method that can capture piecewise smooth functions in high dimensions with high order and low computational cost. This method can be used for both approximation and error estimation of stochastic simulations where the computations can either be guided or come from a legacy database.

  11. Baltimore District Tackles High Suspension Rates

    ERIC Educational Resources Information Center

    Maxwell, Lesli A.

    2007-01-01

    This article reports on how the Baltimore District tackles its high suspension rates. Driven by an increasing belief that zero-tolerance disciplinary policies are ineffective, more educators are embracing strategies that do not exclude misbehaving students from school for offenses such as insubordination, disrespect, cutting class, tardiness, and…

  12. Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction

    PubMed Central

    Laehnemann, David; Borkhardt, Arndt

    2016-01-01

    Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159

  13. Assessing XCTD Fall Rate Errors using Concurrent XCTD and CTD Profiles in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Millar, J.; Gille, S. T.; Sprintall, J.; Frants, M.

    2010-12-01

    Refinements in the fall rate equation for XCTDs are not as well understood as those for XBTs, due in part to the paucity of concurrent and collocated XCTD and CTD profiles. During February and March 2010, the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES) conducted 31 collocated 1000-meter XCTD and CTD casts in the Drake Passage. These XCTD/CTD profile pairs are closely matched in space and time, with a mean distance between casts of 1.19 km and a mean lag time of 39 minutes. The profile pairs are well suited to address the XCTD fall rate problem specifically in higher latitude waters, where existing fall rate corrections have rarely been assessed. Many of these XCTD/CTD profile pairs reveal an observable depth offset in measurements of both temperature and conductivity. Here, the nature and extent of this depth offset is evaluated.
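    A minimal sketch of one way to quantify such a depth offset: slide the XCTD profile vertically against the collocated CTD profile and keep the shift that minimizes the misfit (the profiles below are synthetic):

        # Sketch: estimate a vertical offset between paired profiles.
        import numpy as np

        rng = np.random.default_rng(3)
        z = np.arange(0.0, 1000.0, 2.0)          # recorded depth grid (m)
        ctd = 15.0 * np.exp(-z / 300.0)          # reference CTD temperature (deg C)
        offset_true = 10.0                       # simulated fall-rate depth error (m)
        xctd = 15.0 * np.exp(-(z - offset_true) / 300.0) + rng.normal(0, 0.02, z.size)

        candidates = np.arange(-20.0, 20.5, 0.5)
        mse = [np.mean((np.interp(z, z - d, xctd) - ctd) ** 2) for d in candidates]
        print("estimated offset:", candidates[int(np.argmin(mse))], "m")  # ~10.0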

  14. Role of high shear rate in thrombosis.

    PubMed

    Casa, Lauren D C; Deaton, David H; Ku, David N

    2015-04-01

    Acute arterial occlusions occur in high shear rate hemodynamic conditions. Arterial thrombi are platelet-rich when examined histologically compared with red blood cells in venous thrombi. Prior studies of platelet biology were not capable of accounting for the rapid kinetics and bond strengths necessary to produce occlusive thrombus under these conditions where the stasis condition of the Virchow triad is so noticeably absent. Recent experiments elucidate the unique pathway and kinetics of platelet aggregation that produce arterial occlusion. Large thrombi form from local release and conformational changes in von Willebrand factor under very high shear rates. The effect of high shear hemodynamics on thrombus growth has profound implications for the understanding of all acute thrombotic cardiovascular events as well as for vascular reconstructive techniques and vascular device design, testing, and clinical performance.

  15. High strain rate behaviour of polypropylene microfoams

    NASA Astrophysics Data System (ADS)

    Gómez-del Río, T.; Garrido, M. A.; Rodríguez, J.; Arencón, D.; Martínez, A. B.

    2012-08-01

    Microcellular materials such as polypropylene foams are often used in protective applications and passive safety for packaging (electronic components, aeronautical structures, food, etc.) or personal safety (helmets, knee-pads, etc.). In such applications the foams which are used are often designed to absorb the maximum energy and are generally subjected to severe loadings involving high strain rates. The manufacture process to obtain polymeric microcellular foams is based on the polymer saturation with a supercritical gas, at high temperature and pressure. This method presents several advantages over the conventional injection moulding techniques which make it industrially feasible. However, the effect of processing conditions such as blowing agent, concentration and microfoaming time and/or temperature on the microstructure of the resulting microcellular polymer (density, cell size and geometry) is not yet set up. The compressive mechanical behaviour of several microcellular polypropylene foams has been investigated over a wide range of strain rates (0.001 to 3000 s⁻¹) in order to show the effects of the processing parameters and strain rate on the mechanical properties. High strain rate tests were performed using a Split Hopkinson Pressure Bar apparatus (SHPB). Polypropylene and polyethylene-ethylene block copolymer foams of various densities were considered.

  16. An approach for reducing the error rate in automated lung segmentation.

    PubMed

    Gill, Gurman; Beichel, Reinhard R

    2016-09-01

    Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855±0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
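    A minimal sketch of the accuracy measure used above, the Dice coefficient between a candidate mask and a reference mask (the toy masks below stand in for real lung segmentations):

        # Sketch: Dice overlap between two binary segmentation masks.
        import numpy as np

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        ref = np.zeros((64, 64), bool); ref[16:48, 16:48] = True
        seg = np.zeros((64, 64), bool); seg[18:48, 16:46] = True
        print(f"Dice = {dice(seg, ref):.4f}")  # flag failures below a cutoff, e.g. 0.97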

  17. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n) where p can be increasing exponentially with n. Finally, we show the n^(1/2−d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement errors models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups, in the latter setup, the

  18. Highly stable high-rate discriminator for nuclear counting

    NASA Technical Reports Server (NTRS)

    English, J. J.; Howard, R. H.; Rudnick, S. J.

    1969-01-01

    Pulse amplitude discriminator is specially designed for nuclear counting applications. At very high rates, the threshold is stable. The output-pulse width and the dead time change negligibly. The unit incorporates a provision for automatic dead-time correction.

  19. Phosphor thermometry at high repetition rates

    NASA Astrophysics Data System (ADS)

    Fuhrmann, N.; Brübach, J.; Dreizler, A.

    2013-09-01

    Phosphor thermometry is a semi-invasive surface temperature measurement technique utilizing the luminescence properties of thermographic phosphors. Typically these ceramic materials are coated onto the object of interest and are excited by a short UV laser pulse. Photomultipliers and high-speed camera systems are used to transiently detect the subsequently emitted luminescence decay point wise or two-dimensionally resolved. Based on appropriate calibration measurements, the luminescence lifetime is converted to temperature. Up to now, primarily Q-switched laser systems with repetition rates of 10 Hz were employed for excitation. Accordingly, this diagnostic tool was not applicable to resolve correlated temperature transients at time scales shorter than 100 ms. For the first time, the authors realized a high-speed phosphor thermometry system combining a highly repetitive laser in the kHz regime and a fast decaying phosphor. A suitable material was characterized regarding its temperature lifetime characteristic and precision. Additionally, the influence of laser power on the phosphor coating in terms of heating effects has been investigated. A demonstration of this high-speed technique has been conducted inside the thermally highly transient system of an optically accessible internal combustion engine. Temperatures have been measured with a repetition rate of one sample per crank angle degree at an engine speed of 1000 rpm. This experiment has proven that high-speed phosphor thermometry is a promising diagnostic tool for the resolution of surface temperature transients.

  20. Adjoint-field errors in high fidelity compressible turbulence simulations for sound control

    NASA Astrophysics Data System (ADS)

    Vishnampet, Ramanathan; Bodony, Daniel; Freund, Jonathan

    2013-11-01

    A consistent discrete adjoint for high-fidelity discretization of the three-dimensional Navier-Stokes equations is used to quantify the error in the sensitivity gradient predicted by the continuous adjoint method, and examine the aeroacoustic flow-control problem for free-shear-flow turbulence. A particular quadrature scheme for approximating the cost functional makes our discrete adjoint formulation for a fourth-order Runge-Kutta scheme with high-order finite differences practical and efficient. The continuous adjoint-based sensitivity gradient is shown to be inconsistent due to discretization truncation errors, grid stretching and filtering near boundaries. These errors cannot be eliminated by increasing the spatial or temporal resolution since chaotic interactions lead them to become O(1) at the time of control actuation. Although this is a known behavior for chaotic systems, its effect on noise control is much harder to anticipate, especially given the different resolution needs of different parts of the turbulence and acoustic spectra. A comparison of energy spectra of the adjoint pressure fields shows significant error in the continuous adjoint at all wavenumbers, even though they are well-resolved. The effect of this error on the noise control mechanism is analyzed.

  1. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  2. Evaluation of write error rate for voltage-driven dynamic magnetization switching in magnetic tunnel junctions with perpendicular magnetization

    NASA Astrophysics Data System (ADS)

    Shiota, Yoichi; Nozaki, Takayuki; Tamaru, Shingo; Yakushiji, Kay; Kubota, Hitoshi; Fukushima, Akio; Yuasa, Shinji; Suzuki, Yoshishige

    2016-01-01

    We investigated the write error rate (WER) for voltage-driven dynamic switching in magnetic tunnel junctions with perpendicular magnetization. We observed a clear oscillatory behavior of the switching probability with respect to the duration of pulse voltage, which reveals the precessional motion of magnetization during voltage application. We experimentally demonstrated WER as low as 4 × 10⁻³ at the pulse duration corresponding to a half precession period (~1 ns). The comparison between the results of the experiment and simulation based on a macrospin model shows a possibility of ultralow WER (<10⁻¹⁵) under optimum conditions. This study provides a guideline for developing practical voltage-driven spintronic devices.

  3. Packet error rate analysis of OOK, DPIM, and PPM modulation schemes for ground-to-satellite laser uplink communications.

    PubMed

    Jiang, Yijun; Tao, Kunyu; Song, Yiwei; Fu, Sen

    2014-03-01

    The performance of on-off keying (OOK), digital pulse interval modulation (DPIM), and pulse position modulation (PPM) schemes is investigated for ground-to-satellite laser uplink communications. Packet error rates of these modulation systems are compared, with consideration of the combined effect of intensity fluctuation and beam wander. Based on the numerical results, the performances of the different modulation systems are discussed. Optimum divergence angle and transmitted beam radius for each modulation system are indicated, and their relation to the transmitted laser power is analyzed. This work can be helpful for modulation scheme selection and system design in ground-to-satellite laser uplink communications.
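    For independent bit errors, the packet error rate follows from the bit error rate as PER = 1 − (1 − BER)^L. A minimal sketch (packet length hypothetical; the paper's comparison additionally folds in turbulence statistics per modulation scheme):

        # Sketch: packet error rate from bit error rate, independent errors.
        L = 1024 * 8                     # packet length in bits

        for ber in (1e-6, 1e-5, 1e-4):
            per = 1.0 - (1.0 - ber) ** L
            print(f"BER={ber:.0e} -> PER={per:.3e}")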

  4. Influence of beam wander on bit-error rate in a ground-to-satellite laser uplink communication system.

    PubMed

    Ma, Jing; Jiang, Yijun; Tan, Liying; Yu, Siyuan; Du, Wenhe

    2008-11-15

    Based on weak fluctuation theory and the beam-wander model, the bit-error rate of a ground-to-satellite laser uplink communication system is analyzed, in comparison with the condition in which beam wander is not taken into account. Considering the combined effect of scintillation and beam wander, optimum divergence angle and transmitter beam radius for a communication system are researched. Numerical results show that both of them increase with the increment of total link margin and transmitted wavelength. This work can benefit the ground-to-satellite laser uplink communication system design.

  5. High strain rate damage of Carrara marble

    NASA Astrophysics Data System (ADS)

    Doan, Mai-Linh; Billi, Andrea

    2011-10-01

    Several cases of rock pulverization have been observed along major active faults in granite and other crystalline rocks. They have been interpreted as due to coseismic pervasive microfracturing. In contrast, little is known about pulverization in carbonates. With the aim of understanding carbonate pulverization, we investigate the high strain rate (c. 100 s⁻¹) behavior of unconfined Carrara marble through a set of experiments with a Split Hopkinson Pressure Bar. Three final states were observed: (1) at low strain, the sample is kept intact, without apparent macrofractures; (2) failure is localized along a few fractures once stress is larger than 100 MPa, corresponding to a strain of 0.65%; (3) above 1.3% strain, the sample is pulverized. Contrary to granite, the transition to pulverization is controlled by strain rather than strain rate. Yet, at low strain rate, a sample from the same marble displayed only a few fractures. This suggests that the experiments were done above the strain rate transition to pulverization. Marble seems easier to pulverize than granite. This creates a paradox: finely pulverized rocks should be prevalent along any high strain zone near faults through carbonates, but this is not what is observed. A few alternatives are proposed to solve this paradox.

  6. High temperature electrochemical corrosion rate probes

    SciTech Connect

    Bullard, Sophie J.; Covino, Bernard S., Jr.; Holcomb, Gordon R.; Ziomek-Moroz, M.

    2005-09-01

    Corrosion occurs in the high temperature sections of energy production plants due to a number of factors: ash deposition, coal composition, thermal gradients, and low NOx conditions, among others. Electrochemical corrosion rate (ECR) probes have been shown to operate in high temperature gaseous environments that are similar to those found in fossil fuel combustors. ECR probes are rarely used in energy production plants at the present time, but if they were more fully understood, corrosion could become a process variable at the control of plant operators. Research is being conducted to understand the nature of these probes. Factors being considered are values selected for the Stern-Geary constant, the effect of internal corrosion, and the presence of conductive corrosion scales and ash deposits. The nature of ECR probes will be explored in a number of different atmospheres and with different electrolytes (ash and corrosion product). Corrosion rates measured using an electrochemical multi-technique capabilities instrument will be compared to those measured using the linear polarization resistance (LPR) technique. In future experiments, electrochemical corrosion rates will be compared to penetration corrosion rates determined using optical profilometry measurements.

  7. HIGH ENERGY RATE EXTRUSION OF URANIUM

    DOEpatents

    Lewis, L.

    1963-07-23

    A method of extruding uranium at a high energy rate is described. Conditions during the extrusion are such that the temperature of the metal during extrusion reaches a point above the normal alpha to beta transition, but the metal nevertheless remains in the alpha phase in accordance with the Clausius- Clapeyron equation. Upon exiting from the die, the metal automatically enters the beta phase, after which the metal is permitted to cool. (AEC)

  8. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems is different from that of low data rate systems. Three simulations were built during the development phase of the Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling. The first was a SIMSCRIPT model based upon the determination and processing of each event at each node. The second simulation was developed in C, based upon isolating the distinct objects that can be identified as the ring, the message, the node, and the set of critical events. The third model further identified the basic network functionality by creating a single object, the node, which includes the set of critical events which occur at the node; the ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. It should be stated that the language used was mainly selected by the model developer because of his past familiarity. Further, the models were not built with the intent to compare either structure or language; rather, because of the complexity of the problem, initial results contained obvious errors, so alternative models were built to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand the modeling complexities. Each model is described along with its features and problems. The models are compared, and concluding observations and remarks are presented.

  9. Optimization of coplanar high rate supercapacitors

    NASA Astrophysics Data System (ADS)

    Sun, Leimeng; Wang, Xinghui; Liu, Wenwen; Zhang, Kang; Zou, Jianping; Zhang, Qing

    2016-05-01

    In this work, we describe two efficient methods to enhance the electrochemical performance of high-rate coplanar micro-supercapacitors (MSCs). Through introducing MnO2 nanosheets on a vertically aligned carbon nanotube (VACNT) array, the areal capacitance and volumetric energy density exhibit tremendous improvements, increasing from 0.011 mF cm-2 and 0.017 mWh cm-3 to 0.479 mF cm-2 and 0.426 mWh cm-3, respectively, at an ultrahigh scan rate of 50,000 mV s-1. Subsequently, by fabricating an asymmetric MSC, the energy density could be increased to 0.167 mWh cm-3 as well. Moreover, as a result of applying MnO2/VACNT as the positive electrode and VACNT as the negative electrode, the cell operating voltage in aqueous electrolyte could be increased to as high as 2.0 V. Our advanced planar MSCs could operate well at different high scan rates and offer a promising integration potential with other in-plane devices on the same substrate.

  10. Senior High School Students' Errors on the Use of Relative Words

    ERIC Educational Resources Information Center

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  11. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    The various methods of high bit transition density encoding are presented, and their relative performance is compared insofar as error propagation characteristics, transition properties, and system constraints are concerned. A computer simulation of the system, using the specific PN code recommended, is included.

  12. Influence of nonhomogeneous earth on the rms phase error and beam-pointing errors of large, sparse high-frequency receiving arrays

    NASA Astrophysics Data System (ADS)

    Weiner, M. M.

    1994-01-01

    The performance of ground-based high-frequency (HF) receiving arrays is reduced when the array elements have electrically small ground planes. The array rms phase error and beam-pointing errors, caused by multipath rays reflected from a nonhomogeneous Earth, are determined for a sparse array of elements that are modeled as Hertzian dipoles in close proximity to Earth with no ground planes. Numerical results are presented for cases of randomly distributed and systematically distributed Earth nonhomogeneities where one-half of the vertically polarized array elements are located in proximity to one type of Earth and the remaining half are located in proximity to a second type of Earth. The maximum rms phase errors, for the cases examined, are 18 deg and 9 deg for randomly distributed and systematically distributed nonhomogeneities, respectively. The maximum beam-pointing errors are 0 and 0.3 beam widths for randomly distributed and systematically distributed nonhomogeneities, respectively.

  13. Civilian residential fire fatality rates: Six high-rate states versus six low-rate states

    NASA Astrophysics Data System (ADS)

    Hall, J. R., Jr.; Helzer, S. G.

    1983-08-01

    Results of an analysis of 1,600 fire fatalities occurring in six states with high fire-death rates and six states with low fire-death rates are presented. Reasons for the differences in rates are explored, with special attention to victim age, sex, race, and condition at time of ignition. Fire cause patterns are touched on only lightly but are addressed more extensively in the companion piece to this report, "Rural and Non-Rural Civilian Residential Fire Fatalities in Twelve States" (NBSIR 82-2519).

  14. Simulation of System Error Tolerances of a High Current Transport Experiment for Heavy-Ion Fusion

    NASA Astrophysics Data System (ADS)

    Lund, Steven M.; Bangerter, Roger O.; Friedman, Alex; Grote, Dave P.; Seidl, Peter A.

    2000-10-01

    A driver-scale, intense ion beam transport experiment (HCX) is being designed to test issues for Heavy Ion Fusion (HIF) [1]. Here we present detailed particle-in-cell simulations of HCX to parametrically explore how various system errors can impact machine performance. The simulations are transverse and include the full 3D fields of the quadrupole focusing magnets, spreads in axial momentum, conducting pipe boundary conditions, etc. System imperfections such as applied focusing field errors (magnet strength, field nonlinearities, etc.), alignment errors (magnet offsets and rotations), beam envelope mismatches to the focusing lattice, induced beam image charges, and beam distribution errors (beam nonuniformities, collective modes, and other distortions) are all analyzed in turn and in combination. The influence of these errors on the degradation of beam quality (emittance growth), halo production, and loss of beam control is evaluated. Evaluations of practical machine apertures and centroid steering corrections that can mitigate particle loss and degradation of beam quality are carried out. 1. P.A. Seidl, L.E. Ahle, R.O. Bangerter, V.P. Karpenko, S.M. Lund, A. Faltens, R.M. Franks, D.B. Shuman, and H.K. Springer, Design of a Proof of Principle High Current Transport Experiment for Heavy-Ion Fusion, these proceedings.

  15. Orbit error correction on the high energy beam transport line at the KHIMA accelerator system

    NASA Astrophysics Data System (ADS)

    Park, Chawon; Yim, Heejoong; Hahn, Garam; An, Dong Hyun

    2016-09-01

    For the purpose of treating various cancers and for medical research, a synchrotron-based medical machine has been developed under the Korea Heavy Ion Medical Accelerator (KHIMA) project and is scheduled to begin treating patients at the beginning of 2018. The KHIMA synchrotron is designed to accelerate and extract carbon ion (proton) beams with various energies from 110 to 430 MeV/u (60 to 230 MeV). Studies on the lattice design and beam optics for the High Energy Beam Transport (HEBT) line at the KHIMA accelerator system have been carried out using the WinAgile and the MAD-X codes. Because magnetic field errors and misalignments introduce deviations from the design parameters, these error sources should be treated explicitly, and the sensitivity of the machine's lattice to each individual error source should be considered. Various types of errors, both static and dynamic, have been taken into account and subsequently corrected with a dedicated correction algorithm using the MAD-X program. Based on the error analysis, the optimized correction setup is chosen, and the specifications for the correcting magnets of the HEBT lines are determined.

  16. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  17. On the non-Gaussian errors in high-z supernovae type Ia data

    NASA Astrophysics Data System (ADS)

    Singh, Meghendra; Pandey, Ashwini; Sharma, Amit; Gupta, Shashikant; Sharma, Satendra

    2016-11-01

    Random errors in a data set are commonly expected to be Gaussian, an expectation motivated by the Central Limit Theorem. Type Ia supernova data have played a crucial role in major discoveries in cosmology. Unlike laboratory experiments, astronomical measurements cannot be performed under controlled conditions; thus, errors in astronomical data can be more severe in terms of systematics and non-Gaussianity compared to those of laboratory experiments. In this paper, we use the Kolmogorov-Smirnov statistic to test for non-Gaussianity in high-z supernova data. We apply this statistic to four data sets: the Gold data (2004), the Gold data (2007), the Union2 catalog, and the Union2.1 data set. Our results show that in all four data sets the errors are consistent with a Gaussian distribution.
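
    The test itself is straightforward to reproduce. The sketch below (Python, on synthetic placeholder data, since the actual catalogs are not reproduced here) normalizes Hubble-diagram residuals by their reported uncertainties and applies a one-sample Kolmogorov-Smirnov test against the standard normal, the same logic the paper applies to the Gold and Union compilations:

```python
import numpy as np
from scipy import stats

# Hypothetical stand-ins: mu_model would be the best-fit cosmology's
# distance moduli, sigma the reported 1-sigma errors, mu_obs the data.
rng = np.random.default_rng(0)
mu_model = np.linspace(34.0, 44.0, 200)
sigma = rng.uniform(0.1, 0.3, size=200)
mu_obs = mu_model + rng.normal(0.0, sigma)   # synthetic "observations"

# If the errors are Gaussian, normalized residuals should be ~N(0, 1).
z = (mu_obs - mu_model) / sigma

# One-sample Kolmogorov-Smirnov test against the standard normal CDF.
statistic, p_value = stats.kstest(z, "norm")
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
# A large p-value means the residuals are consistent with Gaussianity.
```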

  18. Adjustment on the Type I Error Rate for a Clinical Trial Monitoring for both Intermediate and Primary Endpoints

    PubMed Central

    Halabi, Susan

    2013-01-01

    In many clinical trials, a single endpoint is used to answer the primary question and forms the basis for monitoring the experimental therapy. Many trials are lengthy in duration, and investigators are interested in using an intermediate endpoint for an accelerated approval while relying on the primary endpoint (such as overall survival) for the full approval of the drug by the Food and Drug Administration. We have designed a clinical trial where both the intermediate endpoint (progression-free survival, PFS) and the primary endpoint (overall survival, OS) are used for monitoring the trial, so that the overall type I error rate is preserved at the pre-specified alpha level of 0.05. A two-stage procedure is used. In the first stage, the Bonferroni correction is used to allocate the global type I error rate to each of the endpoints. In the next stage, the O'Brien-Fleming approach is used to design the boundary for the interim and final analysis for each endpoint. Data were generated assuming several parametric copulas with exponential marginals. Different degrees of dependence between OS and PFS, as measured by Kendall's τ, were assumed: 0 (independence), 0.3, 0.5 and 0.7. This approach is applied to an example in a prostate cancer trial. PMID:24466469
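
    A minimal sketch of the two-stage logic follows (Python). The alpha split and information times are illustrative assumptions, not the paper's values, and the O'Brien-Fleming constant is calibrated by Monte Carlo under the usual independent-increments null rather than by the exact numerical integration a real trial design would use:

```python
import numpy as np

alpha_total = 0.05            # global two-sided level
alpha_os, alpha_pfs = 0.04, 0.01   # illustrative Bonferroni split (sums to alpha_total)

def obf_constant(info_times, alpha, n_sim=200_000, seed=1):
    """Find C so that boundaries b_k = C / sqrt(t_k) (the O'Brien-Fleming
    shape) give overall two-sided crossing probability alpha under H0."""
    rng = np.random.default_rng(seed)
    t = np.asarray(info_times, dtype=float)
    # Brownian-motion increments give the joint null law of the Z-statistics.
    incr = rng.normal(size=(n_sim, len(t))) * np.sqrt(np.diff(t, prepend=0.0))
    z = np.cumsum(incr, axis=1) / np.sqrt(t)          # Z_k at each analysis
    for C in np.linspace(1.5, 4.0, 251):              # crossing prob falls with C
        if np.any(np.abs(z) >= C / np.sqrt(t), axis=1).mean() <= alpha:
            return C
    return 4.0

info = [0.5, 1.0]                                     # one interim + final look
for name, a in (("OS", alpha_os), ("PFS", alpha_pfs)):
    C = obf_constant(info, a)
    print(name, "|Z| boundaries:", [round(C / np.sqrt(t), 3) for t in info])
```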

  19. High strain-rate magnetoelasticity in Galfenol

    NASA Astrophysics Data System (ADS)

    Domann, J. P.; Loeffler, C. M.; Martin, B. E.; Carman, G. P.

    2015-09-01

    This paper presents the experimental measurements of a highly magnetoelastic material (Galfenol) under impact loading. A Split-Hopkinson Pressure Bar was used to generate compressive stress up to 275 MPa at strain rates of either 20/s or 33/s while measuring the stress-strain response and change in magnetic flux density due to magnetoelastic coupling. The average Young's modulus (44.85 GPa) was invariant to strain rate, with instantaneous stiffness ranging from 25 to 55 GPa. A lumped parameters model simulated the measured pickup coil voltages in response to an applied stress pulse. Fitting the model to the experimental data provided the average piezomagnetic coefficient and relative permeability as functions of field strength. The model suggests magnetoelastic coupling is largely insensitive to strain rates as high as 33/s. Additionally, the lumped parameters model was used to investigate magnetoelastic transducers as potential pulsed power sources. Results show that Galfenol can generate large quantities of instantaneous power (80 MW/m3), comparable to explosively driven ferromagnetic pulse generators (500 MW/m3). However, this process is much more efficient and can be cyclically carried out in the linear elastic range of the material, in stark contrast with explosively driven pulsed power generators.

  20. High strain rate deformation of layered nanocomposites.

    PubMed

    Lee, Jae-Hwang; Veysset, David; Singer, Jonathan P; Retsch, Markus; Saini, Gagan; Pezeril, Thomas; Nelson, Keith A; Thomas, Edwin L

    2012-01-01

    Insight into the mechanical behaviour of nanomaterials under the extreme condition of very high deformation rates and to very large strains is needed to provide improved understanding for the development of new protective materials. Applications include protection against bullets for body armour, micrometeorites for satellites, and high-speed particle impact for jet engine turbine blades. Here we use a microscopic ballistic test to report the responses of periodic glassy-rubbery layered block-copolymer nanostructures to impact from hypervelocity micron-sized silica spheres. Entire deformation fields are experimentally visualized at an exceptionally high resolution (below 10 nm) and we discover how the microstructure dissipates the impact energy via layer kinking, layer compression, extreme chain conformational flattening, domain fragmentation and segmental mixing to form a liquid phase. Orientation-dependent experiments show that the dissipation can be enhanced by 30% by proper orientation of the layers. PMID:23132014

  1. Octane rating methods at high revolution speed

    SciTech Connect

    Millo, F.; Ferraro, C.V.; Barbera, E.; Margaria, G.

    1995-12-31

    An experimental investigation on a group of unleaded gasolines of different chemical composition has been carried out, in order to analyze their knock behavior in a mass-produced engine at high revolution speed, to highlight possible inconsistencies with their standard Research and Motor octane numbers and to try to discover explanations for the above-mentioned inconsistencies. The investigation has been focused on fuels containing oxygenated compounds, such as alcohols (methanol and ethanol) and ethers (MTBE), with the aim of pointing out the influence of the fuel composition on the octane rating, especially as far as the variation in the stoichiometric air/fuel ratio (due to oxygenated compounds blending) is concerned. In particular, rating all the fuels under the same relative air/fuel ratio has been shown to be a mandatory condition in order to obtain a proper estimate of antiknock performance. The evaluations obtained are consistent with the standard Motor octane numbers.

  2. Fuel droplet burning rates at high pressures.

    NASA Technical Reports Server (NTRS)

    Canada, G. S.; Faeth, G. M.

    1973-01-01

    Combustion of methanol, ethanol, propanol-1, n-pentane, n-heptane, and n-decane was observed in air under natural convection conditions, at pressures up to 100 atm. The droplets were simulated by porous spheres, with diameters in the range from 0.63 to 1.90 cm. The pressure levels of the tests were high enough that near-critical combustion was observed for methanol and ethanol. Due to the high pressures, the phase-equilibrium models of the analysis included both the conventional low-pressure approach and high-pressure versions allowing for real gas effects and the solubility of combustion-product gases in the liquid phase. The burning-rate predictions of the various theories were similar and in fair agreement with the data. The high-pressure theory gave the best prediction for the liquid-surface temperatures of ethanol and propanol-1 at high pressure. The experiments indicated the approach of critical burning conditions for methanol and ethanol at pressures on the order of 80 to 100 atm, in good agreement with the predictions of both the low- and high-pressure analyses.

  3. Microalgal separation from high-rate ponds

    SciTech Connect

    Nurdogan, Y.

    1988-01-01

    High rate ponding (HRP) processes are playing an increasing role in the treatment of organic wastewaters in sunbelt communities. Photosynthetic oxygenation by algae has proved to cost only one-seventh as much as mechanical aeration for activated sludge systems. During this study, an advanced HRP, which produces an effluent equivalent to tertiary treatment, has been studied. It emphasizes not only waste oxidation but also algal separation and nutrient removal. This new system is herein called advanced tertiary high rate ponding (ATHRP). Phosphorus removal in HRP systems is normally low because algal uptake of phosphorus is about one percent of their 200-300 mg/L dry weights. Precipitation of calcium phosphates by autoflocculation also occurs in HRP at high pH levels, but it is generally not complete due to insufficient calcium concentration in the pond. In the case of Richmond, where the studies were conducted, the sewage is very low in calcium. Therefore, enhancement of natural autoflocculation was studied by adding small amounts of lime to the pond. Through this simple procedure, phosphorus and nitrogen removals were virtually complete, justifying the terminology ATHRP.

  4. Assessment of error rates in acoustic monitoring with the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g., a signal-to-noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR's automated detection process uses a 'score cutoff', which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cutoffs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were  for song event detection.
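
    The score-cutoff idea can be illustrated independently of the R package. The Python sketch below (a toy analogue, not monitoR itself) slides a template across a synthetic recording, scores each offset by normalized cross-correlation, and declares detections above a cutoff; true and false positives then follow from comparing detections with the known embedded events:

```python
import numpy as np

def detection_scores(signal, template):
    """Pearson correlation of the template against every window offset."""
    n, m = len(signal), len(template)
    t = (template - template.mean()) / template.std()
    scores = np.empty(n - m + 1)
    for i in range(n - m + 1):
        w = signal[i:i + m]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores[i] = np.dot(w, t) / m        # normalized cross-correlation
    return scores

rng = np.random.default_rng(3)
template = np.sin(np.linspace(0, 6 * np.pi, 50))     # stand-in "song"
signal = rng.normal(0, 0.5, 1000)                    # background noise
true_onsets = [100, 400, 700]
for k in true_onsets:
    signal[k:k + 50] += template                     # embed three songs

scores = detection_scores(signal, template)
cutoff = 0.6                                         # the 'score cutoff'
hits = np.flatnonzero(scores >= cutoff)
print("peak scores near true onsets:",
      [round(scores[k - 2:k + 3].max(), 2) for k in true_onsets])
print("offsets above cutoff:", len(hits))
```

    Raising the cutoff trades false positives for missed songs, which is exactly the balance the survey-level percentages above reflect.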

  5. Direct impact analysis of multi-leaf collimator leaf position errors on dose distributions in volumetric modulated arc therapy: a pass rate calculation between measured planar doses with and without the position errors

    NASA Astrophysics Data System (ADS)

    Tatsumi, D.; Hosono, M. N.; Nakada, R.; Ishii, K.; Tsutsumi, S.; Inoue, M.; Ichida, T.; Miki, Y.

    2011-10-01

    We propose a new method for analyzing the direct impact of multi-leaf collimator (MLC) leaf position errors on dose distributions in volumetric modulated arc therapy (VMAT). The technique makes use of the following processes. Systematic leaf position errors are generated by directly changing a leaf offset in a linac controller; dose distributions are measured by a two-dimensional diode array; pass rates of the dose difference between measured planar doses with and without the position errors are calculated as a function of the leaf position error. Three different treatment planning systems (TPSs) were employed to create VMAT plans for five prostate cancer cases and the pass rates were compared between the TPSs under various leaf position errors. The impact of the leaf position errors on dose distributions depended upon the final optimization result from each TPS, which was explained by the correlation between the dose error and the average leaf gap width. The presented method determines leaf position tolerances for VMAT delivery for each TPS, which may facilitate establishing a VMAT quality assurance program in a radiotherapy facility. This work was presented in part at the 52nd Annual Meeting of the American Society for Therapeutic Radiology and Oncology in San Diego on 1 November 2010.
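
    A minimal sketch of the underlying pass-rate computation is given below (Python). The 3% dose-difference tolerance and 10% low-dose threshold are common QA conventions assumed here for illustration, not values taken from the paper:

```python
import numpy as np

def dose_difference_pass_rate(d_ref, d_err, tol_percent=3.0, threshold=0.1):
    """Percent of points where the doses measured with and without the
    MLC position error agree within tol_percent of the maximum reference
    dose; points below `threshold` of the max dose are excluded."""
    d_ref = np.asarray(d_ref, dtype=float)
    d_err = np.asarray(d_err, dtype=float)
    dmax = d_ref.max()
    mask = d_ref >= threshold * dmax
    within = np.abs(d_err[mask] - d_ref[mask]) <= tol_percent / 100.0 * dmax
    return 100.0 * within.mean()

# Toy planar doses: a smooth field plus a one-pixel shift standing in
# for the perturbation caused by a systematic leaf offset.
y, x = np.mgrid[0:64, 0:64]
ref = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 400.0)
err = np.exp(-((x - 33) ** 2 + (y - 32) ** 2) / 400.0)
print(f"pass rate: {dose_difference_pass_rate(ref, err):.1f}%")
```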

  6. VARIABLE SELECTION FOR QUALITATIVE INTERACTIONS IN PERSONALIZED MEDICINE WHILE CONTROLLING THE FAMILY-WISE ERROR RATE

    PubMed Central

    Gunter, Lacey; Zhu, Ji; Murphy, Susan

    2012-01-01

    For many years, subset analysis has been a popular topic in the biostatistics and clinical trials literature. In more recent years, the discussion has focused on finding subsets of genomes which play a role in the effect of treatment, often referred to as stratified or personalized medicine. Though highly sought after, methods for detecting subsets with differing treatment effects are limited and lacking in power. In this article we discuss variable selection for qualitative interactions with the aim of discovering these critical patient subsets. We propose a new technique designed specifically to find these interaction variables among a large set of variables while still controlling the number of false discoveries. We compare this new method against standard qualitative interaction tests using simulations and give an example of its use on data from a randomized controlled trial for the treatment of depression. PMID:22023676

  7. High Rate Pulse Processing Algorithms for Microcalorimeters

    NASA Astrophysics Data System (ADS)

    Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeffrey W.; Warburton, William K.; Bertrand Doriese, W.; Ullom, Joel N.; Bacrania, Minesh K.; Hoover, Andrew S.; Rabin, Michael W.

    2009-12-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small in order to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Therefore, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. For large arrays, however, as much pulse processing as possible must be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for post-processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics that we are also currently developing, is to achieve sufficiently good energy resolution for most applications while being: a) simple enough to be implemented in the readout electronics; and b) capable of processing overlapping pulses, and thus achieving much higher output count rates than those achieved by existing algorithms. Details of our algorithms are presented, and their performance is compared to that of the "optimal filter" that is currently the pulse processing algorithm predominantly used in the cryogenic-detector community.
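
    As one concrete example of the kind of real-time shaping such electronics perform, the sketch below implements a classic trapezoidal filter (a generic technique, not the specific algorithms of this paper) and applies it to two overlapping exponential pulses; the short shaping time lets both pulses be resolved where the raw slow decays would pile up:

```python
import numpy as np

def trapezoidal_filter(x, rise, gap):
    """Difference of two boxcar averages of width `rise`, separated by a
    flat-top `gap`: the standard fast shaper for resolving pile-up."""
    c = np.cumsum(np.concatenate(([0.0], x)))
    out = np.zeros(len(x))
    for i in range(2 * rise + gap - 1, len(x)):
        late = c[i + 1] - c[i + 1 - rise]
        early = c[i + 1 - rise - gap] - c[i + 1 - 2 * rise - gap]
        out[i] = (late - early) / rise
    return out

rng = np.random.default_rng(5)
t = np.arange(4000)
tau = 600.0                                   # slow "thermal" decay constant
pulse = lambda t0, amp: amp * np.exp(-(t - t0) / tau) * (t >= t0)
x = pulse(1000, 1.0) + pulse(1400, 0.7) + rng.normal(0, 0.01, t.size)
y = trapezoidal_filter(x, rise=100, gap=20)
# Peaks are approximately proportional to the pulse amplitudes; a real
# spectrometer would add pole-zero correction and baseline handling.
print("peak 1:", round(y[1000:1400].max(), 2),
      " peak 2:", round(y[1400:1800].max(), 2))
```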

  8. High dose rate brachytherapy for oral cancer

    PubMed Central

    Yamazaki, Hideya; Yoshida, Ken; Yoshioka, Yasuo; Shimizutani, Kimishige; Furukawa, Souhei; Koizumi, Masahiko; Ogawa, Kazuhiko

    2013-01-01

    Brachytherapy results in better dose distribution compared with other treatments because of steep dose reduction in the surrounding normal tissues. Excellent local control rates and acceptable side effects have been demonstrated with brachytherapy as a sole treatment modality, a postoperative method, and a method of reirradiation. Low-dose-rate (LDR) brachytherapy has been employed worldwide for its superior outcome. With the advent of technology, high-dose-rate (HDR) brachytherapy has enabled health care providers to avoid radiation exposure. This therapy has been used for treating many types of cancer such as gynecological cancer, breast cancer, and prostate cancer. However, LDR and pulsed-dose-rate interstitial brachytherapies have been mainstays for head and neck cancer. HDR brachytherapy has not become widely used in the radiotherapy community for treating head and neck cancer because of lack of experience and biological concerns. On the other hand, because HDR brachytherapy is less time-consuming, treatment can occasionally be administered on an outpatient basis. For the convenience and safety of patients and medical staff, HDR brachytherapy should be explored. To enhance the role of this therapy in treatment of head and neck lesions, we have reviewed its outcomes with oral cancer, including Phase I/II to Phase III studies, evaluating this technique in terms of safety and efficacy. In particular, our studies have shown that superficial tumors can be treated using a non-invasive mold technique on an outpatient basis without adverse reactions. The next generation of image-guided brachytherapy using HDR has been discussed. In conclusion, although concrete evidence is yet to be produced with a sophisticated study in a reproducible manner, HDR brachytherapy remains an important option for treatment of oral cancer. PMID:23179377

  9. The Influence of Relatives on the Efficiency and Error Rate of Familial Searching

    PubMed Central

    Rohlfs, Rori V.; Murphy, Erin; Song, Yun S.; Slatkin, Montgomery

    2013-01-01

    We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For Y-chromosome-sharing first-degree relatives, the Myers protocol has a high probability of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype-sharing relatives (half-siblings, first cousins, half-first cousins or second cousins), there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative; for example, a first cousin may be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first-degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype-sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases. PMID:23967076

  10. Using state machines to model the Ion Torrent sequencing process and to improve read error rates

    PubMed Central

    Golan, David; Medvedev, Paul

    2013-01-01

    Motivation: The importance of fast and affordable DNA sequencing methods for current day life sciences, medicine and biotechnology is hard to overstate. A major player is Ion Torrent, a pyrosequencing-like technology which produces flowgrams – sequences of incorporation values – which are converted into nucleotide sequences by a base-calling algorithm. Because of its exploitation of ubiquitous semiconductor technology and innovation in chemistry, Ion Torrent has been gaining popularity since its debut in 2011. Despite the advantages, however, Ion Torrent read accuracy remains a significant concern. Results: We present FlowgramFixer, a new algorithm for converting flowgrams into reads. Our key observation is that the incorporation signals of neighboring flows, even after normalization and phase correction, carry considerable mutual information and are important in making the correct base-call. We therefore propose that base-calling of flowgrams should be done on a read-wide level, rather than one flow at a time. We show that this can be done in linear-time by combining a state machine with a Viterbi algorithm to find the nucleotide sequence that maximizes the likelihood of the observed flowgram. FlowgramFixer is applicable to any flowgram-based sequencing platform. We demonstrate FlowgramFixer’s superior performance on Ion Torrent Escherichia coli data, with a 4.8% improvement in the number of high-quality mapped reads and a 7.1% improvement in the number of uniquely mappable reads. Availability: Binaries and source code of FlowgramFixer are freely available at: http://www.cs.tau.ac.il/~davidgo5/flowgramfixer.html. Contact: davidgo5@post.tau.ac.il PMID:23813003
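
    The read-wide decoding idea reduces to a standard Viterbi pass over a hidden state sequence. The toy sketch below (Python; a generic HMM with Gaussian emissions standing in for FlowgramFixer's actual flowgram model) decodes per-flow homopolymer lengths from noisy incorporation values:

```python
import numpy as np

def viterbi(obs, n_states, trans, emit_logpdf):
    """Most likely state path given observations, transition matrix,
    and a vectorized log-emission function."""
    states = np.arange(n_states)
    logdelta = -np.log(n_states) + emit_logpdf(obs[0], states)
    back = np.zeros((len(obs), n_states), dtype=int)
    for t in range(1, len(obs)):
        cand = logdelta[:, None] + np.log(trans)        # (from, to)
        back[t] = np.argmax(cand, axis=0)
        logdelta = cand[back[t], states] + emit_logpdf(obs[t], states)
    path = [int(np.argmax(logdelta))]
    for t in range(len(obs) - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# States 0-3 are homopolymer lengths; signal ~ N(state, 0.3).
emit = lambda x, s: -0.5 * ((x - s) / 0.3) ** 2
trans = np.full((4, 4), 0.15)
np.fill_diagonal(trans, 0.55)                           # mildly sticky prior
truth = [1, 0, 2, 1, 0, 0, 3, 1]
rng = np.random.default_rng(7)
obs = truth + rng.normal(0, 0.3, len(truth))            # noisy flow signals
print("truth :", truth)
print("decode:", viterbi(obs, 4, trans, emit))
```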

  11. High Prevalence of Refractive Errors in 7 Year Old Children in Iran

    PubMed Central

    HASHEMI, Hassan; YEKTA, Abbasali; JAFARZADEHPUR, Ebrahim; OSTADIMOGHADDAM, Hadi; ETEMAD, Koorosh; ASHARLOUS, Amir; NABOVATI, Payam; KHABAZKHOOB, Mehdi

    2016-01-01

    Background: The latest WHO report indicates that refractive errors are the leading cause of visual impairment throughout the world. The aim of this study was to determine the prevalence of myopia, hyperopia, and astigmatism in 7 yr old children in Iran. Methods: In a cross-sectional study in 2013 with multistage cluster sampling, first graders were randomly selected from 8 cities in Iran. All children were tested by an optometrist for uncorrected and corrected vision, and non-cycloplegic and cycloplegic refraction. Refractive errors in this study were determined based on spherical equivalent (SE) cyloplegic refraction. Results: From 4614 selected children, 89.0% participated in the study, and 4072 were eligible. The prevalence rates of myopia, hyperopia and astigmatism were 3.04% (95% CI: 2.30–3.78), 6.20% (95% CI: 5.27–7.14), and 17.43% (95% CI: 15.39–19.46), respectively. Prevalence of myopia (P=0.925) and astigmatism (P=0.056) were not statistically significantly different between the two genders, but the odds of hyperopia were 1.11 (95% CI: 1.01–2.05) times higher in girls (P=0.011). The prevalence of with-the-rule astigmatism was 12.59%, against-the-rule was 2.07%, and oblique 2.65%. Overall, 22.8% (95% CI: 19.7–24.9) of the schoolchildren in this study had at least one type of refractive error. Conclusion: One out of every 5 schoolchildren had some refractive error. Conducting multicenter studies throughout the Middle East can be very helpful in understanding the current distribution patterns and etiology of refractive errors compared to the previous decade. PMID:27114984
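
    For readers who want to reproduce the interval arithmetic, the sketch below computes a simple Wald confidence interval for the myopia estimate; the case count is back-calculated from the reported percentage (an assumption), and the paper's wider published interval reflects the variance inflation of the multistage cluster design:

```python
import math

def prevalence_ci(k, n, z=1.96):
    """Wald 95% confidence interval assuming simple random sampling."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# Myopia: 3.04% of 4072 eligible children -> roughly 124 cases.
p, lo, hi = prevalence_ci(124, 4072)
print(f"prevalence {100*p:.2f}% (95% CI {100*lo:.2f}-{100*hi:.2f})")
# The published interval (2.30-3.78) is wider because the multistage
# cluster sampling design effect inflates the variance.
```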

  12. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.

  14. The margin for error when releasing the high bar for dismounts.

    PubMed

    Hiley, M J; Yeadon, M R

    2003-03-01

    In Men's Artistic Gymnastics the current trend in elite high bar dismounts is to perform two somersaults in an extended body shape with a number of twists. Two techniques have been identified in the backward giant circles leading up to release for these dismounts (J. Biomech. 32 (1999) 811). At the Sydney 2000 Olympic Games 95% of gymnasts used the "scooped" backward giant circle technique rather than the "traditional" technique. It was speculated that the advantage gained from the scooped technique was an increased margin for error when releasing the high bar. A four-segment planar simulation model of the gymnast and high bar was used to determine the margin for error when releasing the bar in performances at the Sydney 2000 Olympic Games. The eight high bar finalists and the three gymnasts who used the traditional backward giant circle technique were chosen for analysis. Model parameters were optimised to obtain a close match between simulated and actual performances in terms of rotation angle (1.2 degrees), bar displacements (0.014 m), and release velocities (2%). Each matching simulation was used to determine the time window around the actual point of release for which the model had appropriate release parameters to complete the dismount successfully. The scooped backward giant circle technique resulted in a greater margin for error (release window 88-157 ms) when releasing the bar compared to the traditional technique (release window 73-84 ms). PMID:12594979

  15. Estimation of chromatic errors from broadband images for high contrast imaging

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2015-09-01

    Usage of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission to follow the James Webb Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped-pupil coronagraph with a conjugate DM and compute the resulting estimation error.

  16. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    NASA Astrophysics Data System (ADS)

    Li, Zexian; Latva-aho, Matti

    2004-12-01

    Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Gaussian Q-function, the characteristic function, and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral with an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
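
    The flavor of the single-integral result can be shown with a much simpler special case. The sketch below (Python) evaluates the average BER of plain BPSK over a single Nakagami-m branch using the moment-generating-function form, a one-dimensional finite-range integral of the same kind; the paper's multiuser MC-CDMA expression is analogous but with a more elaborate integrand:

```python
import numpy as np
from scipy import integrate

def ber_bpsk_nakagami(m, gamma_bar):
    """Average BPSK BER in Nakagami-m fading via the MGF approach:
    Pb = (1/pi) * Integral_0^{pi/2} (1 + gamma_bar/(m sin^2 th))^(-m) dth."""
    integrand = lambda th: (1.0 + gamma_bar / (m * np.sin(th) ** 2)) ** (-m)
    val, _ = integrate.quad(integrand, 0.0, np.pi / 2.0)
    return val / np.pi

for snr_db in (0, 5, 10, 15):
    g = 10 ** (snr_db / 10)                     # average SNR per bit
    print(f"{snr_db:2d} dB -> BER = {ber_bpsk_nakagami(2.0, g):.3e}")
```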

  17. Advanced Communications Technology Satellite (ACTS) Fade Compensation Protocol Impact on Very Small-Aperture Terminal Bit Error Rate Performance

    NASA Technical Reports Server (NTRS)

    Cox, Christina B.; Coney, Thom A.

    1999-01-01

    The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index Terms-Adaptive coding, attenuation, propagation, rain, satellite communication, satellites.

  18. Consideration of wear rates at high velocity

    NASA Astrophysics Data System (ADS)

    Hale, Chad S.

    The research presented here considers high-velocity relative sliding motion between two bodies in contact. Overall, the wear environment is truly three-dimensional. Characterizing three-dimensional wear was not economically feasible because it must be analyzed at the micro-mechanical level to get results; thus, an engineering approximation was carried out. This approximation was based on a metallographic study identifying the need to include viscoplastic constitutive material models, the coefficient of friction, relationships between the normal load and velocity, and an understanding of wave propagation. A sled test run at the Holloman High Speed Test Track (HHSTT) was considered for the determination of high-velocity wear rates. In order to adequately characterize high-velocity wear, it was necessary to formulate a numerical model that contained all of the physical events present. The experimental results of a VascoMax 300 maraging steel slipper sliding on an AISI 1080 steel rail during a January 2008 sled test mission were analyzed. During this rocket sled test, the slipper traveled 5,816 meters in 8.14 seconds and reached a maximum velocity of 1,530 m/s. This type of environment had never previously been considered in terms of wear evaluation. Each of the features of the metallography was obtained through micro-mechanical experimental techniques. The byproduct of this analysis is that it is now possible to formulate a model that contains viscoplasticity, asperity collisions, temperature, and frictional features. Based on the observations of the metallographic analysis, these necessary features have been included in the numerical model, which makes use of a time-dynamic program that follows the movement of a slipper during its experimental test run. The resulting velocity and pressure functions of time have been implemented in the explicit finite element code ABAQUS. Two-dimensional, plane strain models

  19. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, including a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration, at a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10^-15 with 10-second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10^-15 with 10-second averaging time. We identified the major noise sources in the current system as the transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance for both operating modes.
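
    As a sketch of how such stability numbers are computed, the code below evaluates an overlapping Allan deviation from simulated ranging residuals. The paper quotes the modified Allan deviation, which adds an extra in-window phase average; the plain overlapping form is used here to keep the example short:

```python
import numpy as np

def overlapping_adev(x, tau0, m):
    """Overlapping Allan deviation from time-error samples x with
    sampling interval tau0, at averaging time tau = m * tau0."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]    # second differences
    avar = np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)
    return np.sqrt(avar)

# White noise of 1 micron on a range measurement, sampled at 10 Hz;
# the output then has units of range rate (m/s).
rng = np.random.default_rng(9)
x = rng.normal(0.0, 1e-6, 100_000)
for m in (1, 10, 100):
    print(f"tau = {m / 10:6.1f} s   adev = {overlapping_adev(x, 0.1, m):.2e} m/s")
```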

  20. Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1998-01-01

    In this paper, we reinvestigate the solution of the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences are never repeated, but lie in a chaotic region; nevertheless, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data in order to predict future data under limited weight quantization constraints, which helps provide better estimation in time for an intelligent control system. In our earlier work, it was shown that CEP can sufficiently learn the 5-8 bit parity problem with 4 or more bits, and the color segmentation problem with 7 or more bits, of weight quantization. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as low as 4-bit weight quantization using round-off and truncation techniques. The results show that the generalization feature suffers less as more bits of weight quantization are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
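
    The round-off versus truncation asymmetry is easy to demonstrate. The sketch below (Python, with an illustrative 4-bit fractional quantization; not the CEP implementation itself) quantizes random weights both ways and reports the error statistics, showing the one-sided bias that floor-style truncation introduces:

```python
import numpy as np

def quantize(w, bits, mode="round"):
    """Quantize weights to `bits` fractional bits. 'round' keeps the
    error symmetric about zero; 'floor' mimics two's-complement
    truncation, which always rounds down and biases the error."""
    scale = 2.0 ** bits
    q = np.round(w * scale) if mode == "round" else np.floor(w * scale)
    return q / scale

rng = np.random.default_rng(11)
w = rng.uniform(-1, 1, 10_000)
for mode in ("round", "floor"):
    err = quantize(w, 4, mode) - w
    print(f"{mode:5s}: mean error {err.mean():+.5f}, "
          f"max |error| {np.abs(err).max():.5f}")
```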

  1. Separable and Error-Free Reversible Data Hiding in Encrypted Image with High Payload

    PubMed Central

    Yin, Zhaoxia; Luo, Bin; Hong, Wien

    2014-01-01

    This paper proposes a separable reversible data-hiding scheme in encrypted image which offers high payload and error-free data extraction. The cover image is partitioned into nonoverlapping blocks and multigranularity encryption is applied to obtain the encrypted image. The data hider preprocesses the encrypted image and randomly selects two basic pixels in each block to estimate the block smoothness and indicate peak points. Additional data are embedded into blocks in the sorted order of block smoothness by using local histogram shifting under the guidance of the peak points. At the receiver side, image decryption and data extraction are separable and can be free to choose. Compared to previous approaches, the proposed method is simpler in calculation while offering better performance: larger payload, better embedding quality, and error-free data extraction, as well as image recovery. PMID:24977214

  2. Quantifying the Representation Error of Land Biosphere Models using High Resolution Footprint Analyses and UAS Observations

    NASA Astrophysics Data System (ADS)

    Hanson, C. V.; Schmidt, A.; Law, B. E.; Moore, W.

    2015-12-01

    The validity of land biosphere model outputs relies on accurate representations of ecosystem processes within the model. Typically, a vegetation or land cover type for a given area (several square kilometers or larger resolution) is assumed to have uniform properties. The limited spatial and temporal resolution of models prevents resolving finer scale heterogeneous flux patterns that arise from variations in vegetation. This representation error must be quantified carefully if models are informed through data assimilation, in order to assign appropriate weighting of model outputs and measurement data. The representation error is usually only estimated, or ignored entirely, due to the difficulty in determining reasonable values. UAS-based gas sensors allow measurements of atmospheric CO2 concentrations with unprecedented spatial resolution, providing a means of determining the representation error for CO2 fluxes empirically. In this study we use three-dimensional CO2 concentration data in combination with high resolution footprint analyses in order to quantify the representation error for modelled CO2 fluxes for typical resolutions of regional land biosphere models. CO2 concentration data were collected using an Atlatl X6A hexacopter carrying a highly calibrated closed-path infrared gas analyzer based sampling system with an uncertainty of ≤ ±0.2 ppm CO2. Gas concentration data were mapped in three dimensions using the UAS on-board position data and compared to footprints generated using WRF 3.6.1.

  3. Quality assurance and high count rate

    SciTech Connect

    Lindstrom, R.M.

    1994-12-31

    A high count rate can distort the expected linear relation between the charge spectrum generated in a semiconductor gamma-ray detector and that recorded in the pulse-height analyzer. The busy time of the analog-to-digital converter (ADC) is accurately compensated for in commercial analyzers by extending the live counting time. As fast successive-approximation ADCs have become more generally used (note that a 10 µs fixed digitizing time for 8192 channels is equivalent to an 800-MHz Wilkinson ADC), the resolving times of the other components in the counting system have become relatively more important limitations on the throughput of the total system, and also more important sources of nonlinearity, which lead to biased measurements. A loss-free counting (LFC) technique has been developed which gives an undistorted spectrum and zero dead time, so that decay equations can be solved. Tests of an LFC system have shown that, with systematic calibration, the system can give stable values in practice for a reference spectrum up to at least 100 kHz. To obtain higher quality data with confidence, quality control tests are needed.
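
    For comparison with the live-time extension and LFC approaches described above, the sketch below shows the textbook nonparalyzable dead-time correction, which recovers the true input rate from the measured rate analytically; LFC achieves the same end in real time by adding a statistical weight to each stored count:

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Classic nonparalyzable dead-time correction: if each recorded
    event blocks the system for `dead_time` seconds, the true input
    rate is n = m / (1 - m * tau)."""
    return measured_rate / (1.0 - measured_rate * dead_time)

# A 10 us converter at a 50 kHz measured rate is busy half the time,
# so the true rate is twice the measured rate:
m, tau = 50_000.0, 10e-6
print(f"true rate ~ {true_rate_nonparalyzable(m, tau):.0f} counts/s")
```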

  4. Error in radiology.

    PubMed

    Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J

    2001-10-01

    The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.

  5. High resolution, high rate x-ray spectrometer

    DOEpatents

    Goulding, F.S.; Landis, D.A.

    1983-07-14

    It is an object of the invention to provide a pulse processing system for use with detected signals of a wide dynamic range which is capable of very high counting rates, with high throughput, with excellent energy resolution and a high signal-to-noise ratio. It is a further object to provide a pulse processing system wherein the fast channel resolving time is quite short and substantially independent of the energy of the detected signals. Another object is to provide a pulse processing system having a pile-up rejector circuit which will allow the maximum number of non-interfering pulses to be passed to the output. It is also an object of the invention to provide new methods for generating substantially symmetrically triangular pulses for use in both the main and fast channels of a pulse processing system.

  6. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    ERIC Educational Resources Information Center

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided from the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206 errors, 96 errors and 35 errors per 1000 words. The following section…

  7. Is there a general factor in ratings of job performance? A meta-analytic framework for disentangling substantive and error influences.

    PubMed

    Viswesvaran, Chockalingam; Schmidt, Frank L; Ones, Deniz S

    2005-01-01

    A database integrating 90 years of empirical studies reporting intercorrelations among rated job performance dimensions was used to test the hypothesis of a general factor in job performance. After controlling for halo error and 3 other sources of measurement error, there remained a general factor in job performance ratings at the construct level accounting for 60% of total variance. Construct-level correlations among rated dimensions of job performance were substantially inflated by halo for both supervisory (33%) and peer (63%) intrarater correlations. These findings have important implications for the measurement of job performance and for theories of job performance.

  8. Accurate human microsatellite genotypes from high-throughput resequencing data using informed error profiles

    PubMed Central

    Highnam, Gareth; Franck, Christopher; Martin, Andy; Stephens, Calvin; Puthige, Ashwin; Mittelman, David

    2013-01-01

    Repetitive sequences are biologically and clinically important because they can influence traits and disease, but repeats are challenging to analyse using short-read sequencing technology. We present a tool for genotyping microsatellite repeats called RepeatSeq, which uses Bayesian model selection guided by an empirically derived error model that incorporates sequence and read properties. Next, we apply RepeatSeq to high-coverage genomes from the 1000 Genomes Project to evaluate performance and accuracy. The software uses common formats, such as VCF, for compatibility with existing genome analysis pipelines. Source code and binaries are available at http://github.com/adaptivegenome/repeatseq. PMID:23090981

  9. A High-Precision Instrument for Mapping of Rotational Errors in Rotary Stages

    DOE PAGES

    Xu, W.; Lauer, K.; Chu, Y.; Nazaretski, E.

    2014-11-02

    A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g. circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle of confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.

  10. Optimal error estimates for high order Runge-Kutta methods applied to evolutionary equations

    SciTech Connect

    McKinney, W.R.

    1989-01-01

    Fully discrete approximations to 1-periodic solutions of the generalized Korteweg-de Vries and the Cahn-Hilliard equations are analyzed. These approximations are generated by an implicit Runge-Kutta method for the temporal discretization and a Galerkin finite element method for the spatial discretization. Furthermore, these approximations may be of arbitrarily high order. In particular, it is shown that the well-known order reduction phenomenon afflicting implicit Runge-Kutta methods does not occur. Numerical results supporting these optimal error estimates for the Korteweg-de Vries equation and indicating the existence of a slow motion manifold for the Cahn-Hilliard equation are also provided.
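
    The absence of order reduction is the kind of claim one can probe numerically. The sketch below (Python) measures the observed convergence order of the classical explicit RK4 method on a smooth scalar ODE, a far simpler setting than the paper's implicit Runge-Kutta/Galerkin discretizations, but it illustrates the standard step-halving technique for estimating the order:

```python
import numpy as np

def rk4(f, y0, t1, n):
    """Classical fourth-order Runge-Kutta over [0, t1] with n steps."""
    h, y, t = t1 / n, y0, 0.0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Halving the step on y' = -y, y(0) = 1 should cut the error by ~2^4,
# so the observed order approaches 4 (no reduction for this smooth,
# non-stiff problem).
f = lambda t, y: -y
exact = np.exp(-1.0)
errs = [abs(rk4(f, 1.0, 1.0, n) - exact) for n in (10, 20, 40, 80)]
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(3)]
print("observed orders:", [round(p, 2) for p in orders])
```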

  11. Bit-Error-Rate-Based Evaluation of Energy-Gap-Induced Super-Resolution Read-Only-Memory Disc in Blu-ray Disc Optics

    NASA Astrophysics Data System (ADS)

    Tajima, Hideharu; Yamada, Hirohisa; Hayashi, Tetsuya; Yamamoto, Masaki; Harada, Yasuhiro; Mori, Go; Akiyama, Jun; Maeda, Shigemi; Murakami, Yoshiteru; Takahashi, Akira

    2008-07-01

    Bit error rate (bER) of an energy-gap-induced super-resolution (EG-SR) read-only-memory (ROM) disc with a zinc oxide (ZnO) film was measured in Blu-ray Disc (BD) optics by the partial response maximum likelihood (PRML) detection method. The experimental capacity was 40 GB in a single-layered 120 mm disc, about 1.6 times that of the commercially available BD with 25 GB capacity. A bER near 1 x 10^-5 was obtained in an EG-SR ROM disc with a tantalum (Ta) reflective film. Practically available characteristics, including readout power margin, readout cyclability, environmental resistance, tilt margins, and focus offset margin, were also confirmed in the EG-SR ROM disc with 40 GB capacity.

  12. Dislocation Mechanics of High-Rate Deformations

    NASA Astrophysics Data System (ADS)

    Armstrong, Ronald W.; Li, Qizhen

    2015-10-01

    Four topics associated with constitutive equation descriptions of rate-dependent metal plastic deformation behavior are reviewed in honor of previous research accomplished on the same issues by Professor Marc Meyers along with colleagues and students, as follows: (1) increasing strength levels attributed to thermally activated dislocation migration at higher loading rates; (2) inhomogeneous adiabatic shear banding; (3) controlling mechanisms of deformation in shock as compared with shock-less isentropic compression experiments; and (4) Hall-Petch-based grain size-dependent strain rate sensitivities exhibited by nanopolycrystalline materials. Experimental results are reviewed on these topics for a wide range of metals.

  13. Effects of diffraction and static wavefront errors on high-contrast imaging from the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Troy, Mitchell; Chanan, Gary; Crossfield, Ian; Dumont, Philip; Green, Joseph J.; Macintosh, Bruce

    2006-01-01

    High-contrast imaging, particularly direct detection of extrasolar planets, is a major science driver for the next generation of extremely large telescopes such as the segmented Thirty Meter Telescope. This goal requires more than merely diffraction-limited imaging, but also attention to residual scattered light from wavefront errors and diffraction effects at the contrast level of 10^-8 to 10^-9. Using a wave-optics simulation of adaptive optics and a diffraction suppression system, we investigate diffraction from the segmentation geometry, intersegment gaps, and obscuration by the secondary mirror and its supports. We find that the large obscurations pose a greater challenge than the much smaller segment gaps. In addition, the impact of wavefront errors from the primary mirror, including segment alignment and figure errors, is analyzed. Segment-to-segment reflectivity variations and residual segment figure error will be the dominant error contributors from the primary mirror. Strategies to mitigate these errors are discussed.

  14. HIgh Rate X-ray Fluorescence Detector

    SciTech Connect

    Grudberg, Peter Matthew

    2013-04-30

    The purpose of this project was to develop a compact, modular multi-channel x-ray detector with integrated electronics. This detector, based upon emerging silicon drift detector (SDD) technology, will be capable of high data rate operation superior to the current state of the art offered by high purity germanium (HPGe) detectors, without the need for liquid nitrogen. In addition, by integrating the processing electronics inside the detector housing, the detector performance will be much less affected by the typically noisy electrical environment of a synchrotron hutch, and will also be much more compact than current systems, which can include a detector involving a large LN2 dewar and multiple racks of electronics. The combined detector/processor system is designed to match or exceed the performance and features of currently available detector systems, at a lower cost and with more ease of use due to the small size of the detector. In addition, the detector system is designed to be modular: a small system might have just one detector module, while a larger system can have many; you can start with one detector module and add more as needs grow and budget allows. The modular nature also serves to simplify repair. In large part, we were successful in achieving our goals. We did develop a very high performance, large area multi-channel SDD detector, packaged with all associated electronics, which is easy to use and requires minimal external support (a simple power supply module and a closed-loop water cooling system). However, we did fall short of some of our stated goals. We had intended to base the detector on modular, large-area detectors from Ketek GmbH in Munich, Germany; however, these were not available in a suitable time frame for this project, so we worked instead with pnDetector GmbH (also located in Munich). They were able to provide a front-end detector module with six 100 mm² SDD detectors (two monolithic arrays of three elements each) along with

  15. Packet error rate analysis of digital pulse interval modulation in intersatellite optical communication systems with diversified wavefront deformation.

    PubMed

    Zhu, Jin; Wang, Dayan; Xie, Wanqing

    2015-02-20

    Diversified wavefront deformation is an inevitable phenomenon in intersatellite optical communication systems, which will decrease system performance. In this paper, we investigate the description of wavefront deformation and its influence on the packet error rate (PER) of digital pulse interval modulation (DPIM). With the wavelet method, the diversified wavefront deformation can be described by wavelet parameters: coefficient, dilation, and shift factors, where the coefficient factor represents the depth, the dilation factor represents the area, and the shift factor gives the location. Based on this, the relationship between PER and the wavelet parameters is analyzed from a theoretical viewpoint. Numerical results illustrate the validity of the theoretical analysis: PER increases with the depth and area and decreases as the location gets farther from the center of the optical antenna. In addition to describing diversified deformation, the advantage of the wavelet method over Zernike polynomials in computational complexity is shown via a numerical example. This work provides a practical method for describing diversified wavefront deformation and analyzing its influence, and will be helpful for designing optical systems.
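
    The wavelet parameterization described above can be sketched in a few lines: a deformation is c·ψ((x − b)/a), with coefficient c setting depth, dilation a setting area, and shift b setting location. The Ricker mother wavelet and the parameter values below are illustrative choices, not taken from the paper.

        import numpy as np

        def ricker(t):
            # Ricker ("Mexican hat") mother wavelet; the paper does not name its
            # wavelet, so this choice is illustrative.
            return (1.0 - t**2) * np.exp(-t**2 / 2.0)

        def deformation(x, c, a, b):
            # c: coefficient (depth), a: dilation (area), b: shift (location)
            return c * ricker((x - b) / a)

        x = np.linspace(-1.0, 1.0, 501)              # normalized aperture coordinate
        w = deformation(x, c=0.1, a=0.2, b=0.3)      # hypothetical parameters, waves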

  16. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media.

    PubMed

    Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C

    2016-06-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.

  17. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  18. Bipolar high-repetition-rate high-voltage nanosecond pulser

    SciTech Connect

    Tian Fuqiang; Wang Yi; Shi Hongsheng; Lei Qingquan

    2008-06-15

    The pulser is designed mainly for producing corona plasma in a waste-water treatment system; its application to the study of dielectric electrical properties is also discussed. The pulser consists of a variable dc power source for high-voltage supply, two graded capacitors for energy storage, and a rotating spark gap switch. The key part is the multielectrode rotating spark gap switch (MER-SGS), which ensures wide-range modulation of the pulse repetition rate, longer pulse width, shorter pulse rise time, and remarkable electrical field distortion, and greatly favors recovery of the gap insulation strength, insulation design, the life of the switch, etc. The voltage of the output pulses switched by the MER-SGS is on the order of 3-50 kV with a pulse rise time of less than 10 ns and a pulse repetition rate of 1-3 kHz. An energy of 1.25-125 J per pulse and an average power of up to 10-50 kW are attainable. The highest pulse repetition rate is determined by the driver motor revolution and the electrode number of the MER-SGS. Even higher voltage and energy can be switched by adjusting the gas pressure, employing N₂ as the insulation gas, or enlarging the size of the MER-SGS to guarantee a sufficient insulation level.
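
    As a rough check on these figures, the per-pulse energy of a capacitive pulser follows E = ½CV² and the average power follows P = E·f. The storage capacitance below is an assumed value chosen to reproduce the quoted 125 J at 50 kV; it is not given in the abstract.

        # Per-pulse energy E = C V^2 / 2 and average power P = E f.
        C = 0.1e-6                      # storage capacitance, farads (assumed)
        f = 1e3                         # repetition rate, Hz (within the 1-3 kHz range)
        for V in (3e3, 50e3):           # charging-voltage range from the abstract
            E = 0.5 * C * V**2
            print(f"V = {V/1e3:.0f} kV: E = {E:.2f} J/pulse, P = {E*f/1e3:.2f} kW")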

  19. A Case Study using Token Reward on Oral Reading Rate, Error Reduction, and Comprehension of a Reading Deficient Child.

    ERIC Educational Resources Information Center

    Ervin, Tommye A.; Fox, Paul A.

    This case study reports the use of token reinforcement in remedial reading instruction with an eleven-year-old boy from rural Appalachia. During phase one, tokens were given for reading 50-word passages without error; token value was contingent upon the number of attempts necessary to read without error. During phase two, words missed in phase one…

  20. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high resolution, high frame rate InGaAs based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640×512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
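
    The sustained data rate and on-board recording time implied by these specifications follow from frame size × frame rate. The sketch assumes 2 bytes per pixel, which the abstract does not state.

        # Back-of-envelope sustained data rate at full frame.
        width, height, fps = 640, 512, 1700
        bytes_per_pixel = 2             # assumption; the abstract omits bit depth
        rate = width * height * bytes_per_pixel * fps    # bytes per second
        print(f"{rate/1e9:.2f} GB/s -> {16e9/rate:.0f} s of recording in 16 GB")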

  1. High strain rate behavior of alloy 800H at high temperatures

    NASA Astrophysics Data System (ADS)

    Shafiei, E.

    2016-05-01

    In this paper, a new model using a linear estimate of strain hardening rate vs. stress has been developed to predict the dynamic behavior of alloy 800H at high temperatures. To assess the accuracy of the presented model, it was compared against the Johnson-Cook model for flow stress curves. Evaluation of the mean error of flow stress at deformation temperatures from 850 °C to 1050 °C and at strain rates of 5 s⁻¹ to 20 s⁻¹ indicates that the predicted results are in good agreement with experimentally measured ones. This analysis has been done for stress-strain curves under hot working conditions for alloy 800H. However, the model is not dependent on the type of material and can be extended to any similar conditions.
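
    For reference, the Johnson-Cook model used for comparison has the standard multiplicative form (A + Bε^n) × (1 + C ln(strain-rate ratio)) × (1 − T*^m). The parameters in the sketch below are hypothetical placeholders, not values fitted to alloy 800H.

        import math

        def johnson_cook(strain, rate, T, A, B, n, C, m, rate0, T_ref, T_melt):
            # Standard Johnson-Cook flow stress:
            # (A + B*strain^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m)
            T_star = (T - T_ref) / (T_melt - T_ref)
            return (A + B * strain**n) * (1 + C * math.log(rate / rate0)) * (1 - T_star**m)

        # Hypothetical parameters for illustration only (not fitted to 800H):
        sigma = johnson_cook(strain=0.2, rate=10.0, T=1223.0,
                             A=150.0, B=300.0, n=0.4, C=0.05, m=1.0,
                             rate0=1.0, T_ref=298.0, T_melt=1673.0)
        print(f"flow stress ~ {sigma:.0f} MPa")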

  2. High Ratings for Teachers Are Still Seen

    ERIC Educational Resources Information Center

    Sawchuk, Stephen

    2013-01-01

    In Michigan, 98 percent of teachers were rated effective or better under new teacher-evaluation systems recently put in place. In Florida, 97 percent of teachers were deemed effective or better. Principals in Tennessee judged 98 percent of teachers to be "at expectations" or better last school year, while evaluators in Georgia gave good reviews to…

  3. The Combustion of HMX. [burning rate at high pressures]

    NASA Technical Reports Server (NTRS)

    Boggs, T. L.; Price, C. F.; Atwood, A. I.; Zurn, D. E.; Eisel, J. L.

    1980-01-01

    The burn rate of HMX was measured at high pressures (p > 1000 psi). The self-deflagration rate of HMX was determined from 1 atmosphere to 50,000 psi. The burning rate shows no significant slope breaks.

  4. Method and apparatus for reducing quantization error in laser gyro test data through high speed filtering

    SciTech Connect

    Mark, J.G.; Brown, A.K.; Matthews, A.

    1987-01-06

    A method is described for processing ring laser gyroscope test data comprising the steps of: (a) accumulating the data over a preselected sample period; and (b) filtering the data at a predetermined frequency so that non-time dependent errors are reduced by a substantially greater amount than are time dependent errors; then (c) analyzing the random walk error of the filtered data.
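
    A toy illustration of the idea, with made-up scales: quantization error is white (non-time-dependent), so low-pass filtering suppresses it roughly as the square root of the averaging window, while the time-correlated random walk passes through largely intact.

        import numpy as np

        rng = np.random.default_rng(2)
        n, lsb = 100_000, 1e-3
        walk = np.cumsum(rng.normal(0.0, 1e-5, n))   # time-dependent random walk
        measured = np.round(walk / lsb) * lsb        # quantized gyro output

        win = 64                                     # filter window (sets the cutoff)
        filtered = np.convolve(measured, np.ones(win) / win, mode="same")

        print(np.std(measured - walk))   # raw quantization error, ~lsb/sqrt(12)
        print(np.std(filtered - walk))   # much smaller: white noise is averaged down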

  5. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.

  6. Smoking Rates Still High in Some Racial Groups, CDC Reports

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_160256.html Smoking Rates Still High in Some Racial Groups, CDC ... lot of progress in getting Americans to stop smoking, some groups still have high smoking rates, a ...

  7. High Count Rate Electron Probe Microanalysis

    PubMed Central

    Geller, Joseph D.; Herrington, Charles

    2002-01-01

    Reducing the measurement uncertainty of quantitative analyses made using electron probe microanalyzers (EPMA) requires a careful study of the individual uncertainties from each definable step of the measurement. Those steps include measuring the incident electron beam current and voltage, knowing the angle between the electron beam and the sample (takeoff angle), collecting the emitted x rays from the sample, comparing the emitted x-ray flux to known standards (to determine the k-ratio), and transforming the k-ratio to concentration using algorithms that include, as a minimum, the atomic number, absorption, and fluorescence corrections. This paper discusses the collection and counting of the emitted x rays, which are diffracted into gas-flow or sealed proportional x-ray detectors. The uncertainty in the number of collected x rays decreases as the number of counts increases, and is fully described by Poisson statistics. Increasing the number of x rays collected involves either counting longer or counting at a higher rate. Counting longer means the analysis time increases and may become excessive to reach the desired uncertainty; instrument drift also becomes an issue. Counting at higher rates has its limitations, which are a function of the detector physics and the detecting electronics. Since the beginning of EPMA analysis, analog electronics have been used to amplify and discriminate the x-ray induced ionizations within the proportional counter. This paper will discuss the use of digital electronics for this purpose. These electronics are similar to those used for energy dispersive analysis of x rays with either Si(Li) or Ge(Li) detectors except that the shaping time constants are much smaller. PMID:27446749
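
    Because the counting uncertainty is Poisson, the relative standard uncertainty of N counts is 1/√N, so each halving of the uncertainty costs four times the counts. A minimal sketch, with hypothetical count rates:

        # Relative standard uncertainty of N Poisson counts is 1/sqrt(N).
        target = 0.001                        # 0.1 % target uncertainty
        counts_needed = (1.0 / target)**2     # 1e6 counts
        for rate_cps in (10_000, 100_000):    # hypothetical count rates
            print(f"{rate_cps} cps: {counts_needed / rate_cps:.0f} s to reach 0.1 %")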

  8. Detecting Unit of Analysis Problems in Nested Designs: Statistical Power and Type I Error Rates of the "F" Test for Groups-within-Treatments Effects.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Dickinson, Wendy B.

    1996-01-01

    Empirical estimates of the power and Type I error rate of the test of the classrooms-within-treatments effect in the nested analysis of variance approach are provided for a variety of nominal alpha levels and a range of classroom effect sizes and research designs. (SLD)

  9. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case.

    PubMed

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-07-25

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol where one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing.

  10. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-07-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol where one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing.

  11. Resource utilization. High dose rate versus low dose rate brachytherapy for gynecologic cancer.

    PubMed

    Bastin, K; Buchler, D; Stitt, J; Shanahan, T; Pola, Y; Paliwal, B; Kinsella, T

    1993-06-01

    A comparative analysis of anesthesia use, perioperative morbidity and mortality, capital, and treatment cost of high dose rate versus low dose rate intracavitary brachytherapy for gynecologic malignancy is presented. To assess current anesthesia utilization, application location, and high dose rate afterloader availability for gynecologic brachytherapy in private and academic practices, a nine-question survey was sent to 150 radiotherapy centers in the United States, of which 95 (63%) responded. Of these 95 respondents, 95% used low dose rate brachytherapy, and 18% possessed high dose rate capability. General anesthesia was used in 95% of programs for tandem + ovoid and in 31% for ovoids-only placement. Differences among private and academic practice respondents were minimal. In our institution, a cost comparison for low dose rate therapy (two applications with 3 hospital days per application, operating and recovery room use, spinal anesthesia, radiotherapy) versus high dose rate treatment (five outpatient departmental applications, intravenous anesthesia without an anesthesiologist, radiotherapy) revealed a 244% higher overall charge for low dose rate treatment, primarily due to hospital and operating room expenses. In addition to its ability to save thousands of dollars per intracavitary patient, high dose rate therapy generated a "cost-shift," increasing radiotherapy departmental billings by 438%. More importantly, perioperative morbidity and mortality in our experience of 500+ high dose rate applications compared favorably with recently reported data using low dose rate intracavitary treatment. Capital investment, maintenance requirements, and depreciation costs for high dose rate capability are reviewed. Application of the defined "revenue-cost ratio" formula demonstrates the importance of high application numbers and consistent reimbursement for parity in high dose rate operation. Logically, inadequate third-party reimbursement (e.g., Medicare) reduces high

  12. High-deposition-rate ceramics synthesis

    SciTech Connect

    Allendorf, M.D.; Osterheld, T.H.; Outka, D.A.

    1995-05-01

    Parallel experimental and computational investigations are conducted in this project to develop validated numerical models of ceramic synthesis processes. Experiments are conducted in the High-Temperature Materials Synthesis Laboratory in Sandia's Combustion Research Facility. A high-temperature flow reactor that can accommodate small preforms (1-3 cm diameter) generates conditions under which deposition can be observed, with flexibility to vary both deposition temperature (up to 1500 K) and pressure (as low as 10 torr). Both mass spectrometric and laser diagnostic probes are available to provide measurements of gas-phase compositions. Experiments using surface analytical techniques are also applied to characterize important processes occurring on the deposit surface. Computational tools developed through extensive research in the combustion field are employed to simulate the chemically reacting flows present in typical industrial reactors. These include the CHEMKIN and Surface-CHEMKIN suites of codes, which permit facile development of complex reaction mechanisms and vastly simplify the implementation of multi-component transport and thermodynamics. Quantum chemistry codes are also used to estimate thermodynamic and kinetic data for species and reactions for which this information is unavailable.

  13. Assessment of high-rate GPS using a single-axis shake table

    NASA Astrophysics Data System (ADS)

    Häberling, S.; Rothacher, M.; Zhang, Y.; Clinton, J. F.; Geiger, A.

    2015-07-01

    The developments in GNSS receiver and antenna technologies, especially the increased sampling rate up to 100 sps, open up the possibility to measure high-rate earthquake ground motions with GNSS. In this paper we focus on the GPS errors in the frequency band above 1 Hz. The dominant error sources are mainly the carrier phase jitter caused by thermal noise and the stress error caused by the dynamics, e.g. antenna motions. To generate a large set of different motions, we used a single-axis shake table, where a GNSS antenna and a strong motion seismometer were mounted with a well-known ground truth. The generated motions were recorded with three different GNSS receivers with sampling rates up to 100 sps and different receiver baseband parameters. The baseband parameters directly dictate the carrier phase jitter and the correlations between subsequent epochs. A narrow loop filter bandwidth keeps the carrier phase jitter on a low level, but has an extreme impact on the receiver response for motions above 1 Hz. The amplitudes above 3 Hz are overestimated up to 50 % or reduced by well over half. The corresponding phase errors are between 30 and 90 degrees. Compared to the GNSS receiver response, the strong motion seismometer measurements do not show any amplitude or phase variations for the frequency range from 1 to 20 Hz. Due to the large errors for dynamic GNSS measurements, it is essential to account for the baseband parameters of the GNSS receivers if high-rate GNSS is to become a valuable tool for seismic displacement measurements above 1 Hz. Fortunately, the receiver response can be corrected by an inverse filter if the baseband parameters are known.
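
    The closing point, that the receiver response can be corrected by an inverse filter once the baseband parameters are known, can be sketched in the frequency domain. The first-order loop response below is an assumed stand-in; a real correction needs the receiver's measured loop model.

        import numpy as np

        def correct_response(x, fs, H):
            # Divide the spectrum of x by the (known) receiver response H(f).
            X = np.fft.rfft(x)
            f = np.fft.rfftfreq(len(x), d=1.0 / fs)
            Hf = H(f)
            Hf[np.abs(Hf) < 1e-3] = 1.0        # guard against blow-up near nulls
            return np.fft.irfft(X / Hf, n=len(x))

        def loop_response(f, f_bw=2.0):
            # Assumed first-order tracking-loop model; a real correction needs
            # the receiver's actual baseband (bandwidth/order) parameters.
            return 1.0 / (1.0 + 1j * f / f_bw)

        x = np.random.randn(1000)              # stand-in 100 sps displacement series
        y = correct_response(x, fs=100.0, H=loop_response)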

  14. High rate fabrication of compression molded components

    DOEpatents

    Matsen, Marc R.; Negley, Mark A.; Dykstra, William C.; Smith, Glen L.; Miller, Robert J.

    2016-04-19

    A method for fabricating a thermoplastic composite component comprises inductively heating a thermoplastic pre-form with a first induction coil by inducing current to flow in susceptor wires disposed throughout the pre-form, inductively heating smart susceptors in a molding tool to a leveling temperature with a second induction coil by applying a high-strength magnetic field having a magnetic flux that passes through surfaces of the smart susceptors, shaping the magnetic flux that passes through surfaces of the smart susceptors to flow substantially parallel to a molding surface of the smart susceptors, placing the heated pre-form between the heated smart susceptors; and applying molding pressure to the pre-form to form the composite component.

  15. The modern high rate digital cassette recorder

    NASA Technical Reports Server (NTRS)

    Clemow, Martin

    1993-01-01

    The magnetic tape recorder has played an essential role in the capture and storage of instrumentation data for more than thirty years. During this time, data recording technology has steadily progressed to meet user demands for more channels, wider bandwidths, and longer recording durations. When acquisition and processing moved from analog to digital techniques, so recorder design followed suit. Milestones marking the evolution of the data recorder through these various stages - multi-track analog, high density longitudinal digital, and more recently rotary digital - have often represented important breakthroughs in the handling of ever-greater quantities of data. Throughout this period there has been a very clear line of demarcation between data storage methods in the 'instrumentation world' on the one hand and the 'computer peripheral world' on the other. This is despite the fact that instrumentation data, whether analog or digital at the point of acquisition, is now likely to be processed on a digital computer at some stage. Regardless of whether the processing device is a small personal computer, a workstation, or the largest supercomputer, system integrators have traditionally been faced with the same basic problem - how to interface what is essentially a manually controlled, continuously running device (the tape recorder) into the fast start/stop computer environment without resorting to an excessive amount of complex custom interfacing and performance compromise. The increasing availability of affordable high power processing equipment throughout the scientific world is forcing recorder manufacturers to make their latest and perhaps most important breakthrough - the computer-friendly data recorder. The operating characteristics of such recorders are discussed and the resultant impact on both data acquisition and data analysis elements of system configuration are considered.

  16. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  17. Solar Cell Short Circuit Current Errors and Uncertainties During High Altitude Calibrations

    NASA Technical Reports Server (NTRS)

    Snyder, David D.

    2012-01-01

    High altitude balloon based facilities can make solar cell calibration measurements above 99.5% of the atmosphere for use in adjusting laboratory solar simulators. While close to on-orbit illumination, the small attenuation of the spectra may result in under-measurement of solar cell parameters. Variations in stratospheric weather may produce flight-to-flight measurement variations. To support the NSCAP effort, this work quantifies some of the effects on solar cell short circuit current (Isc) measurements on triple junction sub-cells. This work looks at several types of high altitude methods: direct high altitude measurements near 120 kft, and lower stratospheric Langley plots from aircraft. It also looks at Langley extrapolation from altitudes above most of the ozone, for potential small balloon payloads. A convolution of the sub-cell spectral response with the standard solar spectrum, modified by several absorption processes, is used to determine the relative change from AM0, Isc/Isc(AM0). Rayleigh scattering, molecular scattering from uniformly mixed gases, ozone, and water vapor are included in this analysis. A range of atmospheric pressures is examined, from 0.05 to 0.25 atm, to cover the range of atmospheric altitudes where solar cell calibrations are performed. Generally these errors and uncertainties are less than 0.2%.
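
    A minimal sketch of the stated convolution approach: integrate spectral response × irradiance with and without an atmospheric transmittance factor and take the ratio. All curves below are toy stand-ins, not the paper's data.

        import numpy as np

        def isc(irradiance, sr, wl):
            # Short-circuit current ~ integral of spectral response x irradiance
            return np.sum(sr * irradiance) * (wl[1] - wl[0])

        wl = np.linspace(300.0, 1800.0, 600)               # wavelength grid, nm
        e_am0 = np.exp(-((wl - 500.0) / 400.0)**2)         # toy AM0 spectrum
        t_atm = np.exp(-0.1 * (500.0 / wl)**4)             # toy Rayleigh-like term
        sr = np.where((wl > 350.0) & (wl < 900.0), 1.0, 0.0)  # toy sub-cell response

        ratio = isc(e_am0 * t_atm, sr, wl) / isc(e_am0, sr, wl)
        print(f"Isc/Isc(AM0) = {ratio:.4f}")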

  18. High data rate optical transceiver terminal

    NASA Technical Reports Server (NTRS)

    Clarke, E. S.

    1973-01-01

    The objectives of this study were: (1) to design a 400 Mbps optical transceiver terminal to operate from a high-altitude balloon-borne platform in order to permit the quantitative evaluation of a space-qualifiable optical communications system design, (2) to design an atmospheric propagation experiment to operate in conjunction with the terminal to measure the degrading effects of the atmosphere on the links, and (3) to design typical optical communications experiments for space-borne laboratories in the 1980-1990 time frame. As a result of the study, a transceiver package has been configured for demonstration flights during late 1974. The transceiver contains a 400 Mbps transmitter, a 400 Mbps receiver, and acquisition and tracking receivers. The transmitter is a Nd:YAG, 200 MHz, mode-locked, CW, diode-pumped laser operating at 1.06 μm, requiring 50 mW for a 6 dB margin. It will be designed to implement Pulse Quaternary Modulation (PQM). The 400 Mbps receiver utilizes a Dynamic Crossed-Field Photomultiplier (DCFP) detector. The acquisition receiver is a Quadrant Photomultiplier Tube (QPMT) and receives a 400 Mbps signal chopped at 0.1 MHz.

  19. The Effect of Minimum Wage Rates on High School Completion

    ERIC Educational Resources Information Center

    Warren, John Robert; Hamrock, Caitlin

    2010-01-01

    Does increasing the minimum wage reduce the high school completion rate? Previous research has suffered from (1) narrow time horizons, (2) potentially inadequate measures of states' high school completion rates, and (3) potentially inadequate measures of minimum wage rates. Overcoming each of these limitations, we analyze the impact of changes in…

  20. Resident Physicians' Clinical Training and Error Rate: The Roles of Autonomy, Consultation, and Familiarity with the Literature

    ERIC Educational Resources Information Center

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-01-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores…

  1. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  2. Dose rate in brachytherapy using after-loading machine: pulsed or high-dose rate?

    PubMed

    Hannoun-Lévi, J-M; Peiffert, D

    2014-10-01

    Since February 2014, it has no longer been possible to use low-dose-rate iridium-192 wires due to the end of industrial production of IRF1 and IRF2 sources. The Brachytherapy Group of the French Society of Radiation Oncology (GC-SFRO) has recommended switching from iridium wires to after-loading machines. Two types of after-loading machines are currently available, distinguished by the dose rate used: pulsed-dose rate or high-dose rate. In this article, we propose a comparative analysis of pulsed-dose-rate and high-dose-rate brachytherapy, based on biological, technological, organizational and financial considerations.

  3. Estimating the designated use attainment decision error rates of US Environmental Protection Agency's proposed numeric total phosphorus criteria for Florida, USA, colored lakes.

    PubMed

    McLaughlin, Douglas B

    2012-01-01

    The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
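
    The decision error rates described here can be approximated by Monte Carlo: simulate lakes around an assumed log-linear total P-chlorophyll a relationship and cross-tabulate criterion exceedance against use attainment. All coefficients below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        log_tp = rng.normal(np.log(30.0), 0.5, n)    # geomean total P, ug/L (hypothetical)
        # Assumed log-linear stressor-response model with lognormal scatter:
        log_chl = -0.5 + 1.0 * log_tp + rng.normal(0.0, 0.4, n)

        tp_criterion = 30.0     # candidate numeric TP criterion, ug/L (hypothetical)
        chl_threshold = 20.0    # designated-use chlorophyll a threshold from the abstract

        exceeds_tp = np.exp(log_tp) > tp_criterion
        impaired = np.exp(log_chl) > chl_threshold

        false_alarm = np.mean(exceeds_tp & ~impaired)  # analogous to a Type I error
        missed = np.mean(~exceeds_tp & impaired)       # analogous to a Type II error
        print(f"false alarm {false_alarm:.3f}, missed impairment {missed:.3f}")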

  4. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  5. High-Rate Strong-Signal Quantum Cryptography

    NASA Technical Reports Server (NTRS)

    Yuen, Horace P.

    1996-01-01

    Several quantum cryptosystems utilizing different kinds of nonclassical lights, which can accommodate high intensity fields and high data rate, are described. However, they are all sensitive to loss and both the high rate and the strong-signal character rapidly disappear. A squeezed light homodyne detection scheme is proposed which, with present-day technology, leads to more than two orders of magnitude data rate improvement over other current experimental systems for moderate loss.

  6. Prevalence of refractive errors in teenage high school students in Singapore.

    PubMed

    Quek, Timothy P L; Chua, Choon Guan; Chong, Choon Seng; Chong, Jin Ho; Hey, Hwee Weng; Lee, June; Lim, Yee Fei; Saw, Seang-Mei

    2004-01-01

    We aimed to study the prevalence of refractive conditions in Singapore teenagers. Grade 9 and 10 students (n = 946) aged 15-19 years from two secondary schools in Singapore were recruited. The refractive errors of the students' eyes were measured using non-cycloplegic autorefraction. Sociodemographic data and information on risk factors for myopia (such as reading and writing) were also obtained using an interviewer-administered questionnaire. The prevalence of refractive conditions was found to be: myopia [spherical equivalent (SE) at least -0.50 D] - 73.9%, hyperopia (SE at least +0.50 D) - 1.5%, astigmatism (cylinder at least -0.50 D) - 58.7% and anisometropia (SE difference at least 1.00 D) - 11.2%. After adjusting for age and gender, currently doing more than 20.5 h of reading and writing a week was found to be positively associated with myopia [odds ratio 1.12 (95% CI 1.04-1.20, p = 0.003)], as was reading and writing at a close distance and a better educational stream. The prevalence of myopia (73.9%) in Singapore teenagers is high. Current reading and writing habits, reading at close distances and a better educational stream are possible risk factors for myopia. PMID:14687201
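
    The categories above follow directly from the spherical equivalent, SE = sphere + cylinder/2, and the quoted thresholds. A minimal classification sketch:

        def classify(sphere, cylinder, se_other_eye=None):
            # Thresholds as quoted in the abstract; SE = sphere + cylinder/2.
            se = sphere + cylinder / 2.0
            result = {
                "myopia": se <= -0.50,
                "hyperopia": se >= 0.50,
                "astigmatism": cylinder <= -0.50,
            }
            if se_other_eye is not None:
                result["anisometropia"] = abs(se - se_other_eye) >= 1.00
            return result

        print(classify(sphere=-2.25, cylinder=-0.75, se_other_eye=-1.00))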

  7. T7 RNA Polymerases Backed up by Covalently Trapped Proteins Catalyze Highly Error Prone Transcription*

    PubMed Central

    Nakano, Toshiaki; Ouchi, Ryo; Kawazoe, Junya; Pack, Seung Pil; Makino, Keisuke; Ide, Hiroshi

    2012-01-01

    RNA polymerases (RNAPs) transcribe genes through the barrier of nucleoproteins and site-specific DNA-binding proteins on their own or with the aid of accessory factors. Proteins are often covalently trapped on DNA by DNA damaging agents, forming DNA-protein cross-links (DPCs). However, little is known about how immobilized proteins affect transcription. To elucidate the effect of DPCs on transcription, we constructed DNA templates containing site-specific DPCs and performed in vitro transcription reactions using phage T7 RNAP. We show here that DPCs constitute strong but not absolute blocks to in vitro transcription catalyzed by T7 RNAP. More importantly, sequence analysis of transcripts shows that RNAPs roadblocked not only by DPCs but also by the stalled leading RNAP become highly error prone and generate mutations in the upstream intact template regions. This contrasts with the transcriptional mutations induced by conventional DNA lesions, which are delivered to the active site or its proximal position in RNAPs and cause direct misincorporation. Our data also indicate that the trailing RNAP stimulates forward translocation of the stalled leading RNAP, promoting the translesion bypass of DPCs. The present results provide new insights into the transcriptional fidelity and mutual interactions of RNAPs that encounter persistent roadblocks. PMID:22235136

  8. Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu

    2015-07-01

    Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture, and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, cannot provide a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs over a short baseline. The ephemeris error and atmosphere delay are the possible main error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD during a period of quiet ionosphere activity, the main error sources decreasing positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integral of the satellite velocity error, which increase linearly with DLTTD. Cycle slips in the asynchronous double-differenced carrier phase are detected by the TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can also handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations, since the SOM is only a specific case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but also supports transferring the reference receiver's data over a mobile phone network data link. It avoids the data synchronization process (apart from the ambiguity initialization step), which is very convenient for real-time navigation of vehicles. The static and kinematic experiment results show that this method achieves 20 Hz or even higher rate output in

  9. Correlation of anomalous write error rates and ferromagnetic resonance spectrum in spin-transfer-torque-magnetic-random-access-memory devices containing in-plane free layers

    SciTech Connect

    Evarts, Eric R.; Rippard, William H.; Pufall, Matthew R.; Heindl, Ranko

    2014-05-26

    In a small fraction of magnetic-tunnel-junction-based magnetic random-access memory devices with in-plane free layers, the write-error rates (WERs) are higher than expected on the basis of the macrospin or quasi-uniform magnetization reversal models. In devices with increased WERs, the product of effective resistance and area, tunneling magnetoresistance, and coercivity do not deviate from typical device properties. However, the field-swept, spin-torque, ferromagnetic resonance (FS-ST-FMR) spectra with an applied DC bias current deviate significantly for such devices. With a DC bias of 300 mV (producing 9.9 × 10⁶ A/cm²) or greater, these anomalous devices show an increase in the fraction of the power present in FS-ST-FMR modes corresponding to higher-order excitations of the free-layer magnetization. As much as 70% of the power is contained in higher-order modes compared to ≈20% in typical devices. Additionally, a shift in the uniform-mode resonant field that is correlated with the magnitude of the WER anomaly is detected at DC biases greater than 300 mV. These differences in the anomalous devices indicate a change in the micromagnetic resonant mode structure at high applied bias.

  10. Evaluation by Monte Carlo simulations of the power limits and bit-error rate degradation in wavelength-division multiplexing networks caused by four-wave mixing.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2004-09-10

    Fiber nonlinearities can degrade the performance of a wavelength-division multiplexing optical network. For high input power, a low chromatic dispersion coefficient, or low channel spacing, the most severe penalties are due to four-wave mixing (FWM). To compute the bit-error rate that is due to FWM noise, one must evaluate accurately the probability-density functions (pdf) of both the space and the mark states. An accurate evaluation of the pdf of the FWM noise in the space state is given, for the first time to the authors' knowledge, by use of Monte Carlo simulations. Additionally, it is shown that the pdf in the mark state is not symmetric as had been assumed in previous studies. Diagrams are presented that permit estimation of the pdf, given the number of channels in the system. The accuracy of the previous models is also investigated, and finally the results of this study are used to estimate the power limits of a wavelength-division multiplexing system. PMID:15468703
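
    The core of such a Monte Carlo evaluation is summing many FWM products with random phases onto the space and mark fields and histogramming the received intensity. The sketch below is a generic illustration with hypothetical amplitudes, not the paper's system model.

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_fwm = 100_000, 24      # number of FWM products on the channel
        a_fwm = 0.03                       # field amplitude per product (hypothetical)

        phases = rng.uniform(0.0, 2.0 * np.pi, (n_trials, n_fwm))
        fwm = a_fwm * np.exp(1j * phases).sum(axis=1)

        space = np.abs(0.0 + fwm)**2       # "0" bit: FWM noise alone
        mark = np.abs(1.0 + fwm)**2        # "1" bit: signal plus FWM beat noise

        pdf_space, _ = np.histogram(space, bins=200, density=True)
        pdf_mark, _ = np.histogram(mark, bins=200, density=True)   # visibly asymmetric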

  11. Evaluation by Monte Carlo simulations of the power limits and bit-error rate degradation in wavelength-division multiplexing networks caused by four-wave mixing.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2004-09-10

    Fiber nonlinearities can degrade the performance of a wavelength-division multiplexing optical network. For high input power, a low chromatic dispersion coefficient, or low channel spacing, the most severe penalties are due to four-wave mixing (FWM). To compute the bit-error rate that is due to FWM noise, one must evaluate accurately the probability-density functions (pdf) of both the space and the mark states. An accurate evaluation of the pdf of the FWM noise in the space state is given, for the first time to the authors' knowledge, by use of Monte Carlo simulations. Additionally, it is shown that the pdf in the mark state is not symmetric as had been assumed in previous studies. Diagrams are presented that permit estimation of the pdf, given the number of channels in the system. The accuracy of the previous models is also investigated, and finally the results of this study are used to estimate the power limits of a wavelength-division multiplexing system.

  12. Experimentally simulating high-rate behaviour: rate and temperature effects in polycarbonate and PMMA

    PubMed Central

    Kendall, M. J.; Siviour, C. R.

    2014-01-01

    This paper presents results from applying a recently developed technique for experimentally simulating the high-rate deformation response of polymers. The technique, which uses low strain rate experiments with temperature profiles to replicate high-rate behaviour, is here applied to two amorphous polymers, polymethylmethacrylate (PMMA) and polycarbonate, thereby complementing previously obtained data from plasticized polyvinyl chloride. The paper presents comparisons of the mechanical data obtained in the simulation, as opposed to those observed under high-rate loading. Discussion of these data, and the temperature profile required to produce them, gives important information about yield and post-yield behaviour in these materials. PMID:24711491

  13. Managing Errors to Reduce Accidents in High Consequence Networked Information Systems

    SciTech Connect

    Ganter, J.H.

    1999-02-01

    Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.

  14. Error mechanism analyses of an ultra-precision stage for high speed scan motion over a large stroke

    NASA Astrophysics Data System (ADS)

    Wang, Shaokai; Tan, Jiubin; Cui, Jiwen

    2015-02-01

    The reticle stage (RS) is designed to perform scan motion at high speed with nanometer-scale accuracy over a large stroke. Compared with the allowable scan accuracy of a few nanometers, errors caused by any internal or external disturbances are critical and must not be ignored. In this paper, the RS is first introduced in terms of mechanical structure, forms of motion, and control method. Based on that, the mechanisms by which disturbances transfer to the final servo-related error in the scan direction are analyzed, including feedforward error, coupling between the large stroke stage (LS) and the short stroke stage (SS), and movement of the measurement reference. In particular, different forms of coupling between SS and LS are discussed in detail. Following this theoretical analysis, the contributions of these disturbances to the final error are simulated numerically: the residual positioning error caused by feedforward error in the acceleration process is about 2 nm after the settling time, that caused by coupling between SS and LS about 2.19 nm, and that caused by movement of the measurement reference about 0.6 nm.

  15. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    PubMed Central

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol where one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing. PMID:27452275

  16. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case.

    PubMed

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol where one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing. PMID:27452275

  17. Combinatorial FSK modulation for power-efficient high-rate communications

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Budinger, James M.; Vanderaar, Mark J.

    1991-01-01

    Deep-space and satellite communications systems must be capable of conveying high-rate data accurately with low transmitter power, often through dispersive channels. A class of noncoherent Combinatorial Frequency Shift Keying (CFSK) modulation schemes is investigated which address these needs. The bit error rate performance of this class of modulation formats is analyzed and compared to the more traditional modulation types. Candidate modulator, demodulator, and digital signal processing (DSP) hardware structures are examined in detail. System-level issues are also discussed.
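
    One common way to set up combinatorial FSK is to key k of M tones per symbol, giving an alphabet of C(M, k) waveforms and therefore floor(log₂ C(M, k)) bits per symbol. The configurations below are hypothetical; the abstract does not give the paper's parameters.

        from math import comb, floor, log2

        def cfsk_bits_per_symbol(m_tones, k_active):
            # k of M tones are keyed per symbol; the alphabet size is C(M, k).
            return floor(log2(comb(m_tones, k_active)))

        for m, k in [(8, 2), (16, 4), (32, 4)]:    # hypothetical configurations
            print(f"M={m}, k={k}: {cfsk_bits_per_symbol(m, k)} bits/symbol")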

  18. High-shear-rate capillary viscometer for inkjet inks

    SciTech Connect

    Wang, Xi; Carr, Wallace W.; Bucknall, David G.; Morris, Jeffrey F.

    2010-06-15

    A capillary viscometer developed to measure the apparent shear viscosity of inkjet inks at the high apparent shear rates encountered during inkjet printing is described. By using the Weissenberg-Rabinowitsch equation, true shear viscosity versus true shear rate is obtained. The device is comprised of a constant-flow generator, a static pressure monitoring device, a high precision submillimeter capillary die, and a high stiffness flow path. The system, which is calibrated using standard Newtonian low-viscosity silicone oil, can be easily operated and maintained. Results for measurement of the shear-rate-dependent viscosity of carbon-black pigmented water-based inkjet inks at shear rates up to 2×10⁵ s⁻¹ are discussed. The Cross model was found to closely fit the experimental data. Inkjet ink samples with similar low-shear-rate viscosities exhibited significantly different shear viscosities at high shear rates depending on particle loading.
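
    The Weissenberg-Rabinowitsch correction referred to above converts the apparent (Newtonian) wall shear rate into the true one: γ̇_w = (γ̇_a/4)(3 + d ln Q/d ln τ_w). A sketch with a synthetic power-law flow curve (all dimensions hypothetical):

        import numpy as np

        # True wall shear rate: gamma_w = (gamma_a / 4) * (3 + d ln Q / d ln tau_w),
        # with gamma_a = 4 Q / (pi R^3) and tau_w = dP * R / (2 L).
        R, L = 0.25e-3, 20e-3                  # die radius and length, m (hypothetical)
        dP = np.logspace(4, 7, 20)             # pressure drops, Pa (synthetic sweep)
        Q = 1e-12 * dP**1.4                    # synthetic shear-thinning flow curve

        tau_w = dP * R / (2.0 * L)
        gamma_a = 4.0 * Q / (np.pi * R**3)
        slope = np.gradient(np.log(Q), np.log(tau_w))   # = 1.4 for this synthetic data
        gamma_w = 0.25 * gamma_a * (3.0 + slope)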

  19. High-shear-rate capillary viscometer for inkjet inks

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Carr, Wallace W.; Bucknall, David G.; Morris, Jeffrey F.

    2010-06-01

    A capillary viscometer developed to measure the apparent shear viscosity of inkjet inks at the high apparent shear rates encountered during inkjet printing is described. By using the Weissenberg-Rabinowitsch equation, true shear viscosity versus true shear rate is obtained. The device is comprised of a constant-flow generator, a static pressure monitoring device, a high precision submillimeter capillary die, and a high stiffness flow path. The system, which is calibrated using standard Newtonian low-viscosity silicone oil, can be easily operated and maintained. Results for measurement of the shear-rate-dependent viscosity of carbon-black pigmented water-based inkjet inks at shear rates up to 2×10⁵ s⁻¹ are discussed. The Cross model was found to closely fit the experimental data. Inkjet ink samples with similar low-shear-rate viscosities exhibited significantly different shear viscosities at high shear rates depending on particle loading.

  20. The Rate of Return to the High/Scope Perry Preschool Program

    PubMed Central

    Heckman, James J.; Moon, Seong Hyeok; Pinto, Rodrigo; Savelyev, Peter A.; Yavitz, Adam

    2010-01-01

    This paper estimates the rate of return to the High/Scope Perry Preschool Program, an early intervention program targeted toward disadvantaged African-American youth. Estimates of the rate of return to the Perry program are widely cited to support the claim of substantial economic benefits from preschool education programs. Previous studies of the rate of return to this program ignore the compromises that occurred in the randomization protocol. They do not report standard errors. The rates of return estimated in this paper account for these factors. We conduct an extensive analysis of sensitivity to alternative plausible assumptions. Estimated annual social rates of return generally fall between 7–10 percent, with most estimates substantially lower than those previously reported in the literature. However, returns are generally statistically significantly different from zero for both males and females and are above the historical return on equity. Estimated benefit-to-cost ratios support this conclusion. PMID:21804653

  1. High Graduate Unemployment Rate and Taiwanese Undergraduate Education

    ERIC Educational Resources Information Center

    Wu, Chih-Chun

    2011-01-01

    An expansion in higher education in combination with the recent global economic recession has resulted in a high college graduate unemployment rate in Taiwan. This study investigates how the high unemployment rate and financial constraints caused by economic cutbacks have shaped undergraduates' class choices, job needs, and future income…

  2. HIGH-RATE DISINFECTION TECHNIQUES FOR COMBINED SEWER OVERFLOW

    EPA Science Inventory

    This paper presents high-rate disinfection technologies for combined sewer overflow (CSO). The high-rate disinfection technologies of interest are: chlorination/dechlorination, ultraviolet light irradiation (UV), chlorine dioxide (ClO2), ozone (O3), peracetic acid (CH3COOOH)...

  3. High speed imaging for material parameters calibration at high strain rate

    NASA Astrophysics Data System (ADS)

    Sasso, M.; Fardmoshiri, M.; Mancini, E.; Rossi, M.; Cortese, L.

    2016-05-01

    To describe material behaviour at high strain rates, dynamic experimental tests are necessary, and appropriate constitutive models must be calibrated accordingly. One way to achieve this is through an inverse procedure based on the minimization of an error function calculated as the difference between experimental data and numerical data from finite element (FE) analysis. This approach, widely used in the literature, carries a heavy computational cost because the minimization requires, for each variation of the material model parameters, the execution of FE calculations. In this work, a faster yet effective calibration procedure is studied. Experimental tests were performed on the aluminium alloy AA6061-T6 by means of a direct tension-compression split Hopkinson bar. A fast camera with a resolution of 192 × 128 pixels and a sample rate of 100,000 fps captured images of the deformation process undergone by the samples during the tests. The sample profile, obtained after image binarization and processing, was post-processed to derive the deformation history; from this, the true stress and strain were calculated and the inverse calibration was carried out by analytical computation. The results of this method were compared with those from the finite element approach.
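
    The analytical route described above (image profile to strains, strains to stresses, then a parameter fit) can be sketched as follows. This is a simplified illustration rather than the authors' procedure: it assumes a cylindrical gauge section whose radius history r(t) comes from the binarized images, a measured force history F(t), and a hypothetical power-law hardening model fitted by least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def true_stress_strain(F, r, r0):
    """Force F (N) and current radius r (m) of a cylindrical sample to
    true stress/strain, assuming volume conservation and a uniform section."""
    eps = 2.0 * np.log(r0 / r)      # true (logarithmic) axial strain
    sig = F / (np.pi * r**2)        # true stress on the current cross-section
    return eps, sig

def hardening(eps, A, B, n):
    """Hypothetical power-law hardening used as the model to calibrate."""
    return A + B * eps**n

# eps, sig = true_stress_strain(F, r, r0)    # histories from the fast camera
# popt, _ = curve_fit(hardening, eps, sig, p0=(3e8, 5e8, 0.3))
```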

  4. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

    2006-01-01

    We present a flow-down error analysis from the radar system to topographic height errors for bi-static single-pass SAR interferometry for a satellite tandem pair. Because the baseline length and baseline orientation evolve spatially and temporally under orbital dynamics, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations of height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, and slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and X-band SAR. Results from our model indicate that global DTED level 3 can be achieved.

  5. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
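
    The burst and gap statistics in the first objective amount to run-length statistics of a binary error-flag stream. A small sketch of that computation with hypothetical input (the project's actual flags are byte-level CIRC error flags from the player hardware):

```python
import numpy as np

def burst_gap_stats(error_flags):
    """Run lengths of a binary error-flag stream (1 = byte in error).

    Returns (burst lengths, gap lengths) in order of occurrence."""
    flags = np.asarray(error_flags, dtype=int)
    edges = np.flatnonzero(np.diff(flags)) + 1   # indices where the flag changes
    runs = np.split(flags, edges)
    bursts = np.array([len(r) for r in runs if r[0] == 1])
    gaps = np.array([len(r) for r in runs if r[0] == 0])
    return bursts, gaps

# Example: burst/gap histograms for a simulated 0.1% byte error channel
# bursts, gaps = burst_gap_stats(np.random.rand(10**6) < 1e-3)
```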

  6. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Because the watermarking is reversible, the information embedded in audio signals can be extracted while the original audio data are recovered losslessly. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control.
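
    For readers unfamiliar with prediction error expansion (PEE), the core embed/extract arithmetic looks roughly like this. A toy sketch: it uses the previous sample as the predictor, whereas the paper optimizes the prediction coefficients with differential evolution, and it omits the location map and histogram shifting needed to handle overflow. It assumes every sample after the first carries one bit.

```python
def pee_embed(x, bits):
    """Embed one bit per sample by expanding the prediction error e -> 2e + b."""
    y = list(x)
    for i in range(1, min(len(x), len(bits) + 1)):
        pred = y[i - 1]                  # toy predictor: previous (stego) sample
        e = x[i] - pred
        y[i] = pred + 2 * e + bits[i - 1]
    return y

def pee_extract(y):
    """Recover the bits and the original samples exactly (reversibility)."""
    x = list(y)
    bits = []
    for i in range(1, len(y)):
        pred = y[i - 1]                  # same predictor state as the embedder
        e2 = y[i] - pred                 # expanded error 2e + b
        bits.append(e2 & 1)
        x[i] = pred + (e2 >> 1)          # undo the expansion
    return x, bits

# x = [100, 102, 101, 105]; y = pee_embed(x, [1, 0, 1])
# assert pee_extract(y) == (x, [1, 0, 1])
```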

  7. High Capacity Reversible Watermarking for Audio by Histogram Shifting and Predicted Error Expansion

    PubMed Central

    Wang, Fei; Chen, Zuo

    2014-01-01

    Because the watermarking is reversible, the information embedded in audio signals can be extracted while the original audio data are recovered losslessly. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control. PMID:25097883

  8. HIGH-RATE FORMABILITY OF HIGH-STRENGTH ALUMINUM ALLOYS: A STUDY ON OBJECTIVITY OF MEASURED STRAIN AND STRAIN RATE

    SciTech Connect

    Upadhyay, Piyush; Rohatgi, Aashish; Stephens, Elizabeth V.; Davies, Richard W.; Catalini, David

    2015-02-18

    Al alloy AA7075 sheets were deformed at room temperature at strain rates exceeding 1000 s^-1 using the electrohydraulic forming (EHF) technique. A method that combines high-speed imaging and digital image correlation, developed at Pacific Northwest National Laboratory, is used to investigate the high strain rate deformation behavior of AA7075. For strain-rate-sensitive materials, the ability to accurately model high-rate deformation behavior depends upon the ability to accurately quantify the strain rate that the material is subjected to. This work investigates the objectivity of software-calculated strain and strain rate by varying different parameters within commonly used, commercially available digital image correlation software. Except very close to the time of crack opening, the calculated strain and strain rates are consistent and independent of the adjustable parameters of the software.

  9. A software solution to estimate the SEU-induced soft error rate for systems implemented on SRAM-based FPGAs

    NASA Astrophysics Data System (ADS)

    Zhongming, Wang; Zhibin, Yao; Hongxia, Guo; Min, Lu

    2011-05-01

    SRAM-based FPGAs are very susceptible to radiation-induced Single-Event Upsets (SEUs) in space applications. The failure mechanism in an FPGA's configuration memory differs from that in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect. Fault injection appears to meet this requirement. In this paper, we propose a new methodology to analyze the soft errors in SRAM-based FPGAs. This method is based on an in-depth understanding of the device architecture and of the failure mechanisms induced by configuration upsets. The developed programs read in the placed-and-routed netlist, search for critical logic nodes and paths that may destroy the circuit's topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to obtain the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach.

  10. Experimental investigation on the high chip rate of 2D incoherent optical CDMA system

    NASA Astrophysics Data System (ADS)

    Su, Guorui; Wang, Rong; Pu, Tao; Fang, Tao; Zheng, Jilin; Zhu, Huatao; Wu, Weijiang

    2015-08-01

    An innovative approach to realising a high chip rate in an OCDMA transmission system is proposed and experimentally investigated; the high chip rate is achieved with a 2-D wavelength-hopping time-spreading en/decoder based on a supercontinuum light source. The source used in the experiment is generated by a highly nonlinear optical fiber (HNLF), an Erbium-doped fiber amplifier (EDFA) with an output power of 26 dBm, and a distributed feedback laser diode operating in the gain-switched state. The span and flatness of the light source are 20 nm and 3 dB, respectively, after equalization by a wavelength selective switch (WSS). The wavelength-hopping time-spreading coder, consisting of the WSS and delay lines, can be tuned over 20 nm in wavelength and 400 ps in time. The experimental results show that a chip rate of 500 Gchip/s can be achieved at a data rate of 2.5 Gbit/s, while keeping the bit error rate below the forward error correction limit after 40 km of transmission.

  11. Correction of beam errors in high power laser diode bars and stacks

    NASA Astrophysics Data System (ADS)

    Monjardin, J. F.; Nowak, K. M.; Baker, H. J.; Hall, D. R.

    2006-09-01

    The beam errors of an 11 bar laser diode stack fitted with fast-axis collimator lenses have been corrected by a single refractive plate, produced by laser cutting and polishing. The so-called smile effect is virtually eliminated and collimator aberration greatly reduced, improving the fast-axis beam quality of each bar by a factor of up to 5. The single corrector plate for the whole stack ensures that the radiation from all the laser emitters is parallel to a common axis. Beam-pointing errors of the bars have been reduced to below 0.7 mrad.

  12. Correction of beam errors in high power laser diode bars and stacks.

    PubMed

    Monjardin, J F; Nowak, K M; Baker, H J; Hall, D R

    2006-09-01

    The beam errors of an 11 bar laser diode stack fitted with fast-axis collimator lenses have been corrected by a single refractive plate, produced by laser cutting and polishing. The so-called smile effect is virtually eliminated and collimator aberration greatly reduced, improving the fast-axis beam quality of each bar by a factor of up to 5. The single corrector plate for the whole stack ensures that the radiation from all the laser emitters is parallel to a common axis. Beam-pointing errors of the bars have been reduced to below 0.7 mrad.

  13. Line-Bisecting Performance in Highly Skilled Athletes: Does Preponderance of Rightward Error Reflect Unique Cortical Organization and Functioning?

    ERIC Educational Resources Information Center

    Carlstedt, Roland A.

    2004-01-01

    A line-bisecting test was administered to 250 highly skilled right-handed athletes and a control group of 60 right-handed age matched non-athletes. Results revealed that athletes made overwhelmingly more rightward errors than non-athletes, who predominantly bisected lines to the left of the veridical center. These findings were interpreted in the…

  14. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Burr, Tom; Favalli, Andrea; Nicholson, Andrew

    2016-03-01

    The declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar - Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
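
    The Padé equation itself is not reproduced in the abstract; a representative Padé(1,1) calibration curve and the two fitting routes being compared might look like the sketch below (hypothetical form and starting values). The paper's finding is that, because the measurement errors sit in the predictor R, the untransformed nonlinear fit is preferable.

```python
import numpy as np
from scipy.optimize import curve_fit

def pade(R, a, b):
    """Representative Pade(1,1) calibration: 235U linear density as a
    function of the measured coincidence rate R (form assumed here)."""
    return a * R / (1.0 + b * R)

# Nonlinear route, fitted on the original scale:
# popt, pcov = curve_fit(pade, R_meas, rho_decl, p0=(1.0, 1e-3))

def linearized_fit(R, rho):
    """Linear route: 1/rho = 1/(a*R) + b/a is a straight line in 1/R."""
    slope, intercept = np.polyfit(1.0 / R, 1.0 / rho, 1)
    a = 1.0 / slope
    return a, intercept * a     # (a, b)
```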

  15. High repetition rate optical switch using an electroabsorption modulator in TOAD configuration

    NASA Astrophysics Data System (ADS)

    Huo, Li; Yang, Yanfu; Lou, Caiyun; Gao, Yizhi

    2007-07-01

    A novel optical switch featuring a high repetition rate, a short switching window, and a high contrast ratio is proposed and demonstrated for the first time by placing an electroabsorption modulator (EAM) in a terahertz optical asymmetric demultiplexer (TOAD) configuration. The feasibility and main characteristics of the switch are investigated by numerical simulations and experiments. With this EAM-based TOAD, error-free return-to-zero signal wavelength conversion with a 0.62 dB power penalty at 20 Gbit/s is demonstrated.

  16. Miniature High Stability High Temperature Space Rated Blackbody Radiance Source

    NASA Astrophysics Data System (ADS)

    Jones, J. A.; Beswick, A. G.

    1987-09-01

    This paper presents the design and test performance of a conical cavity type blackbody radiance source that will meet the requirements of the Halogen Occultation Experiment (HALOE) on the NASA Upper Atmospheric Research Satellite program (UARS). Since a radiance source meeting the requirements of this experiment was unavailable in the commercial market, a development effort was undertaken by the HALOE Project. The blackbody radiance source operates in vacuum at 1300 K ± 0.5 K over any 15-minute interval, uses less than 7.5 watts of power, maintains a 49°C outer case temperature, and fits within the 2.5 × 2.5 × 3.0 inch envelope allocated inside the HALOE instrument. Also, the unit operates in air, during ground testing of the HALOE instrument, where it uses 17 watts of power with an outer case temperature of 66°C. The thrust of this design effort was to minimize the heat losses, in order to keep the power usage under 7.5 watts, and to minimize the amount of silica in the materials. Silica in the presence of the platinum heater winding used in this design would cause the platinum to erode, changing the operating temperature set-point. The design required the development of fabrication techniques which would provide very small, close tolerance parts from extremely difficult-to-machine materials. Also, a space rated ceramic core and unique, low thermal conductance, ceramic-to-metal joint was developed, tested and incorporated in this design. The completed flight qualification hardware has undergone performance, environmental and life testing. The design configuration and test results are discussed in detail in this paper.

  17. Uncovering high-strain rate protection mechanism in nacre.

    PubMed

    Huang, Zaiwang; Li, Haoze; Pan, Zhiliang; Wei, Qiuming; Chao, Yuh J; Li, Xiaodong

    2011-01-01

    Under high-strain-rate compression (strain rate approximately 10^3 s^-1), nacre (mother-of-pearl) exhibits surprisingly high fracture strength vis-à-vis under quasi-static loading (strain rate 10^-3 s^-1). Nevertheless, the underlying mechanism responsible for such sharply different behaviors in these two loading modes remains completely unknown. Here we report a new deformation mechanism, adopted by nacre, the best-ever natural armor material, to protect itself against predatory penetrating impacts. It involves the emission of partial dislocations and the onset of deformation twinning that operate in a well-concerted manner to contribute to the increased high-strain-rate fracture strength of nacre. Our findings unveil that Mother Nature delicately uses an ingenious strain-rate-dependent stiffening mechanism with a purpose to fight against foreign attacks. These findings should serve as critical design guidelines for developing engineered body armor materials. PMID:22355664

  18. Uncovering high-strain rate protection mechanism in nacre

    PubMed Central

    Huang, Zaiwang; Li, Haoze; Pan, Zhiliang; Wei, Qiuming; Chao, Yuh J.; Li, Xiaodong

    2011-01-01

    Under high-strain-rate compression (strain rate ∼10^3 s^-1), nacre (mother-of-pearl) exhibits surprisingly high fracture strength vis-à-vis under quasi-static loading (strain rate 10^-3 s^-1). Nevertheless, the underlying mechanism responsible for such sharply different behaviors in these two loading modes remains completely unknown. Here we report a new deformation mechanism, adopted by nacre, the best-ever natural armor material, to protect itself against predatory penetrating impacts. It involves the emission of partial dislocations and the onset of deformation twinning that operate in a well-concerted manner to contribute to the increased high-strain-rate fracture strength of nacre. Our findings unveil that Mother Nature delicately uses an ingenious strain-rate-dependent stiffening mechanism with a purpose to fight against foreign attacks. These findings should serve as critical design guidelines for developing engineered body armor materials. PMID:22355664

  19. High rate straining of tantalum and copper

    NASA Astrophysics Data System (ADS)

    Armstrong, R. W.; Zerilli, F. J.

    2010-12-01

    High strain rate measurements reported recently for several tantalum and copper crystal/polycrystal materials are shown to follow dislocation-mechanics-based constitutive relations: at lower strain rates, the imposed plastic deformation is controlled by dislocation velocity; at higher rates, control transitions to nano-scale dislocation generation by twinning or slip. For copper, added-on slip dislocation displacements from the newly generated dislocations may also need to be accounted for.
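
    The dislocation-mechanics constitutive relations referred to here are commonly written in the Zerilli-Armstrong form (Zerilli is the second author). A hedged sketch of the usual bcc and fcc variants; the constants C0..C5 and n are material fits not given in the abstract:

```python
import numpy as np

def za_bcc(eps, eps_dot, T, C0, C1, C3, C4, C5, n):
    """Zerilli-Armstrong flow stress for bcc metals such as tantalum:
    the thermal-activation term is independent of strain.
    eps: plastic strain, eps_dot: strain rate (1/s), T: temperature (K)."""
    return C0 + C1 * np.exp(-C3 * T + C4 * T * np.log(eps_dot)) + C5 * eps**n

def za_fcc(eps, eps_dot, T, C0, C2, C3, C4):
    """Zerilli-Armstrong flow stress for fcc metals such as copper:
    thermal activation is coupled to strain hardening."""
    return C0 + C2 * np.sqrt(eps) * np.exp(-C3 * T + C4 * T * np.log(eps_dot))
```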

  20. Effects of spectral discrimination in high-spectral-resolution lidar on the retrieval errors for atmospheric aerosol optical properties.

    PubMed

    Cheng, Zhongtao; Liu, Dong; Luo, Jing; Yang, Yongying; Su, Lin; Yang, Liming; Huang, Hanlu; Shen, Yibing

    2014-07-10

    This paper presents a detailed analysis of the effects of spectral discrimination on the retrieval errors for atmospheric aerosol optical properties in high-spectral-resolution lidar (HSRL). To the best of our knowledge, this is the first study that focuses on this topic comprehensively, and our goal is to provide some heuristic guidelines for the design of the spectral discrimination filter in HSRL. We first introduce a theoretical model for retrieval error evaluation of an HSRL instrument with a general three-channel configuration. The model takes into account only the error sources related to the spectral discrimination parameters; other error sources not associated with these parameters are excluded on purpose. Monte Carlo (MC) simulations are performed to validate the correctness of the theoretical model. Results from the model and the MC simulations agree very well, and they illustrate an important, though not widely appreciated, fact: a large molecular transmittance and a large spectral discrimination ratio (SDR, i.e., the ratio of the molecular transmittance to the aerosol transmittance) are beneficial to the retrieval accuracy. More specifically, we find that a large SDR can reduce retrieval errors conspicuously for the atmosphere at low altitudes, while its effect on the retrieval at high altitudes is very limited. A large molecular transmittance contributes to good retrieval accuracy everywhere, particularly at high altitudes, where the signal-to-noise ratio is small. Since the molecular transmittance and the SDR are often trade-offs, we suggest choosing a moderate SDR in favor of higher molecular transmittance, rather than an unnecessarily high SDR, when designing the spectral discrimination filter. These conclusions are expected to be applicable to most HSRL instruments with configurations similar to the one discussed here.

  1. How Did Successful High Schools Improve Their Graduation Rates?

    ERIC Educational Resources Information Center

    Robertson, Janna Siegel; Smith, Robert W.; Rinka, Jason

    2016-01-01

    The researchers surveyed 23 North Carolina high schools that had markedly improved their graduation rates over the past five years. The administrators reported on the dropout prevention practices and programs to which they attributed their improved graduation rates. The majority of schools reported policy changes, especially with suspension. The…

  2. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    SciTech Connect

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-08-14

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.

  3. General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.

    2011-01-01

    The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model

  4. High density bit transition requirements versus the effects on BCH error correcting code. [bit synchronization

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contained a recommended circuit approach, specified the pseudo-random (PN) sequence to be used, and detailed the properties of the sequence. Calculations showing the probability of failing to meet the required transition density were included. A computer simulation of the data stream and PN cover sequence was provided. All worst-case situations were simulated, and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) Encoder/Decoder design was constructed and demonstrated. The demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.
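
    A PN cover sequence of the kind described works by XORing the data stream with a pseudo-random bit stream, which statistically guarantees transitions even in long runs of ones or zeros; XORing with the same sequence at the demultiplexer restores the data. A generic LFSR sketch; the specific polynomial and sequence specified in the report are not reproduced here:

```python
def lfsr_sequence(taps, seed, length):
    """Generic Fibonacci LFSR PN generator. `taps` are 0-based register
    indices XORed into the feedback; maximal-length taps give an m-sequence."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])             # output the last register bit
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]         # shift; feedback enters at the front
    return out

def cover(data_bits, pn_bits):
    """XOR data with the PN cover sequence (apply twice to recover)."""
    return [d ^ p for d, p in zip(data_bits, pn_bits)]

# pn = lfsr_sequence(taps=[0, 3], seed=[1, 0, 0, 1, 0], length=1000)
```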

  5. High Heating Rates Affect Greatly the Inactivation Rate of Escherichia coli.

    PubMed

    Huertas, Juan-Pablo; Aznar, Arantxa; Esnoz, Arturo; Fernández, Pablo S; Iguaz, Asunción; Periago, Paula M; Palop, Alfredo

    2016-01-01

    The heat resistance of microorganisms can be affected by different influencing factors. Although the effect of heating rate has scarcely been explored by the scientific community, recent research has revealed its important effect on the thermal resistance of different species of vegetative bacteria. Heating rates described in the literature typically range from 1 to 20°C/min, but the impact of much higher heating rates is unclear. The aim of this research was to explore the effect of different heating rates, such as those currently achieved in the heat exchangers used in the food industry, on the heat resistance of Escherichia coli. A pilot-plant tubular heat exchanger and a Mastia thermoresistometer were used for this purpose. Results showed that fast heating rates had a deep impact on the thermal resistance of E. coli. Heating rates between 20 and 50°C/min were achieved in the heat exchanger, much slower than the roughly 20°C/s achieved in the thermoresistometer. In all cases, these high heating rates led to higher inactivation than expected: in the heat exchanger, for all the experiments performed, when the observed inactivation had reached about seven log cycles, the predictions estimated about one log cycle of inactivation; in the thermoresistometer the differences between observed and predicted values were more than 10 times greater, from 4.07 log cycles observed to 0.34 predicted at a flow rate of 70 mL/min and a maximum heating rate of 14.7°C/s. The impact of the heating rate on the level of inactivation achieved was quantified. These results point out the important effect that the heating rate has on the thermal resistance of E. coli: high heating rates result in additional sensitization to heat and therefore offer an effective food safety strategy in food processing. PMID:27563300

  6. High Heating Rates Affect Greatly the Inactivation Rate of Escherichia coli.

    PubMed

    Huertas, Juan-Pablo; Aznar, Arantxa; Esnoz, Arturo; Fernández, Pablo S; Iguaz, Asunción; Periago, Paula M; Palop, Alfredo

    2016-01-01

    The heat resistance of microorganisms can be affected by different influencing factors. Although the effect of heating rate has scarcely been explored by the scientific community, recent research has revealed its important effect on the thermal resistance of different species of vegetative bacteria. Heating rates described in the literature typically range from 1 to 20°C/min, but the impact of much higher heating rates is unclear. The aim of this research was to explore the effect of different heating rates, such as those currently achieved in the heat exchangers used in the food industry, on the heat resistance of Escherichia coli. A pilot-plant tubular heat exchanger and a Mastia thermoresistometer were used for this purpose. Results showed that fast heating rates had a deep impact on the thermal resistance of E. coli. Heating rates between 20 and 50°C/min were achieved in the heat exchanger, much slower than the roughly 20°C/s achieved in the thermoresistometer. In all cases, these high heating rates led to higher inactivation than expected: in the heat exchanger, for all the experiments performed, when the observed inactivation had reached about seven log cycles, the predictions estimated about one log cycle of inactivation; in the thermoresistometer the differences between observed and predicted values were more than 10 times greater, from 4.07 log cycles observed to 0.34 predicted at a flow rate of 70 mL/min and a maximum heating rate of 14.7°C/s. The impact of the heating rate on the level of inactivation achieved was quantified. These results point out the important effect that the heating rate has on the thermal resistance of E. coli: high heating rates result in additional sensitization to heat and therefore offer an effective food safety strategy in food processing.

  7. High Heating Rates Affect Greatly the Inactivation Rate of Escherichia coli

    PubMed Central

    Huertas, Juan-Pablo; Aznar, Arantxa; Esnoz, Arturo; Fernández, Pablo S.; Iguaz, Asunción; Periago, Paula M.; Palop, Alfredo

    2016-01-01

    The heat resistance of microorganisms can be affected by different influencing factors. Although the effect of heating rate has scarcely been explored by the scientific community, recent research has revealed its important effect on the thermal resistance of different species of vegetative bacteria. Heating rates described in the literature typically range from 1 to 20°C/min, but the impact of much higher heating rates is unclear. The aim of this research was to explore the effect of different heating rates, such as those currently achieved in the heat exchangers used in the food industry, on the heat resistance of Escherichia coli. A pilot-plant tubular heat exchanger and a Mastia thermoresistometer were used for this purpose. Results showed that fast heating rates had a deep impact on the thermal resistance of E. coli. Heating rates between 20 and 50°C/min were achieved in the heat exchanger, much slower than the roughly 20°C/s achieved in the thermoresistometer. In all cases, these high heating rates led to higher inactivation than expected: in the heat exchanger, for all the experiments performed, when the observed inactivation had reached about seven log cycles, the predictions estimated about one log cycle of inactivation; in the thermoresistometer the differences between observed and predicted values were more than 10 times greater, from 4.07 log cycles observed to 0.34 predicted at a flow rate of 70 mL/min and a maximum heating rate of 14.7°C/s. The impact of the heating rate on the level of inactivation achieved was quantified. These results point out the important effect that the heating rate has on the thermal resistance of E. coli: high heating rates result in additional sensitization to heat and therefore offer an effective food safety strategy in food processing. PMID:27563300

  8. High repetition rate (100 Hz), high peak power, high contrast femtosecond laser chain

    NASA Astrophysics Data System (ADS)

    Clady, R.; Tcheremiskine, V.; Azamoum, Y.; Ferré, A.; Charmasson, L.; Utéza, O.; Sentis, M.

    2016-03-01

    High-intensity femtosecond lasers are now routinely used to produce energetic particles and photons via interaction with solid targets. However, the relatively low conversion efficiency of such processes requires the use of high repetition rate lasers to increase the average power of the laser-induced secondary source. Furthermore, for high-intensity laser-matter interaction, a high temporal contrast is of primary importance, as the presence of a ns ASE (amplified spontaneous emission) pedestal and/or various prepulses may significantly affect the governing interaction processes by creating a pre-plasma on the target surface. We present the characterization of a laser chain based on Ti:Sa technology and the CPA technique with a unique combination of characteristics: a high repetition rate (100 Hz), a high peak power (>5 TW), and a high contrast ratio (ASE < 10^-10), obtained thanks to a specific design with three saturable absorbers inserted in the amplification chain. A deformable mirror placed before the focusing parabolic mirror should allow the beam to be focused almost at the diffraction limit. Under these conditions, a peak intensity above 10^19 W cm^-2 on target could be achieved at 100 Hz, allowing the study of relativistic optics at a high repetition rate.

  9. High strain rate loading of polymeric foams and solid plastics

    NASA Astrophysics Data System (ADS)

    Dick, Richard D.; Chang, Peter C.; Fourney, William L.

    2000-04-01

    The split-Hopkinson pressure bar (SHPB) provided a technique to determine the high strain rate response of low-density foams and of solid ABS and polypropylene plastics. These materials are used in the interior safety panels of automobiles and in crash test dummies. Because the foams have a very low impedance, polycarbonate bars were used to acquire strain rate data in the 100 to 1600 1/s range. An aluminum SHPB setup was used to obtain the solid plastics data, which covered strain rates of 1000 to 4000 1/s. The curves of peak strain rate versus peak stress for the foams over the test range studied indicate only a slight strain rate dependence. Peak strain rate versus peak stress curves for polypropylene show a strain rate dependence up to about 1500 1/s; above that rate the solid polypropylene shows no strain rate dependence. The ABS plastics are strain rate dependent up to 3500 1/s and then are independent at larger strain rates.

  10. A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.; Shaklan, Stuart B.

    2009-01-01

    This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.

  11. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  12. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    NASA Astrophysics Data System (ADS)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on space diversity reception, a binary phase-shift keying (BPSK) modulated free space optical (FSO) system over Málaga (M) fading channels is investigated in detail. For both independently and identically distributed and independently but non-identically distributed dual branches, analytical average bit error rate (ABER) expressions in terms of the Fox H-function are derived for the maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.
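
    The MC verification step can be sketched as below. Sampling the Málaga distribution itself is involved, so this illustration substitutes a unit-mean lognormal irradiance as a stand-in turbulence model; the combining rules and the BPSK conditional bit error rate Q(sqrt(2*gamma)) are the standard ones.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def sample_irradiance(n, sigma=0.3):
    """Stand-in turbulence model: unit-mean lognormal irradiance.
    A Malaga (M) sampler would replace this to reproduce the paper."""
    return rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=n)

def aber(avg_snr_db, combiner="MRC", n=10**6):
    """Monte Carlo ABER of subcarrier-intensity-modulated BPSK with
    dual i.i.d. branches."""
    g = 10.0 ** (avg_snr_db / 10.0)
    h1, h2 = sample_irradiance(n), sample_irradiance(n)
    if combiner == "MRC":
        snr = g * (h1**2 + h2**2)          # maximal ratio combining
    else:
        snr = g * (h1 + h2) ** 2 / 2.0     # equal gain combining
    return norm.sf(np.sqrt(2.0 * snr)).mean()
```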

  13. Solidification at the High and Low Rate Extreme

    SciTech Connect

    Meco, Halim

    2004-12-19

    The microstructures formed upon solidification are strongly influenced by the growth rates imposed on an alloy system. Depending on the characteristics of the solidification process, a wide range of growth rates is accessible. The prevailing solidification mechanisms, and thus the final microstructure of the alloy, are governed by these imposed growth rates. At the high rate extreme, for instance, one can access novel microstructures that are unattainable at low growth rates, while low growth rates can be utilized to study the intrinsic growth behavior of a given phase growing from the melt. Although the length scales associated with certain processes, such as capillarity and the diffusion of heat and solute, are different at the low and high rate extremes, the phenomena that govern the selection of a certain microstructural length scale or a growth mode are the same. Consequently, one can analyze the solidification phenomena at both high and low rates using the same governing principles. In this study, we examined microstructural control at both the low and high extremes. For the high rate extreme, the formation of crystalline products and the factors that control the microstructure during rapid solidification by free-jet melt spinning are examined in the Fe-Si-B system. Particular attention was given to the behavior of the melt pool at different quench-wheel speeds. Since the solidification process takes place within the melt-pool that forms on the rotating quench-wheel, we examined the influence of melt-pool dynamics on the nucleation and growth of crystalline solidification products and on glass formation. High-speed imaging of the melt-pool, analysis of ribbon microstructure, and measurement of ribbon geometry and surface character all indicate upper and lower limits for melt-spinning rates for which nucleation can be avoided and fully amorphous ribbons can be achieved. Comparison of the relevant time scales reveals that surface-controlled melt

  14. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  15. Authoritative School Climate and High School Dropout Rates

    ERIC Educational Resources Information Center

    Jia, Yuane; Konold, Timothy R.; Cornell, Dewey

    2016-01-01

    This study tested the association between school-wide measures of an authoritative school climate and high school dropout rates in a statewide sample of 315 high schools. Regression models at the school level of analysis used teacher and student measures of disciplinary structure, student support, and academic expectations to predict overall high…

  16. The perturbation paradigm modulates error-based learning in a highly automated task: outcomes in swallowing kinematics.

    PubMed

    Anderson, C; Macrae, P; Taylor-Kamara, I; Serel, S; Vose, A; Humbert, I A

    2015-08-15

    Traditional motor learning studies focus on highly goal-oriented, volitional tasks that often do not readily generalize to real-world movements. The goal of this study was to investigate how different perturbation paradigms alter error-based learning outcomes in a highly automated task. Swallowing was perturbed with neck surface electrical stimulation that opposes hyo-laryngeal elevation in 25 healthy adults (30 swallows: 10 preperturbation, 10 perturbation, and 10 postperturbation). The four study conditions were gradual-masked, gradual-unmasked, abrupt-masked, and abrupt-unmasked. Gradual perturbations increasingly intensified over time, while abrupt perturbations were sustained at the same high intensity. The masked conditions reduced cues about the presence/absence of the perturbation (pre- and postperturbation periods had low stimulation), but unmasked conditions did not (pre- and postperturbation periods had no stimulation). Only hyo-laryngeal range of motion measures had significant outcomes; no timing measure demonstrated learning. Systematic-error reduction occurred only during the abrupt-masked and abrupt-unmasked perturbations. Only the abrupt-masked perturbation caused aftereffects. In this highly automated task, gradual perturbations did not induce learning, in contrast to findings from some volitional, goal-oriented adaptation task studies. Furthermore, our subtle and brief adjustment of the stimulation paradigm (masked vs. unmasked) determined whether aftereffects were present. This suggests that, in the unmasked group, sensory predictions of a motor plan were quickly and efficiently modified to disengage error-based learning behaviors.

  17. Evolution of High Tooth Replacement Rates in Sauropod Dinosaurs

    PubMed Central

    Smith, Kathlyn M.; Fisher, Daniel C.; Wilson, Jeffrey A.

    2013-01-01

    Background Tooth replacement rate can be calculated in extinct animals by counting incremental lines of deposition in tooth dentin. Calculating this rate in several taxa allows for the study of the evolution of tooth replacement rate. Sauropod dinosaurs, the largest terrestrial animals that ever evolved, exhibited a diversity of tooth sizes and shapes, but little is known about their tooth replacement rates. Methodology/Principal Findings We present tooth replacement rate, formation time, crown volume, total dentition volume, and enamel thickness for two coexisting but distantly related and morphologically disparate sauropod dinosaurs Camarasaurus and Diplodocus. Individual tooth formation time was determined by counting daily incremental lines in dentin. Tooth replacement rate is calculated as the difference between the number of days recorded in successive replacement teeth. Each tooth family in Camarasaurus has a maximum of three replacement teeth, whereas each Diplodocus tooth family has up to five. Tooth formation times are about 1.7 times longer in Camarasaurus than in Diplodocus (315 vs. 185 days). Average tooth replacement rate in Camarasaurus is about one tooth every 62 days versus about one tooth every 35 days in Diplodocus. Despite slower tooth replacement rates in Camarasaurus, the volumetric rate of Camarasaurus tooth replacement is 10 times faster than in Diplodocus because of its substantially greater tooth volumes. A novel method to estimate replacement rate was developed and applied to several other sauropodomorphs that we were not able to thin section. Conclusions/Significance Differences in tooth replacement rate among sauropodomorphs likely reflect disparate feeding strategies and/or food choices, which would have facilitated the coexistence of these gigantic herbivores in one ecosystem. Early neosauropods are characterized by high tooth replacement rates (despite their large tooth size), and derived titanosaurs and diplodocoids independently

  18. High-performance micromachined vibratory rate- and rate-integrating gyroscopes

    NASA Astrophysics Data System (ADS)

    Cho, Jae Yoong

    The performance of vibratory micromachined gyroscopes has been continuously improving for the past two decades. However, to further improve the performance of MEMS gyroscopes in harsh environments, it is necessary to reduce their sensitivity to environmental parameters, including vibration and temperature change. In addition, conventional rate-mode MEMS gyroscopes have limited performance due to the tradeoff between resolution, bandwidth, and full-scale range. In this research, we aim to reduce vibration sensitivity by developing gyros that operate in the balanced mode. The balanced mode creates zero net momentum and reduces energy loss through an anchor. The gyro can differentially cancel measurement errors from external vibration along both sensor axes. The vibration sensitivity of the balanced-mode gyroscope, including structural imbalance from microfabrication, reduces as the absolute difference between the in-phase parasitic mode and operating mode frequencies increases. The parasitic sensing mode frequency is designed to be larger than the operating mode frequency to achieve both improved vibration insensitivity and shock resistivity. A single anchor is used in order to minimize thermoresidual stress change. We developed two gyroscopes based on these design principles. The Balanced Oscillating Gyro (BOG) is a quad-mass tuning-fork rate gyroscope. The relationship between gyro design and modal characteristics is studied extensively using the finite element method (FEM). The gyro is fabricated using the planar Si-on-glass (SOG) process with a device thickness of 100 μm. The BOG is evaluated using the first-generation analog interface circuitry. Under a frequency mismatch of 5 Hz between driving and sense modes, the angle random walk (ARW) is measured to be 0.44°/s/√Hz. The performance is limited by quadrature error and low-frequency noise in the circuit. The Cylindrical Rate-Integrating Gyroscope (CING) operates in whole-angle mode. The gyro is completely

  19. High rate and stable cycling of lithium metal anode

    DOE PAGES Beta

    Qian, Jiangfeng; Henderson, Wesley A.; Xu, Wu; Bhattacharya, Priyanka; Engelhard, Mark H.; Borodin, Oleg; Zhang, Jiguang

    2015-02-20

    Lithium (Li) metal is an ideal anode material for rechargeable batteries. However, dendritic Li growth and limited Coulombic efficiency (CE) during repeated Li deposition/stripping processes have prevented the application of this anode in rechargeable Li metal batteries, especially for use at high current densities. Here, we report that the use of highly concentrated electrolytes composed of ether solvents and the lithium bis(fluorosulfonyl)imide (LiFSI) salt enables the high rate cycling of a Li metal anode at high CE (up to 99.1 %) without dendrite growth. With 4 M LiFSI in 1,2-dimethoxyethane (DME) as the electrolyte, a Li|Li cell can be cycled at high rates (10 mA cm-2) for more than 6000 cycles with no increase in the cell impedance, and a Cu|Li cell can be cycled at 4 mA cm-2 for more than 1000 cycles with an average CE of 98.4%. These excellent high rate performances can be attributed to the increased solvent coordination and increased availability of Li+ concentration in the electrolyte. Lastly, further development of this electrolyte may lead to practical applications for Li metal anode in rechargeable batteries. The fundamental mechanisms behind the high rate ion exchange and stability of the electrolytes also shine light on the stability of other electrochemical systems.

  20. High rate and stable cycling of lithium metal anode

    SciTech Connect

    Qian, Jiangfeng; Henderson, Wesley A.; Xu, Wu; Bhattacharya, Priyanka; Engelhard, Mark H.; Borodin, Oleg; Zhang, Jiguang

    2015-02-20

    Lithium (Li) metal is an ideal anode material for rechargeable batteries. However, dendritic Li growth and limited Coulombic efficiency (CE) during repeated Li deposition/stripping processes have prevented the application of this anode in rechargeable Li metal batteries, especially for use at high current densities. Here, we report that the use of highly concentrated electrolytes composed of ether solvents and the lithium bis(fluorosulfonyl)imide (LiFSI) salt enables the high rate cycling of a Li metal anode at high CE (up to 99.1 %) without dendrite growth. With 4 M LiFSI in 1,2-dimethoxyethane (DME) as the electrolyte, a Li|Li cell can be cycled at high rates (10 mA cm-2) for more than 6000 cycles with no increase in the cell impedance, and a Cu|Li cell can be cycled at 4 mA cm-2 for more than 1000 cycles with an average CE of 98.4%. These excellent high rate performances can be attributed to the increased solvent coordination and increased availability of Li+ concentration in the electrolyte. Lastly, further development of this electrolyte may lead to practical applications for Li metal anode in rechargeable batteries. The fundamental mechanisms behind the high rate ion exchange and stability of the electrolytes also shine light on the stability of other electrochemical systems.

  1. High power, high efficiency millimeter wavelength traveling wave tubes for high rate communications from deep space

    NASA Technical Reports Server (NTRS)

    Dayton, James A., Jr.

    1991-01-01

    The high-power transmitters needed for high data rate communications from deep space will require a new class of compact, high efficiency traveling wave tubes (TWT's). Many of the recent TWT developments in the microwave frequency range are generically applicable to mm wave devices, in particular much of the technology of computer aided design, cathodes, and multistage depressed collectors. However, because TWT dimensions scale approximately with wavelength, mm wave devices will be physically much smaller with inherently more stringent fabrication tolerances and sensitivity to thermal dissipation.

  2. Voigt profile introduces optical depth dependent systematic errors - Detected in high resolution laboratory spectra of water

    NASA Astrophysics Data System (ADS)

    Birk, Manfred; Wagner, Georg

    2016-02-01

    The Voigt profile commonly used in radiative transfer modeling of Earth's and planets' atmospheres for remote sensing/climate modeling produces systematic errors so far not accounted for. Saturated lines are systematically too narrow when calculated from pressure broadening parameters based on the analysis of laboratory data with the Voigt profile. This is caused by line narrowing effects leading to systematically too small fitted broadening parameters when applying the Voigt profile. These effective values are still valid to model non-saturated lines with sufficient accuracy. Saturated lines dominated by the wings of the line profile are sufficiently accurately modeled with a Voigt profile with the correct broadening parameters and are thus systematically too narrow when calculated with the effective values. The systematic error was quantified by mid infrared laboratory spectroscopy of the water ν2 fundamental. Correct Voigt profile based pressure broadening parameters for saturated lines were 3-4% larger than the effective ones in the spectroscopic database. Impacts on remote sensing and climate modeling are expected. Combination of saturated and non-saturated lines in the spectroscopic analysis will quantify line narrowing with unprecedented precision.
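
    For reference, the Voigt profile being fitted is the convolution of a Doppler (Gaussian) kernel with a pressure-broadening (Lorentzian) kernel, conveniently evaluated with the complex Faddeeva function. A minimal sketch; fitting this shape to a line that is in fact narrowed (for example by speed-dependent or Dicke effects) is what yields the systematically small effective broadening parameters discussed above.

```python
import numpy as np
from scipy.special import wofz

def voigt(nu, nu0, sigma, gamma):
    """Area-normalized Voigt profile centered at nu0: Gaussian standard
    deviation sigma (Doppler) convolved with Lorentzian HWHM gamma
    (pressure broadening), via the Faddeeva function w(z)."""
    z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))
```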

  3. Study of High Strain Rate Response of Composites

    NASA Technical Reports Server (NTRS)

    Gilat, Amos

    2003-01-01

    The objective of the research was to continue the experimental study of the effect of strain rate on the mechanical response (deformation and failure) of epoxy resins and carbon fiber/epoxy matrix composites, and to initiate a study of the effects of temperature by developing an elevated-temperature test. The experimental data provide the information NASA scientists need to develop nonlinear, rate-dependent deformation and strength models for composites that can subsequently be used in design. This year's effort was directed toward testing the epoxy resin. Three types of epoxy resins were tested in tension and shear at strain rates ranging from 5 × 10^-5 to 1000 per second. Pilot shear experiments were performed at a high strain rate and an elevated temperature of 80 °C. The results show that the strain rate, the mode of loading, and the temperature all significantly affect the response of the epoxy.

  4. High rates of nitrogen fixation in equatorial upwelling region

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2013-05-01

    Surface waters in upwelling regions of the ocean are generally rich in nutrients. Scientists had thought that these areas would have low rates of nitrogen fixation because diazotrophs—microbes that convert nitrogen gas from the atmosphere into usable forms, such as ammonia—could use the nutrients in the water directly instead of having to fix nitrogen gas. However, researchers recently recorded high rates of nitrogen fixation in an upwelling region in the equatorial Atlantic.

  5. Calibration of the straightness and orthogonality error of a laser feedback high-precision stage using self-calibration methods

    NASA Astrophysics Data System (ADS)

    Kim, Dongmin; Kim, Kihyun; Park, Sang Hyun; Jang, Sangdon

    2014-12-01

    An ultra high-precision 3-DOF air-bearing stage is developed and calibrated in this study. The stage was developed for the transportation of a glass or wafer with x and y following errors in the nanometer regime. To apply the proposed stage to display or semiconductor fabrication equipment, x and y straightness errors should be at the sub-micron level and the x-y orthogonality error should be in the region of several arcseconds with strokes of several hundreds of mm. Our system was designed to move a 400 mm stroke on the x axis and a 700 mm stroke on the y axis. To do this, 1000 mm and 550 mm bar-type mirrors were adopted for real time Δx and Δy laser measurements and feedback control. In this system, with the laser wavelength variation and instability being kept to a minimum through environmental control, the straightness and orthogonality become purely dependent upon the surface shape of the bar mirrors. Compensation for the distortion of the bar mirrors is accomplished using a self-calibration method. The successful application of the method nearly eliminated the straightness and orthogonality errors of the stage, allowing their specifications to be fully satisfied. As a result, the straightness and orthogonality errors of the stage were successfully decreased from 4.4 μm to 0.8 μm and from 0.04° to 2.48 arcsec, respectively.

  6. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
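
    One plausible reading of the modular idea is sketched below in Python, with hypothetical 8-bit samples and a 2-bit payload: rather than overwriting the k low-order bits, each host value is nudged to the nearest value whose residue mod 2^k encodes the payload, reducing the expected embedding error relative to plain bit replacement. This is an illustrative reading, not the patented algorithm verbatim.

```python
# Hedged sketch: embedding k auxiliary bits per sample by modular arithmetic.
def embed_modular(host: int, payload: int, k: int, lo: int = 0, hi: int = 255) -> int:
    m = 1 << k
    delta = (payload - host) % m                      # distance to target residue
    candidates = [host + delta, host + delta - m, host + delta + m]
    valid = [v for v in candidates if lo <= v <= hi]  # stay in the sample range
    return min(valid, key=lambda v: abs(v - host))    # nearest valid value

def extract_modular(stego: int, k: int) -> int:
    return stego % (1 << k)                           # payload is the residue

sample, bits = 200, 0b11
s = embed_modular(sample, bits, k=2)
assert extract_modular(s, 2) == bits
print(sample, "->", s)   # 200 -> 199: error 1, vs. error 3 for bit replacement
```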

  7. THE AMERICAN HIGH SCHOOL GRADUATION RATE: TRENDS AND LEVELS*

    PubMed Central

    Heckman, James J.; LaFontaine, Paul A.

    2009-01-01

    This paper applies a unified methodology to multiple data sets to estimate both the levels and trends in U.S. high school graduation rates. We establish that (a) the true rate is substantially lower than widely used measures; (b) it peaked in the early 1970s; (c) majority/minority differentials are substantial and have not converged for 35 years; (d) lower post-1970 rates are not solely due to increasing immigrant and minority populations; (e) our findings explain part of the slowdown in college attendance and rising college wage premiums; and (f) widening graduation differentials by gender help explain increasing male-female college attendance gaps. PMID:20625528

  8. A review of reaction rates in high temperature air

    NASA Technical Reports Server (NTRS)

    Park, Chul

    1989-01-01

    The existing experimental data on the rate coefficients for the chemical reactions in nonequilibrium high temperature air are reviewed and collated, and a selected set of such values is recommended for use in hypersonic flow calculations. For the reactions of neutral species, the recommended values are chosen from the experimental data that existed mostly prior to 1970, and are slightly different from those used previously. For the reactions involving ions, the recommended rate coefficients are newly chosen from the experimental data obtained more recently. The reacting environment is assumed to lack thermal equilibrium, and the rate coefficients are expressed as a function of the controlling temperature, incorporating the recent multitemperature reaction concept.
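
    For readers implementing such rates, a minimal sketch of a two-temperature (Park-type) rate coefficient follows, in which the controlling temperature for dissociation is taken as the geometric mean of the translational and vibrational temperatures. The Arrhenius constants shown are illustrative placeholders, not values recommended by the review.

```python
# Minimal sketch of a two-temperature rate coefficient under thermal
# nonequilibrium; constants below are hypothetical N2-like placeholders.
import math

def park_rate(T: float, Tv: float, A: float, n: float, theta: float) -> float:
    Ta = math.sqrt(T * Tv)                 # controlling temperature, K
    return A * Ta**n * math.exp(-theta / Ta)

A, n, theta = 7.0e21, -1.6, 113200.0       # A in cm^3 mol^-1 s^-1 (illustrative)
print(park_rate(T=10000.0, Tv=6000.0, A=A, n=n, theta=theta))
```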

  9. High-Strain-Rate Compression Testing of Ice

    NASA Technical Reports Server (NTRS)

    Shazly, Mostafa; Prakash, Vikas; Lerch, Bradley A.

    2006-01-01

    In the present study a modified split Hopkinson pressure bar (SHPB) was employed to study the effect of strain rate on the dynamic material response of ice. Disk-shaped ice specimens with flat, parallel end faces were either provided by Dartmouth College (Hanover, NH) or grown at Case Western Reserve University (Cleveland, OH). The SHPB was adapted to perform tests at high strain rates in the range 60 to 1400/s at test temperatures of -10 and -30 C. Experimental results showed that the strength of ice increases with increasing strain rates and this occurs over a change in strain rate of five orders of magnitude. Under these strain rate conditions the ice microstructure has a slight influence on the strength, but it is much less than the influence it has under quasi-static loading conditions. End constraint and frictional effects do not influence the compression tests like they do at slower strain rates, and therefore the diameter/thickness ratio of the samples is not as critical. The strength of ice at high strain rates was found to increase with decreasing test temperatures. Ice has been identified as a potential source of debris to impact the shuttle; data presented in this report can be used to validate and/or develop material models for ice impact analyses for shuttle Return to Flight efforts.
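
    The data reduction behind such SHPB measurements is compact enough to sketch. The Python routine below implements the standard one-wave analysis under the usual uniform-stress assumption; all signals and dimensions are hypothetical inputs.

```python
# Minimal sketch of one-wave SHPB data reduction (uniform-stress assumption).
import numpy as np

def shpb_reduce(eps_r, eps_t, c0, E_bar, A_bar, A_spec, L_spec, dt):
    """eps_r, eps_t: reflected/transmitted bar strain signals (arrays);
    c0, E_bar: bar wave speed and modulus; A_*: cross-sections; L_spec: length."""
    strain_rate = -2.0 * c0 * eps_r / L_spec            # specimen strain rate, 1/s
    strain = np.cumsum(strain_rate) * dt                # numerical time integral
    stress = E_bar * (A_bar / A_spec) * eps_t           # specimen stress, Pa
    return strain_rate, strain, stress
```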

  10. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding.

    SciTech Connect

    Loughry, Thomas A.

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
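
    To make the decompression side concrete, here is a hedged Python sketch of the Golomb-Rice core of such a decoder: each sample is a unary-coded quotient followed by k remainder bits. The actual CCSDS Rice algorithm adds block adaptivity and preprocessing not shown, and the bit conventions here are illustrative.

```python
# Hedged sketch of Golomb-Rice decoding (unary quotient + k remainder bits).
def rice_decode(bits: str, k: int, n: int) -> list[int]:
    out, i = [], 0
    for _ in range(n):
        q = 0
        while bits[i] == "0":             # unary quotient: zeros up to the '1'
            q += 1
            i += 1
        i += 1                            # consume the terminating '1'
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)          # reassemble the sample value
    return out

# Example with k=2: value 5 encodes as '01'+'01', value 2 as '1'+'10'
print(rice_decode("0101110", k=2, n=2))   # -> [5, 2]
```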

  11. Semi-solid electrodes having high rate capability

    DOEpatents

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2016-07-05

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode, a semi-solid cathode that includes a suspension of an active material and a conductive material in a liquid electrolyte, and an ion permeable membrane disposed between the anode and the cathode. The semi-solid cathode has a thickness in the range of about 250 µm to 2,500 µm, and the electrochemical cell has an area specific capacity of at least 5 mAh/cm² at a C-rate of C/2.

  12. Semi-solid electrodes having high rate capability

    SciTech Connect

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2015-11-10

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode, a semi-solid cathode that includes a suspension of an active material and a conductive material in a liquid electrolyte, and an ion permeable membrane disposed between the anode and the cathode. The semi-solid cathode has a thickness in the range of about 250 µm to 2,500 µm, and the electrochemical cell has an area specific capacity of at least 5 mAh/cm² at a C-rate of C/2.

  13. High strain rate superplasticity in metals and composites

    SciTech Connect

    Nieh, T.G.; Wadsworth, J.; Higashi, K.

    1993-07-01

    Superplastic behavior at very high strain rates (at or above 1 s⁻¹) in metallic-based materials is an area of increasing interest. The phenomenon has been observed quite extensively in metal alloys, metal-matrix composites (MMC), and mechanically-alloyed (MA) materials. In the present paper, experimental results on high strain rate behavior in 2124 Al-based materials, including Zr-modified 2124, SiC-reinforced 2124, MA 2124, and MA 2124 MMC, are presented. Except for the required fine grain size, details of the structural requirements of this phenomenon are not yet understood. Despite this, a systematic approach to producing high strain rate superplasticity (HSRS) in metallic materials is given in this paper. Evidence indicates that the presence of a liquid phase, or a low-melting-point region, at boundary interfaces is responsible for HSRS.

  14. Flexible high-repetition-rate ultrafast fiber laser

    PubMed Central

    Mao, Dong; Liu, Xueming; Sun, Zhipei; Lu, Hua; Han, Dongdong; Wang, Guoxi; Wang, Fengqiu

    2013-01-01

    High-repetition-rate pulses have widespread applications in the fields of fiber communications, frequency comb, and optical sensing. Here, we have demonstrated high-repetition-rate ultrashort pulses in an all-fiber laser by exploiting an intracavity Mach-Zehnder interferometer (MZI) as a comb filter. The repetition rate of the laser can be tuned flexibly from about 7 to 1100 GHz by controlling the optical path difference between the two arms of the MZI. The pulse duration can be reduced continuously from about 10.1 to 0.55 ps with the spectral width tunable from about 0.35 to 5.7 nm by manipulating the intracavity polarization controller. Numerical simulations well confirm the experimental observations and show that filter-driven four-wave mixing effect, induced by the MZI, is the main mechanism that governs the formation of the high-repetition-rate pulses. This all-fiber-based laser is a simple and low-cost source for various applications where high-repetition-rate pulses are necessary. PMID:24226153
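
    The tuning relation is simple to check: the MZI's free spectral range, and hence the selected pulse repetition rate, is c/(nΔL) for an arm-length difference ΔL. A quick Python check with an assumed fiber index of 1.468 reproduces the reported tuning range.

```python
# Quick check of the comb-filter relation behind the tunable repetition rate.
c, n = 2.998e8, 1.468                      # vacuum light speed; assumed fiber index
for dL in (29e-3, 0.19e-3):                # hypothetical arm differences, metres
    print(f"dL = {dL*1e3:6.2f} mm -> repetition rate ~ {c/(n*dL)/1e9:7.1f} GHz")
# ~7 GHz and ~1100 GHz, spanning the tuning range reported above
```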

  15. Automatic processing of high-rate, high-density multibeam echosounder data

    NASA Astrophysics Data System (ADS)

    Calder, B. R.; Mayer, L. A.

    2003-06-01

    Multibeam echosounders (MBES) are currently the best way to determine the bathymetry of large regions of the seabed with high accuracy. They are becoming the standard instrument for hydrographic surveying and are also used in geological studies, mineral exploration and scientific investigation of the earth's crustal deformations and life cycle. The significantly increased data density provided by an MBES has significant advantages in accurately delineating the morphology of the seabed, but comes with the attendant disadvantage of having to handle and process a much greater volume of data. Current data processing approaches typically involve (computer aided) human inspection of all data, with time-consuming and subjective assessment of all data points. As data rates increase with each new generation of instrument and required turn-around times decrease, manual approaches become unwieldy and automatic methods of processing essential. We propose a new method for automatically processing MBES data that attempts to address concerns of efficiency, objectivity, robustness and accuracy. The method attributes each sounding with an estimate of vertical and horizontal error, and then uses a model of information propagation to transfer information about the depth from each sounding to its local neighborhood. Embedded in the survey area are estimation nodes that aim to determine the true depth at an absolutely defined location, along with its associated uncertainty. As soon as soundings are made available, the nodes independently assimilate propagated information to form depth hypotheses which are then tracked and updated on-line as more data is gathered. Consequently, we can extract at any time a "current-best" estimate for all nodes, plus co-located uncertainties and other metrics. The method can assimilate data from multiple surveys, multiple instruments or repeated passes of the same instrument in real-time as data is being gathered. The data assimilation scheme is
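
    A minimal sketch of the node-update idea, in the spirit of this kind of processing: each estimation node fuses incoming soundings by inverse-variance weighting, carrying a running depth estimate plus uncertainty. The hypothesis tracking used for multi-modal seabeds is omitted, and all numbers are hypothetical.

```python
# Minimal sketch of inverse-variance (Kalman-style) depth assimilation at one
# estimation node; hypothesis management is intentionally omitted.
class DepthNode:
    def __init__(self, depth0: float, var0: float):
        self.depth, self.var = depth0, var0

    def assimilate(self, z: float, var_z: float) -> None:
        g = self.var / (self.var + var_z)      # gain: data vs. current estimate
        self.depth += g * (z - self.depth)
        self.var *= (1.0 - g)                  # posterior variance shrinks

node = DepthNode(depth0=100.0, var0=4.0)
for z, v in [(99.2, 0.25), (99.5, 0.30), (99.1, 0.25)]:  # soundings, variances
    node.assimilate(z, v)
print(f"depth ~ {node.depth:.2f} m, sigma ~ {node.var**0.5:.2f} m")
```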

  16. High salivary Staphylococcus aureus carriage rate among healthy paedodontic patients.

    PubMed

    Petti, Stefano; Boss, Maurizio; Messano, Giuseppe A; Protano, Carmela; Polimeni, Antonella

    2014-01-01

    Staphylococcus aureus can be responsible for oral and dental healthcare-associated infections. Patients with high salivary S. aureus levels are potential sources of infection, because saliva is spread in the environment during dental therapy. This study assessed the salivary S. aureus carriage rate in 97 children (6-12 years) in good general health, attending a paedodontic department. Samples of unstimulated saliva were collected, and S. aureus was presumptively identified. The salivary carriage rate was 43% (95% confidence interval, 33%-53%). 6.2% of children harboured levels >10³ colony-forming units/mL. These data suggest that the risk for environmental contamination and infection in dental healthcare settings could be high.

  17. High removal rate laser-based coating removal system

    DOEpatents

    Matthews, Dennis L.; Celliers, Peter M.; Hackel, Lloyd; Da Silva, Luiz B.; Dane, C. Brent; Mrowka, Stanley

    1999-11-16

    A compact laser system that removes surface coatings (such as paint, dirt, etc.) at a removal rate as high as 1000 ft²/hr or more without damaging the surface. A high repetition rate laser with multiple amplification passes propagating through at least one optical amplifier is used, along with a delivery system consisting of a telescoping and articulating tube which also contains an evacuation system for simultaneously sweeping up the debris produced in the process. The amplified beam can be converted to an output beam by passively switching the polarization of at least one amplified beam. The system also has a personal safety system which protects against accidental exposures.

  18. Moving Indiana Forward: High Standards and High Graduation Rates. A Strategic Assessment for Indiana Education Policymakers

    ERIC Educational Resources Information Center

    Achieve, Inc., 2006

    2006-01-01

    Indiana was selected to participate in a new initiative--"Moving Forward: High Standards and High Graduation Rates"--jointly sponsored by Achieve, Inc. and Jobs for the Future and funded by Carnegie Corporation of New York. This effort is designed to spotlight the importance of pursuing a dual agenda of high standards and high graduation rates.…

  19. Sensitivity to Envelope Interaural Time Differences at High Modulation Rates

    PubMed Central

    Bleeck, Stefan; McAlpine, David

    2015-01-01

    Sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure of low-frequency tones and the modulated envelopes of high-frequency sounds are considered comparable, particularly for envelopes shaped to transmit similar fidelity of temporal information normally present for low-frequency sounds. Nevertheless, discrimination performance for envelope modulation rates above a few hundred Hertz is reported to be poor—to the point of discrimination thresholds being unattainable—compared with the much higher (>1,000 Hz) limit for low-frequency ITD sensitivity, suggesting the presence of a low-pass filter in the envelope domain. Further, performance for identical modulation rates appears to decline with increasing carrier frequency, supporting the view that the low-pass characteristics observed for envelope ITD processing is carrier-frequency dependent. Here, we assessed listeners’ sensitivity to ITDs conveyed in pure tones and in the modulated envelopes of high-frequency tones. ITD discrimination for the modulated high-frequency tones was measured as a function of both modulation rate and carrier frequency. Some well-trained listeners appear able to discriminate ITDs extremely well, even at modulation rates well beyond 500 Hz, for 4-kHz carriers. For one listener, thresholds were even obtained for a modulation rate of 800 Hz. The highest modulation rate for which thresholds could be obtained declined with increasing carrier frequency for all listeners. At 10 kHz, the highest modulation rate at which thresholds could be obtained was 600 Hz. The upper limit of sensitivity to ITDs conveyed in the envelope of high-frequency modulated sounds appears to be higher than previously considered. PMID:26721926
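
    A hedged sketch of the stimulus construction such experiments rely on: a high-frequency carrier whose sinusoidal amplitude envelope is delayed in one ear, so the ITD is carried by the envelope alone while the carrier is diotic. All parameters below are illustrative.

```python
# Minimal sketch of an envelope-ITD stimulus: delayed envelope, identical carrier.
import numpy as np

fs, dur = 48000, 0.5                       # sample rate (Hz), duration (s)
fc, fm, itd = 4000.0, 600.0, 200e-6        # carrier, modulation rate, 200-us ITD
t = np.arange(int(fs * dur)) / fs

env_L = 0.5 * (1 + np.sin(2 * np.pi * fm * t))
env_R = 0.5 * (1 + np.sin(2 * np.pi * fm * (t - itd)))   # delayed envelope
carrier = np.sin(2 * np.pi * fc * t)                     # diotic carrier
left, right = env_L * carrier, env_R * carrier
# left/right can be written out as a stereo file for headphone presentation
```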

  20. Sensitivity to Envelope Interaural Time Differences at High Modulation Rates.

    PubMed

    Monaghan, Jessica J M; Bleeck, Stefan; McAlpine, David

    2015-01-01

    Sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure of low-frequency tones and the modulated envelopes of high-frequency sounds are considered comparable, particularly for envelopes shaped to transmit similar fidelity of temporal information normally present for low-frequency sounds. Nevertheless, discrimination performance for envelope modulation rates above a few hundred Hertz is reported to be poor, to the point of discrimination thresholds being unattainable, compared with the much higher (>1,000 Hz) limit for low-frequency ITD sensitivity, suggesting the presence of a low-pass filter in the envelope domain. Further, performance for identical modulation rates appears to decline with increasing carrier frequency, supporting the view that the low-pass characteristics observed for envelope ITD processing is carrier-frequency dependent. Here, we assessed listeners' sensitivity to ITDs conveyed in pure tones and in the modulated envelopes of high-frequency tones. ITD discrimination for the modulated high-frequency tones was measured as a function of both modulation rate and carrier frequency. Some well-trained listeners appear able to discriminate ITDs extremely well, even at modulation rates well beyond 500 Hz, for 4-kHz carriers. For one listener, thresholds were even obtained for a modulation rate of 800 Hz. The highest modulation rate for which thresholds could be obtained declined with increasing carrier frequency for all listeners. At 10 kHz, the highest modulation rate at which thresholds could be obtained was 600 Hz. The upper limit of sensitivity to ITDs conveyed in the envelope of high-frequency modulated sounds appears to be higher than previously considered. PMID:26721926

  1. Metasurface-based broadband hologram with high tolerance to fabrication errors.

    PubMed

    Zhang, Xiaohu; Jin, Jinjin; Wang, Yanqin; Pu, Mingbo; Li, Xiong; Zhao, Zeyu; Gao, Ping; Wang, Changtao; Luo, Xiangang

    2016-01-01

    With new degrees of freedom to achieve full control of the optical wavefront, metasurfaces could overcome the fabrication difficulties faced by metamaterials. In this paper, a broadband hologram using a metasurface consisting of an array of elongated nanoapertures with different orientations has been experimentally demonstrated. Owing to the broadband characteristic of the polarization-dependent scattering, the performance is verified at working wavelengths ranging from 405 nm to 914 nm. Furthermore, the tolerance to fabrication errors, which include the length and width of the elongated apertures, shape deformation, and phase noise, has been theoretically investigated to be as large as 10% relative to the original hologram. We believe the method proposed here is promising for emerging applications such as holographic display, optical information processing, and lithography.

  2. Metasurface-based broadband hologram with high tolerance to fabrication errors

    PubMed Central

    Zhang, Xiaohu; Jin, Jinjin; Wang, Yanqin; Pu, Mingbo; Li, Xiong; Zhao, Zeyu; Gao, Ping; Wang, Changtao; Luo, Xiangang

    2016-01-01

    With new degrees of freedom to achieve full control of the optical wavefront, metasurfaces could overcome the fabrication difficulties faced by metamaterials. In this paper, a broadband hologram using a metasurface consisting of an array of elongated nanoapertures with different orientations has been experimentally demonstrated. Owing to the broadband characteristic of the polarization-dependent scattering, the performance is verified at working wavelengths ranging from 405 nm to 914 nm. Furthermore, the tolerance to fabrication errors, which include the length and width of the elongated apertures, shape deformation, and phase noise, has been theoretically investigated to be as large as 10% relative to the original hologram. We believe the method proposed here is promising for emerging applications such as holographic display, optical information processing, and lithography. PMID:26818130

  3. Metasurface-based broadband hologram with high tolerance to fabrication errors.

    PubMed

    Zhang, Xiaohu; Jin, Jinjin; Wang, Yanqin; Pu, Mingbo; Li, Xiong; Zhao, Zeyu; Gao, Ping; Wang, Changtao; Luo, Xiangang

    2016-01-01

    With new degrees of freedom to achieve full control of the optical wavefront, metasurfaces could overcome the fabrication difficulties faced by metamaterials. In this paper, a broadband hologram using a metasurface consisting of an array of elongated nanoapertures with different orientations has been experimentally demonstrated. Owing to the broadband characteristic of the polarization-dependent scattering, the performance is verified at working wavelengths ranging from 405 nm to 914 nm. Furthermore, the tolerance to fabrication errors, which include the length and width of the elongated apertures, shape deformation, and phase noise, has been theoretically investigated to be as large as 10% relative to the original hologram. We believe the method proposed here is promising for emerging applications such as holographic display, optical information processing, and lithography. PMID:26818130

  4. A high-frequency analysis of radome-induced radar pointing error

    NASA Astrophysics Data System (ADS)

    Burks, D. G.; Graf, E. R.; Fahey, M. D.

    1982-09-01

    An analysis is presented of the effect of a tangent ogive radome on the pointing accuracy of a monopulse radar employing an aperture antenna. The radar is assumed to be operating in the receive mode, and the incident fields at the antenna are found by a ray tracing procedure. Rays entering the antenna aperture by direct transmission through the radome and by single reflection from the radome interior are considered. The radome wall is treated as being locally planar. The antenna can be scanned in two angular directions, and two orthogonal polarization states which produce an arbitrarily polarized incident field are considered. Numerical results are presented for both in-plane and cross-plane errors as a function of scan angle and polarization.

  5. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  6. Ultra High-Rate Germanium (UHRGe) Modeling Status Report

    SciTech Connect

    Warren, Glen A.; Rodriguez, Douglas C.

    2012-06-07

    The Ultra-High Rate Germanium (UHRGe) project at Pacific Northwest National Laboratory (PNNL) is conducting research to develop a high-purity germanium (HPGe) detector that can provide both the high resolution typical of germanium and high signal throughput. Such detectors may be beneficial for a variety of potential applications ranging from safeguards measurements of used fuel to material detection and verification using active interrogation techniques. This report describes some of the initial radiation transport modeling efforts that have been conducted to help guide the design of the detector as well as a description of the process used to generate the source spectrum for the used fuel application evaluation.

  7. Quality Control of High-Dose-Rate Brachytherapy: Treatment Delivery Analysis Using Statistical Process Control

    SciTech Connect

    Able, Charles M.; Bright, Megan; Frizzell, Bart

    2013-03-01

    Purpose: Statistical process control (SPC) is a quality control method used to ensure that a process is well controlled and operates with little variation. This study determined whether SPC was a viable technique for evaluating the proper operation of a high-dose-rate (HDR) brachytherapy treatment delivery system. Methods and Materials: A surrogate prostate patient was developed using Vyse ordnance gelatin. A total of 10 metal oxide semiconductor field-effect transistors (MOSFETs) were placed from prostate base to apex. Computed tomography guidance was used to accurately position the first detector in each train at the base. The plan consisted of 12 needles with 129 dwell positions delivering a prescribed peripheral dose of 200 cGy. Sixteen accurate treatment trials were delivered as planned. Subsequently, a number of treatments were delivered with errors introduced, including wrong patient, wrong source calibration, wrong connection sequence, single needle displaced inferiorly 5 mm, and entire implant displaced 2 mm and 4 mm inferiorly. Two process behavior charts (PBC), an individual and a moving range chart, were developed for each dosimeter location. Results: There were 4 false positives resulting from 160 measurements from 16 accurately delivered treatments. For the inaccurately delivered treatments, the PBC indicated that measurements made at the periphery and apex (regions of high-dose gradient) were much more sensitive to treatment delivery errors. All errors introduced were correctly identified by either the individual or the moving range PBC in the apex region. Measurements at the urethra and base were less sensitive to errors. Conclusions: SPC is a viable method for assessing the quality of HDR treatment delivery. Further development is necessary to determine the most effective dose sampling, to ensure reproducible evaluation of treatment delivery accuracy.
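
    For readers unfamiliar with I-MR charts, the sketch below computes the individuals and moving-range limits with the standard n=2 constants (3/d2 = 2.66, D4 = 3.267). The MOSFET dose readings are hypothetical, not the study's data.

```python
# Minimal sketch of individuals and moving-range (I-MR) process behavior
# charts, using the standard n=2 SPC constants; doses are hypothetical (cGy).
import numpy as np

doses = np.array([201.3, 198.7, 200.4, 199.2, 202.1, 200.8, 198.9, 201.6])
mr = np.abs(np.diff(doses))                # moving ranges between runs
xbar, mrbar = doses.mean(), mr.mean()

i_ucl, i_lcl = xbar + 2.66 * mrbar, xbar - 2.66 * mrbar
mr_ucl = 3.267 * mrbar
print(f"I chart:  {i_lcl:.1f} .. {i_ucl:.1f} cGy")
print(f"MR chart: UCL = {mr_ucl:.2f} cGy")
out_of_control = (doses > i_ucl) | (doses < i_lcl)   # flags delivery errors
```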

  8. Machining and grinding: High rate deformation in practice

    SciTech Connect

    Follansbee, P.S.

    1993-04-01

    Machining and grinding are well-established material-working operations involving highly non-uniform deformation and failure processes. A typical machining operation is characterized by uncertain boundary conditions (e.g., surface interactions), three-dimensional stress states, large strains, high strain rates, non-uniform temperatures, highly localized deformations, and failure by both nominally ductile and brittle mechanisms. While machining and grinding are thought to be dominated by empiricism, even a cursory inspection leads one to the conclusion that this results more from necessity, arising out of the complicated and highly interdisciplinary nature of the processes, than from any lack of underlying science. With these conditions in mind, the purpose of this paper is to outline the current understanding of strain-rate effects in metals.

  9. Machining and grinding: High rate deformation in practice

    SciTech Connect

    Follansbee, P.S.

    1993-01-01

    Machining and grinding are well-established material-working operations involving highly non-uniform deformation and failure processes. A typical machining operation is characterized by uncertain boundary conditions (e.g., surface interactions), three-dimensional stress states, large strains, high strain rates, non-uniform temperatures, highly localized deformations, and failure by both nominally ductile and brittle mechanisms. While machining and grinding are thought to be dominated by empiricism, even a cursory inspection leads one to the conclusion that this results more from necessity, arising out of the complicated and highly interdisciplinary nature of the processes, than from any lack of underlying science. With these conditions in mind, the purpose of this paper is to outline the current understanding of strain-rate effects in metals.

  10. Statistical Approach to Decreasing the Error Rate of Noninvasive Prenatal Aneuploid Detection caused by Maternal Copy Number Variation

    PubMed Central

    Zhang, Han; Zhao, Yang-Yu; Song, Jing; Zhu, Qi-Ying; Yang, Hua; Zheng, Mei-Ling; Xuan, Zhao-Ling; Wei, Yuan; Chen, Yang; Yuan, Peng-Bo; Yu, Yang; Li, Da-Wei; Liang, Jun-Bin; Fan, Ling; Chen, Chong-Jian; Qiao, Jie

    2015-01-01

    Analyses of cell-free fetal DNA (cff-DNA) from maternal plasma using massively parallel sequencing enable the noninvasive detection of feto-placental chromosome aneuploidy; this technique has been widely used in clinics worldwide. Noninvasive prenatal tests (NIPT) based on cff-DNA have achieved very high accuracy; however, they suffer from maternal copy-number variations (CNV) that may cause false positives and false negatives. In this study, we developed an algorithm to exclude the effect of maternal CNV and refined the Z-score that is used to determine fetal aneuploidy. The simulation results showed that the algorithm is robust against variations of fetal concentration and maternal CNV size. We also introduced a method based on the discrepancy between feto-placental concentrations to help reduce the false-positive ratio. A total of 6615 pregnant women were enrolled in a prospective study to validate the accuracy of our method. All 106 fetuses with T21, 20 with T18, and three with T13 were tested using our method, with sensitivity of 100% and specificity of 99.97%. In the results, two cases with maternal duplications in chromosome 21, which were falsely predicted as T21 by the previous NIPT method, were correctly classified as normal by our algorithm, which demonstrated the effectiveness of our approach. PMID:26534864
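
    A minimal sketch of the baseline z-score statistic that such algorithms refine: the fraction of reads mapping to chromosome 21 is compared with its distribution across euploid reference pregnancies. All numbers below are illustrative.

```python
# Minimal sketch of the baseline chromosome-representation z-score for NIPT;
# reference fractions and the test sample are synthetic, for illustration only.
import numpy as np

ref_fractions = np.random.default_rng(0).normal(0.0130, 0.0002, 200)  # euploid
test_fraction = 0.0140                       # hypothetical T21-positive sample

z = (test_fraction - ref_fractions.mean()) / ref_fractions.std(ddof=1)
print(f"z = {z:.1f}  ->  call T21 if z > 3")  # the CNV correction refines this
```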

  11. Statistical Approach to Decreasing the Error Rate of Noninvasive Prenatal Aneuploid Detection caused by Maternal Copy Number Variation.

    PubMed

    Zhang, Han; Zhao, Yang-Yu; Song, Jing; Zhu, Qi-Ying; Yang, Hua; Zheng, Mei-Ling; Xuan, Zhao-Ling; Wei, Yuan; Chen, Yang; Yuan, Peng-Bo; Yu, Yang; Li, Da-Wei; Liang, Jun-Bin; Fan, Ling; Chen, Chong-Jian; Qiao, Jie

    2015-01-01

    Analyses of cell-free fetal DNA (cff-DNA) from maternal plasma using massively parallel sequencing enable the noninvasive detection of feto-placental chromosome aneuploidy; this technique has been widely used in clinics worldwide. Noninvasive prenatal tests (NIPT) based on cff-DNA have achieved very high accuracy; however, they suffer from maternal copy-number variations (CNV) that may cause false positives and false negatives. In this study, we developed an algorithm to exclude the effect of maternal CNV and refined the Z-score that is used to determine fetal aneuploidy. The simulation results showed that the algorithm is robust against variations of fetal concentration and maternal CNV size. We also introduced a method based on the discrepancy between feto-placental concentrations to help reduce the false-positive ratio. A total of 6615 pregnant women were enrolled in a prospective study to validate the accuracy of our method. All 106 fetuses with T21, 20 with T18, and three with T13 were tested using our method, with sensitivity of 100% and specificity of 99.97%. In the results, two cases with maternal duplications in chromosome 21, which were falsely predicted as T21 by the previous NIPT method, were correctly classified as normal by our algorithm, which demonstrated the effectiveness of our approach. PMID:26534864

  12. Sequencing error correction without a reference genome

    PubMed Central

    2013-01-01

    Background Next (second) generation sequencing is an increasingly important tool for many areas of molecular biology, however, care must be taken when interpreting its output. Even a low error rate can cause a large number of errors due to the high number of nucleotides being sequenced. Identifying sequencing errors from true biological variants is a challenging task. For organisms without a reference genome this difficulty is even more challenging. Results We have developed a method for the correction of sequencing errors in data from the Illumina Solexa sequencing platforms. It does not require a reference genome and is of relevance for microRNA studies, unsequenced genomes, variant detection in ultra-deep sequencing and even for RNA-Seq studies of organisms with sequenced genomes where RNA editing is being considered. Conclusions The derived error model is novel in that it allows different error probabilities for each position along the read, in conjunction with different error rates depending on the particular nucleotides involved in the substitution, and does not force these effects to behave in a multiplicative manner. The model provides error rates which capture the complex effects and interactions of the three main known causes of sequencing error associated with the Illumina platforms. PMID:24350580
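
    The flavor of such an error model can be sketched as a position- and substitution-specific rate table estimated from mismatch counts, without forcing positional and substitution effects to combine multiplicatively. The counts below are toy data, not the paper's fitted model.

```python
# Hedged sketch: empirical per-position, per-substitution error rates from
# (hypothetical) alignment counts, with no multiplicative factorization imposed.
from collections import defaultdict

counts = defaultdict(int)    # (position, ref_base, read_base) -> observations
totals = defaultdict(int)    # (position, ref_base) -> coverage

def observe(pos: int, ref: str, read: str) -> None:
    counts[(pos, ref, read)] += 1
    totals[(pos, ref)] += 1

def error_rate(pos: int, ref: str, read: str) -> float:
    n = totals[(pos, ref)]
    return counts[(pos, ref, read)] / n if n else 0.0

for base in "ACAA":                       # toy data: 3 correct reads, 1 A->C error
    observe(10, "A", base)
print(error_rate(10, "A", "C"))           # 0.25 at read position 10
```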

  13. User microprogrammable processors for high data rate telemetry preprocessing

    NASA Technical Reports Server (NTRS)

    Pugsley, J. H.; Ogrady, E. P.

    1973-01-01

    The use of microprogrammable processors for the preprocessing of high data rate satellite telemetry is investigated. The following topics are discussed along with supporting studies: (1) evaluation of commercial microprogrammable minicomputers for telemetry preprocessing tasks; (2) microinstruction sets for telemetry preprocessing; and (3) the use of multiple minicomputers to achieve high data processing. The simulation of small microprogrammed processors is discussed along with examples of microprogrammed processors.

  14. Method and Apparatus for High Data Rate Demodulation

    NASA Technical Reports Server (NTRS)

    Grebowsky, Gerald J. (Inventor); Gray, Andrew A. (Inventor); Srinivasan, Meera (Inventor)

    2001-01-01

    A method to demodulate BPSK or QPSK data using receiver demodulator clock rates of one-fourth the data rate is presented. This is accomplished through multirate digital signal processing techniques. The data is sampled with an analog-to-digital converter and then converted from a serial data stream to a parallel data stream. This signal processing requires a clock cycle four times the data rate. Once converted into a parallel data stream, the demodulation operations, including complex baseband mixing, lowpass filtering, detection filtering, symbol-timing recovery, and carrier recovery, are all accomplished at a rate one-fourth the data rate. The clock cycle required is one-sixteenth that required by a traditional serial receiver based on straight convolution. The high-rate data demodulator will demodulate BPSK, QPSK, UQPSK, and DQPSK with data rates ranging from 10 megasymbols to more than 300 megasymbols per second. This method requires fewer clock cycles per symbol than traditional serial convolution techniques.
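
    The reason quarter-rate clocking is possible can be sketched directly: with the carrier centred at fs/4, complex down-mixing multiplies successive samples by 1, -j, -1, +j (sign swaps, no real multiplies), and a 1:4 serial-to-parallel split lets every subsequent stage run at fs/4. The Python below is illustrative of this principle, not the patented implementation.

```python
# Minimal sketch of fs/4 complex down-mixing plus 1:4 serial-to-parallel split.
import numpy as np

fs, n = 4.0, 4096
t = np.arange(n) / fs
x = np.cos(2 * np.pi * (fs / 4) * t + 0.3)       # IF signal centred at fs/4

lo = np.array([1, -1j, -1, 1j])                  # exp(-j*2*pi*(fs/4)*t) samples
baseband = x * np.tile(lo, n // 4)               # mixing reduces to sign swaps

parallel = baseband.reshape(-1, 4)               # 1:4 serial-to-parallel
# each of the 4 columns can now be processed with a clock at fs/4
print(parallel.shape, np.angle(baseband.mean())) # recovered carrier phase ~0.3
```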

  15. Application of high-speed photography to the study of high-strain-rate materials testing

    NASA Astrophysics Data System (ADS)

    Ruiz, D.; Harding, John; Noble, J. P.; Hillsdon, Graham K.

    1991-04-01

    There is a growing interest in material behaviour at strain rates greater than 10⁴ s⁻¹, for instance in the design of aero-engine turbine blades. It is necessary, therefore, to develop material testing techniques that give well-defined information on mechanical behaviour in this very high strain-rate regime. A number of techniques are available, including the expanding ring test [1], a miniaturised compression Hopkinson bar technique using direct impact, and the double-notch shear test [3], which has been described by Nicholas [4] as "one of the most promising for future studies in dynamic plasticity". However, although it is believed to be a good test for determining the flow stress at shear strain rates of 10⁴ s⁻¹ and above, the design of specimen used makes an accurate determination of strain extremely difficult, while in the later stages of the test the deformation mode involves rotation as well as shear. If this technique is to be used, therefore, it is necessary to examine in detail the progressive deformation and state of stress within the specimen during the impact process. An attempt can then be made to assess how far the data obtained are a reliable measure of the specimen material's response, and the test can be calibrated. An extensive three-stage analysis has been undertaken. In the first stage, reported in a previous paper [5], the initial elastic behaviour was studied. Dynamic photoelastic experiments were used to support linear elastic numerical results derived by the finite element method. Good qualitative agreement was obtained between the photoelastic experiment and the numerical model, and the principal source of error in the elastic region of the double-notch shear test was identified as the assumption that all deformation of the specimen is concentrated in the two shear zones. For the epoxy (photoelastic) specimen a calibration factor of 5.3 was determined. This factor represents the ratio between the defined (nominal) gauge length and the effective gauge length.

  16. Understanding High School Graduation Rates in North Carolina

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2009

    2009-01-01

    Graduation rates are a fundamental indicator of whether or not the nation's public school system is doing what it is intended to do: enroll, engage, and educate youth to be productive members of society. Since almost 90 percent of the fastest-growing and highest-paying jobs require some postsecondary education, having a high school diploma and the…

  17. Trends in High School Graduation Rates. Research Brief. Volume 0710

    ERIC Educational Resources Information Center

    Romanik, Dale; Froman, Terry

    2008-01-01

    This Research Brief addresses an outcome measure that is of paramount importance to senior high schools--graduation rate. Nationwide a student drops out of school approximately every nine seconds. The significance of this issue locally is exemplified by a recent American Civil Liberties Union filing of a class action law suit against the Palm…

  18. Plant respirometer enables high resolution of oxygen consumption rates

    NASA Technical Reports Server (NTRS)

    Foster, D. L.

    1966-01-01

    Plant respirometer permits high resolution of relatively small changes in the rate of oxygen consumed by plant organisms undergoing oxidative metabolism in a nonphotosynthetic state. The two stage supply and monitoring system operates by a differential pressure transducer and provides a calibrated output by digital or analog signals.

  19. Binary interactions with high accretion rates onto main sequence stars

    NASA Astrophysics Data System (ADS)

    Shiber, Sagiv; Schreier, Ron; Soker, Noam

    2016-07-01

    Energetic outflows from main sequence stars accreting mass at very high rates might account for the powering of some eruptive objects, such as merging main sequence stars, major eruptions of luminous blue variables, e.g., the Great Eruption of Eta Carinae, and other intermediate luminosity optical transients (ILOTs; red novae; red transients). These powerful outflows could potentially also supply the extra energy required in the common envelope process and in the grazing envelope evolution of binary systems. We propose that a massive outflow/jets mediated by magnetic fields might remove energy and angular momentum from the accretion disk to allow such high accretion rate flows. By examining the possible activity of the magnetic fields of accretion disks, we conclude that indeed main sequence stars might accrete mass at very high rates, up to ≈ 10⁻² M⊙ yr⁻¹ for solar-type stars, and up to ≈ 1 M⊙ yr⁻¹ for very massive stars. We speculate that magnetic fields amplified in such extreme conditions might lead to the formation of massive bipolar outflows that can remove most of the disk's energy and angular momentum. It is this energy and angular momentum removal that allows the very high mass accretion rate onto main sequence stars.

  20. Binary interactions with high accretion rates onto main sequence stars

    NASA Astrophysics Data System (ADS)

    Shiber, Sagiv; Schreier, Ron; Soker, Noam

    2016-07-01

    Energetic outflows from main sequence stars accreting mass at very high rates might account for the powering of some eruptive objects, such as merging main sequence stars, major eruptions of luminous blue variables, e.g., the Great Eruption of Eta Carinae, and other intermediate luminosity optical transients (ILOTs; red novae; red transients). These powerful outflows could potentially also supply the extra energy required in the common envelope process and in the grazing envelope evolution of binary systems. We propose that a massive outflow/jets mediated by magnetic fields might remove energy and angular momentum from the accretion disk to allow such high accretion rate flows. By examining the possible activity of the magnetic fields of accretion disks, we conclude that indeed main sequence stars might accrete mass at very high rates, up to ≈ 10⁻² M⊙ yr⁻¹ for solar-type stars, and up to ≈ 1 M⊙ yr⁻¹ for very massive stars. We speculate that magnetic fields amplified in such extreme conditions might lead to the formation of massive bipolar outflows that can remove most of the disk's energy and angular momentum. It is this energy and angular momentum removal that allows the very high mass accretion rate onto main sequence stars.

  1. Childhood Onset Schizophrenia: High Rate of Visual Hallucinations

    ERIC Educational Resources Information Center

    David, Christopher N.; Greenstein, Deanna; Clasen, Liv; Gochman, Pete; Miller, Rachel; Tossell, Julia W.; Mattai, Anand A.; Gogtay, Nitin; Rapoport, Judith L.

    2011-01-01

    Objective: To document high rates and clinical correlates of nonauditory hallucinations in childhood onset schizophrenia (COS). Method: Within a sample of 117 pediatric patients (mean age 13.6 years), diagnosed with COS, the presence of auditory, visual, somatic/tactile, and olfactory hallucinations was examined using the Scale for the Assessment…

  2. Cassini High Rate Detector V16.0

    NASA Astrophysics Data System (ADS)

    Economou, T.; DiDonna, P.

    2016-05-01

    The High Rate Detector (HRD) from the University of Chicago is an independent part of the CDA instrument on the Cassini Orbiter that measures the dust flux and particle mass distribution of dust particles hitting the HRD detectors. This data set includes all data from the HRD through December 31, 2015. Please refer to Srama et al. (2004) for a detailed HRD description.

  3. Cassini High Rate Detector V14.0

    NASA Astrophysics Data System (ADS)

    Economou, T.; DiDonna, P.

    2014-06-01

    The High Rate Detector (HRD) from the University of Chicago is an independent part of the CDA instrument on the Cassini Orbiter that measures the dust flux and particle mass distribution of dust particles hitting the HRD detectors. This data set includes all data from the HRD through December 31, 2013. Please refer to Srama et al. (2004) for a detailed HRD description.

  4. Corrected High-Frame Rate Anchored Ultrasound with Software Alignment

    ERIC Educational Resources Information Center

    Miller, Amanda L.; Finch, Kenneth B.

    2011-01-01

    Purpose: To improve lingual ultrasound imaging with the Corrected High Frame Rate Anchored Ultrasound with Software Alignment (CHAUSA; Miller, 2008) method. Method: A production study of the IsiXhosa alveolar click is presented. Articulatory-to-acoustic alignment is demonstrated using a Tri-Modal 3-ms pulse generator. Images from 2 simultaneous…

  5. High Reported Spontaneous Stuttering Recovery Rates: Fact or Fiction?

    ERIC Educational Resources Information Center

    Ramig, Peter R.

    1993-01-01

    Contact after 6 to 8 years with families of 21 children who were diagnosed as stuttering but did not receive fluency intervention services found that almost all subjects still had a stuttering problem. Results dispute the high spontaneous recovery rates reported in the literature and support the value of early intervention. (Author/DB)

  6. High Precision Measurements of Temperature Dependence of Creep Rate of Polycrystalline Forsterite

    NASA Astrophysics Data System (ADS)

    Nakakoji, T.; Hiraga, T.

    2014-12-01

    Obtaining the temperature dependence of creep rate, that is, the activation energy for creep, is critical in geophysics, since its value can indicate the deformation mechanism and also allows the creep rate measured in laboratory experiments to be extrapolated to geological conditions when the creep mechanism is identical in both cases. Although numerous experimental results have been obtained so far, the reported activation energies often carry error ranges of >50 kJ/mol, which can cause large uncertainties in strain rate at the geological conditions of interest. To minimize this error, it is important to collect strain rates at many different temperatures with high accuracy. We conducted high-temperature compression experiments on synthetic forsterite (90 vol%) and enstatite (10 vol%) aggregates under increasing and decreasing temperatures. We applied a constant load of ~20 MPa using a uniaxial testing machine (Shimadzu AG-X 50 kN). The temperature was varied from 1360°C to 1240°C by the furnace attached to the machine. Prior to applying the load, the grain size was saturated at 1360°C for 24 hours to minimize grain growth during the test. The temperature-decrease rate was 0.11 min/°C over the range 1360-1300°C and 0.02 min/°C over 1300-1240°C; the increase rate was the same as the decrease rate. Strain rates were obtained successfully at every 1°C step. After the experiment, we analyzed the microstructure of the sample with scanning electron microscopy to measure the grain diameter. Arrhenius plots of strain rate are very linear at >1300°C, giving an activation energy of 649 ± 14 kJ/mol, whereas a weak transition to a lower activation energy of 550 ± 23 kJ/mol was observed below 1300°C. Tasaka et al. (2013) obtained an activation energy of 370 ± 50 kJ/mol over similar temperature ranges but with finer-grained samples. Combining these results, we interpret our results of high activation
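
    The activation-energy extraction itself is a one-line regression; the sketch below recovers E from synthetic strain rates generated with E = 649 kJ/mol, illustrating the ln(rate) versus 1/T fit used with data of this kind.

```python
# Minimal sketch: Arrhenius fit of creep rate, E = -R * slope of ln(rate) vs 1/T.
# The rates are synthetic, generated with E = 649 kJ/mol for illustration.
import numpy as np

R = 8.314                                   # gas constant, J mol^-1 K^-1
T = np.linspace(1300 + 273.15, 1360 + 273.15, 61)   # K, one point per degree
E_true = 649e3
rate = 1e-4 * np.exp(-E_true / (R * T))     # hypothetical creep rates, 1/s

slope, _ = np.polyfit(1.0 / T, np.log(rate), 1)
print(f"E = {-slope * R / 1e3:.0f} kJ/mol") # recovers ~649
```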

  7. Optimization of high-throughput sequencing kinetics for determining enzymatic rate constants of thousands of RNA substrates.

    PubMed

    Niland, Courtney N; Jankowsky, Eckhard; Harris, Michael E

    2016-10-01

    Quantification of the specificity of RNA binding proteins and RNA processing enzymes is essential to understanding their fundamental roles in biological processes. High-throughput sequencing kinetics (HTS-Kin) uses high-throughput sequencing and internal competition kinetics to simultaneously monitor the processing rate constants of thousands of substrates by RNA processing enzymes. This technique has provided unprecedented insight into the substrate specificity of the tRNA processing endonuclease ribonuclease P. Here, we investigated the accuracy and robustness of measurements associated with each step of the HTS-Kin procedure. We examine the effect of substrate concentration on the observed rate constant, determine the optimal kinetic parameters, and provide guidelines for reducing error in amplification of the substrate population. Importantly, we found that high-throughput sequencing and experimental reproducibility contribute to error, and these are the main sources of imprecision in the quantified results when otherwise optimized guidelines are followed.
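
    The internal-competition relation underlying this approach can be sketched briefly: for substrates reacting in one pot with a shared enzyme, the ratio of rate constants follows from the log of each substrate's remaining fraction relative to a reference. Values below are illustrative, and normalization of read counts to an internal standard is assumed.

```python
# Hedged sketch of internal-competition kinetics: k_i/k_ref from log-depletion.
import numpy as np

def relative_k(remaining: np.ndarray, ref: int) -> np.ndarray:
    """remaining[i]: fraction of substrate i left unreacted at time t
    (read counts at t over counts at t=0, normalized to an internal standard)."""
    depletion = np.log(remaining)            # = -k_i * integral of [E] dt
    return depletion / depletion[ref]        # the shared integral cancels

remaining = np.array([0.40, 0.64, 0.20, 0.55])   # hypothetical survivals
print(relative_k(remaining, ref=0))              # rate constants vs. substrate 1
```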

  8. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands

    PubMed Central

    Sánchez-Durán, José A.; Hidalgo-López, José A.; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando

    2015-01-01

    Tactile sensors suffer from many types of interference and errors like crosstalk, non-linearity, drift or hysteresis, therefore calibration should be carried out to compensate for these deviations. However, this procedure is difficult in sensors mounted on artificial hands for robots or prosthetics for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. Furthermore, this intensive and complex calibration could be less determinant, or at least simpler. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters could be changed less by common errors and interferences, or at least their variations could be in the order of those caused by accepted limitations, like reduced spatial resolution. This paper shows results from experiments to support this idea. The experiments are carried out with a high performance commercial sensor as well as with a low-cost error-prone sensor built with a common procedure in robotics. PMID:26295393
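
    The "few parameters" referred to here are typically image moments. The sketch below computes the total force, centroid, and principal orientation of a toy tactile image, quantities that tend to vary less under taxel-level errors than the raw readings do.

```python
# Minimal sketch of tactile-image moments: total force, centroid, orientation.
import numpy as np

def tactile_moments(p: np.ndarray):
    y, x = np.mgrid[:p.shape[0], :p.shape[1]]
    m00 = p.sum()                            # total contact force (a.u.)
    cx, cy = (x * p).sum() / m00, (y * p).sum() / m00
    mu20 = ((x - cx) ** 2 * p).sum() / m00   # central second moments
    mu02 = ((y - cy) ** 2 * p).sum() / m00
    mu11 = ((x - cx) * (y - cy) * p).sum() / m00
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # contact orientation, rad
    return m00, (cx, cy), theta

img = np.zeros((8, 8)); img[2:6, 3:5] = 1.0          # toy contact patch
print(tactile_moments(img))
```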

  9. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands.

    PubMed

    Sánchez-Durán, José A; Hidalgo-López, José A; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando

    2015-08-19

    Tactile sensors suffer from many types of interference and errors like crosstalk, non-linearity, drift or hysteresis, therefore calibration should be carried out to compensate for these deviations. However, this procedure is difficult in sensors mounted on artificial hands for robots or prosthetics for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. Furthermore, this intensive and complex calibration could be less determinant, or at least simpler. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters could be changed less by common errors and interferences, or at least their variations could be in the order of those caused by accepted limitations, like reduced spatial resolution. This paper shows results from experiments to support this idea. The experiments are carried out with a high performance commercial sensor as well as with a low-cost error-prone sensor built with a common procedure in robotics.

  10. Human PrimPol is a highly error-prone polymerase regulated by single-stranded DNA binding proteins

    PubMed Central

    Guilliam, Thomas A.; Jozwiakowski, Stanislaw K.; Ehlinger, Aaron; Barnes, Ryan P.; Rudd, Sean G.; Bailey, Laura J.; Skehel, J. Mark; Eckert, Kristin A.; Chazin, Walter J.; Doherty, Aidan J.

    2015-01-01

    PrimPol is a recently identified polymerase involved in eukaryotic DNA damage tolerance, employed in both re-priming and translesion synthesis mechanisms to bypass nuclear and mitochondrial DNA lesions. In this report, we investigate how the enzymatic activities of human PrimPol are regulated. We show that, unlike other TLS polymerases, PrimPol is not stimulated by PCNA and does not interact with it in vivo. We identify that PrimPol interacts with both of the major single-strand binding proteins, RPA and mtSSB in vivo. Using NMR spectroscopy, we characterize the domains responsible for the PrimPol-RPA interaction, revealing that PrimPol binds directly to the N-terminal domain of RPA70. In contrast to the established role of SSBs in stimulating replicative polymerases, we find that SSBs significantly limit the primase and polymerase activities of PrimPol. To identify the requirement for this regulation, we employed two forward mutation assays to characterize PrimPol's replication fidelity. We find that PrimPol is a mutagenic polymerase, with a unique error specificity that is highly biased towards insertion-deletion errors. Given the error-prone disposition of PrimPol, we propose a mechanism whereby SSBs greatly restrict the contribution of this enzyme to DNA replication at stalled forks, thus reducing the mutagenic potential of PrimPol during genome replication. PMID:25550423

  11. The fundamental limitations of high-rate gaseous detectors

    SciTech Connect

    Fonte, P.

    1999-06-01

    Future high-luminosity experiments make serious demands on detector technologies and have prompted a chain of inventions of new high-rate gaseous detectors: Microstrip Gas Counters (MSGCs), Microgap Chambers (MGCs), Compteur A Trou (CATs), Micromesh Gas Structures (MICROMEGAS), and Gas Electron Multipliers (GEMs). The authors report results from a systematic study of breakdown mechanisms in these and other gaseous detectors recently chosen or considered as candidates for high-luminosity experiments. It was found that, for all the detectors tested, the maximum gain achievable before breakdown dropped dramatically with rate, sometimes inversely proportionally to it. Further, in the presence of alpha particles, typical of the backgrounds in high-energy experiments, additional gain drops of 1-2 orders of magnitude were observed for some detectors. They discovered that the breakdown in these detectors occurs through a previously unknown mechanism, for which they give a qualitative explanation. They also present possible ways of increasing the maximum achievable detector gain at high rates and have verified these experimentally.

  12. Coal plasticity at high heating rates and temperatures

    SciTech Connect

    Darivakis, G.S.; Peters, W.A.; Howard, J.B.

    1990-01-01

    The broad objective of this project is to obtain an improved, quantitative understanding of the transient plasticity of bituminous coals under high heating rates and other reaction and pretreatment conditions of scientific and practical interest. To these ends, the research plan is to measure the softening and resolidification behavior of two US bituminous coals with a rapid-heating, fast-response, high-temperature coal plastometer previously developed in this laboratory. Specific measurements planned for the project include determinations of apparent viscosity, softening temperature, plastic period, and resolidification time for molten coal: (1) as a function of independent variations in coal type, heating rate, final temperature, gaseous atmosphere (inert, O₂, or H₂), and shear rate; and (2) in exploratory runs where the coal is pretreated (preoxidation, pyridine extraction, metaplast cracking agents) before heating. The intra-coal inventory and molecular weight distribution of pyridine extractables will also be measured using a rapid-quenching, electrical screen-heater coal pyrolysis reactor. The yield of extractables is representative of the intra-coal inventory of plasticizing agent (metaplast) remaining after quenching. Coal plasticity kinetics will then be mathematically modeled from metaplast generation and depletion rates, via a correlation between the viscosity of a suspension and the concentration of deformable medium (here metaplast) in that suspension. Work during this reporting period has been concerned with re-commissioning the rapid-heating-rate plastometer apparatus.

  13. High frame rate photoacoustic imaging using clinical ultrasound system

    NASA Astrophysics Data System (ADS)

    Sivasubramanian, Kathyayini; Pramanik, Manojit

    2016-03-01

    Photoacoustic tomography (PAT) is a hybrid imaging modality that is gaining attention in the field of medical imaging. Typically, a Q-switched Nd:YAG laser is used to excite the tissue and generate photoacoustic signals, but such lasers are not suitable for clinical applications owing to their high cost and large size. Also, their low pulse repetition rate (PRR) of a few tens of hertz prevents them from being used in real-time PAT. So, there is a growing need for an imaging system capable of real-time imaging for various clinical applications. In this work, we use a nanosecond pulsed laser diode as the excitation source and a clinical ultrasound imaging system to obtain photoacoustic images. The excitation laser has a wavelength of ~803 nm and an energy of ~1.4 mJ per pulse. So far, the reported frame rates for photoacoustic imaging reach only a few hundred hertz. We have demonstrated up to 7000 frames per second in photoacoustic (B-mode) imaging and measured the flow rate of a fast-moving object. Phantom experiments were performed to test the fast imaging capability and to measure the flow rate of an ink solution inside a tube. This fast photoacoustic imaging can be used for various clinical applications, including cardiac applications where the blood flow rate is quite high, and other dynamic studies.

  14. High strain rate behavior of pure metals at elevated temperature

    NASA Astrophysics Data System (ADS)

    Testa, Gabriel; Bonora, Nicola; Ruggiero, Andrew; Iannitti, Gianluca; Domenico, Gentile

    2013-06-01

    In many applications and technological processes, such as stamping, forging, and hot working, metals and alloys are subjected to elevated-temperature and high-strain-rate deformation. Characterization tests, such as quasistatic and dynamic tension or compression tests, and validation tests, such as Taylor impact and DTE (dynamic tensile extrusion) tests, provide the experimental base of data for constitutive model validation and material parameter identification. Testing materials at high strain rate and temperature requires dedicated equipment. In this work, both a tensile Hopkinson bar and a light gas gun were modified to allow material testing under sample-controlled temperature conditions. Dynamic tension tests and Taylor impact tests at different temperatures were performed on high-purity copper (99.98%), tungsten (99.95%), and 316L stainless steel. The accuracy of several constitutive models (Johnson and Cook, Zerilli-Armstrong, etc.) in predicting the observed material response was verified by means of extensive finite element analysis (FEA).
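
    Of the models mentioned, Johnson-Cook is the most compact to illustrate. The sketch below evaluates its flow stress with placeholder OFHC-copper-like constants; these are illustrative, not the parameters fitted to the materials tested here.

```python
# Minimal sketch of the Johnson-Cook flow-stress model:
# sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m)
import math

def johnson_cook(eps_p, eps_dot, T, A, B, n, C, m, eps_dot0, T_room, T_melt):
    T_star = (T - T_room) / (T_melt - T_room)        # homologous temperature
    return (A + B * eps_p**n) \
         * (1.0 + C * math.log(eps_dot / eps_dot0)) \
         * (1.0 - T_star**m)

# Hypothetical OFHC-copper-like constants (stresses in MPa):
print(johnson_cook(eps_p=0.2, eps_dot=3000.0, T=500.0,
                   A=90.0, B=292.0, n=0.31, C=0.025, m=1.09,
                   eps_dot0=1.0, T_room=293.0, T_melt=1356.0))
```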

  15. Magnetic Implosion for Novel Strength Measurements at High Strain Rates

    SciTech Connect

    Lee, H.; Preston, D.L.; Bartsch, R.R.; Bowers, R.L.; Holtkamp, D.; Wright, B.L.

    1998-10-19

    Recently Lee and Preston proposed magnetic implosion as a new method for measuring material strength in a regime of large strains and high strain rates inaccessible to previously established techniques. By its shockless nature, this method avoids the intrinsic difficulties associated with an earlier approach using high explosives. The authors illustrate how the stress-strain relation for an imploding liner can be obtained by measuring the velocity and temperature history of its inner surface. They discuss the physical requirements that led them to a composite liner design applicable to different test materials, and also compare code-simulated predictions with the measured data for the high strain-rate experiments conducted recently at LANL. Finally, they present a novel diagnostic scheme that enables removal of the background in the pyrometric measurement through data reduction.

  16. High repetition rate plasma mirror device for attosecond science

    SciTech Connect

    Borot, A.; Douillet, D.; Iaquaniello, G.; Lefrou, T.; Lopez-Martens, R.; Audebert, P.; Geindre, J.-P.

    2014-01-15

    This report describes an active solid target positioning device for driving plasma mirrors with high repetition rate ultra-high intensity lasers. The position of the solid target surface with respect to the laser focus is optically monitored and mechanically controlled on the nm scale to ensure reproducible interaction conditions for each shot at arbitrary repetition rate. We demonstrate the target capabilities by driving high-order harmonic generation from plasma mirrors produced on glass targets with a near-relativistic intensity few-cycle pulse laser system operating at 1 kHz. During experiments, residual target surface motion can be actively stabilized down to 47 nm (root mean square), which ensures sub-300-as relative temporal stability of the plasma mirror as a secondary source of coherent attosecond extreme ultraviolet radiation in pump-probe experiments.

  17. High rate sputter deposition of wear resistant tantalum coatings

    SciTech Connect

    Matson, D.W.; Merz, M.D.; McClanahan, E.D.

    1991-11-01

    The refractory nature and high ductility of body centered cubic (bcc) phase tantalum makes it a suitable material for corrosion- and wear-resistant coatings on surfaces which are subjected to high stresses and harsh chemical and erosive environments. Sputter deposition can produce thick tantalum films but is prone to forming the brittle tetragonal beta phase of this material. Efforts aimed at forming thick bcc phase tantalum coatings in both flat plate and cylindrical geometries by high-rate triode sputtering methods are discussed. In addition to substrate temperature, the bcc-to-beta phase ratio in sputtered tantalum coatings is shown to be sensitive to other substrate surface effects.

  18. High-rate mechanical properties of energetic materials

    NASA Astrophysics Data System (ADS)

    Walley, S. M.; Siviour, C. R.; Drodge, D. R.; Williamson, D. M.

    2010-01-01

    Compared to the many thousands of studies that have been performed on the energy release mechanisms of high energy materials, relatively few studies have been performed (a few hundred) into their mechanical properties. Since it is increasingly desired to model the high rate deformation of such materials, it is of great importance to gather data on their response so that predictive constitutive models can be constructed. This paper reviews the state of the art concerning what is known about the mechanical response of high energy materials. Examples of such materials are polymer bonded explosives (used in munitions), propellants (used to propel rockets), and pyrotechnics (used to initiate munitions and also in flares).

  19. Characterisation of human diaphragm at high strain rate loading.

    PubMed

    Gaur, Piyush; Chawla, Anoop; Verma, Khyati; Mukherjee, Sudipto; Lalvani, Sanjeev; Malhotra, Rajesh; Mayer, Christian

    2016-07-01

    Motor vehicle crashes (MVCs) commonly result in life-threatening thoracic and abdominal injuries. Finite element models are becoming an important tool in analyzing automotive-related injuries to soft tissues, and establishment of accurate material models, including tissue tolerance limits, is critical for accurate injury evaluation. The diaphragm, the most important skeletal muscle for respiration, is a bi-domed structure separating the thoracic cavity from the abdominal cavity. Traumatic rupture of the diaphragm is a potentially serious injury which presents in different forms depending on the mechanism of the causative trauma. A major step toward understanding the mechanism of traumatic rupture of the diaphragm is to characterize the high-rate failure properties of diaphragm tissue. Thus, the main objective of this study was to estimate the mechanical and failure properties of human diaphragm at strain rates associated with blunt thoracic and abdominal trauma. A total of 23 uniaxial tensile tests were performed at strain rates ranging from 0.001 to 200 s⁻¹ in order to characterize the mechanical and failure properties of human diaphragm tissue. Each specimen was tested to failure at one of four strain rates (0.001 s⁻¹, 65 s⁻¹, 130 s⁻¹, and 190 s⁻¹) to investigate strain rate dependency. High-speed video and markers placed on the grippers were used to measure the gripper-to-gripper displacement. Engineering stress reported in the study is calculated as the ratio of measured force to initial cross-sectional area, whereas engineering strain is calculated as the ratio of elongation to the undeformed (gauge) length of the specimen. The results of this study show that diaphragm tissue is rate-dependent, with higher strain rate tests giving higher failure stresses and higher failure strains. The failure stress for all tests ranged from 1.17 MPa to 4.1 MPa and failure strain ranged from 12.15% to 24.62%. PMID:27062242
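    The stress and strain definitions quoted above reduce to two one-line formulas. A minimal Python sketch with hypothetical specimen numbers (only the definitions come from the abstract):

        def engineering_stress_mpa(force_n, initial_area_mm2):
            # Engineering stress = measured force / initial cross-sectional area.
            return force_n / initial_area_mm2

        def engineering_strain(elongation_mm, gauge_length_mm):
            # Engineering strain = elongation / undeformed (gauge) length.
            return elongation_mm / gauge_length_mm

        # Hypothetical specimen: 10 N at failure, 5 mm^2 section, 2 mm stretch on a 10 mm gauge.
        print(engineering_stress_mpa(10.0, 5.0))  # 2.0 MPa
        print(engineering_strain(2.0, 10.0))      # 0.2, i.e. 20%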

  20. Marking Errors: A Simple Strategy

    ERIC Educational Resources Information Center

    Timmons, Theresa Cullen

    1987-01-01

    Indicates that using highlighters to mark errors produced a 76% class improvement in removing comma errors and a 95.5% improvement in removing apostrophe errors. Outlines two teaching procedures, to be followed before introducing this tool to the class, that enable students to remove errors at this effective rate. (JD)

  1. Attenuation and bit error rate for four co-propagating spatially multiplexed optical communication channels of exactly same wavelength in step index multimode fibers

    NASA Astrophysics Data System (ADS)

    Murshid, Syed H.; Chakravarty, Abhijit

    2011-06-01

    Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric, donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with the help of bulk optics or with integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system, including the attenuation and bit error rate for the individual channels.

  2. Bit-Error-Rate Evaluation of Energy-Gap-Induced Super-Resolution Read-Only-Memory Disc with Dual-Layer Structure

    NASA Astrophysics Data System (ADS)

    Yamada, Hirohisa; Hayashi, Tetsuya; Yamamoto, Masaki; Harada, Yasuhiro; Tajima, Hideharu; Maeda, Shigemi; Murakami, Yoshiteru; Takahashi, Akira

    2009-03-01

    Practically available readout characteristics were obtained in a dual-layer energy-gap-induced super-resolution (EG-SR) read-only-memory (ROM) disc with an 80 gigabyte (GB) capacity. One of the dual layers consisted of zinc oxide and titanium films and the other layer consisted of zinc oxide and tantalum films. Bit error rates better than 3.0×10⁻⁴ were obtained with a minimum readout power of approximately 1.6 mW in both layers using a Blu-ray Disc tester with a partial response maximum likelihood (PRML) detection method. The dual-layer disc showed good tolerance to disc tilt and focus offset and also showed good readout cyclability in both layers.

  3. High rate constitutive modeling of aluminium alloy tube

    NASA Astrophysics Data System (ADS)

    Salisbury, C. P.; Worswick, M. J.; Mayer, R.

    2006-08-01

    As the need for fuel-efficient automobiles increases, car designers are investigating light-weight materials for automotive bodies that will reduce the overall automobile weight. Aluminium alloy tube is a desirable material to use in automotive bodies due to its light weight. However, aluminium suffers from lower formability than steel and its energy absorption ability in a crash event after a forming operation is largely unknown. As part of a larger study on the relationship between crashworthiness and forming processes, constitutive models for 3 mm AA5754 aluminium tube were developed. A nominal strain rate of 100/s is often used to characterize overall automobile crash events, whereas strain rates on the order of 1000/s can occur locally. Therefore, tests were performed at quasi-static rates using an Instron test fixture and at strain rates of 500/s to 1500/s using a tensile split Hopkinson bar. High rate testing was then conducted at rates of 500/s, 1000/s and 1500/s at 21°C, 150°C and 300°C. The generated data was then used to determine the constitutive parameters for the Johnson-Cook and Zerilli-Armstrong material models.
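    For reference, the Johnson-Cook model named above writes the flow stress as a product of strain-hardening, strain-rate, and thermal-softening terms. A sketch of the standard form in Python; the parameter values below are placeholders, not the fitted AA5754 constants from the study:

        import math

        def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                                eps_rate_ref=1.0, T_ref=294.0, T_melt=933.0):
            # sigma = (A + B*eps**n) * (1 + C*ln(eps_rate/eps_rate_ref)) * (1 - T*^m),
            # with homologous temperature T* = (T - T_ref) / (T_melt - T_ref).
            T_star = (T - T_ref) / (T_melt - T_ref)
            return ((A + B * eps ** n)
                    * (1.0 + C * math.log(eps_rate / eps_rate_ref))
                    * (1.0 - T_star ** m))

        # Illustrative (not fitted) parameters; stresses in Pa, temperatures in K.
        print(johnson_cook_stress(eps=0.05, eps_rate=1000.0, T=423.0,
                                  A=100e6, B=300e6, n=0.3, C=0.015, m=1.0))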

  4. Highly Variable Rates of Genome Rearrangements between Hemiascomycetous Yeast Lineages

    PubMed Central

    Fischer, Gilles; Rocha, Eduardo P. C; Brunet, Frédéric; Vergassola, Massimo; Dujon, Bernard

    2006-01-01

    Hemiascomycete yeasts cover an evolutionary span comparable to that of the entire phylum of chordates. Since this group currently contains the largest number of complete genome sequences it presents unique opportunities to understand the evolution of genome organization in eukaryotes. We inferred rates of genome instability on all branches of a phylogenetic tree for 11 species and calculated species-specific rates of genome rearrangements. We characterized all inversion events that occurred within synteny blocks between six representatives of the different lineages. We show that the rates of macro- and microrearrangements of gene order are correlated within individual lineages but are highly variable across different lineages. The most unstable genomes correspond to the pathogenic yeasts Candida albicans and Candida glabrata. Chromosomal maps have been intensively shuffled by numerous interchromosomal rearrangements, even between species that have retained a very high physical fraction of their genomes within small synteny blocks. Despite this intensive reshuffling of gene positions, essential genes, which cluster in low recombination regions in the genome of Saccharomyces cerevisiae, tend to remain syntenic during evolution. This work reveals that the high plasticity of eukaryotic genomes results from rearrangement rates that vary between lineages but also at different evolutionary times of a given lineage. PMID:16532063

  5. High strain-rate model for fiber-reinforced composites

    SciTech Connect

    Aidun, J.B.; Addessio, F.L.

    1995-07-01

    Numerical simulations of dynamic uniaxial strain loading of fiber-reinforced composites are presented that illustrate the wide range of deformation mechanisms that can be captured using a micromechanics-based homogenization technique as the material model in existing continuum mechanics computer programs. Enhancements to the material model incorporate high strain-rate plastic response, elastic nonlinearity, and rate-dependent strength degradation due to material damage, fiber debonding, and delamination. These make the model relevant to designing composite structural components for crash safety, armor, and munitions applications.

  6. CW Interference Effects on High Data Rate Transmission Through the ACTS Wideband Channel

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Ngo, Duc H.; Tran, Quang K.; Tran, Diepchi T.; Yu, John; Kachmar, Brian A.; Svoboda, James S.

    1996-01-01

    Satellite communications channels are susceptible to various sources of interference. Wideband channels have a proportionally greater probability of receiving interference than narrowband channels. NASA's Advanced Communications Technology Satellite (ACTS) includes a 900 MHz bandwidth hardlimiting transponder which has provided an opportunity for the study of interference effects of wideband channels. A series of interference tests using two independent ACTS ground terminals measured the effects of continuous-wave (CW) uplink interference on the bit-error rate of a 220 Mbps digitally modulated carrier. These results indicate the susceptibility of high data rate transmissions to CW interference and are compared to results obtained with a laboratory hardware-based system simulation and a computer simulation.

  7. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and, hence, on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations. PMID:26560913
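    The final step described above, averaging a conditional bit-error probability over a log-normal intensity pdf, can be illustrated numerically. The conditional-BER convention and the normalization <I> = 1 below are assumptions for the sketch; the paper's semianalytical expression is not reproduced here:

        import numpy as np
        from scipy import integrate, special

        def ber_ook_lognormal(mean_snr, scint_index):
            # Intensity I is log-normal with <I> = 1 and scintillation index sigma_I^2,
            # so ln I ~ Normal(-s2/2, s2) with s2 = ln(1 + sigma_I^2).
            s2 = np.log(1.0 + scint_index)

            def integrand(x):
                I = np.exp(x)
                pdf = np.exp(-(x + s2 / 2.0) ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
                cond_ber = 0.5 * special.erfc(mean_snr * I / (2.0 * np.sqrt(2.0)))
                return cond_ber * pdf

            value, _ = integrate.quad(integrand, -10.0, 10.0)
            return value

        print(ber_ook_lognormal(mean_snr=10.0, scint_index=0.2))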

  9. Hispanic High School Graduates Pass Whites in Rate of College Enrollment: High School Drop-out Rate at Record Low

    ERIC Educational Resources Information Center

    Fry, Richard; Taylor, Paul

    2013-01-01

    A record seven-in-ten (69%) Hispanic high school graduates in the class of 2012 enrolled in college that fall, two percentage points higher than the rate (67%) among their white counterparts, according to a Pew Research Center analysis of new data from the U.S. Census Bureau. This milestone is the result of a long-term increase in Hispanic…

  10. Vitreous bond CBN high speed and high material removal rate grinding of ceramics

    SciTech Connect

    Shih, A.J.; Grant, M.B.; Yonushonis, T.M.; Morris, T.O.; McSpadden, S.B.

    1998-08-01

    High speed (up to 127 m/s) and high material removal rate (up to 10 mm³/s/mm) grinding experiments using a vitreous bond CBN wheel were conducted to investigate the effects of material removal rate, wheel speed, dwell time and truing speed ratio on cylindrical grinding of silicon nitride and zirconia. Experimental results show that the high grinding wheel surface speed can reduce the effective chip thickness, lower grinding forces, enable high material removal rate grinding and achieve a higher G-ratio. The radial feed rate was increased to as high as 0.34 µm/s for zirconia and 0.25 µm/s for silicon nitride grinding to explore the advantage of using high wheel speed for cost-effective high material removal rate grinding of ceramics.

  11. A high-rate PCI-based telemetry processor system

    NASA Astrophysics Data System (ADS)

    Turri, R.

    2002-07-01

    The high performance reached by satellite on-board telemetry generation and transmission will consequently demand ground facilities with higher processing capabilities at low cost, to allow wide deployment of such ground stations. The equipment normally used is based on complex, proprietary bus and computing architectures that prevent the systems from exploiting the continuous and rapid increase in computing power available on the market. PCI bus systems now allow processing of high-rate data streams in a standard PC system. At the same time, the Windows NT operating system supports multitasking and symmetric multiprocessing, giving the capability to process high-data-rate signals. In addition, high-speed networking, 64-bit PCI-bus technologies, and the increase in processor power and software allow creating a system based on COTS products (which in future may be easily and inexpensively upgraded). In the frame of the EUCLID RTP 9.8 project, a specific work element was dedicated to developing the architecture of a system able to acquire telemetry data at up to 600 Mbps. Laben S.p.A - a Finmeccanica Company -, entrusted with this work, has designed a PCI-based telemetry system making possible the communication between a satellite down-link and a wide area network at the required rate.

  12. High data rate transient sensing using dielectric micro-resonator.

    PubMed

    Ali, Amir R; Ötügen, Volkan; Ioppolo, Tindaro

    2015-08-10

    An approach to high-speed tracking of optical mode shifts of microresonators for wide-bandwidth sensing applications is presented. In the typical microresonator sensor, the whispering gallery optical modes (WGM) are excited by tangentially coupling tunable laser light into the resonator cavity, such as a microsphere. The light coupling is achieved by overlapping the evanescent field of the cavity with that of a prism or the tapered section of a single-mode optical fiber. The transmission spectrum through the fiber is observed to detect WGM shifts as the laser is tuned across a narrow wavelength range. High data rate transient-sensing applications require tuning of the diode laser at high repetition rates and tracking of the WGM shifts. At high repetition rates, thermal inertia prevents appropriate tuning of the laser, leading to smaller tuning ranges and waveform distortions. In the present paper, the laser is tuned using a harmonic (rather than ramp or triangular) waveform, and its output is calibrated at various input frequencies and amplitudes using a Fabry-Perot interferometer to account for the tuning range variations. The WGM shifts are tracked by performing a modified cross-correlation method on the transmission spectra. Force sensor experiments were performed using ramp and harmonic waveform tuning of the diode laser at rates up to 10 kHz. Results show that harmonic tuning of the laser eliminates the high-speed transient thermal effects. The thermal model developed to predict the laser tuning agrees well with the experiments. PMID:26368378

  13. High Pressure Burn Rate Measurements on an Ammonium Perchlorate Propellant

    SciTech Connect

    Glascoe, E A; Tan, N

    2010-04-21

    High pressure deflagration rate measurements of a unique ammonium perchlorate (AP) based propellant are required to design the base burn motor for a Raytheon weapon system. The results of these deflagration rate measurements will be key in assessing safety and performance of the system. In particular, the system may experience transient pressures on the order of 100's of MPa (10's of kPSI). Previous studies on similar AP based materials demonstrate that low pressure (e.g. P < 10 MPa or 1500 PSI) burn rates can be quite different from the elevated pressure deflagration rate measurements (see References and HPP results discussed herein), hence elevated pressure measurements are necessary in order to understand the deflagration behavior under relevant conditions. Previous work on explosives has shown that at 100's of MPa some explosives will transition from a laminar burn mechanism to a convective burn mechanism in a process termed deconsolidative burning, with resulting burn rates that are orders of magnitude faster than the laminar burn rates. Materials that transition to the deconsolidative-convective burn mechanism at elevated pressures have been shown to be considerably more violent in confined heating experiments (i.e. cook-off scenarios). The mechanisms of propellant and explosive deflagration are extremely complex and include both chemical and mechanical processes, hence predicting the behavior and rate of a novel material or formulation is difficult if not impossible. In this work, the AP/HTPB based material, TAL-1503 (B-2049), was burned in a constant volume apparatus in argon up to 300 MPa (ca. 44 kPSI). The burn rate and pressure were measured in situ and used to calculate a pressure dependent burn rate. In general, the material appears to burn in a laminar fashion at these elevated pressures. The experiment was reproduced multiple times and the burn rate law using the best data is B = (0.6 ± 0.1) × P^(1.05 ± 0.02), where B is the burn rate in mm/s and
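    The reported burn rate law is straightforward to evaluate. A minimal sketch, assuming (as the text implies) P in MPa and B in mm/s, using the central values of the fitted constants:

        def burn_rate_mm_per_s(pressure_mpa, a=0.6, n=1.05):
            # B = a * P**n with a = 0.6 +/- 0.1 and n = 1.05 +/- 0.02 (central values).
            return a * pressure_mpa ** n

        print(burn_rate_mm_per_s(300.0))  # ~239 mm/s at the 300 MPa upper test pressure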

  14. High data rate recording: Moving to 2 Gbit/s

    NASA Astrophysics Data System (ADS)

    Taratorin, A.; Yuan, S.; Nikitin, V.

    2003-05-01

    High data rate recording can be achieved using fast write drivers and fast heads. Advanced short-yoke write heads and write drivers with 450 ps rise time and programmable current overshoot were used to study recording at data rates up to 2 Gbit/s. The head flux rise time causes shifts of recorded transitions. It is well known that current overshoot helps to overcome bandwidth limitations in the write driver, interconnects, and write head. However, excessive overshoot may cause pattern-dependent transition shifts and significant distortions of recorded transitions. We present the data rate performance of short-yoke recording heads, analysis of nonlinear pattern-dependent distortions, and optimization of the write current wave form in the 1-2 Gbit/s range. Simple dibit and tribit patterns were recorded at 2 Gbit/s. Low-distortion recording for arbitrary data patterns was demonstrated at 1.6 Gbit/s after optimization of write current overshoot.

  15. Sample size and sampling errors as the source of dispersion in chemical analyses. [for high-Ti lunar basalt

    NASA Technical Reports Server (NTRS)

    Clanton, U. S.; Fletcher, C. R.

    1976-01-01

    The paper describes a Monte Carlo model for simulation of two-dimensional representations of thin sections of some of the more common igneous rock textures. These representations are extrapolated to three dimensions to develop a volume of 'rock'. The model (here applied to a medium-grained high-Ti basalt) can be used to determine a statistically significant sample for a lunar rock or to predict the probable errors in the oxide contents that can occur during the analysis of a sample that is not representative of the parent rock.
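    The core idea, that analytical scatter can arise purely from sampling too few grains, can be illustrated with a toy Monte Carlo. Everything below (the grain population and oxide values) is hypothetical and far simpler than the paper's texture model:

        import random

        def sampled_oxide_stats(grain_oxide_pcts, grains_per_sample, trials=1000):
            # Draw random grain sets from a simulated "rock", compute each sample's
            # mean oxide content, and return the mean and spread across samples.
            means = []
            for _ in range(trials):
                sample = random.choices(grain_oxide_pcts, k=grains_per_sample)
                means.append(sum(sample) / len(sample))
            mu = sum(means) / trials
            sd = (sum((m - mu) ** 2 for m in means) / trials) ** 0.5
            return mu, sd

        # Toy rock: 60% of grains at 12% TiO2, 40% at 2% TiO2.
        grains = [12.0] * 60 + [2.0] * 40
        for n in (10, 100, 1000):
            print(n, sampled_oxide_stats(grains, n))  # spread shrinks as n grows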

  16. Dynamic High-Temperature Characterization of an Iridium Alloy in Compression at High Strain Rates

    SciTech Connect

    Song, Bo; Nelson, Kevin; Lipinski, Ronald J.; Bignell, John L.; Ulrich, G. B.; George, E. P.

    2014-06-01

    Iridium alloys have superior strength and ductility at elevated temperatures, making them useful as structural materials for certain high-temperature applications. However, experimental data on their high-temperature high-strain-rate performance are needed for understanding high-speed impacts in severe elevated-temperature environments. Kolsky bars (also called split Hopkinson bars) have been extensively employed for high-strain-rate characterization of materials at room temperature, but it has been challenging to adapt them for the measurement of dynamic properties at high temperatures. Current high-temperature Kolsky compression bar techniques are not capable of obtaining satisfactory high-temperature high-strain-rate stress-strain response of the thin iridium specimens investigated in this study. We analyzed the difficulties encountered in high-temperature Kolsky compression bar testing of thin iridium alloy specimens. Appropriate modifications were made to the current high-temperature Kolsky compression bar technique to obtain reliable compressive stress-strain response of an iridium alloy at high strain rates (300–10000 s⁻¹) and temperatures (750°C and 1030°C). Uncertainties in such high-temperature high-strain-rate experiments on thin iridium specimens were also analyzed. The compressive stress-strain response of the iridium alloy showed significant sensitivity to strain rate and temperature.

  17. Soft Error Vulnerability of Iterative Linear Algebra Methods

    SciTech Connect

    Bronevetsky, G; de Supinski, B

    2008-01-19

    Devices are increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft error rates were significant primarily in space and high-atmospheric computing. Modern architectures now use features so small at sufficiently low voltages that soft errors are becoming important even at terrestrial altitudes. Due to their large number of components, supercomputers are particularly susceptible to soft errors. Since many large-scale parallel scientific applications use iterative linear algebra methods, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. Many users consider these methods invulnerable to most soft errors since they converge from an imprecise solution to a precise one. However, we show in this paper that iterative methods are vulnerable to soft errors, exhibiting both silent data corruptions and poor ability to detect errors. Further, we evaluate a variety of soft error detection and tolerance techniques, including checkpointing, linear matrix encodings, and residual tracking techniques.
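    One of the techniques evaluated, residual tracking, can be sketched for conjugate gradients: periodically recompute the true residual b - Ax and flag runs where it diverges from the cheap recurrence residual. This is an illustrative sketch, not the paper's implementation; the check interval and jump threshold are assumptions:

        import numpy as np

        def cg_with_residual_check(A, b, tol=1e-8, max_iter=1000, jump_factor=10.0):
            x = np.zeros(len(b))
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for k in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap          # cheap recurrence residual
                rs_new = r @ r
                if k % 10 == 0:          # periodic true-residual check
                    true_norm = np.linalg.norm(b - A @ x)
                    if true_norm > jump_factor * np.sqrt(rs_new) + tol:
                        raise RuntimeError(f"possible silent corruption at iteration {k}")
                if np.sqrt(rs_new) < tol:
                    return x
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x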

  18. Over-the-air demonstration of spatial multiplexing at high data rates using real-time base-band processing

    NASA Astrophysics Data System (ADS)

    Jungnickel, V.; Haustein, T.; Forck, A.; Krueger, U.; Pohl, V.; von Helmolt, C.

    2004-05-01

    Over-the-air transmission experiments with a realtime MIMO test-bed are reported. We describe in principle a hardware architecture for spatial multiplexing at high data rates, discuss in detail the implementation on a hybrid FPGA/DSP platform and show measured bit error rates from indoor transmission experiments. Per-antenna rate control and joint transmission are enabled as well using an ideal feed-back link. A functional test of these new techniques is described while detailed transmission experiments are still ongoing.

  19. High Rate Proton Irradiation of 15 mm Muon Drift Tubes

    NASA Astrophysics Data System (ADS)

    Zibell, A.; Biebel, O.; Hertenberger, R.; Ruschke, A.; Schmitt, Ch.; Kroha, H.; Bittner, B.; Schwegler, P.; Dubbert, J.; Ott, S.

    2012-08-01

    Future LHC luminosity upgrades will significantly increase the amount of background hits from photons, neutrons and protons in the detectors of the ATLAS muon spectrometer. At the proposed LHC peak luminosity of 5·10³⁴ cm⁻²s⁻¹, background hit rates of more than 10 kHz/cm² are expected in the innermost forward region, leading to a loss of performance of the current tracking chambers. Based on the ATLAS Monitored Drift Tube chambers, a new high-rate-capable drift tube detector using tubes with a reduced diameter of 15 mm was developed. To test the response to highly ionizing particles, a prototype chamber of 46 drift tubes with 15 mm diameter was irradiated with a 20 MeV proton beam at the tandem accelerator of the Maier-Leibnitz Laboratory, Munich. Three tubes in a planar layer were irradiated while all other tubes were used for reconstruction of cosmic muon tracks through irradiated and non-irradiated parts of the chamber. To determine the rate capability of the 15 mm drift tubes we investigated the effect of the proton hit rate on pulse height, efficiency and spatial resolution of the cosmic muon signals.

  20. High rates of evolution preceded the origin of birds.

    PubMed

    Puttick, Mark N; Thomas, Gavin H; Benton, Michael J

    2014-05-01

    The origin of birds (Aves) is one of the great evolutionary transitions. Fossils show that many unique morphological features of modern birds, such as feathers, reduction in body size, and the semilunate carpal, long preceded the origin of clade Aves, but some may be unique to Aves, such as relative elongation of the forelimb. We study the evolution of body size and forelimb length across the phylogeny of coelurosaurian theropods and Mesozoic Aves. Using recently developed phylogenetic comparative methods, we find an increase in rates of body size and body size dependent forelimb evolution leading to small body size relative to forelimb length in Paraves, the wider clade comprising Aves and Deinonychosauria. The high evolutionary rates arose primarily from a reduction in body size, as there were no increased rates of forelimb evolution. In line with a recent study, we find evidence that Aves appear to have a unique relationship between body size and forelimb dimensions. Traits associated with Aves evolved before their origin, at high rates, and support the notion that numerous lineages of paravians were experimenting with different modes of flight through the Late Jurassic and Early Cretaceous.

  1. High frame rate measurements of semiconductor pixel detector readout IC

    NASA Astrophysics Data System (ADS)

    Szczygiel, R.; Grybos, P.; Maj, P.

    2012-07-01

    We report on high count rate and high frame rate measurements of a prototype IC named FPDR90, designed for readouts of hybrid pixel semiconductor detectors used for X-ray imaging applications. The FPDR90 is constructed in 90 nm CMOS technology and has dimensions of 4 mm×4 mm. Its main part is a matrix of 40×32 pixels with 100 μm×100 μm pixel size. The chip works in the single photon counting mode with two discriminators and two 16-bit ripple counters per pixel. The count rate per pixel depends on the effective CSA feedback resistance and can be set up to 6 Mcps. The FPDR90 can operate in the continuous readout mode, with zero dead time. Due to the architecture of digital blocks in pixel, one can select the number of bits read out from each counter from 1 to 16. Because in the FPDR90 prototype only one data output is available, the frame rate is 9 kfps and 72 kfps for 16 bits and 1 bit readout, respectively (with nominal clock frequency of 200 MHz).

  2. High rates of organic carbon burial in fjord sediments globally

    NASA Astrophysics Data System (ADS)

    Smith, Richard W.; Bianchi, Thomas S.; Allison, Mead; Savage, Candida; Galy, Valier

    2015-06-01

    The deposition and long-term burial of organic carbon in marine sediments has played a key role in controlling atmospheric O2 and CO2 concentrations over the past 500 million years. Marine carbon burial represents the dominant natural mechanism of long-term organic carbon sequestration. Fjords--deep, glacially carved estuaries at high latitudes--have been hypothesized to be hotspots of organic carbon burial, because they receive high rates of organic material fluxes from the watershed. Here we compile organic carbon concentrations from 573 fjord surface sediment samples and 124 sediment cores from nearly all fjord systems globally. We use sediment organic carbon content and sediment delivery rates to calculate rates of organic carbon burial in fjord systems across the globe. We estimate that about 18 Mt of organic carbon are buried in fjord sediments each year, equivalent to 11% of annual marine carbon burial globally. Per unit area, fjord organic carbon burial rates are one hundred times as large as the global ocean average, and fjord sediments contain twice as much organic carbon as biogenous sediments underlying the upwelling regions of the ocean. We conclude that fjords may play an important role in climate regulation on glacial-interglacial timescales.
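    The headline numbers allow a simple cross-check: if 18 Mt yr⁻¹ is 11% of global marine organic carbon burial, the implied global total is roughly 164 Mt yr⁻¹. The two-line computation (values taken from the abstract):

        fjord_burial_mt = 18.0   # Mt organic carbon buried in fjords per year
        fjord_share = 0.11       # fjords' share of global marine carbon burial
        print(round(fjord_burial_mt / fjord_share))  # ~164 Mt/yr implied global total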

  3. Failure Rate Data Analysis for High Technology Components

    SciTech Connect

    L. C. Cadwallader

    2007-07-01

    Understanding component reliability helps designers create more robust future designs and supports efficient and cost-effective operations of existing machines. The accelerator community can leverage the commonality of its high-vacuum and high-power systems with those of the magnetic fusion community to gain access to a larger database of reliability data. Reliability studies performed under the auspices of the International Energy Agency are the result of an international working group, which has generated a component failure rate database for fusion experiment components. The initial database work harvested published data and now analyzes operating experience data. This paper discusses the usefulness of reliability data, describes the failure rate data collection and analysis effort, discusses reliability for components with scarce data, and points out some of the intersections between magnetic fusion experiments and accelerators.

  4. The use of high-resolution atmospheric simulations over mountainous terrain for deriving error correction functions of satellite precipitation products

    NASA Astrophysics Data System (ADS)

    Bartsotas, Nikolaos S.; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Kallos, George

    2015-04-01

    Mountainous regions account for a significant part of the Earth's surface. Such areas are persistently affected by heavy precipitation episodes, which induce flash floods and landslides. Because in-situ observations are inadequate, remote sensing rainfall estimates have become central to analyses of these events; in many mountainous regions worldwide they serve as the only available data source. However, well-known issues of remote sensing techniques over mountainous areas, such as the strong underestimation of precipitation associated with low-level orographic enhancement, limit the way these estimates can accommodate operational needs. Even locations that fall within the range of weather radars suffer from strong biases in precipitation estimates due to terrain blockage and vertical rainfall profile issues. A novel approach towards the reduction of error in quantitative precipitation estimates lies upon the utilization of high-resolution numerical simulations to derive error correction functions for corresponding satellite precipitation data. The correction functions examined consist of (1) mean field bias adjustment and (2) pdf matching, two procedures that are simple and have been widely used in gauge-based adjustment techniques. For the needs of this study, more than 15 selected storms over the mountainous Upper Adige region of Northern Italy were simulated at 1-km resolution with a state-of-the-art atmospheric model (RAMS/ICLAMS), benefiting from the explicit cloud microphysical scheme, prognostic treatment of natural pollutants such as dust and sea salt, and the detailed SRTM90 topography implemented in the model. The proposed error correction approach is applied to three quasi-global and widely used satellite precipitation datasets (CMORPH, TRMM 3B42 V7 and PERSIANN) and the evaluation of the error model is based on independent in situ precipitation measurements from a dense rain gauge network (1 gauge / 70 km2
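    The two correction functions named above are standard and compact. A sketch of both in Python, assuming paired satellite and reference (model-simulated) rain arrays; the function names and interfaces are mine, not the paper's:

        import numpy as np

        def mean_field_bias_adjust(satellite, reference):
            # Scale the satellite field by the ratio of mean reference rain
            # to mean satellite rain over the common sample.
            return satellite * (reference.mean() / satellite.mean())

        def pdf_match(satellite, reference):
            # Map each satellite value to the reference value at the same
            # empirical non-exceedance probability (quantile matching).
            sat_sorted = np.sort(satellite)
            ref_sorted = np.sort(reference)
            probs = np.searchsorted(sat_sorted, satellite, side="right") / satellite.size
            return np.interp(probs, np.linspace(0.0, 1.0, ref_sorted.size), ref_sorted)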

  5. Pre-Compensation for Continuous-Path Running Trajectory Error in High-Speed Machining of Parts with Varied Curvature Features

    NASA Astrophysics Data System (ADS)

    Jia, Zhenyuan; Song, Dening; Ma, Jianwei; Gao, Yuanyuan

    2016-04-01

    Parts with varied curvature features play increasingly critical roles in engineering, and are often machined under high-speed continuous-path running mode to ensure the machining efficiency. However, the continuous-path running trajectory error is significant during high-feed-speed machining, which seriously restricts the machining precision for such parts with varied curvature features. In order to reduce the continuous-path running trajectory error without sacrificing the machining efficiency, a pre-compensation method for the trajectory error is proposed. Based on the formation mechanism of the continuous-path running trajectory error analyzed, this error is estimated in advance by approximating the desired toolpath with spline curves. Then, an iterative error pre-compensation method is presented. By machining with the regenerated toolpath after pre-compensation instead of the uncompensated toolpath, the continuous-path running trajectory error can be effectively decreased without the reduction of the feed speed. To demonstrate the feasibility of the proposed pre-compensation method, a heart curve toolpath that possesses varied curvature features is employed. Experimental results indicate that compared with the uncompensated processing trajectory, the maximum and average machining errors for the pre-compensated processing trajectory are reduced by 67.19% and 82.30%, respectively. An easy to implement solution for high efficiency and high precision machining of the parts with varied curvature features is provided.
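    The iterative pre-compensation described above amounts to a short fixed-point loop: predict the executed trajectory, subtract the predicted error from the commanded path, and repeat. The sketch below uses a toy smoothing "machine" as a stand-in for the paper's spline-based error estimator; the interface and the toy dynamics are assumptions:

        import numpy as np

        def precompensate(desired, predict_actual, iterations=3):
            # Iteratively shift the commanded path opposite to the predicted error.
            command = desired.copy()
            for _ in range(iterations):
                error = predict_actual(command) - desired
                command -= error
            return command

        def toy_machine(path):
            # Toy dynamics: the "machine" smooths sharp features (moving average).
            return np.convolve(path, [0.25, 0.5, 0.25], mode="same")

        desired = np.sin(np.linspace(0.0, np.pi, 50))
        command = precompensate(desired, toy_machine)
        print(np.abs(toy_machine(desired) - desired).max())  # uncompensated error
        print(np.abs(toy_machine(command) - desired).max())  # reduced after compensation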

  6. High dose rate (HDR) brachytherapy quality assurance: a practical guide

    PubMed Central

    Wilkinson, DA

    2006-01-01

    The widespread adoption of high dose rate brachytherapy, with its inherent dangers, necessitates appropriate quality assurance measures to minimize risks to both patients and medical staff. This paper is aimed at assisting someone who is establishing a new program, or revising one already in place, to adhere to the recently issued Nuclear Regulatory Commission (USA) regulations and the guidelines from the American Association of Physicists in Medicine. PMID:21614233

  7. The incidence of diagnostic error in medicine.

    PubMed

    Graber, Mark L

    2013-10-01

    A wide variety of research studies suggest that breakdowns in the diagnostic process result in a staggering toll of harm and patient deaths. These include autopsy studies, case reviews, surveys of patient and physicians, voluntary reporting systems, using standardised patients, second reviews, diagnostic testing audits and closed claims reviews. Although these different approaches provide important information and unique insights regarding diagnostic errors, each has limitations and none is well suited to establishing the incidence of diagnostic error in actual practice, or the aggregate rate of error and harm. We argue that being able to measure the incidence of diagnostic error is essential to enable research studies on diagnostic error, and to initiate quality improvement projects aimed at reducing the risk of error and harm. Three approaches appear most promising in this regard: (1) using 'trigger tools' to identify from electronic health records cases at high risk for diagnostic error; (2) using standardised patients (secret shoppers) to study the rate of error in practice; (3) encouraging both patients and physicians to voluntarily report errors they encounter, and facilitating this process. PMID:23771902

  8. Electrochemical cell with high discharge/charge rate capability

    DOEpatents

    Redey, Laszlo

    1988-01-01

    A fully charged positive electrode composition for an electrochemical cell includes FeS.sub.2 and NiS.sub.2 in about equal molar amounts along with about 2-20 mole percent of the reaction product Li.sub.2 S. Through selection of appropriate electrolyte compositions, high power output or low operating temperatures can be obtained. The cell includes a substantially constant electrode impedance through most of its charge and discharge range. Exceptionally high discharge rates and overcharge protection are obtainable through use of the inventive electrode composition.

  9. Semi-solid electrodes having high rate capability

    DOEpatents

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2016-06-07

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode and a semi-solid cathode. The semi-solid cathode includes a suspension of an active material of about 35% to about 75% by volume of an active material and about 0.5% to about 8% by volume of a conductive material in a non-aqueous liquid electrolyte. An ion-permeable membrane is disposed between the anode and the semi-solid cathode. The semi-solid cathode has a thickness of about 250 .mu.m to about 2,000 .mu.m, and the electrochemical cell has an area specific capacity of at least about 7 mAh/cm.sup.2 at a C-rate of C/4. In some embodiments, the semi-solid cathode slurry has a mixing index of at least about 0.9.

  10. High-rate electrochemical energy storage through Li+ intercalation pseudocapacitance.

    PubMed

    Augustyn, Veronica; Come, Jérémy; Lowe, Michael A; Kim, Jong Woung; Taberna, Pierre-Louis; Tolbert, Sarah H; Abruña, Héctor D; Simon, Patrice; Dunn, Bruce

    2013-06-01

    Pseudocapacitance is commonly associated with surface or near-surface reversible redox reactions, as observed with RuO2·xH2O in an acidic electrolyte. However, we recently demonstrated that a pseudocapacitive mechanism occurs when lithium ions are inserted into mesoporous and nanocrystal films of orthorhombic Nb2O5 (T-Nb2O5; refs 1,2). Here, we quantify the kinetics of charge storage in T-Nb2O5: currents that vary inversely with time, charge-storage capacity that is mostly independent of rate, and redox peaks that exhibit small voltage offsets even at high rates. We also define the structural characteristics necessary for this process, termed intercalation pseudocapacitance, which are a crystalline network that offers two-dimensional transport pathways and little structural change on intercalation. The principal benefit realized from intercalation pseudocapacitance is that high levels of charge storage are achieved within short periods of time because there are no limitations from solid-state diffusion. Thick electrodes (up to 40 μm thick) prepared with T-Nb2O5 offer the promise of exploiting intercalation pseudocapacitance to obtain high-rate charge-storage devices.

  11. High rate data systems. [for High Resolution Imaging Spectrometer and SAR

    NASA Technical Reports Server (NTRS)

    Miller, Richard B.; Nichols, David A.

    1987-01-01

    The characteristics of the high resolution imaging spectrometer (HIRIS) and the synthetic aperture radar (SAR) are described with consideration given to the source of their high data rates. A functional-level description of the end-to-end data flow for HIRIS and SAR is provided. Attention is also given to major technological challenges that must be met in achieving an implementation of the system. Management issues associated with high rate, high volume data are also discussed.

  12. Calibration of high flow rate thoracic-size selective samplers

    PubMed Central

    Lee, Taekhee; Thorpe, Andrew; Cauda, Emanuele; Harper, Martin

    2016-01-01

    High flow rate respirable size selective samplers, GK4.126 and FSP10 cyclones, were calibrated for thoracic-size selective sampling in two different laboratories. The National Institute for Occupational Safety and Health (NIOSH) utilized monodisperse ammonium fluorescein particles and scanning electron microscopy to determine the aerodynamic particle size of the monodisperse aerosol. Fluorescein intensity was measured to determine sampling efficiencies of the cyclones. The Health and Safety Laboratory (HSL) utilized a real-time particle sizing instrument (Aerodynamic Particle Sizer) with polydisperse glass sphere particles, and particle size distributions between the cyclone and a reference sampler were compared. Sampling efficiencies of the cyclones were compared to the thoracic convention defined by the American Conference of Governmental Industrial Hygienists (ACGIH)/Comité Européen de Normalisation (CEN)/International Standards Organization (ISO). The GK4.126 cyclone showed minimum bias compared to the thoracic convention at flow rates of 3.5 l min−1 (NIOSH) and 2.7–3.3 l min−1 (HSL); the difference may stem from the use of different test systems. In order to collect the most dust and reduce the limit of detection, HSL suggested using the upper end of the range (3.3 l min−1). A flow rate of 3.4 l min−1 would be a reasonable compromise, pending confirmation in other laboratories. The FSP10 cyclone showed minimum bias at a flow rate of 4.0 l min−1 in the NIOSH laboratory test. The high flow rate thoracic-size selective samplers might be used for higher sample mass collection in order to meet analytical limits of quantification. PMID:26891196
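    The target curve referenced here, the ACGIH/CEN/ISO thoracic convention, is commonly written as the inhalable convention multiplied by the complement of a cumulative lognormal with median 11.64 µm and GSD 1.5. A sketch under that assumption (consult ISO 7708 for the authoritative definition):

        import math

        def inhalable_fraction(d_um):
            # Inhalable convention for aerodynamic diameter d (micrometres, d <= 100).
            return 0.5 * (1.0 + math.exp(-0.06 * d_um))

        def thoracic_fraction(d_um, median_um=11.64, gsd=1.5):
            # Thoracic convention = inhalable fraction x (1 - lognormal CDF).
            z = math.log(d_um / median_um) / math.log(gsd)
            cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
            return inhalable_fraction(d_um) * (1.0 - cdf)

        for d in (1.0, 5.0, 10.0, 15.0, 25.0):
            print(d, round(thoracic_fraction(d), 3))  # ~0.5 near 10 um, by design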

  14. High Strain Rate Compression Testing of Ceramics and Ceramic Composites.

    SciTech Connect

    Blumenthal, W. R.

    2005-01-01

    The compressive deformation and failure behavior of ceramics and ceramic-metal composites for armor applications has been studied as a function of strain rate at Los Alamos National Laboratory since the late 1980s. High strain rate (≈10³ s⁻¹) uniaxial compression loading can be achieved using the Kolsky-split-Hopkinson pressure bar (SHPB) technique, but special methods must be used to obtain valid strength results. This paper reviews these methods and the limitations of the Kolsky-SHPB technique for this class of materials. The Kolsky-split-Hopkinson pressure bar (Kolsky-SHPB) technique was originally developed to characterize the mechanical behavior of ductile materials such as metals and polymers where the results can be used to develop strain-rate and temperature-dependent constitutive behavior models that empirically describe macroscopic plastic flow. The flow behavior of metals and polymers is generally controlled by thermally-activated and rate-dependent dislocation motion or polymer chain motion in response to shear stresses. Conversely, the macroscopic mechanical behavior of dense, brittle, ceramic-based materials is dominated by elastic deformation terminated by rapid failure associated with the propagation of defects in the material in response to resolved tensile stresses. This behavior is usually characterized by a distribution of macroscopically measured failure strengths and strains. The basis for any strain-rate dependence observed in the failure strength must originate from rate-dependence in the damage and fracture process, since uniform, uniaxial elastic behavior is rate-independent (e.g. inertial effects on crack growth). The study of microscopic damage and fracture processes and their rate-dependence under dynamic loading conditions is a difficult experimental challenge that is not addressed in this paper. The purpose of this paper is to review the methods that have been developed at the Los Alamos National Laboratory to

  15. [Medical errors in obstetrics].

    PubMed

    Marek, Z

    1984-08-01

    Errors in medicine may fall into 3 main categories: 1) medical errors made only by physicians, 2) technical errors made by physicians and other health care specialists, and 3) organizational errors associated with mismanagement of medical facilities. This classification of medical errors, as well as the definition and treatment of them, fully applies to obstetrics. However, the difference between obstetrics and other fields of medicine stems from the fact that an obstetrician usually deals with healthy women. Conversely, professional risk in obstetrics is very high, as errors and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical errors occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. Therefore, the obstetrician should be prepared to use intensive care procedures similar to those used for resuscitation.

  16. Method for generating high-energy and high repetition rate laser pulses from CW amplifiers

    DOEpatents

    Zhang, Shukui

    2013-06-18

    A method for obtaining high-energy, high repetition rate laser pulses simultaneously using continuous wave (CW) amplifiers is described. The method provides for generating micro-joule level energy in picosecond laser pulses at megahertz repetition rates.
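    The scales quoted, micro-joule pulses at megahertz repetition rates, imply watt-level average power but megawatt-level peak power, which is what makes extraction from CW amplifiers notable. The arithmetic, with illustrative round numbers:

        pulse_energy_j = 1e-6   # micro-joule level
        rep_rate_hz = 1e6       # megahertz repetition rate
        pulse_width_s = 1e-12   # picosecond pulses
        print(pulse_energy_j * rep_rate_hz, "W average")   # 1.0 W
        print(pulse_energy_j / pulse_width_s, "W peak")    # 1e6 W = 1 MW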

  17. Scale dependence of rock friction at high work rate.

    PubMed

    Yamashita, Futoshi; Fukuyama, Eiichi; Mizoguchi, Kazuo; Takizawa, Shigeru; Xu, Shiqing; Kawakata, Hironori

    2015-12-10

    Determination of the frictional properties of rocks is crucial for an understanding of earthquake mechanics, because most earthquakes are caused by frictional sliding along faults. Prior studies using rotary shear apparatus revealed a marked decrease in frictional strength, which can cause a large stress drop and strong shaking, with increasing slip rate and increasing work rate. (The mechanical work rate per unit area equals the product of the shear stress and the slip rate.) However, those important findings were obtained in experiments using rock specimens with dimensions of only several centimetres, which are much smaller than the dimensions of a natural fault (of the order of 1,000 metres). Here we use a large-scale biaxial friction apparatus with metre-sized rock specimens to investigate scale-dependent rock friction. The experiments show that rock friction in metre-sized rock specimens starts to decrease at a work rate that is one order of magnitude smaller than that in centimetre-sized rock specimens. Mechanical, visual and material observations suggest that slip-evolved stress heterogeneity on the fault accounts for the difference. On the basis of these observations, we propose that stress-concentrated areas exist in which frictional slip produces more wear materials (gouge) than in areas outside, resulting in further stress concentrations at these areas. Shear stress on the fault is primarily sustained by stress-concentrated areas that undergo a high work rate, so those areas should weaken rapidly and cause the macroscopic frictional strength to decrease abruptly. To verify this idea, we conducted numerical simulations assuming that local friction follows the frictional properties observed on centimetre-sized rock specimens. The simulations reproduced the macroscopic frictional properties observed on the metre-sized rock specimens. Given that localized stress concentrations commonly occur naturally, our results suggest that a natural fault may lose its
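    The parenthetical definition of work rate is worth making explicit, since the scale effect is framed in those terms. A minimal sketch with hypothetical numbers:

        def work_rate_mw_per_m2(shear_stress_mpa, slip_rate_m_per_s):
            # Work rate per unit area = shear stress x slip rate; MPa x m/s = MW/m^2.
            return shear_stress_mpa * slip_rate_m_per_s

        print(work_rate_mw_per_m2(1.0, 0.01))  # 0.01 MW/m^2 at 1 MPa and 10 mm/s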

  19. Pedalling rate affects endurance performance during high-intensity cycling.

    PubMed

    Nielsen, Jens Steen; Hansen, Ernst Albin; Sjøgaard, Gisela

    2004-06-01

The purpose of this study of high-intensity cycling was to: (1) test the hypothesis that endurance time is longest at a freely chosen pedalling rate (FCPR), compared to pedalling rates 25% lower (FCPR-25) and higher (FCPR+25) than FCPR, and (2) investigate how physiological variables, such as muscle fibre type composition and power reserve, relate to endurance time. Twenty males underwent testing to determine their maximal oxygen uptake (VO(2max)), power output corresponding to 90% of VO(2max) at 80 rpm (W90), FCPR at W90, percentage of slow twitch muscle fibres (% MHC I), maximal leg power, and endurance time at W90 with FCPR-25, FCPR, and FCPR+25. Power reserve was calculated as the difference between applied power output at a given pedalling rate and peak crank power at this same pedalling rate. W90 was 325 (47) W. FCPR at W90 was 78 (11) rpm, resulting in FCPR-25 being 59 (8) rpm and FCPR+25 being 98 (13) rpm. Endurance time at W90(FCPR+25) [441 (188) s] was significantly shorter than at W90(FCPR) [589 (232) s] and W90(FCPR-25) [547 (170) s]. Metabolic responses such as VO(2) and blood lactate concentration were generally higher at W90(FCPR+25) than at W90(FCPR-25) and W90(FCPR). Endurance time was negatively related to VO(2max), W90 and % MHC I, while positively related to power reserve. In conclusion, at group level, endurance time was longer at FCPR and at a pedalling rate 25% lower compared to a pedalling rate 25% higher than FCPR. Further, inter-individual physiological variables were of significance for endurance time, % MHC I showing a negative and power reserve a positive relationship.
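
    Restating the verbal definition of power reserve as an equation (the symbol f for pedalling rate is an assumption, not the paper's notation):

        P_{reserve}(f) = P_{peak}(f) - P_{applied}(f)

    so for a fixed applied power output (here W90), the reserve shrinks as the pedalling rate moves away from the rate that maximizes peak crank power, consistent with the positive relationship between power reserve and endurance time reported above.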

  20. Brachytherapy for early oral tongue cancer: low dose rate to high dose rate.

    PubMed

    Yamazaki, Hideya; Inoue, Takehiro; Yoshida, Ken; Yoshioka, Yasuo; Furukawa, Souhei; Kakimoto, Naoya; Shimizutani, Kimishige; Inoue, Toshihiko

    2003-03-01

To examine the compatibility of low dose rate (LDR) with high dose rate (HDR) brachytherapy, we reviewed 399 patients with early oral tongue cancer (T1-2N0M0) treated solely by brachytherapy at Osaka University Hospital between 1967 and 1999. For patients in the LDR group (n = 341), the treatment sources consisted of Ir-192 pin for 227 patients (1973-1996; irradiated dose, 61-85 Gy; median, 70 Gy) and Ra-226 needle for 113 patients (1967-1986; 55-93 Gy; median, 70 Gy). Ra-226 and Ir-192 were combined for one patient. Ir-192 HDR (microSelectron-HDR) was used for 58 patients in the HDR group (1991-present; 48-60 Gy; median, 60 Gy). LDR implantations were performed via an oral approach and HDR via a submental/submandibular approach. The dose rates at the reference point for the LDR group were 0.30 to 0.8 Gy/h, and for the HDR group 1.0 to 3.4 Gy/min. The patients in the HDR group received a total dose of 48-60 Gy (8-10 fractions) during one week. Two fractions were administered per day (at least a 6-h interval). The 3- and 5-year local control rates for patients in the LDR group were 85% and 80%, respectively, and those in the HDR group were both 84%. HDR brachytherapy showed the same lymph-node control rate as did LDR brachytherapy (67% at 5 years). HDR brachytherapy achieved the same locoregional result as did LDR brachytherapy. A converting factor of 0.86 is applicable for HDR in the treatment of early oral tongue cancer.
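
    The quoted converting factor can be checked against the median doses reported (simple arithmetic on the numbers in the abstract):

        70\ \mathrm{Gy\ (median\ LDR)} \times 0.86 \approx 60\ \mathrm{Gy\ (median\ HDR)}

    i.e., the HDR schedule delivers roughly 86% of the LDR dose for a comparable locoregional outcome.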

  1. High Dose-Rate Versus Low Dose-Rate Brachytherapy for Lip Cancer

    SciTech Connect

    Ghadjar, Pirus; Bojaxhiu, Beat; Simcock, Mathew; Terribilini, Dario; Isaak, Bernhard; Gut, Philipp; Wolfensberger, Patrick; Broemme, Jens O.; Geretschlaeger, Andreas; Behrensmeier, Frank; Pica, Alessia; Aebersold, Daniel M.

    2012-07-15

    Purpose: To analyze the outcome after low-dose-rate (LDR) or high-dose-rate (HDR) brachytherapy for lip cancer. Methods and Materials: One hundred and three patients with newly diagnosed squamous cell carcinoma of the lip were treated between March 1985 and June 2009 either by HDR (n = 33) or LDR brachytherapy (n = 70). Sixty-eight patients received brachytherapy alone, and 35 received tumor excision followed by brachytherapy because of positive resection margins. Acute and late toxicity was assessed according to the Common Terminology Criteria for Adverse Events 3.0. Results: Median follow-up was 3.1 years (range, 0.3-23 years). Clinical and pathological variables did not differ significantly between groups. At 5 years, local recurrence-free survival, regional recurrence-free survival, and overall survival rates were 93%, 90%, and 77%. There was no significant difference for these endpoints when HDR was compared with LDR brachytherapy. Forty-two of 103 patients (41%) experienced acute Grade 2 and 57 of 103 patients (55%) experienced acute Grade 3 toxicity. Late Grade 1 toxicity was experienced by 34 of 103 patients (33%), and 5 of 103 patients (5%) experienced late Grade 2 toxicity; no Grade 3 late toxicity was observed. Acute and late toxicity rates were not significantly different between HDR and LDR brachytherapy. Conclusions: As treatment for lip cancer, HDR and LDR brachytherapy have comparable locoregional control and acute and late toxicity rates. HDR brachytherapy for lip cancer seems to be an effective treatment with acceptable toxicity.

  2. The Development of an African-Centered Urban High School by Trial and Error

    ERIC Educational Resources Information Center

    Robinson, Theresa Y.; Jeremiah, Maxine

    2011-01-01

As part of the Small Schools movement in Chicago Public Schools, a high school dedicated to African-centered education was chartered. The virtues of Ma'at and the Nguzo Saba, otherwise known as the seven principles of Kwanzaa, were the foundational principles of the school and were to be integrated into all of the practices and policies of the…

  3. A simple method for high-precision calibration of long-range errors in an angle encoder using an electronic nulling autocollimator

    NASA Astrophysics Data System (ADS)

    Kinnane, Mark N.; Hudson, Lawrence T.; Henins, Albert; Mendenhall, Marcus H.

    2015-04-01

    We describe a simple method for high-precision rotary angle encoder calibration for long-range angular errors. By using a redesigned electronic nulling autocollimator, an optical-polygon artifact is calibrated simultaneously with determining the encoder error function over a rotation of 2π rad. The technique is applied to the NIST vacuum double crystal spectrometer, which depends on precise measurement of diffraction angles to determine absolute x-ray wavelengths. By oversampling, the method returned the encoder error function with an expanded uncertainty (k = 2) of 0.004 s of plane angle. Knowledge of the error function permits the instrument to make individual encoder readings with an accuracy of 0.06 s (k = 2), which is limited primarily by the least count and noise of the encoder electronics. While the error function lay within the nominal specifications, it differed from the intrinsic factory curve, indicating the need for in situ calibration in high-precision applications.

  4. High rate reactive sputtering of MoN(x) coatings

    NASA Technical Reports Server (NTRS)

    Rudnik, Paul J.; Graham, Michael E.; Sproul, William D.

    1991-01-01

High rate reactive sputtering of MoN(x) films was performed using feedback control of the nitrogen partial pressure. Coatings were made at four different target powers: 2.5, 5.0, 7.5 and 10 kW. No hysteresis was observed in the nitrogen partial pressure vs. flow plot, as is typically seen for the Ti-N system. Four phases were identified by X-ray diffraction: molybdenum, Mo-N solid solution, beta-Mo2N and gamma-Mo2N. The hardness of the coatings depended upon composition, substrate bias, and target power. The phases present in the hardest films differed depending upon deposition parameters. For example, the beta-Mo2N phase was hardest (load 25 gf) at 5.0 kW with a value of 3200 kgf/sq mm, whereas the hardest coatings at 10 kW were the gamma-Mo2N phase (3000 kgf/sq mm). The deposition rate generally decreased with increasing nitrogen partial pressure, but there was a range of partial pressures where the rate was relatively constant. At a target power of 5.0 kW, for example, the deposition rate was 3300 Å/min for a N2 partial pressure of 0.05-1.0 mTorr.

  5. High-rate measurement-device-independent quantum cryptography

    NASA Astrophysics Data System (ADS)

    Pirandola, Stefano; Ottaviani, Carlo; Spedalieri, Gaetana; Weedbrook, Christian; Braunstein, Samuel L.; Lloyd, Seth; Gehring, Tobias; Jacobsen, Christian S.; Andersen, Ulrik L.

    2015-06-01

    Quantum cryptography achieves a formidable task—the remote distribution of secret keys by exploiting the fundamental laws of physics. Quantum cryptography is now headed towards solving the practical problem of constructing scalable and secure quantum networks. A significant step in this direction has been the introduction of measurement-device independence, where the secret key between two parties is established by the measurement of an untrusted relay. Unfortunately, although qubit-implemented protocols can reach long distances, their key rates are typically very low, unsuitable for the demands of a metropolitan network. Here we show, theoretically and experimentally, that a solution can come from the use of continuous-variable systems. We design a coherent-state network protocol able to achieve remarkably high key rates at metropolitan distances, in fact three orders of magnitude higher than those currently achieved. Our protocol could be employed to build high-rate quantum networks where devices securely connect to nearby access points or proxy servers.

  6. Analysis of the strain-rate sensitivity at high strain rates in FCC and BCC metals

    SciTech Connect

    Follansbee, P.S.

    1988-01-01

The development of a constitutive model based on the use of internal state variables and phenomenological models describing glide kinetics is reviewed. Application of the model to the deformation of fcc metals and alloys is illustrated, with an emphasis on the behavior at high strain rates. Preliminary results in pure iron and 4340 steel are also presented. Deformation twinning is observed in iron samples deformed in the Hopkinson pressure bar. The influence of twinning on the proposed constitutive model is discussed. 11 refs., 8 figs.

  7. Explaining errors in children's questions.

    PubMed

    Rowland, Caroline F

    2007-07-01

The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  8. Low resistance bakelite RPC study for high rate working capability

DOE PAGES Beta

    Dai, T.; Han, L.; Hou, S.; Liu, M.; Li, Q.; Song, H.; Xia, L.; Zhang, Z.

    2014-11-19

This paper presents a series of efforts to lower the resistance of the bakelite electrode plate to improve RPC capability under high-rate working conditions. New bakelite material with alkali metallic ion doping has been manufactured and tested. This bakelite is found to be unstable under large charge flux and needs further investigation. A new structure of carbon-embedded bakelite RPC has been developed, which can reduce the effective resistance of the electrode by a factor of 10. The prototype of the carbon-embedded chamber could function well under a gamma radiation source at event rates higher than 10 kHz/cm^2. The preliminary tests show that this kind of new structure performs as efficiently as traditional RPCs.

  9. Low resistance bakelite RPC study for high rate working capability

    SciTech Connect

    Dai, T.; Han, L.; Hou, S.; Liu, M.; Li, Q.; Song, H.; Xia, L.; Zhang, Z.

    2014-11-19

This paper presents a series of efforts to lower the resistance of the bakelite electrode plate to improve RPC capability under high-rate working conditions. New bakelite material with alkali metallic ion doping has been manufactured and tested. This bakelite is found to be unstable under large charge flux and needs further investigation. A new structure of carbon-embedded bakelite RPC has been developed, which can reduce the effective resistance of the electrode by a factor of 10. The prototype of the carbon-embedded chamber could function well under a gamma radiation source at event rates higher than 10 kHz/cm^2. The preliminary tests show that this kind of new structure performs as efficiently as traditional RPCs.

  10. High-pressure burning rate studies of solid rocket propellants

    NASA Astrophysics Data System (ADS)

    Atwood, A. I.; Ford, K. P.; Wheeler, C. J.

    2013-03-01

    Increased rocket motor performance is a major driver in the development of solid rocket propellant formulations for chemical propulsion systems. The use of increased operating pressure is an option to improve performance potentially without the cost of reformulation. A technique has been developed to obtain burning rate data across a range of pressures from ambient to 345 MPa. The technique combines the use of a low loading density combustion bomb with a high loading density closed bomb technique. A series of nine ammonium perchlorate (AP) based propellants were used to demonstrate the use of the technique, and the results were compared to the neat AP burning rate "barrier". The effect of plasticizer, oxidizer particle size, catalyst, and binder type were investigated.

  11. Multianode cylindrical proportional counter for high count rates

    DOEpatents

    Hanson, J.A.; Kopp, M.K.

    1980-05-23

A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (<60 keV) at count rates of greater than 10^5 counts/sec. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.

  12. Multianode cylindrical proportional counter for high count rates

    DOEpatents

    Hanson, James A.; Kopp, Manfred K.

    1981-01-01

A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (<60 keV) at count rates of greater than 10^5 counts/sec. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.
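
    The patent names RC position encoding; as related context, the simplest textbook alternative, charge division along a resistive anode, recovers the event coordinate from the signals at the two ends (a standard relation, not notation from the patent):

        x \approx L \cdot \frac{Q_B}{Q_A + Q_B}

    where L is the anode length and Q_A, Q_B are the charges collected at the two ends.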

  13. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
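
    A small Python illustration of the rounding-error accumulation the chapter discusses (a generic example, not one taken from the book):

        # Machine numbers cannot represent 0.1 exactly, so repeated addition
        # accumulates rounding error that a single multiplication mostly avoids.
        s = 0.0
        for _ in range(10**6):
            s += 0.1                  # one rounding error per iteration
        print(s)                      # slightly above 100000.0 (off by ~1.3e-6)
        print(10**6 * 0.1)            # a single rounding: much closer to 100000.0
        print(abs(s - 1e5))           # the accumulated error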

  14. Self-heating probe instrument and method for measuring high temperature melting volume change rate of material

    NASA Astrophysics Data System (ADS)

    Wang, Junwei; Wang, Zhiping; Lu, Yang; Cheng, Bo

    2013-03-01

Casting defects are affected by the melting volume change rate of a material, and this rate also has an important effect on the operational safety of high-temperature thermal storage chambers. Existing measuring installations, however, are complex in structure, troublesome to operate and low in precision. In order to measure the melting volume change rate of a material accurately and conveniently, a self-designed measuring instrument, the self-heating probe instrument, and its measuring method are described. Temperature in the heating cavity is controlled by a PID temperature controller; the melting volume change rate υ and the molten density are calculated from the melt volume measured by the instrument. Positive and negative υ represent expansion and shrinkage of the sample volume after melting, respectively. Taking eutectic LiF+CaF2 as an example, its melting volume change rate and molten density at 1123 K are -20.6% and 2651 kg·m^-3 as measured by this instrument, the latter only 0.71% smaller than the literature value. The density and melting volume change rate of industrially pure aluminum at 973 K and of analytically pure NaCl at 1123 K were also measured with the instrument, and the results agree with reported values. Sources of measurement error are analyzed and several improvements are proposed. In theory, the measuring errors of the change rate and molten density obtained with the self-designed instrument are roughly 1/20 to 1/50 of those measured by a refitted mandrel thermal expansion instrument. The instrument and method have the advantages of simple structure, ease of operation, wide applicability to materials and relatively high accuracy; most importantly, temperature and sample vapor pressure have little effect on the measurement accuracy. The presented instrument and method solve the problems of complicated structure and procedures, and of large measuring errors for samples with high vapor pressure, that afflict existing installations.
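
    A worked form of the quantities the abstract computes (a sketch; the symbols V_s for the solid sample volume, V_m for the measured melt volume and m for the sample mass are assumptions, since the abstract does not give its notation):

        \upsilon = \frac{V_m - V_s}{V_s} \times 100\%, \qquad \rho_{melt} = \frac{m}{V_m}

    with positive \upsilon indicating expansion and negative \upsilon shrinkage on melting, as stated above.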

  15. Experimental investigation of bond strength under high loading rates

    NASA Astrophysics Data System (ADS)

    Michal, Mathias; Keuser, Manfred; Solomos, George; Peroni, Marco; Larcher, Martin; Esteban, Beatriz

    2015-09-01

The structural behaviour of reinforced concrete is governed significantly by the transmission of forces between steel and concrete. The bond is of special importance for overlapping joints and anchoring of the reinforcement, where rigid bond is required. It also plays an important role in the rotational capacity of plastic hinges, where ductile bond behaviour is preferable. Like the mechanical properties of concrete and steel, the characteristics of their interaction change with the velocity of the applied loading. For smooth steel bars, whose main bond mechanisms are adhesion and friction, nearly no influence of loading rate is reported in the literature. In contrast, a high rate dependence is found for the deformed bars mainly used today. For mechanical interlock, where the ribs of the reinforcing steel brace the concrete material surrounding the bar, one reason can be assumed to be directly connected with the increase of concrete compressive strength. For splitting failure of bond, characterized by the concrete tensile strength, an even higher dynamic increase is observed. For the design of structures exposed to blast or impact loading, knowledge of a rate-dependent bond stress-slip relationship is required to consider safety and economic aspects at the same time. The bond behaviour of reinforced concrete has been investigated with different experimental methods at the University of the Bundeswehr Munich (UniBw) and the Joint Research Centre (JRC) in Ispra. Both static and dynamic tests have been carried out using innovative experimental apparatuses. The bond stress-slip relationships and maximum pull-out forces for varying bar diameters, concrete compressive strengths and loading rates have been obtained. It is expected that these experimental results will contribute to a better understanding of rate-dependent bond behaviour and will serve for the calibration of numerical models.

  16. Handling high data rate detectors at Diamond Light Source

    NASA Astrophysics Data System (ADS)

    Pedersen, U. K.; Rees, N.; Basham, M.; Ferner, F. J. K.

    2013-03-01

An increasing number of area detectors in use at Diamond Light Source produce high rates of data. In order to capture, store and process this data, High Performance Computing (HPC) systems have been implemented. This paper will present the architecture and usage for handling high-rate data: detector data capture, large-volume storage and parallel processing. The EPICS areaDetector framework has been adopted to abstract the detectors for common tasks including live processing, file format and storage. The chosen data format is HDF5, which provides multidimensional data storage and NeXus compatibility. The storage system and related computing infrastructure include a centralised Lustre-based parallel file system, a dedicated network and an HPC cluster. A well-defined roadmap is in place for the evolution of this to meet demand as the requirements and technology advance. For processing the science data, the HPC cluster allows efficient parallel computing on a mixture of x86 and GPU processing units. The nature of the Lustre storage system in combination with the parallel HDF5 library allows efficient disk I/O during computation jobs. Software developments, which include utilising optimised parallel file reading for a variety of post-processing techniques, are being developed in collaboration as part of the Pan-Data EU Project (www.pan-data.eu). These are particularly applicable to tomographic reconstruction and processing of non-crystalline diffraction data.
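
    As a rough illustration of the storage pattern described here (multidimensional detector frames appended to an HDF5 file), the following Python sketch uses h5py; the dataset path, frame shape and chunking are assumptions for illustration, not Diamond's actual configuration:

        import numpy as np
        import h5py

        n_frames, height, width = 100, 2048, 2048
        with h5py.File("scan_0001.h5", "w") as f:
            dset = f.create_dataset(
                "entry/data/data",               # NeXus-style path (assumed)
                shape=(0, height, width),
                maxshape=(None, height, width),  # unlimited along the frame axis
                chunks=(1, height, width),       # one frame per chunk
                dtype="uint16",
            )
            for i in range(n_frames):
                frame = np.random.randint(0, 65536, (height, width), dtype=np.uint16)
                dset.resize(i + 1, axis=0)       # grow by one frame
                dset[i] = frame                  # append the new frame

    Chunking by whole frames keeps writes sequential, which is what lets a parallel file system such as Lustre sustain high data rates.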

  17. Highly heterogeneous mutation rates in the hepatitis C virus genome.

    PubMed

    Geller, Ron; Estada, Úrsula; Peris, Joan B; Andreu, Iván; Bou, Juan-Vicente; Garijo, Raquel; Cuevas, José M; Sabariegos, Rosario; Mas, Antonio; Sanjuán, Rafael

    2016-01-01

    Spontaneous mutations are the ultimate source of genetic variation and have a prominent role in evolution. RNA viruses such as hepatitis C virus (HCV) have extremely high mutation rates, but these rates have been inferred from a minute fraction of genome sites, limiting our view of how RNA viruses create diversity. Here, by applying high-fidelity ultradeep sequencing to a modified replicon system, we scored >15,000 spontaneous mutations, encompassing more than 90% of the HCV genome. This revealed >1,000-fold differences in mutability across genome sites, with extreme variations even between adjacent nucleotides. We identify base composition, the presence of high- and low-mutation clusters and transition/transversion biases as the main factors driving this heterogeneity. Furthermore, we find that mutability correlates with the ability of HCV to diversify in patients. These data provide a site-wise baseline for interrogating natural selection, genetic load and evolvability in HCV, as well as for evaluating drug resistance and immune evasion risks. PMID:27572964

  18. GPU accelerated processing of astronomical high frame-rate videosequences

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Švihlík, Jan; Krasula, Lukáš; Fliegel, Karel; Páta, Petr

    2015-09-01

Astronomical instruments located around the world are producing an incredibly large amount of potentially interesting scientific data. Astronomical research is expanding towards large and highly sensitive telescopes, and the total volume of data produced per night of operations increases with the quality and resolution of state-of-the-art CCD/CMOS detectors. Since many ground-based astronomical experiments are placed in remote locations with limited access to the Internet, it is necessary to solve the problem of data storage. This means that current data acquisition, processing and analysis algorithms require review, and decisions about the importance of the data have to be made in a very short time. This work deals with GPU-accelerated processing of high frame-rate astronomical video sequences, mostly originating from the MAIA experiment (Meteor Automatic Imager and Analyser), an instrument primarily focused on observing faint meteoric events with high time resolution. The instrument, priced below 2000 euro, consists of an image intensifier and a gigabit Ethernet camera running at 61 fps. With resolution better than VGA, the system produces up to 2 TB of scientifically valuable video data per night. The main goal of the paper is not to optimize any particular GPU algorithm, but to propose and evaluate parallel GPU algorithms able to process huge amounts of video sequences in order to delete all uninteresting data.

  19. Diamond detector for high rate monitors of fast neutrons beams

    SciTech Connect

    Giacomelli, L.; Rebai, M.; Cippo, E. Perelli; Tardocchi, M.; Fazzi, A.; Andreani, C.; Pietropaolo, A.; Frost, C. D.; Rhodes, N.; Schooneveld, E.; Gorini, G.

    2012-06-19

A fast neutron detection system suitable for high-rate measurements is presented. The detector is based on a commercial high-purity single-crystal diamond detector (SDD) coupled to a fast digital data acquisition system. The detector was tested at the ISIS pulsed spallation neutron source. The SDD event signal was digitized at 1 GHz to reconstruct the deposited energy (pulse amplitude) and neutron arrival time; the event time of flight (ToF) was obtained relative to the recorded proton beam signal t0. Fast acquisition is needed since the peak count rate is very high (~800 kHz) due to the pulsed structure of the neutron beam. Measurements at ISIS indicate that three characteristic regions exist in the biparametric spectrum: i) background gamma events of low pulse amplitudes; ii) low pulse amplitude neutron events in the energy range E_dep = 1.5-7 MeV ascribed to neutron elastic scattering on 12C; iii) large pulse amplitude neutron events with E_n < 7 MeV ascribed to 12C(n,α)9Be and 12C(n,n')3α.
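
    For reference, the neutron energy follows from the measured time of flight in the usual non-relativistic way (a standard relation given for context; the flight path L is an assumed symbol, not a value from the abstract):

        E_n = \frac{1}{2} m_n \left( \frac{L}{t - t_0} \right)^2

    where m_n is the neutron mass, t the event arrival time and t_0 the recorded proton beam signal.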

  20. Motion artifacts of extended high frame rate imaging.

    PubMed

    Wang, Jing; Lu, Jian-yu

    2007-07-01

Based on the high frame rate (HFR) imaging method developed in our lab, an extended high frame rate imaging method with various transmission schemes was developed recently. In this method, multiple, limited-diffraction array beams or steered plane wave transmissions are used to increase image resolution and field of view as well as to reduce sidelobes. Furthermore, the multiple, limited-diffraction array beam transmissions can be approximated with square-wave aperture weightings, allowing one or two transmitters to be used with a multielement array transducer to simplify imaging systems. By varying the number of transmissions, the extended HFR imaging method allows a continuous trade-off between image quality and frame rate. Because multiple transmissions are needed to obtain one frame of image for the method, motion could cause phase misalignment and thus produce artifacts, reducing image contrast and resolution and leading to an inaccurate clinical interpretation of images. Therefore, it is important to study how motion affects the method and to provide useful guidance on using the method properly in various applications. In this paper, computer simulations and in vitro and in vivo experiments were performed to study the effects of motion on the method in different conditions. Results show that a number of factors influence the severity of the motion artifacts. However, it was found that the extended HFR imaging method is not sensitive to the motions commonly encountered in clinical applications, as demonstrated by an in vivo heart experiment, unless the number of transmissions is large and objects are moving at a high velocity near the surface of a transducer.

  1. High Rate Laser Pitting Technique for Solar Cell Texturing

    SciTech Connect

    Hans J. Herfurth; Henrikki Pantsar

    2013-01-10

Efficiency of crystalline silicon solar cells can be improved by creating a texture on the surface to increase optical absorption. Different techniques have been developed for texturing, with the current state-of-the-art (SOA) being wet chemical etching. That process has poor optical performance, produces surfaces that are difficult to passivate or contact, and is relatively expensive due to the use of hazardous chemicals. This project shall develop an alternative process for texturing mc-Si using laser micromachining. Compared to the current SOA texturing process it will offer: superior optical surfaces for reduced front-surface reflection and enhanced optical absorption in thin mc-Si substrates; improved surface passivation; easier integration into advanced back-contact cell concepts; reduced use of hazardous chemicals and waste treatment; and similar or lower cost. The process is based on laser pitting. The objective is to develop and demonstrate a high-rate laser pitting process which will exceed the rate of former laser texturing processes by a factor of ten. The laser and scanning technologies will be demonstrated on a laboratory scale, but will use technologies that are inherently scalable to production rates. The drastic increase in process velocity is required for the process to be implemented as an in-line process in PV manufacturing. The project includes laser process development, development of advanced optical systems for beam manipulation, and cell reflectivity and efficiency testing. An improvement of over 0.5% absolute in efficiency is anticipated after laser-based texturing. The surface textures will be characterized optically, and solar cells will be fabricated with the new laser texturing to ensure that the new process is compatible with high-efficiency cell processing. The result will be demonstration of a prototype process that is suitable for scale-up to a

  2. Radiation Hardened, Modulator ASIC for High Data Rate Communications

    NASA Technical Reports Server (NTRS)

    McCallister, Ron; Putnam, Robert; Andro, Monty; Fujikawa, Gene

    2000-01-01

Satellite-based telecommunication services are challenged by the need to generate down-link power levels adequate to support the high-quality (BER ≈ 10^-12) links required for modern broadband data services. Bandwidth-efficient Nyquist signaling, using low values of excess bandwidth (alpha), can exhibit large peak-to-average-power ratio (PAPR) values. High PAPR values necessitate high-power amplifier (HPA) backoff greater than the PAPR, resulting in unacceptably low HPA efficiency. Given the high cost of on-board prime power, this inefficiency represents both an economic burden and a constraint on the rates and quality of data services supportable from satellite platforms. Constant-envelope signals offer improved power-efficiency, but only by imposing a severe bandwidth-efficiency penalty. This paper describes a radiation-hardened modulator which can improve satellite-based broadband data services by combining the bandwidth-efficiency of low-alpha Nyquist signals with high power-efficiency (negligible HPA backoff).
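
    The quantities driving the design trade-off, in their standard definitions (not notation from the paper):

        \mathrm{PAPR} = \frac{\max_t |s(t)|^2}{\langle |s(t)|^2 \rangle}, \qquad \mathrm{backoff_{dB}} \gtrsim 10 \log_{10} \mathrm{PAPR}

    An HPA must be backed off by roughly the PAPR (in dB) to pass signal peaks without clipping, which is why low-alpha Nyquist signals with large PAPR sacrifice amplifier efficiency.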

  3. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics arising 1) from evolution of the official algorithms used to process the data, and 2) from differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  4. Resistance of the boreal forest to high burn rates.

    PubMed

    Héon, Jessie; Arseneault, Dominique; Parisien, Marc-André

    2014-09-23

    Boreal ecosystems and their large carbon stocks are strongly shaped by extensive wildfires. Coupling climate projections with records of area burned during the last 3 decades across the North American boreal zone suggests that area burned will increase by 30-500% by the end of the 21st century, with a cascading effect on ecosystem dynamics and on the boreal carbon balance. Fire size and the frequency of large-fire years are both expected to increase. However, how fire size and time since previous fire will influence future burn rates is poorly understood, mostly because of incomplete records of past fire overlaps. Here, we reconstruct the length of overlapping fires along a 190-km-long transect during the last 200 y in one of the most fire-prone boreal regions of North America to document how fire size and time since previous fire will influence future fire recurrence. We provide direct field evidence that extreme burn rates can be sustained by a few occasional droughts triggering immense fires. However, we also show that the most fire-prone areas of the North American boreal forest are resistant to high burn rates because of overabundant young forest stands, thereby creating a fuel-mediated negative feedback on fire activity. These findings will help refine projections of fire effect on boreal ecosystems and their large carbon stocks. PMID:25201981

  5. Spall fracture in aluminium alloy at high strain rates

    NASA Astrophysics Data System (ADS)

    Joshi, K. D.; Rav, Amit; Sur, Amit; Kaushik, T. C.; Gupta, Satish C.

    2016-05-01

Spall fracture strength and dynamic yield strength have been measured in 8 mm thick target plates of aluminium alloy Al2024-T4 at high strain rates generated in three plate impact experiments carried out at impact velocities of 180 m/s, 370 m/s and 560 m/s, respectively, using a single-stage gas gun facility. In each experiment, the free surface velocity history of the Al2024-T4 sample plate, measured employing a velocity interferometer system for any reflector (VISAR), is used to determine the spall strength and dynamic yield strength of this material. The spall strengths of 1.11 GPa, 1.16 GPa and 1.43 GPa determined in the three experiments are higher than the quasi-static value of 0.469 GPa and display an almost linearly increasing trend with increasing impact velocity, or equivalently with increasing strain rate. The average strain rates just ahead of the spall fracture are determined to be 1.9×10^4/s, 2.0×10^4/s and 2.5×10^4/s, respectively. The dynamic yield strengths determined in the three experiments range from 0.383 GPa to 0.407 GPa, higher than the quasi-static value of 0.324 GPa.
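
    The spall strength quoted here is conventionally extracted from the free-surface velocity pull-back using the acoustic approximation (a standard relation, stated for context rather than taken from the paper):

        \sigma_{sp} = \frac{1}{2} \rho_0 c_b \, \Delta u_{fs}

    where \rho_0 is the initial density, c_b the bulk sound speed and \Delta u_{fs} the velocity pull-back between the peak and the first minimum of the free-surface velocity profile.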

  6. Resistance of the boreal forest to high burn rates.

    PubMed

    Héon, Jessie; Arseneault, Dominique; Parisien, Marc-André

    2014-09-23

    Boreal ecosystems and their large carbon stocks are strongly shaped by extensive wildfires. Coupling climate projections with records of area burned during the last 3 decades across the North American boreal zone suggests that area burned will increase by 30-500% by the end of the 21st century, with a cascading effect on ecosystem dynamics and on the boreal carbon balance. Fire size and the frequency of large-fire years are both expected to increase. However, how fire size and time since previous fire will influence future burn rates is poorly understood, mostly because of incomplete records of past fire overlaps. Here, we reconstruct the length of overlapping fires along a 190-km-long transect during the last 200 y in one of the most fire-prone boreal regions of North America to document how fire size and time since previous fire will influence future fire recurrence. We provide direct field evidence that extreme burn rates can be sustained by a few occasional droughts triggering immense fires. However, we also show that the most fire-prone areas of the North American boreal forest are resistant to high burn rates because of overabundant young forest stands, thereby creating a fuel-mediated negative feedback on fire activity. These findings will help refine projections of fire effect on boreal ecosystems and their large carbon stocks.

  7. Optical transcutaneous link for low power, high data rate telemetry.

    PubMed

    Liu, Tianyi; Bihr, Ulrich; Anis, Syed M; Ortmanns, Maurits

    2012-01-01

A low-power and high data rate wireless optical link for implantable data transmission is presented in this paper. In some neural prosthetic applications, particularly neural recording systems, there is a demand for high-speed communication between an implanted device and an external device. An optical transcutaneous link is a promising implantable telemetry solution, since it has lower power requirements than RF telemetry. In this paper, this advantage is further enhanced by using a modified on-off keying and a simple custom-designed low-power VCSEL driver. This transmitter achieves an optical transcutaneous link capable of transmitting data at 50 Mbps through 4 mm of tissue, with a tolerance of 2 mm misalignment and a BER of less than 10^-5, while the power consumption is only 4.1 mW or less. PMID:23366690

  8. On the response of rubbers at high strain rates.

    SciTech Connect

    Niemczura, Johnathan Greenberg

    2010-02-01

    In this report, we examine the propagation of tensile waves of finite deformation in rubbers through experiments and analysis. Attention is focused on the propagation of one-dimensional dispersive and shock waves in strips of latex and nitrile rubber. Tensile wave propagation experiments were conducted at high strain-rates by holding one end fixed and displacing the other end at a constant velocity. A high-speed video camera was used to monitor the motion and to determine the evolution of strain and particle velocity in the rubber strips. Analysis of the response through the theory of finite waves and quantitative matching between the experimental observations and analytical predictions was used to determine an appropriate instantaneous elastic response for the rubbers. This analysis also yields the tensile shock adiabat for rubber. Dispersive waves as well as shock waves are also observed in free-retraction experiments; these are used to quantify hysteretic effects in rubber.

  9. High repetition rate laser systems: targets, diagnostics and radiation protection

    SciTech Connect

    Gizzi, Leonida A.; Clark, Eugene; Neely, David; Tolley, Martin; Roso, Luis

    2010-02-02

Accessing the high repetition rate regime of ultra intense laser-target interactions at small or moderate laser energies is now possible at a large number of facilities worldwide. New projects such as HiPER and ELI promise to extend this regime to the high energy realm at the multi-kJ level. This opportunity raises several issues on how best to approach this new regime of operation in a safe and efficient way. At the same time, a new class of experiments or a new generation of secondary sources of particles and radiation may become accessible, provided that target fabrication and diagnostics are capable of handling this rep-rated regime. In this paper, we explore this scenario and analyse existing and prospective techniques that promise to address some of the above issues.

  10. Automated Production of High Rep Rate Foam Targets

    NASA Astrophysics Data System (ADS)

    Hall, F.; Spindloe, C.; Haddock, D.; Tolley, M.; Nazarov, W.

    2016-04-01

Manufacturing low-density targets in the numbers needed for high rep rate experiments is highly challenging. This report summarises advances from manual to semiautomated production and the improvements that follow, both in terms of production time and target uniformity. The production process is described and shown to be improved by the integration of an xyz robot with dispensing capabilities. Results are obtained from manual and semiautomated production runs and compared. The variance in foam thickness is reduced significantly, which should decrease experimental variation due to target parameters and could allow whole batches to be characterised by the measurement of a few samples. The work applies to both foil-backed and free-standing foam targets.

  11. High-rate lithium thionyl-chloride battery development

    SciTech Connect

    Cieslak, W.R.; Weigand, D.E.

    1993-12-31

We have developed a lithium thionyl-chloride cell for use in a high-rate battery application to provide power for a missile computer and stage separation detonators. The battery pack contains 20 high surface area "DD" cells wired in a series-parallel configuration to supply a nominal 28 volts with a continuous draw of 20 amperes. The load profile also requires six squib firing pulses of one second duration at a 20 ampere peak. Performance and safety of the cells were optimized in a "D" cell configuration before progressing to the longer "DD" cell. Active surface area in the "D" cell is 735 cm^2, and 1650 cm^2 in the "DD" cell. The design includes 1.5 M LiAlCl4/SOCl2 electrolyte, a cathode blend of Shawinigan Acetylene Black and Cabot Black Pearls 2000 carbons, Scimat ETFE separator, and photoetched current collectors.

  12. High rates of methane emissions from south taiga wetland ponds.

    NASA Astrophysics Data System (ADS)

    Glagolev, M.; Kleptsova, I.; Maksyutov, S.

    2012-04-01

Since wetland ponds are often assumed to be insignificant sources of methane, there are limited data about their fluxes. In this study, we found surprisingly high rates of methane emission at several shallow ponds in the south taiga zone of West Siberia. Wetland ponds within the Great Vasyugan Mire ridge-hollow-pool patterned bog system were investigated. 22 and 24 flux measurements from ponds and surrounding mires, respectively, were made simultaneously by a static chamber method in July 2011. In contrast to previous measurements, fluxes from ponds were measured using a small boat with a floating chamber to avoid disturbance of the water volume. Since ebullition is the most important emission pathway, minimizing the physical disturbance that provokes gas bubbling significantly increases the data accuracy. Air temperature varied from 15 to 22 °C during the measurements, and pH at different pond depths from 4.4 to 5. Background emission from the surrounding ridges and hollows was 1.7/2.6/3.3 mgC·m^-2·h^-1 (1st/2nd/3rd quartiles). These rates correspond well with typical methane emission fluxes from other south taiga bogs. Methane emission from wetland ponds turned out to be an order of magnitude higher (9.3/11.3/15.6 mgC·m^-2·h^-1). Compared to other measurements in West Siberia, many times higher emissions (70.9/111.6/152.3 mgC·m^-2·h^-1) were found in forest-steppe and subtaiga fen ponds. On the contrary, West Siberian tundra lakes emit methane insignificantly, with flux rates close to those of surrounding wetlands (about 0.2-0.3 mgC·m^-2·h^-1). Apparently, there is a naturally determined distribution of ponds with different flux rates over the different West Siberian climate-vegetation zones. Further investigations aimed at delineating zones with different fluxes would be helpful for total flux revision purposes. With respect to other studies, high emission rates were already detected, for instance, in Baltic ponds (Dzyuban, 2002) and U.K. lakes

  13. High mitochondrial mutation rates estimated from deep-rooting Costa Rican pedigrees

    PubMed Central

    Madrigal, Lorena; Melendez-Obando, Mauricio; Villegas-Palma, Ramon; Barrantes, Ramiro; Raventos, Henrieta; Pereira, Reynaldo; Luiselli, Donata; Pettener, Davide; Barbujani, Guido

    2012-01-01

Estimates of mutation rates for the noncoding hypervariable Region I (HVR-I) of mitochondrial DNA (mtDNA) vary widely, depending on whether they are inferred from phylogenies (assuming that molecular evolution is clock-like) or directly from pedigrees. All pedigree-based studies so far were conducted on populations of European origin. In this paper we analyzed 19 deep-rooting pedigrees in a population of mixed origin in Costa Rica. We calculated two estimates of the HVR-I mutation rate, one considering all apparent mutations, and one disregarding changes at sites known to be mutational hot spots and eliminating genealogy branches suspected to include errors or unrecognized adoptions along the female lines. At the end of this procedure, we still observed a mutation rate equal to 1.24 × 10−6 per site per year, i.e., at least threefold as high as estimates derived from phylogenies. Our results confirm that mutation rates observed in pedigrees are much higher than those estimated assuming a neutral model of long-term HVR-I evolution. We argue that, until the cause of these discrepancies is fully understood, both lower estimates (i.e., those derived from phylogenetic comparisons) and higher, direct estimates such as those obtained in this study should be considered when modeling evolutionary and demographic processes. PMID:22460349

  14. An Investigation of Deadtime and Count Rate Limitations for High Resolution, Multiplane PET Systems

    NASA Astrophysics Data System (ADS)

    Germano, Guido

    1991-02-01

The goal of this dissertation was to measure and characterize the data loss and the inaccuracies/artifacts caused by high count rates in the latest generations of PET scanners based on two-dimensional matrix detectors, and then to develop and evaluate methods for compensating for these sources of error. It is important to have a quantitative knowledge of the count rate characteristics for accurate quantitation in PET studies, and it is useful for planning qualitative studies in such a manner that high data rates will not cause inordinate data loss or loss of resolution due to pileup in the detector system. Our approach has been to analyze the PET system as a series of sections or components in a pipeline, independent of one another except for the fact that they are receiving data from the previous section. We have found that all losses of true coincidence events in a PET study, for any PET system, can be seen as occurring in three separate system sections: (1) the front end, comprising the detector assembly and some pre-processing electronics, (2) the coincidence processing stage and (3) the transfer stage, where coincidence data must travel before being sorted into a sinogram. We postulated a model for the loss mechanisms in those three sections, applied it to data collected on neuroPET, total body and animal PET systems, and demonstrated that data loss can be estimated and compensated with excellent precision over a wide range of activity levels. With the advent of matrix detectors, cost and other practical considerations have imposed the multiplexing of all individual detector elements in the matrix through a single channel. This has in turn led to the front end of current PET systems becoming the section that suffers the most under high count rate conditions. Future PET systems' optimization with respect to count rate and data loss shall concentrate on hardware and firmware modifications of the system's front end.
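
    As an illustration of the kind of per-section loss model the dissertation fits, a minimal Python sketch of the standard non-paralyzable deadtime model is given below (the deadtime value and rates are hypothetical, not numbers from the dissertation):

        # Non-paralyzable deadtime: while an event is processed for a time tau,
        # further events are simply lost.
        def observed_rate(true_rate: float, tau: float) -> float:
            """Observed count rate for a non-paralyzable system with deadtime tau."""
            return true_rate / (1.0 + true_rate * tau)

        def corrected_rate(observed: float, tau: float) -> float:
            """Invert the model to recover the true rate from the observed one."""
            return observed / (1.0 - observed * tau)

        tau = 2e-6                        # 2 microseconds of deadtime (assumed)
        for n in (1e4, 1e5, 5e5):
            m = observed_rate(n, tau)
            print(f"true {n:9.0f} cps -> observed {m:9.0f} cps "
                  f"-> corrected {corrected_rate(m, tau):9.0f} cps")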

  15. Preventing errors in laterality.

    PubMed

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2015-04-01

An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in separate colors. This allows the radiologist to correlate all detected laterality terms in the report with the images open in PACS and correct them before the report is finalized. The system was monitored every time an error in laterality was detected. The system detected 32 errors in laterality over a 7-month period (rate of 0.0007%), with CT showing the highest error detection rate of all modalities. Significantly more errors were detected in male patients compared with female patients. In conclusion, our study demonstrated that with our system, laterality errors can be detected and corrected prior to finalizing reports.
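
    A minimal sketch of the kind of laterality-term highlighting described (the term list, colors and HTML output are assumptions for illustration; the abstract does not specify the system's implementation):

        import re

        # Map each laterality term to a highlight color (hypothetical choices).
        LATERALITY_COLORS = {"left": "red", "right": "blue", "bilateral": "green"}
        PATTERN = re.compile(r"\b(left|right|bilateral)\b", re.IGNORECASE)

        def highlight_laterality(report: str) -> str:
            """Wrap every laterality term in a colored span so the radiologist
            can cross-check each one against the PACS images before sign-off."""
            def wrap(match: re.Match) -> str:
                color = LATERALITY_COLORS[match.group(1).lower()]
                return f'<span style="color:{color}">{match.group(0)}</span>'
            return PATTERN.sub(wrap, report)

        print(highlight_laterality("Small cyst in the left kidney; right lung clear."))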

  16. Design and construction of a high frame rate imaging system

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Waugaman, John L.; Liu, Anjun; Lu, Jian-Yu

    2002-05-01

A new high frame rate imaging method has been developed recently [Jian-yu Lu, "2D and 3D high frame rate imaging with limited diffraction beams," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44, 839-856 (1997)]. This method may have a clinical application for imaging of fast moving objects such as human hearts, velocity vector imaging, and low-speckle imaging. To implement the method, an imaging system has been designed. The system consists of one main printed circuit board (PCB) and 16 channel boards (each channel board contains 8 channels), in addition to a set-top box for connections to a personal computer (PC), a front panel board for user control and message display, and a power control and distribution board. The main board contains a field programmable gate array (FPGA) and controls all channels (each channel also has an FPGA). We will report the analog and digital circuit design and simulations, multilayer PCB design with commercial software (Protel 99), PCB signal integrity testing and system RFI/EMI shielding, and the assembly and construction of the entire system. [Work supported in part by Grant 5RO1 HL60301 from NIH.]

  17. Final Report, Photocathodes for High Repetition Rate Light Sources

    SciTech Connect

    Ben-Zvi, Ilan

    2014-04-20

This proposal brought together teams at Brookhaven National Laboratory (BNL), Lawrence Berkeley National Laboratory (LBNL) and Stony Brook University (SBU) to study photocathodes for high repetition rate light sources such as Free Electron Lasers (FELs) and Energy Recovery Linacs (ERLs). The work done under this grant comprises a comprehensive program on critical aspects of the production of the electron beams needed for future user facilities. Our program pioneered in situ and in operando diagnostics for alkali antimonide growth. The focus is on development of photocathodes for high repetition rate FELs and ERLs, including testing photoguns, both normal-conducting and superconducting. Teams from BNL, LBNL and SBU led this research and coordinated their work over a range of topics. The work leveraged a robust infrastructure of existing facilities, and the support was used for carrying out the research at these facilities. The program concentrated on three areas: a) physics and chemistry of alkali-antimonide cathodes; b) development and testing of a diamond amplifier for photocathodes; c) tests of both cathodes in superconducting RF photoguns and copper RF photoguns.

  18. Inexpensive, Low-Rate High-Speed Videography

    NASA Astrophysics Data System (ADS)

    Miller, C. E.

    1989-02-01

    High-speed videography (HSV) at 60 frames-per-second (FPS) has since the earliest systems been an attractive goal. Vast numbers of information gathering and motion analysis problems lend themselves to solution at this rate. Seemingly suitable equipment to implement inexpensive systems has been available off-the-shelf. However, technical problems, complexity and economic factors have pushed development toward higher frame rate systems. The application of new technology recently has combined with a shift of the perceived needs of mass (security and consumer) markets to make available the components for truly inexpensive, high-performance 60 FPS HSV systems. Cameras employing solid-state sensors having electronic shuttering built into the chip architecture are widely available. They provide a simple solution to the temporal resolution problem which formerly required synched/phased mechanical shutters, or synchronized strobe illumination. More recently, the crucial need for standard-format (such as VHS) videocorders capable of field sequential playback in stopped- or in slow-motion has been satisfied. The low cost of effective 60 FPS systems will likely be an incentive for a dramatic increase in the general awareness of the power of HSV as a problem-solving tool. A "trickle-up" effect will be to substantially increase the demand for higher performance systems where their characteristics are appropriate.

  19. High strain rate fracture behavior of fused silica

    NASA Astrophysics Data System (ADS)

    Ruggiero, Andrew; Iannitti, Gianluca; Testa, Gabriel; Limido, Jerome; Lacome, Jean; Olovsson, Lars; Ferraro, Mario; Bonora, Nicola

    2013-06-01

    Fused silica is a high-purity synthetic amorphous silicon dioxide characterized by a low thermal expansion coefficient, excellent optical qualities, and exceptional transmittance over a wide spectral range. Because of its wide use in the military industry as a window material, it may be subjected to high-energy ballistic impacts. Under such dynamic conditions, the post-yield response of the ceramic as well as strain-rate-related effects become significant and should be accounted for in the constitutive modeling. In this study, a procedure for constitutive model validation and model parameter identification is presented. Taylor impact tests and drop weight tests were designed and performed at impact velocities from 1 to 100 m/s and strain rates from 10^2 up to 10^4 s^-1. Both tests were simulated with IMPETUS-FEA, a general non-linear finite element code that offers NURBS finite element technology for simulating large deformation and fracture in materials. Model parameters were identified by optimization using multiple validation metrics. The validity of the parameter set determined with the proposed procedure was verified by comparing numerical predictions with experimental results for an independently designed test consisting of a fused silica tile impacted at a prescribed velocity by a steel sphere.
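
    The identification step described above, tuning constitutive parameters until simulations match several experiments at once, can be framed as minimizing a weighted sum of discrepancy metrics over all tests. A hedged Python sketch of that loop with SciPy follows; run_simulation is a hypothetical stand-in for an actual IMPETUS-FEA run (replaced here by a simple analytic mock so the sketch executes), and the data and weights are illustrative:

        from scipy.optimize import minimize

        def run_simulation(params, test):
            # Hypothetical stand-in for an IMPETUS-FEA run: maps constitutive
            # parameters to a predicted observable for one test configuration.
            a, b = params
            return a * test["velocity"] ** b  # analytic mock prediction

        def objective(params, tests, weights):
            # Weighted sum of normalized squared errors over all validation tests.
            err = 0.0
            for test, w in zip(tests, weights):
                pred = run_simulation(params, test)
                err += w * ((pred - test["measured"]) / test["measured"]) ** 2
            return err

        # Illustrative data only: two tests at different impact velocities.
        tests = [{"velocity": 10.0, "measured": 4.0},
                 {"velocity": 100.0, "measured": 13.0}]
        result = minimize(objective, x0=[1.0, 0.5], args=(tests, [1.0, 1.0]),
                          method="Nelder-Mead")  # derivative-free: solver is a black box
        print(result.x)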

  20. Measurement and Interpretation of High Strain Rates Near Bishkek, Kyrgyzstan

    NASA Astrophysics Data System (ADS)

    Hager, B. H.; Herring, T. A.; Bragin, V. D.; Zubovicz, A. V.; Molnar, P.; Hamburger, M. A.

    2001-12-01

    In 1996, scientists at the IVTRAN Poligon began frequent measurement of a 25-site GPS network around Bishkek, the capital of Kyrgyzstan. Most of the marks in this network are concentrated near the range front and are measured in campaigns ~6 times/year. Two continuously operating sites spanning the Chu basin to the north and south of Bishkek anchor the network. Outcrop is sparse within the network, and most of the campaign sites are mounted on boulders in alluvium. The frequent measurements and dense spacing of the network allow us to judge the stability of the marks, which appears to be, in general, surprisingly good. The geodetic velocity field is dominated by north-south convergence of 3 mm/yr across the network. Most of the convergence occurs over a distance of about 10 km at the southern edge of the basin, resulting in a strain rate of ~0.3 microstrain/yr. This strain rate is high, comparable to that across the San Andreas fault in southern California. Interpretation of this high strain rate in terms of a conventional model using a dislocation in a uniform elastic halfspace would require a shallow locking depth, leading to an inference of relatively low moment release from the earthquakes expected to release the accumulated strain. An alternative explanation is that the strain concentration near the range front results not from a shallow locking depth but from the low elastic modulus of the sediments in the basin. If this model is correct, the rupture area, moment release, and seismic hazard are greater. The network is located just to the west of the surface rupture of the 1911 M ~8 Chon Kemin earthquake, which demonstrated that major earthquakes do occur in this region.
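
    The quoted strain rate follows directly from the geodetic numbers: 3 mm/yr of convergence accommodated across roughly 10 km gives 3e-3 m / 1e4 m = 3e-7 per year, i.e. ~0.3 microstrain/yr. A one-line check in Python, using the values from the abstract:

        # Strain rate = relative velocity across the zone / width of the zone.
        convergence_m_per_yr = 3e-3  # 3 mm/yr of north-south convergence
        zone_width_m = 10e3          # ~10 km at the southern edge of the basin
        strain_rate_per_yr = convergence_m_per_yr / zone_width_m
        print(f"{strain_rate_per_yr * 1e6:.1f} microstrain/yr")  # 0.3 microstrain/yr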