Science.gov

Sample records for high error rates

  1. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Technical Reports Server (NTRS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    1989-01-01

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible using appliqué hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
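
    Reading a coding gain off a BER curve reduces to Q-function arithmetic for coherent PSK over an AWGN channel. Below is a minimal sketch of that calculation; the 2.5 dB figure is taken from the abstract, while the uncoded BPSK curve is the textbook expression, not anything specific to this codec.

      # Sketch: where a 2.5 dB coding gain lands on the uncoded BPSK BER curve.
      import math

      def q(x):
          """Gaussian tail probability Q(x)."""
          return 0.5 * math.erfc(x / math.sqrt(2.0))

      def ber_bpsk(ebn0_db):
          """Uncoded coherent BPSK bit error rate at Eb/N0 given in dB."""
          ebn0 = 10.0 ** (ebn0_db / 10.0)
          return q(math.sqrt(2.0 * ebn0))

      def ebn0_for_ber(target_ber):
          """Invert the monotone BER curve by bisection (result in dB)."""
          lo, hi = 0.0, 20.0
          while hi - lo > 1e-6:
              mid = 0.5 * (lo + hi)
              if ber_bpsk(mid) > target_ber:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      uncoded = ebn0_for_ber(1e-5)             # about 9.6 dB uncoded
      print(f"uncoded Eb/N0 at BER 1e-5: {uncoded:.2f} dB")
      print(f"with 2.5 dB coding gain:   {uncoded - 2.5:.2f} dB")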

  2. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors.

    PubMed

    Bányai, László; Patthy, László

    2016-08-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa, and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since Chinese and Florida lancelets are morphologically highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation.

  3. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors

    PubMed Central

    Bányai, László; Patthy, László

    2016-01-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa, and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since Chinese and Florida lancelets are morphologically highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation. PMID:27476717

  4. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security toward real-world installations. A significant part of this move is the orders-of-magnitude increase in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck that unnecessarily limits the final secure key rate of the system. Here we report details of equally high-rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416
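
    The quantity behind the 90-94% figure is the reconciliation efficiency of the LDPC code. A minimal sketch, assuming a BB84-style secret key fraction r = 1 - f*h(e) - h(e) (a common textbook model, not necessarily the exact formula used by this system): a fixed rate-R code discloses (1 - R) syndrome bits per sifted bit, so f = (1 - R)/h(e) measures its distance from the Slepian-Wolf limit.

      # Sketch: LDPC reconciliation efficiency and an assumed BB84-style
      # secret key fraction. Illustrative model only.
      import math

      def h2(p):
          """Binary entropy in bits."""
          if p <= 0.0 or p >= 1.0:
              return 0.0
          return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

      def efficiency(code_rate, qber):
          """f >= 1 in the usual convention: syndrome bits disclosed per
          sifted bit, relative to the h(e) minimum."""
          return (1.0 - code_rate) / h2(qber)

      def secret_fraction(qber, f):
          """Secret bits per sifted bit under the assumed model."""
          return max(0.0, 1.0 - f * h2(qber) - h2(qber))

      qber = 0.03
      for rate in (0.80, 0.75, 0.70):
          f = efficiency(rate, qber)
          print(f"R={rate:.2f}  f={f:.2f}  key fraction={secret_fraction(qber, f):.3f}")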

  5. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    PubMed

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security toward real-world installations. A significant part of this move is the orders-of-magnitude increase in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck that unnecessarily limits the final secure key rate of the system. Here we report details of equally high-rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  6. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities

    PubMed Central

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-01-01

    Introduction: Sound is among the significant environmental factors affecting people's health; it plays an important role in both physical and psychological injuries, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. Methods: This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person served as his or her own control to assess the effect of noise on performance at sound levels of 70, 90, and 110 dB, using two factors of physical features and the creation of different conditions of sound source, as well as applying the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated-measures analyses were used to compare the length of performance as well as the errors measured in the test. Results: Based on the results, we found a direct and significant association between the levels of sound and the length of performance. Moreover, the participants' performance differed significantly across sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). Conclusion: This study found that a sound level of 110 dB had an important effect on the individuals' performances, i.e., the performances were decreased. PMID:27123216

  7. Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.

    1987-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  8. Bit-error-rate testing of high-power 30-GHz traveling wave tubes for ground-terminal applications

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Fujikawa, Gene

    1986-01-01

    Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30 GHz, 200 W, coupled-cavity traveling wave tubes (TWTs). The transmission effects of each TWT were investigated on a band-limited, 220 Mb/sec SMSK signal. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20 GHz technology development program. The approach taken to test the 30 GHz tubes is described and the resultant test data are discussed. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.

  9. ERROR CORRECTION IN HIGH SPEED ARITHMETIC,

    DTIC Science & Technology

    The errors due to a faulty high speed multiplier are shown to be iterative in nature. These errors are analyzed in various aspects. The arithmetic coding technique is suggested for the improvement of high speed multiplier reliability. Through a number theoretic investigation, a large class of arithmetic codes for single iterative error correction are developed. The codes are shown to have near-optimal rates and to render a simple decoding method. The implementation of these codes seems highly practical. (Author)
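
    A minimal sketch of the classic AN-code idea underlying such arithmetic codes: data N is carried as A*N, arithmetic on codewords keeps results divisible by A, and a residue check flags any arithmetic error whose value is not a multiple of A. The check modulus A = 29 is an arbitrary choice for illustration; the report's near-optimal-rate constructions for iterative errors are more elaborate.

      # Sketch of an AN arithmetic code: residue checking of arithmetic results.
      A = 29

      def encode(n):
          return A * n

      def check(x):
          """Return (decoded value, ok); ok is False for invalid codewords."""
          return x // A, x % A == 0

      a, b = encode(1234), encode(5678)

      s = a + b                      # sum of codewords is a codeword
      print(check(s))                # (6912, True)

      p = a * 57                     # codeword times a plain integer stays
      print(check(p))                # divisible by A -> (70338, True)

      faulty = s + 4                 # inject an arithmetic error of value +4
      print(check(faulty))           # (6912, False): 4 is not a multiple of 29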

  10. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    PubMed

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure; that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest using bias-correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
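
    A toy version of the weighted-mean idea on synthetic data. The exact weights in the paper follow from the error-rate decomposition; here the selection frequency of each tuning value over subsampling splits stands in for them, which is an assumption of this sketch.

      # Toy sketch: tuning-bias correction as a weighted mean over tuning values.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import ShuffleSplit, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 50))
      y = (X[:, 0] + 0.5 * rng.normal(size=120) > 0).astype(int)

      Cs = [0.01, 0.1, 1.0, 10.0]
      errors = {c: [] for c in Cs}   # outer test errors per tuning value
      wins = {c: 0 for c in Cs}      # how often inner tuning picks each value

      for train, test in ShuffleSplit(n_splits=50, test_size=0.2, random_state=1).split(X):
          inner_scores = {}
          for c in Cs:
              clf = LogisticRegression(C=c, max_iter=1000).fit(X[train], y[train])
              errors[c].append(1.0 - clf.score(X[test], y[test]))
              # inner CV on the training part decides which C "wins" this split
              inner_scores[c] = cross_val_score(
                  LogisticRegression(C=c, max_iter=1000), X[train], y[train], cv=3
              ).mean()
          wins[max(inner_scores, key=inner_scores.get)] += 1

      total = sum(wins.values())
      corrected = sum(wins[c] / total * np.mean(errors[c]) for c in Cs)
      naive = min(np.mean(errors[c]) for c in Cs)  # best-value-only: optimistic
      print(f"naive (tuning-biased): {naive:.3f}  weighted-mean: {corrected:.3f}")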

  11. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.

  12. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
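
    For sequencing-based fidelity assays of this kind, the standard arithmetic normalizes observed mutations by bases read and by effective template doublings. A sketch with hypothetical numbers (not the study's data):

      # Standard PCR-fidelity arithmetic: errors per base per template doubling.
      import math

      def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
          """Observed mutations / (bases read x effective template doublings)."""
          doublings = math.log2(fold_amplification)
          return mutations / (bases_sequenced * doublings)

      # Hypothetical run: 25 mutations in 1.2e6 sequenced bases after 1e5-fold
      # amplification (~16.6 doublings).
      rate = pcr_error_rate(25, 1.2e6, 1e5)
      print(f"{rate:.2e} errors per base per doubling")  # ~1.25e-06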

  13. Monitoring Error Rates In Illumina Sequencing

    PubMed Central

    Manley, Leigh J.; Ma, Duanduan; Levine, Stuart S.

    2016-01-01

    Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted. PMID:27672352

  14. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  15. Post-manufacturing, 17-times acceptable raw bit error rate enhancement, dynamic codeword transition ECC scheme for highly reliable solid-state drives, SSDs

    NASA Astrophysics Data System (ADS)

    Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken

    2011-04-01

    A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives, SSDs. By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate, BER, before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital-still camera and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without a cost penalty. Compared with the conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte or 2 KByte is used, and 98% lower power consumption is realized. At the life-end of the SSD, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance is also discussed. The random read performance is estimated by the latency, which is below 1.5 ms for ECC codewords up to 32 KByte and thus below the 2 ms average latency of a 15,000 rpm HDD.
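
    The underlying arithmetic: with independent raw bit errors at rate p, a codeword correcting up to t errors fails with the binomial tail probability P(more than t of n bits flip). A sketch under that independence assumption, with t scaled linearly with codeword size to mimic constant parity overhead (an assumption of this sketch, extrapolated from the abstract's 8 bits per 512 Byte):

      # Sketch: binomial-tail failure rate of a t-error-correcting codeword,
      # computed in log space to avoid overflow for long codewords.
      from math import exp, lgamma, log

      def log_pmf(n, k, p):
          """log of the binomial pmf C(n,k) p^k (1-p)^(n-k)."""
          return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                  + k * log(p) + (n - k) * log(1.0 - p))

      def codeword_failure(n_bits, t, p, extra=400):
          """P(more than t bit errors in n_bits): binomial upper tail;
          truncating a few hundred terms past t is ample for these configs."""
          return sum(exp(log_pmf(n_bits, k, p)) for k in range(t + 1, t + extra))

      p = 1e-3  # raw bit error rate before ECC
      for size_bytes, t in [(512, 8), (1024, 16), (4096, 64), (32768, 512)]:
          n = size_bytes * 8
          print(f"{size_bytes:6d}-byte codeword, t={t:4d}: "
                f"P_fail = {codeword_failure(n, t, p):.2e}")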

  16. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow and sea urchin fertilization tests showed that when the test x dose interaction term was not included in the calculations the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.

  17. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    45 CFR 98.100, Public Welfare; Department of Health and Human Services; General Administration; Child Care and Development Fund; Error Rate Reporting. § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  18. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    45 CFR 98.100, Public Welfare; Department of Health and Human Services; General Administration; Child Care and Development Fund; Error Rate Reporting. § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  19. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    45 CFR 98.100, Public Welfare; Department of Health and Human Services; General Administration; Child Care and Development Fund; Error Rate Reporting. § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  20. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    45 CFR 98.100, Public Welfare; Department of Health and Human Services; General Administration; Child Care and Development Fund; Error Rate Reporting. § 98.100 Error Rate Report. (a) Applicability—The requirements of this...

  1. Errors in particle tracking velocimetry with high-speed cameras.

    PubMed

    Feng, Yan; Goree, J; Liu, Bin

    2011-05-01

    Velocity errors in particle tracking velocimetry (PTV) are studied. When using high-speed video cameras, the velocity error may increase at a high camera frame rate. This increase in velocity error is due to particle-position uncertainty, which is one of the two sources of velocity errors studied here. The other source of error is particle acceleration, which has the opposite trend of diminishing at higher frame rates. Both kinds of errors can propagate into quantities calculated from velocity, such as the kinetic temperature of particles or correlation functions. As demonstrated in a dusty plasma experiment, the kinetic temperature of particles has no unique value when measured using PTV, but depends on the sampling time interval or frame rate. It is also shown that an artifact appears in an autocorrelation function computed from particle positions and velocities, and it becomes more severe when a small sampling-time interval is used. Schemes to reduce these errors are demonstrated.
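
    The two error sources trade off against each other, so there is an optimal frame rate. A minimal sketch with hypothetical numbers, using simple two-frame-difference coefficients (sqrt(2)*sigma_x/dt for position noise, a*dt/2 for acceleration); the paper's treatment is more detailed.

      # Sketch: PTV velocity error vs frame rate, two competing terms.
      import math

      sigma_x = 0.01   # particle-position uncertainty, mm (hypothetical)
      accel = 50.0     # typical particle acceleration, mm/s^2 (hypothetical)

      def velocity_error(frame_rate_hz):
          dt = 1.0 / frame_rate_hz
          noise_term = math.sqrt(2.0) * sigma_x / dt   # grows with frame rate
          accel_term = accel * dt / 2.0                # shrinks with frame rate
          return math.hypot(noise_term, accel_term)

      for f in (10, 30, 60, 120, 500):
          print(f"{f:4d} fps -> velocity error {velocity_error(f):.3f} mm/s")

      # The minimum sits near dt = sqrt(2*sqrt(2)*sigma_x/accel): there is an
      # optimal frame rate rather than "faster is always better".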

  2. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    PubMed

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal-resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that a high CAR is a biological factor linked to impairments at multiple steps of error processing in healthy populations, specifically the automatic detection of errors and post-error behavioral adjustment. A common neural mechanism of physiological and cognitive control may underlie both the CAR and error processing.

  3. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    45 CFR 98.102, Public Welfare; Department of Health and Human Services; General Administration; Child Care and Development Fund; Error Rate Reporting. § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  4. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    45 CFR 98.102, Public Welfare; Department of Health and Human Services; General Administration; Child Care and Development Fund; Error Rate Reporting. § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  5. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    45 CFR 98.102, Public Welfare; Department of Health and Human Services; General Administration; Child Care and Development Fund; Error Rate Reporting. § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  6. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    45 CFR 98.102, Public Welfare; Department of Health and Human Services; General Administration; Child Care and Development Fund; Error Rate Reporting. § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia...

  7. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

    Defence Science and Technology Organisation, DSTO-TN-0761. This report investigates the estimation of bit error rates in digital communications, motivated by ... recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase

  8. Technological Advancements and Error Rates in Radiation Therapy Delivery

    SciTech Connect

    Margalit, Danielle N.

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There
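
    The 2x2 comparison described here is a one-liner with scipy. The abstract does not give per-technique fraction totals, so the denominators below are hypothetical, chosen to be consistent with the reported 19 IMRT errors and the 0.03% vs 0.07% rates:

      # Sketch: Fisher's exact test on an error-vs-technique 2x2 table.
      # Denominators are hypothetical; only the rates come from the abstract.
      from scipy.stats import fisher_exact

      imrt_errors, imrt_ok = 19, 63_000 - 19
      conv_errors, conv_ok = 136, 178_500 - 136

      odds_ratio, p_value = fisher_exact(
          [[imrt_errors, imrt_ok], [conv_errors, conv_ok]],
          alternative="two-sided",
      )
      print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.4f}")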

  9. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Johnson, Sarah J.; Lance, Andrew M.; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Ralph, T. C.; Symul, Thomas

    2017-02-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when high-efficiency forward error correction codes are employed. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates.

  10. Approximate Minimum Bit Error Rate Equalization for Fading Channels

    NASA Astrophysics Data System (ADS)

    Kovacs, Lorant; Levendovszky, Janos; Olah, Andras; Treplan, Gergely

    2010-12-01

    A novel channel equalizer algorithm is introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithm is based on minimizing the bit error rate (BER) using a fast approximation of its gradient with respect to the equalizer coefficients. This approximation is obtained by estimating the exponential summation in the gradient with only some carefully chosen dominant terms. The paper derives an algorithm to calculate these dominant terms in real-time. Summing only these dominant terms provides a highly accurate approximation of the true gradient. Combined with a fast adaptive channel state estimator, the new equalization algorithm yields better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) equalizers. The performance of the new method is tested by simulations performed on standard wireless channels. From the performance analysis one can infer that the new equalizer is capable of efficient channel equalization and maintaining a relatively low bit error probability in the case of channels corrupted by frequency selectivity. Hence, the new algorithm can contribute to ensuring QoS communication over highly distorted channels.

  11. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not match the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  12. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  13. Hypercorrection of High Confidence Errors in Children

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2012-01-01

    Three experiments investigated whether the hypercorrection effect--the finding that errors committed with high confidence are easier, rather than more difficult, to correct than are errors committed with low confidence--occurs in grade school children as it does in young adults. All three experiments showed that Grade 3-6 children hypercorrected…

  14. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
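
    Two of the quantities discussed here are standard enough to sketch: an exact Clopper-Pearson interval for the CWER from a handful of observed codeword errors, and how long an error-free simulation must run to certify a CWER target. These are textbook formulas, not necessarily the paper's exact constructions; the paper's moment-based BER method for coded systems is not reproduced here.

      # Sketch: exact binomial (Clopper-Pearson) interval and the standard
      # error-free run-length rule for certifying a CWER requirement.
      import math
      from scipy.stats import beta

      def clopper_pearson(errors, trials, conf=0.95):
          a = (1.0 - conf) / 2.0
          lo = 0.0 if errors == 0 else beta.ppf(a, errors, trials - errors + 1)
          hi = 1.0 if errors == trials else beta.ppf(1.0 - a, errors + 1, trials - errors)
          return lo, hi

      def error_free_run(cwer_target, conf=0.95):
          """Codewords to simulate with zero errors to claim CWER < target."""
          return math.ceil(math.log(1.0 - conf) / math.log(1.0 - cwer_target))

      print(clopper_pearson(3, 10**6))   # interval from 3 errors in 1e6 words
      print(error_free_run(1e-5))        # ~3e5 error-free words for 95% conf.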

  15. Dose error from deviation of dwell time and source position for high dose-rate 192Ir in remote afterloading system

    PubMed Central

    Okamoto, Hiroyuki; Aikawa, Ako; Wakita, Akihisa; Yoshio, Kotaro; Murakami, Naoya; Nakamura, Satoshi; Hamada, Minoru; Abe, Yoshihisa; Itami, Jun

    2014-01-01

    The influence of deviations in dwell times and source positions for 192Ir HDR-RALS was investigated. The potential dose errors for various kinds of brachytherapy procedures were evaluated. The deviations of dwell time ΔT of a 192Ir HDR source for various dwell times were measured with a well-type ionization chamber. The deviations of source position ΔP were measured with two methods. One is to measure the actual source position using a check ruler device. The other is to analyze peak distances from radiographic film irradiated with a 20 mm gap between dwell positions. The composite dose errors were calculated using a Gaussian distribution with ΔT and ΔP as 1σ of the measurements. Dose errors depend on dwell time and the distance from the point of interest to the dwell position. To evaluate the dose error in clinical practice, dwell times and point-of-interest distances were obtained from actual treatment plans involving cylinder, tandem-ovoid, tandem-ovoid with interstitial needles, multiple interstitial needles, and surface-mold applicators. The ΔT and ΔP were 32 ms (maximum for various dwell times) and 0.12 mm (ruler), 0.11 mm (radiographic film). The multiple interstitial needles represent the highest dose error of 2%, while the others represent less than approximately 1%. The potential dose error due to dwell time and source position deviation can depend on the kind of brachytherapy technique. In all cases, the multiple-interstitial-needle technique is the most susceptible. PMID:24566719
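
    A sketch of the composite-error arithmetic described here, treating the measured deviations as 1-sigma values and combining the two contributions in quadrature through a bare point-source model (dose proportional to T/r^2). The radial 1/r^2 position term is a worst case; in real applicator geometry much of a longitudinal source shift cancels, which is consistent with the paper's composite errors staying near 1-2%.

      # Sketch: quadrature combination of dwell-time and position dose errors
      # under a simplified point-source (T/r^2) model. Worst-case geometry.
      import math

      delta_t = 0.032   # s, maximum dwell-time deviation (from the abstract)
      delta_p = 0.12    # mm, source-position deviation, ruler method

      def relative_dose_error(dwell_time_s, distance_mm):
          time_term = delta_t / dwell_time_s      # dD/D from dwell time
          pos_term = 2.0 * delta_p / distance_mm  # dD/D from 1/r^2, worst case
          return math.hypot(time_term, pos_term)

      # Short dwell times close to the point of interest (as in multiple
      # interstitial needles) are the most susceptible case.
      for t, r in [(1.0, 10.0), (5.0, 10.0), (10.0, 20.0)]:
          print(f"T={t:4.1f} s, r={r:4.1f} mm -> "
                f"{100 * relative_dose_error(t, r):.2f}%")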

  16. On quaternary DPSK error rates due to noise and interferences

    NASA Astrophysics Data System (ADS)

    Lye, K. M.; Tjhung, T. T.

    A method for computing the error rates of a quaternary, differentially encoded and detected, phase shift keyed (DPSK) system with Gaussian noise, intersymbol and adjacent channel interferences is presented. In the calculations, intersymbol effects due to the band-limiting IF filter were assumed to have come only from immediately adjacent symbols. Similarly, only immediately adjacent channels were assumed to have contributed toward interchannel interferences. Noise effects were handled by using a probability density formula for corrupted phase differences derived recently by Paula (1981). An experimental system was set up, and error rates measured to verify the analytical results. From the results, optimum receiver bandwidth and channel separation for quaternary DPSK systems can be determined.

  17. Calculate bit error rate for digital radio signal transmission

    NASA Astrophysics Data System (ADS)

    Sandberg, Jorgen

    1987-06-01

    A method for estimating the symbol error rate caused by imperfect transmission channels is proposed. The method relates the symbol error rate to peak-to-peak amplitude and phase ripple, maximum gain slope, and maximum group delay distortion. The performance degradation of QPSK, offset QPSK (OQPSK), and minimum shift keying (MSK) signals transmitted over a wideband channel exhibiting either sinusoidal amplitude or phase ripples is evaluated using the proposed method. The transmission channel model, a single filter whose transfer characteristics model the frequency response of a system, is described. Consideration is given to signal detection and system degradation. The calculations reveal that the QPSK modulated carrier degrades less than the OQPSK and MSK carriers for peak-to-peak amplitude ripple values less than 6 dB and peak-to-peak phase ripple values less than 45 deg.

  18. Coevolution of Quasispecies: B-Cell Mutation Rates Maximize Viral Error Catastrophes

    NASA Astrophysics Data System (ADS)

    Kamp, Christel; Bornholdt, Stefan

    2002-02-01

    Coevolution of two coupled quasispecies is studied, motivated by the competition between viral evolution and adapting immune response. In this coadaptive model, besides the classical error catastrophe for high virus mutation rates, a second ``adaptation'' catastrophe occurs, when virus mutation rates are too small to escape immune attack. Maximizing both regimes of viral error catastrophes is a possible strategy for an optimal immune response, reducing the range of allowed viral mutation rates to a minimum. From this requirement, one obtains constraints on B-cell mutation rates and receptor lengths, yielding an estimate of somatic hypermutation rates in the germinal center in accordance with observation.
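
    For orientation, the classical quasispecies error threshold that this coadaptive model generalizes can be stated in one line (the standard Eigen-model result in its simplest sharp-peak form): a master sequence of length $L$ with per-site copying fidelity $q$ and relative fitness advantage $\sigma$ survives selection only while

    \[ q^{L} > \frac{1}{\sigma} \quad\Longleftrightarrow\quad \mu L \lesssim \ln \sigma , \qquad \mu = 1 - q , \]

    so per-site mutation rates above roughly $\ln\sigma / L$ trigger the error catastrophe; the second, "adaptation" catastrophe described in the abstract adds a lower bound to this allowed range.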

  19. Theoretical Accuracy for ESTL Bit Error Rate Tests

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin

    1998-01-01

    "Bit error rate" [BER] for the purposes of this paper is the fraction of binary bits which are inverted by passage through a communication system. BER can be measured for a block of sample bits by comparing a received block with the transmitted block and counting the erroneous bits. Bit Error Rate [BER] tests are the most common type of test used by the ESTL for evaluating system-level performance. The resolution of the test is obvious: the measurement cannot be resolved more finely than 1/N, the number of bits tested. The tolerance is not. This paper examines the measurement accuracy of the bit error rate test. It is intended that this information will be useful in analyzing data taken in the ESTL. This paper is divided into four sections and follows a logically ordered presentation, with results developed before they are evaluated. However, first-time readers will derive the greatest benefit from this paper by skipping the lengthy section devoted to analysis, and treating it as reference material. The analysis performed in this paper is based on a Probability Density Function [PDF] which is developed with greater detail in a past paper, Theoretical Accuracy for ESTL Probability of Acquisition Tests, EV4-98-609.

  20. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  1. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAM's (UPD4216) and 1 Kb SRAM's (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  2. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    SciTech Connect

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  3. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    PubMed

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2016-09-24

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe.

  4. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal-to-noise ratio E_b/N_0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10^-8 and a BER of 1.4 x 10^-9. The (15,1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because these codes require higher bandwidth than the NASA (7,1/2) code, the gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  5. National suicide rates a century after Durkheim: do we know enough to estimate error?

    PubMed

    Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W

    2010-06-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.

  6. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  7. Error Rates and Channel Capacities in Multipulse PPM

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Moision, Bruce

    2007-01-01

    A method of computing channel capacities and error rates in multipulse pulse-position modulation (multipulse PPM) has been developed. The method makes it possible, when designing an optical PPM communication system, to determine whether and under what conditions a given multipulse PPM scheme would be more or less advantageous, relative to other candidate modulation schemes. In conventional M-ary PPM, each symbol is transmitted in a time frame that is divided into M time slots (where M is an integer >1), defining an M-symbol alphabet. A symbol is represented by transmitting a pulse (representing 1) during one of the time slots and no pulse (representing 0) during the other M - 1 time slots. Multipulse PPM is a generalization of PPM in which pulses are transmitted during two or more of the M time slots.
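
    The combinatorics follow directly from that definition: with k pulses in M slots the alphabet has C(M, k) symbols, so a frame carries log2 C(M, k) bits. A quick sketch (M = 64 is an arbitrary example):

      # From the abstract's definitions: alphabet size and bits per frame for
      # k-pulse multipulse PPM with M slots.
      from math import comb, log2

      M = 64
      for k in (1, 2, 3):
          symbols = comb(M, k)
          print(f"k={k}: {symbols:6d} symbols -> {log2(symbols):5.2f} bits/frame")
      # k=1 is conventional PPM (6 bits); k=2 already carries ~10.98 bits per
      # frame, at the cost of more transmitted pulses (energy) per frame.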

  8. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between

  9. Error Rates in Users of Automatic Face Recognition Software.

    PubMed

    White, David; Dunn, James D; Schmid, Alexandra C; Kemp, Richard I

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers-who use the system in their daily work-and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems-potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.

  10. A simple calculation method for heavy ion induced soft error rate in space environment

    NASA Astrophysics Data System (ADS)

    Galimov, A. M.; Elushov, I. V.; Zebrev, G. I.

    2016-12-01

    In this paper, based on a new parameterization shape, an alternative approach to characterizing heavy-ion-induced soft errors is proposed and validated. The method provides an unambiguous calculation procedure for predicting the upset rate of highly scaled memory in a space environment.
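
    The abstract does not reproduce the parameterization, but the generic form of such calculations is to fold an upset cross-section curve (often Weibull-shaped) with the differential heavy-ion LET spectrum and integrate. A numeric sketch with made-up parameters, not the paper's model:

      import numpy as np

      def upset_cross_section(L, L0=2.0, W=20.0, s=1.5, sigma_sat=1e-8):
          """Assumed Weibull cross-section (cm^2/bit) vs LET (MeV*cm^2/mg)."""
          return sigma_sat * (1.0 - np.exp(-(np.maximum(L - L0, 0.0) / W) ** s))

      def let_flux(L):
          """Toy differential LET spectrum (ions per cm^2*s per unit LET)."""
          return 1e-3 * L ** -2.5

      L = np.linspace(1.0, 100.0, 2000)
      integrand = upset_cross_section(L) * let_flux(L)
      # Trapezoidal rule written out, to stay version-independent.
      rate = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(L)))
      print(f"predicted upset rate ~ {rate:.2e} upsets/bit/s")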

  11. Error effects in anterior cingulate cortex reverse when error likelihood is high

    PubMed Central

    Jessup, Ryan K.; Busemeyer, Jerome R.; Brown, Joshua W.

    2010-01-01

    Strong error-related activity in medial prefrontal cortex (mPFC) has been shown repeatedly with neuroimaging and event-related potential studies for the last several decades. Multiple theories have been proposed to account for error effects, including comparator models and conflict detection models, but the neural mechanisms that generate error signals remain in dispute. Typical studies use relatively low error rates, confounding the expectedness and the desirability of an error. Here we show with a gambling task and fMRI that when losses are more frequent than wins, the mPFC error effect disappears, and moreover, exhibits the opposite pattern by responding more strongly to unexpected wins than losses. These findings provide perspective on recent ERP studies and suggest that mPFC error effects result from a comparison between actual and expected outcomes. PMID:20203206

  12. Estimating the annotation error rate of curated GO database sequence annotations

    PubMed Central

    Jones, Craig E; Brown, Alfred L; Baumann, Ute

    2007-01-01

    Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information. PMID:17519041
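
    The calibration idea generalizes well: corrupt the annotations at several known rates, measure the downstream precision, fit a regression, and invert it. A toy sketch with a synthetic response, not the GOSeqLite analysis:

      import numpy as np

      rng = np.random.default_rng(1)
      TRUE_ERROR = 0.15          # hidden intrinsic error rate the method should recover
      GOLD_PRECISION = 0.95      # assumed precision of a perfectly annotated set

      def observed_precision(extra_error):
          """Toy response: BLAST-based precision degrades with total error rate."""
          return GOLD_PRECISION - 0.8 * (TRUE_ERROR + extra_error) + rng.normal(0, 0.005)

      injected = np.linspace(0.0, 0.3, 7)                   # known artificial error rates
      precision = np.array([observed_precision(e) for e in injected])

      slope, intercept = np.polyfit(injected, precision, 1) # precision vs injected error
      estimate = (intercept - GOLD_PRECISION) / slope       # solve for intrinsic error
      print(f"estimated intrinsic error rate ~ {estimate:.3f} (true: {TRUE_ERROR})")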

  13. Bit error rate measurement above and below bit rate tracking threshold

    NASA Technical Reports Server (NTRS)

    Kobayaski, H. S.; Fowler, J.; Kurple, W. (Inventor)

    1978-01-01

    Bit error rate is measured by sending a pseudo-random noise (PRN) code test signal simulating digital data through digital equipment to be tested. An incoming signal representing the response of the equipment being tested, together with any added noise, is received and tracked by being compared with a locally generated PRN code. Once the locally generated PRN code matches the incoming signal a tracking lock is obtained. The incoming signal is then integrated and compared bit-by-bit against the locally generated PRN code and differences between bits being compared are counted as bit errors.
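
    The procedure maps naturally onto a few lines of simulation. In this sketch the LFSR, the 1% bit-flip channel, and the omission of the sliding-correlation lock step are all simplifications, not the patented apparatus:

      import numpy as np

      def lfsr_prn(nbits=7, taps=(7, 6), length=127):
          """Fibonacci LFSR (x^7 + x^6 + 1) producing a 0/1 maximal-length PRN."""
          state = [1] * nbits
          out = []
          for _ in range(length):
              out.append(state[-1])
              fb = 0
              for t in taps:
                  fb ^= state[t - 1]
              state = [fb] + state[:-1]
          return np.array(out)

      rng = np.random.default_rng(2)
      prn = lfsr_prn()
      # "Equipment under test": here just a channel flipping bits with p = 0.01.
      received = prn ^ (rng.random(prn.size) < 0.01).astype(int)

      # The tracking/lock step (sliding correlation against the local replica)
      # is omitted; once lock is achieved, errors are counted bit by bit.
      errors = int(np.count_nonzero(prn != received))
      print(f"BER = {errors / prn.size:.4f} ({errors} errors in {prn.size} bits)")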

  14. High Data Rate Quantum Cryptography

    NASA Astrophysics Data System (ADS)

    Kwiat, Paul; Christensen, Bradley; McCusker, Kevin; Kumor, Daniel; Gauthier, Daniel

    2015-05-01

    While quantum key distribution (QKD) systems are now commercially available, the data rate is a limiting factor for some desired applications (e.g., secure video transmission). Most QKD systems receive at most a single random bit per detection event, causing the data rate to be limited by the saturation of the single-photon detectors. Recent experiments have begun to explore using larger degrees of freedom, i.e., temporal or spatial qubits, to optimize the data rate. Here, we continue this exploration using entanglement in multiple degrees of freedom. That is, we use simultaneous temporal and polarization entanglement to reach up to 8.3 bits of randomness per coincident detection. Due to current technology, we are unable to fully secure the temporal degree of freedom against all possible future attacks; however, by assuming a technologically limited eavesdropper, we are able to obtain a 23.4 MB/s secure key rate across an optical table, after error reconciliation and privacy amplification. In this talk, we will describe our high-rate QKD experiment, with a short discussion on our work towards extending this system to ship-to-ship and ship-to-shore communication, aiming to secure the temporal degree of freedom and to implement a 30-km free-space link over a marine environment.

  15. Rates of computational errors for scoring the SIRS primary scales.

    PubMed

    Tyner, Elizabeth A; Frederick, Richard I

    2013-12-01

    We entered item scores for the Structured Interview of Reported Symptoms (SIRS; Rogers, Bagby, & Dickens, 1991) into a spreadsheet and compared computed scores with those hand-tallied by examiners. We found that about 35% of the tests had at least 1 scoring error. Of SIRS scale scores tallied by examiners, about 8% were incorrectly summed. When the errors were corrected, only 1 SIRS classification was reclassified in the fourfold scheme used by the SIRS. We note that mistallied scores on psychological tests are common, and we review some strategies for reducing scale score errors on the SIRS.

  16. Effect of Electronic Editing on Error Rate of Newspaper.

    ERIC Educational Resources Information Center

    Randall, Starr D.

    1979-01-01

    A study of a North Carolina newspaper indicates that newspapers using fully integrated electronic editing systems have fewer errors in spelling, punctuation, sentence construction, hyphenation, and typography than newspapers not using electronic editing. (GT)

  17. The effects of digitizing rate and phase distortion errors on the shock response spectrum

    NASA Technical Reports Server (NTRS)

    Wise, J. H.

    1983-01-01

    Some of the methods used for acquisition and digitization of high-frequency transients in the analysis of pyrotechnic events, such as explosive bolts for spacecraft separation, are discussed with respect to the reduction of errors in the computed shock response spectrum. Equations are given for maximum error as a function of the sampling rate, phase distortion, and slew rate, and the effects of the characteristics of the filter used are analyzed. A filter exhibiting good passband amplitude response, phase response, and step response is a compromise between the flat passband of the elliptic filter and the phase response of the Bessel filter; it is suggested that such a filter be used with a sampling rate of 10f (5 percent).

  18. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  19. Avoiding ambiguity with the Type I error rate in noninferiority trials.

    PubMed

    Kang, Seung-Ho

    2016-01-01

    This review article sets out to examine the Type I error rates used in noninferiority trials. Most papers regarding noninferiority trials only state Type I error rate without mentioning clearly which Type I error rate is evaluated. Therefore, the Type I error rate in one paper is often different from the Type I error rate in another paper, which can confuse readers and makes it difficult to understand papers. Which Type I error rate should be evaluated is related directly to which paradigm is employed in the analysis of noninferiority trial, and to how the historical data are treated. This article reviews the characteristics of the within-trial Type I error rate and the unconditional across-trial Type I error rate which have frequently been examined in noninferiority trials. The conditional across-trial Type I error rate is also briefly discussed. In noninferiority trials comparing a new treatment with an active control without a placebo arm, it is argued that the within-trial Type I error rate should be controlled in order to obtain approval of the new treatment from the regulatory agencies. I hope that this article can help readers understand the difference between two paradigms employed in noninferiority trials.

  1. Simultaneous control of error rates in fMRI data analysis.

    PubMed

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-12-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
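
    The voxel-wise likelihood ratio at the heart of this proposal is easy to state for Gaussian noise with a fixed effect size. A simulation sketch, with all parameters illustrative rather than taken from the paper:

      import numpy as np

      rng = np.random.default_rng(9)
      n_voxels, n_scans, d = 10000, 30, 0.5          # d: assumed effect size
      truth = rng.random(n_voxels) < 0.05            # 5% truly active voxels
      data = rng.normal(0.0, 1.0, (n_voxels, n_scans)) + d * truth[:, None]

      xbar = data.mean(axis=1)
      # Log likelihood ratio of N(d, 1) vs N(0, 1) from n iid scans per voxel.
      log_lr = n_scans * (d * xbar - d ** 2 / 2)
      flagged = log_lr > np.log(32)                  # k = 32: strong-evidence benchmark
      fpr = np.mean(flagged[~truth])                 # per-voxel Type I error rate
      tpr = np.mean(flagged[truth])
      print(f"false-positive rate {fpr:.4f}, true-positive rate {tpr:.3f}")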

  2. An Examination of Negative Halo Error in Ratings.

    ERIC Educational Resources Information Center

    Lance, Charles E.; And Others

    1990-01-01

    A causal model of halo error (HE) is derived. Three hypotheses are formulated to explain findings of negative HE. It is suggested that apparent negative HE may have been misinferred from existing correlational measures of HE, and that positive HE is more prevalent than had previously been thought. (SLD)

  3. Study of bit error rate (BER) for multicarrier OFDM

    NASA Astrophysics Data System (ADS)

    Alshammari, Ahmed; Albdran, Saleh; Matin, Mohammad

    2012-10-01

    Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technique that is being used more and more in recent wideband digital communications. It is known for its ability to handle severe channel conditions, its efficient spectral usage, and its high data rate. Therefore, it has been used in many wired and wireless communication systems such as DSL, wireless networks, and 4G mobile communications. Data streams are modulated and sent over multiple subcarriers using either M-QAM or M-PSK. OFDM has lower inter-symbol interference (ISI) levels because the low data rates of the individual carriers result in long symbol periods. In this paper, the BER performance of OFDM with respect to signal-to-noise ratio (SNR) is evaluated. BPSK modulation is used in a simulation-based system in order to obtain the BER over different wireless channels. These channels include additive white Gaussian noise (AWGN) and fading channels based on Doppler spread and delay spread. Plots of the results are compared with each other after varying some of the key parameters of the system, such as the IFFT size, the number of carriers, and the SNR. The results of the simulation give a visualization of what kind of BER to expect when the signal goes through those channels.
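
    The simulation loop the abstract describes can be sketched compactly for the AWGN case (64 subcarriers and the Eb/N0 sweep are arbitrary choices here, and the fading channels are omitted):

      import numpy as np

      rng = np.random.default_rng(3)
      n_carriers, n_symbols = 64, 2000

      bits = rng.integers(0, 2, size=(n_symbols, n_carriers))
      tx_freq = 2 * bits - 1                          # BPSK mapping: 0 -> -1, 1 -> +1
      tx_time = np.fft.ifft(tx_freq, axis=1) * np.sqrt(n_carriers)  # unit symbol energy

      for ebn0_db in (0, 4, 8):
          n0 = 10 ** (-ebn0_db / 10)                  # Eb = 1 for BPSK with this scaling
          noise = np.sqrt(n0 / 2) * (rng.standard_normal(tx_time.shape)
                                     + 1j * rng.standard_normal(tx_time.shape))
          rx_freq = np.fft.fft(tx_time + noise, axis=1) / np.sqrt(n_carriers)
          ber = np.mean((rx_freq.real > 0).astype(int) != bits)
          print(f"Eb/N0 = {ebn0_db} dB -> BER = {ber:.4f}")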

  4. A long lifetime, low error rate RRAM design with self-repair module

    NASA Astrophysics Data System (ADS)

    Zhiqiang, You; Fei, Hu; Liming, Huang; Peng, Liu; Jishun, Kuang; Shiying, Li

    2016-11-01

    Resistive random access memory (RRAM) is one of the promising candidates for future universal memory. However, it suffers from serious error rate and endurance problems. Exploring a technical solution to enhance endurance and reduce the error rate is therefore in great demand. In this paper, we propose a reliable RRAM architecture that includes two reliability modules: an error correction code (ECC) module and a self-repair module. The ECC module is used to detect errors and decrease the error rate. The self-repair module, proposed here for the first time for RRAM, can obtain the locations of error bits and repair worn-out cells with a repair voltage. Simulation results show that the proposed architecture achieves the lowest error rate and the longest lifetime compared to previous reliable designs. Project supported by the New Century Excellent Talents in University (No. NCET-12-0165) and the National Natural Science Foundation of China (Nos. 61472123, 61272396).

  5. Finding the right coverage: the impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates.

    PubMed

    Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah

    2016-07-01

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffman's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than sequence quality and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with a coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets when coverage was increased from ≥5 to ≥30 at quality score ≥30, respectively. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignments, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates be considered prior to building sequencing libraries, reporting genotyping error rates become standard practice, and that effects of genotyping errors on inference be evaluated in restriction-enzyme-based SNP studies.
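
    The logic of the error estimate (Mendelian violations between known dyads put a floor under the genotyping error rate) can be sketched with simulated 0/1/2 genotypes; the 2% injected error rate is arbitrary:

      import numpy as np

      rng = np.random.default_rng(4)
      n_loci = 5000
      mom = rng.integers(0, 3, n_loci)          # alt-allele counts at biallelic SNPs
      kid = mom.copy()                          # start fully compatible with mother
      flip = rng.random(n_loci) < 0.02          # inject 2% genotyping errors
      kid[flip] = rng.integers(0, 3, flip.sum())

      # Opposite homozygotes (0 vs 2) are unambiguous Mendelian violations:
      # the offspring must carry at least one maternal allele.
      incompatible = np.abs(mom - kid) == 2
      print(f"incompatibility rate = {incompatible.mean():.4f} "
            f"(a lower bound on the error rate)")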

  6. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Astrophysics Data System (ADS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-06-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  7. Controlling Type I Error Rate in Evaluating Differential Item Functioning for Four DIF Methods: Use of Three Procedures for Adjustment of Multiple Item Testing

    ERIC Educational Resources Information Center

    Kim, Jihye

    2010-01-01

    In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present and high possibility of making such an error can weaken the validity of the assessment.…
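
    The record does not name the three adjustment procedures, but typical choices can be compared directly. A sketch with simulated null p-values for 40 hypothetical items and, as an assumption, statsmodels' multiple-testing helper:

      import numpy as np
      from statsmodels.stats.multitest import multipletests

      rng = np.random.default_rng(5)
      pvals = rng.uniform(0, 1, 40)            # null items: uniform p-values

      for method in ("bonferroni", "holm", "fdr_bh"):
          reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
          # Any rejection here is a Type I error, since no item truly shows DIF.
          print(f"{method:10s}: {reject.sum()} items flagged")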

  8. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    ERIC Educational Resources Information Center

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  9. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  10. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  11. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
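
    The pivotal substitution is worth stating. Writing the Gaussian Q-function through the complementary error function and its Meijer G-function representation (a classical identity from the special-functions literature, quoted here rather than derived in this paper):

      Q(x) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{x}{\sqrt{2}}\right),
      \qquad
      \operatorname{erfc}(\sqrt{z}) = \frac{1}{\sqrt{\pi}}\,
        G_{1,2}^{2,0}\!\left(z \,\middle|\, {1 \atop 0,\ 1/2}\right),

    so that

      Q(x) = \frac{1}{2\sqrt{\pi}}\,
        G_{1,2}^{2,0}\!\left(\frac{x^{2}}{2} \,\middle|\, {1 \atop 0,\ 1/2}\right),

    which makes the SER integrand amenable to term-by-term averaging over the combined turbulence and pointing-error PDF.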

  12. Reduction of LNG operator error and equipment failure rates. Topical report, 20 April 1990

    SciTech Connect

    Atallah, S.; Shah, J.N.; Betti, M.

    1990-04-01

    Tables summarizing human error rates and equipment failure frequencies applicable to the LNG industry are presented. Improved training, better supervision, emergency response drills and improved panel design were methods recommended for reducing human error rates. Outright scheduled replacement of critical components, regular inspection and maintenance, and the use of redundant components were reviewed as means for reducing equipment failure rates. The effect of reducing human error and equipment failure rates on the frequency of overfilling an LNG tank were examined. In addition, guidelines for estimating the cost and benefits of these mitigation measures were considered.

  13. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    PubMed

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.
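
    The decision rule of step (2) can be approximated by a running-median filter; the window, tolerance, and data below are illustrative, not the paper's calibrated values:

      import numpy as np

      rng = np.random.default_rng(6)
      # Synthetic interbeat intervals (ms) with an RSA-like oscillation...
      rr = 500 + 60 * np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 10, 300)
      rr[[50, 120, 200]] = [1100, 180, 950]    # ...plus recording artifacts

      def flag_outliers(x, win=11, tol=0.25):
          """True where a value deviates from the centered running median by > tol."""
          pad = win // 2
          xp = np.pad(x, pad, mode="edge")
          med = np.array([np.median(xp[i:i + win]) for i in range(x.size)])
          return np.abs(x - med) / med > tol

      keep = ~flag_outliers(rr)
      cleaned = rr[keep]                       # deletion preserves natural variability
      print(f"removed {np.count_nonzero(~keep)} of {rr.size} intervals")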

  14. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.

  15. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    PubMed

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.

  16. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    PubMed

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  17. The Tukey Honestly Significant Difference Procedure and Its Control of the Type I Error-Rate.

    ERIC Educational Resources Information Center

    Barnette, J. Jackson; McLean, James E.

    Tukey's Honestly Significant Difference (HSD) procedure (J. Tukey, 1953) is probably the most recommended and used procedure for controlling Type I error rate when making multiple pairwise comparisons as follow-ups to a significant omnibus F test. This study compared observed Type I errors with nominal alphas of 0.01, 0.05, and 0.10 compared for…
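
    As a usage sketch (assuming statsmodels is available; the data are simulated under the null, so any flagged pair is a Type I error of the kind the study tabulates):

      import numpy as np
      from statsmodels.stats.multicomp import pairwise_tukeyhsd

      rng = np.random.default_rng(7)
      scores = rng.normal(50, 10, 90)          # all three groups share one true mean
      groups = np.repeat(["A", "B", "C"], 30)

      result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
      print(result.summary())                  # the reject column flags pairwise diffs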

  18. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  19. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  1. Bit-Error-Rate Performance of a Gigabit Ethernet O-CDMA Technology Demonstrator (TD)

    SciTech Connect

    Hernandez, V J; Mendez, A J; Bennett, C V; Lennon, W J

    2004-07-09

    An O-CDMA TD based on 2-D (wavelength/time) codes is described, with bit-error-rate (BER) and eye-diagram measurements given for eight users. Simulations indicate that the TD can support 32 asynchronous users.

  2. An improved lane detection algorithm and the definition of the error rate standard

    NASA Astrophysics Data System (ADS)

    Yu, Chung-Hsien; Su, Chung-Yen

    2012-04-01

    In this paper, we propose a method to reduce the spurious assistant lane marks caused by pulse noise, and we define an objective standard for measuring the error rate of the assistant lane marks. To address the noise problem, we mainly use Sobel edge detection in place of Canny edge detection, and we apply a Gaussian filter to suppress noise. Finally, we improve the ellipse ROI size in the tracking stage, raising performance from 32 to 39 frames per second (FPS). Previously, the assistant lane marks' error rate was judged subjectively; to avoid subjective judgments, we propose an objective method that defines the error rate as a standard, and we use both the performance and the error rate to choose the ellipse ROI parameter.
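
    A rough sketch of the smoothing-plus-Sobel front end described above (the file name and kernel sizes are placeholders, the ROI tracking is omitted, and OpenCV is assumed):

      import cv2
      import numpy as np

      frame = cv2.imread("road_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

      blurred = cv2.GaussianBlur(frame, (5, 5), 0)        # suppress impulse-like noise
      gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
      gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
      edges = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))

      # Lane-mark candidates would next be searched inside an elliptical ROI
      # that tracks the previously detected lane position (not reproduced here).
      cv2.imwrite("edges.png", edges)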

  3. Sensitivity to Error Fields in NSTX High Beta Plasmas

    SciTech Connect

    Park, Jong-Kyu; Menard, Jonathan E.; Gerhardt, Stefan P.; Buttery, Richard J.; Sabbagh, Steve A.; Bell, Steve E.; LeBlanc, Benoit P.

    2011-11-07

    It was found that the error-field threshold decreases at high β in NSTX, although the density correlation in conventional threshold scaling implies the threshold should increase, since the higher-β plasmas in our study have higher plasma density. This greater sensitivity to error fields in higher-β plasmas is due to error-field amplification by the plasma. When the effect of amplification is included with ideal plasma response calculations, the conventional density correlation can be restored and the threshold scaling becomes more consistent with low-β plasmas. However, it was also found that the threshold can change significantly depending on plasma rotation: when plasma rotation was reduced by non-resonant magnetic braking, a further increase in sensitivity to error fields was observed.

  4. Conjunction error rates on a continuous recognition memory test: little evidence for recollection.

    PubMed

    Jones, Todd C; Atchley, Paul

    2002-03-01

    Two experiments examined conjunction memory errors on a continuous recognition task where the lag between parent words (e.g., blackmail, jailbird) and later conjunction lures (blackbird) was manipulated. In Experiment 1, contrary to expectations, the conjunction error rate was highest at the shortest lag (1 word) and decreased as the lag increased. In Experiment 2 the conjunction error rate increased significantly from a 0- to a 1-word lag, then decreased slightly from a 1- to a 5-word lag. The results provide mixed support for simple familiarity and dual-process accounts of recognition. Paradoxically, searching for an item in memory does not appear to be a good encoding task.

  5. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    DOE PAGES

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.; ...

    2015-12-01

    Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.

  6. High Rate Digital Demodulator ASIC

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder; Sheikh, Salman; Koubek, Steve; Hoy, Scott; Gray, Andrew

    1998-01-01

    The architecture of a High Rate (600 Megabits per second) Digital Demodulator (HRDD) ASIC capable of demodulating BPSK and QPSK modulated data is presented in this paper. The advantages of all-digital processing include increased flexibility and reliability with reduced reproduction costs. Conventional serial digital processing would require high processing rates, necessitating a hardware implementation in a technology other than CMOS, such as Gallium Arsenide (GaAs), which has high cost and power requirements. It is more desirable to use CMOS technology with its lower power requirements and higher gate density. However, digital demodulation of high data rates in CMOS requires parallel algorithms to process the sampled data at a rate lower than the data rate. The parallel processing algorithms described here were developed jointly by NASA's Goddard Space Flight Center (GSFC) and the Jet Propulsion Laboratory (JPL). The resulting all-digital receiver has the capability to demodulate BPSK, QPSK, OQPSK, and DQPSK at data rates in excess of 300 Megabits per second (Mbps) per channel. This paper provides an overview of the parallel architecture and features of the HRDD ASIC, as well as an overview of the hardware architectures used to create flexibility beyond conventional high-rate analog or hybrid receivers. This flexibility includes a wide range of data rates, modulation schemes, and operating environments. In conclusion, it is shown how this high-rate digital demodulator can be used with an off-the-shelf A/D converter and a flexible analog front end, both numerically computer controlled, to produce a very flexible, low-cost, high-rate digital receiver.

  7. Threshold-Based Bit Error Rate for Stopping Iterative Turbo Decoding in a Varying SNR Environment

    NASA Astrophysics Data System (ADS)

    Mohamad, Roslina; Harun, Harlisya; Mokhtar, Makhfudzah; Adnan, Wan Azizun Wan; Dimyati, Kaharudin

    2017-01-01

    Online bit error rate (BER) estimation (OBE) has been used as a stopping criterion for iterative turbo decoding. However, these stopping criteria only work at high signal-to-noise ratios (SNRs) and fail to terminate early at low SNRs, which adds iterations and increases computational complexity. The failure of the stopping criteria is caused by an unsuitable BER threshold, which is obtained by estimating the expected BER performance at high SNRs; this threshold does not indicate the correct termination point according to convergence and non-convergence outputs (CNCO). Hence, in this paper, a threshold computation based on the BER of CNCO is proposed for an OBE stopping criterion (OBEsc). The results show that OBEsc is capable of terminating early in a varying SNR environment. The optimum number of iterations achieved by the OBEsc allows large savings in the number of decoding iterations and decreases the delay of iterative turbo decoding.
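
    The control flow of a threshold-based stopping rule is simple to sketch; the decoder below is a stub standing in for a real turbo iteration, and the thresholds are arbitrary:

      import random

      def decode_iteration(iteration):
          """Stub: pretend each iteration roughly halves the estimated BER."""
          return 0.1 * (0.5 ** iteration) * random.uniform(0.8, 1.2)

      def turbo_decode(max_iters=8, ber_threshold=1e-3):
          previous = float("inf")
          for it in range(1, max_iters + 1):
              ber_estimate = decode_iteration(it)
              if ber_estimate < ber_threshold:   # converged: terminate early
                  return it, ber_estimate
              if ber_estimate >= previous:       # non-convergent output: give up early
                  return it, ber_estimate
              previous = ber_estimate
          return max_iters, ber_estimate

      random.seed(8)
      iters, ber = turbo_decode()
      print(f"stopped after {iters} iterations, estimated BER {ber:.2e}")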

  8. High Rate GPS on Volcanoes

    NASA Astrophysics Data System (ADS)

    Mattia, M.

    2005-12-01

    High-rate GPS data processing can be considered the "new deal" in geodetic monitoring of active volcanoes. In fact, before an eruption, transient episodes of ground displacement related to the dynamics of magmatic fluids can be revealed through careful analysis of high-rate GPS data. In the very first phases of an eruption, real-time processing of high-rate GPS data can be used by civil protection authorities to follow the opening of fracture fields on the slopes of the volcano. During an eruption, large explosions, the opening of vents, the migration of fracture fields, landslides, and other dangerous phenomena can be followed and their damage potential estimated by the authorities. Examples from the recent eruption of Stromboli volcano and from the current high-rate GPS monitoring activities on Mt. Etna are reported, with the aim of showing the great potential and the perspectives of this technique.

  9. Mean and Random Errors of Visual Roll Rate Perception from Central and Peripheral Visual Displays

    NASA Technical Reports Server (NTRS)

    Vandervaart, J. C.; Hosman, R. J. A. W.

    1984-01-01

    A large number of roll rate stimuli, covering rates from zero to plus or minus 25 deg/sec, were presented to subjects in random order at 2 sec intervals. Subjects were to estimate the magnitude of perceived roll rate stimuli presented on either a central display, on displays in the peripheral field of vision, or on all displays simultaneously. Responses were made by way of a digital keyboard device, and stimulus exposure times were varied. The present experiment differs from earlier perception tasks by the same authors in that mean rate perception error (and standard deviation) was obtained as a function of rate stimulus magnitude, whereas the earlier experiments only yielded mean absolute error magnitude. Moreover, in the present experiment, all stimulus rates had an equal probability of occurrence, whereas the earlier tests featured a Gaussian stimulus probability density function. The results yield a good illustration of the nonlinear functions relating the rate presented to the rate perceived by human observers or operators.

  10. The effect of sampling on estimates of lexical specificity and error rates.

    PubMed

    Rowland, Caroline F; Fletcher, Sarah L

    2006-11-01

    Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.

  11. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    PubMed

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
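
    The optimism of resubstitution that the smoothing correction targets is easy to exhibit numerically (assuming scikit-learn; the dimension is chosen comparable to the sample size, in the spirit of Kolmogorov asymptotics):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(10)
      p, n = 50, 60                                # dimension comparable to sample size
      X = np.vstack([rng.normal(0.0, 1.0, (n, p)),
                     rng.normal(0.3, 1.0, (n, p))])
      y = np.array([0] * n + [1] * n)

      lda = LinearDiscriminantAnalysis().fit(X, y)
      resub = 1 - lda.score(X, y)                  # resubstitution: optimistically biased

      Xt = np.vstack([rng.normal(0.0, 1.0, (2000, p)),
                      rng.normal(0.3, 1.0, (2000, p))])
      yt = np.array([0] * 2000 + [1] * 2000)
      actual = 1 - lda.score(Xt, yt)               # large test set approximates actual error
      print(f"resubstitution error {resub:.3f} vs actual error {actual:.3f}")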

  12. Bit error rate investigation of spin-transfer-switched magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Wang, Zihui; Zhou, Yuchen; Zhang, Jing; Huai, Yiming

    2012-10-01

    A method is developed to enable fast bit error rate (BER) characterization of spin-transfer-torque magnetic random access memory magnetic tunnel junction (MTJ) cells without integration with a complementary metal-oxide semiconductor circuit. By utilizing the reflected signal from the devices under test, the measurement setup allows fast measurement of bit error rates at >10^6 writing events per second. It is further shown that this method provides a time-domain capability to examine the MTJ resistance states during a switching event, which can assist write error analysis in great detail. The BER of a set of spin-transfer-torque MTJ cells has been evaluated using this method, and bit-error-free operation (down to 10^-8) for optimized in-plane MTJ cells has been demonstrated.

  13. Measuring radiation induced changes in the error rate of fiber optic data links

    NASA Astrophysics Data System (ADS)

    Decusatis, Casimer; Benedict, Mel

    1996-12-01

    The purpose of this work is to investigate the effects of ionizing (gamma) radiation exposure on the bit error rate (BER) of an optical fiber data communication link. While it is known that exposure to high radiation dose rates will darken optical fiber permanently, comparatively little work has been done to evaluate moderate dose rates. The resulting increase in fiber attenuation over time represents an additional penalty in the link optical power budget, which can degrade the BER if it is not accounted for in the link design. Modeling the link to predict this penalty is difficult, and it requires detailed information about the fiber composition that may not be available to the link designer. We describe a laboratory method for evaluating the effects of moderate dose rates on both single-mode and multimode fiber. Once a sample of fiber has been measured, the data can be fit to a simple model for predicting (at least to first order) BER as a function of radiation dose for fibers of similar composition.
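
    To first order, the attenuation penalty feeds into BER through the usual Gaussian receiver model. A sketch under the assumption of a thermal-noise-limited receiver, where the Q factor scales with received optical power (all numbers illustrative):

      import numpy as np
      from scipy.special import erfc

      def ber_from_q(q):
          """Standard Gaussian-noise receiver model: BER = 0.5*erfc(Q/sqrt(2))."""
          return 0.5 * erfc(q / np.sqrt(2))

      q0 = 7.0                                     # pre-irradiation Q (BER ~ 1e-12)
      for added_loss_db in (0.0, 0.5, 1.0, 2.0):   # radiation-induced attenuation
          q = q0 * 10 ** (-added_loss_db / 10)     # Q degrades with received power
          print(f"{added_loss_db:.1f} dB extra loss -> BER = {ber_from_q(q):.2e}")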

  14. Compensatory and Noncompensatory Information Integration and Halo Error in Performance Rating Judgments.

    ERIC Educational Resources Information Center

    Kishor, Nand

    1992-01-01

    The relationship between compensatory and noncompensatory information integration and the intensity of the halo effect in performance rating was studied. Seventy University of British Columbia (Canada) students rated 27 teacher profiles. That the way performance information is mentally integrated affects the intensity of halo error was supported.…

  15. Asymptotic error-rate analysis of FSO links using transmit laser selection over gamma-gamma atmospheric turbulence channels with pointing errors.

    PubMed

    García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen

    2012-01-30

    Since free-space optical (FSO) systems are usually installed on high buildings and building sway may cause vibrations in the transmitted beam, an unsuitable alignment between transmitter and receiver together with fluctuations in the irradiance of the transmitted optical beam due to the atmospheric turbulence can severely degrade the performance of optical wireless communication systems. In this paper, asymptotic bit error-rate (BER) performance for FSO communication systems using transmit laser selection over atmospheric turbulence channels with pointing errors is analyzed. Novel closed-form asymptotic expressions are derived when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions (weak to strong), following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results provide significant insight into the impact of various system and channel parameters, showing that the diversity order is independent of the pointing error when the equivalent beam radius at the receiver is at least 2(min{α,β})^(1/2) times the value of the pointing error displacement standard deviation at the receiver. Moreover, since proper FSO transmission requires transmitters with accurate control of their beamwidth, asymptotic expressions are used to find the optimum beamwidth that minimizes the BER at different turbulence conditions. Simulation results are further demonstrated to confirm the accuracy and usefulness of the derived results, showing that asymptotic expressions here obtained lead to simple bounds on the bit error probability that get tighter over a wider range of signal-to-noise ratio (SNR) as the turbulence strength increases.

  16. The effect of voice recognition software on comparative error rates in radiology reports.

    PubMed

    McGurk, S; Brauer, K; Macfarlane, T V; Duncan, K A

    2008-10-01

    This study sought to confirm whether reports generated in a department of radiology contain more errors if generated using voice recognition (VR) software than if traditional dictation-transcription (DT) is used. All radiology reports generated over a 1-week period in a British teaching hospital were assessed. The presence of errors and their impact on the report were assessed. Data collected included the type of report, the site of dictation, the experience of the operator, and whether English was the first language of the operator. 1887 reports were reviewed. 1160 (61.5%) were dictated using VR and 727 reports (38.5%) were generated by DT. 71 errors (3.8% of all reports) were identified. 56 errors were made using VR (4.8% of VR reports), whereas 15 errors were identified in DT reports (2.1% of transcribed reports). The difference in report errors between these two dictation methods was statistically significant (p = 0.002). Of the 71 reports containing errors, 37 (52.1%) had errors that affected understanding. Other factors were also identified that significantly increased the likelihood of errors in a VR-generated report, such as working in a busy inpatient environment (p<0.001) and having a language other than English as a first language (p = 0.034). Operator grade was not significantly associated with increased errors. In conclusion, using VR significantly increases the number of reports containing errors. Errors using VR are significantly more likely to occur in noisy areas with a high workload and are more likely to be made by radiologists for whom English is not their first language.

  17. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-01-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  18. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

    The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; identification enables resequencing across differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit, which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes one-third less time than equivalent end-to-end error correction under similar conditions.

  19. A stochastic node-failure network with individual tolerable error rate at multiple sinks

    NASA Astrophysics Data System (ADS)

    Huang, Cheng-Fu; Lin, Yi-Kuei

    2014-05-01

    Many enterprises consider several criteria during data transmission, such as availability, delay, loss, and out-of-order packets, from the service level agreement (SLA) point of view. Hence, internet service providers and customers are gradually focusing on the tolerable error rate in the transmission process. The internet service provider should provide the specified demand and keep a certain transmission error rate, as committed in its SLA with each customer. This paper mainly evaluates the system reliability with which the demand can be fulfilled under the tolerable error rate at all sinks by addressing a stochastic node-failure network (SNFN), in which each component (edge or node) has several capacities and a transmission error rate. An efficient algorithm is first proposed to generate all lower boundary points, the minimal capacity vectors satisfying the demand and the tolerable error rate for all sinks. The system reliability can then be computed in terms of such points by applying a recursive sum of disjoint products. A benchmark network and a practical network in the United States are demonstrated to illustrate the utility of the proposed algorithm. The computational complexity of the proposed algorithm is also analyzed.

  20. Greater Heart Rate Responses to Acute Stress Are Associated with Better Post-Error Adjustment in Special Police Cadets

    PubMed Central

    Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W.; Zhang, Kan; Zhang, Liang; Wu, Jianhui

    2016-01-01

    High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and showed a trend toward a negative correlation with the increase in post-error miss rate in the subsequent Go/No-go task. These results suggest that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280

  1. Estimation of the minimum mRNA splicing error rate in vertebrates.

    PubMed

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information, thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing, which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing, we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1, in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci, at approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons.

  2. Minimum attainable RMS attitude error using co-located rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1989-01-01

    A closed form analytical expression for the minimum attainable attitude error (as well as the error rate) in a flexible beam by feedback control using co-located rate sensors is announced. For simplicity, researchers consider a beam clamped at one end with an offset mass (antenna) at the other end where the controls and sensors are located. Both control moment generators and force actuators are provided. The results apply to any beam-like lattice-type truss, and provide the kind of performance criteria needed under CSI (Controls-Structures-Integrated) optimization.

  3. High Data Rate Instrument Study

    NASA Technical Reports Server (NTRS)

    Schober, Wayne; Lansing, Faiza; Wilson, Keith; Webb, Evan

    1999-01-01

    The High Data Rate Instrument Study was a joint effort between the Jet Propulsion Laboratory (JPL) and the Goddard Space Flight Center (GSFC). The objectives were to assess the characteristics of future high data rate Earth observing science instruments and then to assess the feasibility of developing data processing systems and communications systems required to meet those data rates. Instruments and technology were assessed for technology readiness dates of 2000, 2003, and 2006. The highest data rate instruments are hyperspectral and synthetic aperture radar instruments which are capable of generating 3.2 Gigabits per second (Gbps) and 1.3 Gbps, respectively, with a technology readiness date of 2003. These instruments would require storage of 16.2 Terabits (Tb) of information (RF communications case of two orbits of data) or 40.5 Tb of information (optical communications case of five orbits of data) with a technology readiness date of 2003. Onboard storage capability in 2003 is estimated at 4 Tb; therefore, not all the data created can be stored without processing or compression. Of the 4 Tb of stored data, RF communications can only send about one third of the data to the ground, while optical communications is estimated at 6.4 Tb across all three technology readiness dates of 2000, 2003, and 2006 which were used in the study. The study includes analysis of the onboard processing and communications technologies at these three dates and potential systems to meet the high data rate requirements. In the 2003 case, 7.8% of the data can be stored and downlinked by RF communications while 10% of the data can be stored and downlinked with optical communications. The study conclusion is that only 1 to 10% of the data generated by high data rate instruments will be sent to the ground from now through 2006 unless revolutionary changes in spacecraft design and operations such as intelligent data extraction are developed.

  4. Parallel Transmission Pulse Design with Explicit Control for the Specific Absorption Rate in the Presence of Radiofrequency Errors

    PubMed Central

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L.; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L.; Guerin, Bastien

    2016-01-01

    Purpose A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. Methods The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors (“worst-case SAR”) is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Results Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled “worst-case SAR” in the presence of errors of this magnitude at minor cost of the excitation profile quality. Conclusion Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. PMID:26147916
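
    The iterative design loop described above can be sketched in a few lines. The SAR and pulse-design models below are deliberately crude stand-ins (SAR taken proportional to pulse power, and the worst case bounded by scaling the amplitude by the 8% error quoted in the abstract); they illustrate the control flow, not the authors' pTx optimizer.

        # Sketch of the worst-case-SAR-constrained design loop (toy models).

        SAR_LIMIT = 10.0   # W/kg, assumed local SAR safety limit
        AMP_ERROR = 0.08   # 8% worst-case RF amplitude error (from the abstract)

        def design_pulse(sar_constraint):
            # Stand-in for the pTx pulse design: returns the pulse power
            # (and hence nominal SAR) allowed by the current constraint.
            return sar_constraint

        def worst_case_sar(pulse_power):
            # An amplitude error of e scales power, and hence SAR, by (1 + e)^2.
            return pulse_power * (1.0 + AMP_ERROR) ** 2

        constraint = SAR_LIMIT
        while True:
            power = design_pulse(constraint)
            wc = worst_case_sar(power)
            if wc <= SAR_LIMIT:
                break
            constraint *= SAR_LIMIT / wc   # tighten the constraint, re-design
        print(f"final constraint {constraint:.2f} W/kg, worst-case SAR {wc:.2f} W/kg")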

  5. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However, in a very limited number of cases, crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case; here, case-specific probabilities of undetected errors are needed.

  6. Bit error rate testing of a proof-of-concept model baseband processor

    NASA Technical Reports Server (NTRS)

    Stover, J. B.; Fujikawa, G.

    1986-01-01

    Bit-error-rate tests were performed on a proof-of-concept baseband processor. The BBP, which operates at an intermediate frequency in the C-Band, demodulates, demultiplexes, routes, remultiplexes, and remodulates digital message segments received from one ground station for retransmission to another. Test methods are discussed and test results are compared with the Contractor's test results.

  7. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    2001-01-01

    Submitted the Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and identify characteristics predictive of score reliability variations. Results for 67 analyses generally support the internal…

  8. Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.

    2010-01-01

    We use ray tracing software to model various levels of spacecraft shielding complexity, together with energy deposition pulse height analysis, to study how shielding affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.

  9. Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.

    ERIC Educational Resources Information Center

    Goodrich, Gregory L.; And Others

    1979-01-01

    A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)

  10. Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; Capraro, Robert M.; Henson, Robin K.

    The Mathematics Anxiety Rating Scale (MARS) (F. Richardson and R. Suinn, 1972) was submitted to a reliability generalization analysis to characterize the variability of measurement error in MARS scores across administrations and to identify possible study characteristics that are predictive of reliability variation. The meta-analysis was performed…

  11. Type I Error Rate and Power of Some Alternative Methods to the Independent Samples "t" Test.

    ERIC Educational Resources Information Center

    Nthangeni, Mbulaheni; Algina, James

    2001-01-01

    Examined Type I error rates and power for four tests for treatment control studies in which a larger treatment mean may be accompanied by a larger treatment variance and examined these aspects of the independent samples "t" test and the Welch test. Evaluated each test and suggested conditions for the use of each approach. (SLD)

  12. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, the sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs that control the type 1 error rate.

  13. Rate-distortion optimal video transport over IP allowing packets with bit errors.

    PubMed

    Harmanci, Oztan; Tekalp, A Murat

    2007-05-01

    We propose new models and methods for rate-distortion (RD) optimal video delivery over IP when packets with bit errors are also delivered. In particular, we propose RD optimal methods for slicing and unequal error protection (UEP) of packets over IP allowing transmission of packets with bit errors. The proposed framework can be employed in a classical independent-layer transport model for optimal slicing, as well as in a cross-layer transport model for optimal slicing and UEP, where the forward error correction (FEC) coding is performed at the link layer, but the application controls the FEC code rate with the constraint that a given IP packet is subject to constant channel protection. The proposed method uses a novel dynamic programming approach to determine the optimal slicing and UEP configuration for each video frame in a practical manner that is compliant with the AVC/H.264 standard. We also propose new rate and distortion estimation techniques at the encoder side in order to efficiently evaluate the objective function for a slice configuration. The cross-layer formulation option effectively determines which regions of a frame should be protected better; hence, it can be considered as a spatial UEP scheme. We successfully demonstrate, by means of experimental results, that each component of the proposed system provides significant gains, up to 2.0 dB, compared to competitive methods.

  14. The Impact of Sex of the Speaker, Sex of the Rater and Profanity Type of Language Trait Errors in Speech Evaluation: A Test of the Rating Error Paradigm.

    ERIC Educational Resources Information Center

    Bock, Douglas G.; And Others

    1984-01-01

    This study (1) demonstrates the negative impact of profanity in a public speech and (2) sheds light on the conceptualization of the term "rating error." Implications for classroom teaching are discussed. (PD)

  15. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10−3 errors/(particle/cm2), while the MTTF is approximately 110.7 h. PMID:27583533
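
    As a rough consistency check on the two figures just quoted, interpreting the SFER as errors per unit particle fluence lets an assumed flux convert it into an MTTF; the flux value below is purely illustrative and chosen only to show the arithmetic.

        # Back-of-the-envelope link between SFER and MTTF (illustrative only):
        # if sigma is errors per (particle/cm^2) and phi is the particle flux
        # in particles/cm^2/h, then MTTF = 1 / (sigma * phi).

        sigma = 1e-3   # errors per (particle/cm^2), from the abstract
        phi = 9.0      # particles/cm^2/h, assumed flux (not from the paper)
        mttf = 1.0 / (sigma * phi)
        print(f"MTTF ~ {mttf:.1f} h")   # ~111 h at this assumed flux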

  16. Pupillary response predicts multiple object tracking load, error rate, and conscientiousness, but not inattentional blindness.

    PubMed

    Wright, Timothy J; Boot, Walter R; Morgan, Chelsea S

    2013-09-01

    Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB.

  17. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    PubMed

    Kugelman, Jeffrey R; Wiley, Michael R; Nagle, Elyse R; Reyes, Daniel; Pfeffer, Brad P; Kuhn, Jens H; Sanchez-Lockhart, Mariano; Palacios, Gustavo F

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10-5) of all compared methods.

  18. Bit Error Rate Performance Limitations Due to Raman Amplifier Induced Crosstalk in a WDM Transmission System

    NASA Astrophysics Data System (ADS)

    Tithi, F. H.; Majumder, S. P.

    2017-03-01

    Analysis is carried out for a single span wavelength division multiplexing (WDM) transmission system with distributed Raman amplification to find the effect of amplifier induced crosstalk on the bit error rate (BER) with different system parameters. The results are evaluated in terms of crosstalk power induced in a WDM channel due to Raman amplification, optical signal to crosstalk ratio (OSCR) and BER at any distance for different pump powers and numbers of WDM channels. The results show that the WDM system suffers a power penalty due to crosstalk, which becomes significant at higher pump power, larger channel separation, and larger numbers of WDM channels. At a BER of 10-9 and a pump power of 20 mW, the power penalty over a length of 180 km is 8.7 dB for N = 32 WDM channels and 10.5 dB for N = 64, and it increases with pump power. Analytical results are validated by simulation.

  19. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
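
    The decimation idea translates directly into matrix arithmetic over GF(2). The sketch below advances a small LFSR eight steps at once by raising its companion matrix to the eighth power; the polynomial x^4 + x + 1 and the step size n*k = 8 are illustrative choices, not the patent's parameters.

        # Advance an LFSR n*k steps at a time via a decimation matrix over GF(2).

        def mat_mult(A, B):
            n = len(A)
            return [[sum(A[i][t] & B[t][j] for t in range(n)) & 1
                     for j in range(n)] for i in range(n)]

        def mat_pow(A, e):
            n = len(A)
            R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
            while e:
                if e & 1:
                    R = mat_mult(R, A)
                A = mat_mult(A, A)
                e >>= 1
            return R

        # Companion matrix of the primitive polynomial x^4 + x + 1.
        C = [[0, 0, 0, 1],
             [1, 0, 0, 1],
             [0, 1, 0, 0],
             [0, 0, 1, 0]]

        D = mat_pow(C, 8)   # decimation matrix: one application = 8 LFSR steps
        state = [1, 0, 0, 0]
        state8 = [sum(D[i][j] & state[j] for j in range(4)) & 1 for i in range(4)]
        print(state8)       # the state reached after 8 single-step shifts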

  20. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  1. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.

  2. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Astrophysics Data System (ADS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  3. Safety Aspects of Pulsed Dose Rate Brachytherapy: Analysis of Errors in 1,300 Treatment Sessions

    SciTech Connect

    Koedooder, Kees; Wieringen, Niek van; Grient, Hans N.B. van der; Herten, Yvonne R.J. van; Pieters, Bradley R.; Blank, Leo

    2008-03-01

    Purpose: To determine the safety of pulsed-dose-rate (PDR) brachytherapy by analyzing errors and technical failures during treatment. Methods and Materials: More than 1,300 patients underwent treatment with PDR brachytherapy, using five PDR remote afterloaders. Most patients were treated with consecutive pulse schemes, also outside regular office hours. Tumors were located in the breast, esophagus, prostate, bladder, gynecological sites, anus/rectum, orbit, and head/neck, with a miscellaneous group of small numbers, such as the lip, nose, and bile duct. Errors and technical failures were analyzed for 1,300 treatment sessions, for which nearly 20,000 pulses were delivered. For each tumor localization, the number and type of errors were determined, as well as which localizations were more error prone than others. Results: By routinely using the built-in dummy check source, only 0.2% of all pulses showed an error during the phase of the pulse when the active source was outside the afterloader. Localizations treated using flexible catheters had greater error frequencies than those treated with straight needles or rigid applicators. Disturbed pulse frequencies were in the range of 0.6% for the anus/rectum on a classic version 1 afterloader to 14.9% for orbital tumors using a version 2 afterloader. Exceeding the planned overall treatment time by >10% was observed in only 1% of all treatments. Patients received their dose as originally planned in 98% of all treatments. Conclusions: According to the experience in our institute with 1,300 PDR treatments, we found that PDR is a safe brachytherapy treatment modality, both during and outside of office hours.

  4. High Resolution, High Frame Rate Video Technology

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The Workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) State of the art in the video system performance; (2) Development plan for the HHV system; (3) Advanced technology for image gathering, coding, and processing; (4) Data compression applied to HHV; (5) Data transmission networks; and (6) Results of the users' requirements survey conducted by NASA.

  5. Rate-Distortion Optimization for Stereoscopic Video Streaming with Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Tan, A. Serdar; Aksay, Anil; Akar, Gozde Bozdagi; Arikan, Erdal

    2008-12-01

    We consider an error-resilient stereoscopic streaming system that uses an H.264-based multiview video codec and a rateless Raptor code for recovery from packet losses. One aim of the present work is to suggest a heuristic methodology for modeling the end-to-end rate-distortion (RD) characteristic of such a system. Another aim is to show how to make use of such a model to optimally select the parameters of the video codec and the Raptor code to minimize the overall distortion. Specifically, the proposed system models the RD curve of the video encoder and the performance of the channel codec to jointly derive the optimal encoder bit rates and unequal error protection (UEP) rates specific to layered stereoscopic video streaming. We define analytical RD curve modeling for each layer that includes the interdependency of these layers. A heuristic analytical model of the performance of Raptor codes is also defined. Furthermore, the distortion in stereoscopic video quality caused by packet losses is estimated. Finally, analytical models and estimated single-packet loss distortions are used to minimize the end-to-end distortion and to obtain optimal encoder bit rates and UEP rates. The simulation results clearly demonstrate the significant quality gain against the nonoptimized schemes.

  6. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    PubMed Central

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10−5) of all compared methods. PMID:28182717

  7. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
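
    The cover-sequence idea can be illustrated in a few lines: XORing the data with a pseudo-random bit sequence breaks up long runs of identical bits, and applying the same XOR at the receiver restores the data. The PRBS7 taps and seed below are illustrative, not the design selected for the shuttle link.

        # Cover-sequence scrambling sketch: XOR the data with a pseudo-random
        # bit sequence to break up long runs (raising transition density);
        # the same XOR at the receiver recovers the original data.

        def prbs(length, taps=(7, 6), state=0x7F, nbits=7):
            out = []
            for _ in range(length):
                out.append(state & 1)
                fb = 0
                for t in taps:                  # XOR the tapped stages
                    fb ^= (state >> (t - 1)) & 1
                state = (state >> 1) | (fb << (nbits - 1))
            return out

        data = [0] * 16                         # worst case: no transitions
        cover = prbs(len(data))
        scrambled = [d ^ c for d, c in zip(data, cover)]
        print(scrambled)                        # transitions introduced
        print([s ^ c for s, c in zip(scrambled, cover)] == data)   # True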

  8. Effect of Vertical Rate Error on Recovery from Loss of Well Clear Between UAS and Non-Cooperative Intruders

    NASA Technical Reports Server (NTRS)

    Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2016-01-01

    are suppressed, for all vertical error rate thresholds examined. However, results also show that in roughly 35% of the encounters where a vertical maneuver was selected, forcing the UAS to do a horizontal maneuver instead increased the severity of the loss of well-clear for that encounter. Finally, results showed a small reduction in the number of severe losses of well-clear when the high performance UAS (2000 fpm climb and descent rate) was allowed to maneuver vertically, and the vertical rate error was below 500 fpm. Overall, the results show that using a single vertical rate threshold is not advisable, and that limiting a UAS to horizontal maneuvers when vertical rate errors are above 175 fpm can make a UAS less safe about a third of the time. It is suggested that the hard limit be removed, and system manufacturers instructed to account for their own UAS performance, as well as vertical rate error and encounter geometry, when determining whether or not to provide vertical guidance to regain well-clear.

  9. Analysis of bit error rate for modified T-APPM under weak atmospheric turbulence channel

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Zhang, Qi; Wang, Yong-jun; Liu, Bo; Zhang, Li-jia; Wang, Kai-min; Xiao, Fei; Deng, Chao-gong

    2013-12-01

    T-APPM combines TCM (trellis-coded modulation) with APPM (amplitude pulse-position modulation) and has broad application prospects in space optical communication. Set partitioning, used in the standard T-APPM algorithm, is optimal in a multi-carrier system, but whether it remains optimal for APPM, which is a single-carrier system, was unknown. To address this question, we first studied the atmospheric channel model under weak turbulence; we then proposed a modified T-APPM algorithm that uses Gray code mapping instead of set-partitioning mapping; finally, we simulated both algorithms with the Monte-Carlo method. Simulation results showed that, at a bit error rate of 10-4, the modified T-APPM algorithm achieved a 0.4 dB gain in SNR, effectively improving the system's error performance.
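
    For reference, the Gray mapping in question is the standard binary-reflected Gray code, under which adjacent symbol indices differ in exactly one bit, so the most likely symbol errors produce single bit errors:

        # Binary-reflected Gray code: neighboring indices differ in one bit.
        def gray(n):
            return n ^ (n >> 1)

        for n in range(8):
            print(n, format(gray(n), '03b'))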

  10. Preliminary error budget for an optical ranging system: Range, range rate, and differenced range observables

    NASA Technical Reports Server (NTRS)

    Folkner, W. M.; Finger, M. H.

    1990-01-01

    Future missions to the outer solar system or human exploration of Mars may use telemetry systems based on optical rather than radio transmitters. Pulsed laser transmission can be used to deliver telemetry rates of about 100 kbits/sec with an efficiency of several bits for each detected photon. Navigational observables that can be derived from timing pulsed laser signals are discussed. Error budgets are presented based on nominal ground stations and spacecraft-transceiver designs. Assuming a pulsed optical uplink signal, two-way range accuracy may approach the few centimeter level imposed by the troposphere uncertainty. Angular information can be achieved from differenced one-way range using two ground stations with the accuracy limited by the length of the available baseline and by clock synchronization and troposphere errors. A method of synchronizing the ground station clocks using optical ranging measurements is presented. This could allow differenced range accuracy to reach the few centimeter troposphere limit.
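
    The few-centimeter figures quoted above map directly onto pulse-timing accuracy through the two-way range relation R = c*tau/2, so a timing uncertainty sigma_tau gives a range uncertainty of c*sigma_tau/2. The jitter value below is simply the one corresponding to about 1 cm, not a number taken from the study.

        # Two-way range accuracy from round-trip pulse timing accuracy.
        c = 299_792_458.0      # speed of light, m/s
        sigma_tau = 67e-12     # s, illustrative round-trip timing uncertainty
        sigma_R = c * sigma_tau / 2.0
        print(f"range uncertainty: {sigma_R * 100:.2f} cm")   # ~1 cm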

  11. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  12. Study of flow rate induced measurement error in flow-through nano-hole plasmonic sensor

    PubMed Central

    Tu, Long; Huang, Liang; Wang, Tianyi; Wang, Wenhui

    2015-01-01

    Flow-through gold film perforated with periodically arrayed sub-wavelength nano-holes can cause extraordinary optical transmission (EOT), which has recently emerged as a label-free surface plasmon resonance sensor in biochemical detection by measuring the transmission spectral shift. This paper describes a systematic study of the effect of the microfluidic field on the spectrum of EOT associated with the porous gold film. To detect biochemical molecules, the sub-micron-thick film is free-standing in a microfluidic field and thus subject to hydrodynamic deformation. The film deformation alone may cause a spectral shift as measurement error, which is coupled with the spectral shift as real signal associated with the molecules. However, this microfluid-induced measurement error has long been overlooked in the field and needs to be identified in order to improve the measurement accuracy. Therefore, we have conducted simulation and analytic analysis to investigate how the microfluidic flow rate affects the EOT spectrum and verified the effect through experiment with a sandwiched device combining Au/Cr/Si3N4 nano-hole film and polydimethylsiloxane microchannels. We found a significant spectral blue shift associated with even small flow rates, for example, 12.60 nm for 4.2 μl/min. This measurement error corresponds to 90 times the optical resolution of the current state-of-the-art commercially available spectrometer or 8400 times the limit of detection. This severe measurement error suggests that we should pay attention to the microfluidic parameter settings for EOT-based flow-through nano-hole sensors and adopt the right scheme to improve the measurement accuracy. PMID:26649131

  13. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straightforward manner. Finally, an exact expression of the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
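
    The CF method referred to here evaluates the error probability directly from the characteristic function of the decision variable; one standard route is the Gil-Pelaez inversion formula, quoted below as a general identity, not necessarily the paper's exact final expression:

        P_e = \Pr(Z < 0) = \frac{1}{2} - \frac{1}{\pi} \int_0^{\infty} \frac{\operatorname{Im}\{\Phi_Z(\omega)\}}{\omega}\, d\omega

    where \Phi_Z(\omega) = E[e^{j\omega Z}] is the characteristic function of the decision statistic Z formed from the desired signal, the MAI, and the fading and noise terms.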

  14. Error-prone DnaE2 Balances the Genome Mutation Rates in Myxococcus xanthus DK1622.

    PubMed

    Peng, Ran; Chen, Jiang-He; Feng, Wan-Wan; Zhang, Zheng; Yin, Jun; Li, Ze-Shuo; Li, Yue-Zhong

    2017-01-01

    dnaE encodes an alpha subunit of the tripartite protein complex of DNA polymerase III that is responsible for the replication of the bacterial genome. The dnaE gene is often duplicated in many bacteria, and the duplicated dnaE gene has been reported to be dispensable for cell survival and error-prone in DNA replication, for reasons that remain unclear. In this study, we found that all sequenced myxobacterial genomes possessed two dnaE genes. The duplicate dnaE genes were both highly conserved but evolved divergently, suggesting their importance in myxobacteria. Using Myxococcus xanthus DK1622 as a model, we confirmed that dnaE1 (MXAN_5844) was essential for cell survival, while dnaE2 (MXAN_3982) was dispensable and encoded an error-prone enzyme for replication. The deletion of dnaE2 had small effects on cellular growth and social motility, but significantly decreased the development and sporulation abilities, which could be recovered by the complementation of dnaE2. The expression of dnaE1 was always much higher than that of dnaE2 in both the growth and developmental stages. However, overexpression of dnaE2 could not make dnaE1 deletable, probably due to their protein structural and functional divergences. The dnaE2 overexpression not only improved the growth, development and sporulation abilities, but also raised the genome mutation rate of M. xanthus. We argue that the weakly expressed, error-prone DnaE2 acts as a balancer of genome mutation rates, providing mutation rates that allow adaptation to new environments while avoiding the damage that high mutation rates cause to cells.

  15. Error-prone DnaE2 Balances the Genome Mutation Rates in Myxococcus xanthus DK1622

    PubMed Central

    Peng, Ran; Chen, Jiang-he; Feng, Wan-wan; Zhang, Zheng; Yin, Jun; Li, Ze-shuo; Li, Yue-zhong

    2017-01-01

    dnaE encodes an alpha subunit of the tripartite protein complex of DNA polymerase III that is responsible for the replication of the bacterial genome. The dnaE gene is often duplicated in many bacteria, and the duplicated dnaE gene has been reported to be dispensable for cell survival and error-prone in DNA replication, for reasons that remain unclear. In this study, we found that all sequenced myxobacterial genomes possessed two dnaE genes. The duplicate dnaE genes were both highly conserved but evolved divergently, suggesting their importance in myxobacteria. Using Myxococcus xanthus DK1622 as a model, we confirmed that dnaE1 (MXAN_5844) was essential for cell survival, while dnaE2 (MXAN_3982) was dispensable and encoded an error-prone enzyme for replication. The deletion of dnaE2 had small effects on cellular growth and social motility, but significantly decreased the development and sporulation abilities, which could be recovered by the complementation of dnaE2. The expression of dnaE1 was always much higher than that of dnaE2 in both the growth and developmental stages. However, overexpression of dnaE2 could not make dnaE1 deletable, probably due to their protein structural and functional divergences. The dnaE2 overexpression not only improved the growth, development and sporulation abilities, but also raised the genome mutation rate of M. xanthus. We argue that the weakly expressed, error-prone DnaE2 acts as a balancer of genome mutation rates, providing mutation rates that allow adaptation to new environments while avoiding the damage that high mutation rates cause to cells. PMID:28203231

  16. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.

  17. Theoretical Bit Error Rate Performance of the Kalman Filter Excisor for FM Interference

    DTIC Science & Technology

    1992-12-01

    A Kalman filter, digitally servo-controlled by a phase-locked loop, proves quasi-optimum for demodulating an FM interferer. Since the interference is presumed to be stronger than the signal or the noise, the Kalman filter locks onto the interference and permits its excision from the received signal.

  18. Digitally modulated bit error rate measurement system for microwave component evaluation

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo W.; Budinger, James M.

    1989-01-01

    The NASA Lewis Research Center has developed a unique capability for evaluation of the microwave components of a digital communication system. This digitally modulated bit-error-rate (BER) measurement system (DMBERMS) features a continuous data digital BER test set, a data processor, a serial minimum shift keying (SMSK) modem, noise generation, and computer automation. Application of the DMBERMS has provided useful information for the evaluation of existing microwave components and of design goals for future components. The design and applications of this system for digitally modulated BER measurements are discussed.

  19. Creation and implementation of department-wide structured reports: an analysis of the impact on error rate in radiology reports.

    PubMed

    Hawkins, C Matthew; Hall, Seth; Zhang, Bin; Towbin, Alexander J

    2014-10-01

    The purpose of this study was to evaluate and compare textual error rates and subtypes in radiology reports before and after implementation of department-wide structured reports. Randomly selected radiology reports that were generated following the implementation of department-wide structured reports were evaluated for textual errors by two radiologists. For each report, the text was compared to the corresponding audio file. Errors in each report were tabulated and classified. Error rates were compared to results from a prior study performed prior to implementation of structured reports. Calculated error rates included the average number of errors per report, average number of nongrammatical errors per report, the percentage of reports with an error, and the percentage of reports with a nongrammatical error. Identical versions of voice-recognition software were used for both studies. A total of 644 radiology reports were randomly evaluated as part of this study. There was a statistically significant reduction in the percentage of reports with nongrammatical errors (33 to 26%; p = 0.024). The likelihood of at least one missense omission error (omission errors that changed the meaning of a phrase or sentence) occurring in a report was significantly reduced from 3.5 to 1.2% (p = 0.0175). A statistically significant reduction in the likelihood of at least one commission error (retained statements from a standardized report that contradict the dictated findings or impression) occurring in a report was also observed (3.9 to 0.8%; p = 0.0007). Carefully constructed structured reports can help to reduce certain error types in radiology reports.

  20. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

    PubMed

    Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

    2014-06-01

    Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
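
    The spectral quantities named above are linked by standard formulas: coherence gives a frequency-resolved SNR via SNR(f) = coh(f)/(1 - coh(f)), and the Shannon rate follows as the integral of log2(1 + SNR(f)) over frequency. A minimal sketch with a toy signal follows; the nperseg argument plays the role of the window size whose choice trades the two error types against each other, and all signal parameters are illustrative.

        # Spectral estimate of an information rate from stimulus/response data.
        import numpy as np
        from scipy.signal import coherence

        rng = np.random.default_rng(0)
        fs = 1000.0                                  # sampling rate, Hz
        stimulus = rng.standard_normal(100_000)
        response = np.convolve(stimulus, np.ones(5) / 5, mode="same") \
                   + 0.5 * rng.standard_normal(stimulus.size)

        f, coh = coherence(stimulus, response, fs=fs, nperseg=1024)
        snr = coh / (1.0 - coh)                      # coherence-derived SNR(f)
        rate = np.trapz(np.log2(1.0 + snr), f)       # bits per second
        print(f"estimated information rate: {rate:.1f} bit/s")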

  1. Effect of a misspecification of response rates on type I and type II errors, in a phase II Simon design.

    PubMed

    Baey, Charlotte; Le Deley, Marie-Cécile

    2011-07-01

    Phase-II trials are a key stage in the clinical development of a new treatment. Their main objective is to provide the required information for a go/no-go decision regarding a subsequent phase-III trial. In single arm phase-II trials, widely used in oncology, this decision relies on the comparison of efficacy outcomes observed in the trial to historical controls. The false positive rate generally accepted in phase-II trials, around 10%, contrasts with the very high attrition rate of new compounds tested in phase-III trials, estimated at about 60%. We assumed that this gap could partly be explained by the misspecification of the response rate expected with standard treatment, leading to erroneous hypotheses tested in the phase-II trial. We computed the false positive probability of a defined design under various hypotheses of expected efficacy probability. Similarly we calculated the power of the trial to detect the efficacy of a new compound for different expected efficacy rates. Calculations were done considering a binary outcome, such as the response rate, with a decision rule based on a Simon two-stage design. When analysing a single-arm phase-II trial, based on a design with a pre-specified null hypothesis, a 5% absolute error in the expected response rate leads to a false positive rate of about 30% when it is supposed to be 10%. This inflation of type-I error varies only slightly according to the hypotheses of the initial design. Single-arm phase-II trials poorly control for the false positive rate. Randomised phase-II trials should, therefore, be more often considered.
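
    The type-I-error computation behind this argument is elementary for a Simon two-stage rule: the trial continues past stage 1 if more than r1 of n1 patients respond, and the treatment is declared promising if more than r of n respond in total. A minimal sketch follows, using the classic design r1/n1 = 3/13, r/n = 12/43 (for p0 = 0.20 versus p1 = 0.40) as an illustration; evaluating it at a misspecified true rate shows the inflation the abstract describes.

        # Probability that a Simon two-stage design declares success when the
        # true response rate is p (design parameters are illustrative).
        from math import comb

        def binom_pmf(k, n, p):
            return comb(n, k) * p**k * (1.0 - p)**(n - k)

        def prob_success(p, r1, n1, r, n):
            n2 = n - n1
            total = 0.0
            for x1 in range(r1 + 1, n1 + 1):         # responses passing stage 1
                tail2 = sum(binom_pmf(x2, n2, p)
                            for x2 in range(max(0, r + 1 - x1), n2 + 1))
                total += binom_pmf(x1, n1, p) * tail2
            return total

        print(prob_success(0.20, 3, 13, 12, 43))   # type 1 error at the assumed p0
        print(prob_success(0.25, 3, 13, 12, 43))   # inflated if p0 was off by 5%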

  2. Data-driven region-of-interest selection without inflating Type I error rate.

    PubMed

    Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard

    2017-01-01

    In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies.

  3. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    NASA Astrophysics Data System (ADS)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law, 1965, which states that the number of transistors in a given space would double every two years. The most available memory architectures today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in the micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operations. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  4. Non-detection errors in a survey of persistent, highly-detectable vegetation species.

    PubMed

    Clarke, Kenneth D; Lewis, Megan; Brandle, Robert; Ostendorf, Bertram

    2012-01-01

    Rare, small or annual vegetation species are widely known to be imperfectly detected with single site surveys by most conventional vegetation survey methods. However, the detectability of common, persistent vegetation species is assumed to be high, but without supporting research. In this study, we evaluate the extent of false-negative errors of perennial vegetation species in a systematic vegetation survey in arid South Australia. Analysis was limited to the seven most easily detected persistent vegetation species and controlled for observer skill. By comparison of methodologies, we then predict the magnitude of non-detection error rates in a second survey. The analysis revealed that all but one of the highly detectable perennial vegetation species were imperfectly detected (detection probabilities ranged from 0.22 to 0.83). While focussed on the Australian rangelands, the implications of this study are far reaching. Inferences drawn from systematic vegetation surveys that fail to identify and account for non-detection errors should be considered potentially flawed. The identification of this problem in vegetation surveying is long overdue. By comparison, non-detection has been a widely acknowledged, and dealt with, problem in fauna surveying for decades. We recommend that, where necessary, vegetation survey methodology adopt the methods developed in fauna surveying to cope with non-detection errors.

  5. Analytical Evaluation of Bit Error Rate Performance of a Free-Space Optical Communication System with Receive Diversity Impaired by Pointing Error

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2015-06-01

    Analysis is carried out to evaluate the bit error rate conditioned on a given value of pointing error for a Free Space Optical (FSO) link with multiple receivers using Equal Gain Combining (EGC). The probability density function (pdf) of the output signal-to-noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BER of SISO and SIMO FSO links is analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance is evaluated for several values of the pointing jitter parameters and the number of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement in receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10⁻¹⁰. It is also noticed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.
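
    The averaging step described above can be sketched numerically. The following Monte Carlo example is not the paper's exact channel model: it assumes Rayleigh-distributed radial pointing displacement, a Gaussian-beam geometric loss, and a Q-function BER for IM/DD OOK, with all parameter values invented for illustration.

    ```python
    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(1)

    def q_func(x):
        """Gaussian Q-function."""
        return 0.5 * erfc(x / np.sqrt(2.0))

    def avg_ber_egc(n_rx, snr0_db, jitter_sigma, beam_width, n_trials=200_000):
        """Average BER of an IM/DD OOK FSO link with N-branch EGC under
        Gaussian pointing jitter (illustrative model, not the paper's)."""
        snr0 = 10 ** (snr0_db / 10.0)
        # radial pointing displacement per branch: Rayleigh(sigma)
        r = rng.rayleigh(jitter_sigma, size=(n_trials, n_rx))
        # Gaussian-beam geometric loss on each branch
        h = np.exp(-2.0 * r**2 / beam_width**2)
        # EGC adds signal amplitudes; electrical SNR ~ (sum h)^2 / N
        snr = snr0 * h.sum(axis=1) ** 2 / n_rx
        return q_func(np.sqrt(snr)).mean()

    for n in (1, 2, 4):
        print(n, "receivers -> average BER ~", avg_ber_egc(n, 20.0, 0.3, 1.0))
    ```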

  6. Critical error rate of quantum-key-distribution protocols versus the size and dimensionality of the quantum alphabet

    NASA Astrophysics Data System (ADS)

    Sych, Denis V.; Grishanin, Boris A.; Zadkov, Victor N.

    2004-11-01

    A quantum-information analysis of how the size and dimensionality of the quantum alphabet affect the critical error rate of quantum-key-distribution (QKD) protocols is given using the example of two QKD protocols—the six-state and the ∞-state (i.e., continuous-alphabet) protocols. In the case of a two-dimensional Hilbert space, it is shown that, under certain assumptions, increasing the number of letters in the quantum alphabet up to infinity slightly increases the critical error rate. Additionally increasing the dimensionality of the Hilbert space leads to a further increase in the critical error rate.

  7. TCP Flow Level Performance Evaluation on Error Rate Aware Scheduling Algorithms in Evolved UTRA and UTRAN Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Uchida, Masato; Tsuru, Masato; Oie, Yuji

    We present a TCP flow level performance evaluation of error rate aware scheduling algorithms in Evolved UTRA and UTRAN networks. With the introduction of the error rate, which is the probability of transmission failure under a given wireless condition and instantaneous transmission rate, the transmission efficiency can be improved without sacrificing the balance between system performance and user fairness. The performance comparison with and without error rate awareness is carried out for various TCP traffic models, user channel conditions, schedulers with different fairness constraints, and automatic repeat request (ARQ) types. The results indicate that error rate awareness can make the resource allocation more reasonable and effectively improve the system and individual performance, especially for users in poor channel conditions.
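
    A minimal sketch of the core idea, assuming a proportional-fair style scheduler whose metric weights the instantaneous rate by the success probability 1 − p_err; the traffic and channel numbers are invented for illustration, not taken from the study.

    ```python
    # Sketch of error-rate-aware proportional-fair scheduling (assumed
    # formulation): each slot, pick the user maximizing expected goodput
    # r_i * (1 - p_i) normalized by that user's running average throughput.

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_slots, beta = 4, 10_000, 0.01
    avg_tput = np.full(n_users, 1e-6)        # running average throughput
    served = np.zeros(n_users)

    for _ in range(n_slots):
        rate = rng.uniform(1.0, 10.0, n_users)    # instantaneous rates (Mb/s)
        p_err = rng.uniform(0.0, 0.5, n_users)    # per-slot error probability
        metric = rate * (1.0 - p_err) / avg_tput  # error-rate-aware PF metric
        u = int(np.argmax(metric))
        got = rate[u] * (rng.random() > p_err[u]) # transmission may fail
        served[u] += got
        avg_tput *= (1 - beta)
        avg_tput[u] += beta * got

    print("per-user goodput share:", served / served.sum())
    ```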

  8. Carbon and sediment accumulation in the Everglades (USA) during the past 4000 years: rates, drivers, and sources of error

    USGS Publications Warehouse

    Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.

    2012-01-01

    Tropical and sub-tropical wetlands are considered to be globally important sources for greenhouse gases but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida, to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. Accelerator mass spectrometry dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m⁻² yr⁻¹ for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (~0.2 mm yr⁻¹) tracked the correspondingly slow rise in sea level (0.35 mm yr⁻¹) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.

  9. Bit Error Rate Performance of Partially Coherent Dual-Branch SSC Receiver over Composite Fading Channels

    NASA Astrophysics Data System (ADS)

    Milić, Dejan N.; Đorđević, Goran T.

    2013-01-01

    In this paper, we study the effects of imperfect reference signal recovery on the bit error rate (BER) performance of a dual-branch switch-and-stay combining receiver over Nakagami-m fading/gamma shadowing channels with arbitrary parameters. The average BER of quaternary phase shift keying is evaluated under the assumption that the reference carrier signal is extracted from the received modulated signal. We compute numerical results illustrating the simultaneous influence of the average signal-to-noise ratio per bit, fading severity, shadowing, phase-locked loop bandwidth-bit duration (BLTb) product, and switching threshold on the BER performance. The effects of BLTb on receiver performance under different channel conditions are emphasized. The optimal switching threshold that minimizes the BER under given channel and receiver parameters is determined.

  10. Accuracy of High-Rate GPS for Seismology

    NASA Technical Reports Server (NTRS)

    Elosegui, P.; Davis, J. L.; Oberlander, D.; Baena, R.; Ekstrom, G.

    2006-01-01

    We built a device for translating a GPS antenna on a positioning table to simulate the ground motions caused by an earthquake. The earthquake simulator is accurate to better than 0.1 mm in position, and provides the "ground truth" displacements for assessing the technique of high-rate GPS. We found that the root-mean-square error of the 1-Hz GPS position estimates over the 15-min duration of the simulated seismic event was 2.5 mm, with approximately 96% of the observations in error by less than 5 mm, independent of GPS antenna motion. The error spectrum of the GPS estimates is approximately flicker noise, with a 50% decorrelation time for the position error of approximately 1.6 s. We find that, for the particular event simulated, the spectrum of surface deformations exceeds the GPS error spectrum within a finite band. More studies are required to determine whether a generally optimal bandwidth exists for a target group of seismic events.
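
    The two summary statistics quoted above, the RMS error and the 50% decorrelation time, can be computed from any 1 Hz position-error series as sketched below; the correlated-noise series used here is a synthetic stand-in, not flicker noise or real GPS data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs, n = 1.0, 900                         # 1 Hz positions, 15 min
    err = np.cumsum(rng.normal(0, 0.5, n))   # correlated-noise stand-in (mm)
    err -= err.mean()

    rms = np.sqrt(np.mean(err**2))

    # 50% decorrelation time: first lag where autocorrelation drops below 0.5
    ac = np.correlate(err, err, mode="full")[n - 1:]
    ac /= ac[0]
    tau50 = np.argmax(ac < 0.5) / fs

    print(f"RMS error: {rms:.2f} mm, 50% decorrelation time: {tau50:.1f} s")
    ```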

  11. A high rate proportional chamber

    SciTech Connect

    Henderson, R.; Fraszer, W.; Openshaw, R.; Sheffer, G.; Salomon, M.; Dew, S.; Marans, J.; Wilson, P.

    1987-02-01

    Gas mixtures with high specific ionization allow the use of small interelectrode distances while still maintaining full efficiency. With the short electron drift distances the timing resolution is also improved. The authors have built and operated two 25 cm² chambers with small interelectrode distances. Single-wire detector cells have also been built to test gas mixture lifetimes. Various admixtures of CF₄, DME, isobutane, ethane and argon have been tested. Possible applications of such chambers are as beam profile monitors, position tagging of rare events and front-end chambers in spectrometers.

  12. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    SciTech Connect

    Sterling, D; Ehler, E

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.
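
    For reference, a brute-force one-dimensional version of the gamma comparison used in this study (3%/3 mm, global normalization, no dose threshold) might look as follows; the dose profiles are synthetic and only illustrate the computation.

    ```python
    import numpy as np

    def gamma_1d(ref_dose, eval_dose, x, dose_tol=0.03, dist_tol=3.0):
        """Brute-force 1-D global gamma index (3%/3 mm by default).
        ref_dose, eval_dose: dose profiles on common coordinates x (mm)."""
        d_ref = ref_dose[:, None]
        d_eval = eval_dose[None, :]
        dx = (x[:, None] - x[None, :]) / dist_tol
        dd = (d_eval - d_ref) / (dose_tol * ref_dose.max())
        return np.sqrt(dx**2 + dd**2).min(axis=1)   # gamma per reference point

    x = np.linspace(-50, 50, 201)                    # mm
    ref = np.exp(-x**2 / (2 * 15.0**2))              # reference profile
    shifted = np.exp(-(x - 2.0)**2 / (2 * 15.0**2))  # 2 mm systematic shift
    g = gamma_1d(ref, shifted, x)
    print(f"gamma pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
    ```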

  13. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    PubMed

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-02-10

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.
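
    As a reminder of the quantity driving these results, Wright's FST for a single biallelic SNP can be computed from subpopulation allele frequencies as below; the frequencies shown are illustrative, not values from the study.

    ```python
    def wright_fst(p1, p2, w1=0.5, w2=0.5):
        """Wright's FST for one biallelic SNP from subpopulation allele
        frequencies p1, p2 with population weights w1, w2."""
        p_bar = w1 * p1 + w2 * p2
        h_t = 2 * p_bar * (1 - p_bar)            # expected heterozygosity, total
        h_s = w1 * 2 * p1 * (1 - p1) + w2 * 2 * p2 * (1 - p2)  # within subpops
        return (h_t - h_s) / h_t

    # illustrative allele frequencies for two ancestral populations
    print(f"FST = {wright_fst(0.10, 0.45):.3f}")
    ```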

  14. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…

  15. The Visual Motor Integration Test: High Interjudge Reliability, High Potential For Diagnostic Error.

    ERIC Educational Resources Information Center

    Snyder, Peggy P.; And Others

    1981-01-01

    Investigated scoring agreement among three different training levels of Visual Motor Integration Test (VMI) diagnosticians. Correlational data demonstrated high interexaminer reliabilities; however, there were gross errors in precision after raw scores had been converted into VMI age equivalent scores. (Author/RC)

  16. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy

    SciTech Connect

    Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-15

    Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with {sup 192}Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from {+-}5 to {+-}15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when

  17. Measuring error rates in genomic perturbation screens: gold standards for human functional genomics

    PubMed Central

    Hart, Traver; Brown, Kevin R; Sircoulomb, Fabrice; Rottapel, Robert; Moffat, Jason

    2014-01-01

    Technological advancement has opened the door to systematic genetics in mammalian cells. Genome-scale loss-of-function screens can assay fitness defects induced by partial gene knockdown, using RNA interference, or complete gene knockout, using new CRISPR techniques. These screens can reveal the basic blueprint required for cellular proliferation. Moreover, comparing healthy to cancerous tissue can uncover genes that are essential only in the tumor; these genes are targets for the development of specific anticancer therapies. Unfortunately, progress in this field has been hampered by off-target effects of perturbation reagents and poorly quantified error rates in large-scale screens. To improve the quality of information derived from these screens, and to provide a framework for understanding the capabilities and limitations of CRISPR technology, we derive gold-standard reference sets of essential and nonessential genes, and provide a Bayesian classifier of gene essentiality that outperforms current methods on both RNAi and CRISPR screens. Our results indicate that CRISPR technology is more sensitive than RNAi and that both techniques have nontrivial false discovery rates that can be mitigated by rigorous analytical methods. PMID:24987113

  18. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
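
    The simulation design described above can be reproduced in miniature. The sketch below estimates the power of the log-scale linear-regression t-test for an exponential trend with lognormal sampling error at CV = 40%; it uses a two-sided test and an arbitrary baseline count, so exact numbers will differ from the study's.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    def power_loglinear(trend, n_years, cv=0.40, alpha=0.05, n_reps=1000):
        """Monte Carlo power of the linear-regression t-test on log counts
        for an exponential trend with lognormal sampling error."""
        sigma = np.sqrt(np.log(1 + cv**2))   # lognormal sigma for given CV
        years = np.arange(n_years)
        hits = 0
        for _ in range(n_reps):
            counts = 100 * (1 + trend) ** years
            counts = counts * rng.lognormal(-sigma**2 / 2, sigma, n_years)
            res = stats.linregress(years, np.log(counts))
            hits += res.pvalue < alpha
        return hits / n_reps

    for n in (10, 50):
        print(f"n={n} years, +5%/yr trend: power ~ {power_loglinear(0.05, n):.2f}")
    ```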

  19. A call for more transparent reporting of error rates: the quality of AFLP data in ecological and evolutionary research.

    PubMed

    Crawford, Lindsay A; Koscinski, Daria; Keyghobadi, Nusha

    2012-12-01

    Despite much discussion of the importance of quantifying and reporting genotyping error in molecular studies, it is still not standard practice in the literature. This is particularly a concern for amplified fragment length polymorphism (AFLP) studies, where differences in laboratory, peak-calling and locus-selection protocols can generate data sets varying widely in genotyping error rate, the number of loci used and potentially estimates of genetic diversity or differentiation. In our experience, papers rarely provide adequate information on AFLP reproducibility, making meaningful comparisons among studies difficult. To quantify the extent of this problem, we reviewed the current molecular ecology literature (470 recent AFLP articles) to determine the proportion of studies that report an error rate and follow established guidelines for assessing error. Fifty-four per cent of recent articles do not report any assessment of data set reproducibility. Of those studies that do claim to have assessed reproducibility, the majority (~90%) either do not report a specific error rate or do not provide sufficient details to allow the reader to judge whether error was assessed correctly. Even of the papers that do report an error rate and provide details, many (≥23%) do not follow recommended standards for quantifying error. These issues also exist for other marker types such as microsatellites, and next-generation sequencing techniques, particularly those which use restriction enzymes for fragment generation. Therefore, we urge all researchers conducting genotyping studies to estimate and more transparently report genotyping error using existing guidelines and encourage journals to enforce stricter standards for the publication of genotyping studies.

  20. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  1. Bit error rate analysis of free-space optical system with spatial diversity over strong atmospheric turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Krishnan, Prabu; Sriram Kumar, D.

    2014-12-01

    Free-space optical communication (FSO) is emerging as an attractive alternative for overcoming connectivity bottlenecks. It can be used for transmitting signals over common lands and properties that the sender or receiver may not own. The performance of an FSO system depends on the random environmental conditions. The bit error rate (BER) performance of a differential phase shift keying FSO system is investigated. A distributed strong atmospheric turbulence channel with pointing error is considered for the BER analysis. Here, the system models are developed for single-input, single-output-FSO (SISO-FSO) and single-input, multiple-output-FSO (SIMO-FSO) systems. Closed-form mathematical expressions are derived for the average BER with various combining schemes in terms of the Meijer G-function.

  2. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, W. S.; Burkhart, J. F.; Kylling, A.

    2015-08-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in daily integrated irradiance and albedo.
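
    The direct-beam part of the tilt error follows from simple cosine-response geometry, as sketched below. Because this ignores the diffuse component and spectral integration, the numbers differ somewhat from the study's spectrally integrated results.

    ```python
    import numpy as np

    def direct_irradiance_error(sza_deg, tilt_deg, rel_azimuth_deg):
        """Relative error in measured direct irradiance for a sensor tilted
        by tilt_deg, with the tilt direction offset from the solar azimuth
        by rel_azimuth_deg (pure cosine-response geometry)."""
        sza, tilt, phi = np.radians([sza_deg, tilt_deg, rel_azimuth_deg])
        cos_true = np.cos(sza)
        cos_tilted = (np.cos(sza) * np.cos(tilt)
                      + np.sin(sza) * np.sin(tilt) * np.cos(phi))
        return cos_tilted / cos_true - 1.0

    # worst case: sensor tilted directly toward the sun at 60 deg solar zenith
    for tilt in (1, 3, 5):
        print(f"tilt {tilt} deg: {100 * direct_irradiance_error(60, tilt, 0):+.1f}%")
    ```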

  3. Research on controlling middle spatial frequency error of high gradient precise aspheric by pitch tool

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan; Zhong, Xianyun

    2016-09-01

    Extreme optical fabrication projects such as EUV and X-ray optical systems, which are representative of today's most advanced optical manufacturing technology, place special requirements on optical surface quality. In synchrotron radiation (SR) beamlines, mirrors of high shape accuracy are always used at grazing incidence. In nanolithography systems, middle spatial frequency errors lead to small-angle scattering or flare that reduces the contrast of the image. The slope error is defined over a given horizontal length as the increase or decrease in form error at the end point relative to the starting point. The quality of reflective optical elements can be described by their deviation from the ideal shape at different spatial frequencies. Usually one distinguishes between the figure error, the low spatial frequency part ranging from the aperture length down to 1 mm, and the mid and high spatial frequency parts, from 1 mm to 1 μm and from 1 μm to some 10 nm, respectively. This paper first discusses the relationship between the slope error and the middle spatial frequency error, both of which describe the optical surface error along the form profile. Experimental research is then conducted on a high-gradient precision aspheric surface with a pitch tool, with the aim of restraining the middle spatial frequency error.

  4. Modelling non-linear redshift-space distortions in the galaxy clustering pattern: systematic errors on the growth rate parameter

    NASA Astrophysics Data System (ADS)

    de la Torre, Sylvain; Guzzo, Luigi

    2012-11-01

    We investigate the ability of state-of-the-art redshift-space distortion models for the galaxy anisotropic two-point correlation function, ξ(r⊥, r∥), to recover precise and unbiased estimates of the linear growth rate of structure f, when applied to catalogues of galaxies characterized by a realistic bias relation. To this aim, we make use of a set of simulated catalogues at z = 0.1 and 1 with different luminosity thresholds, obtained by populating dark matter haloes from a large N-body simulation using halo occupation prescriptions. We examine the most recent developments in redshift-space distortion modelling, which account for non-linearities on both small and intermediate scales produced, respectively, by randomized motions in virialized structures and non-linear coupling between the density and velocity fields. We consider the possibility of including the linear component of galaxy bias as a free parameter and directly estimate the growth rate of structure f. Results are compared to those obtained using the standard dispersion model, over different ranges of scales. We find that the model of Taruya et al., the most sophisticated one considered in this analysis, provides in general the most unbiased estimates of the growth rate of structure, with systematic errors within ±4 per cent over a wide range of galaxy populations spanning luminosities between L > L* and L > 3L*. The scale dependence of galaxy bias plays a role in recovering unbiased estimates of f when fitting quasi-non-linear scales. Its effect is particularly severe for the most luminous galaxies, for which systematic effects in the modelling might be more difficult to mitigate and have to be further investigated. Finally, we also test the impact of neglecting the presence of non-negligible velocity bias with respect to mass in the galaxy catalogues. This can produce an additional systematic error of the order of 1-3 per cent depending on the redshift, comparable to the statistical errors that we

  5. Serialized quantum error correction protocol for high-bandwidth quantum repeaters

    NASA Astrophysics Data System (ADS)

    Glaudell, A. N.; Waks, E.; Taylor, J. M.

    2016-09-01

    Advances in single-photon creation, transmission, and detection suggest that sending quantum information over optical fibers may have losses low enough to be correctable using a quantum error correcting code (QECC). Such error-corrected communication is equivalent to a novel quantum repeater scheme, but crucial questions regarding implementation and system requirements remain open. Here we show that long-range entangled bit generation with rates approaching 10⁸ entangled bits per second may be possible using a completely serialized protocol, in which photons are generated, entangled, and error corrected via sequential, one-way interactions with as few matter qubits as possible. Provided loss and error rates of the required elements are below the threshold for quantum error correction, this scheme demonstrates improved performance over transmission of single photons. We find improvement in entangled bit rates at large distances using this serial protocol and various QECCs. In particular, at a total distance of 500 km with fiber loss rates of 0.3 dB km⁻¹, logical gate failure probabilities of 10⁻⁵, photon creation and measurement error rates of 10⁻⁵, and a gate speed of 80 ps, we find the maximum single repeater chain entangled bit rates of 51 Hz at a 20 m node spacing and 190 000 Hz at a 43 m node spacing for the [[3,1,2

  6. Estimation of genotyping error rate from repeat genotyping, unintentional recaptures and known parent-offspring comparisons in 16 microsatellite loci for brown rockfish (Sebastes auriculatus).

    PubMed

    Hess, Maureen A; Rhydderch, James G; LeClair, Larry L; Buckley, Raymond M; Kawase, Mitsuhiro; Hauser, Lorenz

    2012-11-01

    Genotyping errors are present in almost all genetic data and can affect biological conclusions of a study, particularly for studies based on individual identification and parentage. Many statistical approaches can incorporate genotyping errors, but usually need accurate estimates of error rates. Here, we used a new microsatellite data set developed for brown rockfish (Sebastes auriculatus) to estimate genotyping error using three approaches: (i) repeat genotyping 5% of samples, (ii) comparing unintentionally recaptured individuals and (iii) Mendelian inheritance error checking for known parent-offspring pairs. In each data set, we quantified genotyping error rate per allele due to allele drop-out and false alleles. Genotyping error rate per locus revealed an average overall genotyping error rate by direct count of 0.3%, 1.5% and 1.7% (0.002, 0.007 and 0.008 per allele error rate) from replicate genotypes, known parent-offspring pairs and unintentionally recaptured individuals, respectively. By direct-count error estimates, the recapture and known parent-offspring data sets revealed an error rate four times greater than estimated using repeat genotypes. There was no evidence of correlation between error rates and locus variability for all three data sets, and errors appeared to occur randomly over loci in the repeat genotypes, but not in recaptures and parent-offspring comparisons. Furthermore, there was no correlation in locus-specific error rates between any two of the three data sets. Our data suggest that repeat genotyping may underestimate true error rates and may not estimate locus-specific error rates accurately. We therefore suggest using methods for error estimation that correspond to the overall aim of the study (e.g. known parent-offspring comparisons in parentage studies).
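
    The direct-count estimate used here has a simple form: mismatched alleles between replicate genotypes divided by the total number of allele comparisons. A minimal sketch, with invented genotypes rather than the study's data:

    ```python
    def per_allele_error_rate(replicate_pairs):
        """Direct-count per-allele genotyping error rate from repeat genotypes.
        replicate_pairs: list of ((a1, a2), (b1, b2)) genotype pairs per locus."""
        mismatches = total = 0
        for (a1, a2), (b1, b2) in replicate_pairs:
            # compare sorted alleles so heterozygote order doesn't matter
            g1, g2 = sorted((a1, a2)), sorted((b1, b2))
            mismatches += sum(x != y for x, y in zip(g1, g2))
            total += 2
        return mismatches / total

    pairs = [((120, 124), (120, 124)),   # concordant replicate
             ((120, 124), (120, 120)),   # allelic dropout -> 1 mismatch
             ((118, 118), (118, 118))]   # concordant replicate
    print(f"error rate per allele: {per_allele_error_rate(pairs):.3f}")
    ```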

  7. Structure of turbulence at high shear rate

    NASA Technical Reports Server (NTRS)

    Lee, Moon Joo; Kim, John; Moin, Parviz

    1990-01-01

    The structure of homogeneous turbulence subject to high shear rate has been investigated by using three-dimensional, time-dependent numerical simulations of the Navier-Stokes equations. This study indicates that high shear rate alone is sufficient for generation of the streaky structures, and that the presence of a solid boundary is not necessary. Evolution of the statistical correlations is examined to determine the effect of high shear rate on the development of anisotropy in turbulence. It is shown that the streamwise fluctuating motions are enhanced so profoundly that a highly anisotropic turbulence state with a 'one-component' velocity field and 'two-component' vorticity field develops asymptotically as total shear increases. Because of high-shear rate, rapid distortion theory predicts remarkably well the anisotropic behavior of the structural quantities.

  8. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.

  9. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    SciTech Connect

    Chau, H.F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.

  10. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    NASA Astrophysics Data System (ADS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-11-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold times of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of set-up/hold times and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
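
    Under the Gaussian fluctuation assumption, the link between timing margin and error rate can be sketched as a tail probability; the margin, jitter, and register length below are illustrative values, not the paper's measurements.

    ```python
    import numpy as np
    from scipy.stats import norm

    def shift_register_error_rate(margin_ps, jitter_ps, n_bits=1_000_000):
        """Probability that at least one of n_bits stages violates setup/hold
        timing, assuming Gaussian fluctuation of the switching time."""
        p_single = norm.sf(margin_ps / jitter_ps)   # Gaussian tail Q(m/sigma)
        return 1.0 - (1.0 - p_single) ** n_bits

    for margin in (4, 5, 6, 7):
        print(f"margin {margin} ps: "
              f"error rate ~ {shift_register_error_rate(margin, 1.0):.3e}")
    ```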

  11. Deconvolution of high rate flicker electroretinograms.

    PubMed

    Alokaily, A; Bóhorquez, J; Özdamar, Ö

    2014-01-01

    Flicker electroretinograms are steady-state electroretinograms (ERGs) generated by high-rate flash stimuli that produce overlapping periodic responses. When a flash stimulus is delivered at low rates, a transient response named the flash ERG (FERG), representing the activation of neural structures within the outer retina, is obtained. Although FERGs and flicker ERGs are used in the diagnosis of many retinal diseases, their waveform relationships have not been investigated in detail. This study examines this relationship by extracting transient FERGs from specially generated quasi-steady-state flicker ERGs at stimulation rates above 10 Hz and from similarly generated conventional flicker ERGs. The ability to extract transient FERG responses by deconvolving responses to temporally jittered stimuli at high rates is investigated at varying rates. FERGs were obtained from seven normal subjects stimulated with LED-based displays, delivering steady-state and low-jitter quasi-steady-state responses at five rates (10, 15, 32, 50, 68 Hz). The deconvolution method enabled successful extraction of "per stimulus" unit transient ERG responses at all high stimulation rates. The deconvolved FERGs were successfully used to synthesize flicker ERGs obtained at the same high stimulation rates.
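
    A minimal sketch of jitter-based deconvolution, assuming a Wiener-style frequency-domain division of the recorded response by the jittered impulse train; the stimulus rate, jitter range, and synthetic transient are invented for illustration and do not reproduce the study's protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    fs, dur = 1000, 2.0                      # 1 kHz sampling, 2 s sweep
    n = int(fs * dur)

    # jittered stimulus train around a 32 Hz mean rate (binary impulse train)
    onsets = np.cumsum(rng.uniform(0.8, 1.2, 64) * (1 / 32))
    stim = np.zeros(n)
    stim[(onsets * fs).astype(int) % n] = 1.0

    # synthetic transient ERG and the overlapped steady-state recording
    t = np.arange(0, 0.1, 1 / fs)
    transient = np.sin(2 * np.pi * 30 * t) * np.exp(-t / 0.02)
    recording = np.real(np.fft.ifft(np.fft.fft(stim) * np.fft.fft(transient, n)))
    recording += rng.normal(0, 0.01, n)

    # deconvolution: divide spectra, regularized to avoid blow-up at nulls
    S = np.fft.fft(stim)
    eps = 1e-3 * np.abs(S).max()
    est = np.real(np.fft.ifft(np.fft.fft(recording) * np.conj(S)
                              / (np.abs(S) ** 2 + eps ** 2)))

    print("recovered peak vs true peak:", est[:len(t)].max(), transient.max())
    ```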

  12. Highly porous thermal protection materials: Modelling and prediction of the methodical experimental errors

    NASA Astrophysics Data System (ADS)

    Cherepanov, Valery V.; Alifanov, Oleg M.; Morzhukhina, Alena V.; Budnik, Sergey A.

    2016-11-01

    The formation mechanisms and the main factors affecting the systematic error of thermocouples were investigated. According to the results of experimental studies and mathematical modelling, it was established that in highly porous heat-resistant materials for aerospace applications the thermocouple errors are determined by two competing mechanisms, reflected in a correlation between the errors and the difference between the radiative and conductive heat fluxes. A comparative analysis was carried out and some features of the methodical error formation related to the distance from the heated surface were established.

  13. ISS Update: High Rate Communications System

    NASA Video Gallery

    ISS Update Commentator Pat Ryan interviews Diego Serna, Communications and Tracking Officer, about the High Rate Communications System. Questions? Ask us on Twitter @NASA_Johnson and include the ha...

  14. [Hopes of high dose-rate radiotherapy].

    PubMed

    Fouillade, Charles; Favaudon, Vincent; Vozenin, Marie-Catherine; Romeo, Paul-Henri; Bourhis, Jean; Verrelle, Pierre; Devauchelle, Patrick; Patriarca, Annalisa; Heinrich, Sophie; Mazal, Alejandro; Dutreix, Marie

    2017-03-07

    In this review, we present the synthesis of the newly acquired knowledge concerning high dose-rate irradiations and the hopes that these new radiotherapy modalities give rise to. The results were presented at a recent symposium on the subject.

  15. People's Hypercorrection of High-Confidence Errors: Did They Know It All Along?

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2011-01-01

    This study investigated the "knew it all along" explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when people are given corrective feedback, errors that are committed with high confidence are easier to correct than low-confidence errors. Experiment 1 showed that people were more likely to…

  16. Error-rate estimation in discriminant analysis of non-linear longitudinal data: A comparison of resampling methods.

    PubMed

    de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente

    2016-07-08

    Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and .632+), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of study it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we pay special attention to situations where only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
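
    As one concrete instance of the estimators being compared, the sketch below computes the apparent error, the out-of-bag bootstrap error, and Efron's .632 combination for a toy nearest-centroid classifier on a deliberately small sample; the classifier and data are stand-ins, not the study's non-linear mixed-effects setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def nearest_centroid_error(train_x, train_y, test_x, test_y):
        """Misclassification rate of a toy nearest-centroid classifier."""
        c0 = train_x[train_y == 0].mean(axis=0)
        c1 = train_x[train_y == 1].mean(axis=0)
        pred = (np.linalg.norm(test_x - c1, axis=1) <
                np.linalg.norm(test_x - c0, axis=1)).astype(int)
        return np.mean(pred != test_y)

    # small two-class sample (deliberately small, as in the motivating study)
    n = 30
    x = np.vstack([rng.normal(0, 1, (n // 2, 2)), rng.normal(1.5, 1, (n // 2, 2))])
    y = np.repeat([0, 1], n // 2)

    apparent = nearest_centroid_error(x, y, x, y)

    # out-of-bag error over B bootstrap resamples
    B, oob_errs = 200, []
    for _ in range(B):
        idx = rng.integers(0, n, n)
        oob = np.setdiff1d(np.arange(n), idx)
        if len(oob) and len(set(y[idx])) == 2:   # need both classes to train
            oob_errs.append(nearest_centroid_error(x[idx], y[idx], x[oob], y[oob]))
    e_oob = np.mean(oob_errs)

    e632 = 0.368 * apparent + 0.632 * e_oob      # Efron's .632 estimator
    print(f"apparent {apparent:.3f}, out-of-bag {e_oob:.3f}, .632 {e632:.3f}")
    ```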

  17. Comparison of Self-Scoring Error Rate for SDS (Self Directed Search) (1970) and the Revised SDS (1977).

    ERIC Educational Resources Information Center

    Price, Gary E.; And Others

    A comparison of self-scoring error rates for the Self Directed Search (SDS) and the revised SDS is presented. The subjects were college freshmen and sophomores who participated in career planning as a part of their orientation program, and a career workshop. Subjects (N=190 in the first study and N=84 in the second study) were then randomly assigned to the SDS…

  18. Bit-error rate performance of coherent optical M-ary PSK/QAM using decision-aided maximum likelihood phase estimation.

    PubMed

    Yu, Changyuan; Zhang, Shaoliang; Kam, Pooi Yuen; Chen, Jian

    2010-06-07

    The bit-error rate (BER) expressions of 16-phase-shift keying (PSK) and 16-quadrature amplitude modulation (QAM) are analytically obtained in the presence of a phase error. By averaging over the statistics of the phase error, the performance penalty can be analytically examined as a function of the phase error variance. The phase error variances leading to a 1-dB signal-to-noise ratio per bit penalty at BER = 10⁻⁴ have been found to be 8.7 × 10⁻² rad², 1.2 × 10⁻² rad², 2.4 × 10⁻³ rad², 6.0 × 10⁻⁴ rad² and 2.3 × 10⁻³ rad² for binary, quadrature, 8-, and 16-PSK and 16-QAM, respectively. With the knowledge of the allowable phase error variance, the corresponding laser linewidth tolerance can be predicted. We extend the phase error variance analysis of decision-aided maximum likelihood carrier phase estimation in M-ary PSK to 16-QAM, and successfully predict the laser linewidth tolerance in different modulation formats, which agrees well with the Monte Carlo simulations. Finally, approximate BER expressions for different modulation formats are introduced to allow a quick estimation of the BER performance as a function of the phase error variance. Further, the BER approximations give a lower bound on the laser linewidth requirements in M-ary PSK and 16-QAM. It is shown that as far as laser linewidth tolerance is concerned, 16-QAM outperforms 16-PSK, which has the same spectral efficiency (SE), and has nearly the same performance as 8-PSK, which has lower SE. Thus, 16-QAM is a promising modulation format for high-SE coherent optical communications.
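
    The effect of a residual phase error on BER can be checked by simulation. The sketch below runs Gray-coded QPSK with a Gaussian phase error, an assumed model consistent with the variance-based analysis above; Eb/N0 is set near the BER = 10⁻⁴ operating point, and all values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def qpsk_ber_with_phase_error(ebn0_db, phase_var, n_sym=500_000):
        """Monte Carlo BER of Gray-coded QPSK with a residual Gaussian phase
        error of variance phase_var (rad^2) after carrier estimation."""
        ebn0 = 10 ** (ebn0_db / 10.0)
        bits = rng.integers(0, 2, (n_sym, 2))
        sym = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
        # unit-energy symbols: per-component noise std = sqrt(1 / (4 Eb/N0))
        noise = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) \
                * np.sqrt(1 / (4 * ebn0))
        phase = rng.normal(0, np.sqrt(phase_var), n_sym)
        r = sym * np.exp(1j * phase) + noise
        est = np.stack([(r.real > 0).astype(int), (r.imag > 0).astype(int)],
                       axis=1)
        return np.mean(est != bits)

    print("no phase error  :", qpsk_ber_with_phase_error(8.4, 0.0))
    print("var 1.2e-2 rad^2:", qpsk_ber_with_phase_error(8.4, 1.2e-2))
    ```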

  19. Research on high-precision laser displacement sensor-based error compensation model

    NASA Astrophysics Data System (ADS)

    Zhang, Zhifeng; Zhai, Yusheng; Su, Zhan; Qiao, Lin; Tang, Yiming; Wang, Xinjie; Su, Yuling; Song, Zhijun

    2015-08-01

    Triangulation measurement is a kind of active vision measurement. Laser triangulation displacement sensors are widely used owing to their non-contact operation, high precision, and high sensitivity. The measurement error increases with nonlinearity and noise disturbance when sensors operate over large distances. This paper introduces the principle of laser triangulation measurement, analyzes the measurement error, and establishes the error compensation model. The spot centroid is extracted with digital image processing technology to increase the signal-to-noise ratio. Results of simulation and experiment show that the method can meet the requirements of large distance and high precision.
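
    A minimal sketch of the centroid-plus-triangulation chain, assuming a thresholded weighted average on a 1-D detector line and a small-angle range conversion; pixel pitch, magnification, and triangulation angle are illustrative values, not the paper's sensor parameters.

    ```python
    import numpy as np

    def spot_centroid(intensity, threshold_frac=0.1):
        """Sub-pixel spot centroid on a 1-D detector line, with a threshold
        to suppress background noise before the weighted average."""
        i = np.clip(intensity - threshold_frac * intensity.max(), 0.0, None)
        x = np.arange(len(i))
        return np.sum(x * i) / np.sum(i)

    # synthetic laser spot on a 256-pixel line, true center at 130.4 px
    pix = np.arange(256)
    spot = np.exp(-(pix - 130.4) ** 2 / (2 * 4.0 ** 2)) + 0.02

    c = spot_centroid(spot)
    pixel_pitch_um, magnification, tri_angle_deg = 5.5, 0.3, 30.0
    shift_obj = (c - 128.0) * pixel_pitch_um / magnification   # image -> object
    dz = shift_obj / np.tan(np.radians(tri_angle_deg))         # range change
    print(f"centroid {c:.2f} px -> range change {dz:.1f} um")
    ```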

  20. Compensation of spectral and RF errors in swept-source OCT for high extinction complex demodulation

    PubMed Central

    Siddiqui, Meena; Tozburun, Serhat; Zhang, Ellen Ziyi; Vakoc, Benjamin J.

    2015-01-01

    We provide a framework for compensating errors within passive optical quadrature demodulation circuits used in swept-source optical coherence tomography (OCT). Quadrature demodulation allows for detection of both the real and imaginary components of an interference fringe, and this information separates signals from positive and negative depth spaces. To achieve a high extinction (∼60 dB) between these positive and negative signals, the demodulation error must be less than 0.1% in amplitude and phase. It is difficult to construct a system that achieves this low error across the wide spectral and RF bandwidths of high-speed swept-source systems. In a prior work, post-processing methods for removing residual spectral errors were described. Here, we identify the importance of a second class of errors originating in the RF domain, and present a comprehensive framework for compensating both spectral and RF errors. Using this framework, extinctions >60 dB are demonstrated. A stability analysis shows that calibration parameters associated with RF errors are accurate for many days, while those associated with spectral errors must be updated prior to each imaging session. Empirical procedures to derive both RF and spectral calibration parameters simultaneously and to update spectral calibration parameters are presented. These algorithms provide the basis for using passive optical quadrature demodulation circuits with high speed and wide-bandwidth swept-source OCT systems. PMID:25836784

  1. Internal pressure gradient errors in σ-coordinate ocean models in high resolution fjord studies

    NASA Astrophysics Data System (ADS)

    Berntsen, Jarle; Thiem, Øyvind; Avlesen, Helge

    2015-08-01

    Terrain following ocean models are today applied in coastal areas and fjords where the topography may be very steep. Recent advances in high performance computing facilitate model studies with very high spatial resolution. In general, numerical discretization errors tend to zero with the grid size. However, in fjords and near the coast the slopes may be very steep, and the internal pressure gradient errors associated with σ-models may be significant even in high resolution studies. The internal pressure gradient errors are due to errors when estimating the density gradients in σ-models, and these errors are investigated for two idealized test cases and for the Hardanger fjord in Norway. The methods considered are the standard second order method and a recently proposed method that is balanced such that the density gradients are zero for the case ρ = ρ(z) where ρ is the density and z is the vertical coordinate. The results show that by using the balanced method, the errors may be reduced considerably also for slope parameters larger than the maximum suggested value of 0.2. For the Hardanger fjord case initialized with ρ = ρ(z) , the errors in the results produced with the balanced method are orders of magnitude smaller than the corresponding errors in the results produced with the second order method.

  2. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  3. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  4. People's hypercorrection of high-confidence errors: did they know it all along?

    PubMed

    Metcalfe, Janet; Finn, Bridgid

    2011-03-01

    This study investigated the "knew it all along" explanation of the hypercorrection effect. The hypercorrection effect refers to the finding that when people are given corrective feedback, errors that are committed with high confidence are easier to correct than low-confidence errors. Experiment 1 showed that people were more likely to claim that they knew it all along when they were given the answers to high-confidence errors as compared with low-confidence errors. Experiments 2 and 3 investigated whether people really did know the correct answers before being told or whether the claim in Experiment 1 was mere hindsight bias. Experiment 2 showed that (a) participants were more likely to choose the correct answer in a 2nd guess multiple-choice test when they had expressed an error with high rather than low confidence and (b) that they were more likely to generate the correct answers to high-confidence as compared with low-confidence errors after being told they were wrong and to try again. Experiment 3 showed that (c) people were more likely to produce the correct answer when given a 2-letter cue to high- rather than low-confidence errors and that (d) when feedback was scaffolded by presenting the target letters 1 by 1, people needed fewer such letter prompts to reach the correct answers when they had committed high- rather than low-confidence errors. These results converge on the conclusion that when people said that they knew it all along, they were right. This knowledge, no doubt, contributes to why they are able to correct those high-confidence errors so easily.

  5. Turbulence structure at high shear rate

    NASA Technical Reports Server (NTRS)

    Lee, Moon Joo; Kim, John; Moin, Parviz

    1987-01-01

    The structure of homogeneous turbulence in the presence of a high shear rate is studied using results obtained from three-dimensional time-dependent numerical simulations of the Navier-Stokes equations on a grid of 512 x 128 x 128 node points. It is shown that high shear rate enhances the streamwise fluctuating motion to such an extent that a highly anisotropic turbulence state with a one-dimensional velocity field and two-dimensional small-scale turbulence develops asymptotically as total shear increases. Instantaneous velocity fields show that high shear rate in homogeneous turbulent shear flow produces structures which are similar to the streaks present in the viscous sublayer of turbulent boundary layers.

  6. Figures deduction method for fast evaluating interpolation errors of encoder with high precision

    NASA Astrophysics Data System (ADS)

    Yi, Jie; An, Li-min; Liu, Chun-xia

    2011-08-01

    With the development of technology, especially the need to rapidly and accurately track and point at ground and airborne targets, high-precision photoelectric rotary encoders have become a research hotspot in the international spaceflight and aviation fields, and the evaluation of the errors of high-precision encoders is one of the key technologies that must be resolved. For a high-precision encoder, the interpolation error is the main factor affecting its precision. Existing interpolation error detection uses accurate apparatus such as small-angle measurement instruments and optical polygons, and must be carried out under strict laboratory conditions; the detection method is also time-consuming, hard to apply, and easily introduces detection errors. This paper mainly studies a fast evaluation method for the interpolation errors of high-precision encoders that can be applied in the field. Taking the Lissajous figure produced by the moiré fringe signals as a foundation, the paper sets up a mathematical model of the radius vector to represent the figure's form deviation, analyses the parameter information implied in the moiré fringe and the relation between the radius-vector deviation and the interpolation errors in the figures, and puts forward a figure-based method of interpolation error evaluation. Adopting the figure deduction method, the interpolation errors are obtained directly from the harmonic components of the radius-vector deviation. The moiré fringe signal is transmitted into the computer through a data acquisition card; the computer stores the data and uses the figure evaluation method to analyse them, drawing the curve of interpolation errors. Compared with the interpolation errors obtained from the traditional detection method, the trend of the interpolation error curve is similar and the peak-to-peak values are almost equal. The experimental results indicate that the method of this paper can be applied to
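
    The harmonic analysis described above can be sketched directly: quadrature moiré signals with small offset, gain, and phase imperfections produce characteristic low-order harmonics in the radius-vector deviation of the Lissajous figure. All imperfection values below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 4096
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)

    # quadrature moire signals with small imperfections (offset, gain, phase)
    a = 1.02 * np.cos(theta) + 0.01
    b = 0.98 * np.sin(theta + np.radians(0.5)) - 0.008
    a += rng.normal(0, 1e-3, n)
    b += rng.normal(0, 1e-3, n)

    r = np.hypot(a, b)                 # radius vector of the Lissajous figure
    dev = r - r.mean()                 # radius-vector deviation

    # harmonic content of the deviation -> signature of interpolation errors
    spec = np.abs(np.fft.rfft(dev)) / n * 2
    for k in (1, 2, 3, 4):
        print(f"harmonic {k}: relative amplitude {spec[k] / r.mean():.2e}")
    ```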

  7. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.

  8. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
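
    The 18 models of the study are not reproduced here, but the general recipe — a GAM of development percent with smooth terms, plus prediction intervals for confidence statements — can be sketched with the pygam library. Everything below (data, term choices, interval width) is illustrative, assuming a predictor matrix of larval length and a coded developmental stage.

```python
import numpy as np
from pygam import LinearGAM, s, f  # pip install pygam

# Hypothetical training data: larval length (mm), stage code (1-5), and
# response = percent of total juvenile development completed.
rng = np.random.default_rng(0)
length = rng.uniform(2.0, 20.0, 500)
stage = rng.integers(1, 6, 500).astype(float)
pct_dev = 5.0 * stage + 1.5 * length + rng.normal(0.0, 3.0, 500)

X = np.column_stack([length, stage])
gam = LinearGAM(s(0) + f(1)).fit(X, pct_dev)   # smooth in length, factor in stage

# Point prediction with a 95% interval, analogous to attaching a
# confidence interval to a development (age) estimate for a new specimen.
x_new = np.array([[12.0, 3.0]])
print(gam.predict(x_new), gam.prediction_intervals(x_new, width=0.95))
```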

  9. Error rates, PCR recombination, and sampling depth in HIV-1 whole genome deep sequencing.

    PubMed

    Zanini, Fabio; Brodin, Johanna; Albert, Jan; Neher, Richard A

    2016-12-27

    Deep sequencing is a powerful and cost-effective tool to characterize the genetic diversity and evolution of virus populations. While modern sequencing instruments readily cover viral genomes many thousand fold and very rare variants can in principle be detected, sequencing errors, amplification biases, and other artifacts can limit sensitivity and complicate data interpretation. For this reason, the number of studies using whole genome deep sequencing to characterize viral quasi-species in clinical samples is still limited. We have previously undertaken a large scale whole genome deep sequencing study of HIV-1 populations. Here we discuss the challenges, error profiles, control experiments, and computational tests we developed to quantify the accuracy of variant frequency estimation.

  10. High Bit Rate Experiments Over ACTS

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A.; Gary, J. Patrick; Edelsen, Burt; Helm, Neil; Cohen, Judith; Shopbell, Patrick; Mechoso, C. Roberto; Chung-Chun; Farrara, M.; Spahr, Joseph

    1996-01-01

    This paper describes two high data rate experiments that are being developed for the gigabit NASA Advanced Communications Technology Satellite (ACTS). The first is a telescience experiment that remotely acquires image data at the Keck telescope from the Caltech campus. The second is a distributed global climate application that is run between two supercomputer centers interconnected by ACTS. The implementation approach for each is described along with the expected results. The ACTS high data rate (HDR) ground station is also described in detail.

  11. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    Low frequency error is a key factor affecting the uncontrolled geometric positioning accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of on-orbit low frequency error analysis and calibration, which includes detection of the star sensor's optical-axis angle variation, relative calibration among star sensors, multi-star-sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical-axis angle change detection method to analyze how the low frequency error varies. Thirdly, we use relative calibration and information fusion among star sensors to achieve a unified datum and high-precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type is used. Test results demonstrate that the calibration model describes the low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is markedly improved after the step-wise calibration.

  12. TMF ultra-high rate discharge performance

    SciTech Connect

    Nelson, B.

    1997-12-01

    BOLDER Technologies Corporation has developed a valve-regulated lead-acid product line termed Thin Metal Film (TMF™) technology. It is characterized by extremely thin plates and close plate spacing that facilitate high rates of charge and discharge with minimal temperature increases, at levels unachievable with other commercially available battery technologies. This ultra-high-rate performance makes TMF technology ideal for applications such as various types of engine start, high-drain-rate portable devices, and high-current pulsing. Data are presented on very high current continuous and pulse discharges. Power and energy relationships at various discharge rates are explored, and the fast-response characteristics of the BOLDER® cell are qualitatively defined. Short-duration recharge experiments show that devices powered by BOLDER batteries can be in operation for more than 90% of an extended usage period with multiple fast recharges. The BOLDER cell is ideal for applications such as engine start, a wide range of portable devices including power tools, hybrid electric vehicles, and pulse-power devices. These applications are well served by TMF technology, but an area of great interest and excitement is ultrahigh power delivery in excess of 1 kW/kg.

  13. Dynamic evaluation system for interpolation errors in the encoder of high precision

    NASA Astrophysics Data System (ADS)

    Wan, Qiu-hua; Wu, Yong-zhi; Zhao, Chang-hai; Liang, Li-hui; Sun, Ying; Jiang, Yong

    2009-05-01

    In order to measure the dynamic interpolation errors of a high-precision photoelectric encoder, a dynamic evaluation system for interpolation errors is introduced. Firstly, the fine moiré signal of the encoder, collected into the computer with a high-speed data acquisition card, is converted to equiangular data by linear interpolation. Then, harmonic analysis is performed with the FFT. By comparison with the standard signal, the dynamic interpolation errors of the encoder are calculated. Experimental results show that the precision of the dynamic evaluation system is ±0.1% (pitch). The evaluation system is simple, fast, and highly precise, and can be used in the encoder's working field.
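
    A bare-bones version of the described pipeline — linear interpolation of the sampled fine moiré signal onto an equiangular grid, followed by FFT harmonic analysis — can be sketched as follows; names and the demo signal are illustrative, and the comparison against a standard signal is only indicated in a comment.

```python
import numpy as np

def equiangular_harmonics(t, signal, n_points=4096, n_harmonics=8):
    """Linearly resample one revolution of a moire signal onto an
    equiangular grid, then return its leading harmonic amplitudes."""
    grid = np.linspace(t[0], t[-1], n_points, endpoint=False)
    resampled = np.interp(grid, t, signal)      # linear-interpolation step
    spec = 2.0 * np.abs(np.fft.rfft(resampled)) / n_points
    return spec[1:n_harmonics + 1]

# Demo: a 5-cycle signal sampled slightly non-uniformly in time.
t = np.linspace(0.0, 1.0, 3000) ** 1.02
sig = np.sin(2.0 * np.pi * 5.0 * t / t[-1])
print(equiangular_harmonics(t, sig))
# Dynamic interpolation error would then be judged by comparing these
# harmonics against those of an ideal (standard) reference signal.
```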

  14. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5% error into the measured irradiance, and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
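
    As a quick plausibility check on those numbers, the worst case for the dominant direct component has a one-line closed form: a sensor tilted toward the sun in the solar principal plane sees the beam at incidence angle (SZA − tilt) instead of SZA. The sketch below evaluates it; it slightly overstates the paper's totals because measured irradiance also contains a diffuse part.

```python
import numpy as np

def direct_tilt_error(sza_deg, tilt_deg):
    """Worst-case relative error in direct-beam irradiance for a sensor
    tilted toward the sun in the solar principal plane."""
    return np.cos(np.radians(sza_deg - tilt_deg)) / np.cos(np.radians(sza_deg)) - 1.0

for tilt in (1.0, 3.0, 5.0):
    print(f"tilt {tilt:.0f} deg: {100.0 * direct_tilt_error(60.0, tilt):.1f}% error")
# Roughly 3%, 9%, and 15%: upper bounds consistent with the
# 2.7/8.1/13.5% totals quoted above, which include a diffuse component.
```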

  15. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  16. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency we are taking advantage of the CUDA texture memory using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net .
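
    The CUDA kernels themselves cannot be recovered from the abstract, but the core data structure — a Bloom filter answering "is this k-mer in the trusted spectrum?" — is easy to sketch. The pure-Python version below uses a double-hash scheme; all sizes, hash choices, and names are illustrative (the paper stores the filter in CUDA texture memory instead).

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for k-mer spectrum membership (illustrative)."""
    def __init__(self, n_bits=1 << 20, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        h = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(h[:8], "little")
        h2 = int.from_bytes(h[8:16], "little") | 1
        return [(h1 + i * h2) % self.n_bits for i in range(self.n_hashes)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))

def solid_kmers(read, k, spectrum):
    """Flag which k-mers of a read belong to the trusted spectrum."""
    return [read[i:i + k] in spectrum for i in range(len(read) - k + 1)]

spectrum = BloomFilter()
for kmer in ("ACGTA", "CGTAC", "GTACG"):
    spectrum.add(kmer)
print(solid_kmers("ACGTACG", 5, spectrum))   # [True, True, True]
```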

  17. High Resolution Measurement of the Glycolytic Rate

    PubMed Central

    Bittner, Carla X.; Loaiza, Anitsi; Ruminot, Iván; Larenas, Valeria; Sotelo-Hitschfeld, Tamara; Gutiérrez, Robin; Córdova, Alex; Valdebenito, Rocío; Frommer, Wolf B.; Barros, L. Felipe

    2010-01-01

    The glycolytic rate is sensitive to physiological activity, hormones, stress, aging, and malignant transformation. Standard techniques to measure the glycolytic rate are based on radioactive isotopes, are not able to resolve single cells and have poor temporal resolution, limitations that hamper the study of energy metabolism in the brain and other organs. A new method is described in this article, which makes use of a recently developed FRET glucose nanosensor to measure the rate of glycolysis in single cells with high temporal resolution. Used in cultured astrocytes, the method showed for the first time that glycolysis can be activated within seconds by a combination of glutamate and K+, supporting a role for astrocytes in neurometabolic and neurovascular coupling in the brain. It was also possible to make a direct comparison of metabolism in neurons and astrocytes lying in close proximity, paving the way to a high-resolution characterization of brain energy metabolism. Single-cell glycolytic rates were also measured in fibroblasts, adipocytes, myoblasts, and tumor cells, showing higher rates for undifferentiated cells and significant metabolic heterogeneity within cell types. This method should facilitate the investigation of tissue metabolism at the single-cell level and is readily adaptable for high-throughput analysis. PMID:20890447

  18. High Rate for Type IC Supernovae

    SciTech Connect

    Muller, R.A.; Marvin-Newberg, H.J.; Pennypacker, Carl R.; Perlmutter, S.; Sasseen, T.P.; Smith, C.K.

    1991-09-01

    Using an automated telescope we have detected 20 supernovae in carefully documented observations of nearby galaxies. The supernova rates for late spiral (Sbc, Sc, Scd, and Sd) galaxies, normalized to a blue luminosity of 10^10 L_B(sun), are 0.4 h², 1.6 h², and 1.1 h² per 100 years for SNe of types Ia, Ic, and II. The rate for type Ic supernovae is significantly higher than found in previous surveys. The rates are not corrected for detection inefficiencies, and do not take into account the indications that Ic supernovae are on average fainter than previous estimates suggested; therefore the true rates are probably higher. The rates are not strongly dependent on galaxy inclination, in contradiction to previous compilations. If the Milky Way is a late spiral, then the rate of Galactic supernovae is greater than 1 per 30 ± 7 years, assuming h = 0.75. This high rate has encouraging consequences for future neutrino and gravitational wave observatories.

  19. High rate, high reliability Li/SO2 cells

    NASA Astrophysics Data System (ADS)

    Chireau, R.

    1982-03-01

    The use of the lithium/sulfur dioxide system for aerospace applications is discussed. The system's high rate density is compared with that of several primary systems: mercury-zinc, silver-zinc, and magnesium oxide. Estimates are provided of the storage life and shelf life of typical lithium/sulfur dioxide batteries. The design of lithium cells is presented, and criteria are given for improving cell output in order to achieve high rate and high reliability.

  20. Baltimore District Tackles High Suspension Rates

    ERIC Educational Resources Information Center

    Maxwell, Lesli A.

    2007-01-01

    This article reports on how the Baltimore District tackles its high suspension rates. Driven by an increasing belief that zero-tolerance disciplinary policies are ineffective, more educators are embracing strategies that do not exclude misbehaving students from school for offenses such as insubordination, disrespect, cutting class, tardiness, and…

  1. Bit-Error-Rate-Minimizing Channel Shortening Using Post-FEQ Diversity Combining and a Genetic Algorithm

    DTIC Science & Technology

    2009-03-10

    AFIT/GE/ENG/09-01. Graduate thesis, Graduate School of Engineering and Management, Air Force Institute of Technology, Air University, Wright-Patterson Air Force Base, Ohio; approved for public release. The views expressed are those of the author and do not reflect the official position of the United States Air Force, Department of Defense, or the United States Government.

  2. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upsets (MBUs) are also discussed.

  3. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
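
    The abstract does not give the exact expression, so the snippet below shows the textbook version of this relation, assuming binary signaling and additive Gaussian noise: the BER is the Gaussian tail probability (Q-function) of half the eye opening over the noise standard deviation. Names and numbers are illustrative.

```python
from math import erfc, sqrt

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ber_from_noise(eye_opening, noise_std):
    """Textbook BER for binary signaling in additive Gaussian noise:
    the probability that noise closes half the eye."""
    return q_function(0.5 * eye_opening / noise_std)

# Example: a 1.0 V eye with 60 mV rms noise estimated from the S-parameters.
print(f"BER ~ {ber_from_noise(1.0, 0.06):.2e}")
```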

  4. Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results

    SciTech Connect

    Clark, E.L.

    1994-07-01

    Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for five fundamental aerodynamic ratios which relate free-stream test conditions to a reference condition.
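
    The Taylor series model referred to here is the usual first-order propagation of independent uncertainties, sigma_f^2 = sum_i (df/dx_i)^2 sigma_i^2, with the partial derivatives playing the role of the sensitivity coefficients. A minimal symbolic sketch for one such ratio (the pressure coefficient; all numbers illustrative, not the report's calibration values) is:

```python
import sympy as sp

# First-order error propagation for Cp = (p - p_inf) / q_inf.
p, p_inf, q_inf = sp.symbols("p p_inf q_inf", positive=True)
cp = (p - p_inf) / q_inf

sigmas = [(p, 0.05), (p_inf, 0.05), (q_inf, 0.10)]   # (symbol, 1-sigma)
var_cp = sum((sp.diff(cp, x) * sig) ** 2 for x, sig in sigmas)

vals = {p: 30.0, p_inf: 20.0, q_inf: 50.0}
print(sp.sqrt(var_cp).subs(vals).evalf())   # propagated 1-sigma uncertainty in Cp
```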

  5. A Framework for Interpreting Type I Error Rates from a Product‐Term Model of Interaction Applied to Quantitative Traits

    PubMed Central

    Province, Michael A.

    2015-01-01

    Adequate control of type I error rates will be necessary in the increasing genome-wide search for interactive effects on complex traits. After observing unexpected variability in type I error rates from SNP-by-genome interaction scans, we sought to characterize this variability and test the ability of heteroskedasticity-consistent standard errors to correct it. We performed 81 SNP-by-genome interaction scans using a product-term model on quantitative traits in a sample of 1,053 unrelated European Americans from the NHLBI Family Heart Study, and additional scans on five simulated datasets. We found that the interaction-term genomic inflation factor (lambda) showed inflation and deflation that varied with sample size and allele frequency; that similar lambda variation occurred in the absence of population substructure; and that lambda was strongly related to heteroskedasticity but not to minor non-normality of phenotypes. Heteroskedasticity-consistent standard errors narrowed the range of lambda, with HC3 outperforming HC0, but in individual scans tended to create new P-value outliers related to sparse two-locus genotype classes. We explain the lambda variation as a result of non-independence of test statistics coupled with stochastic biases in test statistics due to a failure of the test to reach asymptotic properties. We propose that one way to interpret lambda is by comparison to an empirical distribution generated from data simulated under the null hypothesis and without population substructure. We further conclude that the interaction-term lambda should not be used to adjust test statistics and that heteroskedasticity-consistent standard errors come with limitations that may outweigh their benefits in this setting. PMID:26659945
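
    The inflation factor discussed above has a standard closed form: the median of the association chi-square statistics divided by the null median of a 1-df chi-square (about 0.455). A minimal sketch (not the authors' code), with a null-simulation check of the kind the paper recommends:

```python
import numpy as np
from scipy import stats

def genomic_inflation(p_values):
    """Genomic inflation factor lambda from a vector of test p-values."""
    chi2 = stats.chi2.isf(p_values, df=1)               # p-value -> 1-df statistic
    return np.median(chi2) / stats.chi2.ppf(0.5, df=1)  # null median ~ 0.455

# Under the null, lambda should sit near 1; values far from 1 reflect the
# inflation/deflation the paper describes for interaction terms.
p_null = np.random.default_rng(1).uniform(size=100_000)
print(genomic_inflation(p_null))
```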

  6. High strain rate behaviour of polypropylene microfoams

    NASA Astrophysics Data System (ADS)

    Gómez-del Río, T.; Garrido, M. A.; Rodríguez, J.; Arencón, D.; Martínez, A. B.

    2012-08-01

    Microcellular materials such as polypropylene foams are often used in protective applications and passive safety, whether for packaging (electronic components, aeronautical structures, food, etc.) or personal safety (helmets, knee-pads, etc.). In such applications the foams are usually designed to absorb the maximum energy and are generally subjected to severe loadings involving high strain rates. The manufacturing process for polymeric microcellular foams is based on saturating the polymer with a supercritical gas at high temperature and pressure. This method presents several advantages over conventional injection moulding techniques, which make it industrially feasible. However, the effect of processing conditions such as blowing agent, concentration, and microfoaming time and/or temperature on the microstructure of the resulting microcellular polymer (density, cell size, and geometry) has not yet been established. The compressive mechanical behaviour of several microcellular polypropylene foams has been investigated over a wide range of strain rates (0.001 to 3000 s-1) in order to show the effects of the processing parameters and strain rate on the mechanical properties. High strain rate tests were performed using a Split Hopkinson Pressure Bar (SHPB) apparatus. Polypropylene and polyethylene-ethylene block copolymer foams of various densities were considered.

  7. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  8. Orifice-induced pressure error studies in Langley 7- by 10-foot high-speed tunnel

    NASA Technical Reports Server (NTRS)

    Plentovich, E. B.; Gloss, B. B.

    1986-01-01

    For some time it has been known that the presence of a static pressure measuring hole will disturb the local flow field in such a way that the sensed static pressure will be in error. The results of previous studies of the error induced by the pressure orifice were for relatively low Reynolds number flows. Because of the advent of high Reynolds number transonic wind tunnels, a study was undertaken to assess the magnitude of this error at higher Reynolds numbers than previously published and to study a possible method of eliminating the error. This study was conducted in the Langley 7- by 10-Foot High-Speed Tunnel on a flat plate. The model was tested at Mach numbers from 0.40 to 0.72 and at Reynolds numbers from 7.7 × 10^6 to 11 × 10^6 per meter (2.3 × 10^6 to 3.4 × 10^6 per foot). The results indicated that the pressure error increased with orifice size, but that a porous (sintered) metal plug inserted in an orifice could greatly reduce the pressure error induced by the orifice.

  9. Children with High Functioning Autism show increased prefrontal and temporal cortex activity during error monitoring

    PubMed Central

    Goldberg, Melissa C.; Spinelli, Simona; Joel, Suresh; Pekar, James J.; Denckla, Martha B.; Mostofsky, Stewart H.

    2010-01-01

    Evidence exists for deficits in error monitoring in autism. These deficits may be particularly important because they may contribute to excessive perseveration and repetitive behavior in autism. We examined the neural correlates of error monitoring using fMRI in 8–12-year-old children with high-functioning autism (HFA, n=11) and typically developing children (TD, n=15) during performance of a Go/No-Go task by comparing the neural correlates of commission errors versus correct response inhibition trials. Compared to TD children, children with HFA showed increased BOLD fMRI signal in the anterior medial prefrontal cortex (amPFC) and the left superior temporal gyrus (STempG) during commission error (versus correct inhibition) trials. A follow-up region-of-interest analysis also showed increased BOLD signal in the right insula in HFA compared to TD controls. Our findings of increased amPFC and STempG activity in HFA, together with the increased activity in the insula, suggest a greater attention towards the internally-driven emotional state associated with making an error in children with HFA. Since error monitoring occurs across different cognitive tasks throughout daily life, an increased emotional reaction to errors may have important consequences for early learning processes. PMID:21151713

  10. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.

  11. Highly stable high-rate discriminator for nuclear counting

    NASA Technical Reports Server (NTRS)

    English, J. J.; Howard, R. H.; Rudnick, S. J.

    1969-01-01

    Pulse amplitude discriminator is specially designed for nuclear counting applications. At very high rates, the threshold is stable. The output-pulse width and the dead time change negligibly. The unit incorporates a provision for automatic dead-time correction.

  12. Phosphor thermometry at high repetition rates

    NASA Astrophysics Data System (ADS)

    Fuhrmann, N.; Brübach, J.; Dreizler, A.

    2013-09-01

    Phosphor thermometry is a semi-invasive surface temperature measurement technique utilizing the luminescence properties of thermographic phosphors. Typically these ceramic materials are coated onto the object of interest and are excited by a short UV laser pulse. Photomultipliers and high-speed camera systems are used to transiently detect the subsequently emitted luminescence decay point wise or two-dimensionally resolved. Based on appropriate calibration measurements, the luminescence lifetime is converted to temperature. Up to now, primarily Q-switched laser systems with repetition rates of 10 Hz were employed for excitation. Accordingly, this diagnostic tool was not applicable to resolve correlated temperature transients at time scales shorter than 100 ms. For the first time, the authors realized a high-speed phosphor thermometry system combining a highly repetitive laser in the kHz regime and a fast decaying phosphor. A suitable material was characterized regarding its temperature lifetime characteristic and precision. Additionally, the influence of laser power on the phosphor coating in terms of heating effects has been investigated. A demonstration of this high-speed technique has been conducted inside the thermally highly transient system of an optically accessible internal combustion engine. Temperatures have been measured with a repetition rate of one sample per crank angle degree at an engine speed of 1000 rpm. This experiment has proven that high-speed phosphor thermometry is a promising diagnostic tool for the resolution of surface temperature transients.

  13. Error Rate Improvement in Underwater MIMO Communications Using Sparse Partial Response Equalization

    DTIC Science & Technology

    2006-09-01

    \Phi_i(n) = \sum_{k=1}^{n} \lambda^{n-k} v_i(k) v_i^H(k)  (13) and \theta_i(n) = \sum_{k=1}^{n} \lambda^{n-k} v_i(k) x_i^{(s)H}(k)  (14) are the (time-averaged) output correlation matrix and the input-output cross-correlation vector, \alpha_i(n) is the a priori error vector [5], and K_i(n) is the RLS gain, defined as \alpha_i(n) = x_i^{(s)}(n) - c_i^H(n-1) v_i(n)  (17) and K_i(n) = \frac{P_i(n-1) v_i(n)}{\lambda_i + v_i^H(n) P_i(n-1) v_i(n)}  (18). Using equations (13), (14), and the matrix inversion lemma [5], the inverse correlation matrix P_i(n) can be updated as P_i(n) = [I - K_i(n) v_i^H(n)] P_i(n-1).
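
    For readers unfamiliar with the recursion, a minimal textbook RLS step consistent with equations (17) and (18) is sketched below. It uses the standard exponentially weighted form (including the 1/λ factor in the P update) and generic names; it does not reproduce the report's sparse partial-response equalizer.

```python
import numpy as np

def rls_update(P, c, v, x, lam=0.99):
    """One recursive-least-squares step: a priori error (17), gain (18),
    then inverse-correlation and tap updates."""
    alpha = x - c.conj() @ v                       # a priori error
    K = P @ v / (lam + v.conj() @ P @ v)           # RLS gain
    P_new = (P - np.outer(K, v.conj() @ P)) / lam  # matrix inversion lemma
    c_new = c + K * np.conjugate(alpha)
    return P_new, c_new

# Illustrative: adapt 4 complex taps toward a stationary channel.
rng = np.random.default_rng(0)
P, c = 100.0 * np.eye(4), np.zeros(4, dtype=complex)
w = np.array([0.5, -0.2, 0.1, 0.05], dtype=complex)
for _ in range(200):
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    P, c = rls_update(P, c, v, w.conj() @ v)
print(np.round(c, 3))   # converges to w
```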

  14. High strain rate characterization of polymers

    NASA Astrophysics Data System (ADS)

    Siviour, Clive R.

    2017-01-01

    This paper reviews the literature on the response of polymers to high strain rate deformation. The main focus is on the experimental techniques used to characterize this response. The paper includes a small number of examples as well as references to experimental data over a wide range of rates, which illustrate the key features of rate dependence in these materials; however, this is by no means an exhaustive list. The aim of the paper is to give the reader unfamiliar with the subject an overview of the techniques available, with sufficient references from which further information can be obtained. In addition to the 'well established' techniques of the Hopkinson bar, Taylor impact, and transverse impact, a discussion of the use of time-temperature superposition in interpreting and experimentally replicating high rate response is given, as is a description of new techniques in which mechanical parameters are derived by directly measuring wave propagation in specimens; these are particularly appropriate for polymers with low wave speeds. The vast topic of constitutive modelling is deliberately excluded from this review.

  15. High temperature electrochemical corrosion rate probes

    SciTech Connect

    Bullard, Sophie J.; Covino, Bernard S., Jr.; Holcomb, Gordon R.; Ziomek-Moroz, M.

    2005-09-01

    Corrosion occurs in the high temperature sections of energy production plants due to a number of factors: ash deposition, coal composition, thermal gradients, and low NOx conditions, among others. Electrochemical corrosion rate (ECR) probes have been shown to operate in high temperature gaseous environments that are similar to those found in fossil fuel combustors. ECR probes are rarely used in energy production plants at the present time, but if they were more fully understood, corrosion could become a process variable at the control of plant operators. Research is being conducted to understand the nature of these probes. Factors being considered are values selected for the Stern-Geary constant, the effect of internal corrosion, and the presence of conductive corrosion scales and ash deposits. The nature of ECR probes will be explored in a number of different atmospheres and with different electrolytes (ash and corrosion product). Corrosion rates measured using an electrochemical multi-technique capabilities instrument will be compared to those measured using the linear polarization resistance (LPR) technique. In future experiments, electrochemical corrosion rates will be compared to penetration corrosion rates determined using optical profilometry measurements.

  16. The effect of administrative boundaries and geocoding error on cancer rates in California.

    PubMed

    Goldberg, Daniel W; Cockburn, Myles G

    2012-04-01

    Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods.
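
    The simplest of the areal interpolation techniques to picture is plain area weighting, where each ZIP code's cases are split among counties in proportion to overlap area. The toy sketch below (names and numbers illustrative, not the paper's data or its full set of four methods) shows the mechanics:

```python
import numpy as np

def area_weighted_counts(zip_counts, weights):
    """Redistribute ZIP-level case counts to counties by the fraction of
    each ZIP polygon's area falling in each county (rows: ZIPs, columns:
    counties; each row of `weights` sums to 1)."""
    return zip_counts @ weights

# Toy example: 3 ZIP codes overlapping 2 counties.
zip_counts = np.array([120.0, 80.0, 45.0])
weights = np.array([[1.0, 0.0],
                    [0.6, 0.4],
                    [0.1, 0.9]])
county_counts = area_weighted_counts(zip_counts, weights)
county_pop = np.array([250_000.0, 180_000.0])
print(1e5 * county_counts / county_pop)   # crude rates per 100,000
```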

  17. Analytical Modeling of High Rate Processes.

    DTIC Science & Technology

    2007-11-02

    Final report for the period 01 Sep 94 to 31 Aug 97, prepared by S. E. Jones, University Research Professor, Department of Aerospace Engineering and Mechanics, University of Alabama. Only report documentation fragments and closing administrative remarks survive in this record.

  18. HIGH ENERGY RATE EXTRUSION OF URANIUM

    DOEpatents

    Lewis, L.

    1963-07-23

    A method of extruding uranium at a high energy rate is described. Conditions during the extrusion are such that the temperature of the metal during extrusion reaches a point above the normal alpha to beta transition, but the metal nevertheless remains in the alpha phase in accordance with the Clausius- Clapeyron equation. Upon exiting from the die, the metal automatically enters the beta phase, after which the metal is permitted to cool. (AEC)

  19. High-rate systematic recursive convolutional encoders: minimal trellis and code search

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2012-12-01

    We consider high-rate systematic recursive convolutional encoders to be adopted as constituent encoders in turbo schemes. Douillard and Berrou showed that, despite its complexity, the construction of high-rate turbo codes by means of high-rate constituent encoders is advantageous over the construction based on puncturing rate-1/2 constituent encoders. To reduce the decoding complexity of high-rate codes, we introduce the construction of the minimal trellis for a systematic recursive convolutional encoding matrix. A code search is conducted and examples are provided which indicate that a more finely grained decoding complexity-error performance trade-off is obtained.

  20. Reserve, flowing electrolyte, high rate lithium battery

    NASA Astrophysics Data System (ADS)

    Puskar, M.; Harris, P.

    Flowing electrolyte Li/SOCl2 tests in single cell and multicell bipolar fixtures have been conducted, and measurements are presented for electrolyte flow rates, inlet and outlet temperatures, fixture temperatures at several points, and the pressure drop across the fixture. Reserve lithium batteries with flowing thionyl-chloride electrolytes are found to be capable of very high energy densities with usable voltages and capacities at current densities as high as 500 mA/sq cm. At this current density, a battery stack 10 inches in diameter is shown to produce over 60 kW of power while maintaining a safe operating temperature.

  1. Resident physicians' clinical training and error rate: the roles of autonomy, consultation, and familiarity with the literature.

    PubMed

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-03-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors, since they carry substantial responsibility for patient care yet are novice practitioners still learning and mastering their profession. The present study explores the relationships between residents' error rates and three clinical training methods: (1) progressive independence or level of autonomy, (2) consulting the physician on call, and (3) familiarity with up-to-date medical literature, and examines whether these relationships vary between the specialties of surgery and internal medicine and between novice and experienced residents. 142 residents in 22 medical departments from two hospitals participated in the study. Results of hierarchical linear model analysis indicated that lower levels of autonomy, higher levels of consultation with the physician on call, and higher levels of familiarity with up-to-date medical literature were associated with lower resident error rates. The associations varied between the internal medicine and surgery specializations and between novice and experienced residents. In conclusion, the results suggest that the implicit curriculum whereby residents are afforded autonomy and progressive independence with nominal supervision, in accordance with their relevant skills and experience, must be applied cautiously depending on specialization and experience. In addition, it is necessary to create a supportive and judgment-free climate within the department that may reduce a resident's hesitation to consult the attending physician.

  2. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    PubMed

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
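
    The SIMEX and beta-binomial calibration estimators themselves are not reproduced here, but the bias they target is easy to demonstrate. The toy simulation below assumes a linear model of age on a true methylation proportion, observes the proportion binomially with varying read depth (hence heteroscedastic error), and shows the attenuation of the naive slope; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
true_p = rng.beta(2.0, 2.0, n)                   # true methylation rates
age = 20.0 + 50.0 * true_p + rng.normal(0.0, 4.0, n)

depth = rng.integers(3, 50, n)                   # varying sequencing effort
obs_p = rng.binomial(depth, true_p) / depth      # binomially mismeasured predictor

slope_true = np.polyfit(true_p, age, 1)[0]
slope_naive = np.polyfit(obs_p, age, 1)[0]
print(f"true slope {slope_true:.1f}, naive slope {slope_naive:.1f}")
# The naive slope is attenuated toward zero; SIMEX and regression
# calibration are designed to remove exactly this bias.
```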

  3. Theoretical computation of trace gases retrieval random error from measurements of high spectral resolution infrared sounder

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Smith, William L.; Woolf, Harold M.; Theriault, J. M.

    1991-01-01

    The purpose of this paper is to demonstrate the trace gas profiling capabilities of future passive high spectral resolution (1 cm^-1 or better) infrared (600 to 2700 cm^-1) satellite tropospheric sounders. Such sounders, including the grating spectrometer Atmospheric InfRared Sounder (AIRS) (Chahine et al., 1990) and the interferometric GOES High Resolution Interferometer Sounder (GHIS) (Smith et al., 1991), can provide the unique infrared spectra that enable this analysis. In this calculation only the total random retrieval error component is presented; the systematic error components contributed by forward and inverse model error are not considered (the subject of further studies). The total random errors, composed of null-space error (the vertical resolution component) and measurement error (the instrument noise component), are computed assuming one-wavenumber spectral resolution over the span 1100 cm^-1 to 2300 cm^-1 (the band from 600 cm^-1 to 1100 cm^-1 is not used since the three gases have no major absorption there) and measurement noise of 0.25 K at a reference temperature of 260 K. Temperature, water vapor, ozone, and mixing ratio profiles of nitrous oxide, carbon monoxide, and methane are taken from 1976 US Standard Atmosphere conditions (a FASCODE model). Covariance matrices of the gases are 'subjectively' generated by assuming a 50 percent standard deviation of Gaussian perturbation with respect to the US Standard model profiles. Minimum information and maximum likelihood retrieval solutions are used.
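
    The random-error computation described here follows standard optimal estimation, in which the posterior retrieval error covariance is S = (K^T S_e^-1 K + S_a^-1)^-1, combining the measurement-noise and null-space (a priori smoothing) contributions. The sketch below uses toy sizes and noise levels, not the paper's actual channel set or covariances:

```python
import numpy as np

def retrieval_error_covariance(K, S_e, S_a):
    """Posterior error covariance S = (K^T Se^-1 K + Sa^-1)^-1 from
    standard optimal estimation."""
    return np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))

rng = np.random.default_rng(0)
K = rng.normal(scale=0.01, size=(1200, 40))   # Jacobian (weighting functions)
S_e = (0.25 ** 2) * np.eye(1200)              # 0.25 K radiometric noise
S_a = (0.5 ** 2) * np.eye(40)                 # 50% a priori variability
sigma = np.sqrt(np.diag(retrieval_error_covariance(K, S_e, S_a)))
print(sigma[:5])                              # 1-sigma profile retrieval errors
```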

  4. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling high data rate communication systems differs from modeling low data rate systems. Three simulations were built during the development phase of Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling. The first was a SIMSCRIPT model based on determining and processing each event at each node. The second simulation, developed in C, was based on isolating the distinct objects that can be identified: the ring, the message, the node, and the set of critical events. The third model further distilled the basic network functionality by creating a single object, the node, which includes the set of critical events that occur at the node; the ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. It should be noted that each language was selected mainly because of the developer's familiarity with it; the models were built not to compare structure or language, but because the complexity of the problem and obvious errors in initial results motivated alternative models to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand the modeling complexities. Each model is described along with its features and problems. The models are compared, and concluding observations and remarks are presented.

  5. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    SciTech Connect

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R.; National Inst. of Standards and Technology, Gaithersburg, MD )

    1990-01-01

    The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The original procedure was very complicated and made no assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured independently at each point of the error map. A simpler mapping procedure, developed during the early 1980s, assumed rigid body motion of the machine. This method has been used to calibrate lower accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid body model had not previously been used on highly repeatable CMMs such as the M-48. In this report we present early mapping data for the two M-48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.

  6. Can the Misinterpretation Amendment Rate Be Used as a Measure of Interpretive Error in Anatomic Pathology?: Implications of a Survey of the Directors of Anatomic and Surgical Pathology.

    PubMed

    Parkash, Vinita; Fadare, Oluwole; Dewar, Rajan; Nakhleh, Raouf; Cooper, Kumarasen

    2017-03-01

    A repeat survey of the Association of the Directors of Anatomic and Surgical Pathology, done 10 years after the original, was used to assess trends and variability in classifying scenarios as errors, and the post-signout report modification preferred by the membership for correcting error. The results were analyzed to determine whether interpretive amendment rates might act as surrogate measures of interpretive error in pathology. An analysis of the responses indicated that primary-level misinterpretations (benign to malignant and vice versa) were universally qualified as error, while secondary-level misinterpretations or misclassifications were inconsistently labeled error. There was added variability in the preferred post-signout report modification used to correct report alterations. The classification of a scenario as error appeared to correlate with the severity of potential harm of the missed call, the perceived subjectivity of the diagnosis, and the ambiguity of reporting terminology. Substantial differences in policies for error detection and optimal reporting format were documented between departments. In conclusion, the inconsistency in labeling scenarios as error, disagreement about the optimal post-signout report modification for correcting error, and variability in error detection policies preclude the use of the misinterpretation amendment rate as a surrogate measure for error in anatomic pathology. There has been little change in uniformity of definition, attitudes, and perception of interpretive error in anatomic pathology over the last 10 years.

  7. Civilian residential fire fatality rates: Six high-rate states versus six low-rate states

    NASA Astrophysics Data System (ADS)

    Hall, J. R., Jr.; Helzer, S. G.

    1983-08-01

    Results of an analysis of 1,600 fire fatalities occurring in six states with high fire-death rates and six states with low fire-death rates are presented. Reasons for the differences in rates are explored, with special attention to victim age, sex, race, and condition at time of ignition. Fire cause patterns are touched on only lightly but are addressed more extensively in the companion piece to this report, "Rural and Non-Rural Civilian Residential Fire Fatalities in Twelve States" (NBSIR 82-2519).

  8. High rate pulse processing algorithms for microcalorimeters

    SciTech Connect

    Rabin, Michael; Hoover, Andrew S; Bacrania, Mnesh K; Tan, Hui; Breus, Dimitry; Henning, Wolfgang; Sabourov, Konstantin; Collins, Jeff; Warburton, William K; Dorise, Bertrand; Ullom, Joel N

    2009-01-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Consequently, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. Large arrays, however, require as much pulse processing as possible to be performed at the front end of the readout electronics, to avoid transferring large amounts of waveform data to a host computer for processing. In this paper, the authors present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics also under development by the authors, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses, and thus achieving much higher output count rates than existing algorithms currently achieve. Details of these algorithms are presented, and their performance is compared to that of the 'optimal filter' that is the dominant pulse processing algorithm in the cryogenic-detector community.
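
    The authors' filters are not given in the abstract; as a point of reference, a common baseline for real-time processing of slowly decaying pulses is the recursive trapezoidal shaper in the style of Jordanov and Knoll, sketched below with illustrative parameters. It shortens exponential pulses into trapezoids so that overlapping events separate and heights can be read off the flat top.

```python
import numpy as np

def trapezoidal_filter(v, k, l, tau):
    """Recursive trapezoidal shaper for exponential pulses with decay
    constant tau (in samples): rise k, flat top l - k; output scaled so
    the flat top approximates the pulse height."""
    M = 1.0 / (np.exp(1.0 / tau) - 1.0)    # pole-zero compensation factor
    v = np.asarray(v, dtype=float)
    d = v.copy(); d[k:] -= v[:-k]          # v(n) - v(n-k)
    dl = d.copy(); dl[l:] -= d[:-l]        # ... - v(n-l) + v(n-k-l)
    s = np.cumsum(np.cumsum(dl) + M * dl)  # double accumulator stage
    return s / (M * k)

# Two overlapping exponential pulses, 300 samples apart.
n = np.arange(2000)
pulse = lambda t0, a: a * np.exp(-(n - t0) / 500.0) * (n >= t0)
trace = pulse(400, 1.0) + pulse(700, 0.7)
out = trapezoidal_filter(trace, k=50, l=100, tau=500.0)
print(round(out.max(), 2))   # flat tops near 1.0 and 0.7 despite the overlap
```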

  9. High Strain Rate Behavior of Polyurea Compositions

    NASA Astrophysics Data System (ADS)

    Joshi, Vasant; Milby, Christopher

    2011-06-01

    Polyurea has been gaining importance in recent years due to its impact-resistance properties. The actual composition of this viscoelastic material must be tailored for each specific use, so it is imperative to study how variations in composition affect the material's properties. The high-strain-rate response of three polyurea compositions with varying molecular weights has been investigated using a Split Hopkinson Pressure Bar arrangement equipped with titanium bars. The polyurea compositions were synthesized from polyamines (Versalink, Air Products) with a multi-functional isocyanate (Isonate 143L, Dow Chemical). Amines with molecular weights of 1000, 650, and a blend of 250/1000 were used in the current investigation. The materials have been tested up to strain rates of 6000/s. Results from these tests show clear trends in the high-rate behavior: while the higher molecular weight compositions show lower yield, they do not show dominant hardening behavior, whereas the 250/1000 blend shows higher load-bearing capability but lower strain-hardening effects than the 650 and 1000 molecular weight amine-based materials. Refinement of the experimental methods and a comparison of results using an aluminum Split Hopkinson Bar are presented.

  10. High strain rate behavior of polyurea compositions

    NASA Astrophysics Data System (ADS)

    Joshi, Vasant S.; Milby, Christopher

    2012-03-01

    The high-strain-rate response of three polyurea compositions with varying molecular weights has been investigated using a Split Hopkinson Pressure Bar arrangement equipped with aluminum bars. The three polyurea compositions were synthesized from polyamines (Versalink, Air Products) with a multi-functional isocyanate (Isonate 143L, Dow Chemical). Amines with molecular weights of 1000, 650, and a blend of 250/1000 were used in the current investigation. These materials have been tested to strain rates of over 6000/s. The high-strain-rate results show trends that vary as a function of increasing strain: while the higher molecular weight compositions show lower yield, they do not show dominant hardening behavior at lower strain, whereas the 250/1000 blend shows higher load-bearing capability but lower strain-hardening effects than the 650 and 1000 molecular weight amine-based materials. The results indicate that the initial increase in the modulus of the 250/1000 blend may lead to the loss of strain-hardening characteristics as the material is compressed to 50% strain, compared to the 1000 molecular weight amine-based material.

  11. High strain-rate magnetoelasticity in Galfenol

    NASA Astrophysics Data System (ADS)

    Domann, J. P.; Loeffler, C. M.; Martin, B. E.; Carman, G. P.

    2015-09-01

    This paper presents experimental measurements of a highly magnetoelastic material (Galfenol) under impact loading. A Split Hopkinson Pressure Bar was used to generate compressive stress up to 275 MPa at strain rates of either 20/s or 33/s while measuring the stress-strain response and the change in magnetic flux density due to magnetoelastic coupling. The average Young's modulus (44.85 GPa) was invariant to strain rate, with instantaneous stiffness ranging from 25 to 55 GPa. A lumped-parameters model simulated the measured pickup coil voltages in response to an applied stress pulse. Fitting the model to the experimental data provided the average piezomagnetic coefficient and relative permeability as functions of field strength. The model suggests magnetoelastic coupling is largely insensitive to strain rates as high as 33/s. Additionally, the lumped-parameters model was used to investigate magnetoelastic transducers as potential pulsed power sources. Results show that Galfenol can generate large quantities of instantaneous power (80 MW/m³), comparable to explosively driven ferromagnetic pulse generators (500 MW/m³). However, this process is much more efficient and can be carried out cyclically in the linear elastic range of the material, in stark contrast with explosively driven pulsed power generators.

  12. High strain rate deformation of layered nanocomposites.

    PubMed

    Lee, Jae-Hwang; Veysset, David; Singer, Jonathan P; Retsch, Markus; Saini, Gagan; Pezeril, Thomas; Nelson, Keith A; Thomas, Edwin L

    2012-01-01

    Insight into the mechanical behaviour of nanomaterials under the extreme condition of very high deformation rates and to very large strains is needed to provide improved understanding for the development of new protective materials. Applications include protection against bullets for body armour, micrometeorites for satellites, and high-speed particle impact for jet engine turbine blades. Here we use a microscopic ballistic test to report the responses of periodic glassy-rubbery layered block-copolymer nanostructures to impact from hypervelocity micron-sized silica spheres. Entire deformation fields are experimentally visualized at an exceptionally high resolution (below 10 nm) and we discover how the microstructure dissipates the impact energy via layer kinking, layer compression, extreme chain conformational flattening, domain fragmentation and segmental mixing to form a liquid phase. Orientation-dependent experiments show that the dissipation can be enhanced by 30% by proper orientation of the layers.

  13. High frame-rate digital radiographic videography

    SciTech Connect

    King, N.S.P.; Cverna, F.H.; Albright, K.L.; Jaramillo, S.A.; Yates, G.J.; McDonald, T.E.; Flynn, M.J.; Tashman, S.

    1994-09-01

    High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image-intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an x-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9-inch acrylic disk with embedded lead markers, rotating at approximately 1000 RPM, demonstrated the system response to a high-velocity/high-contrast target. By gating the P-20 phosphor image from the x-ray image convertor with a second image intensifier (II) and using a 100-microsecond-wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  14. Denoising DNA deep sequencing data-high-throughput sequencing errors and their correction.

    PubMed

    Laehnemann, David; Borkhardt, Arndt; McHardy, Alice Carolyn

    2016-01-01

    Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance on which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here.

  15. Numerical errors in the real-height analysis of ionograms at high latitudes

    SciTech Connect

    Titheridge, J.E.

    1987-10-01

    A simple dual-range integration method for maintaining accuracy in the analysis of real-height ionograms at high latitudes up to a dip angle of 89 deg is presented. Numerical errors are reduced to zero for the start and valley calculations at all dip angles up to 89.9 deg. It is noted that the extreme errors which occur at high latitudes can alternatively be reduced by using a decreased value for the dip angle. An expression for the optimum dip angle for different integration orders and frequency intervals is given. 17 references.

  16. Rate Constants for Fine-structure Excitations in O–H Collisions with Error Bars Obtained by Machine Learning

    NASA Astrophysics Data System (ADS)

    Vieira, Daniel; Krems, Roman V.

    2017-02-01

    We present an approach using a combination of coupled channel scattering calculations with a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate constants for non-adiabatic transitions in inelastic atomic collisions to variations of the underlying adiabatic interaction potentials. Using this approach, we improve the previous computations of the rate constants for the fine-structure transitions in collisions of O(^3P_j) with atomic H. We compute the error bars of the rate constants corresponding to 20% variations of the ab initio potentials and show that this method can be used to determine which of the individual adiabatic potentials are more or less important for the outcome of different fine-structure changing collisions.
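
    A minimal sketch of the regression step, assuming scikit-learn and entirely synthetic training data (scaling factors applied to one adiabatic potential versus the resulting rate constant; real inputs would come from coupled channel calculations):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # Hypothetical inputs: fractional scalings of one adiabatic potential (+/-20%)
        X = np.linspace(0.8, 1.2, 9).reshape(-1, 1)
        # Hypothetical outputs: rate constants (cm^3/s) from coupled-channel runs
        y = 1e-11 * (1.0 + 0.5 * (X.ravel() - 1.0) + 0.2 * (X.ravel() - 1.0) ** 2)

        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.1),
                                      normalize_y=True)
        gp.fit(X, y)

        # Predicted rate constant with an uncertainty band over the +/-20% range
        X_test = np.linspace(0.8, 1.2, 101).reshape(-1, 1)
        mean, std = gp.predict(X_test, return_std=True)
        print(f"rate at nominal potential: {mean[50]:.3e} +/- {std[50]:.1e} cm^3/s")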

  17. Microalgal separation from high-rate ponds

    SciTech Connect

    Nurdogan, Y.

    1988-01-01

    High rate ponding (HRP) processes are playing an increasing role in the treatment of organic wastewaters in sunbelt communities. Photosynthetic oxygenation by algae has proved to cost only one-seventh as much as mechanical aeration for activated sludge systems. During this study, an advanced HRP, which produces an effluent equivalent to tertiary treatment, has been studied. It emphasizes not only waste oxidation but also algal separation and nutrient removal. This new system is herein called advanced tertiary high rate ponding (ATHRP). Phosphorus removal in HRP systems is normally low because algal uptake of phosphorus is about one percent of their 200-300 mg/L dry weights. Precipitation of calcium phosphates by autoflocculation also occurs in HRP at high pH levels, but it is generally not complete due to insufficient calcium concentration in the pond. In the case of Richmond, where the studies were conducted, the sewage is very low in calcium. Therefore, enhancement of natural autoflocculation was studied by adding small amounts of lime to the pond. Through this simple procedure, phosphorus and nitrogen removals were virtually complete, justifying the terminology ATHRP.

  18. A survey of computational methods and error rate estimation procedures for peptide and protein identification in shotgun proteomics

    PubMed Central

    Nesvizhskii, Alexey I.

    2010-01-01

    This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide to spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. This review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from the peptide to the protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
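
    Among the global error rate estimation procedures the review surveys, the target-decoy estimate of the false discovery rate is the simplest to illustrate. A minimal sketch on synthetic scores (the score distributions below are invented; real scores would come from a database search engine):

        import numpy as np

        rng = np.random.default_rng(0)
        target_scores = np.concatenate([rng.normal(3.0, 1.0, 800),    # true matches
                                        rng.normal(0.0, 1.0, 200)])   # false matches
        decoy_scores = rng.normal(0.0, 1.0, 1000)   # decoys model the false matches

        def fdr_at_threshold(t):
            decoys = np.sum(decoy_scores >= t)
            targets = np.sum(target_scores >= t)
            return decoys / max(targets, 1)         # simple decoy/target FDR estimate

        for t in (1.0, 2.0, 3.0):
            print(f"score threshold {t}: estimated FDR = {fdr_at_threshold(t):.3f}")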

  19. Innovations in high rate condensate polishing systems

    SciTech Connect

    O'Brien, M.

    1995-01-01

    Test work is being conducted at two major east coast utilities to evaluate flow distribution in high flow rate condensate polishing service vessels. The work includes core sample data used to map the flow distribution in vessels as originally manufactured. Underdrain modifications for improved flow distribution are discussed, with data indicating performance increases of the service vessel following the modifications. The test work is ongoing, with preliminary data indicating that significant improvements in cycle run length are possible with underdrain modifications. The economic benefits of the above modifications are discussed.

  20. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups, in the latter setup, the
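
    A minimal simulation in the spirit of the second chapter, assuming scikit-learn and NumPy: a sparse signal is recovered by Lasso when the errors are a (truncated) fractional-difference long memory process with memory parameter d. All sizes and tuning values are illustrative:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        n, p, d, lags = 500, 50, 0.3, 200    # d is the memory parameter

        # Long-memory errors via a truncated fractional-difference filter of white noise
        j = np.arange(1, lags)
        psi = np.cumprod(np.concatenate(([1.0], (j + d - 1) / j)))
        eps = np.convolve(rng.standard_normal(n + lags), psi)[lags:lags + n]

        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:3] = [2.0, -1.5, 1.0]          # sparse truth
        y = X @ beta + eps

        fit = Lasso(alpha=0.1).fit(X, y)
        print("selected coefficient indices:", np.flatnonzero(fit.coef_))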

  1. Accurate Bit-Error Rate Evaluation for TH-PPM Systems in Nakagami Fading Channels Using Moment Generating Functions

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Gunawan, Erry; Law, Choi Look; Teh, Kah Chan

    Analytical expressions based on the Gauss-Chebyshev quadrature (GCQ) rule technique are derived to evaluate the bit-error rate (BER) for time-hopping pulse position modulation (TH-PPM) ultra-wideband (UWB) systems over a Nakagami-m fading channel. The analyses are validated by simulation results and adopted to assess the accuracy of the commonly used Gaussian approximation (GA) method. The influence of the fading severity on the BER performance of the TH-PPM UWB system is investigated.
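
    The paper's target is TH-PPM UWB; the sketch below applies the same GCQ/MGF machinery to the simpler textbook case of BPSK over a Nakagami-m channel, whose closed-form MGF is used here. Parameters are illustrative:

        import numpy as np

        def mgf_nakagami(s, gamma_bar, m):
            # MGF of the SNR for Nakagami-m fading, E[exp(s*gamma)]
            return (1.0 - s * gamma_bar / m) ** (-m)

        def ber_bpsk_gcq(gamma_bar, m, n=32):
            # P = (1/pi) * integral_0^{pi/2} M(-1/sin^2 theta) d(theta), evaluated
            # on Gauss-Chebyshev nodes x_k = cos((2k-1)pi/(2n)) mapped to [0, pi/2]
            k = np.arange(1, n + 1)
            x = np.cos((2 * k - 1) * np.pi / (2 * n))
            theta = (np.pi / 4) * (x + 1.0)
            integrand = mgf_nakagami(-1.0 / np.sin(theta) ** 2, gamma_bar, m) / np.pi
            return (np.pi / 4) * (np.pi / n) * np.sum(integrand * np.sqrt(1 - x ** 2))

        for snr_db in (0, 5, 10):
            gamma_bar = 10 ** (snr_db / 10)
            print(f"SNR {snr_db} dB, m=2: BER ~ {ber_bpsk_gcq(gamma_bar, m=2):.3e}")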

  2. Evaluation of write error rate for voltage-driven dynamic magnetization switching in magnetic tunnel junctions with perpendicular magnetization

    NASA Astrophysics Data System (ADS)

    Shiota, Yoichi; Nozaki, Takayuki; Tamaru, Shingo; Yakushiji, Kay; Kubota, Hitoshi; Fukushima, Akio; Yuasa, Shinji; Suzuki, Yoshishige

    2016-01-01

    We investigated the write error rate (WER) for voltage-driven dynamic switching in magnetic tunnel junctions with perpendicular magnetization. We observed a clear oscillatory behavior of the switching probability with respect to the duration of the pulse voltage, which reveals the precessional motion of the magnetization during voltage application. We experimentally demonstrated a WER as low as 4 × 10^-3 at the pulse duration corresponding to a half precession period (~1 ns). The comparison between the results of the experiment and a simulation based on a macrospin model shows the possibility of an ultralow WER (<10^-15) under optimum conditions. This study provides a guideline for developing practical voltage-driven spintronic devices.
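
    Quoting a WER of 4 × 10^-3 implicitly requires enough switching trials for a tight binomial confidence interval; a minimal sketch of that bookkeeping (trial counts below are hypothetical):

        import math

        def wer_with_ci(errors, trials, z=1.96):
            # Point estimate and normal-approximation 95% CI for a write error rate
            p = errors / trials
            half = z * math.sqrt(p * (1 - p) / trials)
            return p, max(p - half, 0.0), p + half

        p, lo, hi = wer_with_ci(errors=40, trials=10_000)
        print(f"WER = {p:.1e} (95% CI {lo:.1e} .. {hi:.1e})")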

  3. Influence of beam wander on bit-error rate in a ground-to-satellite laser uplink communication system.

    PubMed

    Ma, Jing; Jiang, Yijun; Tan, Liying; Yu, Siyuan; Du, Wenhe

    2008-11-15

    Based on weak fluctuation theory and the beam-wander model, the bit-error rate of a ground-to-satellite laser uplink communication system is analyzed and compared with the case in which beam wander is not taken into account. Considering the combined effect of scintillation and beam wander, the optimum divergence angle and transmitter beam radius for a communication system are investigated. Numerical results show that both increase with increasing total link margin and transmitted wavelength. This work can benefit ground-to-satellite laser uplink communication system design.

  4. Packet error rate analysis of OOK, DPIM, and PPM modulation schemes for ground-to-satellite laser uplink communications.

    PubMed

    Jiang, Yijun; Tao, Kunyu; Song, Yiwei; Fu, Sen

    2014-03-01

    The performance of on-off keying (OOK), digital pulse interval modulation (DPIM), and pulse position modulation (PPM) schemes is investigated for ground-to-satellite laser uplink communications. Packet error rates of these modulation systems are compared, with consideration of the combined effect of intensity fluctuation and beam wander. Based on the numerical results, the performance of the different modulation systems is discussed. The optimum divergence angle and transmitted beam radius for each modulation scheme are identified, and their dependence on the transmitted laser power is analyzed. This work can be helpful for modulation scheme selection and system design in ground-to-satellite laser uplink communications.

  5. Cervix cancer brachytherapy: high dose rate.

    PubMed

    Miglierini, P; Malhaire, J-P; Goasduff, G; Miranda, O; Pradier, O

    2014-10-01

    Cervical cancer, although less common in industrialized countries, is the fourth most common cancer affecting women worldwide and the fourth leading cause of cancer death. In developing countries, these cancers are often discovered at a later stage, in the form of locally advanced tumour with a poor prognosis. Depending on the stage of the disease, treatment is mainly based on chemoradiotherapy followed by uterovaginal brachytherapy, with surgery for any residual tumour performed as needed, or on principle by some teams. The role of irradiation is crucial to ensure better local control. It has been shown that the higher the delivered dose, the better the local results. In order to spare the organs at risk as much as possible while allowing this dose escalation, brachytherapy (intracavitary and/or interstitial) has been progressively introduced. Its evolution and progressive improvement have led to the development of high dose rate brachytherapy, whose main advantage is the possibility of outpatient treatment while maintaining the effectiveness of other brachytherapy forms (i.e., low dose rate or pulsed dose rate). Numerous innovations have also been made in the field of imaging, leading to progress in treatment planning systems by switching from a two-dimensional form to a three-dimensional one. Image-guided brachytherapy allows more precise target volume delineation as well as an optimized dosimetry, permitting better coverage of target volumes.

  6. The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals

    PubMed Central

    Westbrook, Johanna I; Baysari, Melissa T; Li, Ling; Burke, Rosemary; Richardson, Katrina L; Day, Richard O

    2013-01-01

    Objectives To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS). To determine if the rate of system-related prescribing errors is greater than the rate of errors prevented. Methods Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS. System-related errors were classified by manifestation (e.g., wrong dose), mechanism, and severity. A mechanism typology comprised errors made: selecting items from drop-down menus; constructing orders; editing orders; or failing to complete new e-PS tasks. Proportions and rates of errors by manifestation, mechanism, and e-PS were calculated. Results 42.4% (n=493) of 1164 prescribing errors were system-related (78/100 admissions). This result did not differ by e-PS (MedChart 42.6% (95% CI 39.1 to 46.1); Cerner 41.9% (37.1 to 46.8)). For 13.4% (n=66) of system-related errors there was evidence that the error was detected prior to the study audit. 27.4% (n=135) of system-related errors manifested as timing errors and 22.5% (n=111) as wrong drug strength errors. Selection errors accounted for 43.4% (34.2/100 admissions), editing errors 21.1% (16.5/100 admissions), and failure to complete new e-PS tasks 32.0% (32.0/100 admissions). MedChart generated more selection errors (OR=4.17; p=0.00002) but fewer new task failures (OR=0.37; p=0.003) relative to the Cerner e-PS. The two systems prevented significantly more errors than they generated (220/100 admissions (95% CI 180 to 261) vs 78 (95% CI 66 to 91)). Conclusions System-related errors are frequent, yet few are detected. e-PS require new tasks of prescribers, creating additional cognitive load and error opportunities. Dual classification, by manifestation and mechanism, allowed identification of design features which increase risk and of potential solutions. e-PS designs with fewer drop-down menu selections may reduce error risk. PMID:23721982

  7. Adjoint-field errors in high fidelity compressible turbulence simulations for sound control

    NASA Astrophysics Data System (ADS)

    Vishnampet, Ramanathan; Bodony, Daniel; Freund, Jonathan

    2013-11-01

    A consistent discrete adjoint for a high-fidelity discretization of the three-dimensional Navier-Stokes equations is used to quantify the error in the sensitivity gradient predicted by the continuous adjoint method, and to examine the aeroacoustic flow-control problem for free-shear-flow turbulence. A particular quadrature scheme for approximating the cost functional makes our discrete adjoint formulation for a fourth-order Runge-Kutta scheme with high-order finite differences practical and efficient. The continuous adjoint-based sensitivity gradient is shown to be inconsistent due to discretization truncation errors, grid stretching and filtering near boundaries. These errors cannot be eliminated by increasing the spatial or temporal resolution, since chaotic interactions lead them to become O(1) at the time of control actuation. Although this is a known behavior for chaotic systems, its effect on noise control is much harder to anticipate, especially given the different resolution needs of different parts of the turbulence and acoustic spectra. A comparison of energy spectra of the adjoint pressure fields shows significant error in the continuous adjoint at all wavenumbers, even though they are well resolved. The effect of this error on the noise control mechanism is analyzed.

  8. High-Rate Digital Receiver Board

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder; Bialas, Thomas; Brambora, Clifford; Fisher, David

    2004-01-01

    A high-rate digital receiver (HRDR) implemented as a peripheral component interface (PCI) board has been developed as a prototype of compact, general-purpose, inexpensive, potentially mass-producible data-acquisition interfaces between telemetry systems and personal computers. The installation of this board in a personal computer together with an analog preprocessor enables the computer to function as a versatile, high-rate telemetry-data-acquisition and demodulator system. The prototype HRDR PCI board can handle data at rates as high as 600 megabits per second, in a variety of telemetry formats, transmitted by diverse phase-modulation schemes that include binary phase-shift keying and various forms of quadrature phase-shift keying. Costing less than $25,000 (as of year 2003), the prototype HRDR PCI board supplants multiple racks of older equipment that, when new, cost over $500,000. Just as the development of standard network-interface chips has contributed to the proliferation of networked computers, it is anticipated that the development of standard chips based on the HRDR could contribute to reductions in size and cost and increases in performance of telemetry systems.

  9. Slow-growing cells within isogenic populations have increased RNA polymerase error rates and DNA damage

    PubMed Central

    van Dijk, David; Dhar, Riddhiman; Missarova, Alsu M.; Espinar, Lorena; Blevins, William R.; Lehner, Ben; Carey, Lucas B.

    2015-01-01

    Isogenic cells show a large degree of variability in growth rate, even when cultured in the same environment. Such cell-to-cell variability in growth can alter sensitivity to antibiotics, chemotherapy and environmental stress. To characterize transcriptional differences associated with this variability, we have developed a method, FitFlow, that enables the sorting of subpopulations by growth rate. The slow-growing subpopulation shows a transcriptional stress response, but, more surprisingly, these cells have reduced RNA polymerase fidelity and exhibit a DNA damage response. As DNA damage is often caused by oxidative stress, we test the addition of an antioxidant, and find that it reduces the size of the slow-growing population. More generally, we find a significantly altered transcriptome in the slow-growing subpopulation that only partially resembles that of cells growing slowly due to environmental and culture conditions. Slow-growing cells upregulate transposons and express more chromosomal, viral and plasmid-borne transcripts, and thus explore a larger genotypic, and so phenotypic, space. PMID:26268986

  10. Understanding High Rate Behavior Through Low Rate Analog

    DTIC Science & Technology

    2014-04-28

    Transition temperature [°C]: 82.4 / -20. Melting point [°C]: 100-260 / 40-50. Thermal conductivity [W·m⁻¹·K⁻¹]: 0.14–0.28 / 0.14–0.17. (Table 2. The four PVC...) Here, the thermal diffusivity, α, is first calculated from the conductivity, k, density, ρ, and specific heat capacity, C, as α = k/(ρC) (Eq. 13); alternatively the ... chapter. Furthermore, the low thermal conductivity means that specimen heating also occurs at lower strain rates than for PVC. Before performing

  11. Error-estimation-guided rebuilding of de novo models increases the success rate of ab initio phasing.

    PubMed

    Shrestha, Rojan; Simoncini, David; Zhang, Kam Y J

    2012-11-01

    Recent advancements in computational methods for protein-structure prediction have made it possible to generate the high-quality de novo models required for ab initio phasing of crystallographic diffraction data using molecular replacement. Despite those encouraging achievements in ab initio phasing using de novo models, its success is limited only to those targets for which high-quality de novo models can be generated. In order to increase the scope of targets to which ab initio phasing with de novo models can be successfully applied, it is necessary to reduce the errors in the de novo models that are used as templates for molecular replacement. Here, an approach is introduced that can identify and rebuild the residues with larger errors, which subsequently reduces the overall C(α) root-mean-square deviation (CA-RMSD) from the native protein structure. The error in a predicted model is estimated from the average pairwise geometric distance per residue computed among selected lowest energy coarse-grained models. This score is subsequently employed to guide a rebuilding process that focuses on more error-prone residues in the coarse-grained models. This rebuilding methodology has been tested on ten protein targets that were unsuccessful using previous methods. The average CA-RMSD of the coarse-grained models was improved from 4.93 to 4.06 Å. For those models with CA-RMSD less than 3.0 Å, the average CA-RMSD was improved from 3.38 to 2.60 Å. These rebuilt coarse-grained models were then converted into all-atom models and refined to produce improved de novo models for molecular replacement. Seven diffraction data sets were successfully phased using rebuilt de novo models, indicating the improved quality of these rebuilt de novo models and the effectiveness of the rebuilding process. Software implementing this method, called MORPHEUS, can be downloaded from http://www.riken.jp/zhangiru/software.html.
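
    A minimal sketch of the error-estimation score described above (the average pairwise geometric distance per residue across an ensemble of low-energy models), using random coordinates as stand-ins for superposed coarse-grained models:

        import numpy as np

        rng = np.random.default_rng(0)
        n_models, n_residues = 10, 120
        models = rng.normal(size=(n_models, n_residues, 3))    # CA coords, assumed superposed

        # Average pairwise distance per residue over all model pairs
        diffs = models[:, None, :, :] - models[None, :, :, :]  # (M, M, R, 3)
        dists = np.linalg.norm(diffs, axis=-1)                 # (M, M, R)
        iu = np.triu_indices(n_models, k=1)
        per_residue_error = dists[iu].mean(axis=0)             # (R,)

        # Flag the most error-prone residues (top 20 percent here) for rebuilding
        cutoff = np.percentile(per_residue_error, 80)
        rebuild = np.flatnonzero(per_residue_error > cutoff)
        print(f"{rebuild.size} residues flagged for rebuilding")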

  12. Senior High School Students' Errors on the Use of Relative Words

    ERIC Educational Resources Information Center

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  13. Errors in administrative-reported ventilator-associated pneumonia rates: are never events really so?

    PubMed

    Thomas, Bradley W; Maxwell, Robert A; Dart, Benjamin W; Hartmann, Elizabeth H; Bates, Dustin L; Mejia, Vicente A; Smith, Philip W; Barker, Donald E

    2011-08-01

    Ventilator-associated pneumonia (VAP) is a common problem in an intensive care unit (ICU), although the incidence is not well established. This study aims to compare the VAP incidence as determined by the treating surgical intensivist with that detected by the hospital Infection Control Service (ICS). Trauma and surgical patients admitted to the surgical critical care service were prospectively evaluated for VAP during a 5-month time period. Collected data included the surgical intensivist's clinical VAP (SIS-VAP) assessment using Centers for Disease Control and Prevention (CDC) VAP criteria. As part of the hospital's VAP surveillance program, these patients' medical records were also reviewed by the ICS for VAP (ICS-VAP) using the same CDC VAP criteria. All patients suspected of having VAP underwent bronchoalveolar lavage (BAL). The SIS-VAP and ICS-VAP were then compared with BAL-VAP. Three hundred twenty-nine patients were admitted to the ICU during the study period. One hundred thirty-three were intubated longer than 48 hours and comprised our study population. Sixty-two patients underwent BAL evaluation for the presence of VAP on 89 occasions. SIS-VAP was diagnosed in 38 (28.5%) patients. ICS-VAP was identified in 11 (8.3%) patients (P < 0.001). The incidence of VAP by BAL criteria was 23.3 per cent. When compared with BAL, SIS-VAP had 61.3 per cent sensitivity and ICS-VAP had 29 per cent sensitivity. VAP rates reported by hospital administrative sources are significantly less accurate than physician-reported rates and dramatically underestimate the incidence of VAP. Proclaiming VAP as a never event for critically ill surgical and trauma patients appears to be a fallacy.

  14. Between‐Batch Pharmacokinetic Variability Inflates Type I Error Rate in Conventional Bioequivalence Trials: A Randomized Advair Diskus Clinical Trial

    PubMed Central

    Carroll, KJ; Mielke, J; Benet, LZ; Jones, B

    2016-01-01

    We previously demonstrated pharmacokinetic differences among manufacturing batches of a US Food and Drug Administration (FDA)‐approved dry powder inhalation product (Advair Diskus 100/50) large enough to establish between‐batch bio‐inequivalence. Here, we provide independent confirmation of pharmacokinetic bio‐inequivalence among Advair Diskus 100/50 batches, and quantify residual and between‐batch variance component magnitudes. These variance estimates are used to consider the type I error rate of the FDA's current two‐way crossover design recommendation. When between‐batch pharmacokinetic variability is substantial, the conventional two‐way crossover design cannot accomplish the objectives of FDA's statistical bioequivalence test (i.e., cannot accurately estimate the test/reference ratio and associated confidence interval). The two‐way crossover, which ignores between‐batch pharmacokinetic variability, yields an artificially narrow confidence interval on the product comparison. The unavoidable consequence is type I error rate inflation, to ∼25%, when between‐batch pharmacokinetic variability is nonzero. This risk of a false bioequivalence conclusion is substantially higher than asserted by regulators as acceptable consumer risk (5%). PMID:27727445

  15. Movement error rate for evaluation of machine learning methods for sEMG-based hand movement classification.

    PubMed

    Gijsberts, Arjan; Atzori, Manfredo; Castellini, Claudio; Muller, Henning; Caputo, Barbara

    2014-07-01

    There has been increasing interest in applying learning algorithms to improve the dexterity of myoelectric prostheses. In this work, we present a large-scale benchmark evaluation on the second iteration of the publicly released NinaPro database, which contains surface electromyography data for 6 DOF force activations as well as for 40 discrete hand movements. The evaluation involves a modern kernel method and compares the performance of three feature representations and three kernel functions. Both the force regression and movement classification problems can be learned successfully when using a nonlinear kernel function, while the exp-χ² kernel outperforms the more popular radial basis function kernel in all cases. Furthermore, combining surface electromyography and accelerometry in a multimodal classifier results in significant increases in accuracy as compared to when either modality is used individually. Since window-based classification accuracy should not be considered in isolation to estimate prosthetic controllability, we also provide results in terms of classification mistakes and prediction delay. To this extent, we propose the movement error rate as an alternative to the standard window-based accuracy. This error rate is insensitive to prediction delays and it therefore allows us to quantify mistakes and delays as independent performance characteristics. This type of analysis confirms that the inclusion of accelerometry is superior, as it results in fewer mistakes while at the same time reducing prediction delay.
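
    A toy illustration of why a movement-level error rate is insensitive to prediction delay while window-based accuracy is not (labels below are synthetic; 0 denotes rest):

        import numpy as np

        truth = np.array([0]*20 + [3]*30 + [0]*20 + [7]*30)
        pred = np.concatenate([np.zeros(25), np.full(25, 3),
                               np.zeros(24), np.full(26, 7)]).astype(int)  # delayed labels

        window_accuracy = float(np.mean(truth == pred))

        # Split the ground truth into contiguous movement segments (non-rest runs)
        boundaries = np.flatnonzero(np.diff(truth) != 0) + 1
        segments = [s for s in np.split(np.arange(truth.size), boundaries)
                    if truth[s[0]] != 0]

        # A movement counts as a mistake only if it is never predicted in its segment
        mistakes = sum(1 for s in segments if not np.any(pred[s] == truth[s[0]]))
        movement_error_rate = mistakes / len(segments)

        print(f"window accuracy: {window_accuracy:.2f}, "
              f"movement error rate: {movement_error_rate:.2f}")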

  16. Phonetic and phonological errors in children with high functioning autism and Asperger syndrome.

    PubMed

    Cleland, Joanne; Gibbon, Fiona E; Peppé, Sue J E; O'Hare, Anne; Rutherford, Marion

    2010-02-01

    This study involved a qualitative analysis of speech errors in children with autism spectrum disorders (ASDs). Participants were 69 children aged 5-13 years; 30 had high functioning autism and 39 had Asperger syndrome. On a standardized test of articulation, a minority (12%) of participants presented with standard scores below the normal range, indicating a speech delay/disorder. Although all the other children had standard scores within the normal range, a sizeable proportion (33% of those with normal standard scores) presented with a small number of errors. Overall, 41% of the group produced at least some speech errors. The speech of children with ASD was characterized mainly by developmental phonological processes (gliding, cluster reduction and final consonant deletion most frequently), but non-developmental error types (such as phoneme-specific nasal emission and initial consonant deletion) were found both in children identified as performing below the normal range on the standardized speech test and in those who performed within the normal range. Non-developmental distortions occurred relatively frequently in the children with ASD, and previous studies of adolescents and adults with ASDs show similar errors, suggesting that they do not resolve over time. Whether or not speech disorders are related specifically to ASD, their presence adds an additional communication and social barrier and should be diagnosed and treated as early as possible in individual children.

  17. Orbit error correction on the high energy beam transport line at the KHIMA accelerator system

    NASA Astrophysics Data System (ADS)

    Park, Chawon; Yim, Heejoong; Hahn, Garam; An, Dong Hyun

    2016-09-01

    For the purpose of treatment of various cancers and medical research, a synchrotron-based medical machine has been developed under the Korea Heavy Ion Medical Accelerator (KHIMA) project and is scheduled to treat patients at the beginning of 2018. The KHIMA synchrotron is designed to accelerate and extract carbon ion (proton) beams with various energies from 110 to 430 MeV/u (60 to 230 MeV). Studies on the lattice design and beam optics for the High Energy Beam Transport (HEBT) line at the KHIMA accelerator system have been carried out using the WinAgile and the MAD-X codes. Because magnetic field errors and misalignments introduce deviations from the design parameters, these error sources should be treated explicitly, and the sensitivity of the machine's lattice to different individual error sources should be considered. Various types of errors, both static and dynamic, have been taken into account and have been consequently corrected with a dedicated correction algorithm by using the MAD-X program. Based on the error analysis, the optimized correction setup was decided, and the specifications for the correcting magnets of the HEBT lines were determined.

  18. Application of high-rate cutting tools

    NASA Astrophysics Data System (ADS)

    Moriarty, John L., Jr.

    1989-03-01

    Widespread application of the newest high-rate cutting tools to the most appropriate jobs is slowed by the sheer magnitude of developments in tool types, materials, workpiece applications, and by the rapid pace of change. Therefore, a study of finishing and roughing sizes of coated carbide inserts having a variety of geometries for single point turning was completed. The cutting tools were tested for tool life, chip quality, and workpiece surface finish at various cutting conditions with medium alloy steel. An empirical wear-life data base was established, and a computer program was developed to facilitate technology transfer, assist selection of carbide insert grades, and provide machine operating parameters. A follow-on test program was implemented suitable for next generation coated carbides, rotary cutting tools, cutting fluids, and ceramic tool materials.

  19. Consideration of wear rates at high velocity

    NASA Astrophysics Data System (ADS)

    Hale, Chad S.

    The development of the research presented here is one in which high velocity relative sliding motion between two bodies in contact has been considered. Overall, the wear environment is truly three-dimensional. A full characterization of three-dimensional wear was not economically feasible because it must be analyzed at the micro-mechanical level to get results. Thus, an engineering approximation was carried out. This approximation was based on a metallographic study identifying the need to include viscoplastic constitutive material models, the coefficient of friction, relationships between the normal load and velocity, and an understanding of wave propagation. A sled test run at the Holloman High Speed Test Track (HHSTT) was considered for the determination of high velocity wear rates. In order to adequately characterize high velocity wear, it was necessary to formulate a numerical model that contained all of the physical events present. The experimental results of a VascoMax 300 maraging steel slipper sliding on an AISI 1080 steel rail during a January 2008 sled test mission were analyzed. During this rocket sled test, the slipper traveled 5,816 meters in 8.14 seconds and reached a maximum velocity of 1,530 m/s. This type of environment was never considered previously in terms of wear evaluation. Each of the features of the metallography were obtained through micro-mechanical experimental techniques. The byproduct of this analysis is that it is now possible to formulate a model that contains viscoplasticity, asperity collisions, temperature and frictional features. Based on the observations of the metallographic analysis, these necessary features have been included in the numerical model, which makes use of a time-dynamic program that follows the movement of a slipper during its experimental test run. The resulting velocity and pressure functions of time have been implemented in the explicit finite element code, ABAQUS. Two-dimensional, plane strain models

  20. Performance Evaluation of High-Rate GPS Seismometers

    NASA Astrophysics Data System (ADS)

    Kato, T.; Ebinuma, T.

    2011-12-01

    High-rate GPS observations with higher than once-per-second sampling are becoming increasingly important for seismology. Unlike a traditional seismometer, which measures short-period vibration using accelerometers, a GPS receiver can measure its antenna position directly and record long-period seismic waves and permanent displacements as well. High-rate GPS observations are expected to provide new insights into the whole of the earthquake process. In this study, we investigated the dynamic characteristics of high-rate GPS receivers capable of outputting observations at up to 50 Hz. This higher output rate, however, does not imply a higher dynamic range of the GPS observations. Since many GPS receivers are designed for low-dynamics applications, such as static survey and personal and car navigation, the bandwidth of the loop filters tends to be narrower in order to reduce the noise level of the observations. The signal tracking loop works like a low-pass filter; thus the narrower the bandwidth, the lower the dynamic range. In order to extend this dynamic limit, high-rate GPS receivers may use a wider loop bandwidth for phase tracking. In this case, the GPS observations are degraded by a higher noise level in return. In addition to the limitation of the loop bandwidth, higher acceleration due to an earthquake may cause a steady-state error in the signal tracking loop. As a result, kinematic solutions experience undesirable position offsets, or the receiver may lose the GPS signals in an extreme case. In order to examine those effects on high-rate GPS observations, we performed an experiment using a GPS signal simulator and several geodetic GPS receivers, including the Trimble Net-R8, NovAtel OEMV, Topcon Net-G3A, and Javad SIGMA-G2T. We set up a zero-baseline simulation scenario in which the rover receiver was vibrating in a periodic motion with frequencies from 1 Hz to 10 Hz around the reference station. The amplitude of the motion was chosen to provide

  1. Assessment of error rates in acoustic monitoring with the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR’s automated detection process uses a ‘score cutoff’, which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cut-offs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were  for song event detection.
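
    monitoR itself is an R package; the sketch below re-creates the score-cutoff idea generically in Python on synthetic arrays (a template is slid across a spectrogram-like matrix and detections are declared where the normalized correlation exceeds the cutoff):

        import numpy as np

        rng = np.random.default_rng(2)
        template = rng.random((16, 8))        # frequency bins x time frames
        survey = rng.random((16, 500))
        survey[:, 240:248] = template         # implant one true "song"

        def correlation_scores(spec, tmpl):
            t = (tmpl - tmpl.mean()) / tmpl.std()
            scores = np.empty(spec.shape[1] - tmpl.shape[1] + 1)
            for i in range(scores.size):
                win = spec[:, i:i + tmpl.shape[1]]
                win = (win - win.mean()) / (win.std() + 1e-12)
                scores[i] = np.mean(win * t)  # normalized cross-correlation
            return scores

        score_cutoff = 0.6                    # minimum match to count as a detection
        scores = correlation_scores(survey, template)
        print("detections at frames:", np.flatnonzero(scores >= score_cutoff))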

  2. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
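
    A minimal replication of the idea, assuming SciPy: regress a skewed (gamma) trait on a rare simulated variant many times under the null and record how often p < 0.05. Sample size, minor allele frequency, and replicate count are illustrative:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n, maf, reps, alpha = 2000, 0.005, 2000, 0.05

        false_pos = 0
        for _ in range(reps):
            g = rng.binomial(2, maf, size=n)              # rare SNV, no true effect
            y = rng.gamma(shape=1.0, scale=2.0, size=n)   # skewed trait under the null
            false_pos += stats.linregress(g, y).pvalue < alpha

        print(f"empirical type I error at alpha={alpha}: {false_pos / reps:.3f}")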

  3. High Data Rate Architecture (HiDRA)

    NASA Technical Reports Server (NTRS)

    Hylton, Alan; Raible, Daniel

    2016-01-01

    high-rate laser terminals. These must interface with the existing, aging data infrastructure. The High Data Rate Architecture (HiDRA) project is designed to provide networked store, carry, and forward capability to optimize data flow through both the existing radio frequency (RF) and new laser communications terminal. The networking capability is realized through the Delay Tolerant Networking (DTN) protocol, and is used for scheduling data movement as well as optimizing the performance of existing RF channels. HiDRA is realized as a distributed FPGA memory and interface controller that is itself controlled by a local computer running DTN software. Thus HiDRA is applicable to other arenas seeking to employ next-generation communications technologies, e.g. deep space. In this paper, we describe HiDRA and its far-reaching research implications.

  4. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, comprising a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with a 622 Mb/s data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 × 10^-15 with 10 second averaging time. Ranging and range rate as a function of the bit error rate of the communication link are reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 × 10^-15 with 10 second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the performance in both operating modes.
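
    The modified Allan deviation quoted above can be computed directly from phase (time-error) samples; a minimal sketch with synthetic white-noise residuals (the sample spacing and averaging factors are illustrative):

        import numpy as np

        def mod_allan_dev(x, m, tau0):
            # Modified Allan deviation from phase samples x at spacing tau0
            N = len(x)
            terms = [np.sum(x[j + 2*m:j + 3*m]) - 2*np.sum(x[j + m:j + 2*m])
                     + np.sum(x[j:j + m]) for j in range(N - 3*m + 1)]
            avar = np.mean(np.square(terms)) / (2.0 * m**2 * (m * tau0)**2)
            return np.sqrt(avar)

        rng = np.random.default_rng(4)
        x = 1e-12 * rng.standard_normal(10_000)   # phase residuals (s)
        tau0 = 0.1                                # sample spacing (s)
        for m in (1, 10, 100):
            print(f"tau = {m * tau0:5.1f} s: mdev = {mod_allan_dev(x, m, tau0):.2e}")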

  5. The Influence of Relatives on the Efficiency and Error Rate of Familial Searching

    PubMed Central

    Rohlfs, Rori V.; Murphy, Erin; Song, Yun S.; Slatkin, Montgomery

    2013-01-01

    We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For Y-chromosome sharing first degree relatives, the Myers protocol has a high probability of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype sharing relatives (half-siblings, first cousins, half-first cousins or second cousins) there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases. PMID:23967076

  6. Photocathodes for High Repetition Rate Light Sources

    SciTech Connect

    Ben-Zvi, Ilan

    2014-04-20

    This proposal brought together teams at Brookhaven National Laboratory (BNL), Lawrence Berkeley National Laboratory (LBNL) and Stony Brook University (SBU) to study photocathodes for high repetition rate light sources such as Free Electron Lasers (FEL) and Energy Recovery Linacs (ERL). Below details the Principal Investigators and contact information. Each PI submits separately for a budget through his corresponding institute. The work done under this grant comprises a comprehensive program on critical aspects of the production of the electron beams needed for future user facilities. Our program pioneered in situ and in operando diagnostics for alkali antimonide growth. The focus is on development of photocathodes for high repetition rate Free Electron Lasers (FELs) and Energy Recovery Linacs (ERLs), including testing SRF photoguns, both normal-conducting and superconducting. Teams from BNL, LBNL and Stony Brook University (SBU) led this research, and coordinated their work over a range of topics. The work leveraged a robust infrastructure of existing facilities and the support was used for carrying out the research at these facilities. The program concentrated in three areas: a) Physics and chemistry of alkali-antimonide cathodes (BNL – LBNL) b) Development and testing of a diamond amplifier for photocathodes (SBU – BNL) c) Tests of both cathodes in superconducting RF photoguns (SBU) and copper RF photoguns (LBNL) Our work made extensive use of synchrotron radiation materials science techniques, such as powder- and single-crystal diffraction, x-ray fluorescence, EXAFS and variable energy XPS. BNL and LBNL have many complementary facilities at the two light sources associated with these laboratories (NSLS and ALS, respectively); use of these will be a major thrust of our program and bring our understanding of these complex materials to a new level. In addition, CHESS at Cornell will be used to continue seamlessly throughout the NSLS dark period and

  7. The Effects of Type I Error Rate and Power of the ANCOVA "F" Test and Selected Alternatives under Nonnormality and Variance Heterogeneity.

    ERIC Educational Resources Information Center

    Rheinheimer, David C.; Penfield, Douglas A.

    2001-01-01

    Studied, through Monte Carlo simulation, the conditions for which analysis of covariance (ANCOVA) does not maintain adequate Type I error rates and power and evaluated some alternative tests. Discusses differences in ANCOVA robustness for balanced and unbalanced designs. (SLD)

  8. Exact error rate analysis of equal gain and selection diversity for coherent free-space optical systems on strong turbulence channels.

    PubMed

    Niu, Mingbo; Cheng, Julian; Holzman, Jonathan F

    2010-06-21

    Exact error rate performances are studied for coherent free-space optical communication systems under strong turbulence with diversity reception. Equal gain and selection diversity are considered as practical schemes to mitigate turbulence. The exact bit-error rate for binary phase-shift keying and the outage probability are developed for equal gain diversity. Analytical expressions are obtained for the bit-error rate of differential phase-shift keying and asynchronous frequency-shift keying, as well as for the outage probability using selection diversity. Furthermore, we provide closed-form expressions for the diversity order and coding gain with both diversity receptions. The analytical results are verified by computer simulations and are suitable for rapid error-rate calculation.

  9. Outage Performance and Average Symbol Error Rate of M-QAM for Maximum Ratio Combining with Multiple Interferers

    NASA Astrophysics Data System (ADS)

    Ahn, Kyung Seung

    In this paper, we investigate the performance of maximum ratio combining (MRC) in the presence of multiple cochannel interferers over a flat Rayleigh fading channel. Closed-form expressions for the signal-to-interference-plus-noise ratio (SINR), outage probability, and average symbol error rate (SER) of M-ary quadrature amplitude modulation (QAM) are obtained for unequal-power interference-to-noise ratios (INRs). We also provide an upper bound for the average SER using the moment generating function (MGF) of the SINR. Moreover, we quantify the array gain loss between pure MRC (an MRC system in the absence of CCI) and an MRC system in the presence of CCI. Finally, we verify our analytical results by numerical simulations.
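
    The interference-limited analysis in the paper is involved; the sketch below shows only the basic MGF route to an average M-QAM SER, for the simpler no-interference case of L-branch MRC on i.i.d. Rayleigh branches (SciPy assumed; parameters illustrative):

        import numpy as np
        from scipy.integrate import quad

        def avg_ser_mqam_mrc(M, L, snr_db):
            # Average SER of square M-QAM, L-branch MRC, i.i.d. Rayleigh fading
            gb = 10 ** (snr_db / 10)                  # average per-branch SNR
            g = 1.5 / (M - 1)
            q = 1.0 - 1.0 / np.sqrt(M)
            mgf = lambda th: (1.0 + g * gb / np.sin(th) ** 2) ** (-L)
            i1, _ = quad(mgf, 1e-9, np.pi / 2)        # start near 0 to avoid sin(0)
            i2, _ = quad(mgf, 1e-9, np.pi / 4)
            return (4 * q / np.pi) * i1 - (4 * q ** 2 / np.pi) * i2

        for snr_db in (10, 15, 20):
            print(f"16-QAM, L=2, {snr_db} dB: SER ~ {avg_ser_mqam_mrc(16, 2, snr_db):.2e}")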

  10. Advanced Communications Technology Satellite (ACTS) Fade Compensation Protocol Impact on Very Small-Aperture Terminal Bit Error Rate Performance

    NASA Technical Reports Server (NTRS)

    Cox, Christina B.; Coney, Thom A.

    1999-01-01

    The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index Terms: adaptive coding, attenuation, propagation, rain, satellite communication, satellites.

  11. Choice of Reference Sequence and Assembler for Alignment of Listeria monocytogenes Short-Read Sequence Data Greatly Influences Rates of Error in SNP Analyses

    PubMed Central

    Pightling, Arthur W.; Petronella, Nicholas; Pagotto, Franco

    2014-01-01

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should
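
    The benchmarking bookkeeping described above reduces to set comparisons between called and true variants; a minimal sketch with invented positions:

        truth = {(1045, "A"), (20311, "T"), (55102, "G")}   # (position, alt allele)
        calls = {(1045, "A"), (20311, "C"), (71230, "G")}

        tp, fp, fn = truth & calls, calls - truth, truth - calls
        print(f"TP={len(tp)} FP={len(fp)} FN={len(fn)}")
        print(f"sensitivity={len(tp)/len(truth):.2f} precision={len(tp)/len(calls):.2f}")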

  12. Estimation of chromatic errors from broadband images for high contrast imaging

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2015-09-01

    Usage of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission to follow the James Webb Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped-pupil coronagraph with a conjugate DM and compute the resulting estimation error.

  13. High rate PLD of diamond-like-carbon utilizing high repetition rate visible lasers

    SciTech Connect

    McLean, W. II; Fehring, E.J.; Dragon, E.P.; Warner, B.E.

    1994-09-15

    Pulsed Laser Deposition (PLD) has been shown to be an effective method for producing a wide variety of thin films of high-value-added materials. The high average powers and high pulse repetition frequencies of lasers under development at LLNL make it possible to scale up PLD processes that have been demonstrated in small systems in a number of university, government, and private laboratories to industrially meaningful, economically feasible technologies. A copper vapor laser system at LLNL has been utilized to demonstrate high rate PLD of high quality diamond-like-carbon (DLC) from graphite targets. The deposition rates for PLD obtained with a 100 W laser were ≈2000 μm·cm²/h, or roughly 100 times larger than those reported for chemical vapor deposition (CVD) or physical vapor deposition (PVD) methods. Good adhesion of thin (up to 2 μm) films has been achieved on a small number of substrates that include SiO₂ and single crystal Si. Present results indicate that the best quality DLC films can be produced at optimum rates at power levels and wavelengths compatible with fiber optic delivery systems. If this is also true of other desirable coating systems, this PLD technology could become an extremely attractive industrial tool for high value added coatings.
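
    The quoted rate of ≈2000 μm·cm²/h is volumetric (thickness times area per unit time), so coating time scales with both film thickness and substrate area. A back-of-the-envelope check in Python, assuming the rate is uniform over the substrate (the 2 μm film on a 10 cm² substrate is a hypothetical job):

        rate = 2000.0                        # um*cm^2 per hour, from the abstract
        thickness_um, area_cm2 = 2.0, 10.0   # hypothetical coating job
        hours = thickness_um * area_cm2 / rate
        print(f"coating time: {hours:.3f} h ({hours * 60:.1f} min)")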

  14. High resolution, high rate x-ray spectrometer

    DOEpatents

    Goulding, F.S.; Landis, D.A.

    1983-07-14

    It is an object of the invention to provide a pulse processing system for use with detected signals of a wide dynamic range which is capable of very high counting rates, with high throughput, with excellent energy resolution and a high signal-to-noise ratio. It is a further object to provide a pulse processing system wherein the fast channel resolving time is quite short and substantially independent of the energy of the detected signals. Another object is to provide a pulse processing system having a pile-up rejector circuit which will allow the maximum number of non-interfering pulses to be passed to the output. It is also an object of the invention to provide new methods for generating substantially symmetrically triangular pulses for use in both the main and fast channels of a pulse processing system.

  15. A Comparative Study of Heavy Ion and Proton Induced Bit Error Sensitivity and Complex Burst Error Modes in Commercially Available High Speed SiGe BiCMOS

    NASA Technical Reports Server (NTRS)

    Marshall, Paul; Carts, Marty; Campbell, Art; Reed, Robert; Ladbury, Ray; Seidleck, Christina; Currie, Steve; Riggs, Pam; Fritz, Karl; Randall, Barb

    2004-01-01

    A viewgraph presentation that reviews recent SiGe bit error test data for different commercially available high speed SiGe BiCMOS chips that were subjected to various levels of heavy ion and proton radiation. Results for the tested chips at different operating speeds are displayed in line graphs.

  16. Every photon counts: improving low, mid, and high-spatial frequency errors on astronomical optics and materials with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul

    2016-07-01

    A new magnetorheological finishing (MRF) fluid, called C30, has been developed to finish surfaces to ultra-low roughness (ULR) and has been used as the low removal rate fluid required for fine figure correction of mid-spatial frequency errors. This novel MRF fluid is able to achieve <4 Å RMS on nickel-plated aluminum and even <1.5 Å RMS roughness on silicon, fused silica and other materials. C30 fluid is best utilized within a fine figure correction process to target mid-spatial frequency errors as well as smooth surface roughness 'for free', all in one step. In this paper we will discuss recent advancements in MRF technology, the ability to meet requirements for precision optics in the low, mid and high spatial frequency regimes, and how improved MRF performance addresses the need for achieving the tight specifications required for astronomical optics.

  17. PS foams at high pressure drop rates

    NASA Astrophysics Data System (ADS)

    Tammaro, Daniele; De Maio, Attilio; Carbone, Maria Giovanna Pastore; Di Maio, Ernesto; Iannace, Salvatore

    2014-05-01

    In this paper, we report data on PS foamed at 100 °C after CO2 saturation at 10 MPa in a new physical foaming batch that achieves pressure drop rates up to 120 MPa/s. Results show that the average cell size of the foam fits a linear trend with the pressure drop rate in a double logarithmic plot. Furthermore, foam density initially decreases with the pressure drop rate, attaining a constant value at pressure drop rates higher than 40 MPa/s. Interestingly, we also observed that the shape of the pressure release curve has a large effect on the final foam morphology, as seen in tests in which the maximum pressure release rate was kept constant but the shape of the curve changed. These results allow for fine tuning of the foam density and morphology for specific applications.
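
    A linear trend in a double-logarithmic plot corresponds to a power law, cell size ∝ (dp/dt)^k, so the exponent can be recovered by a straight-line fit in log space. A Python sketch (the data points below are invented for illustration, not the measured values):

        import numpy as np

        # Hypothetical data: pressure drop rate (MPa/s) vs. mean cell size (um).
        dpdt = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 120.0])
        cell = np.array([62.0, 41.0, 28.0, 19.0, 13.0, 10.5])

        # Straight-line fit in log-log space: log(cell) = k*log(dpdt) + c.
        k, c = np.polyfit(np.log(dpdt), np.log(cell), 1)
        print(f"power-law exponent k = {k:.2f}, prefactor = {np.exp(c):.0f} um")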

  18. High voltage high repetition rate pulse using Marx topology

    NASA Astrophysics Data System (ADS)

    Hakki, A.; Kashapov, N.

    2015-06-01

    The paper describes a Marx topology using MOSFET transistors. A Marx circuit with 10 stages was built to obtain pulses of about 5.5 kV amplitude and about 30 μs width at a high repetition rate (PPS > 100); Vdc = 535 V DC is the input voltage supplying the Marx circuit. Two ferrite ring core transformers were used to control the MOSFET transistors of the Marx circuit (the first transformer controls the charging MOSFET transistors, the second controls the discharging MOSFET transistors).
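
    An ideal N-stage Marx generator charges its stage capacitors in parallel and discharges them in series, so the no-load output is roughly N times the charging voltage; the reported figures are roughly consistent with this, as a one-line check shows (ignoring switch drops and loading):

        n_stages, v_dc = 10, 535.0    # values from the abstract
        v_ideal = n_stages * v_dc     # ideal erected voltage: N * Vdc
        print(f"ideal output: {v_ideal / 1e3:.2f} kV (reported: about 5.5 kV)")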

  19. High-Rate Compression of Polypropylene

    NASA Astrophysics Data System (ADS)

    Okereke, Michael; Buckley, C. Paul

    2008-08-01

    Three grades of polypropylene were tested in compression at room temperature, across an unusually wide range of strain rate: 10⁻⁴ to 10⁴ s⁻¹. The quasi-static testing was done in a Hounsfield machine fitted with a digital image acquisition kit, while tests at the highest strain rates were carried out using a compression split Hopkinson pressure bar. The strain rate dependence of compressive yield stress was compared with the Eyring prediction, and found to be a nonlinear function of log₁₀(strain rate). The nonlinearity is attributed to the presence of two relaxation processes in polypropylene, with differing activation volumes: the α- and β-processes. According to the Bauwens two-process model this would lead naturally to curved Eyring plots, where the apparent activation volume decreases with increasing strain rate. Another prominent feature in the experimental results was the increase in magnitude of post-yield strain-softening with increase in strain rate. This indicates that the dominant structural relaxation time exceeds the experimental time-scale at the highest strain rates, but lies below it for the quasi-static tests.
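
    For reference, the two-process (Ree-Eyring) picture invoked here writes the yield stress as a sum of one Eyring term per relaxation process, each with its own activation volume V_i and activation enthalpy ΔH_i. A common statement of the model (notation assumed here, not taken from the paper) is

        \frac{\sigma_y}{T} = \sum_{i \in \{\alpha,\beta\}} \frac{k_B}{V_i}
            \sinh^{-1}\!\left( \frac{\dot{\varepsilon}}{\dot{\varepsilon}_{0,i}}
            \exp\!\left( \frac{\Delta H_i}{k_B T} \right) \right)

    At high stress each sinh⁻¹ term is nearly linear in the logarithm of strain rate, so wherever both processes contribute the summed plot curves, which is the behavior reported above.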

  20. The Effects of Positional Stress and Receiver Apprehension on Leniency Errors in Speech Evaluation: A Test of the Rating Error Paradigm.

    ERIC Educational Resources Information Center

    Bock, Douglas G.; Bock, E. Hope

    1984-01-01

    Tested variables that affect how students rate speeches delivered by their classmates. Found, for example, that students who rate the speeches before giving their own are more positively lenient than they are when rating those speeches given after they deliver their own speeches. (PD)

  1. High Rate X-ray Fluorescence Detector

    SciTech Connect

    Grudberg, Peter Matthew

    2013-04-30

    The purpose of this project was to develop a compact, modular multi-channel x-ray detector with integrated electronics. This detector, based upon emerging silicon drift detector (SDD) technology, will be capable of high data rate operation superior to the current state of the art offered by high purity germanium (HPGe) detectors, without the need for liquid nitrogen. In addition, by integrating the processing electronics inside the detector housing, the detector performance will be much less affected by the typically noisy electrical environment of a synchrotron hutch, and will also be much more compact than current systems, which can include a detector involving a large LN2 dewar and multiple racks of electronics. The combined detector/processor system is designed to match or exceed the performance and features of currently available detector systems, at a lower cost and with more ease of use due to the small size of the detector. In addition, the detector system is designed to be modular: a small system can start with one detector module, and more can be added as needs grow and budget allows. The modular nature also serves to simplify repair. In large part, we were successful in achieving our goals. We did develop a very high performance, large area multi-channel SDD detector, packaged with all associated electronics, which is easy to use and requires minimal external support (a simple power supply module and a closed-loop water cooling system). However, we did fall short of some of our stated goals. We had intended to base the detector on modular, large-area detectors from Ketek GmbH in Munich, Germany; however, these were not available in a suitable time frame for this project, so we worked instead with pnDetector GmbH (also located in Munich). They were able to provide a front-end detector module with six 100 mm² SDD detectors (two monolithic arrays of three elements each) along with

  2. Bipolar high-repetition-rate high-voltage nanosecond pulser.

    PubMed

    Tian, Fuqiang; Wang, Yi; Shi, Hongsheng; Lei, Qingquan

    2008-06-01

    The pulser designed is mainly used for producing corona plasma in waste water treatment system. Also its application in study of dielectric electrical properties will be discussed. The pulser consists of a variable dc power source for high-voltage supply, two graded capacitors for energy storage, and the rotating spark gap switch. The key part is the multielectrode rotating spark gap switch (MER-SGS), which can ensure wider range modulation of pulse repetition rate, longer pulse width, shorter pulse rise time, remarkable electrical field distortion, and greatly favors recovery of the gap insulation strength, insulation design, the life of the switch, etc. The voltage of the output pulses switched by the MER-SGS is in the order of 3-50 kV with pulse rise time of less than 10 ns and pulse repetition rate of 1-3 kHz. An energy of 1.25-125 J per pulse and an average power of up to 10-50 kW are attainable. The highest pulse repetition rate is determined by the driver motor revolution and the electrode number of MER-SGS. Even higher voltage and energy can be switched by adjusting the gas pressure or employing N(2) as the insulation gas or enlarging the size of MER-SGS to guarantee enough insulation level.

  3. Bipolar high-repetition-rate high-voltage nanosecond pulser

    SciTech Connect

    Tian Fuqiang; Wang Yi; Shi Hongsheng; Lei Qingquan

    2008-06-15

    The pulser designed is mainly used for producing corona plasma in waste water treatment system. Also its application in study of dielectric electrical properties will be discussed. The pulser consists of a variable dc power source for high-voltage supply, two graded capacitors for energy storage, and the rotating spark gap switch. The key part is the multielectrode rotating spark gap switch (MER-SGS), which can ensure wider range modulation of pulse repetition rate, longer pulse width, shorter pulse rise time, remarkable electrical field distortion, and greatly favors recovery of the gap insulation strength, insulation design, the life of the switch, etc. The voltage of the output pulses switched by the MER-SGS is in the order of 3-50 kV with pulse rise time of less than 10 ns and pulse repetition rate of 1-3 kHz. An energy of 1.25-125 J per pulse and an average power of up to 10-50 kW are attainable. The highest pulse repetition rate is determined by the driver motor revolution and the electrode number of MER-SGS. Even higher voltage and energy can be switched by adjusting the gas pressure or employing N₂ as the insulation gas or enlarging the size of MER-SGS to guarantee enough insulation level.

  4. High data rate systems for the future

    NASA Technical Reports Server (NTRS)

    Chitwood, John

    1991-01-01

    Information systems in the next century will transfer data at rates that are much greater than those in use today. Satellite based communication systems will play an important role in networking users. Typical data rates; use of microwave, millimeter wave, or optical systems; millimeter wave communication technology; modulators/exciters; solid state power amplifiers; beam waveguide transmission systems; low noise receiver technology; optical communication technology; and the potential commercial applications of these technologies are discussed.

  5. Consideration of Wear Rates at High Velocities

    DTIC Science & Technology

    2010-03-01

    evaluations were performed for different velocity ranges depending on the interest of the individual researcher. As a result, an inconsistency ... together will produce heat. The slipper-rail interaction being studied is no different. The amount of heat generated is a function of the frictional ... the one which provides the highest wear rate. To correlate specimens from different sources and of varying sizes and shapes, the wear rate, normal ...

  6. Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1998-01-01

    In this paper, we reinvestigate the solution to the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences are never repeated, but lie in a chaotic region; nevertheless, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high order correlation between past and present data to predict future data under limited weight quantization constraints. This helps predict future information that provides better timely estimation for an intelligent control system. In our earlier work, it was shown that CEP can sufficiently learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and the color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as low as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more weight-quantization bits are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
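
    The round-off versus truncation contrast the authors draw is easy to reproduce: uniform b-bit quantization either rounds to the nearest level (error roughly symmetric about zero) or truncates toward zero (error biased, shrinking weight magnitudes). A minimal numpy illustration (4-bit, with a hypothetical weight range of ±1):

        import numpy as np

        def quantize(w, bits, mode, w_max=1.0):
            """Uniform quantization of weights in [-w_max, w_max]."""
            step = 2 * w_max / (2 ** bits - 1)
            levels = np.round(w / step) if mode == "round" else np.trunc(w / step)
            return np.clip(levels * step, -w_max, w_max)

        w = np.random.default_rng(0).uniform(-1, 1, 100_000)
        for mode in ("round", "trunc"):
            err = quantize(w, 4, mode) - w
            # Negative magnitude bias means weights are pulled toward zero.
            print(f"{mode}: RMS error {err.std():.4f}, "
                  f"magnitude bias {np.mean(err * np.sign(w)):+.4f}")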

  7. Separable and Error-Free Reversible Data Hiding in Encrypted Image with High Payload

    PubMed Central

    Yin, Zhaoxia; Luo, Bin; Hong, Wien

    2014-01-01

    This paper proposes a separable reversible data-hiding scheme in encrypted image which offers high payload and error-free data extraction. The cover image is partitioned into nonoverlapping blocks and multigranularity encryption is applied to obtain the encrypted image. The data hider preprocesses the encrypted image and randomly selects two basic pixels in each block to estimate the block smoothness and indicate peak points. Additional data are embedded into blocks in the sorted order of block smoothness by using local histogram shifting under the guidance of the peak points. At the receiver side, image decryption and data extraction are separable and can be free to choose. Compared to previous approaches, the proposed method is simpler in calculation while offering better performance: larger payload, better embedding quality, and error-free data extraction, as well as image recovery. PMID:24977214
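
    Histogram shifting, the embedding primitive named here, hides bits around a peak gray level: values between the peak and an empty (zero) bin are shifted by one to open a slot, and each peak-valued pixel then carries one bit by staying put ('0') or moving into the slot ('1'). A generic numpy illustration of that primitive (on a flat array, rather than the paper's block-based, encrypted-domain scheme):

        import numpy as np

        def hs_embed(pixels, bits):
            """Generic histogram-shifting embed; assumes an empty bin above the peak."""
            hist = np.bincount(pixels, minlength=256)
            peak = int(hist.argmax())                        # most frequent level
            zero = peak + 1 + int(hist[peak + 1:].argmin())  # emptiest bin above it
            out = pixels.copy()
            out[(out > peak) & (out < zero)] += 1            # shift to open a slot
            carriers = np.flatnonzero(out == peak)[:len(bits)]
            out[carriers] += np.asarray(bits, dtype=out.dtype)  # peak+1 encodes '1'
            return out, peak

        img = np.random.default_rng(1).integers(100, 160, 10_000).astype(np.int32)
        marked, peak = hs_embed(img, [1, 0, 1, 1])
        print(f"embedded 4 bits around peak level {peak}")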

  8. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high resolution, high frame rate InGaAs based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 × 512 pixel frames per second. The FPA utilizes a low lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range (0.9-1.7 μm) and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to stream the data directly to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  9. Quantifying the Representation Error of Land Biosphere Models using High Resolution Footprint Analyses and UAS Observations

    NASA Astrophysics Data System (ADS)

    Hanson, C. V.; Schmidt, A.; Law, B. E.; Moore, W.

    2015-12-01

    The validity of land biosphere model outputs relies on accurate representations of ecosystem processes within the model. Typically, a vegetation or land cover type for a given area (several km² or larger resolution) is assumed to have uniform properties. The limited spatial and temporal resolution of models prevents resolving finer scale heterogeneous flux patterns that arise from variations in vegetation. This representation error must be quantified carefully if models are informed through data assimilation, in order to assign appropriate weighting of model outputs and measurement data. The representation error is usually only estimated, or ignored entirely, due to the difficulty in determining reasonable values. UAS based gas sensors allow measurements of atmospheric CO2 concentrations with unprecedented spatial resolution, providing a means of determining the representation error for CO2 fluxes empirically. In this study we use three dimensional CO2 concentration data in combination with high resolution footprint analyses in order to quantify the representation error for modelled CO2 fluxes for typical resolutions of regional land biosphere models. CO2 concentration data were collected using an Atlatl X6A hexa-copter carrying a highly calibrated closed path infra-red gas analyzer based sampling system with an uncertainty of ≤ ±0.2 ppm CO2. Gas concentration data were mapped in three dimensions using the UAS on-board position data and compared to footprints generated using WRF 3.61.

  10. An integrated CMOS high data rate transceiver for video applications

    NASA Astrophysics Data System (ADS)

    Yaping, Liang; Dazhi, Che; Cheng, Liang; Lingling, Sun

    2012-07-01

    This paper presents a 5 GHz CMOS radio frequency (RF) transceiver built with 0.18 μm RF-CMOS technology by using a proprietary protocol, which combines the new IEEE 802.11n features such as multiple-in multiple-out (MIMO) technology with other wireless technologies to provide high data rate robust real-time high definition television (HDTV) distribution within a home environment. The RF frequencies cover from 4.9 to 5.9 GHz: the industrial, scientific and medical (ISM) band. Each RF channel bandwidth is 20 MHz. The transceiver utilizes a direct up transmitter and low-IF receiver architecture. A dual-quadrature direct up conversion mixer is used that achieves better than 35 dB image rejection without any on chip calibration. The measurement shows a 6 dB typical receiver noise figure and a better than 33 dB transmitter error vector magnitude (EVM) at -3 dBm output power.

  11. High strain rate behavior of alloy 800H at high temperatures

    NASA Astrophysics Data System (ADS)

    Shafiei, E.

    2016-05-01

    In this paper, a new model using a linear estimate of strain hardening rate vs. stress has been developed to predict the dynamic behavior of alloy 800H at high temperatures. In order to prove the accuracy and competency of the presented model, the Johnson-Cook model for flow stress was used for comparison. Evaluation of the mean error of flow stress at deformation temperatures from 850 °C to 1050 °C and at strain rates of 5 s⁻¹ to 20 s⁻¹ indicates that the predicted results are in good agreement with experimentally measured ones. This analysis has been done for the stress-strain curves under hot working conditions for alloy 800H. The model is, however, not dependent on the type of material and can be extended to any similar conditions.
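
    The Johnson-Cook benchmark referred to above expresses flow stress as a product of strain-hardening, strain-rate, and thermal-softening factors; in its standard form (A, B, n, C and m are fitted material constants, ε̇₀ a reference strain rate, and T* the homologous temperature):

        \sigma = \left( A + B\,\varepsilon^{\,n} \right)
                 \left( 1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0} \right)
                 \left( 1 - T^{*m} \right),
        \qquad
        T^{*} = \frac{T - T_{\mathrm{ref}}}{T_{\mathrm{melt}} - T_{\mathrm{ref}}}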

  12. Sub-40nm high-volume manufacturing overlay uncorrectable error evaluation

    NASA Astrophysics Data System (ADS)

    Baluswamy, Pary; Khurana, Ranjan; Orf, Bryan; Keller, Wolfgang

    2013-04-01

    Circuit layout and design rules have continued to shrink to the point where a few nanometers of pattern misalignment can negatively impact process capability and device yields. As wafer processes and film stacks have become more complex, overlay and alignment performance in high-volume manufacturing (HVM) have become increasingly sensitive to process and tool variations experienced by incoming wafers. Current HVM relies on overlay control via advanced process control (APC) feedback, single-exposure tool grid stability, scanner-to-scanner matching, correction models, sampling strategies, overlay mark design, and metrology. However, even with improvements to those methods, a large fraction of the uncorrectable errors (i.e., residuals) still remains. While lower residuals typically lead to increased yield performance, they are difficult to achieve in HVM due to the large combinations of wafer history in terms of prior tools, recipes, and ongoing process conversions. Hence, it is critical to understand the effect of residual errors on measurement sampling and model parameters to enable process control. In this study, we investigate the following: residual errors of sub-40nm processes as a function of correction models, sensitivity of the model parameters to residuals, and the impact of data quality.

  13. The Combustion of HMX. [burning rate at high pressures

    NASA Technical Reports Server (NTRS)

    Boggs, T. L.; Price, C. F.; Atwood, A. I.; Zurn, D. E.; Eisel, J. L.

    1980-01-01

    The burn rate of HMX was measured at high pressures (p > 1000 psi). The self-deflagration rate of HMX was determined from 1 atmosphere to 50,000 psi. The burning rate shows no significant slope breaks.

  14. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    ERIC Educational Resources Information Center

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided from the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206 errors, 96 errors and 35 errors per 1000 words. The following section…

  15. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Shu, L.; Kasami, T.

    1985-01-01

    A cascade coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are evaluated. They seem to be quite suitable for satellite down-link error control.

  16. Packet error rate analysis of digital pulse interval modulation in intersatellite optical communication systems with diversified wavefront deformation.

    PubMed

    Zhu, Jin; Wang, Dayan; Xie, Wanqing

    2015-02-20

    Diversified wavefront deformation is an inevitable phenomenon in intersatellite optical communication systems, which will decrease system performance. In this paper, we investigate the description of wavefront deformation and its influence on the packet error rate (PER) of digital pulse interval modulation (DPIM). With the wavelet method, the diversified wavefront deformation can be described by wavelet parameters: coefficient, dilation, and shift factors, where the coefficient factor represents the depth, dilation factor represents the area, and shift factor is for location. Based on this, the relationship between PER and wavelet parameters is analyzed from a theoretical viewpoint. Numerical results illustrate the validity of theoretical analysis: PER increases with the depth and area and decreases if location gets farther from the center of the optical antenna. In addition to describing diversified deformation, the advantage of the wavelet method over Zernike polynomials in computational complexity is shown via numerical example. This work provides a feasible method for the description along with influence analysis of diversified wavefront deformation from a practical viewpoint and will be helpful for designing optical systems.
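
    The three wavelet parameters map directly onto a deformation map: the coefficient sets the depth, the dilation sets the footprint area, and the shift places the defect on the aperture. A toy numpy construction using a Mexican-hat (Ricker) profile as the mother wavelet (the wavelet choice and all numbers are illustrative, not taken from the paper):

        import numpy as np

        def deformation(xx, yy, coeff, dilation, shift_x, shift_y):
            """One wavelet term: coeff = depth, dilation = area, shifts = location."""
            r2 = ((xx - shift_x) ** 2 + (yy - shift_y) ** 2) / dilation ** 2
            return coeff * (1 - r2) * np.exp(-r2 / 2)      # Ricker profile

        x = np.linspace(-1, 1, 257)                        # normalized pupil coords
        xx, yy = np.meshgrid(x, x)
        w = deformation(xx, yy, coeff=50e-9, dilation=0.3, shift_x=0.4, shift_y=0.0)
        print(f"peak-to-valley deformation: {np.ptp(w) * 1e9:.1f} nm")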

  17. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  18. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media.

    PubMed

    Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C

    2016-06-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.

  19. A High-Precision Instrument for Mapping of Rotational Errors in Rotary Stages

    DOE PAGES

    Xu, W.; Lauer, K.; Chu, Y.; ...

    2014-11-02

    A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g. circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle of confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.

  20. Effects of diffraction and static wavefront errors on high-contrast imaging from the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Troy, Mitchell; Chanan, Gary; Crossfield, Ian; Dumont, Philip; Green, Joseph J.; Macintosh, Bruce

    2006-01-01

    High-contrast imaging, particularly direct detection of extrasolar planets, is a major science driver for the next generation of extremely large telescopes such as the segmented Thirty Meter Telescope. This goal requires more than merely diffraction-limited imaging, but also attention to residual scattered light from wavefront errors and diffraction effects at the contrast level of 10⁻⁸-10⁻⁹. Using a wave-optics simulation of adaptive optics and a diffraction suppression system we investigate diffraction from the segmentation geometry, intersegment gaps, and obscuration by the secondary mirror and its supports. We find that the large obscurations pose a greater challenge than the much smaller segment gaps. In addition, the impact of wavefront errors from the primary mirror, including segment alignment and figure errors, is analyzed. Segment-to-segment reflectivity variations and residual segment figure error will be the dominant error contributors from the primary mirror. Strategies to mitigate these errors are discussed.

  1. SAMQA: error classification and validation of high-throughput sequenced read data

    PubMed Central

    2011-01-01

    Background The advances in high-throughput sequencing technologies and the growth in data sizes have highlighted the need for scalable tools to perform quality assurance testing. These tests are necessary to ensure that data is of a minimum necessary standard for use in downstream analysis. In this paper we present the SAMQA tool to rapidly and robustly identify errors in population-scale sequence data. Results SAMQA has been used on samples from three separate sets of cancer genome data from The Cancer Genome Atlas (TCGA) project. Using technical standards provided by the SAM specification and biological standards defined by researchers, we have classified errors in these sequence data sets relative to individual reads within a sample. Due to an observed linearithmic speedup through the use of a high-performance computing (HPC) framework for the majority of tasks, poor quality data was identified prior to secondary analysis in significantly less time on the HPC framework than the same data run using alternative parallelization strategies on a single server. Conclusions The SAMQA toolset validates a minimum set of data quality standards across whole-genome and exome sequences. It is tuned to run on a high-performance computational framework, enabling QA across hundreds of gigabytes of samples regardless of coverage or sample type. PMID:21851633

  2. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit by bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high speed commercial Very Large Scale Integration (VLSI) for block codes indicates that LRBC using block codes is a desirable method for high data rate implementations.

  3. High Count Rate Electron Probe Microanalysis.

    PubMed

    Geller, Joseph D; Herrington, Charles

    2002-01-01

    Reducing the measurement uncertainty of quantitative analyses made using electron probe microanalyzers (EPMA) requires a careful study of the individual uncertainties from each definable step of the measurement. Those steps include measuring the incident electron beam current and voltage, knowing the angle between the electron beam and the sample (takeoff angle), collecting the emitted x rays from the sample, comparing the emitted x-ray flux to known standards (to determine the k-ratio) and transformation of the k-ratio to concentration using algorithms which include, as a minimum, the atomic number, absorption, and fluorescence corrections. This paper discusses the collection and counting of the emitted x rays, which are diffracted into the gas flow or sealed proportional x-ray detectors. The relative uncertainty in the number of collected x rays decreases as the number of counts increases; the uncertainty of the collected signal is fully described by Poisson statistics. Increasing the number of x rays collected involves either counting longer or at a higher counting rate. Counting longer means the analysis time increases and may become excessive to get to the desired uncertainty. Instrument drift also becomes an issue. Counting at higher rates has its limitations, which are a function of the detector physics and the detecting electronics. Since the beginning of EPMA analysis, analog electronics have been used to amplify and discriminate the x-ray induced ionizations within the proportional counter. This paper will discuss the use of digital electronics for this purpose. These electronics are similar to those used for energy dispersive analysis of x rays with either Si(Li) or Ge(Li) detectors except that the shaping time constants are much smaller.
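
    Because the collected counts are Poisson-distributed, the relative uncertainty of a measurement of N counts is 1/sqrt(N), which directly fixes the counting time needed at a given rate for a target precision. A quick Python illustration (the target precision and count rates are hypothetical):

        target = 0.001                     # 0.1% relative uncertainty (hypothetical)
        counts_needed = (1 / target) ** 2  # Poisson: sigma/N = 1/sqrt(N)
        for rate_cps in (10_000, 100_000, 1_000_000):
            t = counts_needed / rate_cps
            print(f"{rate_cps:>9} counts/s -> {t:8.1f} s for {target:.1%} uncertainty")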

  4. The study about forming high-precision optical lens minimalized sinuous error structures for designed surface

    NASA Astrophysics Data System (ADS)

    Katahira, Yu; Fukuta, Masahiko; Katsuki, Masahide; Momochi, Takeshi; Yamamoto, Yoshihiro

    2016-09-01

    Recently, higher quality has been required of the aspherical lenses mounted in camera units. Optical lenses in high-volume production are generally made by molding, using cemented carbide or Ni-P coated steel molds depending on whether the lens material is glass or plastic. Additionally, high-quality cut or ground mold surfaces can now be obtained thanks to developments in mold production technologies; as a result, molds with form errors below 100 nm PV and surface roughness below 1 nm Ra are achievable. Still higher quality is now needed, covering not only form error (PV) and surface roughness (Ra) but also other surface characteristics. For instance, mid-spatial-frequency undulations on the lens surface can distort the image. In this study, we focused on several types of sinuous structures, which can be classified as form errors with respect to the designed surface and which deteriorate optical system performance, and we developed mold production processes that minimize such undulations on the surface. In this report, we describe an analysis procedure using the power spectral density (PSD) to evaluate micro-undulations on the machined surface quantitatively. In addition, we show that a grinding process with circumferential velocity control is effective for fabricating large-aperture lenses and can minimize the undulations that appear in the outer area of the machined surface, and we describe the optical glass lens molding process using a high-precision press machine.
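
    The PSD analysis mentioned above decomposes a measured profile into spatial-frequency content, so mid-spatial-frequency undulations show up as a band of elevated power. A one-dimensional Python sketch on a synthetic scan line (the 0.5 cycles/mm undulation and noise level are invented for illustration):

        import numpy as np
        from scipy.signal import welch

        dx = 0.01                             # sample spacing in mm (10 um)
        x = np.arange(0, 50, dx)              # 50 mm scan line
        rng = np.random.default_rng(0)
        # 20 nm undulation at 0.5 cycles/mm plus 2 nm measurement noise.
        profile = (20e-9 * np.sin(2 * np.pi * 0.5 * x)
                   + 2e-9 * rng.standard_normal(x.size))

        freq, psd = welch(profile, fs=1 / dx, nperseg=2048)  # freq in cycles/mm
        peak = freq[np.argmax(psd[1:]) + 1]                  # skip the DC bin
        print(f"dominant spatial frequency: {peak:.2f} cycles/mm")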

  5. High dose rate brachytherapy source measurement intercomparison.

    PubMed

    Poder, Joel; Smith, Ryan L; Shelton, Nikki; Whitaker, May; Butler, Duncan; Haworth, Annette

    2017-03-24

    This work presents a comparison of air kerma rate (AKR) measurements performed by multiple radiotherapy centres for a single HDR (192)Ir source. Two separate groups (consisting of 15 centres) performed AKR measurements at one of two host centres in Australia. Each group travelled to one of the host centres and measured the AKR of a single (192)Ir source using their own equipment and local protocols. Results were compared to the (192)Ir source calibration certificate provided by the manufacturer by means of a ratio of measured to certified AKR. The comparisons showed remarkably consistent results with the maximum deviation in measurement from the decay-corrected source certificate value being 1.1%. The maximum percentage difference between any two measurements was less than 2%. The comparisons demonstrated the consistency of well-chambers used for (192)Ir AKR measurements in Australia, despite the lack of a local calibration service, and served as a valuable focal point for the exchange of ideas and dosimetry methods.

  6. Automated measurement of the bit-error rate as a function of signal-to-noise ratio for microwave communications systems

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Daugherty, Elaine S.; Kramarchuk, Ihor

    1987-01-01

    The performance of microwave systems and components for digital data transmission can be characterized by a plot of the bit-error rate as a function of the signal to noise ratio (or E_b/N_0). Methods for the efficient automated measurement of bit-error rates and signal-to-noise ratios, developed at NASA Lewis Research Center, are described. Noise measurement considerations and time requirements for measurement accuracy, as well as computer control and data processing methods, are discussed.
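
    For ideal coherent BPSK on the additive white Gaussian noise channel, the reference curve such measurements are compared against has the closed form P_b = 0.5 erfc(sqrt(E_b/N_0)); the abstract does not give this, but it is the standard benchmark. A short Python sketch:

        import math

        def bpsk_ber(ebno_db):
            """Theoretical BPSK bit-error rate on an AWGN channel."""
            return 0.5 * math.erfc(math.sqrt(10 ** (ebno_db / 10)))

        for db in (4, 6, 8, 9.6):
            print(f"Eb/N0 = {db:4.1f} dB -> BER = {bpsk_ber(db):.2e}")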

  7. High-deposition-rate ceramics synthesis

    SciTech Connect

    Allendorf, M.D.; Osterheld, T.H.; Outka, D.A.

    1995-05-01

    Parallel experimental and computational investigations are conducted in this project to develop validated numerical models of ceramic synthesis processes. Experiments are conducted in the High-Temperature Materials Synthesis Laboratory in Sandia's Combustion Research Facility. A high-temperature flow reactor that can accommodate small preforms (1-3 cm diameter) generates conditions under which deposition can be observed, with flexibility to vary both deposition temperature (up to 1500 K) and pressure (as low as 10 torr). Both mass spectrometric and laser diagnostic probes are available to provide measurements of gas-phase compositions. Experiments using surface analytical techniques are also applied to characterize important processes occurring on the deposit surface. Computational tools developed through extensive research in the combustion field are employed to simulate the chemically reacting flows present in typical industrial reactors. These include the CHEMKIN and Surface-CHEMKIN suites of codes, which permit facile development of complex reaction mechanisms and vastly simplify the implementation of multi-component transport and thermodynamics. Quantum chemistry codes are also used to estimate thermodynamic and kinetic data for species and reactions for which this information is unavailable.

  8. High Strain Rate Tensile and Compressive Effects in Glassy Polymers

    DTIC Science & Technology

    2013-02-08

    ... polymers under high strain rates has been determined in compression. Some research programs have studied the combined effects of temperature and strain rate ... glassy polymers to high strain rate loading in compression. More recently, research programs that study the combined effects of temperature and strain ...

  9. High rate fabrication of compression molded components

    DOEpatents

    Matsen, Marc R.; Negley, Mark A.; Dykstra, William C.; Smith, Glen L.; Miller, Robert J.

    2016-04-19

    A method for fabricating a thermoplastic composite component comprises inductively heating a thermoplastic pre-form with a first induction coil by inducing current to flow in susceptor wires disposed throughout the pre-form, inductively heating smart susceptors in a molding tool to a leveling temperature with a second induction coil by applying a high-strength magnetic field having a magnetic flux that passes through surfaces of the smart susceptors, shaping the magnetic flux that passes through surfaces of the smart susceptors to flow substantially parallel to a molding surface of the smart susceptors, placing the heated pre-form between the heated smart susceptors; and applying molding pressure to the pre-form to form the composite component.

  10. Achieving High Rates and High Uniformity in Copper Chemical Mechanical Polishing

    NASA Astrophysics Data System (ADS)

    Nolan, Lucy Marjorie

    The chemical mechanical polishing of copper (Cu-CMP) is a complex and poorly understood process. Despite this, it is widely used throughout the semiconductor and microelectronics industries, and makes up a significant portion of wafer processing costs. In these contexts, desirable polishing outcomes such as a high rate of removal from the copper surface, and high removal rate uniformity, are achieved largely by trial-and-error. In this study, the same outcomes are pursued through a systematic investigation of polishing lubrication characteristics and abrasive and oxidiser concentrations in the polishing slurry. A strong link between lubrication characteristics, quantified by the dimensionless Sommerfeld number, and the uniformity of polishing is demonstrated. A mechanism for the observed relationship is proposed, based on an adaptation of hydrodynamic lubrication theory. The overall rate of removal is maximized by polishing in a slurry containing oxidiser and abrasives in a synergistic ratio. Polishing away from this ratio has additional effects on the overall quality of the surface produced. Transport of slurry across the polishing pad is investigated by using tracers; the results demonstrate that slurry usage can be reduced in many circumstances with no impact on overall polishing outcomes, reducing overall processing costs. These findings are combined to design a polishing process, with good results.
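
    For context, the Sommerfeld number is the dimensionless group from hydrodynamic lubrication theory that combines viscosity, relative speed, load, and a geometric length; CMP studies commonly use a simplified form built from slurry viscosity η, pad-wafer relative velocity U, applied pressure p, and a characteristic gap height δ (the exact length scale varies between papers, so this form is indicative rather than the thesis's own definition):

        \mathrm{So} = \frac{\eta\, U}{p\, \delta}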

  11. Method and apparatus for reducing quantization error in laser gyro test data through high speed filtering

    SciTech Connect

    Mark, J.G.; Brown, A.K.; Matthews, A.

    1987-01-06

    A method is described for processing ring laser gyroscope test data comprising the steps of: (a) accumulating the data over a preselected sample period; and (b) filtering the data at a predetermined frequency so that non-time dependent errors are reduced by a substantially greater amount than are time dependent errors; then (c) analyzing the random walk error of the filtered data.
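
    The rationale behind step (b), that filtering attenuates non-time-dependent (white, e.g. quantization) errors much more than time-dependent ones such as random walk, can be checked numerically: a moving-average filter shrinks white noise by roughly the square root of its window length, while a random walk, whose power sits at low frequency, passes nearly untouched. A Python sketch with synthetic data (all noise levels are arbitrary):

        import numpy as np

        rng = np.random.default_rng(0)
        n, window = 100_000, 100
        white = rng.standard_normal(n)                    # quantization-like noise
        walk = np.cumsum(0.01 * rng.standard_normal(n))   # random-walk error

        kernel = np.ones(window) / window                 # simple low-pass filter
        for name, sig in (("white noise", white), ("random walk", walk)):
            out = np.convolve(sig, kernel, mode="valid")
            print(f"{name:12s}: std {sig.std():7.3f} -> {out.std():7.3f}")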

  12. High data rate optical transceiver terminal

    NASA Technical Reports Server (NTRS)

    Clarke, E. S.

    1973-01-01

    The objectives of this study were: (1) to design a 400 Mbps optical transceiver terminal to operate from a high-altitude balloon-borne platform in order to permit the quantitative evaluation of a space-qualifiable optical communications system design, (2) to design an atmospheric propagation experiment to operate in conjunction with the terminal to measure the degrading effects of the atmosphere on the links, and (3) to design typical optical communications experiments for space-borne laboratories in the 1980-1990 time frame. As a result of the study, a transceiver package has been configured for demonstration flights during late 1974. The transceiver contains a 400 Mbps transmitter, a 400 Mbps receiver, and acquisition and tracking receivers. The transmitter is a Nd:YAG, 200 MHz, mode-locked, CW, diode-pumped laser operating at 1.06 μm requiring 50 mW for 6 dB margin. It will be designed to implement Pulse Quaternary Modulation (PQM). The 400 Mbps receiver utilizes a Dynamic Crossed-Field Photomultiplier (DCFP) detector. The acquisition receiver is a Quadrant Photomultiplier Tube (QPMT) and receives a 400 Mbps signal chopped at 0.1 MHz.

  13. Assessment of high-rate GPS using a single-axis shake table

    NASA Astrophysics Data System (ADS)

    Häberling, S.; Rothacher, M.; Zhang, Y.; Clinton, J. F.; Geiger, A.

    2015-07-01

    The developments in GNSS receiver and antenna technologies, especially the increased sampling rate up to 100 sps, open up the possibility to measure high-rate earthquake ground motions with GNSS. In this paper we focus on the GPS errors in the frequency band above 1 Hz. The dominant error sources are mainly the carrier phase jitter caused by thermal noise and the stress error caused by the dynamics, e.g. antenna motions. To generate a large set of different motions, we used a single-axis shake table, where a GNSS antenna and a strong motion seismometer were mounted with a well-known ground truth. The generated motions were recorded with three different GNSS receivers with sampling rates up to 100 sps and different receiver baseband parameters. The baseband parameters directly dictate the carrier phase jitter and the correlations between subsequent epochs. A narrow loop filter bandwidth keeps the carrier phase jitter on a low level, but has an extreme impact on the receiver response for motions above 1 Hz. The amplitudes above 3 Hz are overestimated up to 50 % or reduced by well over half. The corresponding phase errors are between 30 and 90 degrees. Compared to the GNSS receiver response, the strong motion seismometer measurements do not show any amplitude or phase variations for the frequency range from 1 to 20 Hz. Due to the large errors for dynamic GNSS measurements, it is essential to account for the baseband parameters of the GNSS receivers if high-rate GNSS is to become a valuable tool for seismic displacement measurements above 1 Hz. Fortunately, the receiver response can be corrected by an inverse filter if the baseband parameters are known.
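
    The distortion described is the tracking loop's transfer function acting on the antenna motion: a standard second-order loop has closed-loop response H(s) = (2ζω_n s + ω_n²)/(s² + 2ζω_n s + ω_n²), which rolls off above the loop bandwidth, so motion amplitudes at seismic frequencies are systematically mis-scaled, and knowing ζ and ω_n enables the inverse filter the authors mention. A Python sketch for a hypothetical narrow loop (ω_n and ζ are invented, not the tested receivers' parameters):

        import numpy as np
        from scipy import signal

        zeta, wn = 0.707, 2 * np.pi * 2.0   # hypothetical 2 Hz natural frequency
        loop = signal.TransferFunction([2 * zeta * wn, wn ** 2],
                                       [1, 2 * zeta * wn, wn ** 2])
        f_hz = np.array([0.5, 1.0, 3.0, 5.0, 10.0, 20.0])
        _, h = signal.freqresp(loop, w=2 * np.pi * f_hz)
        for f, m in zip(f_hz, np.abs(h)):
            print(f"{f:5.1f} Hz: amplitude response {m:.2f}")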

  14. The Effect of Minimum Wage Rates on High School Completion

    ERIC Educational Resources Information Center

    Warren, John Robert; Hamrock, Caitlin

    2010-01-01

    Does increasing the minimum wage reduce the high school completion rate? Previous research has suffered from (1. narrow time horizons, (2. potentially inadequate measures of states' high school completion rates, and (3. potentially inadequate measures of minimum wage rates. Overcoming each of these limitations, we analyze the impact of changes in…

  15. Dose rate in brachytherapy using after-loading machine: pulsed or high-dose rate?

    PubMed

    Hannoun-Lévi, J-M; Peiffert, D

    2014-10-01

    Since February 2014, it is no longer possible to use low-dose-rate iridium-192 wires due to the end of industrial production of IRF1 and IRF2 sources. The Brachytherapy Group of the French Society of Radiation Oncology (GC-SFRO) has recommended switching from iridium wires to after-loading machines. Two types of after-loading machines are currently available, based on the dose rate used: pulsed-dose rate or high-dose rate. In this article, we propose a comparative analysis between pulsed-dose rate and high-dose rate brachytherapy, based on biological, technological, organizational and financial considerations.

  16. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue.
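
    The correction tested here folds the sequencing error rate into the tip probabilities of the likelihood: with per-base error rate e, and assuming errors are spread uniformly over the three alternative bases, the probability of observing base o given true base t is 1 − e when o = t and e/3 otherwise, and these values seed the pruning algorithm's tip partials in place of 0/1 indicators. A minimal Python sketch (the uniform-error assumption is mine, the simplest common choice):

        BASES = "ACGT"

        def tip_likelihoods(observed, error_rate):
            """Tip partial likelihoods under a uniform sequencing-error model."""
            return [1 - error_rate if b == observed else error_rate / 3
                    for b in BASES]

        # Without correction the tip vector for 'A' is [1, 0, 0, 0]; with a 1%
        # error rate some likelihood is kept on the other three bases.
        print(tip_likelihoods("A", 0.01))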

  17. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    PubMed

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is.
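
    The simulation design is straightforward to reproduce in miniature: draw a confounder C, an exposure X that depends only on C, and an outcome Y that also depends only on C; adjusting for a median-split version of C leaves residual confounding, so the nominal 5% test on X rejects far more often. A compact Python sketch (sample size, effect sizes, and replication count are arbitrary choices):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n, reps, rejections = 500, 1000, 0
        for _ in range(reps):
            c = rng.standard_normal(n)                 # continuous confounder
            x = 0.7 * c + rng.standard_normal(n)       # exposure driven by C
            y = 0.7 * c + rng.standard_normal(n)       # outcome driven by C only
            c_cat = (c > np.median(c)).astype(float)   # median-split categorization

            # OLS of y on [1, x, categorized c]; t-test on the x coefficient.
            X = np.column_stack([np.ones(n), x, c_cat])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            resid = y - X @ beta
            se = np.sqrt(resid @ resid / (n - 3) * np.linalg.inv(X.T @ X)[1, 1])
            p = 2 * stats.t.sf(abs(beta[1] / se), df=n - 3)
            rejections += p < 0.05

        print(f"empirical Type-I error: {rejections / reps:.1%} (nominal 5%)")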

  18. A high-precision instrument for mapping of rotational errors in rotary stages

    SciTech Connect

    Xu, Weihe; Lauer, Kenneth; Chu, Yong; Nazaretski, Evgeny

    2014-10-02

    A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g. circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle of confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.

  19. Heritability and molecular genetic basis of antisaccade eye tracking error rate: A genome-wide association study

    PubMed Central

    Vaidyanathan, Uma; Malone, Stephen M.; Donnelly, Jennifer M.; Hammer, Micah A.; Miller, Michael B.; McGue, Matt; Iacono, William G.

    2014-01-01

    Antisaccade deficits reflect abnormalities in executive function linked to various disorders including schizophrenia, externalizing psychopathology, and neurological conditions. We examined the genetic bases of antisaccade error in a sample of community-based twins and parents (N = 4,469). Biometric models showed that about half of the variance in the antisaccade response was due to genetic factors and half due to nonshared environmental factors. Molecular genetic analyses supported these results, showing that the heritability accounted for by common molecular genetic variants approximated biometric estimates. Genome-wide analyses revealed several SNPs as well as two genes—B3GNT7 and NCL—on Chromosome 2 associated with antisaccade error. SNPs and genes hypothesized to be associated with antisaccade error based on prior work, although generating some suggestive findings for MIR137, GRM8, and CACNG2, could not be confirmed. PMID:25387707

  20. High-Rate Strong-Signal Quantum Cryptography

    NASA Technical Reports Server (NTRS)

    Yuen, Horace P.

    1996-01-01

    Several quantum cryptosystems utilizing different kinds of nonclassical lights, which can accommodate high intensity fields and high data rate, are described. However, they are all sensitive to loss and both the high rate and the strong-signal character rapidly disappear. A squeezed light homodyne detection scheme is proposed which, with present-day technology, leads to more than two orders of magnitude data rate improvement over other current experimental systems for moderate loss.

  1. High Strain Rate Mechanical Properties of Glassy Polymers

    DTIC Science & Technology

    2012-07-25

    ... 1990s, a range of experimental data has been generated describing the response of glassy polymers to high strain rate loading in compression. More ...

  2. Internal Consistency, Test–Retest Reliability and Measurement Error of the Self-Report Version of the Social Skills Rating System in a Sample of Australian Adolescents

    PubMed Central

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test–retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the use of multiple informants (e.g. teacher and parent reports), not just the student, as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID). PMID:24040116

  3. Solar Cell Short Circuit Current Errors and Uncertainties During High Altitude Calibrations

    NASA Technical Reports Server (NTRS)

    Snyder, David D.

    2012-01-01

    High altitude balloon based facilities can make solar cell calibration measurements above 99.5% of the atmosphere for use in adjusting laboratory solar simulators. While close to on-orbit illumination, the small attenuation of the spectrum may result in under-measurement of solar cell parameters. Variations in stratospheric weather may produce flight-to-flight measurement variations. To support the NSCAP effort, this work quantifies some of the effects on solar cell short circuit current (Isc) measurements on triple junction sub-cells. This work looks at several types of high altitude methods: direct high altitude measurements near 120 kft, and lower stratospheric Langley plots from aircraft. It also looks at Langley extrapolation from altitudes above most of the ozone, for potential small balloon payloads. A convolution of the sub-cell spectral response with the standard solar spectrum, modified by several absorption processes, is used to determine the relative change from AM0, Isc/Isc(AM0). Rayleigh scattering, molecular scattering from uniformly mixed gases, ozone, and water vapor are included in this analysis. A range of atmospheric pressures is examined, from 0.05 to 0.25 atm, to cover the range of atmospheric altitudes where solar cell calibrations are performed. Generally these errors and uncertainties are less than 0.2%.
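
    The core computation, the relative Isc change as a ratio of spectral-response-weighted integrals, can be sketched as follows; the response, spectrum and transmission arrays here are toy shapes standing in for measured sub-cell data, the AM0 spectrum, and the absorption models named above:

        import numpy as np

        # Hypothetical grids; in practice SR, E0 and T come from measured
        # sub-cell spectral response, the AM0 spectrum, and atmospheric
        # absorption models (Rayleigh, ozone, water vapor, ...).
        wl = np.linspace(300, 1800, 1501)                       # wavelength, nm
        SR = np.interp(wl, [300, 900, 1800], [0.1, 0.6, 0.2])   # A/W (toy)
        E0 = np.interp(wl, [300, 500, 1800], [0.5, 2.0, 0.3])   # W/m^2/nm (toy)
        T  = np.exp(-0.02 * (wl / 1000.0) ** -4)                # toy attenuation

        isc_ratio = np.trapz(SR * E0 * T, wl) / np.trapz(SR * E0, wl)
        print(f"Isc/Isc(AM0) = {isc_ratio:.4f}")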

  4. Resident Physicians' Clinical Training and Error Rate: The Roles of Autonomy, Consultation, and Familiarity with the Literature

    ERIC Educational Resources Information Center

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-01-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores…

  5. Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu

    2015-07-01

    Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, cannot provide a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs over a short baseline. The ephemeris error and atmospheric delay are the main potential error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD during a period of quiet ionospheric activity, the main error sources degrading positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integral of the satellite velocity error, both of which grow linearly with DLTTD. Cycle slips in the asynchronous double-differenced carrier phase are detected by the TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can also handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations, since the SOM is only a special case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but also supports a mobile phone network data link for transferring the reference receiver data. It avoids the data synchronization process apart from the ambiguity initialization step, which is very convenient for real-time navigation of vehicles. The static and kinematic experiment results show that this method achieves 20 Hz or even higher rate output in

  6. Estimating the designated use attainment decision error rates of US Environmental Protection Agency's proposed numeric total phosphorus criteria for Florida, USA, colored lakes.

    PubMed

    McLaughlin, Douglas B

    2012-01-01

    The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by uncertainty about the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of a lake's geometric mean total P concentration to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
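
    A Monte Carlo sketch of the kind of decision-error calculation described above is given below; the log-linear coefficients, criterion value and simulated lake population are hypothetical, not the fitted Florida values:

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical stressor-response model ln(chl) = b0 + b1*ln(TP) + eps;
        # coefficients and error SD are illustrative only.
        b0, b1, sigma = 0.2, 0.9, 0.4
        tp_crit, chl_thresh = 50.0, 20.0          # µg/L criterion and threshold

        tp = rng.lognormal(np.log(40), 0.6, size=100_000)  # geometric-mean TP
        chl = np.exp(b0 + b1 * np.log(tp) + rng.normal(0, sigma, tp.size))

        impaired = chl > chl_thresh      # true designated-use status
        exceeds = tp > tp_crit           # criterion-based decision

        type_i = np.mean(exceeds & ~impaired)    # flagged, actually attaining
        type_ii = np.mean(~exceeds & impaired)   # passed, actually impaired
        print(f"Type I-like: {type_i:.3f}, Type II-like: {type_ii:.3f}")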

  7. High-shear-rate capillary viscometer for inkjet inks

    SciTech Connect

    Wang Xi; Carr, Wallace W.; Bucknall, David G.; Morris, Jeffrey F.

    2010-06-15

    A capillary viscometer developed to measure the apparent shear viscosity of inkjet inks at the high apparent shear rates encountered during inkjet printing is described. By using the Weissenberg-Rabinowitsch equation, true shear viscosity versus true shear rate is obtained. The device comprises a constant-flow generator, a static pressure monitoring device, a high precision submillimeter capillary die, and a high stiffness flow path. The system, which is calibrated using standard Newtonian low-viscosity silicone oil, can be easily operated and maintained. Results for measurement of the shear-rate-dependent viscosity of carbon-black pigmented water-based inkjet inks at shear rates up to 2×10^5 s^-1 are discussed. The Cross model was found to closely fit the experimental data. Inkjet ink samples with similar low-shear-rate viscosities exhibited significantly different shear viscosities at high shear rates depending on particle loading.
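
    The Weissenberg-Rabinowitsch step mentioned above converts apparent to true wall shear rate using the local slope of the flow curve; a short sketch (with the Cross model for subsequent fitting) is shown below, assuming measured arrays of apparent shear rate and wall stress:

        import numpy as np

        def weissenberg_rabinowitsch(gamma_app, tau_wall):
            """True wall shear rate from the apparent rate 4Q/(pi R^3);
            slope = d ln(gamma_app) / d ln(tau_wall)."""
            slope = np.gradient(np.log(gamma_app), np.log(tau_wall))
            return 0.25 * gamma_app * (3.0 + slope)

        def cross_model(gamma, eta0, eta_inf, lam, m):
            """Cross viscosity model, usable with scipy.optimize.curve_fit."""
            return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * gamma) ** m)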

  8. Experimental Minimum-Error Quantum-State Discrimination in High Dimensions

    NASA Astrophysics Data System (ADS)

    Solís-Prosser, M. A.; Fernandes, M. F.; Jiménez, O.; Delgado, A.; Neves, L.

    2017-03-01

    Quantum mechanics forbids perfect discrimination among nonorthogonal states through a single shot measurement. To optimize this task, many strategies were devised that later became fundamental tools for quantum information processing. Here, we address the pioneering minimum-error (ME) measurement and give the first experimental demonstration of its application for discriminating nonorthogonal states in high dimensions. Our scheme is designed to distinguish symmetric pure states encoded in the transverse spatial modes of an optical field; the optimal measurement is performed by a projection onto the Fourier transform basis of these modes. For dimensions ranging from D =2 to D =21 and nearly 14 000 states tested, the deviations of the experimental results from the theoretical values range from 0.3% to 3.6% (getting below 2% for the vast majority), thus showing the excellent performance of our scheme. This ME measurement is a building block for high-dimensional implementations of many quantum communication protocols, including probabilistic state discrimination, dense coding with nonmaximal entanglement, and cryptographic schemes.
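
    A numerical sketch of the measurement described above, projection of symmetric pure states onto the Fourier transform basis, follows; the seed amplitudes are an arbitrary illustrative choice:

        import numpy as np

        D = 8
        rng = np.random.default_rng(0)

        # Seed with nonnegative amplitudes (illustrative); the symmetric set
        # is |psi_k> = sum_j c_j * exp(2*pi*i*j*k/D) |j>.
        c = rng.random(D)
        c /= np.linalg.norm(c)

        j, k = np.arange(D), np.arange(D)
        states = c[:, None] * np.exp(2j * np.pi * np.outer(j, k) / D)
        F = np.exp(2j * np.pi * np.outer(j, k) / D) / np.sqrt(D)  # Fourier basis

        probs = np.abs(F.conj().T @ states) ** 2   # P(outcome m | state k)
        p_success = np.mean(np.diag(probs))        # equal priors, outcome k <-> state k
        print(p_success, abs(c.sum()) ** 2 / D)    # optimum for this symmetric set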

  9. HIGH-RATE DISINFECTION TECHNIQUES FOR COMBINED SEWER OVERFLOW

    EPA Science Inventory

    This paper presents high-rate disinfection technologies for combined sewer overflow (CSO). The high-rate disinfection technologies of interest are: chlorination/dechlorination, ultraviolet light irradiation (UV), chlorine dioxide (ClO2), ozone (O3), peracetic acid (CH3COOOH)...

  10. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  11. Combinatorial FSK modulation for power-efficient high-rate communications

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Budinger, James M.; Vanderaar, Mark J.

    1991-01-01

    Deep-space and satellite communications systems must be capable of conveying high-rate data accurately with low transmitter power, often through dispersive channels. A class of noncoherent Combinatorial Frequency Shift Keying (CFSK) modulation schemes is investigated which address these needs. The bit error rate performance of this class of modulation formats is analyzed and compared to the more traditional modulation types. Candidate modulator, demodulator, and digital signal processing (DSP) hardware structures are examined in detail. System-level issues are also discussed.
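
    The abstract compares CFSK against more traditional formats; as a baseline for such comparisons, the exact symbol-error rate of classical noncoherent orthogonal M-ary FSK in AWGN can be computed as below. This is the textbook expression, not the paper's CFSK analysis:

        import numpy as np
        from math import comb, exp

        def mfsk_ser(es_n0, M):
            """Exact symbol-error rate of noncoherent orthogonal M-FSK in AWGN."""
            return sum((-1) ** (n + 1) * comb(M - 1, n) / (n + 1)
                       * exp(-n / (n + 1) * es_n0) for n in range(1, M))

        def mfsk_ber(eb_n0_db, M):
            """Bit-error rate via the orthogonal-signaling SER-to-BER factor."""
            k = np.log2(M)
            es_n0 = k * 10 ** (eb_n0_db / 10)
            return mfsk_ser(es_n0, M) * (M / 2) / (M - 1)

        print(mfsk_ber(10.0, 16))   # e.g. 16-FSK at Eb/N0 = 10 dB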

  12. The Rate of Return to the High/Scope Perry Preschool Program

    PubMed Central

    Heckman, James J.; Moon, Seong Hyeok; Pinto, Rodrigo; Savelyev, Peter A.; Yavitz, Adam

    2010-01-01

    This paper estimates the rate of return to the High/Scope Perry Preschool Program, an early intervention program targeted toward disadvantaged African-American youth. Estimates of the rate of return to the Perry program are widely cited to support the claim of substantial economic benefits from preschool education programs. Previous studies of the rate of return to this program ignore the compromises that occurred in the randomization protocol. They do not report standard errors. The rates of return estimated in this paper account for these factors. We conduct an extensive analysis of sensitivity to alternative plausible assumptions. Estimated annual social rates of return generally fall between 7 and 10 percent, with most estimates substantially lower than those previously reported in the literature. However, returns are generally statistically significantly different from zero for both males and females and are above the historical return on equity. Estimated benefit-to-cost ratios support this conclusion. PMID:21804653

  13. High speed imaging for material parameters calibration at high strain rate

    NASA Astrophysics Data System (ADS)

    Sasso, M.; Fardmoshiri, M.; Mancini, E.; Rossi, M.; Cortese, L.

    2016-05-01

    To describe material behaviour at high strain rates, dynamic experimental tests are necessary, and appropriate constitutive models must be calibrated accordingly. A way to achieve this is through an inverse procedure, based on the minimization of an error function calculated as the difference between experimental data and numerical data coming from finite element analysis. This approach, widely used in the literature, has a heavy computational cost associated with the minimization process, which requires, for each variation of the material model parameters, the execution of FE calculations. In this work, a faster yet effective calibration procedure is studied. Experimental tests were performed on an aluminium alloy AA6061-T6 by means of a direct tension-compression split Hopkinson bar. A fast camera with a resolution of 192 × 128 pixels, capable of a sample rate of 100,000 fps, captured images of the deformation process undergone by the samples during the tests. The profile of the sample, obtained after image binarization and processing, was post-processed to derive the deformation history; afterwards it was possible to calculate the true stress and strain, and carry out the inverse calibration by analytical computations. The results of this method were compared with those coming from the finite element approach.

  14. High-Strain-Rate behavior of Hydrated Cement Paste.

    DTIC Science & Technology

    1987-01-29

    ...bar and the transmitter bar are made from high yield-strength material, peak loads of 150,000 psi or 10 kbar are easily reached. Typical strain rates... The system was originally set up for testing very high yield-strength materials. Therefore, for use with cement paste samples, a series of new pressure bars... (MML TR 87-12c, High-Strain-Rate Behavior of Hydrated Cement Paste)

  15. Characterization of semiconductor-laser phase noise and estimation of bit-error rate performance with low-speed offline digital coherent receivers.

    PubMed

    Kikuchi, Kazuro

    2012-02-27

    We develop a systematic method for characterizing semiconductor-laser phase noise using a low-speed offline digital coherent receiver. The field spectrum, the FM-noise spectrum, and the phase-error variance measured with such a receiver can completely describe the phase-noise characteristics of lasers under test. The sampling rate of the digital coherent receiver should be much higher than the phase-fluctuation speed; however, 1 GS/s is high enough for most single-mode semiconductor lasers. In addition to such phase-noise characterization, by interpolating data taken at 1.25 GS/s to form a data stream at 10 GS/s, we can predict the bit-error rate (BER) performance of multi-level modulated optical signals at 10 Gsymbol/s. The BER degradation due to the phase noise is well explained by the results of the phase-noise measurements.
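
    A minimal sketch of two of the receiver-side estimates named above (the FM-noise spectrum and the phase-error variance) from complex baseband samples follows; the sampling rate and delay are assumptions of the example, not the paper's settings:

        import numpy as np
        from scipy.signal import welch

        def fm_noise_spectrum(samples, fs):
            """FM-noise spectrum from complex baseband beat-signal samples."""
            phase = np.unwrap(np.angle(samples))
            freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous freq, Hz
            return welch(freq, fs=fs, nperseg=4096)

        def phase_error_variance(samples, fs, tau):
            """Variance of the phase increment over a delay tau (seconds);
            for a Lorentzian laser this equals 2*pi*linewidth*tau."""
            phase = np.unwrap(np.angle(samples))
            lag = max(1, int(round(tau * fs)))
            return np.var(phase[lag:] - phase[:-lag])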

  16. Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment

    NASA Astrophysics Data System (ADS)

    Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.

    2016-11-01

    This paper reports five years of real-time soft error rate experimentation conducted with the same setup, at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between neutron flux changes induced by daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
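
    The bookkeeping behind such life-test experiments reduces to errors per bit-hours, conventionally quoted in FIT/Mbit and normalized by the measured neutron flux when comparing sites; a toy example with entirely hypothetical counts:

        # Hypothetical counts; the experiment above accumulates bit-hours over years.
        errors = 42                  # observed upsets (illustrative)
        bits = 7.3e9                 # ~7 Gbit under test
        hours = 2 * 365 * 24         # two years at sea level

        # SER in FIT/Mbit: failures per 1e9 hours, normalized per Mbit.
        ser_fit_per_mbit = errors / (bits / 1e6) / hours * 1e9
        print(f"SER ~ {ser_fit_per_mbit:.0f} FIT/Mbit")

        # Site comparison normalizes by the measured relative neutron flux:
        flux_site, flux_ref = 1.9, 1.0          # illustrative
        ser_normalized = ser_fit_per_mbit / (flux_site / flux_ref)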

  17. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case

    PubMed Central

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol where one checking state is used. Only assuming that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied to other quantum information processing tasks. PMID:27452275

  18. Investigation of High-Pressure Hydraulic Vortex Rate Sensor

    DTIC Science & Technology

    ...stability-augmentation system. The feasibility of low-pressure fluid stabilization systems was demonstrated. The primary component that requires development for implementation in a high-pressure system is the vortex rate sensor. The high-pressure hydraulic vortex rate sensor has an on-board built-in supply of hydraulic fluid which is used in the primary hydro-mechanical flight control of the vehicle. A small amount of hydraulic fluid under high pressure can be diverted from the main system to the vortex rate sensor, used to perform a sensing function, and

  19. Quantum data locking for high-rate private communication

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Lloyd, Seth

    2015-03-01

    We show that, if the accessible information is used as a security quantifier, quantum channels with a certain symmetry can convey private messages at a tremendously high rate, within one bit of the rate of non-private classical communication. This result is obtained by exploiting the quantum data locking effect. The price to pay to achieve such a high private communication rate is that accessible-information security is in general not composable. However, composable security holds against an eavesdropper who is forced to measure her share of the quantum system within a finite time after she receives it.

  20. Managing Errors to Reduce Accidents in High Consequence Networked Information Systems

    SciTech Connect

    Ganter, J.H.

    1999-02-01

    Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.

  1. High repetition rate optical switch using an electroabsorption modulator in TOAD configuration

    NASA Astrophysics Data System (ADS)

    Huo, Li; Yang, Yanfu; Lou, Caiyun; Gao, Yizhi

    2007-07-01

    A novel optical switch featuring a high repetition rate, a short switching window, and a high contrast ratio is proposed and demonstrated for the first time by placing an electroabsorption modulator (EAM) in a terahertz optical asymmetric demultiplexer (TOAD) configuration. The feasibility and main characteristics of the switch are investigated by numerical simulations and experiments. With this EAM-based TOAD, error-free return-to-zero signal wavelength conversion with a 0.62 dB power penalty at 20 Gbit/s is demonstrated.

  2. Uncovering high-strain rate protection mechanism in nacre

    NASA Astrophysics Data System (ADS)

    Huang, Zaiwang; Li, Haoze; Pan, Zhiliang; Wei, Qiuming; Chao, Yuh J.; Li, Xiaodong

    2011-11-01

    Under high-strain-rate compression (strain rate ~10^3 s^-1), nacre (mother-of-pearl) exhibits surprisingly high fracture strength vis-à-vis under quasi-static loading (strain rate 10^-3 s^-1). Nevertheless, the underlying mechanism responsible for such sharply different behaviors in these two loading modes remains completely unknown. Here we report a new deformation mechanism, adopted by nacre, the best-ever natural armor material, to protect itself against predatory penetrating impacts. It involves the emission of partial dislocations and the onset of deformation twinning that operate in a well-concerted manner to contribute to the increased high-strain-rate fracture strength of nacre. Our findings unveil that Mother Nature delicately uses an ingenious strain-rate-dependent stiffening mechanism with a purpose to fight against foreign attacks. These findings should serve as critical design guidelines for developing engineered body armor materials.

  3. Uncovering high-strain rate protection mechanism in nacre.

    PubMed

    Huang, Zaiwang; Li, Haoze; Pan, Zhiliang; Wei, Qiuming; Chao, Yuh J; Li, Xiaodong

    2011-01-01

    Under high-strain-rate compression (strain rate approximately 10^3 s^-1), nacre (mother-of-pearl) exhibits surprisingly high fracture strength vis-à-vis under quasi-static loading (strain rate 10^-3 s^-1). Nevertheless, the underlying mechanism responsible for such sharply different behaviors in these two loading modes remains completely unknown. Here we report a new deformation mechanism, adopted by nacre, the best-ever natural armor material, to protect itself against predatory penetrating impacts. It involves the emission of partial dislocations and the onset of deformation twinning that operate in a well-concerted manner to contribute to the increased high-strain-rate fracture strength of nacre. Our findings unveil that Mother Nature delicately uses an ingenious strain-rate-dependent stiffening mechanism with a purpose to fight against foreign attacks. These findings should serve as critical design guidelines for developing engineered body armor materials.

  4. Uncovering high-strain rate protection mechanism in nacre

    PubMed Central

    Huang, Zaiwang; Li, Haoze; Pan, Zhiliang; Wei, Qiuming; Chao, Yuh J.; Li, Xiaodong

    2011-01-01

    Under high-strain-rate compression (strain rate ~10^3 s^-1), nacre (mother-of-pearl) exhibits surprisingly high fracture strength vis-à-vis under quasi-static loading (strain rate 10^-3 s^-1). Nevertheless, the underlying mechanism responsible for such sharply different behaviors in these two loading modes remains completely unknown. Here we report a new deformation mechanism, adopted by nacre, the best-ever natural armor material, to protect itself against predatory penetrating impacts. It involves the emission of partial dislocations and the onset of deformation twinning that operate in a well-concerted manner to contribute to the increased high-strain-rate fracture strength of nacre. Our findings unveil that Mother Nature delicately uses an ingenious strain-rate-dependent stiffening mechanism with a purpose to fight against foreign attacks. These findings should serve as critical design guidelines for developing engineered body armor materials. PMID:22355664

  5. Laser nanoablation of diamond surface at high pulse repetition rates

    NASA Astrophysics Data System (ADS)

    Kononenko, V. V.; Gololobov, V. M.; Pashinin, V. P.; Konov, V. I.

    2016-10-01

    The chemical etching of the surface of a natural diamond single crystal irradiated by subpicosecond laser pulses with a high repetition rate (f ≤ 500 kHz) in air is experimentally investigated. The irradiation has been performed with the second-harmonic (515 nm) radiation of a disk Yb : YAG laser. Dependences of the diamond surface etch rate on the laser energy density and pulse repetition rate are obtained.

  6. Rural and Urban High School Dropout Rates: Are They Different?

    ERIC Educational Resources Information Center

    Jordan, Jeffrey L.; Kostandini, Genti; Mykerezi, Elton

    2012-01-01

    This study estimates the high school dropout rate in rural and urban areas, the determinants of dropping out, and whether the differences in graduation rates have changed over time. We use geocoded data from two nationally representative panel household surveys (NLSY 97 and NLSY 79) and a novel methodology that corrects for biases in graduation…

  7. How Did Successful High Schools Improve Their Graduation Rates?

    ERIC Educational Resources Information Center

    Robertson, Janna Siegel; Smith, Robert W.; Rinka, Jason

    2016-01-01

    The researchers surveyed 23 North Carolina high schools that had markedly improved their graduation rates over the past five years. The administrators reported on the dropout prevention practices and programs to which they attributed their improved graduation rates. The majority of schools reported policy changes, especially with suspension. The…

  8. Anamorphic imaging at high-NA EUV: mask error factor and interaction between demagnification and lithographic metrics

    NASA Astrophysics Data System (ADS)

    Bottiglieri, Gerardo; Last, Thorsten; Colina, Alberto; van Setten, Eelco; Rispens, Gijsbert; van Schoot, Jan; van Ingen Schenau, Koen

    2016-10-01

    This paper presents some of the main imaging properties introduced with the design of a possible new EUV High-NA (NA > 0.5) exposure system with anamorphic projection lens, a concept not new in optics but applied for the first time in semiconductor lithography. The system is projected to use a demagnification of 4 in the X-direction and of 8 in the Y-direction. We show that a new definition of the Mask Error Factor needs to be used in order to describe correctly the property introduced by the anamorphic optics. Moreover, for both 1-Dimensional (1D) and 2-Dimensional (2D) features the reticle writing error in the low demagnification direction X is more critical than the error in high demagnification direction Y. The effects of the change in demagnification on imaging are described on an elementary case, and are ultimately linked to the basic physical phenomenon of diffraction.

  9. Identifying High-Rate Flows Based on Sequential Sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Fang, Binxing; Luo, Hao

    We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security such as detection of distributed denial of service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows needs correspondingly large high speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost and processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on fixed sample size test (FSST) which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore the second novel method based on truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove the low-rate flows and identify the high-rate flows at the early stage which can reduce the memory cost and identification time respectively. According to the way to determine the parameters in TSPRT, two versions of TSPRT are proposed: TSPRT-M which is suitable when low memory cost is preferred and TSPRT-T which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement as compared to previously proposed methods.
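
    A minimal sketch of the sequential testing idea behind TSPRT is given below: per-sampled-packet Bernoulli evidence is accumulated in a log-likelihood ratio with Wald thresholds, truncated at a maximum sample count. The rates, error targets and truncation rule are illustrative, not the paper's exact parameterization:

        import numpy as np

        def sprt_flow_test(samples_hit, alpha=0.01, beta=0.01,
                           p0=0.001, p1=0.01, max_samples=10_000):
            """H0: flow occupies fraction p0 of sampled packets (low-rate);
            H1: fraction p1 (high-rate). samples_hit is an iterable of bools."""
            A = np.log((1 - beta) / alpha)    # accept H1 at or above this
            B = np.log(beta / (1 - alpha))    # accept H0 at or below this
            llr = 0.0
            for n, hit in enumerate(samples_hit, 1):
                llr += np.log((p1 if hit else 1 - p1) /
                              (p0 if hit else 1 - p0))
                if llr >= A:
                    return "high-rate"
                if llr <= B:
                    return "low-rate"
                if n >= max_samples:
                    break
            # simplified truncation rule: decide by the sign of the evidence
            return "high-rate" if llr > 0 else "low-rate"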

  10. High Heating Rates Affect Greatly the Inactivation Rate of Escherichia coli

    PubMed Central

    Huertas, Juan-Pablo; Aznar, Arantxa; Esnoz, Arturo; Fernández, Pablo S.; Iguaz, Asunción; Periago, Paula M.; Palop, Alfredo

    2016-01-01

    Heat resistance of microorganisms can be affected by different influencing factors. Although the effect of heating rate has scarcely been explored by the scientific community, recent research has revealed its important effect on the thermal resistance of different species of vegetative bacteria. Heating rates described in the literature typically range from 1 to 20°C/min, but the impact of much higher heating rates is unclear. The aim of this research was to explore the effect of different heating rates, such as those currently achieved in the heat exchangers used in the food industry, on the heat resistance of Escherichia coli. A pilot plant tubular heat exchanger and a Mastia thermoresistometer were used for this purpose. Results showed that fast heating rates had a deep impact on the thermal resistance of E. coli. Heating rates between 20 and 50°C/min were achieved in the heat exchanger, much slower than the roughly 20°C/s achieved in the thermoresistometer. In all cases, these high heating rates led to higher inactivation than expected: in the heat exchanger, for all the experiments performed, when the observed inactivation had reached about seven log cycles, the predictions estimated about one log cycle of inactivation; in the thermoresistometer these differences between observed and predicted values were more than 10 times larger, from 4.07 log cycles observed to 0.34 predicted at a flow rate of 70 mL/min and a maximum heating rate of 14.7°C/s. A quantification of the impact of the heating rates on the level of inactivation achieved was established. These results point out the important effect that the heating rate has on the thermal resistance of E. coli, with high heating rates resulting in an additional sensitization to heat, and therefore offering an effective food safety strategy for food processing. PMID:27563300
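
    The gap between predicted and observed inactivation can be made concrete with the classical Bigelow (D-z) model integrated over a come-up profile; the parameters below are illustrative, not the paper's fitted values. Faster come-up predicts less inactivation during heating, so the extra kill observed at very high rates is exactly what the isothermal model cannot capture:

        import numpy as np

        # Illustrative D/z parameters for E. coli (not the paper's values)
        D_ref, T_ref, z = 0.5, 58.0, 5.0   # D in min at T_ref (degC), z in degC

        def log_reduction(times_min, temps_C):
            """Integrate the Bigelow model over a temperature profile."""
            rate = 10.0 ** ((np.asarray(temps_C) - T_ref) / z) / D_ref
            return np.trapz(rate, times_min)

        # Linear come-up from 20 to 60 degC at two heating rates:
        for rate_C_per_min in (20.0, 880.0):       # ~20 degC/min vs ~14.7 degC/s
            t_end = (60.0 - 20.0) / rate_C_per_min
            t = np.linspace(0.0, t_end, 500)
            print(rate_C_per_min, log_reduction(t, 20.0 + rate_C_per_min * t))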

  11. High Heating Rates Affect Greatly the Inactivation Rate of Escherichia coli.

    PubMed

    Huertas, Juan-Pablo; Aznar, Arantxa; Esnoz, Arturo; Fernández, Pablo S; Iguaz, Asunción; Periago, Paula M; Palop, Alfredo

    2016-01-01

    Heat resistance of microorganisms can be affected by different influencing factors. Although the effect of heating rate has scarcely been explored by the scientific community, recent research has revealed its important effect on the thermal resistance of different species of vegetative bacteria. Heating rates described in the literature typically range from 1 to 20°C/min, but the impact of much higher heating rates is unclear. The aim of this research was to explore the effect of different heating rates, such as those currently achieved in the heat exchangers used in the food industry, on the heat resistance of Escherichia coli. A pilot plant tubular heat exchanger and a Mastia thermoresistometer were used for this purpose. Results showed that fast heating rates had a deep impact on the thermal resistance of E. coli. Heating rates between 20 and 50°C/min were achieved in the heat exchanger, much slower than the roughly 20°C/s achieved in the thermoresistometer. In all cases, these high heating rates led to higher inactivation than expected: in the heat exchanger, for all the experiments performed, when the observed inactivation had reached about seven log cycles, the predictions estimated about one log cycle of inactivation; in the thermoresistometer these differences between observed and predicted values were more than 10 times larger, from 4.07 log cycles observed to 0.34 predicted at a flow rate of 70 mL/min and a maximum heating rate of 14.7°C/s. A quantification of the impact of the heating rates on the level of inactivation achieved was established. These results point out the important effect that the heating rate has on the thermal resistance of E. coli, with high heating rates resulting in an additional sensitization to heat, and therefore offering an effective food safety strategy for food processing.

  12. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

    2006-01-01

    We present a flow-down error analysis, from the radar system to topographic height errors, for bi-static single-pass SAR interferometry with a satellite tandem pair. Because of orbital dynamics, the baseline length and baseline orientation evolve spatially and temporally, so the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations for height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, and slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and X-band SAR. Results from our model indicate that global DTED level 3 accuracy can be achieved.
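
    The flow-down from phase error to height error rests on the standard first-order InSAR sensitivity; a sketch with an illustrative geometry (not the mission's exact values) follows. In a bistatic single-pass pair only one platform transmits, which halves the phase-to-height factor relative to a ping-pong system:

        import numpy as np

        lam = 0.031                    # X-band wavelength, m (illustrative)
        rho = 600e3                    # slant range, m
        theta = np.radians(35.0)       # look angle
        B_perp = 300.0                 # perpendicular baseline, m
        sigma_phi = np.radians(5.0)    # interferometric phase error

        # First-order phase-to-height sensitivity for a bistatic pair
        sigma_h = lam * rho * np.sin(theta) / (2 * np.pi * B_perp) * sigma_phi
        print(f"height error ~ {sigma_h:.2f} m "
              f"per {np.degrees(sigma_phi):.0f} deg of phase noise")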

  13. Improving transcriptome assembly through error correction of high-throughput sequence reads.

    PubMed

    Macmanes, Matthew D; Eisen, Michael B

    2013-01-01

    The study of functional genomics, particularly in non-model organisms, has been dramatically improved over the last few years by the use of transcriptomes and RNAseq. While these studies are potentially extremely powerful, a computationally intensive procedure, the de novo construction of a reference transcriptome, must be completed as a prerequisite to further analyses. An accurate reference is critically important, as all downstream steps, including estimating transcript abundance, depend on it. Though a substantial amount of research has been done on assembly, only recently have the pre-assembly procedures been studied in detail. Specifically, several stand-alone error correction modules have been reported on and, while they have been shown to be effective in reducing errors at the level of sequencing reads, how error correction impacts assembly accuracy is largely unknown. Here, we show, via use of simulated and empirical datasets, that applying error correction to sequencing reads has significant positive effects on assembly accuracy and should be applied to all datasets. A complete collection of commands which will allow for the production of Reptile-corrected reads is available at https://github.com/macmanes/error_correction/tree/master/scripts and as File S1.

  14. High-order Taylor series expansion methods for error propagation in geographic information systems

    NASA Astrophysics Data System (ADS)

    Xue, Jie; Leung, Yee; Ma, Jiang-Hong

    2015-04-01

    The quality of modeling results in GIS operations depends on how well we can track error propagating from inputs to outputs. Monte Carlo simulation, moment design and Taylor series expansion have been employed to study error propagation over the years. Among them, first-order Taylor series expansion is popular because error propagation can be studied analytically. Because most operations in GIS are nonlinear, however, first-order Taylor series expansion generally cannot meet practical needs, and higher-order approximation is thus necessary. In this paper, we employ Taylor series expansion methods of different orders to investigate error propagation when the random error vectors are normally and independently or dependently distributed. We also extend these methods to situations involving multi-dimensional output vectors. We apply these methods to length measurement of linear segments, the perimeter of polygons, and intersections of two line segments, all basic GIS operations. Simulation experiments indicate that the fifth-order Taylor series expansion method is the most accurate compared with the first-order and third-order methods. Compared with the third-order expansion, however, it only slightly improves the accuracy, at the expense of substantially increasing the number of partial derivatives that need to be calculated. Striking a balance between accuracy and complexity, the third-order Taylor series expansion method appears to be the more appropriate choice for practical applications.
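
    For the simplest of the examined operations, segment length, the first-order propagation reduces to a Jacobian sandwich; a sketch with made-up coordinates and covariance is shown below. Higher-order versions add terms with second and higher derivatives, which is where the computational burden noted above comes from:

        import numpy as np

        def length_variance_first_order(p1, p2, cov):
            """First-order Taylor propagation of point-coordinate errors into
            segment length. p1, p2: (x, y); cov: 4x4 cov of (x1, y1, x2, y2)."""
            (x1, y1), (x2, y2) = p1, p2
            L = np.hypot(x2 - x1, y2 - y1)
            J = np.array([x1 - x2, y1 - y2, x2 - x1, y2 - y1]) / L
            return L, J @ cov @ J

        L, var = length_variance_first_order((0, 0), (30, 40),
                                             cov=np.eye(4) * 0.5 ** 2)
        print(f"L = {L:.1f}, sigma_L ~ {np.sqrt(var):.3f}")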

  15. Sources of error in the estimation of mosquito infection rates used to assess risk of arbovirus transmission.

    PubMed

    Bustamante, Dulce M; Lord, Cynthia C

    2010-06-01

    Infection rate is an estimate of the prevalence of arbovirus infection in a mosquito population. It is assumed that when infection rate increases, the risk of arbovirus transmission to humans and animals also increases. We examined some of the factors that can invalidate this assumption. First, we used a model to illustrate how the proportion of mosquitoes capable of virus transmission, or infectious, is not a constant fraction of the number of infected mosquitoes. Thus, infection rate is not always a straightforward indicator of risk. Second, we used a model that simulated the process of mosquito sampling, pooling, and virus testing and found that mosquito infection rates commonly underestimate the prevalence of arbovirus infection in a mosquito population. Infection rate should always be used in conjunction with other surveillance indicators (mosquito population size, age structure, weather) and historical baseline data when assessing the risk of arbovirus transmission.
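
    The pooled-testing step modeled above has a standard closed-form estimator; a minimal sketch for equal pool sizes and a perfect assay follows (the counts are hypothetical). Note the abstract's caution still applies: this estimates infected, not infectious, mosquitoes:

        def pooled_infection_rate(n_pools, n_positive, pool_size):
            """MLE of per-mosquito prevalence from pooled virus testing,
            assuming equal pool sizes and a perfect assay."""
            if n_positive == n_pools:
                raise ValueError("all pools positive: MLE undefined (p -> 1)")
            neg_frac = (n_pools - n_positive) / n_pools
            return 1.0 - neg_frac ** (1.0 / pool_size)

        # e.g. 100 pools of 50 mosquitoes, 12 positive pools:
        print(pooled_infection_rate(100, 12, 50))   # ~0.0026 per mosquito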

  16. Studying solutions at high shear rates: a dedicated microfluidics setup.

    PubMed

    Wieland, D C F; Garamus, V M; Zander, T; Krywka, C; Wang, M; Dedinaite, A; Claesson, P M; Willumeit-Römer, R

    2016-03-01

    The development of a dedicated small-angle X-ray scattering setup for the investigation of complex fluids at different controlled shear conditions is reported. The setup utilizes a microfluidics chip with a narrowing channel. As a consequence, a shear gradient is generated within the channel and the effect of shear rate on structure and interactions is mapped spatially. In a first experiment, small-angle X-ray scattering is utilized to investigate highly concentrated protein solutions up to a shear rate of 300 000 s^-1. These data demonstrate that equilibrium clusters of lysozyme are destabilized at high shear rates.

  17. Slow rate of molecular evolution in high-elevation hummingbirds.

    PubMed

    Bleiweiss, R

    1998-01-20

    Estimates of relative rates of molecular evolution from a DNA-hybridization phylogeny for 26 hummingbird species provide evidence for a negative association between elevation and rate of single-copy genome evolution. This effect of elevation on rate remains significant even after taking into account a significant negative association between body mass and molecular rate. Population-level processes do not appear to account for these patterns because (i) all hummingbirds breed within their first year and (ii) the more extensive subdivision and speciation of bird populations living at high elevations predicts a positive association between elevation and rate. The negative association between body mass and molecular rate in other organisms has been attributed to higher mutation rates in forms with higher oxidative metabolism. As ambient oxygen tensions and temperature decrease with elevation, the slow rate of molecular evolution in high-elevation hummingbirds also may have a metabolic basis. A slower rate of single-copy DNA change at higher elevations suggests that the dynamics of molecular evolution cannot be separated from the environmental context.

  18. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Burr, Tom; Favalli, Andrea; Nicholson, Andrew

    2016-03-01

    The declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar - Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
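
    A sketch of the two fitting routes compared above is given below, written for simplicity with linear density as the x-variable; all calibration numbers are invented. The point of the comparison is that linearizing the Padé form also transforms the measurement errors, which is why the untransformed nonlinear fit is preferable when the measured rate is noisy:

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical calibration data: 235U linear density vs doubles rate
        rho = np.array([5.0, 10.0, 15.0, 20.0, 25.0])       # g/cm
        doubles = np.array([41.0, 76.0, 106.0, 131.0, 152.0])

        def pade(rho, a, b):
            """Pade-style saturating response, rate = a*rho / (1 + b*rho)."""
            return a * rho / (1 + b * rho)

        popt, pcov = curve_fit(pade, rho, doubles, p0=(10.0, 0.01))

        # Linearized alternative: 1/rate = 1/(a*rho) + b/a, a line in 1/rho.
        coef = np.polyfit(1 / rho, 1 / doubles, 1)
        a_lin = 1 / coef[0]
        b_lin = coef[1] * a_lin
        print(popt, (a_lin, b_lin))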

  19. A Comparative Study of EFL Teachers' and Intermediate High School Students' Perceptions of Written Corrective Feedback on Grammatical Errors

    ERIC Educational Resources Information Center

    Jodaie, Mina; Farrokhi, Farahman; Zoghi, Masoud

    2011-01-01

    This study was an attempt to compare EFL teachers' and intermediate high school students' perceptions of written corrective feedback on grammatical errors and also to specify their reasons for choosing comprehensive or selective feedback and some feedback strategies over some others. To collect the required data, the student version of…

  20. Line-Bisecting Performance in Highly Skilled Athletes: Does Preponderance of Rightward Error Reflect Unique Cortical Organization and Functioning?

    ERIC Educational Resources Information Center

    Carlstedt, Roland A.

    2004-01-01

    A line-bisecting test was administered to 250 highly skilled right-handed athletes and a control group of 60 right-handed age matched non-athletes. Results revealed that athletes made overwhelmingly more rightward errors than non-athletes, who predominantly bisected lines to the left of the veridical center. These findings were interpreted in the…

  1. Stretching Behavior of Red Blood Cells at High Strain Rates

    NASA Astrophysics Data System (ADS)

    Mancuso, Jordan; Ristenpart, William

    2016-11-01

    Most work on the mechanical behavior of red blood cells (RBCs) has focused on simple shear flows. Relatively little work has examined RBC deformations in the physiologically important extensional flow that occurs at the entrance to a constriction. In particular, previous work suggests that RBCs rapidly stretch out and then retract upon entering the constriction, but to date no model predicts this behavior for the extremely high strain rates typically experienced there. In this work, we use high speed video to perform systematic measurements of the dynamic stretching behavior of RBCs as they enter a microfluidic constriction. We demonstrate that a simple viscoelastic model captures the observed stretching dynamics, up to strain rates as high as 1000 s^-1. The results indicate that the effective elastic modulus of the RBC membrane at these strain rates is an order of magnitude larger than moduli measured by micropipette aspiration or other low strain rate techniques.
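
    The "simple viscoelastic model" invoked above can be illustrated with a Kelvin-Voigt element driven by a step in extensional stress; all parameters are illustrative, not fitted values from the study:

        import numpy as np

        E, mu = 50.0, 1.0       # effective modulus (Pa) and viscosity (Pa s)
        sigma = 10.0            # membrane stress from the extensional flow, Pa
        tau = mu / E            # relaxation time, s

        t = np.linspace(0, 0.2, 400)
        strain = (sigma / E) * (1 - np.exp(-t / tau))     # stretch phase
        # Once inside the constriction the stress drops and the cell retracts:
        strain_retract = strain[-1] * np.exp(-t / tau)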

  2. Advances in solid polymer electrochemical capacitors for high rate applications

    NASA Astrophysics Data System (ADS)

    Lian, Keryn; Gao, Han

    2011-06-01

    All-solid electrochemical capacitors (EC) have been demonstrated using proton-conducting silicotungstic acid (SiWA) and poly(vinyl alcohol) (PVA) based polymer electrolytes. Graphite electrodes were utilized for electrochemical double layer capacitors (EDLC), while RuO2 electrodes were employed as pseudocapacitive electrodes. Both solid EDLC and pseudocapacitors exhibited very high charge/discharge rate capability. Especially for the solid EDLC, a charge/discharge rate of 25 V/s and a 10 ms time constant ("factor of merit") were obtained. The rate capability of the solid EC is attributable to the thin film thickness, the good proton conductivity of the polymer electrolyte, and the intimate contact between electrode and electrolyte. These results demonstrate the promise of polymer electrolytes as enablers of high-rate, high-performance solid EC devices.

  3. Solidification at the High and Low Rate Extreme

    SciTech Connect

    Meco, Halim

    2004-12-19

    The microstructures formed upon solidification are strongly influenced by the growth rates imposed on an alloy system. Depending on the characteristics of the solidification process, a wide range of growth rates is accessible. The prevailing solidification mechanisms, and thus the final microstructure of the alloy, are governed by these imposed growth rates. At the high rate extreme, for instance, one can access novel microstructures that are unattainable at low growth rates, while low growth rates can be utilized to study the intrinsic growth behavior of a given phase growing from the melt. Although the length scales associated with certain processes, such as capillarity and the diffusion of heat and solute, are different at the low and high rate extremes, the phenomena that govern the selection of a certain microstructural length scale or growth mode are the same. Consequently, one can analyze solidification phenomena at both high and low rates using the same governing principles. In this study, we examined microstructural control at both extremes. For the high rate extreme, the formation of crystalline products and the factors that control the microstructure during rapid solidification by free-jet melt spinning are examined in the Fe-Si-B system. Particular attention was given to the behavior of the melt pool at different quench-wheel speeds. Since the solidification process takes place within the melt pool that forms on the rotating quench wheel, we examined the influence of melt-pool dynamics on the nucleation and growth of crystalline solidification products and on glass formation. High-speed imaging of the melt pool, analysis of ribbon microstructure, and measurement of ribbon geometry and surface character all indicate upper and lower limits for melt-spinning rates for which nucleation can be avoided and fully amorphous ribbons can be achieved. Comparison of the relevant time scales reveals that surface-controlled melt

  4. Determination and Modeling of Error Densities in Ephemeris Prediction

    SciTech Connect

    Jones, J.P.; Beckerman, M.

    1999-02-07

    The authors determined error densities of ephemeris predictions for 14 LEO satellites. The empirical distributions are not inconsistent with the hypothesis of a Gaussian distribution. The growth rate of radial errors is most highly correlated with eccentricity (|r| = 0.63, α < 0.05). The growth rate of along-track errors is most highly correlated with the decay rate of the semimajor axis (|r| = 0.97; α < 0.01).

  5. Authoritative School Climate and High School Dropout Rates

    ERIC Educational Resources Information Center

    Jia, Yuane; Konold, Timothy R.; Cornell, Dewey

    2016-01-01

    This study tested the association between school-wide measures of an authoritative school climate and high school dropout rates in a statewide sample of 315 high schools. Regression models at the school level of analysis used teacher and student measures of disciplinary structure, student support, and academic expectations to predict overall high…

  6. Error propagation equations and tables for estimating the uncertainty in high-speed wind tunnel test results

    SciTech Connect

    Clark, E.L.

    1993-08-01

    Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number and Reynolds number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach number, M∞, on various aerodynamic ratios. To facilitate use of the error equations, sensitivity coefficients are derived and evaluated for nine fundamental aerodynamic ratios, most of which relate free-stream test conditions (pressure, temperature, density or velocity) to a reference condition. Tables of the ratios, R, absolute sensitivity coefficients, ∂R/∂M∞, and relative sensitivity coefficients, (M∞/R)(∂R/∂M∞), are provided as functions of M∞.
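
    As an illustration of the tabulated quantities, the relative sensitivity coefficient (M∞/R)(∂R/∂M∞) can be evaluated numerically for the isentropic static-to-total pressure ratio, one of the standard aerodynamic ratios; the numerics below are a sketch, not the report's tables:

        import numpy as np

        gamma = 1.4

        def pressure_ratio(M):
            """Isentropic static-to-total pressure ratio p/p0 vs Mach."""
            return (1 + 0.5 * (gamma - 1) * M ** 2) ** (-gamma / (gamma - 1))

        def relative_sensitivity(fn, M, dM=1e-6):
            """(M/R) * dR/dM, evaluated by central differences."""
            dRdM = (fn(M + dM) - fn(M - dM)) / (2 * dM)
            return M / fn(M) * dRdM

        for M in (0.5, 1.0, 2.0, 4.0):
            print(M, relative_sensitivity(pressure_ratio, M))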

  7. Breakdown Limit Studies in High-Rate Gaseous Detectors

    NASA Technical Reports Server (NTRS)

    Ivaniouchenkov, Yu; Fonte, P.; Peskov, V.; Ramsey, B. D.

    1999-01-01

    We report results from a systematic study of breakdown limits for novel high-rate gaseous detectors: MICROMEGAS, CAT and GEM, together with more conventional devices such as thin-gap parallel-mesh chambers and high-rate wire chambers. It was found that for all these detectors, the maximum achievable gain, before breakdown appears, drops dramatically with incident flux, and is sometimes inversely proportional to it. Further, in the presence of alpha particles, typical of the backgrounds in high-energy experiments, additional gain drops of 1-2 orders of magnitude were observed for many detectors. It was found that breakdowns at high rates occur through what we have termed an "accumulative" mechanism, which does not seem to have been previously reported in the literature. Results of these studies may help in choosing the optimum detector for given experimental conditions.

  8. High-rate squeezing process of bulk metallic glasses

    NASA Astrophysics Data System (ADS)

    Fan, Jitang

    2017-03-01

    The high-rate squeezing of bulk metallic glasses from a cylinder into an intact sheet, achieved by impact loading, is investigated. Such a large deformation is caused by plastic flow, accompanied by geometrical confinement, shear banding/slipping, thermal softening, melting and joining. Temperature rise during the high-rate squeezing process plays the main role, and the inherent mechanisms are illustrated. Like high-pressure torsion (HPT), equal channel angular pressing (ECAP) and surface mechanical attrition treatment (SMAT) for refining the grains of metals, high-rate squeezing (HRS), as a multi-function technique, not only opens a new route for processing metallic glasses and other metallic alloys to develop advanced materials, but also points toward a novel technology for processing, grain refining, coating, welding and so on for treating materials.

  9. High-rate squeezing process of bulk metallic glasses

    PubMed Central

    Fan, Jitang

    2017-01-01

    The high-rate squeezing of bulk metallic glasses from a cylinder into an intact sheet, achieved by impact loading, is investigated. Such a large deformation is caused by plastic flow, accompanied by geometrical confinement, shear banding/slipping, thermal softening, melting and joining. Temperature rise during the high-rate squeezing process plays the main role, and the inherent mechanisms are illustrated. Like high-pressure torsion (HPT), equal channel angular pressing (ECAP) and surface mechanical attrition treatment (SMAT) for refining the grains of metals, high-rate squeezing (HRS), as a multi-function technique, not only opens a new route for processing metallic glasses and other metallic alloys to develop advanced materials, but also points toward a novel technology for processing, grain refining, coating, welding and so on for treating materials. PMID:28338092

  10. Dynamic analysis of high speed gears by using loaded static transmission error

    NASA Astrophysics Data System (ADS)

    Özgüven, H. Nevzat; Houser, D. R.

    1988-08-01

    A single degree of freedom non-linear model is used for the dynamic analysis of a gear pair. Two methods are suggested and a computer program is developed for calculating the dynamic mesh and tooth forces, dynamic factors based on stresses, and dynamic transmission error from measured or calculated loaded static transmission errors. The analysis includes the effects of variable mesh stiffness and mesh damping, gear errors (pitch, profile and runout errors), profile modifications and backlash. The accuracy of the method, which includes the time variation of both mesh stiffness and damping is demonstrated with numerical examples. In the second method, which is an approximate one, the time average of the mesh stiffness is used. However, the formulation used in the approximate analysis allows for the inclusion of the excitation effect of the variable mesh stiffness. It is concluded from the comparison of the results of the two methods that the displacement excitation resulting from a variable mesh stiffness is more important than the change in system natural frequency resulting from the mesh stiffness variation. Although the theory presented is general and applicable to spur, helical and spiral bevel gears, the computer program prepared is for only spur gears.
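
    A minimal sketch of the single-degree-of-freedom model class described above, with time-varying mesh stiffness, viscous mesh damping, backlash and a static-transmission-error excitation, is given below; every parameter is illustrative rather than taken from the paper:

        import numpy as np
        from scipy.integrate import solve_ivp

        # m*x'' + c*x' + k(t)*g(x) = k(t)*e(t), x = dynamic transmission error
        m, c = 0.5, 40.0                      # equivalent mass, mesh damping
        k0, dk, f_mesh = 2e8, 4e7, 1000.0     # mean/alt. mesh stiffness, Hz
        b = 20e-6                             # half backlash, m

        k = lambda t: k0 + dk * np.cos(2 * np.pi * f_mesh * t)  # variable stiffness
        e = lambda t: 5e-6 * np.cos(2 * np.pi * f_mesh * t)     # static TE input

        def g(x):
            """Backlash nonlinearity: no tooth force inside the gap."""
            return np.where(x > b, x - b, np.where(x < -b, x + b, 0.0))

        def rhs(t, y):
            x, v = y
            return [v, (k(t) * (e(t) - g(x)) - c * v) / m]

        sol = solve_ivp(rhs, (0.0, 0.05), [30e-6, 0.0], max_step=1e-5)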

  11. Evolution of High Tooth Replacement Rates in Sauropod Dinosaurs

    PubMed Central

    Smith, Kathlyn M.; Fisher, Daniel C.; Wilson, Jeffrey A.

    2013-01-01

    Background Tooth replacement rate can be calculated in extinct animals by counting incremental lines of deposition in tooth dentin. Calculating this rate in several taxa allows for the study of the evolution of tooth replacement rate. Sauropod dinosaurs, the largest terrestrial animals that ever evolved, exhibited a diversity of tooth sizes and shapes, but little is known about their tooth replacement rates. Methodology/Principal Findings We present tooth replacement rate, formation time, crown volume, total dentition volume, and enamel thickness for two coexisting but distantly related and morphologically disparate sauropod dinosaurs Camarasaurus and Diplodocus. Individual tooth formation time was determined by counting daily incremental lines in dentin. Tooth replacement rate is calculated as the difference between the number of days recorded in successive replacement teeth. Each tooth family in Camarasaurus has a maximum of three replacement teeth, whereas each Diplodocus tooth family has up to five. Tooth formation times are about 1.7 times longer in Camarasaurus than in Diplodocus (315 vs. 185 days). Average tooth replacement rate in Camarasaurus is about one tooth every 62 days versus about one tooth every 35 days in Diplodocus. Despite slower tooth replacement rates in Camarasaurus, the volumetric rate of Camarasaurus tooth replacement is 10 times faster than in Diplodocus because of its substantially greater tooth volumes. A novel method to estimate replacement rate was developed and applied to several other sauropodomorphs that we were not able to thin section. Conclusions/Significance Differences in tooth replacement rate among sauropodomorphs likely reflect disparate feeding strategies and/or food choices, which would have facilitated the coexistence of these gigantic herbivores in one ecosystem. Early neosauropods are characterized by high tooth replacement rates (despite their large tooth size), and derived titanosaurs and diplodocoids independently

  12. The Contribution Of Sampling Errors In Satellite Precipitation Estimates To High Flood Uncertainty In Subtropical South America

    NASA Astrophysics Data System (ADS)

    Demaria, E. M.; Valdes, J. B.; Nijssen, B.; Rodriguez, D.; Su, F.

    2009-12-01

    Satellite precipitation estimates are becoming increasingly available at temporal and spatial scales of interest for hydrological applications. Unfortunately, precipitation estimated from global satellites is prone to errors hailing from different sources. The impact of sampling errors on the hydrological cycle of a large basin was assessed with a macroscale hydrological model. Synthetic precipitation fields were generated in a Monte Carlo fashion by perturbing observed precipitation fields with sampling errors. Three sampling intervals were chosen to generate the precipitation fields: one hour, three hours (the canonical Global Precipitation Measurement (GPM) mission sampling interval), and six hours. The Variable Infiltration Capacity (VIC) model was used to assess the impact of sampling errors on hydrological fluxes and states in the Iguazu basin in South America for the period 1982-2005. The propagation of sampling errors through the hydrological cycle was evaluated for high flow events that have a 2% chance of being exceeded at any given time. Results show that observed event volumes are underestimated for small volumes for the three- and six-hour sampling intervals, but for the one-hour sampling interval the difference is almost negligible. The timing of the hydrograph is not affected by the uncertainty in satellite-derived precipitation when it propagates through the hydrological cycle. Results of two non-parametric tests, the Kruskal-Wallis test on the mean ranks of the population and the Ansari-Bradley test on the equality of the variances, indicate that sampling errors do not affect the occurrence of high flows, since their probability distribution is not affected. The applicability of these results is limited to a humid climate. However, the Iguazu basin is representative of several basins located in subtropical regions around the world, many of which are under-instrumented catchments, where satellite precipitation might be one of the few available data
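
    The sampling-error experiment can be pictured as follows: a sensor revisiting every few hours sees only every n-th field, and the Monte Carlo step varies the overpass phase. A toy version with a synthetic hourly record (the gamma-distributed rainfall below is a stand-in, not the study's data):

        import numpy as np

        rng = np.random.default_rng(0)
        hourly = rng.gamma(shape=0.2, scale=2.0, size=24 * 365)  # synthetic precip (mm)

        def sampled_total(p, interval, offset):
            """Accumulation from snapshots every `interval` hours, each snapshot
            assumed to persist until the next overpass."""
            return p[offset::interval].sum() * interval

        truth = hourly.sum()
        for interval in (1, 3, 6):
            estimates = [sampled_total(hourly, interval, off) for off in range(interval)]
            bias = (np.mean(estimates) - truth) / truth
            spread = np.std(estimates) / truth
            print(f"{interval} h sampling: relative bias {bias:+.3f}, spread {spread:.3f}")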

  13. High rate and stable cycling of lithium metal anode

    DOE PAGES

    Qian, Jiangfeng; Henderson, Wesley A.; Xu, Wu; ...

    2015-02-20

    Lithium (Li) metal is an ideal anode material for rechargeable batteries. However, dendritic Li growth and limited Coulombic efficiency (CE) during repeated Li deposition/stripping processes have prevented the application of this anode in rechargeable Li metal batteries, especially for use at high current densities. Here, we report that the use of highly concentrated electrolytes composed of ether solvents and the lithium bis(fluorosulfonyl)imide (LiFSI) salt enables the high rate cycling of a Li metal anode at high CE (up to 99.1%) without dendrite growth. With 4 M LiFSI in 1,2-dimethoxyethane (DME) as the electrolyte, a Li|Li cell can be cycled at high rates (10 mA cm⁻²) for more than 6000 cycles with no increase in the cell impedance, and a Cu|Li cell can be cycled at 4 mA cm⁻² for more than 1000 cycles with an average CE of 98.4%. These excellent high rate performances can be attributed to the increased solvent coordination and increased availability of Li+ concentration in the electrolyte. Lastly, further development of this electrolyte may lead to practical applications for the Li metal anode in rechargeable batteries. The fundamental mechanisms behind the high rate ion exchange and stability of the electrolytes also shed light on the stability of other electrochemical systems.

  14. High rate and stable cycling of lithium metal anode

    SciTech Connect

    Qian, Jiangfeng; Henderson, Wesley A.; Xu, Wu; Bhattacharya, Priyanka; Engelhard, Mark H.; Borodin, Oleg; Zhang, Jiguang

    2015-02-20

    Lithium (Li) metal is an ideal anode material for rechargeable batteries. However, dendritic Li growth and limited Coulombic efficiency (CE) during repeated Li deposition/stripping processes have prevented the application of this anode in rechargeable Li metal batteries, especially for use at high current densities. Here, we report that the use of highly concentrated electrolytes composed of ether solvents and the lithium bis(fluorosulfonyl)imide (LiFSI) salt enables the high rate cycling of a Li metal anode at high CE (up to 99.1%) without dendrite growth. With 4 M LiFSI in 1,2-dimethoxyethane (DME) as the electrolyte, a Li|Li cell can be cycled at high rates (10 mA cm⁻²) for more than 6000 cycles with no increase in the cell impedance, and a Cu|Li cell can be cycled at 4 mA cm⁻² for more than 1000 cycles with an average CE of 98.4%. These excellent high rate performances can be attributed to the increased solvent coordination and increased availability of Li+ concentration in the electrolyte. Lastly, further development of this electrolyte may lead to practical applications for the Li metal anode in rechargeable batteries. The fundamental mechanisms behind the high rate ion exchange and stability of the electrolytes also shed light on the stability of other electrochemical systems.

  15. High-performance micromachined vibratory rate- and rate-integrating gyroscopes

    NASA Astrophysics Data System (ADS)

    Cho, Jae Yoong

    The performance of vibratory micromachined gyroscopes has been continuously improving for the past two decades. However, to further improve the performance of MEMS gyroscopes in harsh environments, it is necessary for gyros to reduce the sensitivity to environmental parameters, including vibration and temperature change. In addition, conventional rate-mode MEMS gyroscopes have limited performance due to the tradeoff between resolution, bandwidth, and full-scale range. In this research, we aim to reduce vibration sensitivity by developing gyros that operate in the balanced mode. The balanced mode creates zero net momentum and reduces energy loss through an anchor. The gyro can differentially cancel measurement errors from external vibration along both sensor axes. The vibration sensitivity of the balanced-mode gyroscope including structural imbalance from microfabrication reduces as the absolute difference between the in-phase parasitic mode and operating mode frequencies increases. The parasitic sensing mode frequency is designed to be larger than the operating mode frequency to achieve both improved vibration insensitivity and shock resistivity. A single anchor is used in order to minimize thermoresidual stress change. We developed two gyroscopes based on these design principles. The Balanced Oscillating Gyro (BOG) is a quad-mass tuning-fork rate gyroscope. The relationship between gyro design and modal characteristics is studied extensively using the finite element method (FEM). The gyro is fabricated using the planar Si-on-glass (SOG) process with a device thickness of 100 μm. The BOG is evaluated using the first-generation analog interface circuitry. Under a frequency mismatch of 5 Hz between driving and sense modes, the angle random walk (ARW) is measured to be 0.44°/sec/√Hz. The performance is limited by quadrature error and low-frequency noise in the circuit. The Cylindrical Rate-Integrating Gyroscope (CING) operates in whole-angle mode. The gyro is completely

  16. Performing repetitive error detection in a superconducting quantum circuit

    NASA Astrophysics Data System (ADS)

    Kelly, J.; Barends, R.; Fowler, A.; Megrant, A.; Jeffrey, E.; White, T.; Sank, D.; Mutus, J.; Campbell, B.; Chen, Y.; Chen, Z.; Chiaro, B.; Dunsworth, A.; Hoi, I.-C.; Neill, C.; O'Malley, P. J. J.; Roushan, P.; Quintana, C.; Vainsencher, A.; Wenner, J.; Cleland, A. N.; Martinis, J. M.

    2015-03-01

    Recently, there has been a large interest in the surface code error correction scheme, as gate and measurement fidelities are near the threshold. If error rates are sufficiently low, increased system size leads to suppression of logical error. We have combined high-fidelity gates and measurements in a single nine-qubit device, and use it to perform up to eight rounds of repetitive bit error detection. We demonstrate suppression of environmentally-induced error as compared to a single physical qubit, as well as reduced logical error rates with increasing system size.
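
    The claim that increased system size suppresses logical error can be illustrated with a classical toy model of the bit-flip repetition code; this ignores measurement error, correlated noise, and the multi-round decoding used in the actual experiment:

        import numpy as np

        rng = np.random.default_rng(1)

        def logical_error_rate(n_qubits, p_phys, trials=200_000):
            """Majority-vote decoding of independent bit flips on n data qubits."""
            flips = rng.random((trials, n_qubits)) < p_phys
            return (flips.sum(axis=1) > n_qubits // 2).mean()

        for n in (1, 3, 5, 9):
            print(n, logical_error_rate(n, 0.05))
        # Below threshold, the logical error rate falls as system size grows.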

  17. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    SciTech Connect

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-08-14

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.

  18. High power, high efficiency millimeter wavelength traveling wave tubes for high rate communications from deep space

    NASA Technical Reports Server (NTRS)

    Dayton, James A., Jr.

    1991-01-01

    The high-power transmitters needed for high data rate communications from deep space will require a new class of compact, high efficiency traveling wave tubes (TWT's). Many of the recent TWT developments in the microwave frequency range are generically applicable to mm wave devices, in particular much of the technology of computer aided design, cathodes, and multistage depressed collectors. However, because TWT dimensions scale approximately with wavelength, mm wave devices will be physically much smaller with inherently more stringent fabrication tolerances and sensitivity to thermal dissipation.

  19. High Strain Rate Behavior of Polymer Matrix Composites Analyzed

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Roberts, Gary D.

    2001-01-01

    Procedures for modeling the high-speed impact of composite materials are needed for designing reliable composite engine cases that are lighter than the metal cases in current use. The types of polymer matrix composites that are likely to be used in such an application have a deformation response that is nonlinear and that varies with strain rate. To characterize and validate material models that could be used in the design of impact-resistant engine cases, researchers must obtain material data over a wide variety of strain rates. An experimental program has been carried out through a university grant with the Ohio State University to obtain deformation data for a representative polymer matrix composite for strain rates ranging from quasi-static to high rates of several hundred per second. This information has been used to characterize and validate a constitutive model that was developed at the NASA Glenn Research Center.

  20. Study of High Strain Rate Response of Composites

    NASA Technical Reports Server (NTRS)

    Gilat, Amos

    2003-01-01

    The objective of the research was to continue the experimental study of the effect of strain rate on the mechanical response (deformation and failure) of epoxy resins and carbon fiber/epoxy matrix composites, and to initiate a study of the effects of temperature by developing an elevated temperature test. The experimental data provide the information needed by NASA scientists for the development of nonlinear, rate dependent deformation and strength models for composites that can subsequently be used in design. This year's effort was directed toward testing the epoxy resin. Three types of epoxy resins were tested in tension and shear at strain rates ranging from 5 × 10⁻⁵ to 1000 per second. Pilot shear experiments were done at a high strain rate and an elevated temperature of 80 °C. The results show that strain rate, mode of loading, and temperature all significantly affect the response of the epoxy.

  1. High-Strain Rate Testing of Gun Propellants

    DTIC Science & Technology

    1988-12-01

    specimen is loaded beyond the elastic range. Instrumentation of the bars allows recording of the strain history in the bars during the test event. The strain history on the input bar gives a record of the strain rate history in the sample. The output bar strain history is proportional to the stress history in the sample. The data were compared to the results reported in the literature of earlier high strain rate tests on the same propellants.
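
    The reduction from bar strain histories to sample quantities follows the standard one-wave Hopkinson-bar relations: strain rate from the reflected pulse, stress from the transmitted pulse. A sketch with synthetic pulses (bar properties and pulse shapes are assumed, not taken from the report):

        import numpy as np

        E, c0 = 200e9, 5000.0    # bar modulus (Pa) and wave speed (m/s), steel-like
        A_bar, A_s, L_s = 2.85e-4, 1.0e-4, 5.0e-3   # areas (m^2), sample length (m)

        t = np.linspace(0.0, 100e-6, 2000)
        pulse = np.exp(-((t - 50e-6) / 15e-6) ** 2)
        eps_r = -1.5e-3 * pulse  # reflected strain on the input bar (assumed shape)
        eps_t = 0.8e-3 * pulse   # transmitted strain on the output bar (assumed shape)

        strain_rate = -2.0 * c0 * eps_r / L_s        # input-bar record -> strain rate
        strain = np.cumsum(strain_rate) * (t[1] - t[0])
        stress = E * (A_bar / A_s) * eps_t           # output-bar record -> stress

        print(f"peak strain rate ~ {strain_rate.max():.0f} 1/s, "
              f"peak stress ~ {stress.max() / 1e6:.0f} MPa")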

  2. High density bit transition requirements versus the effects on BCH error correcting code. [bit synchronization

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contained a recommended circuit approach, specified the pseudo random (PN) sequence to be used, and detailed the properties of the sequence. Calculations showing the probability of failing to meet the required transition density were included. A computer simulation of the data stream and PN cover sequence was provided. All worst case situations were simulated and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) Encoder/Decoder design was constructed and demonstrated. The demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.
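
    The cover-sequence principle: XOR the data with a known PN sequence before transmission so the transition density stays high even for pathological data, then XOR again at the receiver to recover the original. A sketch with a 7-stage LFSR (the report specifies the actual polynomial and circuit; the one below is simply a valid m-sequence generator):

        def lfsr_stream(n_bits, taps=(7, 6), nstages=7):
            """Fibonacci LFSR; taps are 1-based stage numbers XORed into the feedback."""
            state = [1] * nstages
            out = []
            for _ in range(n_bits):
                out.append(state[-1])
                fb = 0
                for tp in taps:
                    fb ^= state[tp - 1]
                state = [fb] + state[:-1]
            return out

        data = [0] * 64                              # worst case: no transitions at all
        pn = lfsr_stream(len(data))
        covered = [d ^ p for d, p in zip(data, pn)]  # transmitted stream
        assert [cv ^ p for cv, p in zip(covered, pn)] == data   # receiver recovers data
        print(sum(a != b for a, b in zip(covered, covered[1:])), "transitions in 64 bits")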

  3. General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.

    2011-01-01

    The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model
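
    At its core, the budget-evaluation step multiplies allocated motions by sensitivities and rolls the contrast contributions up to a total. A deliberately simplified stand-in for that spreadsheet step (all names, sensitivities, and allocations below are hypothetical, and real budgets track coherent and incoherent terms separately):

        # Allocated RMS motion (nm) and contrast sensitivity (contrast per nm^2)
        # for a few illustrative terms.
        allocations_nm = {"PM pointing": 0.10, "SM thermal": 0.50, "FSM residual": 0.05}
        sens_per_nm2 = {"PM pointing": 2e-11, "SM thermal": 1e-12, "FSM residual": 5e-11}

        contributions = {name: sens_per_nm2[name] * x ** 2
                         for name, x in allocations_nm.items()}
        total = sum(contributions.values())

        for name, cb in sorted(contributions.items(), key=lambda kv: -kv[1]):
            print(f"{name:14s} {cb:.2e} ({100 * cb / total:.0f}% of total)")
        print(f"total contrast {total:.2e}")

    Trade studies then amount to editing the allocations and re-running the roll-up.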

  4. High-Strain-Rate Compression Testing of Ice

    NASA Technical Reports Server (NTRS)

    Shazly, Mostafa; Prakash, Vikas; Lerch, Bradley A.

    2006-01-01

    In the present study a modified split Hopkinson pressure bar (SHPB) was employed to study the effect of strain rate on the dynamic material response of ice. Disk-shaped ice specimens with flat, parallel end faces were either provided by Dartmouth College (Hanover, NH) or grown at Case Western Reserve University (Cleveland, OH). The SHPB was adapted to perform tests at high strain rates in the range 60 to 1400/s at test temperatures of -10 and -30 C. Experimental results showed that the strength of ice increases with increasing strain rates and this occurs over a change in strain rate of five orders of magnitude. Under these strain rate conditions the ice microstructure has a slight influence on the strength, but it is much less than the influence it has under quasi-static loading conditions. End constraint and frictional effects do not influence the compression tests like they do at slower strain rates, and therefore the diameter/thickness ratio of the samples is not as critical. The strength of ice at high strain rates was found to increase with decreasing test temperatures. Ice has been identified as a potential source of debris to impact the shuttle; data presented in this report can be used to validate and/or develop material models for ice impact analyses for shuttle Return to Flight efforts.

  5. Semi-solid electrodes having high rate capability

    DOEpatents

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2015-11-10

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode, a semi-solid cathode that includes a suspension of an active material and a conductive material in a liquid electrolyte, and an ion permeable membrane disposed between the anode and the cathode. The semi-solid cathode has a thickness in the range of about 250 μm-2,500 μm, and the electrochemical cell has an area specific capacity of at least 5 mAh/cm² at a C-rate of C/2.

  6. Semi-solid electrodes having high rate capability

    DOEpatents

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2016-07-05

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode, a semi-solid cathode that includes a suspension of an active material and a conductive material in a liquid electrolyte, and an ion permeable membrane disposed between the anode and the cathode. The semi-solid cathode has a thickness in the range of about 250 μm-2,500 μm, and the electrochemical cell has an area specific capacity of at least 5 mAh/cm² at a C-rate of C/2.

  7. Online aging study of a high rate MRPC

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Wang, Yi; Feng, S. Q.; Xie, Bo; Lv, Pengfei; Wang, Fuyue; Guo, Baohong; Han, Dong; Li, Yuanjing

    2016-05-01

    With the constant increase of accelerator luminosity, the rate requirements of MRPC detectors have become very important, and the aging characteristics of the detector have to be studied meticulously. An online aging test system has been set up in our lab; in this paper the setup of the system is described and the performance stability of a high-rate MRPC is studied over a long running time in a high luminosity environment. The high rate MRPC was irradiated by X-rays for 36 days and the accumulated charge density reached 0.1 C/cm². No obvious performance degradation was observed for the detector. Supported by National Natural Science Foundation of China (11420101004, 11461141011, 11275108), Ministry of Science and Technology (2015CB856905)

  8. Boosting bit rates and error detection for the classification of fast-paced motor commands based on single-trial EEG analysis.

    PubMed

    Blankertz, Benjamin; Dornhege, Guido; Schäfer, Christin; Krepki, Roman; Kohlmorgen, Jens; Müller, Klaus-Robert; Kunzmann, Volker; Losch, Florian; Curio, Gabriel

    2003-06-01

    Brain-computer interfaces (BCIs) involve two coupled adapting systems--the human subject and the computer. In developing our BCI, our goal was to minimize the need for subject training and to impose the major learning load on the computer. To this end, we use behavioral paradigms that exploit single-trial EEG potentials preceding voluntary finger movements. Here, we report recent results on the basic physiology of such premovement event-related potentials (ERP). 1) We predict the laterality of imminent left- versus right-hand finger movements in a natural keyboard typing condition and demonstrate that a single-trial classification based on the lateralized Bereitschaftspotential (BP) achieves good accuracies even at a pace as fast as 2 taps/s. Results for four out of eight subjects reached a peak information transfer rate of more than 15 b/min; the four other subjects reached 6-10 b/min. 2) We detect cerebral error potentials from single false-response trials in a forced-choice task, reflecting the subject's recognition of an erroneous response. Based on a specifically tailored classification procedure that limits the rate of false positives at, e.g., 2%, the algorithm manages to detect 85% of error trials in seven out of eight subjects. Thus, concatenating a primary single-trial BP-paradigm involving finger classification feedback with such secondary error detection could serve as an efficient online confirmation/correction tool for improvement of bit rates in a future BCI setting. As the present variant of the Berlin BCI is designed to achieve fast classifications in normally behaving subjects, it opens a new perspective for assistance of action control in time-critical behavioral contexts; the potential transfer to paralyzed patients will require further study.
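
    Bit rates like the 6-15 b/min quoted here are conventionally derived from classification accuracy and decision pace; a sketch using the widely used Wolpaw formula (the paper's exact metric may differ, and the accuracies below are hypothetical):

        from math import log2

        def bits_per_decision(n_classes, accuracy):
            """Wolpaw information transfer per decision (bits)."""
            if accuracy <= 1.0 / n_classes:
                return 0.0
            if accuracy >= 1.0:
                return log2(n_classes)
            return (log2(n_classes) + accuracy * log2(accuracy)
                    + (1 - accuracy) * log2((1 - accuracy) / (n_classes - 1)))

        pace_per_min = 2 * 60   # 2 taps/s, one binary left/right decision per tap
        for acc in (0.6, 0.7, 0.8):
            print(f"accuracy {acc:.0%}: {bits_per_decision(2, acc) * pace_per_min:.1f} b/min")

    At this pace, roughly 70% binary accuracy already corresponds to about 14 b/min, the order of magnitude reported above.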

  9. Flexible high-repetition-rate ultrafast fiber laser

    PubMed Central

    Mao, Dong; Liu, Xueming; Sun, Zhipei; Lu, Hua; Han, Dongdong; Wang, Guoxi; Wang, Fengqiu

    2013-01-01

    High-repetition-rate pulses have widespread applications in the fields of fiber communications, frequency comb, and optical sensing. Here, we have demonstrated high-repetition-rate ultrashort pulses in an all-fiber laser by exploiting an intracavity Mach-Zehnder interferometer (MZI) as a comb filter. The repetition rate of the laser can be tuned flexibly from about 7 to 1100 GHz by controlling the optical path difference between the two arms of the MZI. The pulse duration can be reduced continuously from about 10.1 to 0.55 ps with the spectral width tunable from about 0.35 to 5.7 nm by manipulating the intracavity polarization controller. Numerical simulations well confirm the experimental observations and show that filter-driven four-wave mixing effect, induced by the MZI, is the main mechanism that governs the formation of the high-repetition-rate pulses. This all-fiber-based laser is a simple and low-cost source for various applications where high-repetition-rate pulses are necessary. PMID:24226153
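
    The tuning rule behind the 7-1100 GHz range is the free spectral range of the interferometer: the comb spacing, and hence the repetition rate, is f = c/(n_g·ΔL) for an arm-length difference ΔL. A quick check (the fiber group index is an assumed typical value):

        c = 299_792_458.0   # speed of light (m/s)
        n_g = 1.468         # group index of silica fiber near 1550 nm -- approximate

        def rep_rate_GHz(delta_L_m):
            """Repetition rate set by the MZI arm-length difference."""
            return c / (n_g * delta_L_m) / 1e9

        for dL in (29e-3, 1e-3, 0.19e-3):   # arm-length differences in metres
            print(f"dL = {dL * 1e3:6.2f} mm -> {rep_rate_GHz(dL):7.0f} GHz")
        # ~29 mm gives ~7 GHz and ~0.19 mm gives ~1100 GHz, spanning the quoted range.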

  10. High strain rate deformation of NiAl

    SciTech Connect

    Maloy, S.A.; Gray, G.T. III; Darolia, R.

    1994-07-01

    NiAl is a potential high temperature structural material. Applications for which NiAl is being considered (such as rotating components in jet engines) requires knowledge of mechanical properties over a wide range of strain rates. Single crystal NiAl (stoichiometric and Ni 49.75Al 0.25Fe) has been deformed in compression along [100] at strain rates of 0.001, 0.1/s and 2000/s and temperatures of 76,298 and 773K. <111> slip was observed after 76K testing at a strain rate of 0.001/s and 298K testing at a strain rate of 2000/s. Kinking was observed after deformation at 298K and a strain rate of 0.001/s and sometimes at 298 K and a strain rate of 0.1/s. Strain hardening rates of 8200 and 4000 MPa were observed after 773 and 298K testing respectively, at a strain rate of 2000/s. Results are discussed in reference to resulting dislocation substructure.

  11. Temporal pitch perception at high rates in cochlear implants.

    PubMed

    Kong, Ying-Yee; Carlyon, Robert P

    2010-05-01

    A recent study reported that a group of Med-El COMBI 40+CI (cochlear implant) users could, in a forced-choice task, detect changes in the rate of a pulse train for rates higher than the 300 pps "upper limit" commonly reported in the literature [Kong, Y.-Y., et al. (2009). J. Acoust. Soc. Am. 125, 1649-1657]. The present study further investigated the upper limit of temporal pitch in the same group of CI users on three tasks [pitch ranking, rate discrimination, and multidimensional scaling (MDS)]. The patterns of results were consistent across the three tasks and all subjects could follow rate changes above 300 pps. Two subjects showed exceptional ability to follow temporal pitch change up to about 900 pps. Results from the MDS study indicated that, for the two listeners tested, changes in pulse rate over the range of 500-840 pps were perceived along a perceptual dimension that was orthogonal to the place of excitation. Some subjects showed a temporal pitch reversal at rates beyond their upper limit of pitch and some showed a reversal within a small range of rates below the upper limit. These results are discussed in relation to the possible neural bases for temporal pitch processing at high rates.

  12. Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications.

    PubMed

    Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat

    2014-04-01

    Implantable and ambulatory measurement of physiological signals such as Bio-impedance using miniature biomedical devices needs careful tradeoff between limited power budget, measurement accuracy and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate Bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using square/sinusoidal clock. For each case, the error in determining pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in Matlab using ideal RC values show an average accuracy of for single pole and for two pole RC networks. Measurements using ideal components for a single pole model gives an overall and readings from saline phantom solution (primarily resistive) gives an . A Figure of Merit is derived based on ability to accurately resolve multiple poles in unknown impedance with minimal measurement points per decade, for given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.

  13. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    NASA Astrophysics Data System (ADS)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on space diversity reception, the binary phase-shift keying (BPSK) modulated free space optical (FSO) system over Málaga (M) fading channels is investigated in detail. Under independently and identically distributed and independently and non-identically distributed dual branches, the analytical average bit error rate (ABER) expressions, in terms of the Fox H-function, for maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques are derived, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.
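
    The Monte Carlo check in such papers amounts to drawing fading gains per branch, combining them, and averaging the conditional BPSK error probability. A sketch of that verification step (Gamma-Gamma fading is used as a stand-in for the Málaga model, and the combining/BER expressions are the textbook coherent-BPSK forms):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)

        def fading(alpha, beta, size):
            """Gamma-Gamma irradiance samples (stand-in for the Malaga model)."""
            return rng.gamma(alpha, 1.0 / alpha, size) * rng.gamma(beta, 1.0 / beta, size)

        def aber(avg_snr_db, combiner, n=400_000, alpha=4.0, beta=2.0):
            g2 = 10.0 ** (avg_snr_db / 10.0)      # average electrical SNR per branch
            h1, h2 = fading(alpha, beta, n), fading(alpha, beta, n)
            if combiner == "MRC":
                snr = g2 * (h1 ** 2 + h2 ** 2)
            else:                                  # EGC
                snr = g2 * (h1 + h2) ** 2 / 2.0
            return norm.sf(np.sqrt(2.0 * snr)).mean()   # BPSK: Pb = Q(sqrt(2*SNR))

        for db in (0, 5, 10):
            print(f"{db:2d} dB: MRC {aber(db, 'MRC'):.3e}, EGC {aber(db, 'EGC'):.3e}")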

  14. Strategies for adapting to high rates of employee turnover.

    PubMed

    Mowday, R T

    1984-01-01

    For many organizations facing high rates of employee turnover, strategies for increasing employee retention may not be practical because employees leave for reasons beyond the control of management or the costs of reducing turnover exceed the benefits to be derived. In this situation managers need to consider strategies that can minimize or buffer the organization from the negative consequences that often follow from turnover. Strategies organizations can use to adapt to uncontrollably high employee turnover rates are presented in this article. In addition, suggestions are made for how managers should make choices among the alternative strategies.

  15. Calcium thionyl chloride high-rate reserve cell

    NASA Astrophysics Data System (ADS)

    Peled, E.; Meitav, A.; Brand, M.

    1981-09-01

    The goal is to assess the high-rate capability of a reserve type calcium-Ca(AlCl4)2 thionyl chloride cell and to demonstrate its excellent safety features. The good discharge performance at a discharge time of 10-15 min, together with the excellent safety features of the cell, is seen as warranting further investigation of this system as a candidate for high-rate multicell reserve and non-reserve battery applications. A test is described proving that it is practically impossible to 'charge' this cell.

  16. High removal rate laser-based coating removal system

    DOEpatents

    Matthews, Dennis L.; Celliers, Peter M.; Hackel, Lloyd; Da Silva, Luiz B.; Dane, C. Brent; Mrowka, Stanley

    1999-11-16

    A compact laser system that removes surface coatings (such as paint, dirt, etc.) at a removal rate as high as 1000 ft²/hr or more without damaging the surface. A high repetition rate laser with multiple amplification passes propagating through at least one optical amplifier is used, along with a delivery system consisting of a telescoping and articulating tube which also contains an evacuation system for simultaneously sweeping up the debris produced in the process. The amplified beam can be converted to an output beam by passively switching the polarization of at least one amplified beam. The system also has a personal safety system which protects against accidental exposures.

  17. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    SciTech Connect

    Loughry, Thomas A.

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
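
    Rice decoding itself is simple; the report's challenge is doing it at rate on a GPGPU. A scalar reference decoder for one common Rice(k) convention (unary quotient terminated by a 0, then k remainder bits, MSB first; the stream is assumed well-formed):

        def rice_decode(bits, k):
            """Decode a stream of Rice(k) codewords into non-negative integers."""
            values, i = [], 0
            while i < len(bits):
                q = 0
                while bits[i] == 1:      # unary quotient: run of 1s
                    q += 1
                    i += 1
                i += 1                   # consume the terminating 0
                r = 0
                for _ in range(k):       # fixed-size remainder, MSB first
                    r = (r << 1) | bits[i]
                    i += 1
                values.append((q << k) | r)
            return values

        # Rice(2): 5 -> quotient 1, remainder 1 -> bits 1 0 0 1 ; 2 -> bits 0 1 0
        assert rice_decode([1, 0, 0, 1, 0, 1, 0], 2) == [5, 2]

    On a GPGPU the same logic is typically restructured so each thread block decodes an independent chunk, since the bit-serial scan above has no intra-stream parallelism.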

  18. A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.; Shaklan, Stuart B.

    2009-01-01

    This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.

  19. A general tool for evaluating high-contrast coronagraphic telescope performance error budgets

    NASA Astrophysics Data System (ADS)

    Marchen, Luis F.; Shaklan, Stuart B.

    2009-08-01

    This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.

  20. Adaptivity with near-orthogonality constraint for high compression rates in lifting scheme framework

    NASA Astrophysics Data System (ADS)

    Sliwa, Tadeusz; Voisin, Yvon; Diou, Alain

    2004-01-01

    In recent years, the lifting scheme has proven its utility in the compression field. It permits the easy creation of fast, reversible, separable or non-separable, not necessarily linear, multiresolution analyses for sound, image, video or even 3D graphics. An interesting feature of the lifting scheme is the ability to build adaptive transforms for compression more easily than with other decompositions. Much work has already been done on this subject, especially in the lossless or near-lossless compression framework, where better compression than with commonly used methods can be obtained. However, most of the techniques used in adaptive near-lossless compression cannot be extended to higher lossy compression rates, even in the simplest cases. This is due to the quantization error introduced before coding, whose propagation through the inverse transform is not controlled. The authors work with the classical lifting scheme, with linear convolution filters, but study criteria that maintain a high level of adaptivity and good error propagation through the inverse transform. This article presents a relatively simple criterion for obtaining filters able to support image and video compression at high compression rates, tested here with the SPIHT coder. To this end, update and predict filters are adapted simultaneously using a constrained least-squares method. The constraint consists of a near-orthogonality inequality that still allows a sufficiently high level of adaptivity. Some compression results are given, illustrating the relevance of this method even with short filters.
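
    For orientation, one level of the classic linear predict/update lifting pair (the LeGall 5/3 pair, with periodic boundary handling) shows the structure being adapted; in the authors' method the fixed 1/2 and 1/4 weights would instead be fitted by constrained least squares under the near-orthogonality inequality:

        import numpy as np

        def lift_forward(x):
            """One level of 5/3-style lifting: predict then update."""
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            d = odd - 0.5 * (even + np.roll(even, -1))   # predict odd from neighbours
            s = even + 0.25 * (d + np.roll(d, 1))        # update to preserve the mean
            return s, d

        def lift_inverse(s, d):
            even = s - 0.25 * (d + np.roll(d, 1))
            odd = d + 0.5 * (even + np.roll(even, -1))
            x = np.empty(2 * len(s))
            x[0::2], x[1::2] = even, odd
            return x

        x = np.random.default_rng(4).integers(0, 256, 16)
        s, d = lift_forward(x)
        assert np.allclose(lift_inverse(s, d), x)   # reversible by construction

    Reversibility holds for any predict/update weights, which is what makes adaptive fitting safe on the synthesis side.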

  1. Ultra High-Rate Germanium (UHRGe) Modeling Status Report

    SciTech Connect

    Warren, Glen A.; Rodriguez, Douglas C.

    2012-06-07

    The Ultra-High Rate Germanium (UHRGe) project at Pacific Northwest National Laboratory (PNNL) is conducting research to develop a high-purity germanium (HPGe) detector that can provide both the high resolution typical of germanium and high signal throughput. Such detectors may be beneficial for a variety of potential applications ranging from safeguards measurements of used fuel to material detection and verification using active interrogation techniques. This report describes some of the initial radiation transport modeling efforts that have been conducted to help guide the design of the detector as well as a description of the process used to generate the source spectrum for the used fuel application evaluation.

  2. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    NASA Technical Reports Server (NTRS)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.
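
    The structure of such a study is easy to reproduce: moment-match a gamma prior to an assumed prior mean and variance, update it with Poisson failure data, and compare the posterior mean against the maximum likelihood estimate as the prior is mis-specified. A sketch (all numbers hypothetical):

        def posterior_mean_rate(prior_mean, prior_var, failures, hours):
            """Gamma prior moment-matched to (mean, variance), Poisson update."""
            beta = prior_mean / prior_var         # gamma rate parameter
            alpha = prior_mean ** 2 / prior_var   # gamma shape parameter
            return (alpha + failures) / (beta + hours)

        failures, hours = 3, 2.0e6
        mle = failures / hours
        prior_var = (0.5e-6) ** 2
        for prior_mean in (0.5e-6, 1.0e-6, 5.0e-6):   # progressively shifted prior mean
            post = posterior_mean_rate(prior_mean, prior_var, failures, hours)
            print(f"prior mean {prior_mean:.1e}: posterior {post:.2e} (MLE {mle:.2e})")

    With a small prior variance, the posterior tracks the prior mean rather than the data, which is exactly the region of criticality the abstract describes.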

  3. Machining and grinding: High rate deformation in practice

    SciTech Connect

    Follansbee, P.S.

    1993-04-01

    Machining and grinding are well-established material-working operations involving highly non-uniform deformation and failure processes. A typical machining operation is characterized by uncertain boundary conditions (e.g., surface interactions), three-dimensional stress states, large strains, high strain rates, non-uniform temperatures, highly localized deformations, and failure by both nominally ductile and brittle mechanisms. While machining and grinding are thought to be dominated by empiricism, even a cursory inspection leads one to the conclusion that this results more from necessity arising out of the complicated and highly interdisciplinary nature of the processes than from the lack thereof. With these conditions in mind, the purpose of this paper is to outline the current understanding of strain rate effects in metals.

  4. Machining and grinding: High rate deformation in practice

    SciTech Connect

    Follansbee, P.S.

    1993-01-01

    Machining and grinding are well-established material-working operations involving highly non-uniform deformation and failure processes. A typical machining operation is characterized by uncertain boundary conditions (e.g., surface interactions), three-dimensional stress states, large strains, high strain rates, non-uniform temperatures, highly localized deformations, and failure by both nominally ductile and brittle mechanisms. While machining and grinding are thought to be dominated by empiricism, even a cursory inspection leads one to the conclusion that this results more from necessity arising out of the complicated and highly interdisciplinary nature of the processes than from the lack thereof. With these conditions in mind, the purpose of this paper is to outline the current understanding of strain rate effects in metals.

  5. High frame rate CCD camera with fast optical shutter

    SciTech Connect

    Yates, G.J.; McDonald, T.E. Jr.; Turko, B.T.

    1998-09-01

    A high frame rate CCD camera coupled with a fast optical shutter has been designed for high repetition rate imaging applications. The design uses state-of-the-art microchannel plate image intensifier (MCPII) technology fostered/developed by Los Alamos National Laboratory to support nuclear, military, and medical research requiring high-speed imagery. Key design features include asynchronous resetting of the camera to acquire random transient images, patented real-time analog signal processing with 10-bit digitization at 40-75 MHz pixel rates, synchronized shutter exposures as short as 200 ps, and sustained continuous readout of 512 x 512 pixels per frame at 1-5 Hz rates via parallel multiport (16-port CCD) data transfer. Salient characterization/performance test data for the prototype camera are presented; temporally and spatially resolved images obtained from range-gated LADAR field testing are included; and an alternative system configuration using several cameras sequenced to deliver discrete numbers of consecutive frames at effective burst rates up to 5 GHz (accomplished by time-phasing of consecutive MCPII shutter gates without overlap) is discussed. Potential applications including dynamic radiography and optical correlation will be presented.

  6. Characteristics of a magnetorheological fluid in high shear rate

    NASA Astrophysics Data System (ADS)

    Kikuchi, Takehito; Abe, Isao; Inoue, Akio; Iwasaki, Akihiko; Okada, Katsuhiko

    2016-11-01

    The information on the properties of the magnetorheological fluid (MRF) at high shear rates, in particular shear rates greater than 10,000 s⁻¹, is important for the design of devices utilizing the MRF with very narrow fluid gaps, which are used in high-speed applications. However, very little research has been conducted on this subject. The objective of this study is to provide such information. MRF-140CG (Lord Corp.) is chosen as an example MRF. The plastic viscosity, thermal sensitivity, and durability of the fluid, especially under shear rates greater than 10,000 s⁻¹, are reported. The plastic viscosity is almost constant under a wide range of magnetic input. In contrast, MRF-140CG is sensitive to the shear rate; its sensitivity is relatively low at high shear rates. The thermal sensitivity shows negative values, and the effect of temperature decreases with increasing magnetic input. According to the result of the duration test at 30,000 s⁻¹ and at a temperature of 120 °C, the lifetime dissipation energy is 5.48 MJ ml⁻¹.

  7. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  8. User microprogrammable processors for high data rate telemetry preprocessing

    NASA Technical Reports Server (NTRS)

    Pugsley, J. H.; Ogrady, E. P.

    1973-01-01

    The use of microprogrammable processors for the preprocessing of high data rate satellite telemetry is investigated. The following topics are discussed along with supporting studies: (1) evaluation of commercial microprogrammable minicomputers for telemetry preprocessing tasks; (2) microinstruction sets for telemetry preprocessing; and (3) the use of multiple minicomputers to achieve high data processing rates. The simulation of small microprogrammed processors is discussed along with examples of microprogrammed processors.

  9. High-rate deformation of nanocrystalline iron and copper

    NASA Astrophysics Data System (ADS)

    Sinani, A. B.; Shpeizman, V. V.; Vlasov, A. S.; Zil'berbrand, E. L.; Kozachuk, A. I.

    2016-11-01

    Stress-strain curves are recorded during a high-speed impact and slow loading for nanocrystalline and coarse-grained iron and copper. The strain-rate sensitivity is determined as a function of the grain size and the strain. It is shown that the well-known difference between the variations of the strain-rate sensitivity of the yield strength with the grain size in fcc and bcc metals can be extended to other strain dependences: the strain-rate sensitivity of flow stresses in iron decreases with increasing strain, and that in copper increases. This difference also manifests itself in different slopes of the dependence of the strain-rate sensitivity on the grain size when the strain changes.
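
    The quantity at issue is the strain-rate sensitivity exponent m = d(ln stress)/d(ln strain rate), usually estimated from flow stresses measured at two rates. A two-point sketch with hypothetical values:

        from math import log

        def rate_sensitivity(stress1, rate1, stress2, rate2):
            """Two-point estimate of m = d(ln stress) / d(ln strain rate)."""
            return log(stress2 / stress1) / log(rate2 / rate1)

        # Hypothetical flow stresses (Pa) at quasi-static and impact strain rates:
        m_bcc_like = rate_sensitivity(400e6, 1e-3, 700e6, 1e3)   # strong rate effect
        m_fcc_like = rate_sensitivity(300e6, 1e-3, 330e6, 1e3)   # weak rate effect
        print(f"m (bcc-like) = {m_bcc_like:.3f}, m (fcc-like) = {m_fcc_like:.3f}")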

  10. The perturbation paradigm modulates error-based learning in a highly automated task: outcomes in swallowing kinematics

    PubMed Central

    Anderson, C.; Macrae, P.; Taylor-Kamara, I.; Serel, S.; Vose, A.

    2015-01-01

    Traditional motor learning studies focus on highly goal-oriented, volitional tasks that often do not readily generalize to real-world movements. The goal of this study was to investigate how different perturbation paradigms alter error-based learning outcomes in a highly automated task. Swallowing was perturbed with neck surface electrical stimulation that opposes hyo-laryngeal elevation in 25 healthy adults (30 swallows: 10 preperturbation, 10 perturbation, and 10 postperturbation). The four study conditions were gradual-masked, gradual-unmasked, abrupt-masked, and abrupt-unmasked. Gradual perturbations increasingly intensified over time, while abrupt perturbations were sustained at the same high intensity. The masked conditions reduced cues about the presence/absence of the perturbation (pre- and postperturbation periods had low stimulation), but unmasked conditions did not (pre- and postperturbation periods had no stimulation). Only hyo-laryngeal range of motion measures had significant outcomes; no timing measure demonstrated learning. Systematic-error reduction occurred only during the abrupt-masked and abrupt-unmasked perturbations. Only the abrupt-masked perturbation caused aftereffects. In this highly automated task, gradual perturbations did not induce learning, in contrast to findings of some volitional, goal-oriented adaptation task studies. Furthermore, our subtle and brief adjustment of the stimulation paradigm (masked vs. unmasked) determined whether aftereffects were present. This suggests that, in the unmasked group, sensory predictions of a motor plan were quickly and efficiently modified to disengage error-based learning behaviors. PMID:26023226

  11. High Reported Spontaneous Stuttering Recovery Rates: Fact or Fiction?

    ERIC Educational Resources Information Center

    Ramig, Peter R.

    1993-01-01

    Contact after 6 to 8 years with families of 21 children who were diagnosed as stuttering but did not receive fluency intervention services found that almost all subjects still had a stuttering problem. Results dispute the high spontaneous recovery rates reported in the literature and support the value of early intervention. (Author/DB)

  12. Distance Education: Why Are the Attrition Rates so High?

    ERIC Educational Resources Information Center

    Moody, Johnette

    2004-01-01

    Distance education is being hailed as the next best thing to sliced bread. But is it really? Many problems exist with distance-delivered courses. Everything from course development and management to the student not being adequately prepared are problematic and result in high attrition rates in distance-delivered courses. Students initially…

  13. Binary interactions with high accretion rates onto main sequence stars

    NASA Astrophysics Data System (ADS)

    Shiber, Sagiv; Schreier, Ron; Soker, Noam

    2016-07-01

    Energetic outflows from main sequence stars accreting mass at very high rates might account for the powering of some eruptive objects, such as merging main sequence stars, major eruptions of luminous blue variables, e.g., the Great Eruption of Eta Carinae, and other intermediate luminosity optical transients (ILOTs; red novae; red transients). These powerful outflows could potentially also supply the extra energy required in the common envelope process and in the grazing envelope evolution of binary systems. We propose that a massive outflow/jets mediated by magnetic fields might remove energy and angular momentum from the accretion disk to allow such high accretion rate flows. By examining the possible activity of the magnetic fields of accretion disks, we conclude that indeed main sequence stars might accrete mass at very high rates, up to ≈10⁻² M⊙ yr⁻¹ for solar-type stars, and up to ≈1 M⊙ yr⁻¹ for very massive stars. We speculate that magnetic fields amplified in such extreme conditions might lead to the formation of massive bipolar outflows that can remove most of the disk's energy and angular momentum. It is this energy and angular momentum removal that allows the very high mass accretion rate onto main sequence stars.

  14. Design of abrasive tool for high-rate grinding

    NASA Astrophysics Data System (ADS)

    Ilinykh, AS

    2017-02-01

    Experimental studies aimed at the design of heavy-duty abrasive wheels for high-rate grinding are presented. The design of abrasive wheels with working speeds up to 100 m/s is based on the selection of an optimized material composition and manufacturing technology for the wheels.

  15. Plant respirometer enables high resolution of oxygen consumption rates

    NASA Technical Reports Server (NTRS)

    Foster, D. L.

    1966-01-01

    Plant respirometer permits high resolution of relatively small changes in the rate of oxygen consumed by plant organisms undergoing oxidative metabolism in a nonphotosynthetic state. The two stage supply and monitoring system operates by a differential pressure transducer and provides a calibrated output by digital or analog signals.

  16. Cassini High Rate Detector V16.0

    NASA Astrophysics Data System (ADS)

    Economou, T.; DiDonna, P.

    2016-05-01

    The High Rate Detector (HRD) from the University of Chicago is an independent part of the CDA instrument on the Cassini Orbiter that measures the dust flux and particle mass distribution of dust particles hitting the HRD detectors. This data set includes all data from the HRD through December 31, 2015. Please refer to Srama et al. (2004) for a detailed HRD description.

  17. Predicting the College Attendance Rate of Graduating High School Classes.

    ERIC Educational Resources Information Center

    Hoover, Donald R.

    1990-01-01

    An important element of school counseling is providing assessments on the collective future needs and activities of a graduating school class. The College Attendance Rate (CAR) is defined here as the proportion of seniors graduating from a given high school, during a given year, that will enroll full-time at an academic college sometime during the…

  18. Digital approach to high rate gamma-ray spectrometry

    SciTech Connect

    Korolczuk, Stefan; Mianowski, Slawomir; Rzadkiewicz, Jacek; Sibczynski, Pawel; Swiderski, Lukasz; Szewinski, Jaroslaw; Zychor, Izabella

    2015-07-01

    Basic concepts and preliminary results of creating a high-rate digital spectrometry system using efficient ADCs and the latest FPGAs are presented, as well as a comparison with commercially available devices. The possibility of using such systems, coupled to scintillators, in plasma experiments is discussed. (authors)

  19. Corrected High-Frame Rate Anchored Ultrasound with Software Alignment

    ERIC Educational Resources Information Center

    Miller, Amanda L.; Finch, Kenneth B.

    2011-01-01

    Purpose: To improve lingual ultrasound imaging with the Corrected High Frame Rate Anchored Ultrasound with Software Alignment (CHAUSA; Miller, 2008) method. Method: A production study of the IsiXhosa alveolar click is presented. Articulatory-to-acoustic alignment is demonstrated using a Tri-Modal 3-ms pulse generator. Images from 2 simultaneous…

  20. Childhood Onset Schizophrenia: High Rate of Visual Hallucinations

    ERIC Educational Resources Information Center

    David, Christopher N.; Greenstein, Deanna; Clasen, Liv; Gochman, Pete; Miller, Rachel; Tossell, Julia W.; Mattai, Anand A.; Gogtay, Nitin; Rapoport, Judith L.

    2011-01-01

    Objective: To document high rates and clinical correlates of nonauditory hallucinations in childhood onset schizophrenia (COS). Method: Within a sample of 117 pediatric patients (mean age 13.6 years), diagnosed with COS, the presence of auditory, visual, somatic/tactile, and olfactory hallucinations was examined using the Scale for the Assessment…

  1. Cassini High Rate Detector V14.0

    NASA Astrophysics Data System (ADS)

    Economou, T.; DiDonna, P.

    2014-06-01

    The High Rate Detector (HRD) from the University of Chicago is an independent part of the CDA instrument on the Cassini Orbiter that measures the dust flux and particle mass distribution of dust particles hitting the HRD detectors. This data set includes all data from the HRD through December 31, 2013. Please refer to Srama et al. (2004) for a detailed HRD description.

  2. READOUT ELECTRONICS FOR A HIGH-RATE CSC DETECTOR

    SciTech Connect

    OCONNOR,P.; GRATCHEV,V.; KANDASAMY,A.; POLYCHRONAKOS,V.; TCHERNIATINE,V.; PARSONS,J.; SIPPACH,W.

    1999-09-25

    A readout system for a high-rate muon Cathode Strip Chamber (CSC) is described. The system, planned for use in the forward region of the ATLAS muon spectrometer, uses two custom CMOS integrated circuits to achieve good position resolution at a flux of up to 2,500 tracks/cm²/s.

  3. Trends in High School Graduation Rates. Research Brief. Volume 0710

    ERIC Educational Resources Information Center

    Romanik, Dale; Froman, Terry

    2008-01-01

    This Research Brief addresses an outcome measure that is of paramount importance to senior high schools--graduation rate. Nationwide a student drops out of school approximately every nine seconds. The significance of this issue locally is exemplified by a recent American Civil Liberties Union filing of a class action law suit against the Palm…

  4. High Interview Response Rates: Much Ado about Nothing?

    ERIC Educational Resources Information Center

    Berdie, Doug R.

    The question of how high a response rate is needed in order for telephone surveys to obtain data that accurately represent the entire sample, was investigated via reevaluating results of three previously published studies and reporting on three 1989 studies for the first time. The three previous studies indicated that, if the sample…

  5. Quality Control of High-Dose-Rate Brachytherapy: Treatment Delivery Analysis Using Statistical Process Control

    SciTech Connect

    Able, Charles M.; Bright, Megan; Frizzell, Bart

    2013-03-01

    Purpose: Statistical process control (SPC) is a quality control method used to ensure that a process is well controlled and operates with little variation. This study determined whether SPC was a viable technique for evaluating the proper operation of a high-dose-rate (HDR) brachytherapy treatment delivery system. Methods and Materials: A surrogate prostate patient was developed using Vyse ordnance gelatin. A total of 10 metal oxide semiconductor field-effect transistors (MOSFETs) were placed from prostate base to apex. Computed tomography guidance was used to accurately position the first detector in each train at the base. The plan consisted of 12 needles with 129 dwell positions delivering a prescribed peripheral dose of 200 cGy. Sixteen accurate treatment trials were delivered as planned. Subsequently, a number of treatments were delivered with errors introduced, including wrong patient, wrong source calibration, wrong connection sequence, single needle displaced inferiorly 5 mm, and entire implant displaced 2 mm and 4 mm inferiorly. Two process behavior charts (PBC), an individual and a moving range chart, were developed for each dosimeter location. Results: There were 4 false positives resulting from 160 measurements from 16 accurately delivered treatments. For the inaccurately delivered treatments, the PBC indicated that measurements made at the periphery and apex (regions of high-dose gradient) were much more sensitive to treatment delivery errors. All errors introduced were correctly identified by either the individual or the moving range PBC in the apex region. Measurements at the urethra and base were less sensitive to errors. Conclusions: SPC is a viable method for assessing the quality of HDR treatment delivery. Further development is necessary to determine the most effective dose sampling, to ensure reproducible evaluation of treatment delivery accuracy.
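
    The individuals and moving-range charts used in the study follow the standard Shewhart constructions: estimate the process sigma from the mean moving range, then set 3-sigma limits. A sketch that derives limits from a baseline of accurate deliveries and flags a later reading (all dose values hypothetical):

        import numpy as np

        def imr_limits(baseline):
            """3-sigma limits for individuals (X) and moving-range (MR) charts."""
            x = np.asarray(baseline, dtype=float)
            mr_bar = np.abs(np.diff(x)).mean()
            sigma_hat = mr_bar / 1.128          # d2 constant for subgroups of 2
            x_limits = (x.mean() - 3 * sigma_hat, x.mean() + 3 * sigma_hat)
            mr_ucl = 3.267 * mr_bar             # D4 constant for subgroups of 2
            return x_limits, mr_ucl

        baseline = [199.1, 201.4, 200.2, 198.8, 200.9, 199.7, 200.5, 201.0]  # cGy
        (x_lo, x_hi), mr_ucl = imr_limits(baseline)

        new, prev = 193.5, baseline[-1]         # e.g., a displaced-implant reading
        flagged = not (x_lo <= new <= x_hi) or abs(new - prev) > mr_ucl
        print(f"X limits ({x_lo:.1f}, {x_hi:.1f}) cGy, MR UCL {mr_ucl:.1f}, flagged={flagged}")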

  6. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  7. Evaluation of the Effect of Noise on the Rate of Errors and Speed of Work by the Ergonomic Test of Two-Hand Co-Ordination

    PubMed Central

    Habibi, Ehsanollah; Dehghan, Habibollah; Dehkordy, Sina Eshraghy; Maracy, Mohammad Reza

    2013-01-01

    Background: Among the most important and effective factors affecting the efficiency of the human workforce are accuracy, promptness, and ability. In the context of promoting levels and quality of productivity, the aim of this study was to investigate the effects of exposure to noise on the rate of errors, speed of work, and capability in performing manual activities. Methods: This experimental study was conducted on 96 students (52 female and 44 male) of the Isfahan Medical Science University with the average and standard deviations of age, height, and weight of 22.81 (3.04) years, 171.67 (8.51) cm, and 65.05 (13.13) kg, respectively. Sampling was conducted with a randomized block design. Along with controlling for intervening factors, a combination of sound pressure levels [65 dB (A), 85 dB (A), and 95 dB (A)] and exposure times (0, 20, and 40 min) was used for evaluation of the precision and speed of action of the participants in the ergonomic test of two-hand coordination. Data were analyzed with SPSS18 software using descriptive statistics and repeated-measures analysis of covariance (ANCOVA). Results: The results of this study showed that increasing the sound pressure level from 65 to 95 dB on the 'A' weighting network increased the speed of work (P < 0.05). Increases in exposure time (0 to 40 min) and gender showed no statistically significant differences in speed of work (P > 0.05). Male participants were more annoyed by the noise than female participants. Also, an increase in sound pressure level increased the rate of errors (P < 0.05). Conclusions: According to the results of this research, increasing the sound pressure level decreased efficiency and increased errors; on exposure to sounds below 85 dB, efficiency decreased initially and then increased along a mild slope. PMID:23930164

  8. Determination of the Contamination Rate and the Associated Error for Targets Observed by CoRoT in the Exoplanet Channel

    NASA Astrophysics Data System (ADS)

    Gardes, B.; Chabaud, P.-Y.; Guterman, P.

    2012-09-01

    In the CoRoT exoplanet field of view, photometric measurements are obtained by aperture integration using a generic collection of masks. The total flux held within the photometric mask may be split in two parts, the target flux itself and the flux due to the nearest neighbours considered as contaminants. So far ExoDat (http://cesam.oamp.fr/exodat) gives a rough estimate of the contamination rate for all potential exoplanet targets (level-0) based on generic PSF shapes built before CoRoT launch. Here, we present the updated estimate of the contamination rate (level-1) with its associated error. This estimate is done for each target observed by CoRoT in the exoplanet channel using a new catalog of PSF built from the first available flight images and taking into account the line of sight of the satellite (i.e. the satellite orientation).

  9. Lithographically encoded polymer microtaggant using high-capacity and error-correctable QR code for anti-counterfeiting of drugs.

    PubMed

    Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook

    2012-11-20

    A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug.
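
    For readers who want to reproduce the error-correction behavior the taggant relies on, the sketch below uses the third-party Python package qrcode (an assumed tool, not the authors' fabrication method) to generate a level-H QR Code; level H tolerates roughly 30% symbol damage, which is what allows a partially degraded microtaggant to remain readable. The payload string is hypothetical.

        import qrcode  # third-party package: pip install qrcode[pil]

        qr = qrcode.QRCode(
            version=2,                                          # 25x25 modules
            error_correction=qrcode.constants.ERROR_CORRECT_H,  # ~30% damage tolerance
            box_size=4,
            border=2,
        )
        qr.add_data("LOT:AB1234;EXP:2025-01")  # hypothetical on-dose payload
        qr.make(fit=True)
        qr.make_image().save("microtaggant_pattern.png")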

  10. Quantification of in vivo progenitor mutation accrual with ultra-low error rate and minimal input DNA using SIP-HAVA-seq.

    PubMed

    Taylor, Pete H; Cinquin, Amanda; Cinquin, Olivier

    2016-11-01

    Assaying in vivo accrual of DNA damage and DNA mutations by stem cells and pinpointing sources of damage and mutations would further our understanding of aging and carcinogenesis. Two main hurdles must be overcome. First, in vivo mutation rates are orders of magnitude lower than raw sequencing error rates. Second, stem cells are vastly outnumbered by differentiated cells, which have a higher mutation rate; quantification of stem cell DNA damage and DNA mutations is thus best performed from small, well-defined cell populations. Here we report a mutation detection technique, based on the "duplex sequencing" principle, with an error rate below ~10^-10 and that can start from as little as 50 pg DNA. We validate this technique, which we call SIP-HAVA-seq, by characterizing Caenorhabditis elegans germline stem cell mutation accrual and asking how mating affects that accrual. We find that a moderate mating-induced increase in cell cycling correlates with a dramatic increase in accrual of mutations. Intriguingly, these mutations consist chiefly of deletions in nonexpressed genes. This contrasts with results derived from mutation accumulation lines and suggests that mutation spectrum and genome distribution change with replicative age, chronological age, cell differentiation state, and/or overall worm physiological state. We also identify single-stranded gaps as plausible deletion precursors, providing a starting point to identify the molecular mechanisms of mutagenesis that are most active. SIP-HAVA-seq provides the first direct, genome-wide measurements of in vivo mutation accrual in stem cells and will enable further characterization of underlying mechanisms and their dependence on age and cell state.

  11. High frame rate photoacoustic imaging using clinical ultrasound system

    NASA Astrophysics Data System (ADS)

    Sivasubramanian, Kathyayini; Pramanik, Manojit

    2016-03-01

    Photoacoustic tomography (PAT) is a potential hybrid imaging modality which is gaining attention in the field of medical imaging. Typically a Q-switched Nd:YAG laser is used to excite the tissue and generate photoacoustic signals. However, such lasers are not suitable for clinical applications owing to their high cost and large size. Also, their low pulse repetition rate (PRR) of a few tens of hertz prevents them from being used in real-time PAT. So, there is a growing need for an imaging system capable of real-time imaging for various clinical applications. In this work, we are using a nanosecond pulsed laser diode as an excitation source and a clinical ultrasound imaging system to obtain photoacoustic images. The excitation laser is ~803 nm in wavelength with energy of ~1.4 mJ per pulse. So far, the reported frame rate for photoacoustic imaging is only a few hundred hertz. We have demonstrated a frame rate of up to 7000 frames per second in photoacoustic (B-mode) imaging and measured the flow rate of a fast-moving object. Phantom experiments were performed to test the fast imaging capability and measure the flow rate of ink solution inside a tube. This fast photoacoustic imaging can be used for various clinical applications including cardiac-related problems, where the blood flow rate is quite high, or other dynamic studies.

  12. Spaceflight Ka-Band High-Rate Radiation-Hard Modulator

    NASA Technical Reports Server (NTRS)

    Jaso, Jeffery M.

    2011-01-01

    A document discusses the creation of a Ka-band modulator developed specifically for the NASA/GSFC Solar Dynamics Observatory (SDO). This flight design consists of a high-bandwidth, Quadriphase Shift Keying (QPSK) vector modulator with radiation-hardened, high-rate driver circuitry that receives I and Q channel data. The radiation-hard design enables SDO's Ka-band communications downlink system to transmit 130 Mbps (300 Msps after data encoding) of science instrument data to the ground system continuously throughout the mission's minimum life of five years. The low error vector magnitude (EVM) of the modulator lowers the implementation loss of the transmitter in which it is used, thereby increasing the overall communication system link margin. The modulator comprises a component within the SDO transmitter, and meets the following specifications over a 0 to 40 °C operational temperature range: QPSK/OQPSK modulator, 300-Msps symbol rate, 26.5-GHz center frequency, error vector magnitude less than or equal to 10 percent rms, and compliance with the NTIA (National Telecommunications and Information Administration) spectral mask.

  13. High strain rate behavior of pure metals at elevated temperature

    NASA Astrophysics Data System (ADS)

    Testa, Gabriel; Bonora, Nicola; Ruggiero, Andrew; Iannitti, Gianluca; Gentile, Domenico

    2013-06-01

    In many applications and technology processes, such as stamping, forging, and hot working, metals and alloys are subjected to elevated-temperature and high-strain-rate deformation processes. Characterization tests, such as quasistatic and dynamic tension or compression tests, and validation tests, such as Taylor impact and dynamic tensile extrusion (DTE), provide the base of experimental data for constitutive model validation and material parameter identification. Testing materials at high strain rate and temperature requires dedicated equipment. In this work, both a tensile Hopkinson bar and a light gas gun were modified in order to allow material testing under sample-controlled temperature conditions. Dynamic tension tests and Taylor impact tests, at different temperatures, on high-purity copper (99.98%), tungsten (99.95%) and 316L stainless steel were performed. The accuracy of several constitutive models (Johnson and Cook, Zerilli-Armstrong, etc.) in predicting the observed material response was verified by means of extensive finite element analysis (FEA).

  14. Magnetic Implosion for Novel Strength Measurements at High Strain Rates

    SciTech Connect

    Lee, H.; Preston, D.L.; Bartsch, R.R.; Bowers, R.L.; Holtkamp, D.; Wright, B.L.

    1998-10-19

    Recently, Lee and Preston proposed using magnetic implosions as a new method for measuring material strength in a regime of large strains and high strain rates inaccessible to previously established techniques. By its shockless nature, this method avoids the intrinsic difficulties associated with an earlier approach using high explosives. The authors illustrate how the stress-strain relation for an imploding liner can be obtained by measuring the velocity and temperature history of its inner surface. They discuss the physical requirements that led them to a composite liner design applicable to different test materials, and also compare the code-simulated prediction with the measured data for the high strain-rate experiments conducted recently at LANL. Finally, they present a novel diagnostic scheme that will enable them to remove the background in the pyrometric measurement through data reduction.

  15. High repetition rate plasma mirror device for attosecond science

    SciTech Connect

    Borot, A.; Douillet, D.; Iaquaniello, G.; Lefrou, T.; Lopez-Martens, R.; Audebert, P.; Geindre, J.-P.

    2014-01-15

    This report describes an active solid target positioning device for driving plasma mirrors with high repetition rate ultra-high intensity lasers. The position of the solid target surface with respect to the laser focus is optically monitored and mechanically controlled on the nm scale to ensure reproducible interaction conditions for each shot at arbitrary repetition rate. We demonstrate the target capabilities by driving high-order harmonic generation from plasma mirrors produced on glass targets with a near-relativistic intensity few-cycle pulse laser system operating at 1 kHz. During experiments, residual target surface motion can be actively stabilized down to 47 nm (root mean square), which ensures sub-300-as relative temporal stability of the plasma mirror as a secondary source of coherent attosecond extreme ultraviolet radiation in pump-probe experiments.

  16. Systematic Uncertainties in High-Rate Germanium Data

    SciTech Connect

    Gilbert, Andrew J.; Fast, James E.; Fulsom, Bryan G.; Pitts, William K.; VanDevender, Brent A.; Wood, Lynn S.

    2016-10-06

    For many nuclear material safeguards inspections, spectroscopic gamma detectors are required which can achieve high event rates (in excess of 10^6 s^-1) while maintaining very good energy resolution for discrimination of neighboring gamma signatures in complex backgrounds. Such spectra can be useful for non-destructive assay (NDA) of spent nuclear fuel with long cooling times, which contains many potentially useful low-rate gamma lines, e.g., Cs-134, in the presence of a few dominating gamma lines, such as Cs-137. Detectors in use typically sacrifice energy resolution for count rate, e.g., LaBr3, or vice versa, e.g., CdZnTe. In contrast, we anticipate that beginning with a detector with high energy resolution, e.g., high-purity germanium (HPGe), and adapting the data acquisition for high throughput can achieve the goals of the ideal detector. In this work, we present quantification of Cs-134 and Cs-137 activities, useful for fuel burn-up quantification, in fuel that has been cooling for 22.3 years. A segmented, planar HPGe detector is used for this inspection, which has been adapted for a high-rate throughput in excess of 500k counts/s. Using a very-high-statistic spectrum of 2.4×10^11 counts, isotope activities can be determined with very low statistical uncertainty. However, it is determined that systematic uncertainties dominate in such a data set, e.g., the uncertainty in the pulse line shape. This spectrum offers a unique opportunity to quantify this uncertainty and subsequently determine required counting times for given precision on values of interest.

  17. The error-related negativity relates to sadness following mood induction among individuals with high neuroticism

    PubMed Central

    Hajcak, Greg

    2012-01-01

    The error-related negativity (ERN) is an event-related potential (ERP) that indexes error monitoring. Research suggests that the ERN is increased in internalizing disorders, such as depression and anxiety. Although studies indicate that the ERN is insensitive to state-related fluctuations in anxiety, few studies have carefully examined the effect of state-related changes in sadness on the ERN. In the current study, we sought to determine whether the ERN would be altered by a sad mood induction using a between-subjects design. Additionally, we explored if this relationship would be moderated by individual differences in neuroticism—a personality trait related to both anxiety and depression. Forty-seven undergraduate participants were randomly assigned to either a sad or neutral mood induction prior to performing an arrow version of the flanker task. Participants reported greater sadness following the sad than neutral mood induction; there were no significant group differences on behavioral or ERP measures. Across the entire sample, however, participants with a larger increase in sad mood from baseline to post-induction had a larger (i.e. more negative) ERN. Furthermore, this effect was larger among individuals reporting higher neuroticism. These data indicate that neuroticism moderates the relationship between the ERN and changes in sad mood. PMID:21382967

  18. High rate constitutive modeling of aluminium alloy tube

    NASA Astrophysics Data System (ADS)

    Salisbury, C. P.; Worswick, M. J.; Mayer, R.

    2006-08-01

    As the need for fuel-efficient automobiles increases, car designers are investigating lightweight materials for automotive bodies that will reduce the overall automobile weight. Aluminium alloy tube is a desirable material to use in automotive bodies due to its light weight. However, aluminium suffers from lower formability than steel and its energy absorption ability in a crash event after a forming operation is largely unknown. As part of a larger study on the relationship between crashworthiness and forming processes, constitutive models for 3 mm AA5754 aluminium tube were developed. A nominal strain rate of 100/s is often used to characterize overall automobile crash events, whereas strain rates on the order of 1000/s can occur locally. Therefore, tests were performed at quasi-static rates using an Instron test fixture and at strain rates of 500/s to 1500/s using a tensile split Hopkinson bar. High rate testing was then conducted at rates of 500/s, 1000/s and 1500/s at 21 °C, 150 °C, and 300 °C. The generated data were then used to determine the constitutive parameters for the Johnson-Cook and Zerilli-Armstrong material models.
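
    For context on the fitting step mentioned above, the Johnson-Cook flow stress has the closed form sigma = (A + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m), with T* the homologous temperature. The sketch below evaluates it in Python; every parameter value is a placeholder for illustration, not one of the fitted AA5754 constants from this study.

        import math

        def johnson_cook_stress(eps_p, eps_rate, T,
                                A=100e6, B=300e6, n=0.3, C=0.015, m=1.0,
                                eps_rate_ref=1.0, T_room=294.0, T_melt=900.0):
            # Johnson-Cook flow stress (Pa); parameter values are placeholders,
            # not fitted AA5754 constants.
            T_star = (T - T_room) / (T_melt - T_room)
            strain_term = A + B * eps_p ** n
            rate_term = 1.0 + C * math.log(eps_rate / eps_rate_ref)
            thermal_term = 1.0 - T_star ** m
            return strain_term * rate_term * thermal_term

        # Flow stress at 10% plastic strain, 1000/s, 150 C (423 K)
        print(johnson_cook_stress(0.10, 1000.0, 423.0) / 1e6, "MPa")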

  19. Calibration of the straightness and orthogonality error of a laser feedback high-precision stage using self-calibration methods

    NASA Astrophysics Data System (ADS)

    Kim, Dongmin; Kim, Kihyun; Park, Sang Hyun; Jang, Sangdon

    2014-12-01

    An ultra high-precision 3-DOF air-bearing stage is developed and calibrated in this study. The stage was developed for the transportation of a glass or wafer with x and y following errors in the nanometer regime. To apply the proposed stage to display or semiconductor fabrication equipment, x and y straightness errors should be at the sub-micron level and the x-y orthogonality error should be in the region of several arcseconds with strokes of several hundreds of mm. Our system was designed to move a 400 mm stroke on the x axis and a 700 mm stroke on the y axis. To do this, 1000 mm and 550 mm bar-type mirrors were adopted for real time Δx and Δy laser measurements and feedback control. In this system, with the laser wavelength variation and instability being kept to a minimum through environmental control, the straightness and orthogonality become purely dependent upon the surface shape of the bar mirrors. Compensation for the distortion of the bar mirrors is accomplished using a self-calibration method. The successful application of the method nearly eliminated the straightness and orthogonality errors of the stage, allowing their specifications to be fully satisfied. As a result, the straightness and orthogonality errors of the stage were successfully decreased from 4.4 μm to 0.8 μm and from 0.04° to 2.48 arcsec, respectively.

  20. High strain-rate model for fiber-reinforced composites

    SciTech Connect

    Aidun, J.B.; Addessio, F.L.

    1995-07-01

    Numerical simulations of dynamic uniaxial strain loading of fiber-reinforced composites are presented that illustrate the wide range of deformation mechanisms that can be captured using a micromechanics-based homogenization technique as the material model in existing continuum mechanics computer programs. Enhancements to the material model incorporate high strain-rate plastic response, elastic nonlinearity, and rate-dependent strength degradation due to material damage, fiber debonding, and delamination. These make the model relevant to designing composite structural components for crash safety, armor, and munitions applications.

  1. Demonstration of a high repetition rate capillary discharge waveguide

    SciTech Connect

    Gonsalves, A. J.; Pieronek, C.; Daniels, J.; Bulanov, S. S.; Waldron, W. L.; Mittelberger, D. E.; Leemans, W. P.; Liu, F.; Antipov, S.; Butler, J. E.; Bobrova, N. A.; Sasorov, P. V.

    2016-01-21

    A hydrogen-filled capillary discharge waveguide operating at kHz repetition rates is presented for parameters relevant to laser plasma acceleration (LPA). The discharge current pulse was optimized for erosion mitigation with laser guiding experiments and MHD simulation. Heat flow simulations and measurements showed modest temperature rise at the capillary wall due to the average heat load at kHz repetition rates with water-cooled capillaries, which is promising for applications of LPAs such as high average power radiation sources.

  2. Highly Challenging Balance Program Reduces Fall Rate in Parkinson Disease

    PubMed Central

    Sparrow, David; DeAngelis, Tamara R.; Hendron, Kathryn; Thomas, Cathi A.; Saint-Hilaire, Marie; Ellis, Terry

    2015-01-01

    Background and Purpose: There is a paucity of effective treatment options to reduce falls in Parkinson’s disease (PD). Although a variety of rehabilitative approaches have been shown to improve balance, evidence of a reduction in falls has been mixed. Prior balance trials suggest that programs with highly challenging exercises had superior outcomes. We investigated the effects of a theoretically driven, progressive, highly challenging group exercise program on fall rate, balance, and fear of falling. Methods: Twenty-three subjects with PD participated in this randomized cross-over trial. Subjects were randomly allocated to 3 months of active balance exercises or usual care followed by the reverse. During the active condition, subjects participated in a progressive, highly challenging group exercise program twice weekly for 90 minutes. Outcomes included a change in fall rate over the 3-month active period and differences in balance (Mini-BESTest) and fear of falling (Falls Efficacy Scale-International (FES-I)) between active and usual care conditions. Results: The effect of time on falls was significant (regression coefficient = −0.015 per day, p<0.001). The estimated rate ratio comparing incidence rates at time points one month apart was 0.632 (95% CI 0.524 to 0.763). Thus, there was an estimated 37% decline in fall rate per month (95% CI 24% to 48%). Improvements were also observed on the Mini-BESTest (p=0.037) and FES-I (p=0.059). Discussion and Conclusions: The results of this study show that a theoretically based, highly challenging, and progressive exercise program was effective in reducing falls, improving balance, and reducing fear of falling in PD. PMID:26655100

  3. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate ε < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
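
    To make the cascading argument concrete, the sketch below computes decoder failure probabilities over a binary symmetric channel for a generic bounded-distance decoder, treating each inner-decoder failure as a symbol error seen by the outer code. The (n, t) values are illustrative placeholders rather than the specific schemes analyzed in the paper, and independence of symbol errors is assumed.

        from math import comb

        def block_error_prob(n, t, eps):
            # Probability that more than t of n symbols are in error on a
            # memoryless channel with symbol error rate eps (decoder failure).
            return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
                       for i in range(t + 1, n + 1))

        eps = 1e-2                                   # raw channel bit error rate
        p_inner = block_error_prob(63, 5, eps)       # inner code corrects 5 errors
        p_outer = block_error_prob(255, 8, p_inner)  # outer code corrects 8 symbol errors
        print(f"inner-decoder failure: {p_inner:.2e}")
        print(f"after outer decoding:  {p_outer:.2e}")

    Even with a raw bit error rate of 10^-2, the cascaded failure probability falls many orders of magnitude below the inner-code failure rate alone, which is the qualitative point the abstract makes.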

  4. Palaeohistological Evidence for Ancestral High Metabolic Rate in Archosaurs.

    PubMed

    Legendre, Lucas J; Guénard, Guillaume; Botha-Brink, Jennifer; Cubo, Jorge

    2016-11-01

    Metabolic heat production in archosaurs has played an important role in their evolutionary radiation during the Mesozoic, and their ancestral metabolic condition has long been a matter of debate in systematics and palaeontology. The study of fossil bone histology provides crucial information on bone growth rate, which has been used to indirectly investigate the evolution of thermometabolism in archosaurs. However, no quantitative estimation of metabolic rate has ever been performed on fossils using bone histological features. Moreover, to date, no inference model has included phylogenetic information in the form of predictive variables. Here we performed statistical predictive modeling using the new method of phylogenetic eigenvector maps on a set of bone histological features for a sample of extant and extinct vertebrates, to estimate metabolic rates of fossil archosauromorphs. This modeling procedure serves as a case study for eigenvector-based predictive modeling in a phylogenetic context, as well as an investigation of the poorly known evolutionary patterns of metabolic rate in archosaurs. Our results show that Mesozoic theropod dinosaurs exhibit metabolic rates very close to those found in modern birds, that archosaurs share a higher ancestral metabolic rate than that of extant ectotherms, and that this derived high metabolic rate was acquired at a much more inclusive level of the phylogenetic tree, among non-archosaurian archosauromorphs. These results also highlight the difficulties of assigning a given heat production strategy (i.e., endothermy, ectothermy) to an estimated metabolic rate value, and confirm findings of previous studies that the definition of the endotherm/ectotherm dichotomy may be ambiguous.

  5. Hispanic High School Graduates Pass Whites in Rate of College Enrollment: High School Drop-out Rate at Record Low

    ERIC Educational Resources Information Center

    Fry, Richard; Taylor, Paul

    2013-01-01

    A record seven-in-ten (69%) Hispanic high school graduates in the class of 2012 enrolled in college that fall, two percentage points higher than the rate (67%) among their white counterparts, according to a Pew Research Center analysis of new data from the U.S. Census Bureau. This milestone is the result of a long-term increase in Hispanic…

  6. Vitreous bond CBN high speed and high material removal rate grinding of ceramics

    SciTech Connect

    Shih, A.J.; Grant, M.B.; Yonushonis, T.M.; Morris, T.O.; McSpadden, S.B.

    1998-08-01

    High speed (up to 127 m/s) and high material removal rate (up to 10 mm³/s/mm) grinding experiments using a vitreous bond CBN wheel were conducted to investigate the effects of material removal rate, wheel speed, dwell time and truing speed ratio on cylindrical grinding of silicon nitride and zirconia. Experimental results show that the high grinding wheel surface speed can reduce the effective chip thickness, lower grinding forces, enable high material removal rate grinding and achieve a higher G-ratio. The radial feed rate was increased to as high as 0.34 µm/s for zirconia and 0.25 µm/s for silicon nitride grinding to explore the advantage of using high wheel speed for cost-effective high material removal rate grinding of ceramics.

  7. A high-rate PCI-based telemetry processor system

    NASA Astrophysics Data System (ADS)

    Turri, R.

    2002-07-01

    The high performance reached by satellite on-board telemetry generation and transmission will consequently require ground facilities with higher processing capabilities at low cost, to allow wide deployment of such ground stations. The equipment normally used is based on complex, proprietary bus and computing architectures that prevent the systems from exploiting the continuous and rapid increase in computing power available on the market. PCI bus systems now allow processing of high-rate data streams in a standard PC system. At the same time, the Windows NT operating system supports multitasking and symmetric multiprocessing, giving the capability to process high-data-rate signals. In addition, high-speed networking, 64-bit PCI-bus technologies, and the increase in processor power and software allow the creation of a system based on COTS products (which in the future may be easily and inexpensively upgraded). In the frame of the EUCLID RTP 9.8 project, a specific work element was dedicated to developing the architecture of a system able to acquire telemetry data at up to 600 Mbps. Laben S.p.A. (a Finmeccanica company), entrusted with this work, has designed a PCI-based telemetry system making possible communication between a satellite downlink and a wide area network at the required rate.

  8. A software control system for the ACTS high-burst-rate link evaluation terminal

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Daugherty, Elaine S.

    1991-01-01

    Control and performance monitoring of NASA's High Burst Rate Link Evaluation Terminal (HBR-LET) is accomplished by using several software control modules. Different software modules are responsible for controlling remote radio frequency (RF) instrumentation, supporting communication between a host and a remote computer, controlling the output power of the Link Evaluation Terminal and data display. Remote commanding of microwave RF instrumentation and the LET digital ground terminal allows computer control of various experiments, including bit error rate measurements. Computer communication allows system operators to transmit and receive from the Advanced Communications Technology Satellite (ACTS). Finally, the output power control software dynamically controls the uplink output power of the terminal to compensate for signal loss due to rain fade. Included is a discussion of each software module and its applications.

  9. CW Interference Effects on High Data Rate Transmission Through the ACTS Wideband Channel

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Ngo, Duc H.; Tran, Quang K.; Tran, Diepchi T.; Yu, John; Kachmar, Brian A.; Svoboda, James S.

    1996-01-01

    Satellite communications channels are susceptible to various sources of interference. Wideband channels have a proportionally greater probability of receiving interference than narrowband channels. NASA's Advanced Communications Technology Satellite (ACTS) includes a 900 MHz bandwidth hardlimiting transponder which has provided an opportunity for the study of interference effects of wideband channels. A series of interference tests using two independent ACTS ground terminals measured the effects of continuous-wave (CW) uplink interference on the bit-error rate of a 220 Mbps digitally modulated carrier. These results indicate the susceptibility of high data rate transmissions to CW interference and are compared to results obtained with a laboratory hardware-based system simulation and a computer simulation.

  10. High Pressure Burn Rate Measurements on an Ammonium Perchlorate Propellant

    SciTech Connect

    Glascoe, E A; Tan, N

    2010-04-21

    High pressure deflagration rate measurements of a unique ammonium perchlorate (AP) based propellant are required to design the base burn motor for a Raytheon weapon system. The results of these deflagration rate measurements will be key in assessing safety and performance of the system. In particular, the system may experience transient pressures on the order of hundreds of MPa (tens of kPSI). Previous studies on similar AP based materials demonstrate that low pressure (e.g. P < 10 MPa or 1500 PSI) burn rates can be quite different from the elevated-pressure deflagration rate measurements (see References and HPP results discussed herein), hence elevated pressure measurements are necessary in order to understand the deflagration behavior under relevant conditions. Previous work on explosives has shown that at hundreds of MPa some explosives will transition from a laminar burn mechanism to a convective burn mechanism in a process termed deconsolidative burning. The resulting burn rates are orders of magnitude faster than the laminar burn rates. Materials that transition to the deconsolidative-convective burn mechanism at elevated pressures have been shown to be considerably more violent in confined heating experiments (i.e. cook-off scenarios). The mechanisms of propellant and explosive deflagration are extremely complex and include both chemical and mechanical processes, hence predicting the behavior and rate of a novel material or formulation is difficult if not impossible. In this work, the AP/HTPB based material, TAL-1503 (B-2049), was burned in a constant volume apparatus in argon up to 300 MPa (ca. 44 kPSI). The burn rate and pressure were measured in-situ and used to calculate a pressure dependent burn rate. In general, the material appears to burn in a laminar fashion at these elevated pressures. The experiment was reproduced multiple times and the burn rate law using the best data is B = (0.6 ± 0.1) × P^(1.05 ± 0.02), where B is the burn rate in mm/s and
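
    The reported power-law fit is straightforward to evaluate numerically; the short sketch below does so in Python using the central values a = 0.6 and n = 1.05, and assumes the pressure is expressed in MPa, which the truncated abstract does not confirm.

        def burn_rate(P, a=0.6, n=1.05):
            # Pressure-dependent burn rate B = a * P**n (mm/s).
            # Coefficients from the abstract: a = 0.6 +/- 0.1, n = 1.05 +/- 0.02.
            # P assumed to be in MPa; the truncated abstract does not state units.
            return a * P ** n

        for P in (10, 100, 300):  # MPa, up to the ~300 MPa tested
            print(f"P = {P:5.0f} MPa -> B = {burn_rate(P):7.1f} mm/s")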

  11. Modeling Large-Strain, High-Rate Deformation in Metals

    SciTech Connect

    Lesuer, D R; Kay, G J; LeBlanc, M M

    2001-07-20

    The large strain deformation response of 6061-T6 and Ti-6Al-4V has been evaluated over a range of strain rates from 10^-4 s^-1 to over 10^4 s^-1. The results have been used to critically evaluate the strength and damage components of the Johnson-Cook (JC) material model. A new model that addresses the shortcomings of the JC model was then developed and evaluated. The model is derived from the rate equations that represent deformation mechanisms active during moderate and high rate loading. Another model that accounts for the influence of void formation on yield and flow behavior of a ductile metal (the Gurson model) was also evaluated. The characteristics and predictive capabilities of these models are reviewed.

  12. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
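
    The sketch below illustrates the general idea of keyed-permutation low-order-bit embedding in Python. It is a generic LSB embed/extract cycle, not the patented modular-error method itself, and the sample values and key are made up.

        import random

        def embed_lsb(host, bits, key):
            # Embed auxiliary bits into the LSBs of host samples, visiting
            # samples in a key-dependent permuted order (generic LSB scheme,
            # not the patented modular-error method).
            out = list(host)
            order = list(range(len(out)))
            random.Random(key).shuffle(order)      # keyed permutation
            for bit, idx in zip(bits, order):
                out[idx] = (out[idx] & ~1) | bit   # replace low-order bit
            return out

        def extract_lsb(stego, n_bits, key):
            order = list(range(len(stego)))
            random.Random(key).shuffle(order)      # same keyed permutation
            return [stego[idx] & 1 for idx in order[:n_bits]]

        host = [137, 52, 201, 88, 176, 43, 99, 240]  # e.g. 8-bit image samples
        bits = [1, 0, 1, 1]
        stego = embed_lsb(host, bits, key=42)
        assert extract_lsb(stego, len(bits), key=42) == bits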

  13. High-Resolution Multi-Shot Spiral Diffusion Tensor Imaging with Inherent Correction of Motion-Induced Phase Errors

    PubMed Central

    Truong, Trong-Kha; Guidon, Arnaud

    2014-01-01

    Purpose To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457

  14. Dynamic High-Temperature Characterization of an Iridium Alloy in Compression at High Strain Rates

    SciTech Connect

    Song, Bo; Nelson, Kevin; Lipinski, Ronald J.; Bignell, John L.; Ulrich, G. B.; George, E. P.

    2014-06-01

    Iridium alloys have superior strength and ductility at elevated temperatures, making them useful as structural materials for certain high-temperature applications. However, experimental data on their high-temperature high-strain-rate performance are needed for understanding high-speed impacts in severe elevated-temperature environments. Kolsky bars (also called split Hopkinson bars) have been extensively employed for high-strain-rate characterization of materials at room temperature, but it has been challenging to adapt them for the measurement of dynamic properties at high temperatures. Current high-temperature Kolsky compression bar techniques are not capable of obtaining satisfactory high-temperature high-strain-rate stress-strain response of the thin iridium specimens investigated in this study. We analyzed the difficulties encountered in high-temperature Kolsky compression bar testing of thin iridium alloy specimens. Appropriate modifications were made to the current high-temperature Kolsky compression bar technique to obtain reliable compressive stress-strain response of an iridium alloy at high strain rates (300-10,000 s^-1) and temperatures (750 °C and 1030 °C). Uncertainties in such high-temperature high-strain-rate experiments on thin iridium specimens were also analyzed. The compressive stress-strain response of the iridium alloy showed significant sensitivity to strain rate and temperature.

  15. Metasurface-based broadband hologram with high tolerance to fabrication errors

    PubMed Central

    Zhang, Xiaohu; Jin, Jinjin; Wang, Yanqin; Pu, Mingbo; Li, Xiong; Zhao, Zeyu; Gao, Ping; Wang, Changtao; Luo, Xiangang

    2016-01-01

    With new degrees of freedom to achieve full control of the optical wavefront, metasurfaces could overcome the fabrication difficulties faced by metamaterials. In this paper, a broadband hologram using a metasurface consisting of an array of elongated nanoapertures with different orientations has been experimentally demonstrated. Owing to the broadband characteristic of the polarization-dependent scattering, the performance is verified at working wavelengths ranging from 405 nm to 914 nm. Furthermore, the tolerance to fabrication errors, which include the length and width of the elongated aperture, shape deformation, and phase noise, has been theoretically investigated to be as large as 10% relative to the original hologram. We believe the method proposed here is promising in emerging applications such as holographic display, optical information processing, and lithography technology. PMID:26818130

  16. Statistical Approach to Decreasing the Error Rate of Noninvasive Prenatal Aneuploid Detection caused by Maternal Copy Number Variation.

    PubMed

    Zhang, Han; Zhao, Yang-Yu; Song, Jing; Zhu, Qi-Ying; Yang, Hua; Zheng, Mei-Ling; Xuan, Zhao-Ling; Wei, Yuan; Chen, Yang; Yuan, Peng-Bo; Yu, Yang; Li, Da-Wei; Liang, Jun-Bin; Fan, Ling; Chen, Chong-Jian; Qiao, Jie

    2015-11-04

    Analyses of cell-free fetal DNA (cff-DNA) from maternal plasma using massively parallel sequencing enable the noninvasive detection of feto-placental chromosome aneuploidy; this technique has been widely used in clinics worldwide. Noninvasive prenatal tests (NIPT) based on cff-DNA have achieved very high accuracy; however, they suffer from maternal copy-number variations (CNV) that may cause false positives and false negatives. In this study, we developed an algorithm to exclude the effect of maternal CNV and refined the Z-score that is used to determine fetal aneuploidy. The simulation results showed that the algorithm is robust against variations of fetal concentration and maternal CNV size. We also introduced a method based on the discrepancy between feto-placental concentrations to help reduce the false-positive ratio. A total of 6615 pregnant women were enrolled in a prospective study to validate the accuracy of our method. All 106 fetuses with T21, 20 with T18, and three with T13 were tested using our method, with sensitivity of 100% and specificity of 99.97%. In the results, two cases with maternal duplications in chromosome 21, which were falsely predicted as T21 by the previous NIPT method, were correctly classified as normal by our algorithm, which demonstrated the effectiveness of our approach.
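
    As background, the baseline statistic that the study refines is the standard NIPT Z-score, a sketch of which appears below; the reference means and standard deviations are illustrative, and the paper's actual refinement additionally masks maternal CNV regions before the fraction is computed.

        import numpy as np

        def chr21_zscore(sample_frac, ref_fracs):
            # Standard NIPT Z-score: compare a sample's chromosome-21 read
            # fraction with a reference set of euploid pregnancies.
            # (Baseline statistic only, not the paper's CNV-corrected version.)
            mu = np.mean(ref_fracs)
            sd = np.std(ref_fracs, ddof=1)
            return (sample_frac - mu) / sd

        # Illustrative chr21 read fractions from euploid reference pregnancies
        ref = np.random.default_rng(0).normal(0.0130, 0.0002, size=100)
        print(chr21_zscore(0.0141, ref))  # Z > 3 would flag a T21 candidate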

  17. High rates of evolution preceded the origin of birds.

    PubMed

    Puttick, Mark N; Thomas, Gavin H; Benton, Michael J

    2014-05-01

    The origin of birds (Aves) is one of the great evolutionary transitions. Fossils show that many unique morphological features of modern birds, such as feathers, reduction in body size, and the semilunate carpal, long preceded the origin of clade Aves, but some may be unique to Aves, such as relative elongation of the forelimb. We study the evolution of body size and forelimb length across the phylogeny of coelurosaurian theropods and Mesozoic Aves. Using recently developed phylogenetic comparative methods, we find an increase in rates of body size and body size dependent forelimb evolution leading to small body size relative to forelimb length in Paraves, the wider clade comprising Aves and Deinonychosauria. The high evolutionary rates arose primarily from a reduction in body size, as there were no increased rates of forelimb evolution. In line with a recent study, we find evidence that Aves appear to have a unique relationship between body size and forelimb dimensions. Traits associated with Aves evolved before their origin, at high rates, and support the notion that numerous lineages of paravians were experimenting with different modes of flight through the Late Jurassic and Early Cretaceous.

  18. HIGH RATES OF EVOLUTION PRECEDED THE ORIGIN OF BIRDS

    PubMed Central

    Puttick, Mark N; Thomas, Gavin H; Benton, Michael J; Polly, P David

    2014-01-01

    The origin of birds (Aves) is one of the great evolutionary transitions. Fossils show that many unique morphological features of modern birds, such as feathers, reduction in body size, and the semilunate carpal, long preceded the origin of clade Aves, but some may be unique to Aves, such as relative elongation of the forelimb. We study the evolution of body size and forelimb length across the phylogeny of coelurosaurian theropods and Mesozoic Aves. Using recently developed phylogenetic comparative methods, we find an increase in rates of body size and body size dependent forelimb evolution leading to small body size relative to forelimb length in Paraves, the wider clade comprising Aves and Deinonychosauria. The high evolutionary rates arose primarily from a reduction in body size, as there were no increased rates of forelimb evolution. In line with a recent study, we find evidence that Aves appear to have a unique relationship between body size and forelimb dimensions. Traits associated with Aves evolved before their origin, at high rates, and support the notion that numerous lineages of paravians were experimenting with different modes of flight through the Late Jurassic and Early Cretaceous. PMID:24471891

  19. Investigation of high-rate lithium-thionyl chloride cells

    NASA Astrophysics Data System (ADS)

    Hayes, Catherine A.; Gust, Steven; Farrington, Michael D.; Lockwood, Judith A.; Donaldson, George J.

    Chemical analysis of a commercially produced high-rate D-size lithium-thionyl chloride cell was carried out, as a function of rate of discharge (1 ohm and 5 ohms), depth of discharge, and temperature (25 °C and -40 °C), using specially developed methods for identifying suspected minor cell products or impurities which may affect cell performance. These methods include a product-retrieval system which involves solvent extraction to enhance the recovery of suspected semivolatile minor chemicals, and methods of quantitative GC analysis of volatile and semivolatile products. The nonvolatile products were analyzed by wet chemical methods. The results of the analyses indicate that the predominant discharge reaction in this cell is 4Li + 2SOCl2 → 4LiCl + S + SO2, with SO2 formation decreasing towards the end of cell life (7 to 12 Ah). The rate of discharge had no effect on the product distribution. Upon discharge of the high-rate cell at -40 °C, one cell exploded, and all others exhibited overheating and rapid internal pressure rise when allowed to warm up to room temperature.

  20. Small cryptopredators contribute to high predation rates on coral reefs

    NASA Astrophysics Data System (ADS)

    Goatley, Christopher H. R.; González-Cabello, Alonso; Bellwood, David R.

    2017-03-01

    Small fishes suffer high mortality rates on coral reefs, primarily due to predation. Although studies have identified the predators of early post-settlement fishes, the predators of small cryptobenthic fishes remain largely unknown. We therefore used a series of mesocosm experiments with natural habitat and cryptobenthic fish communities to identify the impacts of a range of small potential predators, including several invertebrates, on prey fish populations. While there was high variability in predation rates, many members of the cryptobenthic fish community act as facultative cryptopredators, being prey when small and piscivores when larger. Surprisingly, we also found that smashing mantis shrimps may be important fish predators. Our results highlight the diversity of the predatory community on coral reefs and identify previously unknown trophic links in these complex ecosystems.

  1. Failure Rate Data Analysis for High Technology Components

    SciTech Connect

    L. C. Cadwallader

    2007-07-01

    Understanding component reliability helps designers create more robust future designs and supports efficient and cost-effective operations of existing machines. The accelerator community can leverage the commonality of its high-vacuum and high-power systems with those of the magnetic fusion community to gain access to a larger database of reliability data. Reliability studies performed under the auspices of the International Energy Agency are the result of an international working group, which has generated a component failure rate database for fusion experiment components. The initial database work harvested published data and now analyzes operating experience data. This paper discusses the usefulness of reliability data, describes the failure rate data collection and analysis effort, discusses reliability for components with scarce data, and points out some of the intersections between magnetic fusion experiments and accelerators.

  2. Elastoplastic behavior of copper upon high-strain-rate deformation

    NASA Astrophysics Data System (ADS)

    Chembarisova, R. G.

    2015-06-01

    The deformation behavior of copper under conditions of high-strain-rate deformation has been investigated based on the model of elastoplastic medium with allowance for the kinetics of plastic deformation. Data have been obtained on the evolution of the dislocation subsystem, namely, on the average dislocation density, density of mobile dislocations, velocity of dislocation slip, concentration of deformation-induced vacancies, and density of twins. The coefficient of the annihilation of screw dislocations has been estimated depending on pressure and temperature. It has been shown that severe shear stresses that arise upon high-strain-rate deformation can lead to a significant increase in the concentration of vacancies. The time of the dislocation annihilation upon their nonconservative motion has been estimated. It has been shown that this time is much greater than the time of the deformation process in the samples, which makes it possible to exclude the annihilation of dislocations upon their nonconservative motion from the active mechanisms of deformation.

  3. High-rate diamond deposition by microwave plasma CVD

    NASA Astrophysics Data System (ADS)

    Li, Xianglin

    In this dissertation, the growth of CVD (Chemical Vapor Deposition) diamond thin films is studied both theoretically and experimentally. The goal of this research is to deposit high quality HOD (Highly Oriented Diamond) films with a growth rate greater than 1 µm/hr. For the (100)-oriented HOD films, the growth rate achieved by the traditional process is only 0.3 µm/hr while the theoretical limit is ~0.45 µm/hr. This research increases the growth rate up to 5.3 µm/hr (with a theoretical limit of ~7 µm/hr) while preserving the crystal quality. This work builds a connection between the theoretical study of the CVD process and the experimental research. The study is extended from the growth of regular polycrystalline diamond to highly oriented diamond (HOD) films. For the increase of the growth rate of regular polycrystalline diamond thin films, a scaling growth model developed by Goodwin is introduced in detail to assist in the understanding of the MPCVD (Microwave Plasma CVD) process. Within Goodwin's scaling model, there are only four important sub-processes for the growth of diamond: surface modification, adsorption, desorption, and incorporation. The factors determining the diamond growth rate and film quality are discussed following the description of the experimental setup and process parameters. Growth rate and crystal quality models are reviewed to predict and understand the experimental results. It is shown that the growth rate of diamond can be increased with methane input concentration and the amount of atomic hydrogen (by changing the total pressure). It is crucial to provide enough atomic hydrogen to conserve crystal quality of the deposited diamond film. The experimental results demonstrate that for a fixed methane concentration, there is a minimum pressure for growth of good diamond. Similarly, for a fixed total pressure, there is a maximum methane concentration for growth of good diamond, and this maximum methane concentration increases

  4. Adjunct payload for ISS high-rate communications

    NASA Astrophysics Data System (ADS)

    Mitchell, W. Carl; Cleave, Robert; Ford, David

    1999-01-01

    An adjunct payload on commercial geosynchronous satellites is developed for ISS and similar high-rate communications. The technical parameters of this payload are set forth and bounds on user fees are established. Depending on the financial arrangements (e.g., development funds, a long-term lease agreement, other value offered, commercial subscriptions), the adjunct payload can be a viable option for ISS communications service.

  5. Data Feature Extraction for High-Rate 3-Phase Data

    SciTech Connect

    2016-10-18

    This algorithm processes high-rate 3-phase signals to identify the start time of each signal and estimate its envelope as data features. The start time and magnitude of each signal during the steady state are also extracted. The features can be used to detect abnormal signals. This algorithm is developed to analyze Exxeno's 3-phase voltage and current data recorded from refrigeration systems to detect device failure or degradation.
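
    A generic version of the two features named above (start time and envelope) can be sketched with a threshold detector on a Hilbert-transform envelope, as below; the test signal, sampling rate, and threshold are illustrative, and the actual algorithm's internals are not described in this summary.

        import numpy as np
        from scipy.signal import hilbert

        def extract_features(signal, fs, threshold):
            # Estimate signal start time (first threshold crossing of the
            # envelope) and the amplitude envelope itself (generic sketch).
            envelope = np.abs(hilbert(signal))     # analytic-signal envelope
            above = np.nonzero(envelope > threshold)[0]
            start_time = above[0] / fs if above.size else None
            return start_time, envelope

        # Illustrative single-phase test signal: 60 Hz tone starting at t = 0.1 s
        fs = 10_000
        t = np.arange(0, 0.5, 1 / fs)
        sig = np.where(t >= 0.1, np.sin(2 * np.pi * 60 * t), 0.0)
        start, env = extract_features(sig, fs, threshold=0.5)
        print(f"estimated start: {start:.4f} s")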

  6. Semi-solid electrodes having high rate capability

    DOEpatents

    Chiang, Yet-Ming; Duduta, Mihai; Holman, Richard; Limthongkul, Pimpa; Tan, Taison

    2016-06-07

    Embodiments described herein relate generally to electrochemical cells having high rate capability, and more particularly to devices, systems and methods of producing high capacity and high rate capability batteries having relatively thick semi-solid electrodes. In some embodiments, an electrochemical cell includes an anode and a semi-solid cathode. The semi-solid cathode includes a suspension of about 35% to about 75% by volume of an active material and about 0.5% to about 8% by volume of a conductive material in a non-aqueous liquid electrolyte. An ion-permeable membrane is disposed between the anode and the semi-solid cathode. The semi-solid cathode has a thickness of about 250 µm to about 2,000 µm, and the electrochemical cell has an area specific capacity of at least about 7 mAh/cm² at a C-rate of C/4. In some embodiments, the semi-solid cathode slurry has a mixing index of at least about 0.9.

  7. Dissociation rate of cognate peptidyl-tRNA from the A-site of hyper-accurate and error-prone ribosomes.

    PubMed

    Karimi, R; Ehrenberg, M

    1994-12-01

    The binding stability of AcPhe-Phe-tRNA(Phe) in the aminoacyl-tRNA site (A-site), estimated from the dissociation rate constant kd, has been studied for wild-type (wt) ribosomes, for hyperaccurate ribosomes altered in S12 [streptomycin-dependent (SmD) and streptomycin-pseudodependent (SmP) phenotypes], for error-prone ribosomes altered in S4 (Ram phenotype), and for ribosomes in complex with the error-inducing aminoglycosides streptomycin and neomycin. The AcPhe2-tRNA stability is slightly and identically reduced for SmD and SmP phenotypes in relation to wt ribosomes. The stability is increased (kd is reduced) for Ram ribosomes to about the same extent as the proof-reading accuracy is decreased for this phenotype. kd is also reduced by the action of streptomycin and neomycin, but much less than the reduction in proof-reading accuracy induced by streptomycin. Similar kd values for SmD and SmP ribosomes indicate that the cause of streptomycin dependence is not excessive drop-off of peptidyl-tRNAs from the A-site.

  8. Mechanical Solder Characterisation Under High Strain Rate Conditions

    NASA Astrophysics Data System (ADS)

    Meier, Karsten; Roellig, Mike; Wiese, Steffen; Wolter, Klaus-Juergen

    2010-11-01

    Using a setup for high-strain-rate tensile experiments, the mechanical behavior of two lead-free tin-based solders is investigated. The first alloy is SnAg1.3Cu0.5Ni. The second alloy has a higher silver content but no addition of Ni. Solder joints are the main electrical, thermal, and mechanical interconnection technology on the first and second interconnection levels. With the recent rise of 3D packaging technologies, many novel interconnection ideas of an innovative or visionary nature have been proposed. Copper pillar, stud bump, intermetallic (SLID), and even spring-like joints are presented in a number of projects. However, soldering will remain one of the important interconnect technologies. Knowing the mechanical properties of solder joints is important for any reliability assessment, especially when it comes to the vibration and mechanical shock associated with mobile applications. Taking into account the ongoing miniaturization and the linked changes in solder joint microstructure and mechanical behavior, the need for experimental work on this issue is not yet satisfied. The tests are accomplished utilizing miniature bulk specimens to match the microstructure of real solder joints as closely as possible. The dogbone-shaped bulk specimens have a gauge diameter of 1 mm, which is close to that of BGA solder joints. Experiments were done in the strain rate range from 20 s^-1 to 600 s^-1. Solder strengthening has been observed with increased strain rate for both SAC solder alloys. The yield stress increases by about 100% in the investigated strain rate range; the yield level differs strongly between the two alloys. A high-speed camera system was used to assist the evaluation of the stress and strain data. Besides the stress and strain data extracted from the experiments, the ultimate fracture strain is determined and the fracture surfaces are evaluated using SEM, considering rate dependency.

  9. Method for generating high-energy and high repetition rate laser pulses from CW amplifiers

    DOEpatents

    Zhang, Shukui

    2013-06-18

    A method for simultaneously obtaining high-energy and high-repetition-rate laser pulses using continuous wave (CW) amplifiers is described. The method provides for generating microjoule-level energy in picosecond laser pulses at megahertz repetition rates.

  10. Counting High School Graduates when Graduates Count: Measuring Graduation Rates under the High Stakes of NCLB.

    ERIC Educational Resources Information Center

    Swanson, Christopher B.; Chaplin, Duncan

    This paper addresses the debate over high school graduation rates, examining how the No Child Left Behind Act of 2001 (NCLB) has redirected attention toward graduation rates. It introduces provisions of the NCLB pertaining to high school graduation, discussing implications from a measurement perspective, and presents strategies for developing a…

  11. Dynamic Strength of Metals at High Pressure and Strain Rate

    NASA Astrophysics Data System (ADS)

    Lorenz, Thomas

    2006-03-01

    A new approach to materials science at very high pressures and strain rates has been developed on the Omega laser, using a ramped plasma piston drive. A laser drives an ablative shock through a solid plastic reservoir where it unloads at the rear free surface, expands across a vacuum gap, and stagnates on the metal sample under study. This produces a gently increasing ram pressure, compressing the sample nearly isentropically. The peak pressure on the sample, diagnosed with VISAR measurements, can be varied by adjusting the laser energy and pulse length, gap size, and reservoir density, and obeys a simple scaling relation [1]. This has been demonstrated at OMEGA at pressures up to 200 GPa in Al foils. In an important application, using in-flight x-ray radiography, the material strength of solid-state samples at high pressure can be inferred by measuring the reductions in the growth rates (stabilization) of Rayleigh-Taylor (RT) unstable interfaces. RT instability measurements of solid Al-6061-T6 [2] and vanadium, at pressures of 20-100 GPa and strain rates of 10^6 to 10^8 s-1, show clear material strength effects. Modelling results for two constitutive strength models, Steinberg-Guinan and Preston-Tonks-Wallace, show enhanced dynamic strength that may be correlated with a high-strain-rate, phonon-drag mechanism. Data, modeling details and future prospects for this project using the National Ignition Facility laser will be presented. [1] J. Edwards et al., Phys. Rev. Lett. 92, 075002 (2004). [2] K. T. Lorenz et al., Phys. Plasmas 12, 056309 (2005). This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.

  12. Scale dependence of rock friction at high work rate.

    PubMed

    Yamashita, Futoshi; Fukuyama, Eiichi; Mizoguchi, Kazuo; Takizawa, Shigeru; Xu, Shiqing; Kawakata, Hironori

    2015-12-10

    Determination of the frictional properties of rocks is crucial for an understanding of earthquake mechanics, because most earthquakes are caused by frictional sliding along faults. Prior studies using a rotary shear apparatus revealed a marked decrease in frictional strength, which can cause a large stress drop and strong shaking, with increasing slip rate and increasing work rate. (The mechanical work rate per unit area equals the product of the shear stress and the slip rate.) However, those important findings were obtained in experiments using rock specimens with dimensions of only several centimetres, which are much smaller than the dimensions of a natural fault (of the order of 1,000 metres). Here we use a large-scale biaxial friction apparatus with metre-sized rock specimens to investigate scale-dependent rock friction. The experiments show that rock friction in metre-sized rock specimens starts to decrease at a work rate that is one order of magnitude smaller than that in centimetre-sized rock specimens. Mechanical, visual and material observations suggest that slip-evolved stress heterogeneity on the fault accounts for the difference. On the basis of these observations, we propose that stress-concentrated areas exist in which frictional slip produces more wear materials (gouge) than in areas outside, resulting in further stress concentrations at these areas. Shear stress on the fault is primarily sustained by stress-concentrated areas that undergo a high work rate, so those areas should weaken rapidly and cause the macroscopic frictional strength to decrease abruptly. To verify this idea, we conducted numerical simulations assuming that local friction follows the frictional properties observed on centimetre-sized rock specimens. The simulations reproduced the macroscopic frictional properties observed on the metre-sized rock specimens. Given that localized stress concentrations commonly occur naturally, our results suggest that a natural fault may lose its strength abruptly.
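
    The work-rate definition given in the abstract is simple enough to state as code; the stress and slip-rate values here are arbitrary placeholders used only to show the unit bookkeeping:

      # Mechanical work rate per unit area = shear stress x slip rate.
      shear_stress = 1.0e6   # Pa  (hypothetical shear stress on the fault)
      slip_rate = 0.01       # m/s (hypothetical slip rate)

      work_rate = shear_stress * slip_rate   # Pa * m/s = W/m^2
      print(f"work rate: {work_rate:.0f} W/m^2")   # 10000 W/m^2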

  13. High Dose-Rate Versus Low Dose-Rate Brachytherapy for Lip Cancer

    SciTech Connect

    Ghadjar, Pirus; Bojaxhiu, Beat; Simcock, Mathew; Terribilini, Dario; Isaak, Bernhard; Gut, Philipp; Wolfensberger, Patrick; Broemme, Jens O.; Geretschlaeger, Andreas; Behrensmeier, Frank; Pica, Alessia; Aebersold, Daniel M.

    2012-07-15

    Purpose: To analyze the outcome after low-dose-rate (LDR) or high-dose-rate (HDR) brachytherapy for lip cancer. Methods and Materials: One hundred and three patients with newly diagnosed squamous cell carcinoma of the lip were treated between March 1985 and June 2009 either by HDR (n = 33) or LDR brachytherapy (n = 70). Sixty-eight patients received brachytherapy alone, and 35 received tumor excision followed by brachytherapy because of positive resection margins. Acute and late toxicity was assessed according to the Common Terminology Criteria for Adverse Events 3.0. Results: Median follow-up was 3.1 years (range, 0.3-23 years). Clinical and pathological variables did not differ significantly between groups. At 5 years, local recurrence-free survival, regional recurrence-free survival, and overall survival rates were 93%, 90%, and 77%. There was no significant difference for these endpoints when HDR was compared with LDR brachytherapy. Forty-two of 103 patients (41%) experienced acute Grade 2 and 57 of 103 patients (55%) experienced acute Grade 3 toxicity. Late Grade 1 toxicity was experienced by 34 of 103 patients (33%), and 5 of 103 patients (5%) experienced late Grade 2 toxicity; no Grade 3 late toxicity was observed. Acute and late toxicity rates were not significantly different between HDR and LDR brachytherapy. Conclusions: As treatment for lip cancer, HDR and LDR brachytherapy have comparable locoregional control and acute and late toxicity rates. HDR brachytherapy for lip cancer seems to be an effective treatment with acceptable toxicity.

  14. Development of a high-rate submerged anaerobic membrane bioreactor.

    PubMed

    Mahmoud, I; Gao, W J; Liao, B Q; Cumin, J; Dagnew, M; Hong, Y

    2017-04-04

    Typically, anaerobic membrane bioreactors are operated at an organic loading rate (OLR) of less than 10 kg chemical oxygen demand (COD)/m³·d. This paper discusses the development and performance of a high-rate submerged anaerobic membrane bioreactor (SAnMBR) for treatment of a high-strength synthetic industrial wastewater. An OLR as high as 41 kg COD/m³·d was achieved with excellent COD removal efficiency (>99%). The membrane was operated at constant fluxes (9.4-9.9 ± 0.5 L/m²·h) and the change in trans-membrane pressure (TMP) was monitored to characterize the membrane performance. The results showed a low TMP (<5 kPa) under steady-state operation, with only biogas sparging and relaxation as the fouling-control strategy, for over 300 days, implying that no significant fouling developed. Inorganic fouling was the dominant fouling mechanism at the end of the study. The results suggest that the newly developed SAnMBR configuration can treat high-strength wastewater at lower capital expenditure while still providing superior effluent quality for water reuse or system closure.
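
    For readers unfamiliar with the quantity, the organic loading rate is the influent COD mass delivered per unit reactor volume per day; a minimal sketch, with hypothetical flow, strength and volume chosen only to reproduce the 41 kg COD/m³·d figure:

      # OLR = Q * S0 / V, in kg COD per m^3 of reactor per day.
      flow = 0.041            # m^3/d, influent flow (hypothetical)
      cod_in = 10.0           # kg COD/m^3, influent strength (hypothetical)
      reactor_volume = 0.01   # m^3 (hypothetical)

      olr = flow * cod_in / reactor_volume
      print(f"OLR = {olr:.1f} kg COD/(m3.d)")   # 41.0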

  15. Attenuation and bit error rate for four co-propagating spatially multiplexed optical communication channels of exactly same wavelength in step index multimode fibers

    NASA Astrophysics Data System (ADS)

    Murshid, Syed H.; Chakravarty, Abhijit

    2011-06-01

    Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with the help of bulk optics or with integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for the individual channels of such a system are also presented.

  16. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands

    PubMed Central

    Sánchez-Durán, José A.; Hidalgo-López, José A.; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando

    2015-01-01

    Tactile sensors suffer from many types of interference and error, such as crosstalk, non-linearity, drift and hysteresis, so calibration should be carried out to compensate for these deviations. However, this procedure is difficult for sensors mounted on artificial hands for robots or prostheses, for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. Furthermore, this intensive and complex calibration may matter less than expected, or could at least be simplified. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters could be changed less by common errors and interference, or at least their variations could be of the order of those caused by accepted limitations, such as reduced spatial resolution. This paper shows results from experiments to support this idea. The experiments are carried out with a high-performance commercial sensor as well as with a low-cost error-prone sensor built with a procedure common in robotics. PMID:26295393
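
    To make the "moments of the tactile image" idea concrete, here is a minimal sketch (not the authors' code) reducing a pressure map to its zeroth and first raw moments, i.e. the total load and the centre of pressure:

      import numpy as np

      tactile = np.random.rand(16, 16)   # stand-in for a 16x16 taxel pressure image

      rows, cols = np.indices(tactile.shape)
      m00 = tactile.sum()            # zeroth moment: total applied pressure
      m10 = (rows * tactile).sum()   # first moment along rows
      m01 = (cols * tactile).sum()   # first moment along columns

      centroid = (m10 / m00, m01 / m00)   # centre of pressure in taxel coordinates
      print(f"total load {m00:.2f}, centroid ({centroid[0]:.2f}, {centroid[1]:.2f})")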

  17. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE (ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  18. High rate reactive sputtering of MoN(x) coatings

    NASA Technical Reports Server (NTRS)

    Rudnik, Paul J.; Graham, Michael E.; Sproul, William D.

    1991-01-01

    High rate reactive sputtering of MoN(x) films was performed using feedback control of the nitrogen partial pressure. Coatings were made at four different target powers: 2.5, 5.0, 7.5 and 10 kW. No hysteresis was observed in the nitrogen partial pressure vs. flow plot, as is typically seen for the Ti-N system. Four phases were identified by X-ray diffraction: molybdenum, Mo-N solid solution, beta-Mo2N and gamma-Mo2N. The hardness of the coatings depended upon composition, substrate bias, and target power. The phases present in the hardest films differed depending upon deposition parameters. For example, the beta-Mo2N phase was hardest (25 gf load) at 5.0 kW with a value of 3200 kgf/sq mm, whereas the hardest coatings at 10 kW were of the gamma-Mo2N phase (3000 kgf/sq mm). The deposition rate generally decreased with increasing nitrogen partial pressure, but there was a range of partial pressures where the rate was relatively constant. At a target power of 5.0 kW, for example, the deposition rate was 3300 Å/min for a N2 partial pressure of 0.05-1.0 mTorr.

  19. High monetary reward rates and caloric rewards decrease temporal persistence

    PubMed Central

    Bode, Stefan; Murawski, Carsten

    2017-01-01

    Temporal persistence refers to an individual's capacity to wait for future rewards, while forgoing possible alternatives. This requires a trade-off between the potential value of delayed rewards and opportunity costs, and is relevant to many real-world decisions, such as dieting. Theoretical models have previously suggested that high monetary reward rates, or positive energy balance, may result in decreased temporal persistence. In our study, 50 fasted participants engaged in a temporal persistence task, incentivised with monetary rewards. In alternating blocks of this task, rewards were delivered at delays drawn randomly from distributions with either a lower or higher maximum reward rate. During some blocks participants received either a caloric drink or water. We used survival analysis to estimate participants' probability of quitting conditional on the delay distribution and the consumed liquid. Participants had a higher probability of quitting in blocks with the higher reward rate. Furthermore, participants who consumed the caloric drink had a higher probability of quitting than those who consumed water. Our results support the predictions from the theoretical models, and importantly, suggest that both higher monetary reward rates and physiologically relevant rewards can decrease temporal persistence, which is a crucial determinant for survival in many species. PMID:28228517

  20. On the accuracy of framing-rate measurements in ultra-high speed rotating mirror cameras.

    PubMed

    Conneely, Michael; Rolfsnes, Hans O; Main, Charles; McGloin, David; Campbell, Paul A

    2011-08-15

    Rotating mirror systems based on the Miller Principle are a mainstay modality for ultra-high speed imaging within the range of 1-25 million frames per second. Importantly, the true temporal accuracy of observations recorded in such cameras is sensitive to the framing rate that the system directly associates with each individual data acquisition. The purpose of the present investigation was to examine the validity of such system-reported frame rates in a widely used commercial system (a Cordin 550-62 model) by independently measuring the framing rate at the instant of triggering. Here, we found a small but significant difference between such measurements: the average discrepancy (over the entire spectrum of frame rates used) was found to be 0.66 ± 0.48%, with a maximum difference of 2.33%. The principal reason for this discrepancy was traced to non-optimized sampling of the mirror rotation rate within the system protocol. This paper thus serves three purposes: (i) we highlight a straightforward diagnostic approach to facilitate scrutiny of rotating-mirror system integrity; (ii) we raise awareness of the intrinsic errors associated with data previously acquired with this particular system and model; and (iii) we recommend that future control routines address the sampling issue by implementing real-time measurement at the instant of triggering.

  1. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Kasami, Tadao; Fujiwara, Toru; Takata, Toyoo; Lin, Shu

    1988-01-01

    A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. Its error performance is analyzed for a binary symmetric channel with bit-error rate epsilon less than 1/2. It is shown that, if the inner and outer codes are chosen properly, high reliability can be attained even for a high channel bit-error rate. Specific examples with high-rate inner codes and Reed-Solomon outer codes are considered, and their error probabilities are evaluated. They all provide high reliability even for high bit-error rates, say 0.1-0.01. Several example schemes are being considered for satellite and spacecraft downlink error control.
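
    The basic calculation behind such evaluations, for a single bounded-distance-decoded block code on a binary symmetric channel, is a standard tail sum: a t-error-correcting code of length n fails when more than t of its n bits are flipped. A minimal sketch with hypothetical code parameters, not the specific codes of the paper:

      from math import comb

      def block_error_prob(n: int, t: int, eps: float) -> float:
          """P(more than t of n bits flipped) on a BSC with bit-error rate eps."""
          return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
                     for i in range(t + 1, n + 1))

      # Hypothetical t=5-error-correcting length-63 code at the "high" channel
      # bit-error rates quoted in the abstract.
      for eps in (0.1, 0.01):
          print(f"eps = {eps}: P(block error) = {block_error_prob(63, 5, eps):.3e}")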

  2. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as the Kolmogorov microscale, the relative strength of temperature and salinity fluctuations, the rate of dissipation of the mean-squared temperature, and the rate of dissipation of turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillation. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
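
    A sketch of the BER-averaging step, under common textbook assumptions (mine, not necessarily the paper's exact expressions): a unit-mean log-normal intensity with log-variance ln(1 + scintillation index), and a conditional BER of 0.5*erfc(SNR*I/(2*sqrt(2))):

      import numpy as np
      from scipy.integrate import quad
      from scipy.special import erfc

      def mean_ber(snr: float, scint_index: float) -> float:
          """Average BER over unit-mean log-normal intensity fluctuations."""
          s2 = np.log(1.0 + scint_index)   # log-intensity variance

          def integrand(i):
              pdf = np.exp(-(np.log(i) + s2 / 2) ** 2 / (2 * s2)) / (
                  i * np.sqrt(2 * np.pi * s2))
              return pdf * 0.5 * erfc(snr * i / (2 * np.sqrt(2)))

          return quad(integrand, 1e-6, 50.0)[0]

      # Stronger scintillation degrades the average BER at fixed mean SNR.
      for si in (0.1, 0.5, 1.0):
          print(f"scintillation index {si}: <BER> = {mean_ber(10.0, si):.3e}")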

  3. Fast demographic traits promote high diversification rates of Amazonian trees

    PubMed Central

    Baker, Timothy R; Pennington, R Toby; Magallon, Susana; Gloor, Emanuel; Laurance, William F; Alexiades, Miguel; Alvarez, Esteban; Araujo, Alejandro; Arets, Eric J M M; Aymard, Gerardo; de Oliveira, Atila Alves; Amaral, Iêda; Arroyo, Luzmila; Bonal, Damien; Brienen, Roel J W; Chave, Jerome; Dexter, Kyle G; Di Fiore, Anthony; Eler, Eduardo; Feldpausch, Ted R; Ferreira, Leandro; Lopez-Gonzalez, Gabriela; van der Heijden, Geertje; Higuchi, Niro; Honorio, Eurídice; Huamantupa, Isau; Killeen, Tim J; Laurance, Susan; Leaño, Claudio; Lewis, Simon L; Malhi, Yadvinder; Marimon, Beatriz Schwantes; Marimon Junior, Ben Hur; Monteagudo Mendoza, Abel; Neill, David; Peñuela-Mora, Maria Cristina; Pitman, Nigel; Prieto, Adriana; Quesada, Carlos A; Ramírez, Fredy; Ramírez Angulo, Hirma; Rudas, Agustin; Ruschel, Ademir R; Salomão, Rafael P; de Andrade, Ana Segalin; Silva, J Natalino M; Silveira, Marcos; Simon, Marcelo F; Spironello, Wilson; ter Steege, Hans; Terborgh, John; Toledo, Marisol; Torres-Lezama, Armando; Vasquez, Rodolfo; Vieira, Ima Célia Guimarães; Vilanova, Emilio; Vos, Vincent A; Phillips, Oliver L; Wiens, John

    2014-01-01

    The Amazon rain forest sustains the world's highest tree diversity, but it remains unclear why some clades of trees are hyperdiverse, whereas others are not. Using dated phylogenies, estimates of current species richness and trait and demographic data from a large network of forest plots, we show that fast demographic traits – short turnover times – are associated with high diversification rates across 51 clades of canopy trees. This relationship is robust to assuming that diversification rates are either constant or decline over time, and occurs in a wide range of Neotropical tree lineages. This finding reveals the crucial role of intrinsic, ecological variation among clades for understanding the origin of the remarkable diversity of Amazonian trees and forests. PMID:24589190

  4. Solid State Experiments at High Pressure and Strain Rates

    SciTech Connect

    Kalantar, D.H.; Remington, B.A.; Colvin, J.D.; Mikaelian, K.O.; Weber, S.V.; Wiley, L.G.; Wark, J.S.; Loveridge, A.; Allen, A.M.; Hauer, A.; Meyers, M.A.

    1999-11-24

    Experiments have been developed using high-powered laser facilities to study the response of materials in the solid state under extreme pressures and strain rates. Details of the target and drive development required for solid state experiments and results from two separate experiments are presented. In the first, thin foils were compressed to a peak pressure of 180 GPa and accelerated. A pre-imposed modulation at the embedded RT-unstable interface was observed to grow. The growth rates were fluid-like at early time, but suppressed at later time. This result is suggestive of the theory of localized heating in shear bands, followed by dissipation of the heat, allowing for recovery of the bulk material strength. In the second experiment, the response of Si was studied by dynamic x-ray diffraction. The crystal was observed to respond with uniaxial compression at a peak pressure of 11.5-13.5 GPa.

  5. Low resistance bakelite RPC study for high rate working capability

    DOE PAGES

    Dai, T.; Han, L.; Hou, S.; ...

    2014-11-19

    This paper presents a series of efforts to lower the resistance of bakelite electrode plates to improve RPC capability under high-rate working conditions. A new bakelite material with alkali metal ion doping has been manufactured and tested. This bakelite was found to be unstable under large charge flux and needs further investigation. A new structure of carbon-embedded bakelite RPC has been developed, which can reduce the effective resistance of the electrode by a factor of 10. The prototype of the carbon-embedded chamber functioned well under a gamma radiation source at event rates higher than 10 kHz/cm². Preliminary tests show that this new structure performs as efficiently as traditional RPCs.

  6. Low resistance bakelite RPC study for high rate working capability

    SciTech Connect

    Dai, T.; Han, L.; Hou, S.; Liu, M.; Li, Q.; Song, H.; Xia, L.; Zhang, Z.

    2014-11-19

    This paper presents a series of efforts to lower the resistance of bakelite electrode plates to improve RPC capability under high-rate working conditions. A new bakelite material with alkali metal ion doping has been manufactured and tested. This bakelite was found to be unstable under large charge flux and needs further investigation. A new structure of carbon-embedded bakelite RPC has been developed, which can reduce the effective resistance of the electrode by a factor of 10. The prototype of the carbon-embedded chamber functioned well under a gamma radiation source at event rates higher than 10 kHz/cm². Preliminary tests show that this new structure performs as efficiently as traditional RPCs.

  7. Multianode cylindrical proportional counter for high count rates

    DOEpatents

    Hanson, J.A.; Kopp, M.K.

    1980-05-23

    A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (<60 keV) at count rates of greater than 10^5 counts/sec. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.

  8. Multianode cylindrical proportional counter for high count rates

    DOEpatents

    Hanson, James A.; Kopp, Manfred K.

    1981-01-01

    A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (<60 keV) at count rates of greater than 10^5 counts/sec. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.

  9. High-pressure burning rate studies of solid rocket propellants

    NASA Astrophysics Data System (ADS)

    Atwood, A. I.; Ford, K. P.; Wheeler, C. J.

    2013-03-01

    Increased rocket motor performance is a major driver in the development of solid rocket propellant formulations for chemical propulsion systems. The use of increased operating pressure is an option to improve performance potentially without the cost of reformulation. A technique has been developed to obtain burning rate data across a range of pressures from ambient to 345 MPa. The technique combines the use of a low loading density combustion bomb with a high loading density closed bomb technique. A series of nine ammonium perchlorate (AP) based propellants were used to demonstrate the use of the technique, and the results were compared to the neat AP burning rate "barrier". The effect of plasticizer, oxidizer particle size, catalyst, and binder type were investigated.
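
    Burning-rate data of this kind are conventionally summarized by Vieille's law, r = a*P^n; the abstract does not name the law, and the sketch below fits it to synthetic data only:

      import numpy as np

      pressure = np.array([1.0, 10.0, 50.0, 100.0, 345.0])   # MPa, spanning the test range
      burn_rate = 2.0 * pressure**0.35                       # mm/s, synthetic data

      # Least squares in log-log space recovers the exponent n and prefactor a.
      n, ln_a = np.polyfit(np.log(pressure), np.log(burn_rate), 1)
      print(f"a = {np.exp(ln_a):.2f} mm/s/MPa^n, n = {n:.2f}")   # a = 2.00, n = 0.35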

  10. Predicting sex offender recidivism. I. Correcting for item overselection and accuracy overestimation in scale development. II. Sampling error-induced attenuation of predictive validity over base rate information.

    PubMed

    Vrieze, Scott I; Grove, William M

    2008-06-01

    The authors demonstrate a statistical bootstrapping method for obtaining unbiased item selection and predictive validity estimates from a scale development sample, using the data (N = 256) of Epperson et al. [2003 Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores. Retrieved November 18, 2006 from Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm] from which the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) was developed. The validity (area under the receiver operating characteristic curve) reported by Epperson et al. was .77 with 16 items selected; the present analysis yielded an asymptotically unbiased estimate of AUC = .58. The present article also focused on the degree to which sampling error renders estimated cutting scores (appropriate to local [varying] recidivism base rates) nonoptimal, so that the long-run performance (measured by correct fraction, the total proportion of correct classifications) of these estimated cutting scores is poor when they are applied to their parent populations (having assumed values for AUC and recidivism rate). This was investigated by Monte Carlo simulation over a range of AUC and recidivism rate values. Results indicate that, except for AUC values higher than have ever been cross-validated, in combination with recidivism base rates severalfold higher than the literature average [Hanson and Morton-Bourgon, 2004, Predictors of sexual recidivism: An updated meta-analysis. (User report 2004-02.) Ottawa: Public Safety and Emergency Preparedness Canada], the user of an instrument similar in performance to the MnSOST-R cannot expect to achieve correct-fraction performance notably in excess of what is achievable from knowing the population recidivism rate alone. The authors discuss the legal implications of their findings for procedural and substantive due process in
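
    The bootstrap bias-correction idea can be sketched generically; this is a Harrell-style optimism correction on synthetic data, not the authors' exact procedure, which also repeats item selection inside each resample:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)

      def optimism_corrected_auc(X, y, n_boot=200):
          """Apparent AUC minus the mean bootstrap optimism (apparent - original)."""
          model = LogisticRegression(max_iter=1000).fit(X, y)
          apparent = roc_auc_score(y, model.decision_function(X))
          gaps = []
          for _ in range(n_boot):
              idx = rng.integers(0, len(y), len(y))
              if len(np.unique(y[idx])) < 2:
                  continue   # resample must contain both classes
              m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
              gaps.append(roc_auc_score(y[idx], m.decision_function(X[idx]))
                          - roc_auc_score(y, m.decision_function(X)))
          return apparent - float(np.mean(gaps))

      # Synthetic stand-ins for item scores and recidivism outcomes (N = 256).
      X = rng.normal(size=(256, 16))
      y = (X[:, 0] + rng.normal(size=256) > 0).astype(int)
      print(f"optimism-corrected AUC = {optimism_corrected_auc(X, y):.3f}")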

  11. Experimental investigation of bond strength under high loading rates

    NASA Astrophysics Data System (ADS)

    Michal, Mathias; Keuser, Manfred; Solomos, George; Peroni, Marco; Larcher, Martin; Esteban, Beatriz

    2015-09-01

    The structural behaviour of reinforced concrete is governed significantly by the transmission of forces between steel and concrete. The bond is of special importance for overlapping joints and the anchoring of the reinforcement, where rigid bond is required. It also plays an important role in the rotational capacity of plastic hinges, where ductile bond behaviour is preferable. Like the mechanical properties of concrete and steel, the characteristics of their interaction change with the velocity of the applied loading. For smooth steel bars, whose main bond mechanisms are adhesion and friction, nearly no influence of loading rate is reported in the literature. In contrast, a high rate dependence is found for the deformed bars mainly used today. For mechanical interlock, where the ribs of the reinforcing steel brace the concrete surrounding the bar, one reason can be assumed to be directly connected with the increase of concrete compressive strength. For splitting failure of bond, which is governed by the concrete tensile strength, an even higher dynamic increase is observed. For the design of structures exposed to blast or impact loading, knowledge of a rate-dependent bond stress-slip relationship is required in order to consider safety and economic aspects at the same time. The bond behaviour of reinforced concrete has been investigated with different experimental methods at the University of the Bundeswehr Munich (UniBw) and the Joint Research Centre (JRC) in Ispra. Both static and dynamic tests have been carried out using innovative experimental apparatuses. Bond stress-slip relationships and maximum pull-out forces for varying bar diameters, concrete compressive strengths and loading rates have been obtained. It is expected that these experimental results will contribute to a better understanding of rate-dependent bond behaviour and will serve for the calibration of numerical models.

  12. Handling high data rate detectors at Diamond Light Source

    NASA Astrophysics Data System (ADS)

    Pedersen, U. K.; Rees, N.; Basham, M.; Ferner, F. J. K.

    2013-03-01

    An increasing number of area detectors in use at Diamond Light Source produce high rates of data. In order to capture, store and process this data, High Performance Computing (HPC) systems have been implemented. This paper will present the architecture and usage for handling high-rate data: detector data capture, large volume storage and parallel processing. The EPICS areaDetector framework has been adopted to abstract the detectors for common tasks including live processing, file format and storage. The chosen data format is HDF5, which provides multidimensional data storage and NeXus compatibility. The storage system and related computing infrastructure include a centralised Lustre-based parallel file system, a dedicated network and an HPC cluster. A well-defined roadmap is in place for the evolution of this to meet demand as the requirements and technology advance. For processing the science data, the HPC cluster allows efficient parallel computing on a mixture of x86 and GPU processing units. The nature of the Lustre storage system in combination with the parallel HDF5 library allows efficient disk I/O during computation jobs. Software developments, which include utilising optimised parallel file reading for a variety of post-processing techniques, are being carried out in collaboration as part of the Pan-Data EU Project (www.pan-data.eu). These are particularly applicable to tomographic reconstruction and processing of non-crystalline diffraction data.
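
    The parallel-read pattern that Lustre plus parallel HDF5 enables can be sketched as follows; this assumes h5py built against parallel HDF5, and the file and dataset paths are hypothetical:

      from mpi4py import MPI
      import h5py

      comm = MPI.COMM_WORLD
      # Collective open of one detector file; each rank reads a disjoint slab.
      with h5py.File("scan_0001.h5", "r", driver="mpio", comm=comm) as f:
          frames = f["/entry/data/data"]         # (nframes, ny, nx) detector stack
          chunk = frames.shape[0] // comm.size
          lo = comm.rank * chunk
          hi = frames.shape[0] if comm.rank == comm.size - 1 else lo + chunk
          block = frames[lo:hi]                  # this rank's frames, read independently
          print(f"rank {comm.rank}: read {block.shape[0]} frames")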

  13. High-rate anaerobic composting with biogas recovery

    SciTech Connect

    DeBaere, L.; Verstraete, W.

    1984-03-01

    In Belgium a novel high-rate anaerobic composting process with biogas recovery has been developed as an alternative to aerobic systems, producing a commercial dry compost and 60 to 95 cubic metres of methane per ton of municipal solid waste (MSW). This is a high-value energy source that simultaneously yields a stabilized end product. The process was developed so that digestion could take place at 25 to 35% total solids, thus reducing the amount of water needed to dilute the waste, decreasing the digester volume and cutting transportation costs. The end product is odorless and stable. High-rate anaerobic composting of MSW can be combined with sewage sludge stabilization; manure and vegetable or fruit wastes can be co-treated in certain proportions as required. About 15 to 20% of the energy produced is transformed into electricity and heat and consumed at the waste disposal plant itself. Methane gas and compost worth 120 to 140 US$ can be produced per cubic metre of reactor per year, making anaerobic composting economically attractive.

  14. Measurement of fracture properties of concrete at high strain rates.

    PubMed

    Rey-De-Pedraza, V; Cendón, D A; Sánchez-Gálvez, V; Gálvez, F

    2017-01-28

    An analysis of the spalling technique for concrete bars using the modified Hopkinson bar was carried out. A new experimental configuration is proposed, adding some variations to previous works: an increased length for the concrete specimens was chosen, and finite-element analysis was used to design a conic projectile that produces a suitable triangular impulse wave. The aim of this initial work is to establish an experimental framework which allows a simple and direct analysis of concrete subjected to high strain rates. The configuration of these first tests, as well as the selected geometry and dimensions of the different elements, was chosen to provide a simple way of identifying the fracture position and hence the tensile strength of the tested specimens. This dynamic tensile strength can readily be compared with values published previously in the literature, giving an idea of the accuracy of the proposed method and technique and of the possibility of extending it in the near future to obtain other mechanical properties such as the fracture energy. The tests were instrumented with strain gauges, accelerometers and a high-speed camera in order to validate the results in different ways. Results for the dynamic tensile strength of the tested concrete are presented. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.
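
    For orientation, spalling tests of this kind are commonly evaluated with the acoustic pull-back formula sigma_sp = 0.5*rho*c*du; the abstract itself does not state the formula, and all numbers below are typical or hypothetical rather than results from the paper:

      rho = 2400.0   # kg/m^3, typical concrete density
      c = 4000.0     # m/s, typical elastic wave speed in concrete
      du = 2.0       # m/s, hypothetical free-surface velocity pull-back

      spall_strength = 0.5 * rho * c * du   # Pa
      print(f"dynamic tensile (spall) strength: {spall_strength / 1e6:.1f} MPa")  # 9.6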

  15. GPU accelerated processing of astronomical high frame-rate videosequences

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Švihlík, Jan; Krasula, Lukáš; Fliegel, Karel; Páta, Petr

    2015-09-01

    Astronomical instruments located around the world are producing an incredibly large amount of potentially interesting scientific data. Astronomical research is expanding into large and highly sensitive telescopes, and the total volume of data per night of operations increases with the quality and resolution of state-of-the-art CCD/CMOS detectors. Since many ground-based astronomical experiments are placed in remote locations with limited access to the Internet, it is necessary to solve the problem of data storage. This means that current data acquisition, processing and analysis algorithms require review: decisions about the importance of the data have to be taken in a very short time. This work deals with GPU-accelerated processing of high frame-rate astronomical video sequences, mostly originating from the experiment MAIA (Meteor Automatic Imager and Analyser), an instrument primarily focused on observing faint meteoric events with high time resolution. The instrument, priced below 2000 euro, consists of an image intensifier and a Gigabit Ethernet camera running at 61 fps. With a resolution better than VGA, the system produces up to 2 TB of scientifically valuable video data per night. The main goal of the paper is not to optimize any particular GPU algorithm, but to propose and evaluate parallel GPU algorithms able to process huge amounts of video sequences in order to discard all uninteresting data.
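
    The quoted volume is easy to sanity-check from the stated frame rate; the frame geometry, sample depth and night length below are assumptions for the estimate, not MAIA specifications:

      width, height = 640, 480    # VGA taken as a lower bound ("better than VGA")
      bytes_per_pixel = 2         # assume 16-bit samples
      fps = 61                    # from the abstract
      night_hours = 12            # assumed observing time

      rate = width * height * bytes_per_pixel * fps   # bytes per second
      volume_tb = rate * night_hours * 3600 / 1e12    # terabytes per night
      print(f"{rate / 1e6:.1f} MB/s -> {volume_tb:.2f} TB/night")  # ~37.5 MB/s, ~1.6 TB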

  16. Diamond detector for high rate monitors of fast neutrons beams

    SciTech Connect

    Giacomelli, L.; Rebai, M.; Cippo, E. Perelli; Tardocchi, M.; Fazzi, A.; Andreani, C.; Pietropaolo, A.; Frost, C. D.; Rhodes, N.; Schooneveld, E.; Gorini, G.

    2012-06-19

    A fast neutron detection system suitable for high rate measurements is presented. The detector is based on a commercial high-purity single-crystal diamond (SDD) coupled to a fast digital data acquisition system. The detector was tested at the ISIS pulsed spallation neutron source. The SDD event signal was digitized at 1 GHz to reconstruct the deposited energy (pulse amplitude) and neutron arrival time; the event time of flight (ToF) was obtained relative to the recorded proton beam signal t0. Fast acquisition is needed since the peak count rate is very high (~800 kHz) due to the pulsed structure of the neutron beam. Measurements at ISIS indicate that three characteristic regions exist in the biparametric spectrum: i) background gamma events of low pulse amplitude; ii) low pulse amplitude neutron events in the energy range Edep = 1.5-7 MeV, ascribed to neutron elastic scattering on 12C; iii) large pulse amplitude neutron events with En < 7 MeV, ascribed to 12C(n,α)9Be and 12C(n,n')3α.

  17. New tool designs for high rate gravel pack operations

    SciTech Connect

    Ross, C.M.

    1995-12-31

    Fracturing of the wellbore to improve hydrocarbon recovery has been a universally accepted practice in the oilfield. The fracturing procedures reduce skin by breaking through or bypassing near wellbore damage that inhibits production. In loosely consolidated formations, a propped fracture can reduce fluid velocity in the near wellbore region, which subsequently reduces fines migration that can plug the wellbore. Fracturing also provides highly conductive paths for gas and oil production. Gravel packing is another operation that is often needed during a well's productive cycle. When a highly conductive fracture is created before a gravel packing operation is run, it has been found that well productivity increases. Performing the operations separately, however, diminishes the productivity gains because of formation damage that can occur between completion operations. A method of gravel packing that includes a tip-screenout-design fracturing procedure, performed with the gravel pack packer, screen, and blank in the hole, was proposed to allow the procedures to be performed simultaneously. This paper will describe the various types of gravel packing tools that are currently in use, their specific applications, and a new series of gravel packing tools that was developed to resolve the difficulties that arose when the operations of fracturing and gravel packing were combined. Also discussed is the need that arose for tools that could sustain high flow rates in small casing diameters. Test results will be used to provide acceptable flow rates for different bore sizes.

  18. New tool designs for high rate gravel pack operations

    SciTech Connect

    Ross, C.M.

    1995-10-01

    A universally accepted practice in the oilfield has been fracturing of the wellbore to improve hydrocarbon recovery. Fracturing procedures reduce skin by breaking through or bypassing near wellbore damage that inhibits production. In loosely consolidated formations, a propped fracture can reduce fluid velocity in the near wellbore region, which subsequently reduces fines migration that can plug the wellbore. Fracturing also provides highly conductive paths for gas and oil production. Gravel packing is another operation that is often needed during a well's productive cycle. When a highly conductive fracture is created before a gravel packing operation is run, it has been found that well productivity increases. Performing the operations separately, however, diminishes the productivity gains because of formation damage that can occur between completion operations. A method of gravel packing that includes a tip-screenout-design fracturing procedure, performed with the gravel pack packer, screen, and blank in the hole, was proposed to allow the procedures to be performed simultaneously. This paper will describe the various types of gravel packing tools that are currently in use, their specific applications, and a new series of gravel packing tools that was developed to resolve the difficulties that arose when the operations of fracturing and gravel packing were combined. Also discussed is the need that arose for tools that could sustain high flow rates in small casing diameters. Test results will be used to provide acceptable flow rates for different bore sizes.

  19. Measurement of fracture properties of concrete at high strain rates

    NASA Astrophysics Data System (ADS)

    Rey-De-Pedraza, V.; Cendón, D. A.; Sánchez-Gálvez, V.; Gálvez, F.

    2017-01-01

    An analysis of the spalling technique for concrete bars using the modified Hopkinson bar was carried out. A new experimental configuration is proposed, adding some variations to previous works: an increased length for the concrete specimens was chosen, and finite-element analysis was used to design a conic projectile that produces a suitable triangular impulse wave. The aim of this initial work is to establish an experimental framework which allows a simple and direct analysis of concrete subjected to high strain rates. The configuration of these first tests, as well as the selected geometry and dimensions of the different elements, was chosen to provide a simple way of identifying the fracture position and hence the tensile strength of the tested specimens. This dynamic tensile strength can readily be compared with values published previously in the literature, giving an idea of the accuracy of the proposed method and technique and of the possibility of extending it in the near future to obtain other mechanical properties such as the fracture energy. The tests were instrumented with strain gauges, accelerometers and a high-speed camera in order to validate the results in different ways. Results for the dynamic tensile strength of the tested concrete are presented. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.

  20. Comparison of pulse rate variability and heart rate variability for high frequency content estimation.

    PubMed

    Logier, R; De Jonckheere, J; Dassonneville, A; Jeanne, M

    2016-08-01

    Heart Rate Variability (HRV) analysis can be of great help in many clinical situations because it quantifies Autonomic Nervous System (ANS) activity. The HRV high frequency (HF) content, related to the parasympathetic tone, reflects the ANS response to an external stimulus responsible for pain, stress or various emotions. We have previously developed the Analgesia Nociception Index (ANI), based on HRV high frequency content estimation, which continuously quantifies the vagal tone in order to guide analgesic drug administration during general anesthesia. This technology has been largely validated during the peri-operative period. Currently, ANI is obtained from a specific algorithm analyzing a time series of successive heart periods measured on the electrocardiographic (ECG) signal. With a view to widening the application fields of this technology, in particular for homecare monitoring, it has become necessary to simplify signal acquisition by using, e.g., a pulse plethysmographic (PPG) sensor. Even though Pulse Rate Variability (PRV) analysis derived from PPG sensors has been shown to be unreliable and a poor predictor of HRV analysis results, we compared PRV and HRV, both as estimated by ANI and through HF and HF/(HF+LF) spectral analysis, on both signals.
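
    A minimal sketch of the HF/(HF+LF) quantity on a beat-interval series, using the conventional 0.04-0.15 Hz (LF) and 0.15-0.4 Hz (HF) bands; this is a generic spectral estimate on synthetic data, not the proprietary ANI algorithm:

      import numpy as np
      from scipy.interpolate import interp1d
      from scipy.signal import welch

      def hf_ratio(rr_s, fs=4.0):
          """HF/(HF+LF) from RR (or pulse-to-pulse) intervals given in seconds."""
          t = np.cumsum(rr_s)                        # beat times
          grid = np.arange(t[0], t[-1], 1.0 / fs)    # uniform resampling grid
          tachogram = interp1d(t, rr_s)(grid)        # evenly sampled interval series
          f, psd = welch(tachogram - tachogram.mean(), fs=fs, nperseg=256)
          df = f[1] - f[0]
          lf = psd[(f >= 0.04) & (f < 0.15)].sum() * df
          hf = psd[(f >= 0.15) & (f < 0.40)].sum() * df
          return hf / (hf + lf)

      # Synthetic series with a dominant respiratory (~0.25 Hz) oscillation.
      beats = np.arange(600)
      rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.25 * 0.8 * beats)
      print(f"HF/(HF+LF) = {hf_ratio(rr):.2f}")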