Science.gov

Sample records for modified frequency error

  1. A Modified Error in Constitutive Equation Approach for Frequency-Domain Viscoelasticity Imaging Using Interior Data

    PubMed Central

    Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc

    2015-01-01

    This paper presents a methodology for the inverse identification of linearly viscoelastic material parameters in the context of steady-state dynamics using interior data. The inverse problem of viscoelasticity imaging is solved by minimizing a modified error in constitutive equation (MECE) functional, subject to the conservation of linear momentum. The treatment is applicable to configurations where boundary conditions may be partially or completely underspecified. The MECE functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, and also incorporates the measurement data in a quadratic penalty term. Regularization of the problem is achieved through a penalty parameter in combination with the discrepancy principle due to Morozov. Numerical results demonstrate the robust performance of the method in situations where the available measurement data is incomplete and corrupted by noise of varying levels. PMID:26388656
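    The MECE idea can be sketched on a single degree of freedom: a constitutive discrepancy between a dynamically admissible stress and the stiffness times a kinematically admissible strain, plus a quadratic penalty on the measurement misfit. The scalar functional, the penalty weight, and all values below are illustrative assumptions, not the paper's actual discretization.

```python
# Illustrative 1-DOF MECE sketch (hypothetical values, not the paper's setup):
# equilibrium fixes the stress sigma = f; the strain equals the displacement u.
# J(u, E) = (f - E*u)^2 / (2*E)  +  (kappa/2) * (u - u_meas)^2
f, u_meas, kappa = 10.0, 2.0, 1.0   # force, measured displacement, penalty weight

def J(E):
    # for fixed E, the minimizer over u is available in closed form:
    # dJ/du = -(f - E*u) + kappa*(u - u_meas) = 0
    u = (f + kappa * u_meas) / (E + kappa)
    return (f - E * u) ** 2 / (2 * E) + 0.5 * kappa * (u - u_meas) ** 2

# scan the stiffness E and keep the minimizer of the MECE functional
candidates = [1.0 + 0.1 * k for k in range(91)]
E_hat = min(candidates, key=J)
print(E_hat)   # close to the "true" stiffness f / u_meas = 5.0
```

    With noise-free data the functional vanishes at the true stiffness; with noisy data the penalty weight kappa would be tuned by Morozov's discrepancy principle, as the abstract describes.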

  2. Error Analysis of Modified Langevin Dynamics

    NASA Astrophysics Data System (ADS)

    Redon, Stephane; Stoltz, Gabriel; Trstanova, Zofia

    2016-08-01

    We consider Langevin dynamics associated with a modified kinetic energy vanishing for small momenta. This allows us to freeze slow particles, and hence avoid the re-computation of inter-particle forces, which leads to computational gains. On the other hand, the statistical error may increase since there are a priori more correlations in time. The aim of this work is first to prove the ergodicity of the modified Langevin dynamics (which fails to be hypoelliptic), and next to analyze how the asymptotic variance on ergodic averages depends on the parameters of the modified kinetic energy. Numerical results illustrate the approach, both for low-dimensional systems where we resort to a Galerkin approximation of the generator, and for more realistic systems using Monte Carlo simulations.
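    The mechanism can be sketched with an assumed C^0 modified kinetic energy whose derivative (the effective velocity) vanishes for |p| < a; whenever that velocity is zero the position does not move, so the forces could be reused rather than recomputed. The splitting, potential, and parameters below are illustrative assumptions, not the paper's construction.

```python
import math, random

def v_mod(p, a=1.0):
    # derivative of a modified kinetic energy: zero for |p| < a,
    # reverting to the standard p (from p^2/2) away from the origin.
    # This C^0 choice is an illustrative assumption, not the paper's.
    return 0.0 if abs(p) < a else math.copysign(abs(p) - a, p)

def step(q, p, dt=0.05, gamma=1.0, beta=1.0):
    force = lambda x: -x                 # harmonic toy potential
    p += 0.5 * dt * force(q)
    q += dt * v_mod(p)                   # position frozen whenever |p| < a
    c = math.exp(-gamma * dt)
    p = c * p + math.sqrt((1 - c * c) / beta) * random.gauss(0.0, 1.0)
    p += 0.5 * dt * force(q)
    return q, p

random.seed(1)
q, p, frozen = 0.0, 0.0, 0
for _ in range(10000):
    if abs(p) < 1.0:
        frozen += 1                      # no position move: forces could be reused
    q, p = step(q, p)
print(frozen / 10000.0)                  # sizeable fraction of frozen steps
```

    The sizeable frozen fraction is the source of the computational gain; the trade-off analyzed in the paper is the extra temporal correlation this freezing induces in ergodic averages.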

  3. Frequency analysis of nonlinear oscillations via the global error minimization

    NASA Astrophysics Data System (ADS)

    Kalami Yazdi, M.; Hosseini Tehrani, P.

    2016-06-01

    The capacity and effectiveness of a modified variational approach, namely global error minimization (GEM), are illustrated in this study. For this purpose, the free oscillations of a rod rocking on a cylindrical surface and of the Duffing-harmonic oscillator are treated. In order to validate the method and exhibit its merit, the obtained results are compared with both the exact frequency and the outcomes of other well-known analytical methods. The comparison reveals that the first-order approximation leads to an acceptable relative error, especially for large initial conditions. The procedure can be applied promisingly to conservative nonlinear problems.
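    GEM minimizes the period-integral of the squared equation residual over the assumed ansatz parameters. As a sketch, take a classical Duffing oscillator x'' + x + eps*x^3 = 0 with a one-term ansatz (an assumed stand-in, not the rocking-rod or Duffing-harmonic problems treated in the paper) and scan the frequency:

```python
import math

# GEM sketch for x'' + x + eps*x**3 = 0 with the one-term ansatz
# x(t) = A*cos(w*t): minimize the period-integral of the squared residual.
eps, A = 0.1, 1.0

def gem_error(w, n=400):
    T = 2.0 * math.pi / w
    dt = T / n
    total = 0.0
    for k in range(n):
        t = k * dt
        x = A * math.cos(w * t)
        xdd = -A * w * w * math.cos(w * t)
        r = xdd + x + eps * x ** 3        # residual of the equation of motion
        total += r * r * dt
    return total

ws = [0.5 + 0.001 * k for k in range(1001)]     # scan w in 0.5 .. 1.5
w_hat = min(ws, key=gem_error)
w_hb = math.sqrt(1.0 + 0.75 * eps * A * A)      # first-order harmonic balance
print(w_hat, w_hb)
```

    For this weakly nonlinear case the GEM minimizer lands essentially on the first-order harmonic-balance frequency, illustrating why the first-order approximation already gives an acceptable relative error.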

  4. The Relative Frequency of Spanish Pronunciation Errors.

    ERIC Educational Resources Information Center

    Hammerly, Hector

    Types of hierarchies of pronunciation difficulty are discussed, and a hierarchy based on contrastive analysis plus informal observation is proposed. This hierarchy is less one of initial difficulty than of error persistence. One feature of this hierarchy is that, because of lesser learner awareness and very limited functional load, errors…

  5. Compensation of Low-Frequency Errors in the TH-1 Satellite

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Renxiang; Hu, Xin

    2016-06-01

    The topographic mapping products at 1:50,000 scale can be produced using satellite photogrammetry without ground control points (GCPs), which requires high accuracy of the exterior orientation elements. Usually, the attitudes among the exterior orientation elements are obtained from the attitude determination system on the satellite. Theoretical analysis and practice show that the attitude determination system exhibits not only high-frequency errors, but also low-frequency errors related to the latitude of the satellite orbit and to time. The low-frequency errors degrade the location accuracy without GCPs, especially the horizontal accuracy. For the SPOT5 satellite, a latitudinal model was proposed to correct the attitudes using data from approximately 20 calibration sites, and the location accuracy was improved. Low-frequency errors are also found in the Tian Hui 1 (TH-1) satellite. Therefore, a method for compensating low-frequency errors is proposed for the ground image processing of TH-1, which can detect and compensate the low-frequency errors automatically without using GCPs. This paper deals with the low-frequency errors in TH-1 as follows: First, the low-frequency errors of the attitude determination system are analyzed. Second, compensation models are proposed within the bundle adjustment. Finally, the method is verified using TH-1 data. The test results show that the low-frequency errors of the attitude determination system can be compensated during bundle adjustment, which improves the location accuracy without GCPs and plays an important role in the consistency of global location accuracy.
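    The abstract does not give the compensation models; a common model for a once-per-orbit low-frequency attitude error is a sinusoid in the argument of latitude, fitted by least squares inside the bundle adjustment. A sketch under that assumption (all coefficients hypothetical):

```python
import math

# Hypothetical once-per-orbit model of a low-frequency attitude error (arcsec):
# e(u) = a + b*sin(u) + c*cos(u), with u the argument of latitude.
a_true, b_true, c_true = 0.5, 2.0, -1.5
N = 360
us = [2.0 * math.pi * k / N for k in range(N)]
res = [a_true + b_true * math.sin(u) + c_true * math.cos(u) for u in us]

# least-squares fit; with even sampling over a full orbit the basis
# {1, sin u, cos u} is orthogonal, so the normal equations decouple:
a_hat = sum(res) / N
b_hat = 2.0 * sum(r * math.sin(u) for r, u in zip(res, us)) / N
c_hat = 2.0 * sum(r * math.cos(u) for r, u in zip(res, us)) / N
print(a_hat, b_hat, c_hat)
```

    In practice the residuals would come from a calibration field rather than a known model, and the fitted terms would be removed from the attitude time series before uncontrolled positioning.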

  6. Frequency of Consonant Articulation Errors in Dysarthric Speech

    ERIC Educational Resources Information Center

    Kim, Heejin; Martin, Katie; Hasegawa-Johnson, Mark; Perlman, Adrienne

    2010-01-01

    This paper analyses consonant articulation errors in dysarthric speech produced by seven American-English native speakers with cerebral palsy. Twenty-three consonant phonemes were transcribed with diacritics as necessary in order to represent non-phoneme misarticulations. Error frequencies were examined with respect to six variables: articulatory…

  7. RCCS operation with a resonant frequency error in the KOMAC

    NASA Astrophysics Data System (ADS)

    Seo, Dong-Hyuk

    2015-10-01

    The resonance control cooling systems (RCCSs) of the Korea Multi-purpose Accelerator Complex have been operated to cool the drift tubes (DTs) and to control the resonant frequency of the drift tube linac (DTL). The DTL should maintain a resonant frequency of 350 MHz during operation. An RCCS can control the temperature of the cooling water to within ±0.1 °C by adjusting the opening of a 3-way valve, and it has a constant-cooling-water-temperature control mode and a resonant-frequency control mode. In the case of resonant-frequency control, the frequency error is measured by the low-level radio-frequency control system, and the RCCS uses a proportional-integral-derivative control algorithm to compensate for the error by controlling the temperature of the cooling water supplied to the DTs.
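    The loop can be sketched with a toy first-order plant. The cavity sensitivity of -50 kHz/°C, the gains, and the water-loop lag are all illustrative assumptions, and only the P and I terms of the PID law are kept:

```python
# Toy resonant-frequency control loop (all numbers are illustrative):
# the LLRF system measures the frequency error, and a PI law moves the
# cooling-water temperature setpoint to drive that error to zero.
c = -50.0            # cavity sensitivity, kHz per deg C (assumed)
T_ref = 20.0         # water temperature at which the DTL is on resonance
T = 21.0             # current water temperature (starts 1 deg C off)
Kp, Ki = 0.004, 0.0008
integ = 0.0
for _ in range(500):
    f_err = c * (T - T_ref)                  # measured frequency error, kHz
    integ += f_err
    T_cmd = 21.0 + Kp * f_err + Ki * integ   # 3-way valve setpoint
    T += 0.2 * (T_cmd - T)                   # water loop responds with a lag
print(round(T, 4), round(f_err, 4))
```

    The integral term is what removes the steady-state offset: it winds up until the commanded temperature sits exactly at the on-resonance value.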

  8. Evaluation of stress wave propagation through rock mass using a modified dominant frequency method

    NASA Astrophysics Data System (ADS)

    Fan, L. F.; Wu, Z. J.

    2016-09-01

    This paper presents an evaluation of stress wave propagation through rock mass using a modified dominant frequency method. The effective velocity and transmission coefficient of stress waves propagating through rock mass with different joint stiffnesses are investigated. The results are validated against the theoretical method, and the effects of the incident frequency on the calculation accuracy are discussed. The results show that the modified dominant frequency method can be used to predict the effective velocity when the frequency of the stress wave is within the low or high frequency range; however, the error cannot be ignored when the frequency is in the transitional frequency range. On the other hand, the method can be used to predict the transmission coefficient when the frequency of the stress wave is within the low or optimal frequency range; however, the error cannot be ignored in the high frequency range, where it approaches 40% when the frequency is sufficiently large. Finally, the optimal stiffness-frequency relationships for the maximum calculation error of the effective velocity and the minimum calculation error of the transmission coefficient are proposed.
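    The frequency dependence the abstract describes follows from the classical displacement-discontinuity model of a single linear joint, whose transmission coefficient is |T| = [1 + (omega*Z/(2*kappa))^2]^(-1/2), with Z the seismic impedance and kappa the joint specific stiffness. A check with assumed numbers (not the paper's modified method or values):

```python
import math

def trans_coeff(freq_hz, kappa, Z):
    # displacement-discontinuity model of a single dry joint:
    # |T| = 1 / sqrt(1 + (omega * Z / (2 * kappa))**2)
    w = 2.0 * math.pi * freq_hz
    return 1.0 / math.sqrt(1.0 + (w * Z / (2.0 * kappa)) ** 2)

Z = 1.2e7        # assumed seismic impedance, kg/(m^2 s)
kappa = 5e9      # assumed joint specific stiffness, Pa/m
lo = trans_coeff(10.0, kappa, Z)      # low frequency: joint nearly transparent
hi = trans_coeff(10000.0, kappa, Z)   # high frequency: strong attenuation
print(lo, hi)
```

    The joint acts as a low-pass filter controlled by the stiffness-to-impedance ratio, which is why prediction errors concentrate in particular stiffness-frequency ranges.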

  9. Identifying Modifiable Barriers to Medication Error Reporting in the Nursing Home Setting

    PubMed Central

    Handler, Steven M.; Perera, Subashan; Olshansky, Ellen F.; Studenski, Stephanie A.; Nace, David A.; Fridsma, Douglas B.; Hanlon, Joseph T.

    2007-01-01

    Objectives To have healthcare professionals in nursing homes identify organizational-level and individual-level modifiable barriers to medication error reporting. Design Nominal group technique sessions to identify potential barriers, followed by development and administration of a 20-item cross-sectional mailed survey. Participants and Setting Representatives of 4 professions (physicians, pharmacists, advanced practitioners, and nurses) from 4 independently owned, nonprofit nursing homes that had an average bed size of 150, were affiliated with an academic medical center, and were located in urban and suburban areas. Measurements Barriers identified in the nominal group technique sessions were used to design a 20-item survey. Survey respondents used 5-point Likert scales to score factors in terms of their likelihood of posing a barrier (“very unlikely” to “very likely”) and their modifiability (“not modifiable” to “very modifiable”). Immediate action factors were identified as factors with mean scores of <3.0 on the likelihood and modifiability scales, and represent barriers that should be addressed to increase medication error reporting frequency. Results In 4 nominal group technique sessions, 28 professionals identified factors to include in the survey. The survey was mailed to all 154 professionals in the 4 nursing homes, and 104 (67.5%) responded. Response rates by facility ranged from 55.8% to 92.9%, and rates by profession ranged from 52.0% for physicians to 100% for pharmacists. Most respondents (75.0%) were women. Respondents had worked for a mean of 9.8 years in nursing homes and 5.4 years in their current facility. Of 20 survey items, 14 (70%) had scores that categorized them as immediate action factors, 9 (64%) of which were organizational barriers. Of these factors, the 3 considered most modifiable were (1) lack of a readily available medication error reporting system or forms, (2) lack of information on how to report a medication error

  10. Hope modified the association between distress and incidence of self-perceived medical errors among practicing physicians: prospective cohort study.

    PubMed

    Hayashino, Yasuaki; Utsugi-Ozaki, Makiko; Feldman, Mitchell D; Fukuhara, Shunichi

    2012-01-01

    The presence of hope has been found to influence an individual's ability to cope with stressful situations. The objective of this study is to evaluate the relationship between medical errors, hope, and burnout among practicing physicians using validated metrics. A prospective cohort study was conducted among hospital-based physicians practicing in Japan (N = 836). Measures included the validated Burnout Scale, self-assessment of medical errors, and the Herth Hope Index (HHI). The main outcome measure was the frequency of self-perceived medical errors, and Poisson regression analysis was used to evaluate the association between hope and medical error. A total of 361 errors were reported in 836 physician-years. We observed a significant association between hope and self-report of medical errors. Compared with the lowest tertile category of HHI, incidence rate ratios (IRRs) of self-perceived medical errors were 0.44 (95% CI, 0.34 to 0.58) and 0.54 (95% CI, 0.42 to 0.70) for the 2nd and 3rd tertiles, respectively. In stratified analysis by hope score, among physicians with a low hope score, those who experienced higher burnout reported a higher incidence of errors; physicians with high hope scores did not report high incidences of errors, even if they experienced high burnout. Self-perceived medical errors showed a strong association with physicians' hope, and hope modified the association between physicians' burnout and self-perceived medical errors.

  11. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope-frequency distributions and on two slope-frequency distributions from actual lunar data is examined in order to ensure that these errors do not cause excessive overestimates of the algebraic standard deviations of the slope-frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope-frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.
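    The correction method is not reproduced in the abstract; one standard way to remove a known reading-error contribution is subtraction in quadrature, since independent errors add variances. A sketch under that assumption, with a slope read as an elevation difference over a baseline of length L (all values hypothetical):

```python
import math

def corrected_slope_sd(observed_sd, sigma_read, L):
    # a reading error sigma_read at each end of a baseline L contributes
    # variance 2*sigma_read**2 / L**2 to the measured slope (small-angle);
    # independent errors add in quadrature, so subtract that variance.
    noise_var = 2.0 * sigma_read ** 2 / L ** 2
    return math.sqrt(observed_sd ** 2 - noise_var)

# synthetic check: build an observed s.d. from a known true s.d. plus noise
true_sd, sigma_read, L = 0.10, 0.5, 25.0
observed = math.sqrt(true_sd ** 2 + 2.0 * sigma_read ** 2 / L ** 2)
print(corrected_slope_sd(observed, sigma_read, L))   # recovers 0.10
```

    This also shows the abstract's qualitative point: the noise term scales as 1/L^2, so long slope lengths make the reading-error contribution negligible.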

  12. Machining Error Compensation Based on 3D Surface Model Modified by Measured Accuracy

    NASA Astrophysics Data System (ADS)

    Abe, Go; Aritoshi, Masatoshi; Tomita, Tomoki; Shirase, Keiichi

    Recently, demand for the precision machining of dies and molds with complex shapes has been increasing. Although CNC machine tools are widely used for machining, machining error compensation is still required to meet the increasing demand for machining accuracy. However, machining error compensation is an operation that takes a huge amount of skill, time, and cost. This paper deals with a new method of machining error compensation. The 3D surface data of the machined part are modified according to the machining error measured by a CMM (Coordinate Measuring Machine). A compensated NC program is then generated from the modified 3D surface data to carry out the machining error compensation.
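    The scheme amounts to mirror compensation: machine once, measure the systematic deviation with the CMM, subtract it from the 3D surface data, and regenerate the NC program. A toy sketch with an assumed position-dependent machine error:

```python
import math

def machine(z_target, x):
    # toy machine with a systematic, position-dependent error (assumed model)
    return z_target + 0.01 * math.sin(3.0 * x)

nominal, x = 5.0, 0.7
first_cut = machine(nominal, x)            # initial machining pass
error = first_cut - nominal                # deviation measured by the CMM
second_cut = machine(nominal - error, x)   # NC program from modified surface
print(abs(second_cut - nominal))           # systematic error cancelled
```

    The cancellation is exact only for repeatable (systematic) errors; random spindle or thermal variation between passes is not removed by this mirror step.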

  13. Susceptibility of biallelic haplotype and genotype frequencies to genotyping error.

    PubMed

    Moskvina, Valentina; Schmidt, Karl Michael

    2006-12-01

    With the availability of fast genotyping methods and genomic databases, the search for statistical association of single nucleotide polymorphisms with a complex trait has become an important methodology in medical genetics. However, even fairly rare errors occurring during the genotyping process can lead to spurious association results and a decrease in statistical power. We develop a systematic approach to study how genotyping errors change the genotype distribution in a sample. The general M-marker case is reduced to that of a single-marker locus by recognizing the underlying tensor-product structure of the error matrix. Both the method and the general conclusions apply to the general error model; we give detailed results for allele-based errors of size depending both on the marker locus and the allele present. Multiple errors are treated in terms of the associated diffusion process on the space of genotype distributions. We find that certain genotype and haplotype distributions remain unchanged under genotyping errors, and that genotyping errors generally render the distribution more similar to the stable one. In case-control association studies, this will lead to a loss of statistical power for nondifferential genotyping errors and an increase in type I error for differential genotyping errors. Moreover, we show that allele-based genotyping errors do not disturb Hardy-Weinberg equilibrium in the genotype distribution. In this setting we also identify maximally affected distributions. As they correspond to situations with rare alleles and marker loci in high linkage disequilibrium, careful checking for genotyping errors is advisable when significant association based on such alleles/haplotypes is observed in association studies.
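    The tensor-product structure can be made concrete: if E is the 3x3 single-marker genotype error matrix (rows indexing the true genotype, columns the recorded one), the two-marker error matrix is the Kronecker product E ⊗ E. A sketch with an assumed, illustrative error matrix:

```python
def kron(A, B):
    # Kronecker (tensor) product of two matrices given as nested lists
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

# assumed single-marker error matrix over genotypes (AA, Aa, aa):
# rows index the true genotype, columns the recorded genotype
E = [[0.98, 0.02, 0.00],
     [0.01, 0.98, 0.01],
     [0.00, 0.02, 0.98]]

E2 = kron(E, E)                   # 9x9 error matrix for two markers
print(len(E2), len(E2[0]))        # 9 9
```

    Because each factor is row-stochastic, so is the product, and the M-marker case is just M-fold repetition of this construction, which is what reduces the analysis to a single-marker locus.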

  14. Modified McLeod pressure gage eliminates measurement errors

    NASA Technical Reports Server (NTRS)

    Kells, M. C.

    1966-01-01

    Modification of a McLeod gage eliminates errors in measuring absolute pressure of gases in the vacuum range. A valve which is internal to the gage and is magnetically actuated is positioned between the mercury reservoir and the sample gas chamber.

  15. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
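    The quadrature-sampling core of steps (3)-(4) can be sketched for the simplest case: with the reference frequency already matched to the signal, projecting onto the in-phase and 90-degree-shifted references recovers amplitude and phase by least squares. The signal values are assumptions for illustration, not the patent's estimator:

```python
import math

N = 1000
w = 2.0 * math.pi * 5.0 / N          # reference frequency (integer cycles)
A_true, phi_true = 1.5, 0.7
s = [A_true * math.cos(w * n + phi_true) for n in range(N)]

# mix with the reference and its 90-degree shifted copy, then average:
I = 2.0 * sum(s[n] * math.cos(w * n) for n in range(N)) / N
Q = -2.0 * sum(s[n] * math.sin(w * n) for n in range(N)) / N

A_hat = math.hypot(I, Q)             # amplitude estimate
phi_hat = math.atan2(Q, I)           # phase estimate (would drive the NCO)
print(A_hat, phi_hat)
```

    In the full scheme the phase estimate feeds back to the numerically controlled oscillator so the error signal is driven toward zero, and the residual error doubles as a confidence measure.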

  16. A Study of the Frequency and Communicative Effects of Errors in Spanish

    ERIC Educational Resources Information Center

    Guntermann, Gail

    1978-01-01

    A study conducted in El Salvador was designed to: determine which kinds of errors may be most frequently committed by learners who have reached a basic level of proficiency; discover which high-frequency errors most impede comprehension; and develop a procedure for eliciting evaluational reactions to errors from native listeners. (SW)

  17. Generation of linear frequency modulation signal with reduced round-off error using pulse-output Direct Digital Synthesis technique.

    PubMed

    Peng, Cheng Y; Ma, Xiao C; Yan, She F; Yang, Li

    2014-02-01

    The pulse-output Direct Digital Synthesis (DDS), in which the overflow signal of the phase accumulator is used as the pulse output, can be easily implemented due to its simple hardware architecture and low algorithmic complexity. This paper introduces the fundamentals of generating a Linear Frequency Modulation (LFM) pulse using the pulse-output DDS technique. The mechanisms that introduce errors in the signal's duration, initial phase, and frequency are studied. An extensive analysis of the round-off error is given. A modified hardware architecture for LFM pulse generation with reduced round-off error is proposed. Experimental results are given, which show that the proposed generator is promising for applications such as sonar transmitters.
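    The core of a pulse-output DDS is a phase accumulator whose carry-out is the pulse train; sweeping the frequency tuning word each clock produces the LFM chirp. A behavioral sketch (the word width and tuning values are illustrative, not the paper's architecture):

```python
def dds_pulses(ftw_start, ftw_step, n, width=32):
    # phase accumulator: one output pulse per overflow (carry-out);
    # incrementing the frequency tuning word each clock yields an LFM chirp
    mask = (1 << width) - 1
    acc, ftw, pulses = 0, ftw_start, 0
    for _ in range(n):
        nxt = acc + ftw
        if nxt > mask:               # accumulator overflow -> pulse out
            pulses += 1
        acc = nxt & mask
        ftw = (ftw + ftw_step) & mask
    return pulses

ftw = round(0.1 * 2 ** 32)             # output at 0.1 of the clock rate
print(dds_pulses(ftw, 0, 1000))        # constant tone: 100 pulses
print(dds_pulses(ftw, 2 ** 20, 1000))  # chirp: pulse rate ramps up
```

    Rounding the ideal tuning word to the accumulator width is one source of the round-off error the paper analyzes; the pulse count over n clocks equals the integer part of the accumulated phase in turns.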

  18. Theory of point-spread function artifacts due to structured mid-spatial frequency surface errors.

    PubMed

    Tamkin, John M; Dallas, William J; Milster, Tom D

    2010-09-01

    Optical design and tolerancing of aspheric or free-form surfaces require attention to surface form, structured surface errors, and nonstructured errors. We describe structured surface error profiles and effects on the image point-spread function using harmonic (Fourier) decomposition. Surface errors over the beam footprint map onto the pupil, where multiple structured surface frequencies mix to create sum and difference diffraction orders in the image plane at each field point. Difference frequencies widen the central lobe of the point-spread function and summation frequencies create ghost images.
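    The mixing mechanism is ordinary product-to-sum trigonometry: two pupil ripples at spatial frequencies f1 and f2 multiply to give components at f1 + f2 and f1 - f2. A numerical check with assumed frequencies (a 1-D DFT stand-in for the pupil-to-image mapping):

```python
import cmath, math

N = 256
f1, f2 = 7, 2                        # cycles across the aperture (assumed)
s = [math.cos(2 * math.pi * f1 * n / N) * math.cos(2 * math.pi * f2 * n / N)
     for n in range(N)]

def amp(k):
    # magnitude of DFT bin k, scaled so a unit cosine reads 1.0
    z = sum(s[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    return 2.0 * abs(z) / N

print(amp(f1 + f2), amp(f1 - f2), amp(f1))   # 0.5, 0.5, ~0
```

    All the energy of the product sits at the sum and difference frequencies: in the image plane, the difference order widens the central lobe of the point-spread function while the summation order lands farther out as a ghost.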

  19. Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth

    PubMed Central

    Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan

    2015-01-01

    Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for the presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation), and the most frequently treated tooth was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these, 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was the right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should take greater care to maintain the accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors, and special care should be taken when working on molars. PMID:26347779

  20. Bounding higher-order ionosphere errors for the dual-frequency GPS user

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Walter, T.; Blanch, J.; Enge, P.

    2008-10-01

    Civil signals at L2 and L5 frequencies herald a new phase of Global Positioning System (GPS) performance. Dual-frequency users typically assume a first-order approximation of the ionosphere index of refraction, combining the GPS observables to eliminate most of the ranging delay, on the order of meters, introduced into the pseudoranges. This paper estimates the higher-order group and phase errors that occur from assuming the ordinary first-order dual-frequency ionosphere model using data from the Federal Aviation Administration's Wide Area Augmentation System (WAAS) network on a solar maximum quiet day and an extremely stormy day postsolar maximum. We find that during active periods, when ionospheric storms may introduce slant range delays at L1 as high as 100 m, the higher-order group errors in the L1-L2 or L1-L5 dual-frequency combination can be tens of centimeters. The group and phase errors are no longer equal and opposite, so these errors accumulate in carrier smoothing of the dual-frequency code observable. We show the errors in the carrier-smoothed code are due to higher-order group errors and, to a lesser extent, to higher-order phase rate errors. For many applications, this residual error is sufficiently small as to be neglected. However, such errors can impact geodetic applications as well as the error budgets of GPS Augmentation Systems providing Category III precision approach.
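    The first-order combination that dual-frequency users assume is P_IF = (gamma*P1 - P2)/(gamma - 1) with gamma = (f1/f2)^2; it cancels the 1/f^2 term exactly but leaves the higher-order terms the paper bounds. A sketch with illustrative delay magnitudes (the q and r values are assumptions, not measured WAAS data):

```python
f1, f2 = 1575.42e6, 1227.60e6        # GPS L1 and L2 carrier frequencies (Hz)
gamma = (f1 / f2) ** 2
rho = 2.2e7                          # geometric range, m

def pseudorange(f, q, r):
    # first-order (q/f^2) plus an illustrative higher-order (r/f^3) term
    return rho + q / f ** 2 + r / f ** 3

def iono_free(P1, P2):
    return (gamma * P1 - P2) / (gamma - 1.0)

q = 10.0 * f1 ** 2                   # ~10 m first-order delay at L1 (assumed)
r = 0.02 * f1 ** 3                   # ~2 cm higher-order delay at L1 (assumed)

only_first = iono_free(pseudorange(f1, q, 0.0), pseudorange(f2, q, 0.0))
with_higher = iono_free(pseudorange(f1, q, r), pseudorange(f2, q, r))
print(only_first - rho, with_higher - rho)   # ~0, then a cm-level residual
```

    The first-order term cancels identically, while the 1/f^3 term survives the combination scaled by -f1^2/(f2*(f1+f2)); during storms, when the underlying delays are tens of meters, that residual reaches the tens of centimeters the paper reports.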

  1. Reward prediction error signals associated with a modified time estimation task.

    PubMed

    Holroyd, Clay B; Krigolson, Olave E

    2007-11-01

    The feedback error-related negativity (fERN) is a component of the human event-related brain potential (ERP) elicited by feedback stimuli. A recent theory holds that the fERN indexes a reward prediction error signal associated with the adaptive modification of behavior. Here we present behavioral and ERP data recorded from participants engaged in a modified time estimation task. As predicted by the theory, our results indicate that fERN amplitude reflects a reward prediction error signal and that the size of this error signal is correlated across participants with changes in task performance.

  2. "Coded and Uncoded Error Feedback: Effects on Error Frequencies in Adult Colombian EFL Learners' Writing"

    ERIC Educational Resources Information Center

    Sampson, Andrew

    2012-01-01

    This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…

  3. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor affecting the uncontrolled geometry processing accuracy of high-resolution optical images. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to achieve datum unification and high-precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model in this paper describes the law of low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is clearly improved after the step-wise calibration.

  4. Frequency of medication errors in an emergency department of a large teaching hospital in southern Iran.

    PubMed

    Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid

    2014-01-01

    This study was conducted with the purpose of determining the frequency of medication errors (MEs) occurring in the tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over the course of 54 six-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, the frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 (68.5%) MEs were recorded in total. In other words, 3.5 errors per patient and almost 0.69 errors per medication dose occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process with a share of 37.6%, followed by errors of prescription and transcription with shares of 21.1% and 10%, respectively. Omission (7.6%) and wrong time error (4.4%) were the most frequent administration errors. Less-experienced nurses (P=0.04), a higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors accounted for the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and employing the more experienced among them in EDs can help reduce nursing errors. Addressing the shortcomings with further research should result in a reduction of MEs in EDs. PMID:25525391

  5. Frequency of medication errors in an emergency department of a large teaching hospital in southern Iran

    PubMed Central

    Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid

    2014-01-01

    This study was conducted with the purpose of determining the frequency of medication errors (MEs) occurring in the tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over the course of 54 six-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, the frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 (68.5%) MEs were recorded in total. In other words, 3.5 errors per patient and almost 0.69 errors per medication dose occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process with a share of 37.6%, followed by errors of prescription and transcription with shares of 21.1% and 10%, respectively. Omission (7.6%) and wrong time error (4.4%) were the most frequent administration errors. Less-experienced nurses (P=0.04), a higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors accounted for the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and employing the more experienced among them in EDs can help reduce nursing errors. Addressing the shortcomings with further research should result in a reduction of MEs in EDs. PMID:25525391

  6. Disentangling the impacts of outcome valence and outcome frequency on the post-error slowing.

    PubMed

    Wang, Lijun; Tang, Dandan; Zhao, Yuanfang; Hitchman, Glenn; Wu, Shanshan; Tan, Jinfeng; Chen, Antao

    2015-01-01

    Post-error slowing (PES) reflects efficient outcome monitoring, manifested as slower reaction times after errors. The cognitive control account assumes that PES depends on error information, whereas the orienting account posits that it depends on error frequency. This raises the question of how outcome valence and outcome frequency separately influence the generation of PES. To address this issue, we varied the probability of observation errors (50/50 and 20/80, correct/error) committed by the "partner" in an observation-execution task and investigated the corresponding behavioral and neural effects. On each trial, participants first viewed the outcome of a flanker run that was supposedly performed by a 'partner', and then performed a flanker run themselves. We observed PES in both error rate conditions. However, electroencephalographic data suggested that error-related potentials (oERN and oPe) and the rhythmic oscillation associated with attentional processes (alpha band) were sensitive to outcome valence and outcome frequency, respectively. Importantly, oERN amplitude was positively correlated with PES. Taken together, these findings support the cognitive control account, suggesting that outcome valence and outcome frequency are both involved in PES. Moreover, the generation of PES is indexed by the oERN, whereas the modulation of PES size may be reflected in the alpha band.

  7. Disentangling the impacts of outcome valence and outcome frequency on the post-error slowing

    PubMed Central

    Wang, Lijun; Tang, Dandan; Zhao, Yuanfang; Hitchman, Glenn; Wu, Shanshan; Tan, Jinfeng; Chen, Antao

    2015-01-01

    Post-error slowing (PES) reflects efficient outcome monitoring, manifested as slower reaction times after errors. The cognitive control account assumes that PES depends on error information, whereas the orienting account posits that it depends on error frequency. This raises the question of how outcome valence and outcome frequency separately influence the generation of PES. To address this issue, we varied the probability of observation errors committed by the “partner” (50/50 and 20/80, correct/error) in an observation-execution task and investigated the corresponding behavioral and neural effects. On each trial, participants first viewed the outcome of a flanker run supposedly performed by a ‘partner’, and then performed a flanker run themselves. We observed PES in both error rate conditions. However, electroencephalographic data suggested that the observed error-related potentials (oERN and oPe) and the rhythmic oscillations associated with attentional processes (alpha band) were sensitive to outcome valence and outcome frequency, respectively. Importantly, oERN amplitude was positively correlated with PES. Taken together, these findings support the cognitive control account, suggesting that outcome valence and outcome frequency are both involved in PES. Moreover, the generation of PES is indexed by the oERN, whereas the modulation of PES size may be reflected in the alpha band. PMID:25732237

  8. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of the human body surface; the measured data form the basis for analysis and study of the human body, for the establishment and modification of garment sizing, and for the design and operation of online clothing stores. In this paper, several groups of measured data are collected, and the data errors are analyzed by examining error frequencies and applying the analysis-of-variance method from mathematical statistics. The paper also addresses the accuracy of the measured data and the difficulty of measuring particular parts of the human body, investigates the causes of data errors, and summarizes the key points for minimizing errors. By analyzing measured data on the basis of error frequency, the paper provides reference material for the development of the garment industry.

  9. Analysis of bit error rate for modified T-APPM under weak atmospheric turbulence channel

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Zhang, Qi; Wang, Yong-jun; Liu, Bo; Zhang, Li-jia; Wang, Kai-min; Xiao, Fei; Deng, Chao-gong

    2013-12-01

    T-APPM, which combines TCM (trellis-coded modulation) with APPM (amplitude pulse-position modulation), has broad application prospects in space optical communication. Set partitioning, as used in the standard T-APPM algorithm, is optimal for multi-carrier systems, but whether it remains optimal for APPM, which is a single-carrier scheme, is unknown. To address this question, we first study the atmospheric channel model under weak turbulence; we then propose a modified T-APPM algorithm that, in contrast to the standard algorithm, uses Gray-code mapping instead of set-partitioning mapping; finally, we simulate both algorithms using the Monte Carlo method. Simulation results show that, at a bit error rate of 10^-4, the modified T-APPM algorithm achieves a 0.4-dB gain in SNR, effectively improving the system's error performance.

  10. Reducing the error of geoid undulation computations by modifying Stokes' function

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1980-01-01

    The truncation theory as it pertains to the calculation of geoid undulations based on Stokes' integral, but from limited gravity data, is reexamined. Specifically, the improved procedures of Molodenskii et al. are shown through numerical investigations to yield substantially smaller errors than the conventional method that is often applied in practice. In this improved method, as well as in a simpler alternative to the conventional approach, the Stokes' kernel is suitably modified in order to accelerate the rate of convergence of the error series. These modified methods, however, effect a reduction in the error only if a set of low-degree potential harmonic coefficients is utilized in the computation. Consider, for example, the situation in which gravity anomalies are given in a cap of radius 10 deg and the GEM 9 (20,20) potential field is used. Then, typically, the error in the computed undulation (aside from the spherical approximation and errors in the gravity anomaly data) according to the conventional truncation theory is 1.09 m; with Meissl's modification it reduces to 0.41 m, while Molodenskii's improved method gives 0.45 m. A further alteration of Molodenskii's method is developed and yields an RMS error of 0.33 m. These values reflect the effect of the truncation, as well as the errors in the GEM 9 harmonic coefficients. The considerable improvement, suggested by these results, of the modified methods over the conventional procedure is verified with actual gravity anomaly data in two oceanic regions, where the GEOS-3 altimeter geoid serves as the basis for comparison. The optimal method of truncation, investigated by Colombo, is extremely ill-conditioned. It is shown that with no corresponding regularization, this procedure is inapplicable.

  11. Frequency-domain correction of sensor dynamic error for step response.

    PubMed

    Yang, Shuang-Long; Xu, Ke-Jun

    2012-11-01

    To obtain accurate results in dynamic measurements, sensors must have good dynamic performance. In practice, sensors exhibit non-ideal dynamic characteristics owing to their small damping ratios and low natural frequencies; dynamic error correction methods can therefore be applied to the sensor responses to eliminate the effects of these characteristics. Frequency-domain correction of sensor dynamic error is one such common method. With the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from step-response calibration data, because of the leakage error and invalid FCF values caused by the cyclic extension of the finite-length step input-output data. To solve these problems, data-splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps, as well as the procedure for correcting sensor dynamic error with the calculated FCF, are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind-tunnel strain-gauge balance to verify its effectiveness. After frequency-domain correction, the settling time of the balance step response is shortened to 10 ms (less than 1/30 of its value before correction), and the overshoot falls within 5% (less than 1/10 of its value before correction). The dynamic measurement accuracy of the balance is improved significantly. PMID:23206091

  13. Frequency-domain correction of sensor dynamic error for step response

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Xu, Ke-Jun

    2012-11-01

    To obtain accurate results in dynamic measurements, sensors must have good dynamic performance. In practice, sensors exhibit non-ideal dynamic characteristics owing to their small damping ratios and low natural frequencies; dynamic error correction methods can therefore be applied to the sensor responses to eliminate the effects of these characteristics. Frequency-domain correction of sensor dynamic error is one such common method. With the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from step-response calibration data, because of the leakage error and invalid FCF values caused by the cyclic extension of the finite-length step input-output data. To solve these problems, data-splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps, as well as the procedure for correcting sensor dynamic error with the calculated FCF, are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind-tunnel strain-gauge balance to verify its effectiveness. After frequency-domain correction, the settling time of the balance step response is shortened to 10 ms (less than 1/30 of its value before correction), and the overshoot falls within 5% (less than 1/10 of its value before correction). The dynamic measurement accuracy of the balance is improved significantly.

  14. Optical millimeter-wave generation with modified frequency quadrupling scheme

    NASA Astrophysics Data System (ADS)

    Zhao, Shanghong; Zhu, Zihang; Li, Yongjun; Chu, Xingchun; Li, Xuan

    2013-11-01

    A dispersion-tolerant full-duplex radio-over-fiber (RoF) system is proposed, based on modified quadrupling-frequency optical millimeter (mm)-wave generation using an integrated nested Mach-Zehnder modulator (MZM), an electrical phase modulator, and an electrical gain stage. The scheme not only reduces the cost and complexity of the base station by reusing the downlink optical carrier; the generated optical mm-wave signal, with the baseband data carried only by the first-order sideband, also overcomes both the fading effect and the bit walk-off effect caused by fiber dispersion. Simulation results show that the eye diagram remains open and clear even when the quadrupling-frequency optical mm-wave is transmitted over 120 km of single-mode fiber, and that bidirectional 2.5-Gbit/s data are successfully transmitted over 40 km on both upstream and downstream channels with <1-dB power penalty.

  15. Inverse Material Identification in Coupled Acoustic-Structure Interaction using a Modified Error in Constitutive Equation Functional

    PubMed Central

    Warner, James E.; Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc

    2014-01-01

    This work focuses on the identification of heterogeneous linear elastic moduli in the context of frequency-domain, coupled acoustic-structure interaction (ASI), using either solid displacement or fluid pressure measurement data. The approach postulates the inverse problem as an optimization problem where the solution is obtained by minimizing a modified error in constitutive equation (MECE) functional. The latter measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, while incorporating the measurement data as additional quadratic error terms. We demonstrate two strategies for selecting the MECE weighting coefficient to produce regularized solutions to the ill-posed identification problem: 1) the discrepancy principle of Morozov, and 2) an error-balance approach that selects the weight parameter as the minimizer of another functional involving the ECE and the data misfit. Numerical results demonstrate that the proposed methodology can successfully recover elastic parameters in 2D and 3D ASI systems from response measurements taken in either the solid or fluid subdomains. Furthermore, both regularization strategies are shown to produce accurate reconstructions when the measurement data is polluted with noise. The discrepancy principle is shown to produce nearly optimal solutions, while the error-balance approach, although not optimal, remains effective and does not need a priori information on the noise level. PMID:25339790

  16. Inverse material identification in coupled acoustic-structure interaction using a modified error in constitutive equation functional

    NASA Astrophysics Data System (ADS)

    Warner, James E.; Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc

    2014-09-01

    This work focuses on the identification of heterogeneous linear elastic moduli in the context of frequency-domain, coupled acoustic-structure interaction (ASI), using either solid displacement or fluid pressure measurement data. The approach postulates the inverse problem as an optimization problem where the solution is obtained by minimizing a modified error in constitutive equation (MECE) functional. The latter measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, while incorporating the measurement data as additional quadratic error terms. We demonstrate two strategies for selecting the MECE weighting coefficient to produce regularized solutions to the ill-posed identification problem: 1) the discrepancy principle of Morozov, and 2) an error-balance approach that selects the weight parameter as the minimizer of another functional involving the ECE and the data misfit. Numerical results demonstrate that the proposed methodology can successfully recover elastic parameters in 2D and 3D ASI systems from response measurements taken in either the solid or fluid subdomains. Furthermore, both regularization strategies are shown to produce accurate reconstructions when the measurement data is polluted with noise. The discrepancy principle is shown to produce nearly optimal solutions, while the error-balance approach, although not optimal, remains effective and does not need a priori information on the noise level.

  17. Online public reactions to frequency of diagnostic errors in US outpatient care

    PubMed Central

    Giardina, Traber Davis; Sarkar, Urmimala; Gourley, Gato; Modi, Varsha; Meyer, Ashley N.D.; Singh, Hardeep

    2016-01-01

    Background Diagnostic errors pose a significant threat to patient safety but little is known about public perceptions of diagnostic errors. A study published in BMJ Quality & Safety in 2014 estimated that diagnostic errors affect at least 5% of US adults (or 12 million) per year. We sought to explore online public reactions to media reports on the reported frequency of diagnostic errors in the US adult population. Methods We searched the World Wide Web for any news article reporting findings from the study. We then gathered all the online comments made in response to the news articles to evaluate public reaction to the newly reported diagnostic error frequency (n=241). Two coders conducted content analyses of the comments and an experienced qualitative researcher resolved differences. Results Overall, there were few comments made regarding the frequency of diagnostic errors. However, in response to the media coverage, 44 commenters shared personal experiences of diagnostic errors. Additionally, commentary centered on diagnosis-related quality of care as affected by two emergent categories: (1) US health care providers (n=79; 63 commenters) and (2) US health care reform-related policies, most commonly the Affordable Care Act (ACA) and insurance/reimbursement issues (n=62; 47 commenters). Conclusion The public appears to have substantial concerns about the impact of the ACA and other reform initiatives on the diagnosis-related quality of care. However, policy discussions on diagnostic errors are largely absent from the current national conversation on improving quality and safety. Because outpatient diagnostic errors have emerged as a major safety concern, researchers and policymakers should consider evaluating the effects of policy and practice changes on diagnostic accuracy. PMID:27347474

  18. Comparison of Aseptic Compounding Errors Before and After Modified Laboratory and Introductory Pharmacy Practice Experiences

    PubMed Central

    Owora, Arthur H.; Kirkpatrick, Alice E.

    2015-01-01

    Objective. To determine whether aseptic compounding errors were reduced at the end of the third professional year after modifying pharmacy practice laboratories and implementing an institutional introductory pharmacy practice experience (IPPE). Design. An aseptic compounding laboratory, previously occurring during the third-year spring semester, was added to the second-year spring semester. An 80-hour institutional IPPE was also added in the summer between the second and third years. Instructors recorded aseptic compounding errors using a grading checklist for second-year and third-year student assessments. Third-year student aseptic compounding errors were assessed prior to the curricular changes and for 2 subsequent years for students on the Oklahoma City and Tulsa campuses of the University of Oklahoma. Assessment. Both third-year cohorts committed fewer aseptic technique errors than they did during their second years, and the probability was significantly lower for students on the Oklahoma City campus. The probability of committing major aseptic technique errors was significantly lower for 2 consecutive third-year cohorts after the curricular changes. Conclusion. The addition of second-year aseptic compounding laboratory experiences and third-year institutional IPPE content reduced instructor-assessed errors at the end of the third year. PMID:26889070

  19. Design methodology accounting for fabrication errors in manufactured modified Fresnel lenses for controlled LED illumination.

    PubMed

    Shim, Jongmyeong; Kim, Joongeok; Lee, Jinhyung; Park, Changsu; Cho, Eikhyun; Kang, Shinill

    2015-07-27

    The increasing demand for lightweight, miniaturized electronic devices has prompted the development of small, high-performance optical components for light-emitting diode (LED) illumination. As such, the Fresnel lens is widely used in applications due to its compact configuration. However, the vertical groove angle between the optical axis and the groove inner facets in a conventional Fresnel lens creates an inherent Fresnel loss, which degrades optical performance. Modified Fresnel lenses (MFLs) have been proposed in which the groove angles along the optical paths are carefully controlled; however, in practice, the optical performance of MFLs is inferior to the theoretical performance due to fabrication errors, as conventional design methods do not account for fabrication errors as part of the design process. In this study, the Fresnel loss and the loss area due to microscopic fabrication errors in the MFL were theoretically derived to determine optical performance. Based on this analysis, a design method for the MFL accounting for the fabrication errors was proposed. MFLs were fabricated using an ultraviolet imprinting process and an injection molding process, two representative processes with differing fabrication errors. The MFL fabrication error associated with each process was examined analytically and experimentally to investigate our methodology. PMID:26367631

  20. Random Numbers Demonstrate the Frequency of Type I Errors: Three Spreadsheets for Class Instruction

    ERIC Educational Resources Information Center

    Duffy, Sean

    2010-01-01

    This paper describes three spreadsheet exercises demonstrating the nature and frequency of type I errors using random number generation. The exercises are designed specifically to address issues related to testing multiple relations using correlation (Demonstration I), t tests varying in sample size (Demonstration II) and multiple comparisons…
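
    The same point the spreadsheets make can be sketched in code (an illustrative translation under stated assumptions, not the paper's materials): when two samples come from the same distribution, a nominal alpha = 0.05 t test still "rejects" about 5% of the time.

```python
import random

# Demonstration of Type I error frequency: repeatedly t-test two samples
# drawn from the SAME normal distribution. Every "significant" result is a
# false positive; the long-run rate approaches alpha (~5% here, using the
# large-sample two-sided critical value |t| > 1.96 as an approximation).

random.seed(1)

def t_stat(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)

trials, false_positives = 2000, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    if abs(t_stat(a, b)) > 1.96:          # nominal alpha = 0.05, two-sided
        false_positives += 1

print(false_positives / trials)           # close to 0.05
```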

  1. Correction of electrode modelling errors in multi-frequency EIT imaging.

    PubMed

    Jehl, Markus; Holder, David

    2016-06-01

    The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.

  2. Effects of structured mid-spatial frequency surface errors on image performance.

    PubMed

    Tamkin, John M; Milster, Tom D

    2010-11-20

    Optical designers are encouraged to adopt aspheric and free-form surfaces into an increasing number of design spaces because of their improved performance. However, residual tooling marks from advanced aspheric fabrication techniques are difficult to remove. These marks, typically in the mid-spatial frequency (MSF) regime, give rise to structured image artifacts. Using a theory developed in previous publications, this paper applies the fundamentals of MSF modeling to demonstrate how MSF errors are evaluated and toleranced in an optical system. Examples of as-built components with MSF errors are analyzed using commercial optical design software.

  3. Where is the effect of frequency in word production? Insights from aphasic picture naming errors

    PubMed Central

    Kittredge, Audrey K.; Dell, Gary S.; Verkuilen, Jay; Schwartz, Myrna F.

    2010-01-01

    Some theories of lexical access in production locate the effect of lexical frequency at the retrieval of a word’s phonological characteristics, as opposed to the prior retrieval of a holistic representation of the word from its meaning. Yet there is evidence from both normal and aphasic individuals that frequency may influence both of these retrieval processes. This inconsistency is especially relevant in light of recent attempts to determine the representation of another lexical property, age of acquisition or AoA, whose effect is similar to that of frequency. To further explore the representations of these lexical variables in the word retrieval system, we performed hierarchical, multinomial logistic regression analyses of 50 aphasic patients’ picture-naming responses. While both log frequency and AoA had a significant influence on patient accuracy and led to fewer phonologically related errors and omissions, only log frequency had an effect on semantically related errors. These results provide evidence for a lexical access process sensitive to frequency at all stages, but with AoA having a more limited effect. PMID:18704797

  4. Estimate error of frequency-dependent Q introduced by linear regression and its nonlinear implementation

    NASA Astrophysics Data System (ADS)

    Li, Guofa; Huang, Wei; Zheng, Hao; Zhang, Baoqing

    2016-02-01

    The spectral ratio method (SRM) is widely used to estimate the quality factor Q via linear regression of seismic attenuation under the assumption of a constant Q. However, an estimation error is introduced when this assumption is violated. For frequency-dependent Q described by a power-law function, we derived the analytical expression of the estimation error as a function of the power-law exponent γ and the ratio σ of the bandwidth to the central frequency. Based on this theoretical analysis, we found that the estimation errors are dominated mainly by the exponent γ and are less affected by the ratio σ. This implies that the accuracy of the Q estimate can hardly be improved by adjusting the width and range of the frequency band. Hence, we propose a two-parameter regression method to estimate frequency-dependent Q from nonlinear seismic attenuation. The proposed method was tested using direct waves acquired by a near-surface cross-hole survey, and its reliability was evaluated by comparison with the result of the SRM.
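
    As a minimal sketch of the SRM baseline that the paper analyzes (assuming a constant Q and synthetic spectra; this is not the authors' two-parameter method): the log spectral ratio ln[A2(f)/A1(f)] = -pi*f*dt/Q + c is linear in frequency f, so an ordinary least-squares slope m yields Q = -pi*dt/m.

```python
import math

# Spectral ratio method (SRM) sketch for a constant Q: fit a line to the log
# spectral ratio versus frequency and convert the slope to Q. When Q actually
# varies with frequency, this linear fit is biased -- the error the paper
# quantifies. Function name and synthetic scenario are illustrative.

def estimate_q(freqs, a1, a2, dt):
    """Least-squares SRM estimate of Q from spectra a1, a2 separated by dt seconds."""
    y = [math.log(s2 / s1) for s1, s2 in zip(a1, a2)]
    fm = sum(freqs) / len(freqs)
    ym = sum(y) / len(y)
    slope = sum((f - fm) * (v - ym) for f, v in zip(freqs, y)) \
        / sum((f - fm) ** 2 for f in freqs)
    return -math.pi * dt / slope

# Synthetic check: amplitudes attenuated by a true constant Q of 100 over a
# travel-time difference of 0.5 s are recovered exactly.
q_true, dt = 100.0, 0.5
freqs = [10.0 + 2.0 * i for i in range(30)]
a1 = [1.0] * len(freqs)
a2 = [math.exp(-math.pi * f * dt / q_true) for f in freqs]
print(estimate_q(freqs, a1, a2, dt))      # recovers ~100.0
```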

  5. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

    NASA Technical Reports Server (NTRS)

    Elander, Valjean; Koshak, William; Phanord, Dieudonne

    2004-01-01

    The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing (as well as potential future ZEUS sites) to simulate arrival time data between source and ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."

  6. Robust nonstationary jammer mitigation for GPS receivers with instantaneous frequency error tolerance

    NASA Astrophysics Data System (ADS)

    Wang, Ben; Zhang, Yimin D.; Qin, Si; Amin, Moeness G.

    2016-05-01

    In this paper, we propose a nonstationary jammer suppression method for GPS receivers when the signals are sparsely sampled. Missing data samples induce noise-like artifacts in the time-frequency (TF) distribution and ambiguity function of the received signals, which lead to reduced capability and degraded performance in jammer signature estimation and excision. In the proposed method, a data-dependent TF kernel is utilized to mitigate the artifacts, and sparse reconstruction methods are then applied to obtain instantaneous frequency (IF) estimates of the jammers. In addition, an error tolerance is applied to the IF estimate to achieve robust jammer suppression performance in the presence of IF estimation inaccuracy.

  7. Compensation of body shake errors in terahertz beam scanning single frequency holography for standoff personnel screening

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Chao; Sun, Zhao-Yang; Zhao, Yu; Wu, Shi-You; Fang, Guang-You

    2016-08-01

    In the terahertz (THz) band, the inherent shake of the human body may strongly impair the image quality of a beam scanning single frequency holography system for personnel screening. Realizing accurate shake compensation in imaging processing would normally require a high-precision measurement system; however, in many cases different parts of a human body shake to different extents, greatly increasing the difficulty of reasonably measuring body shake errors for image reconstruction. In this paper, a body shake error compensation algorithm based on the raw data is proposed. To analyze the effect of body shake on the raw data, a model of the echoed signal is rebuilt that accounts for both the beam scanning mode and the body shake. From the rebuilt signal model, we derive a body-shake error estimation method to compensate for the phase error. Simulations on the reconstruction of point targets with shake errors and proof-of-principle experiments on the human body in the 0.2-THz band both confirm the effectiveness of the proposed body shake compensation algorithm. Project supported by the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant No. YYYJ-1123).

  8. PULSAR TIMING ERRORS FROM ASYNCHRONOUS MULTI-FREQUENCY SAMPLING OF DISPERSION MEASURE VARIATIONS

    SciTech Connect

    Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Dolch, T.

    2015-03-10

    Free electrons in the interstellar medium cause frequency-dependent delays in pulse arrival times due to both scattering and dispersion. Multi-frequency measurements are used to estimate and remove dispersion delays. In this paper, we focus on the effect of any non-simultaneity of multi-frequency observations on dispersive delay estimation and removal. Interstellar density variations combined with changes in the line of sight from pulsar and observer motions cause dispersion measure (DM) variations with an approximately power-law power spectrum, augmented in some cases by linear trends. We simulate time series, estimate the magnitude and statistical properties of timing errors that result from non-simultaneous observations, and derive prescriptions for data acquisition that are needed in order to achieve a specified timing precision. For nearby, highly stable pulsars, measurements need to be simultaneous to within about one day in order for the timing error from asynchronous DM correction to be less than about 10 ns. We discuss how timing precision improves when increasing the number of dual-frequency observations used in DM estimation for a given epoch. For a Kolmogorov wavenumber spectrum, we find about a factor of two improvement in precision timing when increasing from two to three observations but diminishing returns thereafter.
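
    The dispersion relation underlying the paper can be sketched as follows, using the standard cold-plasma delay formula t(f) = K * DM / f^2; the numerical scenario (frequencies, DM value, drift size) is illustrative, not taken from the paper:

```python
# Why DM estimation needs multi-frequency data: the dispersive delay scales as
# f^-2, so two arrival times at different frequencies determine the dispersion
# measure (DM). A small DM error dDM then maps into a residual timing error
# K * dDM / f^2 at the observing frequency. Scenario values are illustrative.

K = 4.148808e3          # dispersion constant, s MHz^2 pc^-1 cm^3

def dispersion_delay(dm, f_mhz):
    """Dispersive delay in seconds at frequency f_mhz (MHz) for DM dm (pc cm^-3)."""
    return K * dm / f_mhz**2

def dm_from_two_bands(t1, f1, t2, f2):
    """Infer DM from arrival times t1, t2 (s) at frequencies f1, f2 (MHz)."""
    return (t1 - t2) / (K * (f1**-2 - f2**-2))

dm_true = 30.0                            # pc cm^-3
t1 = dispersion_delay(dm_true, 430.0)     # low band
t2 = dispersion_delay(dm_true, 1400.0)    # high band
print(dm_from_two_bands(t1, 430.0, t2, 1400.0))   # recovers 30.0

# If the low-band epoch samples a DM that has drifted by 1e-4 pc cm^-3
# relative to the high-band epoch, the residual timing error at 1400 MHz is
# on the order of dispersion_delay(1e-4, 1400.0), i.e. a few hundred ns --
# far above a 10-ns timing goal.
print(dispersion_delay(1e-4, 1400.0))
```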

  9. Knowledge of results for motor learning: relationship between error estimation and knowledge of results frequency.

    PubMed

    Guadagnoli, M A; Kohl, R M

    2001-06-01

    The authors of the present study investigated the apparent contradiction between early and more recent views of knowledge of results (KR), the idea that how one is engaged before receiving KR may not be independent of how one uses that KR. In a 2 × 2 factorial design, participants (N = 64) practiced a simple force-production task and (a) were required, or not required, to estimate error about their previous response and (b) were provided KR either after every response (100%) or after every 5th response (20%) during acquisition. A no-KR retention test revealed an interaction between acquisition error estimation and KR frequencies. The group that received 100% KR and was required to estimate error during acquisition performed the best during retention. The 2 groups that received 20% KR performed less well. Finally, the group that received 100% KR and was not required to estimate error during acquisition performed the poorest during retention. One general interpretation of that pattern of results is that motor learning is an increasing function of the degree to which participants use KR to test response hypotheses (J. A. Adams, 1971; R. A. Schmidt, 1975). Practicing simple responses coupled with error estimation may embody response hypotheses that can be tested with KR, thus benefiting motor learning most under a 100% KR condition. Practicing simple responses without error estimation is less likely to embody response hypotheses, however, which may increase the probability that participants will use KR to guide upcoming responses, thus attenuating motor learning under a 100% KR condition. The authors conclude, therefore, that how one is engaged before receiving KR may not be independent of how one uses KR. PMID:11404216

  10. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    SciTech Connect

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
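    The effect described above can be illustrated numerically (a toy model, not the paper's closed-form solution): a sinusoidal Siemens star has intensity I(theta) = 0.5*(1 + sin(N*theta)) about its true center, and extracting a circular profile about a misidentified center smears the phase N*theta, lowering the measured modulation and hence the apparent SFR.

```python
import numpy as np

def measured_contrast(n_cycles, radius, center_offset, samples=4096):
    """Modulation of the n_cycles harmonic of a circular profile extracted
    about a center displaced by center_offset (pixels) along x."""
    phi = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    # Points on a circle about the (possibly wrong) center.
    x = center_offset + radius * np.cos(phi)
    y = radius * np.sin(phi)
    theta = np.arctan2(y, x)               # true azimuth seen from the star
    intensity = 0.5 * (1.0 + np.sin(n_cycles * theta))
    # Amplitude of the n_cycles harmonic of the extracted profile.
    coeff = np.abs(np.sum(intensity * np.exp(-1j * n_cycles * phi))) / samples
    return 2.0 * coeff                     # modulation amplitude (max 0.5)

centered = measured_contrast(36, 100.0, 0.0)
off_center = measured_contrast(36, 100.0, 5.0)   # 5-pixel center error
```

    Even a 5% center error collapses the measured modulation of a 36-cycle star, consistent with the paper's motivation for a center-correction step.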

  12. Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint

    SciTech Connect

    Florita, A.; Hodge, B. M.; Milligan, M.

    2012-08-01

    The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
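    A hand-rolled sketch of the distribution-fitting step described above, on synthetic data (illustrative only; the study uses the WWSIS database and formal goodness-of-fit metrics): forecast errors are often heavier-tailed than Gaussian, so candidate distributions are compared via a Kolmogorov-Smirnov-style distance. Location is assumed known (zero) for brevity.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
# Stand-in forecast-error series, deliberately heavy-tailed (Laplace).
errors = np.sort(rng.laplace(loc=0.0, scale=0.08, size=5000))
ecdf = np.arange(1, errors.size + 1) / errors.size

def ks_distance(cdf_vals):
    """Max absolute gap between a model CDF and the empirical CDF."""
    return float(np.max(np.abs(cdf_vals - ecdf)))

sigma = errors.std()                        # Gaussian scale (MLE, zero mean)
b = np.mean(np.abs(errors))                 # Laplace scale (MLE, zero mean)
norm_cdf = np.array([0.5 * (1 + erf(x / (sigma * np.sqrt(2)))) for x in errors])
lap_cdf = np.where(errors < 0, 0.5 * np.exp(errors / b),
                   1 - 0.5 * np.exp(-errors / b))
ks_norm = ks_distance(norm_cdf)
ks_laplace = ks_distance(lap_cdf)           # smaller = better fit
```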

  13. A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design

    ERIC Educational Resources Information Center

    Wang, Tianyou; Brennan, Robert L.

    2009-01-01

    Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…

  14. Influence of nonhomogeneous earth on the rms phase error and beam-pointing errors of large, sparse high-frequency receiving arrays

    NASA Astrophysics Data System (ADS)

    Weiner, M. M.

    1994-01-01

    The performance of ground-based high-frequency (HF) receiving arrays is reduced when the array elements have electrically small ground planes. The array rms phase error and beam-pointing errors, caused by multipath rays reflected from a nonhomogeneous Earth, are determined for a sparse array of elements that are modeled as Hertzian dipoles in close proximity to Earth with no ground planes. Numerical results are presented for cases of randomly distributed and systematically distributed Earth nonhomogeneities, where one-half of the vertically polarized array elements are located in proximity to one type of Earth and the remaining half are located in proximity to a second type. The maximum rms phase errors, for the cases examined, are 18 deg and 9 deg for randomly distributed and systematically distributed nonhomogeneities, respectively. The maximum beam-pointing errors are 0 and 0.3 beam widths for randomly distributed and systematically distributed nonhomogeneities, respectively.

  15. Frequency Domain Analysis of Errors in Cross-Correlations of Ambient Seismic Noise

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri

    2016-09-01

    We analyze random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, which differs from previous time-domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency, using non-overlapping windows of fixed length. The extended theory also connects the amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data, corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of a noise cross-correlation in the frequency domain, without specifying the filter bandwidth or signal/noise windows that are needed for time-domain SNR estimates. Based on synthetic ambient noise data, we also compare the probability distributions, causal-part amplitude, and SNR of the stacked cross-spectrum obtained with one-bit normalization or pre-whitening against those obtained without these preprocessing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing preprocessing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of the amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (˜ 35 km) and a dense linear array (˜ 20 m) across the plate-boundary faults. A block bootstrap resampling method
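    The windowed stacking described above can be sketched as follows (a minimal illustration on synthetic data, not the authors' code): split two synchronized records into non-overlapping windows of fixed length, form the cross-spectrum per window, then estimate the stacked mean and its per-frequency variance across windows, from which a frequency-domain SNR follows without any time-domain windowing.

```python
import numpy as np

def stacked_cross_spectrum(x, y, win_len):
    """Mean cross-spectrum over non-overlapping windows, plus the variance
    of that mean at each frequency (scatter across windows / n_win)."""
    n_win = len(x) // win_len
    segs_x = x[: n_win * win_len].reshape(n_win, win_len)
    segs_y = y[: n_win * win_len].reshape(n_win, win_len)
    Sxy = np.fft.rfft(segs_x, axis=1) * np.conj(np.fft.rfft(segs_y, axis=1))
    return Sxy.mean(axis=0), Sxy.var(axis=0) / n_win

rng = np.random.default_rng(1)
common = rng.standard_normal(32768)            # shared "coherent" signal
x = common + 0.5 * rng.standard_normal(32768)
y = common + 0.5 * rng.standard_normal(32768)
mean, var = stacked_cross_spectrum(x, y, 1024)
snr = np.abs(mean) / np.sqrt(var)              # crude frequency-domain SNR
```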

  16. Demonstration of the frequency offset errors introduced by an incorrect setting of the Zeeman/magnetic field adjustment on the cesium beam frequency standard

    NASA Technical Reports Server (NTRS)

    Kaufmann, D. C.

    1976-01-01

    The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in ten to the tenth. The effects of maladjustment are demonstrated and suggestions are discussed on how to avoid the subtle traps associated with C field adjustments.

  17. Analysis of 454 sequencing error rate, error sources, and artifact recombination for detection of Low-frequency drug resistance mutations in HIV-1 DNA

    PubMed Central

    2013-01-01

    Background 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results We constructed two HIV-1 RT clones. Clone A was a wild type sequence. Clone B was identical to clone A except it contained 13 introduced drug resistant mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Although less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. 

  18. Flood Frequency Analyses Using a Modified Stochastic Storm Transposition Method

    NASA Astrophysics Data System (ADS)

    Fang, N. Z.; Kiani, M.

    2015-12-01

    Research shows that areas with similar topography and climatic environment have comparable precipitation occurrences. Reproduction and realization of historical rainfall events provide foundations for frequency analysis and the advancement of meteorological studies. Stochastic Storm Transposition (SST) is a method for such a purpose and enables us to perform hydrologic frequency analyses by transposing observed historical storm events to the sites of interest. However, many previous studies in SST reveal drawbacks from simplified Probability Density Functions (PDFs) that do not consider restrictions on transposing rainfall. The goal of this study is to stochastically examine the impacts of extreme events on all locations in a homogeneity zone. Since storms with the same probability of occurrence in homogeneous areas do not have identical hydrologic impacts, the authors utilize detailed precipitation parameters, including the probability of occurrence of a certain depth and the number of occurrences of extreme events, which are both incorporated into a joint probability function. The new approach can reduce the bias from uniformly transposing storms, which erroneously increases the probability of occurrence of storms in areas with higher rainfall depths. This procedure is iterated to simulate storm events for one thousand years as the basis for updating frequency analysis curves such as IDF and FFA. The study area is the Upper Trinity River watershed, including the Dallas-Fort Worth metroplex, with a total area of 6,500 mi². It is the first time that the SST method has been examined at such a wide scale with 20 years of radar rainfall data.

  19. A Research on Errors in Two-way Satellite Time and Frequency Transfer

    NASA Astrophysics Data System (ADS)

    Wu, W. J.

    2013-07-01

    The two-way satellite time and frequency transfer (TWSTFT) is one of the most accurate means of remote clock comparison, with an uncertainty in time of less than 1 ns and a relative uncertainty in frequency of about 10^{-14} d^{-1}. The transmission paths of signals between two stations are almost symmetrical in TWSTFT. In principle, most path delays cancel out, which guarantees the high accuracy of TWSTFT. With the development of TWSTFT and the increase in the frequency of observations, it has been shown that the diurnal variation of systematic errors is about 1˜3 ns in TWSTFT. This problem has become a hot topic of research around the world. By using the data of the Transfer Satellite Orbit Determination Net (TSODN) and international TWSTFT links, the systematic errors are studied in detail as follows: (1) The atmospheric effect. This includes ionospheric and tropospheric effects. The tropospheric effect is very small, and it can be ignored. The ionospheric error can be corrected by using the IGS ionosphere product. The variations of the ionospheric effect are about 0˜0.05 ns and 0˜0.7 ns at KU band and C band, respectively, and show diurnal variation. (2) The equipment time delay. The equipment delay is closely related to temperature, presenting a linear relation at normal temperature. Its outdoor part shows diurnal variation with the environmental temperature. The various effects related to the modem are studied, and some resolutions are proposed. (3) The satellite transponder effect. This effect is studied by using the data of international TWSTFT links. It is found that different satellite transponders can greatly increase the amplitude of the diurnal variation in one TWSTFT link. This is the major cause of the diurnal variation in TWSTFT. The function fitting method is used to basically solve this problem. (4) The satellite motion effect. The geostationary
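    The path-cancellation property mentioned above follows from the basic two-way combination (textbook relation, not the paper's processing chain). Each station measures the interval between its own clock pulse and reception of the remote signal; half the difference of the two intervals cancels any delay common to both directions, leaving only the path asymmetry as an error.

```python
# Sign convention assumed here: TI_A = (clk_B - clk_A) + d_BA is measured at
# station A, TI_B = (clk_A - clk_B) + d_AB at station B, with d_AB, d_BA the
# one-way path delays. All quantities in seconds.

def clock_offset(ti_a, ti_b):
    """Estimate clk_A - clk_B from the two measured intervals."""
    return (ti_b - ti_a) / 2.0

# Toy numbers: ~1 ms symmetric path, 250 ns true offset, 2 ns asymmetry.
true_offset = 250e-9
d_ab, d_ba = 1.000001e-3, 1.000003e-3          # 2 ns path asymmetry
ti_a = (-true_offset) + d_ba
ti_b = (+true_offset) + d_ab
est = clock_offset(ti_a, ti_b)
err = est - true_offset        # residual error = half the path asymmetry
```

    The ~1 ms of common path delay drops out entirely; only half of the 2 ns asymmetry survives, which is why diurnally varying asymmetries (ionosphere, equipment, transponder) dominate the error budget.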

  20. Effect of Discourse Context and Modifier Relation Frequency on Conceptual Combination

    ERIC Educational Resources Information Center

    Gagne, Christina L.; Spalding, Thomas L.

    2004-01-01

    The present experiments investigate the influence of modifier relation frequency and discourse context on the interpretation of novel noun-noun phrases (as measured by both the ease of interpretation and the types of interpretations that are provided). We assess whether people access knowledge about the relations with which the modifier is…

  1. Error analysis for intrinsic quality factor measurement in superconducting radio frequency resonators

    NASA Astrophysics Data System (ADS)

    Melnychuk, O.; Grassellino, A.; Romanenko, A.

    2014-12-01

    In this paper, we discuss error analysis for intrinsic quality factor (Q0) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27]. Applying this approach to cavity data collected at Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q0 and Eacc to be at the level of approximately 4% for input coupler coupling parameter β1 in the [0.5, 2.5] range. Above 2.5 (below 0.5) Q0 uncertainty increases (decreases) with β1 whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27], is independent of β1. Overall, our estimated Q0 uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27].
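    A generic sketch of the kind of propagation involved (illustrative only; the variable set and the neglect of correlations differ from the paper's more complete treatment): with the loaded quality factor from the decay time, QL = omega*tau, and Q0 = QL*(1 + beta1), small relative errors combine in quadrature with log-log sensitivity coefficients, obtained here by finite differences.

```python
import numpy as np

def q0(omega, tau, beta1):
    """Simplified cavity relation Q0 = omega * tau * (1 + beta1)."""
    return omega * tau * (1.0 + beta1)

def relative_uncertainty(f, args, rel_sigmas, h=1e-6):
    """First-order, uncorrelated propagation:
    sqrt(sum_i (dln f / dln x_i * rel_sigma_i)^2)."""
    base = f(*args)
    total = 0.0
    for i, (x, s) in enumerate(zip(args, rel_sigmas)):
        pert = list(args)
        pert[i] = x * (1.0 + h)
        dlnf_dlnx = (f(*pert) - base) / (base * h)   # log-log sensitivity
        total += (dlnf_dlnx * s) ** 2
    return np.sqrt(total)

# omega (1.3 GHz cavity), tau (s), beta1; 2% on tau, 5% on beta1.
args = (2 * np.pi * 1.3e9, 1.0, 1.2)
rel = relative_uncertainty(q0, args, (0.0, 0.02, 0.05))
```

    Note the beta1 sensitivity is beta1/(1+beta1) < 1, so coupling-parameter error is partially suppressed, qualitatively consistent with the beta1-dependence discussed in the paper.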

  2. Joint Impact of Frequency Synchronization Errors and Intermodulation Distortion on the Performance of Multicarrier DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Rugini, Luca; Banelli, Paolo

    2005-12-01

    The performance of multicarrier systems is highly impaired by intercarrier interference (ICI) due to frequency synchronization errors at the receiver and by intermodulation distortion (IMD) introduced by a nonlinear amplifier (NLA) at the transmitter. In this paper, we evaluate the bit-error rate (BER) of multicarrier direct-sequence code-division multiple-access (MC-DS-CDMA) downlink systems subject to these impairments in frequency-selective Rayleigh fading channels, assuming quadrature amplitude modulation (QAM). The analytical findings make it possible to establish the sensitivity of MC-DS-CDMA systems to carrier frequency offset (CFO) and NLA distortions, to identify the maximum CFO that is tolerable at the receiver side in different scenarios, and to determine the optimum value of the NLA output power backoff for a given CFO. Simulation results show that the approximated analysis is quite accurate under several conditions.

  3. Theory of modulation transfer function artifacts due to mid-spatial-frequency errors and its application to optical tolerancing.

    PubMed

    Tamkin, John M; Milster, Tom D; Dallas, William

    2010-09-01

    Aspheric and free-form surfaces are powerful surface forms that allow designers to achieve better performance with fewer lenses and smaller packages. Unlike spheres, these surfaces are fabricated with processes that leave a signature, or "structure," that is primarily in the mid-spatial-frequency region. These structured surface errors create ripples in the modulation transfer function (MTF) profile. Using Fourier techniques with generalized functions, the drop in MTF is derived and shown to exhibit a nonlinear relationship with the peak-to-valley height of the structured surface error.

  4. Time-frequency representation of a highly nonstationary signal via the modified Wigner distribution

    NASA Technical Reports Server (NTRS)

    Zoladz, T. F.; Jones, J. H.; Jong, J.

    1992-01-01

    A new signal analysis technique called the modified Wigner distribution (MWD) is presented. The new signal processing tool has been very successful in determining time frequency representations of highly non-stationary multicomponent signals in both simulations and trials involving actual Space Shuttle Main Engine (SSME) high frequency data. The MWD departs from the classic Wigner distribution (WD) in that it effectively eliminates the cross coupling among positive frequency components in a multiple component signal. This attribute of the MWD, which prevents the generation of 'phantom' spectral peaks, will undoubtedly increase the utility of the WD for real world signal analysis applications which more often than not involve multicomponent signals.

  5. Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES

    NASA Astrophysics Data System (ADS)

    Sarkar, B.; Bhunia, C. T.; Maulik, U.

    2012-06-01

    The advanced encryption standard (AES) poses a great research challenge. It was developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available. One is the redundancy based technique and the other is the bit based parity technique. The first has the significant advantage over the second of correcting any error with certainty, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy based technique is proposed that speeds up the process of reliable encryption and hence secured communication.

  6. Time-frequency analysis of spike-wave discharges using a modified wavelet transform.

    PubMed

    Bosnyakova, Daria; Gabova, Alexandra; Kuznetsova, Galina; Obukhov, Yuri; Midzyanovskaya, Inna; Salonin, Dmitrij; van Rijn, Clementina; Coenen, Anton; Tuomisto, Leene; van Luijtelaar, Gilles

    2006-06-30

    The continuous Morlet wavelet transform was used for the analysis of the time-frequency pattern of spike-wave discharges (SWD) as recorded in a genetic animal model of absence epilepsy (rats of the WAG/Rij strain). We developed a new wavelet transform that allows one to obtain the time-frequency dynamics of the dominant rhythm during the discharges. SWD were analyzed before and after the administration of certain drugs. SWD recorded pre-drug demonstrate quite uniform time-frequency dynamics of the dominant rhythm. The beginning of the discharge has a short period with the highest frequency value (up to 15 Hz). The frequency then decreases to 7-9 Hz, and frequency modulation occurs during the discharge in this range with a period of 0.5-0.7 s. Specific changes in SWD time-frequency dynamics were found after the administration of psychoactive drugs addressing different brain mediator and modulator systems. Short multiple SWD appeared under low (0.5 mg/kg) doses of haloperidol; they are characterized by a fast frequency decrease to 5-6 Hz at the end of every discharge. The dominant frequency of SWD was not stable in long-lasting SWD after 1.0 mg/kg or more of haloperidol: two periodicities were found. Long-lasting SWD seen after the administration of vigabatrin showed a stable discharge frequency. The EEG after ketamine showed a distinct 5 s quasiperiodicity. No clear changes in the time-frequency dynamics of SWD were found after perilamine. It can be concluded that the use of the modified Morlet wavelet transform allows one to describe significant parameters of the dynamics in the time-frequency domain of the dominant rhythm of SWD that were not previously detected.
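    A plain Morlet CWT sketch of the dominant-rhythm tracking idea (the authors' *modified* transform is not reproduced here): convolve the signal with complex Morlet wavelets at candidate frequencies and, at each time sample, take the frequency with maximal coefficient magnitude as the dominant rhythm.

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, n_cycles=6.0):
    """Complex Morlet CWT via direct convolution; unit-energy wavelets."""
    coeffs = np.empty((len(freqs), len(signal)), dtype=complex)
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)       # time-domain width
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
        coeffs[i] = np.convolve(signal, wavelet, mode="same")
    return coeffs

fs = 200.0
t = np.arange(0, 4.0, 1.0 / fs)
# Toy SWD-like frequency drop: 12 Hz for the first second, 8 Hz afterwards.
sig = np.where(t < 1.0, np.sin(2*np.pi*12*t), np.sin(2*np.pi*8*t))
freqs = np.arange(5.0, 16.0, 1.0)
power = np.abs(morlet_cwt(sig, fs, freqs))
dominant = freqs[power.argmax(axis=0)]     # dominant frequency per sample
```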

  7. To Err is Normable: The Computation of Frequency-Domain Error Bounds from Time-Domain Data

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard

    1998-01-01

    This paper exploits the relationships among the time-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling errors.

  8. STATISTICAL DISTRIBUTIONS OF PARTICULATE MATTER AND THE ERROR ASSOCIATED WITH SAMPLING FREQUENCY. (R828678C010)

    EPA Science Inventory

    The distribution of particulate matter (PM) concentrations has an impact on human health effects and the setting of PM regulations. Since PM is commonly sampled on less than daily schedules, the magnitude of sampling errors needs to be determined. Daily PM data from Spokane, W...

  9. Accurate van der Waals coefficients between fullerenes and fullerene-alkali atoms and clusters: Modified single-frequency approximation

    NASA Astrophysics Data System (ADS)

    Tao, Jianmin; Mo, Yuxiang; Tian, Guocai; Ruzsinszky, Adrienn

    2016-08-01

    Long-range van der Waals (vdW) interaction is critically important for intermolecular interactions in molecular complexes and solids. However, accurate modeling of vdW coefficients presents a great challenge for nanostructures, in particular for fullerene clusters, which have huge vdW coefficients but also display very strong nonadditivity. In this work, we calculate the coefficients between fullerenes, fullerene and sodium clusters, and fullerene and alkali atoms with the hollow-sphere model within the modified single-frequency approximation (MSFA). In the MSFA, we assume that the electron density is uniform in a molecule and that only valence electrons in the outmost subshell of atoms contribute. The input to the model is the static multipole polarizability, which provides a sharp cutoff for the plasmon contribution outside the effective vdW radius. We find that the model can generate C6 in excellent agreement with expensive wave-function-based ab initio calculations, with a mean absolute relative error of only 3 % , without suffering size-dependent error. We show that the nonadditivities of the coefficients C6 between fullerenes and C60 and sodium clusters Nan revealed by the model agree remarkably well with those based on the accurate reference values. The great flexibility, simplicity, and high accuracy make the model particularly suitable for the study of the nonadditivity of vdW coefficients between nanostructures, advancing the development of better vdW corrections to conventional density functional theory.

  10. Modified Smith predictor for frequency identification and disturbance rejection of single sinusoidal signal.

    PubMed

    Zheng, Da; Fang, Jian'an; Ren, Zhengyun

    2010-01-01

    This paper presents a frequency identification and disturbance rejection scheme for open-loop stable time delay systems with a disturbance containing a constant signal and a single sinusoidal signal. Åström's modified Smith predictor is employed to maintain good setpoint tracking performance. The disturbance rejection controller is designed via the internal model control principle and functions as a finite-dimensional repetitive controller. An extended Kalman filter is designed to track the frequency of the unknown periodic disturbance. The simulation results demonstrate the successful performance of the proposed disturbance rejection method for controlling a linear system with time delays, subjected to both step and sinusoidal disturbances.

  11. Mass measurement errors caused by "local" frequency perturbations in FTICR mass spectrometry.

    PubMed

    Masselon, Christophe; Tolmachev, Aleksey V; Anderson, Gordon A; Harkewicz, Richard; Smith, Richard D

    2002-01-01

    One of the key qualities of mass spectrometric measurements for biomolecules is the mass measurement accuracy (MMA) obtained. FTICR presently provides the highest MMA over a broad m/z range. However, due to space charge effects, the achievable MMA crucially depends on the number of ions trapped in the ICR cell for a measurement. Thus, beyond some point, as the effective sensitivity and dynamic range of a measurement increase, MMA tends to decrease. While analyzing deviations from the commonly used calibration law in FTICR we have found systematic errors which are not accounted for by a "global" space charge correction approach. The analysis of these errors and their dependence on charge population and post-excite radius have led us to conclude that each ion cloud experiences a different interaction with other ion clouds. We propose a novel calibration function which is shown to provide an improvement in MMA for all the spectra studied.
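    For context, the commonly used two-term FTICR calibration law is m/z = A/f + B/f**2, where the B term carries the "global" space-charge correction; the paper's point is that cloud-by-cloud interactions leave residual errors this global fit cannot capture. A minimal sketch of fitting A and B from calibrant ions (illustrative constants, not the paper's data):

```python
import numpy as np

def fit_calibration(freqs, mz):
    """Linear least squares for m/z = A/f + B/f**2 in the basis (1/f, 1/f^2)."""
    design = np.column_stack([1.0 / freqs, 1.0 / freqs**2])
    (a, b), *_ = np.linalg.lstsq(design, mz, rcond=None)
    return a, b

def apply_calibration(freqs, a, b):
    return a / freqs + b / freqs**2

# Synthetic calibrants generated from known constants, then recovered.
A_true, B_true = 1.0e8, -2.0e11
cal_freqs = np.array([5.0e4, 1.0e5, 2.0e5, 4.0e5])    # cyclotron freqs, Hz
cal_mz = A_true / cal_freqs + B_true / cal_freqs**2
a, b = fit_calibration(cal_freqs, cal_mz)
```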

  12. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of the Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to that using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.

  13. Estimation of errors in measurement of stationary signals from a continuous frequency band spectrum

    NASA Technical Reports Server (NTRS)

    Ivanov, V. A.

    1973-01-01

    The design of an apparatus for frequency analyses on signals with continuous spectra is reported. Filter statistical characteristics are used to expand the dynamic range to 80 db and more or to limit the input signal spectra. A series connection of several band filters gives the most effective results.

  14. Frequency and Distribution of Refractive Error in Adult Life: Methodology and Findings of the UK Biobank Study

    PubMed Central

    Cumberland, Phillippa M.; Bao, Yanchun; Hysi, Pirro G.; Foster, Paul J.; Hammond, Christopher J.; Rahi, Jugnoo S.

    2015-01-01

    Purpose To report the methodology and findings of a large scale investigation of the burden and distribution of refractive error, from a contemporary and ethnically diverse study of health and disease in adults in the UK. Methods UK Biobank, a unique contemporary resource for the study of health and disease, recruited more than half a million people aged 40–69 years. A subsample of 107,452 subjects undertook an enhanced ophthalmic examination which provided autorefraction data (a measure of refractive error). Refractive error status was categorised using the mean spherical equivalent refraction measure. Information on socio-demographic factors (age, gender, ethnicity, educational qualifications and accommodation tenure) was reported at the time of recruitment by questionnaire and face-to-face interview. Results Fifty-four percent of participants aged 40–69 years had refractive error. Specifically 27% had myopia (4% high myopia), which was more common amongst younger people, those of higher socio-economic status, higher educational attainment, or of White or Chinese ethnicity. The frequency of hypermetropia increased with age (7% at 40–44 years increasing to 46% at 65–69 years), was higher in women and its severity was associated with ethnicity (moderate or high hypermetropia at least 30% less likely in non-White ethnic groups compared to White). Conclusions Refractive error is a significant public health issue for the UK and this study provides contemporary data on adults for planning services, health economic modelling and monitoring of secular trends. Further investigation of risk factors is necessary to inform strategies for prevention. There is scope to do this through the planned longitudinal extension of the UK Biobank study. PMID:26430771

  15. Lower Bounds on the Frequency Estimation Error in Magnetically Coupled MEMS Resonant Sensors.

    PubMed

    Paden, Brad E

    2016-02-01

    MEMS inductor-capacitor (LC) resonant pressure sensors have revolutionized the treatment of abdominal aortic aneurysms. In contrast to electrostatically driven MEMS resonators, these magnetically coupled devices are wireless so that they can be permanently implanted in the body and can communicate to an external coil via pressure-induced frequency modulation. Motivated by the importance of these sensors in this and other applications, this paper develops relationships among sensor design variables, system noise levels, and overall system performance. Specifically, new models are developed that express the Cramér-Rao lower bound for the variance of resonator frequency estimates in terms of system variables through a system of coupled algebraic equations, which can be used in design and optimization. Further, models are developed for a novel mechanical resonator in addition to the LC-type resonators.
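For orientation, the textbook Cramér-Rao bound for the frequency of a single sinusoid in white Gaussian noise (Rife and Boorstyn) captures the same qualitative behavior: the variance floor falls with SNR and, roughly cubically, with record length. A minimal sketch of that classical bound (not the paper's coupled algebraic system; all numbers are illustrative):

```python
import math

def crlb_freq_hz(snr_linear, n_samples, t_sample):
    """Cramér-Rao lower bound (Hz^2) on the variance of a frequency estimate
    for a single complex sinusoid in white Gaussian noise (Rife-Boorstyn):
    var(f) >= 12 / ((2*pi)^2 * SNR * N * (N^2 - 1) * T^2)."""
    return 12.0 / ((2 * math.pi) ** 2 * snr_linear
                   * n_samples * (n_samples ** 2 - 1) * t_sample ** 2)

# The bound tightens rapidly with record length (roughly as N^-3):
print(crlb_freq_hz(snr_linear=100.0, n_samples=1000, t_sample=1e-3))
```

Doubling SNR halves the bound, while doubling the record length cuts it by about a factor of eight, which is why resonator ring-down time matters as much as coupling strength.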

  16. An analysis of perceptual errors in reading mammograms using quasi-local spatial frequency spectra.

    PubMed

    Mello-Thoms, C; Dunn, S M; Nodine, C F; Kundel, H L

    2001-09-01

    In this pilot study the authors examined areas on a mammogram that attracted the visual attention of experienced mammographers and mammography fellows, as well as areas that were reported to contain a malignant lesion, and, based on their spatial frequency spectrum, they characterized these areas by the type of decision outcome that they yielded: true-positives (TP), false-positives (FP), true-negatives (TN), and false-negatives (FN). Five 2-view (craniocaudal and medial-lateral oblique) mammogram cases were examined by 8 experienced observers, and the eye position of the observers was tracked. The observers were asked to report the location and nature of any malignant lesions present in the case. The authors analyzed each area in which either the observer made a decision or in which the observer had prolonged (>1,000 ms) visual dwell using wavelet packets, and characterized these areas in terms of the energy contents of each spatial frequency band. It was shown that each decision outcome is characterized by a specific profile in the spatial frequency domain, and that these profiles are significantly different from one another. As a consequence of these differences, the profiles can be used to determine which type of decision a given observer will make when examining the area. Computer-assisted perception correctly predicted up to 64% of the TPs made by the observers, 77% of the FPs, and 70% of the TNs.

  17. A high-frequency analysis of radome-induced radar pointing error

    NASA Astrophysics Data System (ADS)

    Burks, D. G.; Graf, E. R.; Fahey, M. D.

    1982-09-01

    An analysis is presented of the effect of a tangent ogive radome on the pointing accuracy of a monopulse radar employing an aperture antenna. The radar is assumed to be operating in the receive mode, and the incident fields at the antenna are found by a ray tracing procedure. Rays entering the antenna aperture by direct transmission through the radome and by single reflection from the radome interior are considered. The radome wall is treated as being locally planar. The antenna can be scanned in two angular directions, and two orthogonal polarization states which produce an arbitrarily polarized incident field are considered. Numerical results are presented for both in-plane and cross-plane errors as a function of scan angle and polarization.

  18. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  19. Measurement error in frequency measured using wavelength meter due to residual moisture in interferometer and a simple method to avoid it

    NASA Astrophysics Data System (ADS)

    Hashiguchi, Koji; Abe, Hisashi

    2016-11-01

    We have experimentally evaluated the accuracy of the frequency measured using a commonly used wavelength meter in the near-infrared region, which was calibrated in the visible region. An error of approximately 50 MHz was observed in the frequency measurement using the wavelength meter in the near-infrared region although the accuracy specified in the catalogue was 20 MHz. This error was attributable to residual moisture inside the Fizeau interferometer of the wavelength meter. A simple method to avoid the error is proposed.

  20. The use of ionospheric tomography and elevation masks to reduce the overall error in single-frequency GPS timing applications

    NASA Astrophysics Data System (ADS)

    Rose, Julian A. R.; Tong, Jenna R.; Allain, Damien J.; Mitchell, Cathryn N.

    2011-01-01

    Signals from Global Positioning System (GPS) satellites at the horizon or at low elevations are often excluded from a GPS solution because they experience considerable ionospheric delays and multipath effects. Their exclusion can degrade the overall satellite geometry for the calculations, resulting in greater errors; an effect known as the Dilution of Precision (DOP). In contrast, signals from high elevation satellites experience less ionospheric delays and multipath effects. The aim is to find a balance in the choice of elevation mask, to reduce the propagation delays and multipath whilst maintaining good satellite geometry, and to use tomography to correct for the ionosphere and thus improve single-frequency GPS timing accuracy. GPS data, collected from a global network of dual-frequency GPS receivers, have been used to produce four GPS timing solutions, each with a different ionospheric compensation technique. One solution uses a 4D tomographic algorithm, Multi-Instrument Data Analysis System (MIDAS), to compensate for the ionospheric delay. Maps of ionospheric electron density are produced and used to correct the single-frequency pseudorange observations. This method is compared to a dual-frequency solution and two other single-frequency solutions: one does not include any ionospheric compensation and the other uses the broadcast Klobuchar model. Data from the solar maximum year 2002 and October 2003 have been investigated to display results when the ionospheric delays are large and variable. The study focuses on Europe and results are produced for the chosen test site, VILL (Villafranca, Spain). The effects of excluding all of the GPS satellites below various elevation masks, ranging from 5° to 40°, on timing solutions for fixed (static) and mobile (moving) situations are presented. The greatest timing accuracies when using the fixed GPS receiver technique are obtained by using a 40° mask, rather than a 5° mask. The mobile GPS timing solutions are most
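The elevation-mask trade-off described above can be illustrated numerically: dropping low-elevation satellites removes rows from the geometry matrix, which can only inflate the dilution of precision. A minimal sketch with a hypothetical constellation (all geometry values invented for illustration):

```python
import math
import numpy as np

def gdop(sats):
    """Geometric dilution of precision from (elevation, azimuth) pairs (degrees).
    Each row of G is the unit line-of-sight vector (ENU) plus a clock term."""
    G = []
    for el_deg, az_deg in sats:
        el, az = math.radians(el_deg), math.radians(az_deg)
        G.append([-math.cos(el) * math.sin(az),
                  -math.cos(el) * math.cos(az),
                  -math.sin(el),
                  1.0])
    G = np.array(G)
    return math.sqrt(np.trace(np.linalg.inv(G.T @ G)))

# Hypothetical constellation: four high satellites plus two near the horizon.
all_sats = [(70, 0), (60, 90), (55, 180), (65, 270), (8, 45), (6, 225)]
masked = all_sats[:4]  # a 40-degree elevation mask drops the low pair

print(gdop(all_sats), gdop(masked))  # masking degrades the geometry
```

Since removing rows makes the information matrix GᵀG smaller in the positive-semidefinite order, the masked GDOP is always at least as large; the study's question is whether the reduced ionospheric and multipath error outweighs that geometric penalty.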

  1. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
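Time-to-detect estimates of this kind are commonly based on the trend-detection formula of Weatherhead et al. (1998), which relates detection time to noise variance and autocorrelation; a sketch of that formula (parameter values are illustrative, not from the study):

```python
import math

def years_to_detect(trend, sigma_n, phi):
    """Approximate number of years n* of monthly data needed to detect a
    linear trend at the 95% level (Weatherhead et al., 1998):
    n* = [3.3 * (sigma_n / |trend|) * sqrt((1 + phi) / (1 - phi))]^(2/3)

    trend   -- trend magnitude per year
    sigma_n -- standard deviation of the monthly noise
    phi     -- lag-1 autocorrelation of the noise
    """
    return (3.3 * (sigma_n / abs(trend))
            * math.sqrt((1 + phi) / (1 - phi))) ** (2.0 / 3.0)

# Detection time grows with both noise level and autocorrelation:
print(years_to_detect(trend=0.05, sigma_n=0.5, phi=0.3))
```

The 2/3 exponent is why the paper finds detection time relatively insensitive to instrumental random error: measurement noise adds to sigma_n in quadrature with the (dominant) natural variability, while sampling more frequently reduces the effective monthly noise directly.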

  2. The relative importance of random error and observation frequency in detecting trends in upper tropospheric water vapor

    NASA Astrophysics Data System (ADS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-11-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.

  3. Super-hydrophobicity and oleophobicity of silicone rubber modified by CF4 radio frequency plasma

    NASA Astrophysics Data System (ADS)

    Gao, Song-Hua; Gao, Li-Hua; Zhou, Ke-Sheng

    2011-03-01

    Owing to its excellent electrical properties, silicone rubber (SIR) has been widely employed in outdoor insulators. To further improve its hydrophobicity and service life, SIR samples were treated by CF4 radio frequency (RF) capacitively coupled plasma. The hydrophobic and oleophobic properties were characterized by the static contact angle method. The surface morphology of the modified SIR was observed by atomic force microscopy (AFM), and X-ray photoelectron spectroscopy (XPS) was used to track the changes in functional groups on the SIR surface caused by the CF4 plasma treatment. The results indicate that the static contact angle of the SIR surface is improved from 100.7° to 150.2° by the CF4 plasma modification; the super-hydrophobic surface of the modified SIR, with a static contact angle of 150.2°, appears at an RF power of 200 W for a 5 min treatment time. The super-hydrophobicity is ascribed to the combined action of the increased roughness created by ablation and the formation of a [-SiFx(CH3)2-x-O-]n (x = 1, 2) structure produced by the replacement of methyl groups with F atoms; more importantly, the formation of the [-SiF2-O-]n structure is the major factor behind the super-hydrophobic surface. This differs from previous studies, which proposed that fluorocarbon species such as C-F, C-F2, C-F3, CF-CFn, and C-CFn were introduced in large amounts to the polymer surface and were responsible for the low surface energy.

  4. Effects of voltage errors caused by gap-voltage and automatic-frequency tuning in an alternating-phase-focused linac

    NASA Astrophysics Data System (ADS)

    Iwata, Y.; Yamada, S.; Murakami, T.; Fujimoto, T.; Fujisawa, T.; Ogawa, H.; Miyahara, N.; Yamamoto, K.; Hojo, S.; Sakamoto, Y.; Muramatsu, M.; Takeuchi, T.; Mitsumoto, T.; Tsutsui, H.; Watanabe, T.; Ueda, T.

    2008-05-01

    A compact injector for a heavy-ion medical-accelerator complex was developed. It consists of an electron-cyclotron-resonance ion source (ECRIS) and two linacs: a radio-frequency-quadrupole (RFQ) linac and an interdigital H-mode drift-tube linac (IH-DTL). Beam acceleration tests of the compact injector were performed, and the designed beam quality was verified by the measured results, as reported earlier. Because the method of alternating-phase focusing (APF) was used for beam focusing in the IH-DTL, the motion of beam ions is sensitive to gap-voltage errors, caused during tuning of the gap-voltage distribution and by automatic-frequency tuning in actual operation. To study the effects of voltage errors on beam quality, further measurements were performed during acceleration tests. In this report, the effects of voltage errors on the APF IH-DTL are discussed.

  5. Application of a modified complementary filtering technique for increased aircraft control system frequency bandwidth in high vibration environment

    NASA Technical Reports Server (NTRS)

    Garren, J. F., Jr.; Niessen, F. R.; Abbott, T. S.; Yenni, K. R.

    1977-01-01

    A modified complementary filtering technique for estimating aircraft roll rate was developed and flown in a research helicopter to determine whether higher gains could be achieved. Use of this technique did, in fact, permit a substantial increase in system frequency bandwidth because, in comparison with first-order filtering, it reduced both noise amplification and control limit-cycle tendencies.

  6. Analysis on error of laser frequency locking for fiber optical receiver in direct detection wind lidar based on Fabry-Perot interferometer and improvements

    NASA Astrophysics Data System (ADS)

    Zhang, Feifei; Dou, Xiankang; Sun, Dongsong; Shu, Zhifeng; Xia, Haiyun; Gao, Yuanyuan; Hu, Dongdong; Shangguan, Mingjia

    2014-12-01

    Direct detection Doppler wind lidar (DWL) has been demonstrated for its capability of atmospheric wind detection ranging from the troposphere to the stratosphere with high temporal and spatial resolution. We design and describe a fiber-based optical receiver for direct detection DWL. The locking error of the relative laser frequency is then analyzed; the dependent variables turn out to be the relative error of the calibrated constant and the slope of the transmission function. For high-accuracy measurement of the calibrated constant in a fiber-based system, an integrating sphere is employed for its uniform scattering. Moreover, temporally widening the pulsed laser allows more samples to be acquired by an analog-to-digital card of the same sampling rate. The result shows a relative error of 0.7% for the calibrated constant. For the latter, an improved locking filter with a larger slope was designed for the Fabry-Perot interferometer. With these two strategies, the locking error of the relative laser frequency is calculated to be about 3 MHz, which is equivalent to a radial velocity of about 0.53 m/s and demonstrates the effective improvement of frequency locking for a robust DWL.

  7. High frequency detection of different T-cell subsets in mice by a modified virus plaque assay.

    PubMed Central

    Fujisawa, H; Kumazawa, Y; Ohtani, A; Nishimura, C

    1983-01-01

    Different T-cell subsets participating in immune responses were detected at a high frequency by a modified virus plaque assay (VPA). By using the modified VPA, different activated T-cell subsets generated in primary immune responses, helper and suppressor T cells participating in antibody formation, and effector T cells involved in the delayed-type hypersensitivity (DTH) reaction were enumerated directly without in vitro antigen stimulation. The frequency of detection in the immune systems used was 7.5-17.7 V-PFC/10^3 spleen cells. Although neither helper T cells for antibody formation nor effector T cells for the DTH reaction were detected as V-PFC at a high frequency by the original VPA, it was also found in the secondary immune response that Lyt 1-positive, antigen-specific helper and effector T-cell subsets, and cyclophosphamide (CY)-resistant precursors, were enumerated at a high frequency by the modified VPA when they received in vitro antigen stimulation, and that the proliferative stage of these cells was critical for the development of V-PFC. PMID:6601613

  8. Size-Dependent Resonant Frequency and Flexural Sensitivity of Atomic Force Microscope Microcantilevers Based on the Modified Strain Gradient Theory

    NASA Astrophysics Data System (ADS)

    Ansari, R.; Pourashraf, T.; Gholami, R.; Sahmani, S.; Ashrafi, M. A.

    2015-04-01

    In the present study, the resonant frequency and flexural sensitivity of atomic force microscope (AFM) microcantilevers are predicted incorporating size effects. To this end, the modified strain gradient elasticity theory is applied to the classical Euler-Bernoulli beam theory to develop a non-classical beam model which has the capability to capture the size-dependent behavior of microcantilevers. On the basis of Hamilton's principle, the size-dependent analytical expressions corresponding to the frequency response and sensitivity of AFM cantilevers are derived. It is observed that by increasing the contact stiffness, the resonant frequencies of AFM cantilevers first increase and then tend to remain constant at a particular value. Moreover, the resonant frequencies of AFM cantilevers obtained via the developed non-classical model are higher than those of the classical beam theory, especially for values of beam thickness close to the internal material length scale parameter.
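For scale, the classical (size-independent) Euler-Bernoulli fundamental frequency against which such non-classical models are compared can be sketched as follows; the material and geometry values are illustrative, not from the paper:

```python
import math

def cantilever_f1(E, rho, L, b, h):
    """Fundamental resonant frequency (Hz) of a rectangular Euler-Bernoulli
    cantilever: f1 = (lambda1^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A))."""
    lam1 = 1.8751            # first root of cos(x)*cosh(x) = -1
    I = b * h ** 3 / 12.0    # second moment of area of the cross-section
    A = b * h                # cross-sectional area
    return (lam1 ** 2 / (2 * math.pi * L ** 2)) * math.sqrt(E * I / (rho * A))

# Illustrative silicon microcantilever: 200 x 30 x 2 micrometres
f1 = cantilever_f1(E=169e9, rho=2330.0, L=200e-6, b=30e-6, h=2e-6)
print(f1)  # on the order of tens of kHz
```

The strain-gradient correction in the paper effectively stiffens this beam when h approaches the material length scale, shifting f1 upward relative to this classical value.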

  9. The Impact of a Modified Cooperative Learning Technique on the Grade Frequencies Observed in a Preparatory Chemistry Course

    NASA Astrophysics Data System (ADS)

    Hayes Russell, Bridget J.

    This dissertation explored the impact of a modified cooperative learning technique on the final grade frequencies observed in a large preparatory chemistry course designed for pre-science majors. Although the use of cooperative learning at all educational levels is well researched and validated in the literature, traditional lectures still dominate as the primary methodology of teaching. This study modified cooperative learning techniques by addressing commonly cited reasons for not using the methodology. Preparatory chemistry students were asked to meet in cooperative groups outside of class time to complete homework assignments. A chi-square goodness-of-fit test revealed that the observed final grade frequency distributions were different from those expected. Although the distribution was significantly different, the resource investment required by this particular design challenged the practical significance of the findings. Further, responses from a survey revealed that the students did not use the suggested group-functioning methods that are empirically known to lead to more practically significant results.

  10. A new modified differential evolution algorithm scheme-based linear frequency modulation radar signal de-noising

    NASA Astrophysics Data System (ADS)

    Dawood Al-Dabbagh, Mohanad; Dawoud Al-Dabbagh, Rawaa; Raja Abdullah, R. S. A.; Hashim, F.

    2015-06-01

    The main intention of this study was to investigate the development of a new optimization technique based on the differential evolution (DE) algorithm for the purpose of linear frequency modulation radar signal de-noising. As the standard DE algorithm is a fixed-length optimizer, it is not suitable for solving signal de-noising problems that call for variability. A modified crossover scheme called rand-length crossover was designed to fit the proposed variable-length DE, and the new DE algorithm is referred to as the random variable-length crossover differential evolution (rvlx-DE) algorithm. The measurement results demonstrate a highly efficient capability for target detection, in terms of frequency response and peak forming, isolated from noise distortion. The modified method showed significant improvements in performance over traditional de-noising techniques.
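The fixed-length baseline that the rvlx-DE algorithm modifies is standard DE/rand/1/bin; a minimal sketch of that baseline on a toy objective (the de-noising application and the rand-length crossover itself are not reproduced here):

```python
import random

def de_minimize(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin optimizer over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: three distinct members other than i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clip to the box
            fc = f(trial)
            if fc <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

x, fx = de_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
print(fx)  # close to 0 for this convex sphere function
```

The fixed dimensionality of `pop` is exactly the limitation the paper targets: every candidate here has the same length, whereas the rand-length crossover lets candidate lengths vary between generations.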

  11. A modified cable formalism for modeling neuronal membranes at high frequencies.

    PubMed

    Bédard, Claude; Destexhe, Alain

    2008-02-15

    Intracellular recordings of cortical neurons in vivo display intense subthreshold membrane potential (V(m)) activity. The power spectral density of the V(m) displays a power-law structure at high frequencies (>50 Hz) with a slope of approximately -2.5. This type of frequency scaling cannot be accounted for by traditional models, as either single-compartment models or models based on reconstructed cell morphologies display a frequency scaling with a slope close to -4. This slope is due to the fact that the membrane resistance is short-circuited by the capacitance for high frequencies, a situation which may not be realistic. Here, we integrate nonideal capacitors in cable equations to reflect the fact that the capacitance cannot be charged instantaneously. We show that the resulting nonideal cable model can be solved analytically using Fourier transforms. Numerical simulations using a ball-and-stick model yield membrane potential activity with similar frequency scaling as in the experiments. We also discuss the consequences of using nonideal capacitors on other cellular properties such as the transmission of high frequencies, which is boosted in nonideal cables, or voltage attenuation in dendrites. These results suggest that cable equations based on nonideal capacitors should be used to capture the behavior of neuronal membranes at high frequencies. PMID:17921220

  12. A Modified Cable Formalism for Modeling Neuronal Membranes at High Frequencies

    PubMed Central

    Bédard, Claude; Destexhe, Alain

    2008-01-01

    Intracellular recordings of cortical neurons in vivo display intense subthreshold membrane potential (Vm) activity. The power spectral density of the Vm displays a power-law structure at high frequencies (>50 Hz) with a slope of ∼−2.5. This type of frequency scaling cannot be accounted for by traditional models, as either single-compartment models or models based on reconstructed cell morphologies display a frequency scaling with a slope close to −4. This slope is due to the fact that the membrane resistance is short-circuited by the capacitance for high frequencies, a situation which may not be realistic. Here, we integrate nonideal capacitors in cable equations to reflect the fact that the capacitance cannot be charged instantaneously. We show that the resulting nonideal cable model can be solved analytically using Fourier transforms. Numerical simulations using a ball-and-stick model yield membrane potential activity with similar frequency scaling as in the experiments. We also discuss the consequences of using nonideal capacitors on other cellular properties such as the transmission of high frequencies, which is boosted in nonideal cables, or voltage attenuation in dendrites. These results suggest that cable equations based on nonideal capacitors should be used to capture the behavior of neuronal membranes at high frequencies. PMID:17921220

  13. Quantification of landfill methane using modified Intergovernmental Panel on Climate Change's waste model and error function analysis.

    PubMed

    Govindan, Siva Shangari; Agamuthu, P

    2014-10-01

    Waste management can be regarded as a cross-cutting environmental 'mega-issue'. Sound waste management practices support the provision of basic needs for general health, such as clean air, clean water and a safe supply of food. In addition, climate change mitigation efforts can be achieved through reduction of greenhouse gas emissions from waste management operations, such as landfills. Landfills generate landfill gas, especially methane, as a result of anaerobic degradation of the degradable components of municipal solid waste. Evaluating the mode of generation and collection of landfill gas has posed a challenge over time. Scientifically, landfill gas generation rates are presently estimated using numerical models. In this study the Intergovernmental Panel on Climate Change's Waste Model is used to estimate the methane generated from a Malaysian sanitary landfill. Key parameters of the model, the decay rate and degradable organic carbon, are analysed in two different approaches: the bulk waste approach and the waste composition approach. The model is then validated using error function analysis, and the optimum decay rate and degradable organic carbon for both approaches are obtained. The best-fitting values for the bulk waste approach are a decay rate of 0.08 y^-1 and a degradable organic carbon value of 0.12; for the waste composition approach the decay rate was found to be 0.09 y^-1 and the degradable organic carbon value 0.08. From this validation exercise, the estimated error was reduced by 81% and 69% for the bulk waste and waste composition approaches, respectively. In conclusion, this type of modelling could constitute a sensible starting point for landfills to introduce careful planning for efficient gas recovery in individual landfills.
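The heart of the IPCC Waste Model is first-order decay of degradable organic carbon. A minimal sketch in that spirit, using the best-fit bulk-waste parameters reported above (k = 0.08 y^-1, DOC = 0.12); the remaining factors (DOCf, MCF, CH4 fraction) are illustrative IPCC-style defaults, not values from the study:

```python
import math

def annual_ch4(masses, k=0.08, doc=0.12, doc_f=0.5, mcf=1.0, f_ch4=0.5, years=30):
    """First-order-decay CH4 generation (tonnes/yr) in the spirit of the
    IPCC Waste Model. masses[i] = waste landfilled in year i (tonnes)."""
    out = []
    for t in range(years):
        gen = 0.0
        for x, w in enumerate(masses):
            if x >= t:
                continue  # waste deposited in year x decays from year x+1
            ddocm = w * doc * doc_f * mcf  # decomposable degradable organic C
            # carbon decomposed during year t from the year-x deposit:
            decomposed = ddocm * (math.exp(-k * (t - x - 1)) - math.exp(-k * (t - x)))
            gen += decomposed * f_ch4 * 16.0 / 12.0  # C -> CH4 mass conversion
        out.append(gen)
    return out

gen = annual_ch4([100000.0] * 10)  # ten years of constant filling, then closure
print(max(gen))  # generation builds to a peak near closure, then decays
```

Error-function validation of the kind described above amounts to choosing k and DOC to minimize the misfit between this model output and measured gas recovery.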

  14. A modified homotopy perturbation method and the axial secular frequencies of a non-linear ion trap.

    PubMed

    Doroudi, Alireza

    2012-01-01

    In this paper, a modified version of the homotopy perturbation method, which has been applied to non-linear oscillations by V. Marinca, is used for calculation of the axial secular frequencies of a non-linear ion trap with hexapole and octopole superpositions. The axial equation of ion motion in the rapidly oscillating field of an ion trap can be transformed to a Duffing-like equation. With only octopole superposition the resulting non-linear equation is symmetric; however, in the presence of hexapole and octopole superpositions, it is asymmetric. This modified homotopy perturbation method is used for solving the resulting non-linear equations. As a result, the ion secular frequencies as a function of the non-linear field parameters are obtained. The calculated secular frequencies are compared with the results of the homotopy perturbation method and the exact results. With only hexapole superposition, the results of this paper and the homotopy perturbation method are the same, and with hexapole and octopole superpositions, the results of this paper are much closer to the exact results than those of the homotopy perturbation method. PMID:22792612

  15. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error values between the single-frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
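The Mean Absolute Percentage Error used as the comparison metric above is straightforward to state; a minimal sketch (the TBW values are hypothetical, not study data):

```python
def mape(predicted, reference):
    """Mean absolute percentage error of predictions vs. a reference method."""
    return 100.0 * sum(abs(p - r) / abs(r)
                       for p, r in zip(predicted, reference)) / len(reference)

# Hypothetical TBW predictions (litres) against dilution reference values:
print(mape([40.2, 35.1, 50.3], [41.0, 34.0, 49.0]))
```

Because each error is normalized by the reference value, MAPE compares methods at the individual level, which is why it can expose a BIS-versus-single-frequency difference that population-level Bland-Altman limits of agreement smooth over.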

  16. Q estimation from reflection seismic data for hydrocarbon detection using a modified frequency shift method

    NASA Astrophysics Data System (ADS)

    Li, Fangyu; Zhou, Huailai; Jiang, Nan; Bi, Jianxia; Marfurt, Kurt J.

    2015-08-01

    As a powerful diagnostic tool for structural interpretation, reservoir characterization, and hydrocarbon detection, the quality factor Q provides useful information in seismic processing and interpretation. Popular methods, like the spectral ratio (SR) method, the central frequency shift (CFS) method and the peak frequency shift (PFS) method, have their respective limitations in dealing with field seismic data. The lack of a reliable method for estimating Q from reflection seismic data is an issue when utilizing the Q value for hydrocarbon detection. In this article, we derive an approximate equation and propose a dominant and central frequency shift (DCFS) method by combining the quality factor Q, the travel time, and the dominant and central frequencies of two successive seismic signals along the wave propagation direction. Based on multi-layered analysis, we then propose a method to obtain continuous volumetric Q estimation results. Tests using synthetic data and statistical experiments showed the proposed method can achieve higher accuracy and robustness compared with existing methods. Application to field data also shows its potential and effectiveness for estimating seismic attenuation.
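Of the methods named above, the spectral ratio method is the simplest to sketch: for two arrivals separated by travel time Δt, ln(A2/A1) is linear in frequency with slope -πΔt/Q, so Q follows from a line fit. A synthetic illustration (the spectra and Q value are invented):

```python
import numpy as np

def q_spectral_ratio(freqs, amp1, amp2, dt):
    """Estimate Q from two amplitude spectra via the spectral-ratio slope:
    ln(A2/A1) = -pi * f * dt / Q + const."""
    slope, _ = np.polyfit(freqs, np.log(amp2 / amp1), 1)
    return -np.pi * dt / slope

# Synthetic test: attenuate a flat spectrum with Q_true = 80 over dt = 0.5 s.
freqs = np.linspace(10.0, 60.0, 51)
q_true, dt = 80.0, 0.5
amp1 = np.ones_like(freqs)
amp2 = 0.7 * np.exp(-np.pi * freqs * dt / q_true)  # 0.7: frequency-independent loss
print(q_spectral_ratio(freqs, amp1, amp2, dt))  # recovers ~80
```

The frequency-independent factor (geometrical spreading, reflection coefficients) lands in the intercept and does not bias the slope, which is the method's main appeal; its weakness on field data is sensitivity to the chosen frequency band and to noise in the log ratio.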

  17. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    PubMed Central

    Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response must be characterized. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation considering the appropriate porosity of the investigated material provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach perfect probe-clay-rock coupling. PMID:27096865
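The Topp equation mentioned above is an empirical polynomial mapping apparent relative permittivity to volumetric water content (Topp et al., 1980); a sketch combining it with the standard travel-time-to-permittivity step (the probe length and travel time here are illustrative):

```python
def topp_water_content(eps_r):
    """Volumetric water content (m^3/m^3) from apparent relative permittivity,
    using the empirical Topp et al. (1980) polynomial."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

def apparent_permittivity(travel_time_s, probe_len_m):
    """eps_r from the two-way EM travel time along a probe: eps = (c*t / (2L))^2."""
    c = 299792458.0  # speed of light in vacuum, m/s
    return (c * travel_time_s / (2.0 * probe_len_m)) ** 2

eps = apparent_permittivity(travel_time_s=3.0e-9, probe_len_m=0.1)  # ~20
print(topp_water_content(eps))  # roughly 0.3-0.35 m^3/m^3
```

An air gap lowers the apparent permittivity seen by the probe, and because the travel-time step squares the timing error before the polynomial is applied, even a submillimeter gap propagates into a large water content error, as the study reports.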

  20. Low frequency vibrational modes of oxygenated myoglobin, hemoglobins, and modified derivatives.

    PubMed

    Jeyarajah, S; Proniewicz, L M; Bronder, H; Kincaid, J R

    1994-12-01

    The low-frequency resonance Raman spectra of the dioxygen adducts of myoglobin, hemoglobin, its isolated subunits, mesoheme-substituted hemoglobin, and several deuterated heme derivatives are reported. The observed oxygen isotopic shifts are used to assign the iron-oxygen stretching (~570 cm⁻¹) and the heretofore unobserved δ(Fe-O-O) bending (~420 cm⁻¹) modes. Although δ(Fe-O-O) is not enhanced in the case of oxymyoglobin, it is observed for all the hemoglobin derivatives, its exact frequency being relatively invariant among the derivatives. The lack of sensitivity to H2O/D2O buffer exchange is consistent with our previous interpretation of H2O/D2O-induced shifts of ν(O-O) in the resonance Raman spectra of dioxygen adducts of cobalt-substituted heme proteins; namely, that those shifts are associated with alterations in vibrational coupling of ν(O-O) with internal modes of the proximal histidyl imidazole rather than with steric or electronic effects of H/D exchange at the active site. No evidence is obtained for enhancement of the ν(Fe-N) stretching mode of the linkage between the heme iron and the imidazole group of the proximal histidine. PMID:7983043
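
    As a rough consistency check on isotopic-shift assignments of this kind (a back-of-the-envelope sketch, not part of the study), a diatomic harmonic approximation, with the O2 unit treated as a point mass, predicts the magnitude of the 16O2 → 18O2 downshift of the Fe-O stretch:

```python
import math

# Harmonic approximation: nu scales as 1/sqrt(mu), so the heavier isotope
# lowers the stretching frequency.
def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

m_fe = 55.85            # atomic mass of Fe
nu_16 = 570.0           # cm^-1, Fe-O stretch with 16O2 (from the abstract)
mu_16 = reduced_mass(m_fe, 32.0)  # O2 treated as a point mass
mu_18 = reduced_mass(m_fe, 36.0)
nu_18 = nu_16 * math.sqrt(mu_16 / mu_18)
print(f"predicted Fe-18O2 stretch: {nu_18:.0f} cm^-1")
```

    The predicted downshift of roughly 20 cm⁻¹ is the size of effect that makes oxygen isotope substitution a reliable assignment tool.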

  1. Thermal emission at radio frequencies from supernova remnants and a modified theoretical Σ-D relation

    NASA Astrophysics Data System (ADS)

    Urošević, Dejan; Pannuti, Thomas G.

    2005-07-01

    In this paper, we discuss known discrepancies between theoretically derived and empirically measured relations between the radio surface brightness Σ and the diameter D of supernova remnants (SNRs): these relations are commonly known as the Σ-D relations. We argue that these discrepancies may be at least partially explained by taking into account thermal emission at radio frequencies from SNRs at particular evolutionary stages and located in particular environments. The major contributions of this paper may be summarized as follows: (i) we consider thermal emission at radio frequencies from SNRs in the following scenarios: a relatively young SNR evolving in a dense molecular cloud environment (n ~ 100-1000 cm⁻³) and an extremely evolved SNR expanding in a dense warm medium (n ~ 1-10 cm⁻³). Both of these SNRs are assumed to be in the adiabatic phase of evolution. We develop models of the radio emission from both of these types of SNRs, and each of these models demonstrates that, through the thermal bremsstrahlung process, significant thermal emission at radio frequencies is expected from both types of SNR. Based on a literature search, we claim that thermal absorption or emission at radio frequencies has been detected for one evolved Galactic SNR and four young Galactic SNRs with properties similar to our modelled evolved and young SNRs. (ii) We construct artificial radio spectra for both of these types of SNRs: in particular, we discuss our simulated spectrum for the evolved Galactic SNR OA 184. By including thermal emission in our simulated spectra, we obtain different slopes in the Σ-D relations: these new slopes are in closer agreement with empirically obtained relations than the theoretically derived relations which do not take thermal emission into account. (iii) Lastly, we present an additional modification to the theoretical Σ-D relation for SNRs in the adiabatic expansion phase. This modification is based on the convolution of the synchrotron
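
    The spectral flattening at the heart of point (ii) can be illustrated with a toy two-component spectrum (all numbers below are hypothetical, not the paper's models): adding optically thin thermal bremsstrahlung, with spectral index near -0.1, to steeper synchrotron emission flattens the observed radio spectral index.

```python
import numpy as np

def total_flux(nu_ghz, s_sync=1.0, s_th=0.3, alpha_sync=0.5, alpha_th=0.1):
    """Synchrotron power law plus an optically thin thermal component."""
    return s_sync * nu_ghz**(-alpha_sync) + s_th * nu_ghz**(-alpha_th)

def spectral_index(nu1, nu2):
    """Effective two-point spectral index between nu1 and nu2 (GHz)."""
    s1, s2 = total_flux(nu1), total_flux(nu2)
    return -np.log(s2 / s1) / np.log(nu2 / nu1)

idx = spectral_index(1.0, 10.0)
print(f"pure synchrotron index: 0.50, combined 1-10 GHz index: {idx:.2f}")
```

    The combined index falls below the pure synchrotron value, which is the qualitative mechanism behind the modified Σ-D slopes.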

  2. Cognitive training modifies frequency EEG bands and neuropsychological measures in Rett syndrome.

    PubMed

    Fabio, Rosa Angela; Billeci, Lucia; Crifaci, Giulia; Troise, Emilia; Tortorella, Gaetano; Pioggia, Giovanni

    2016-01-01

    Rett syndrome (RS) is a childhood neurodevelopmental disorder characterized by a primary disturbance in neuronal development. Neurological abnormalities in RS are reflected in several behavioral and cognitive impairments such as stereotypies, loss of speech and hand skills, gait apraxia, irregular breathing with hyperventilation while awake, and frequent seizures. Cognitive training can enhance both neuropsychological and neurophysiological parameters. The aim of this study was to investigate whether behaviors and brain activity were modified by training in RS. The modifications were assessed in two phases: (a) after a short-term training (STT) session, i.e., after 30 min of training, and (b) after long-term training (LTT), i.e., after 5 days of training. Thirty-four girls with RS were divided into two groups: a training group (21 girls) who underwent the LTT and a control group (13 girls) that did not undergo LTT. Gaze and quantitative EEG (QEEG) data were recorded during the administration of the tasks, using a gold-standard eye tracker and wearable EEG equipment. Results suggest that the participants in the STT task showed a habituation effect, decreased beta activity, and increased right asymmetry. The participants in the LTT task looked faster and longer at the target and showed increased beta activity and decreased theta activity, while a leftward asymmetry was re-established. The overall results of this study indicate a positive effect of long-term cognitive training on brain and behavioral parameters in subjects with RS. PMID:26859707
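
    The beta/theta band-power comparisons reported above rest on standard spectral estimation. A minimal sketch (synthetic signal, not the study's QEEG pipeline) of FFT-based band power:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Total power in [f_lo, f_hi) Hz from a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

fs = 256.0
t = np.arange(0, 4, 1 / fs)
# Synthetic EEG: a 6 Hz (theta) component plus a weaker 20 Hz (beta) component
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
theta = band_power(eeg, fs, 4, 8)
beta = band_power(eeg, fs, 13, 30)
print(theta > beta)
```

    An increase in the beta/theta power ratio of this kind is what the study interprets as a training-related change in cortical activity.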

  3. CD133 is a modifier of hematopoietic progenitor frequencies but is dispensable for the maintenance of mouse hematopoietic stem cells

    PubMed Central

    Arndt, Kathrin; Grinenko, Tatyana; Mende, Nicole; Reichert, Doreen; Portz, Melanie; Ripich, Tatsiana; Carmeliet, Peter; Corbeil, Denis; Waskow, Claudia

    2013-01-01

    Pentatransmembrane glycoprotein prominin-1 (CD133) is expressed at the cell surface of multiple somatic stem cells, and it is widely used as a cell surface marker for the isolation and characterization of human hematopoietic stem cells (HSCs) and cancer stem cells. CD133 has been linked on a cell biological basis to stem cell-fate decisions in human HSCs and emerges as an important physiological regulator of stem cell maintenance and expansion. Its expression and physiological relevance in the murine hematopoietic system are nevertheless elusive. We show here that CD133 is expressed by bone marrow-resident murine HSCs and myeloid precursor cells with the developmental propensity to give rise to granulocytes and monocytes. However, CD133 is dispensable for the pool size and function of HSCs during steady-state hematopoiesis and after transplantation, demonstrating a substantial species difference between mouse and man. Blood cell numbers in the periphery are normal; however, CD133 appears to be a modifier for the development of growth-factor-responsive myeloerythroid precursor cells in the bone marrow under steady state and of mature red blood cells after hematopoietic stress. Taken together, these studies show that CD133 is not a critical regulator of hematopoietic stem cell function in the mouse but that it modifies frequencies of growth-factor-responsive hematopoietic progenitor cells during steady state and after myelotoxic stress in vivo. PMID:23509298

  4. Photocatalytic characteristic and photodegradation kinetics of toluene using N-doped TiO2 modified by radio frequency plasma.

    PubMed

    Shie, Je-Lueng; Lee, Chiu-Hsuan; Chiou, Chyow-San; Chen, Yi-Hung; Chang, Ching-Yuan

    2014-01-01

    This study investigates the feasibility of plasma surface modification of photocatalysts for the removal of toluene from indoor environments. N-doped TiO2 is prepared by precipitation, calcined using a muffle furnace (MF), and modified by radio-frequency (RF) plasma at different temperatures, with light sources from a visible light lamp (VLL), a white light-emitting diode (WLED), and an ultraviolet light-emitting diode (UVLED). The operation parameters and influential factors are addressed for characteristic analysis and photo-decomposition examination. Furthermore, related kinetic models are established and used to simulate the experimental data. The characterization results show that the RF plasma-calcination method effectively enhanced the Brunauer-Emmett-Teller surface area of the modified photocatalysts. In the elemental analysis, the mass percentage of N for the RF-modified photocatalyst is six times larger than that of the MF-calcined one. The aerodynamic diameters of the RF-modified photocatalyst are all smaller than those of MF. Photocatalytic decompositions of toluene are elucidated according to the Langmuir-Hinshelwood model. Decomposition efficiencies (η) of toluene for RF-calcined methods are all higher than those of commercial TiO2 (P25). Reaction kinetics of photo-decomposition reactions using RF-calcined methods with WLED are proposed. A comparison of the simulation results with experimental data indicates good agreement. All the results provide useful information and design specifications. Thus, this study shows the feasibility and potential of plasma modification via LED in photocatalysis.
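
    The Langmuir-Hinshelwood model invoked above has a simple closed form. A hedged sketch (the rate constant k and adsorption constant K below are hypothetical values, not fitted parameters from the study):

```python
def lh_rate(conc, k=1.2, K=0.05):
    """Langmuir-Hinshelwood degradation rate r = k*K*C / (1 + K*C)."""
    return k * K * conc / (1.0 + K * conc)

# At low concentration the rate is ~first order (r ~ k*K*C);
# at high concentration it saturates toward k.
for c in (1.0, 10.0, 100.0, 1000.0):
    print(c, round(lh_rate(c), 3))
```

    Fitting k and K to measured concentration-time data is what allows the decomposition efficiencies of the differently calcined photocatalysts to be compared on a common footing.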

  5. Detecting the frequency of aminoglycoside modifying enzyme encoding genes among clinical isolates of methicillin-resistant Staphylococcus aureus

    PubMed Central

    Shokravi, Zahra; Mehrad, Laleh; Ramazani, Ali

    2015-01-01

    Introduction: Methicillin-resistant Staphylococcus aureus (MRSA) plays an important role in causing many serious nosocomial infections. In this study, the antimicrobial susceptibility and the frequency of aminoglycoside modifying enzyme encoding genes among clinical isolates of MRSA were investigated in two university hospitals of Zanjan province, Iran. Methods: The antimicrobial susceptibility of MRSA isolates to various antibiotics was investigated by the disk diffusion method. Multiplex PCR assays were used to determine aminoglycoside modifying enzyme (AME) genes and staphylococcal cassette chromosome mec (SCCmec) types in MRSA strains. Results: All 58 MRSA isolates were sensitive to vancomycin. Resistance to penicillin G, oxacillin, gentamicin, erythromycin, clindamycin, kanamycin, and tobramycin was found in 96.4%, 98.3%, 51.7%, 53.4%, 55.2%, 62%, and 58.6% of the isolates, respectively. The most prevalent AME genes were aac(6′)/aph(2′′) (48.3%) followed by ant(4)-Ia (24%). The aph(3′)-Ia gene was the least frequent AME gene among MRSA isolates (19%). Of the 58 tested MRSA isolates, 5 (8.6%) harboured SCCmec type I, 11 (19%) SCCmec type II, 20 (34.5%) SCCmec type III, 17 (29.3%) SCCmec type IVa, 1 (1.7%) SCCmec type IVb, 2 (3.4%) SCCmec type IVc, 11 (19%) SCCmec type IVd, and 18 (31%) SCCmec type V. Nineteen isolates were not typeable. Conclusion: aac(6′)/aph(2′′) was the most common aminoglycoside modifying enzyme gene, and SCCmec types II and V were among the most frequent types detected in hospital isolates. PMID:26191502

  7. Reducing epistemic errors in water quality modelling through high-frequency data and stakeholder collaboration: the case of an industrial spill

    NASA Astrophysics Data System (ADS)

    Krueger, Tobias; Inman, Alex; Paling, Nick

    2014-05-01

    Catchment management, as driven by legislation such as the EU WFD or by grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state of the art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and is currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error: an industrial spill ignored in a water quality model. We demonstrate the bias of the resulting model simulations and show how the error was discovered somewhat incidentally through auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new approach to managing water resources in a more decentralised and collaborative way. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and faecal coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices. In the process of updating the model for the hydrological years 2008-2012, an over-prediction of the annual average P concentration by the model was found at
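
    The export-coefficient bookkeeping behind a model like ECM+ can be sketched in a few lines (all coefficients, areas, and loads below are hypothetical illustrations, not the Tamar model's values). The sketch also makes the paper's point concrete: omitting a point source, such as an unreported industrial spill, biases the simulated load by exactly that missing term.

```python
# Annual diffuse load L = sum_i E_i * A_i over land-use classes,
# plus any point-source inputs.
export_coeffs = {"arable": 0.65, "grassland": 0.30, "woodland": 0.02}  # kg P/ha/yr
areas_ha = {"arable": 12000, "grassland": 25000, "woodland": 8000}

diffuse_load = sum(export_coeffs[u] * areas_ha[u] for u in areas_ha)  # kg P/yr
point_load = 450.0  # kg P/yr, e.g. a sewage works or an accidental spill
total = diffuse_load + point_load
print(diffuse_load, total)
```

    High-frequency in-stream data can flag the discrepancy between the modelled diffuse load and the observed total, which is how the spill in this case study came to light.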

  8. High frequency electromagnetic properties of interstitial-atom-modified Ce2Fe17NX and its composites

    NASA Astrophysics Data System (ADS)

    Li, L. Z.; Wei, J. Z.; Xia, Y. H.; Wu, R.; Yun, C.; Yang, Y. B.; Yang, W. Y.; Du, H. L.; Han, J. Z.; Liu, S. Q.; Yang, Y. C.; Wang, C. S.; Yang, J. B.

    2014-07-01

    The magnetic and microwave absorption properties of the interstitial-atom-modified intermetallic compound Ce2Fe17NX have been investigated. The Ce2Fe17NX compound shows planar anisotropy with a saturation magnetization of 1088 kA/m at room temperature. The Ce2Fe17NX-paraffin composite with a mass ratio of 1:1 exhibits a permeability of μ′ = 2.7 at low frequency, together with a reflection loss of -26 dB at 6.9 GHz for a thickness of 1.5 mm and -60 dB at 2.2 GHz for a thickness of 4.0 mm. It was found that this composite raises the Snoek limit and exhibits both high working frequency and high permeability due to its high saturation magnetization and the high ratio of the c-axis anisotropy field to the basal-plane anisotropy field. Hence, this composite is a candidate for high-performance thin-layer microwave absorbers.
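
    Reflection-loss figures like those quoted above come from standard transmission-line theory for a metal-backed single-layer absorber. A hedged sketch (the complex εr and μr below are illustrative values, not the measured Ce2Fe17NX composite parameters):

```python
import cmath
import math

C = 2.998e8  # speed of light, m/s

def reflection_loss_db(f_hz, d_m, eps_r, mu_r):
    """RL of a metal-backed single layer, normalized input impedance z_in."""
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * f_hz * d_m * cmath.sqrt(mu_r * eps_r) / C)
    gamma = (z_in - 1) / (z_in + 1)  # reflection coefficient vs free space
    return 20 * math.log10(abs(gamma))

rl = reflection_loss_db(6.9e9, 1.5e-3, 12 - 3j, 2.7 - 1.2j)
print(f"RL at 6.9 GHz, 1.5 mm layer: {rl:.1f} dB")
```

    Sweeping frequency and thickness with this formula is how the -26 dB and -60 dB matching points of a real composite are located.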

  9. Modified approach for high frequency dielectric characterization of thinly metallized soft polymer film using grounded coplanar waveguide

    NASA Astrophysics Data System (ADS)

    Baron, Samuel; Nadaud, Kevin; Guiffard, Benoit; Sharaiha, Ala; Seveyrat, Laurence

    2015-08-01

    In this paper, we present the dielectric characterization of a soft polymer, polyurethane (PU), over the 1-31 GHz frequency band using grounded coplanar waveguide (GCPW) lines with a modified analytical method. The unavoidably thin metallization (1 μm) of GCPW lines on polyurethane yields high conductor losses, which contribute up to 58% of the extracted global losses at 4 GHz. To isolate the dielectric losses more precisely, we modify an existing analytical model by coupling it with 3-D electromagnetic simulations, which allows the conductor losses to be estimated and subtracted quickly. The measurements indicate that the polyurethane relative permittivity ranges from 3.49 to 2.65 and the loss tangent is about 0.08, in agreement with the state of the art for this grade of PU as well as with metal-insulator-metal capacitor characterizations (from 10⁻¹ to 10⁷ Hz and from 2 × 10⁸ to 5 × 10⁹ Hz). The proposed approach may open a fast and simple way to determine precisely the microwave dielectric properties of (ultra)soft polymers over a large bandwidth.

  10. In vitro culture increases the frequency of stochastic epigenetic errors at imprinted genes in placental tissues from mouse concepti produced through assisted reproductive technologies.

    PubMed

    de Waal, Eric; Mak, Winifred; Calhoun, Sondra; Stein, Paula; Ord, Teri; Krapp, Christopher; Coutifaris, Christos; Schultz, Richard M; Bartolomei, Marisa S

    2014-02-01

    Assisted reproductive technologies (ART) have enabled millions of couples with compromised fertility to conceive children. Nevertheless, there is a growing concern regarding the safety of these procedures due to an increased incidence of imprinting disorders, premature birth, and low birth weight in ART-conceived offspring. An integral aspect of ART is the oxygen concentration used during in vitro development of mammalian embryos, which is typically either atmospheric (~20%) or reduced (5%). Both oxygen tension levels have been widely used, but 5% oxygen improves preimplantation development in several mammalian species, including that of humans. To determine whether a high oxygen tension increases the frequency of epigenetic abnormalities in mouse embryos subjected to ART, we measured DNA methylation and expression of several imprinted genes in both embryonic and placental tissues from concepti generated by in vitro fertilization (IVF) and exposed to 5% or 20% oxygen during culture. We found that placentae from IVF embryos exhibit an increased frequency of abnormal methylation and expression profiles of several imprinted genes, compared to embryonic tissues. Moreover, IVF-derived placentae exhibit a variety of epigenetic profiles at the assayed imprinted genes, suggesting that these epigenetic defects arise by a stochastic process. Although culturing embryos in both of the oxygen concentrations resulted in a significant increase of epigenetic defects in placental tissues compared to naturally conceived controls, we did not detect significant differences between embryos cultured in 5% and those cultured in 20% oxygen. Thus, further optimization of ART should be considered to minimize the occurrence of epigenetic errors in the placenta. PMID:24337315

  11. Proofreading for word errors.

    PubMed

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.
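
    The frequency factor identified above can be made concrete with a toy measure (a hypothetical sketch; the counts below are illustrative, not from a real corpus): a word error is predicted to be harder to catch when the error word is itself a frequent real word relative to the intended word.

```python
# Illustrative corpus frequencies; "drss" is a non-word with zero count.
corpus_freq = {"from": 4_359_000, "form": 1_026_000, "dress": 85_000, "drss": 0}

def relative_error_frequency(error_word, correct_word):
    """Share of combined frequency held by the error word (0 = non-word)."""
    fe = corpus_freq.get(error_word, 0)
    fc = corpus_freq.get(correct_word, 0)
    return fe / (fe + fc) if (fe + fc) else 0.0

# "form" typed for "from": a frequent real word, invisible to spell checkers
print(relative_error_frequency("form", "from"))
# "drss" typed for "dress": a non-word, easily flagged automatically
print(relative_error_frequency("drss", "dress"))
```

    Only the first kind of error reaches the human proofreader, which is why the study focuses on word errors rather than non-word typos.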

  12. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  13. Error-related electrocorticographic activity in humans during continuous movements.

    PubMed

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2012-04-01

    Brain-machine interface (BMI) devices make errors in decoding. Detecting these errors online from neuronal activity can improve BMI performance by modifying the decoding algorithm and by correcting the errors made. Here, we study the neuronal correlates of two different types of errors which can both be employed in BMI: (i) the execution error, due to inaccurate decoding of the subjects' movement intention; (ii) the outcome error, due to not achieving the goal of the movement. We demonstrate that, in electrocorticographic (ECoG) recordings from the surface of the human brain, strong error-related neural responses (ERNRs) for both types of errors can be observed. ERNRs were present in the low and high frequency components of the ECoG signals, with both signal components carrying partially independent information. Moreover, the observed ERNRs can be used to discriminate between error types, with high accuracy (≥83%) obtained already from single electrode signals. We found ERNRs in multiple cortical areas, including motor and somatosensory cortex. As the motor cortex is the primary target area for recording control signals for a BMI, an adaptive motor BMI utilizing these error signals may not require additional electrode implants in other brain areas.
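
    Discriminating two error types from a single-electrode feature, as reported above, can be illustrated with a minimal threshold classifier on synthetic data (this is a sketch of the idea, not the study's ECoG analysis; the feature distributions are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical band-power feature values for the two error types
exec_err = rng.normal(1.0, 0.5, 200)   # execution errors
outc_err = rng.normal(2.0, 0.5, 200)   # outcome errors

# Midpoint threshold between the class means
threshold = (exec_err.mean() + outc_err.mean()) / 2
pred_exec = exec_err < threshold        # correctly labelled execution errors
pred_outc = outc_err >= threshold       # correctly labelled outcome errors
accuracy = (pred_exec.sum() + pred_outc.sum()) / 400
print(f"single-feature accuracy: {accuracy:.2f}")
```

    In practice such error signals would be fed back to the decoder, either to correct the erroneous command or to adapt the decoding algorithm online.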

  14. Tunable error-free optical frequency conversion of a 4ps optical short pulse over 25 nm by four-wave mixing in a polarisation-maintaining optical fibre

    NASA Astrophysics Data System (ADS)

    Morioka, T.; Kawanishi, S.; Saruwatari, M.

    1994-05-01

    Error-free, tunable optical frequency conversion of a transform-limited 4.0 ps optical pulse signal is demonstrated at 6.3 Gbit/s using four-wave mixing in a polarization-maintaining optical fibre. The process generates 4.0-4.6 ps pulses over a 25 nm range with time-bandwidth products of 0.31-0.43 and conversion power penalties of less than 1.5 dB.
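
    The frequency bookkeeping behind such a converter is simple: in degenerate four-wave mixing the converted (idler) frequency obeys f_idler = 2·f_pump − f_signal, and the time-bandwidth product measures how close a pulse is to the transform limit. A sketch with illustrative wavelengths and spectral width (not the paper's experimental values):

```python
C = 2.998e8  # speed of light, m/s

def idler_wavelength_nm(pump_nm, signal_nm):
    """Idler wavelength from f_idler = 2*f_pump - f_signal."""
    f_p = C / (pump_nm * 1e-9)
    f_s = C / (signal_nm * 1e-9)
    return C / (2 * f_p - f_s) * 1e9

def time_bandwidth_product(dt_s, d_lambda_nm, center_nm):
    """dt * dnu, converting a spectral width in nm to Hz at center_nm."""
    d_nu = C * d_lambda_nm * 1e-9 / (center_nm * 1e-9) ** 2
    return dt_s * d_nu

lam_i = idler_wavelength_nm(1550.0, 1562.5)      # idler on the far side of the pump
tbp = time_bandwidth_product(4.0e-12, 0.62, 1550.0)  # 4 ps pulse, 0.62 nm width
print(lam_i, tbp)
```

    Tuning the signal wavelength moves the idler symmetrically about the pump, which is what gives the scheme its wide conversion range.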

  15. Thermal acclimation and thyroxine treatment modify the electric organ discharge frequency in an electric fish, Apteronotus leptorhynchus.

    PubMed

    Dunlap, K D; Ragazzi, M A

    2015-11-01

    In ectotherms, the rate of many neural processes is determined externally, by the influence of the thermal environment on body temperature, and internally, by hormones secreted from the thyroid gland. Through thermal acclimation, animals can buffer the influence of the thermal environment by adjusting their physiology to stabilize certain processes in the face of environmental temperature change. The electric organ discharge (EOD) used by weakly electric fish for electrocommunication and electrolocation is highly temperature sensitive. In some temperate species that naturally experience large seasonal fluctuations in environmental temperature, the thermal sensitivity (Q10) of the EOD shifts after long-term temperature change. We examined thermal acclimation of EOD frequency in a tropical electric fish, Apteronotus leptorhynchus, which naturally experiences much less temperature change. We transferred fish between thermal environments (25.3 and 27.8 °C) and measured EOD frequency and its thermal sensitivity (Q10) over 11 d. After 6 d, fish exhibited thermal acclimation to both warming and cooling, adjusting the thermal dependence of EOD frequency to partially compensate for the small change (2.5 °C) in water temperature. In addition, we evaluated the thyroid influence on EOD frequency by treating fish with thyroxine or the anti-thyroid compound propylthiouracil (PTU) to stimulate or inhibit thyroid activity, respectively. Thyroxine treatment significantly increased EOD frequency, but PTU had no effect. Neither thyroxine nor PTU treatment influenced the thermal sensitivity (Q10) of EOD frequency during acute temperature change. Thus, the EOD of Apteronotus shows significant thermal acclimation and responds to elevated thyroxine.
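
    The Q10 coefficient used throughout the study is the standard measure of thermal sensitivity. A small worked example (the EOD frequencies below are hypothetical; the temperatures are the study's two acclimation conditions):

```python
def q10(rate1, rate2, t1_c, t2_c):
    """Temperature coefficient Q10 = (R2/R1) ** (10 / (T2 - T1))."""
    return (rate2 / rate1) ** (10.0 / (t2_c - t1_c))

# Hypothetical EOD frequency rising from 800 Hz at 25.3 C to 830 Hz at 27.8 C
print(round(q10(800.0, 830.0, 25.3, 27.8), 2))
```

    Acclimation shows up in this framework as a change in the measured Q10 after long-term exposure, even when the acute response to a temperature step is unchanged.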

  16. Errors of Omission in English-Speaking Children's Production of Plurals and the Past Tense: The Effects of Frequency, Phonology, and Competition

    ERIC Educational Resources Information Center

    Matthews, Danielle E.; Theakston, Anna L.

    2006-01-01

    How do English-speaking children inflect nouns for plurality and verbs for the past tense? We assess theoretical answers to this question by considering errors of omission, which occur when children produce a stem in place of its inflected counterpart (e.g., saying "dress" to refer to 5 dresses). A total of 307 children (aged 3;11-9;9)…

  17. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
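
    The rounding behaviour described above is easy to demonstrate directly (a small sketch in IEEE 754 double precision):

```python
import math

# 0.1 has no exact binary representation, so the sum rounds
print(0.1 + 0.2 == 0.3)   # False

# Machine epsilon by halving: the smallest eps with 1 + eps/2 > 1
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps)  # 2**-52 for double precision

# Catastrophic cancellation: naive vs. algebraically stable evaluation
# of sqrt(x+1) - sqrt(x) for large x
x = 1e12
naive = math.sqrt(x + 1) - math.sqrt(x)
stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))
print(naive, stable)
```

    The rewritten form avoids subtracting two nearly equal numbers, which is the prototypical stability trick examined in error analysis.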

  18. Wound healing treatment by high frequency ultrasound, microcurrent, and combined therapy modifies the immune response in rats

    PubMed Central

    Korelo, Raciele I. G.; Kryczyk, Marcelo; Garcia, Carolina; Naliwaiko, Katya; Fernandes, Luiz C.

    2016-01-01

    BACKGROUND: Therapeutic high-frequency ultrasound, microcurrent, and a combination of the two have been used as potential interventions in the soft tissue healing process, but little is known about their effect on the immune system. OBJECTIVE: To evaluate the effects of therapeutic high-frequency ultrasound, microcurrent, and the combined therapy of the two on the size of the wound area, peritoneal macrophage function, CD4+ and CD8+ T lymphocyte populations, and plasma concentration of interleukins (ILs). METHOD: Sixty-five Wistar rats were randomized into five groups, as follows: uninjured control (C, group 1), lesion with no treatment (L, group 2), lesion treated with ultrasound (LU, group 3), lesion treated with microcurrent (LM, group 4), and lesion treated with combined therapy (LUM, group 5). For groups 3, 4, and 5, treatment was initiated 24 hours after surgery under anesthesia, and each group was allocated into three different subgroups (n=5) for assessment of the therapy resources on days 3, 7, and 14. Photoplanimetry was performed daily. After euthanasia, blood was collected for immune analysis. RESULTS: Ultrasound increased the phagocytic capacity and the production of nitric oxide by macrophages and induced reductions in CD4+ cells, the CD4+/CD8+ ratio, and the plasma concentration of IL-1β. Microcurrent and combined therapy decreased the production of superoxide anion, nitric oxide, CD4+ cells, the CD4+/CD8+ ratio, and IL-1β concentration. CONCLUSIONS: Therapeutic high-frequency ultrasound, microcurrent, and combined therapy changed the activity of the innate and adaptive immune system during the healing process but did not accelerate closure of the wound. PMID:26786082

  19. Modified structural and frequency dependent impedance formalism of nanoscale BaTiO3 due to Tb inclusion

    NASA Astrophysics Data System (ADS)

    Borah, Manjit; Mohanta, Dambarudhar

    2016-05-01

    We report the effect of Tb-doping on the structural and high frequency impedance response of nanoscale BaTiO3 (BT) systems. While exhibiting a mixed-phase crystal structure, the nano-BT systems are found to evolve with edges and facets. The interplanar spacing of crystal lattice fringes is ~0.25 nm. The Cole-Cole plots, in the impedance formalism, show semicircles characteristic of a grain-boundary resistance of several MΩ. A lowering of ac conductivity with doping is believed to be due to oxygen vacancies and vacancy ordering.
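    The semicircular Cole-Cole arcs mentioned above follow directly from a parallel resistor-capacitor equivalent circuit. A minimal sketch using hypothetical grain-boundary values (R = 5 MΩ, C = 100 pF, not taken from the paper):

```python
import numpy as np

def parallel_rc_impedance(freq_hz, r_ohm, c_farad):
    """Complex impedance of a parallel RC element, a common
    equivalent circuit for a grain-boundary response."""
    omega = 2 * np.pi * freq_hz
    return r_ohm / (1 + 1j * omega * r_ohm * c_farad)

# Hypothetical grain-boundary values (not from the paper)
r, c = 5e6, 100e-12            # 5 MOhm, 100 pF
f = np.logspace(0, 6, 400)     # 1 Hz .. 1 MHz
z = parallel_rc_impedance(f, r, c)

# In a Cole-Cole plot (-Im Z vs Re Z) this traces a semicircle whose
# diameter equals R, with the apex at the relaxation frequency 1/(2*pi*R*C).
apex_f = 1 / (2 * np.pi * r * c)
```

    The arc diameter equals R and the apex frequency is 1/(2πRC), which is how grain-boundary resistances of several MΩ are read off such plots.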

  20. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  1. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  2. Exposure to an extremely low-frequency electromagnetic field only slightly modifies the proteome of Chromobacterium violaceum ATCC 12472

    PubMed Central

    Baraúna, Rafael A.; Santos, Agenor V.; Graças, Diego A.; Santos, Daniel M.; Ghilardi, Rubens; Pimenta, Adriano M. C.; Carepo, Marta S. P.; Schneider, Maria P.C.; Silva, Artur

    2015-01-01

    Several studies of the physiological responses of different organisms exposed to extremely low-frequency electromagnetic fields (ELF-EMF) have been described. In this work, we report the minimal effects of in situ exposure to ELF-EMF on the global protein expression of Chromobacterium violaceum using a gel-based proteomic approach. The protein expression profile was only slightly altered, with five differentially expressed proteins detected in the exposed cultures; two of these proteins (DNA-binding stress protein, Dps, and alcohol dehydrogenase) were identified by MS/MS. The enhanced expression of Dps possibly helped to prevent physical damage to DNA. Although small, the changes in protein expression observed here were probably beneficial in helping the bacteria to adapt to the stress generated by the electromagnetic field. PMID:26273227

  3. New Gear Transmission Error Measurement System Designed

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2001-01-01

    The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
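    Transmission error itself is the deviation of the driven gear from the position perfect conjugate action would give, usually expressed as a displacement along the line of action. A minimal sketch with a hypothetical tooth ratio and base radius (none of these values come from the NASA rig):

```python
import numpy as np

def transmission_error_um(theta_pinion_rad, theta_gear_rad,
                          n_pinion, n_gear, base_radius_gear_mm):
    """Transmission error as a displacement along the line of action:
    TE = r_b * (theta_gear_ideal - theta_gear_measured), where the
    ideal gear angle follows from the tooth ratio."""
    theta_ideal = theta_pinion_rad * n_pinion / n_gear
    return base_radius_gear_mm * 1e3 * (theta_ideal - theta_gear_rad)  # mm -> um

# Hypothetical 25:31 gear pair, 60 mm base radius, with a once-per-tooth
# (mesh-frequency) angular ripple of 20 microradians on the gear
theta_p = np.linspace(0, 4 * np.pi, 1000)
ripple = 2e-5 * np.sin(25 * theta_p)
theta_g = theta_p * 25 / 31 - ripple
te = transmission_error_um(theta_p, theta_g, 25, 31, 60.0)
```

    A perfect pair gives identically zero TE; the ripple above appears as a micron-scale oscillation at mesh frequency, the kind of signal the optical grating system resolves.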

  4. Analysis of ionospheric refraction error corrections for GRARR systems

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.

    1971-01-01

    A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.
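    The first-order corrections discussed here scale as 40.3·TEC/f², with the group delay positive and the carrier-phase range change equal and opposite. A sketch with an assumed slant TEC of 1e17 el/m² (about 10 TECU) and nominal band frequencies, not the exact GRARR carriers:

```python
K = 40.3  # first-order ionospheric constant, m^3/s^2

def iono_group_delay_m(tec_e_per_m2, f_hz):
    """First-order ionospheric range error: group delay is
    +40.3*TEC/f^2 metres; the carrier-phase range change has the
    same magnitude but opposite sign (phase advance)."""
    return K * tec_e_per_m2 / f_hz**2

tec = 1e17                                   # assumed slant TEC (~10 TECU)
err_vhf = iono_group_delay_m(tec, 150e6)     # nominal VHF frequency
err_s = iono_group_delay_m(tec, 2.25e9)      # nominal S-band frequency
```

    The (2.25 GHz / 150 MHz)² = 225 ratio is why the VHF systems are far more sensitive to the ionosphere than the S-band systems.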

  5. Axial Eye Growth and Refractive Error Development Can Be Modified by Exposing the Peripheral Retina to Relative Myopic or Hyperopic Defocus

    PubMed Central

    Benavente-Pérez, Alexandra; Nour, Ann; Troilo, David

    2014-01-01

    Purpose. Bifocal contact lenses were used to impose hyperopic and myopic defocus on the peripheral retina of marmosets. Eye growth and refractive state were compared with untreated animals and those treated with single-vision or multizone contact lenses from earlier studies. Methods. Thirty juvenile marmosets wore one of three experimental annular bifocal contact lens designs on their right eyes and a plano contact lens on the left eye as a control for 10 weeks from 70 days of age (10 marmosets/group). The experimental designs had plano center zones (1.5 or 3 mm) and +5 diopters [D] or −5 D in the periphery (referred to as +5 D/1.5 mm, +5 D/3 mm and −5 D/3 mm). We measured the central and peripheral mean spherical refractive error (MSE), vitreous chamber depth (VC), pupil diameter (PD), calculated eye growth, and myopia progression rates prior to and during treatment. The results were compared with age-matched untreated (N = 25), single-vision positive (N = 19), negative (N = 16), and +5/−5 D multizone lens-reared marmosets (N = 10). Results. At the end of treatment, animals in the −5 D/3 mm group had larger (P < 0.01) and more myopic eyes (P < 0.05) than animals in the +5 D/1.5 mm group. There was a dose-dependent relationship between the peripheral treatment zone area and the treatment-induced changes in eye growth and refractive state. Pretreatment ocular growth rates and baseline peripheral refraction accounted for 40% of the induced refraction and axial growth rate changes. Conclusions. Eye growth and refractive state can be manipulated by altering peripheral retinal defocus. Imposing peripheral hyperopic defocus produces axial myopia, whereas peripheral myopic defocus produces axial hyperopia. The effects are smaller than using single-vision contact lenses that impose full-field defocus, but support the use of bifocal or multifocal contact lenses as an effective treatment for myopia control. PMID:25190657

  6. Error estimation for ORION baseline vector determination

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1980-01-01

    Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.
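    The covariance analysis described here amounts to linear error propagation: the partial derivatives of the observable with respect to each error source form a Jacobian J, and the baseline covariance is J·C_src·Jᵀ. A sketch with hypothetical partials and source variances (none of the numbers come from the ORION study):

```python
import numpy as np

# Rows: baseline components (x, y, z); columns: error sources
# (e.g. station clock, troposphere, antenna survey). Entries are
# hypothetical partial derivatives d(baseline)/d(source).
J = np.array([[0.8, 0.3, 0.1],
              [0.2, 0.9, 0.0],
              [0.1, 0.2, 1.1]])

# A priori variances of the (assumed independent) error sources
C_src = np.diag([0.02**2, 0.05**2, 0.01**2])

# Linear covariance propagation: C_base = J C_src J^T
C_base = J @ C_src @ J.T
sigma_xyz = np.sqrt(np.diag(C_base))

# Contribution of a single source k to each component: (J[:, k])^2 * var_k
contrib_tropo = (J[:, 1] ** 2) * C_src[1, 1]
```

    Because the sources are independent, the per-source contributions add up to the diagonal of C_base, which is exactly how each source's share of the baseline error budget is estimated.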

  7. Design and implementation of a new modified sliding mode controller for grid-connected inverter to controlling the voltage and frequency.

    PubMed

    Ghanbarian, Mohammad Mehdi; Nayeripour, Majid; Rajaei, Amirhossein; Mansouri, Mohammad Mahdi

    2016-03-01

    As the output power of a microgrid with renewable energy sources must be regulated based on grid conditions, robust controllers that share and balance power to regulate the voltage and frequency of the microgrid are critical. Therefore, a proper control system is necessary for updating the reference signals and determining the proportion of each inverter in the microgrid control. This paper proposes a new adaptive method which remains robust as conditions change. The controller is based on a modified sliding mode controller which adapts to linear and nonlinear loads. The performance of the proposed method is validated by simulation results and experimental laboratory results. PMID:26704720
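    The general idea of a sliding mode controller can be sketched on a first-order plant: drive the tracking error onto a sliding surface with a switching term, here smoothed by a boundary layer to limit chattering. All plant values and gains below are illustrative, not the paper's microgrid controller:

```python
import numpy as np

def simulate_smc(t_end=1.0, dt=1e-4, k=50.0, phi=0.05):
    """Sliding-mode tracking of a step reference for a first-order
    plant dx/dt = a*x + b*u. np.clip(e/phi, -1, 1) is a boundary-layer
    version of sign(e) that reduces chattering."""
    a, b = -1.0, 2.0          # hypothetical plant parameters
    ref = 1.0                 # step reference (e.g. per-unit voltage)
    x = 0.0
    for _ in range(int(t_end / dt)):
        e = x - ref                                   # sliding variable s = e
        u = (-a * x - k * np.clip(e / phi, -1.0, 1.0)) / b
        x += dt * (a * x + b * u)                     # explicit Euler step
    return x

final = simulate_smc()
```

    Outside the boundary layer the error decreases at the constant rate k, so the state reaches the reference quickly regardless of the plant parameters, which is the robustness property the paper exploits.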

  9. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  10. Large decrease of characteristic frequency of dielectric relaxation associated with domain-wall motion in Sb5+-modified (K,Na)NbO3-based ceramics

    NASA Astrophysics Data System (ADS)

    Zhang, Jialiang; Hao, Wentao; Gao, Yong; Qin, Yalin; Tan, Yongqiang; Wang, Chunlei

    2012-12-01

    The (K,Na)NbO3-based ceramics have drawn considerable attention as a type of promising lead-free piezoelectric materials in recent years. However, investigations on the dielectric dispersion spectra in the microwave range have rarely been conducted so far. Dielectric dispersion spectra of several representative (K,Na)NbO3-based ceramics were measured and compared in this study. Interestingly, the Sb5+-modified (K,Na)NbO3-based ceramics were found to differ distinctly from those without Sb5+ modification in the characteristic frequency fp of the dielectric relaxation associated with domain-wall motion. The former group shows fp values of several tens of MHz at room temperature, whereas the latter group generally has fp values of at least several GHz. For the Sb5+-modified (K,Na)NbO3-based ceramics, the change of fp with temperature roughly follows a thermally activated character. In contrast, the ones without Sb5+ modification exhibit a temperature-insensitive character in fp. Analysis showed that the results could be understood from the viewpoint of domain-wall vibration. It is speculated that a large change in the damping constant due to the incorporation of Sb5+ is the likely origin.
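    A "thermally activated character" means fp follows an Arrhenius law, fp = f0·exp(-Ea/kB·T), so the activation energy comes from a linear fit of ln fp against 1/T. A sketch on synthetic data (the 0.30 eV activation energy and 1e12 Hz prefactor are assumed, not values from the paper):

```python
import numpy as np

KB_EV = 8.617333e-5   # Boltzmann constant, eV/K

def fit_arrhenius(temps_k, fp_hz):
    """Least-squares fit of ln(fp) = ln(f0) - Ea/(kB*T).
    Returns (f0_hz, ea_ev)."""
    x = 1.0 / np.asarray(temps_k)
    slope, intercept = np.polyfit(x, np.log(fp_hz), 1)
    return np.exp(intercept), -slope * KB_EV

# Synthetic relaxation data with an assumed 0.30 eV activation energy
T = np.linspace(300, 450, 8)
fp = 1e12 * np.exp(-0.30 / (KB_EV * T))
f0_fit, ea_fit = fit_arrhenius(T, fp)
```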

  11. Accurate identification of the frequency response functions for the rotor-bearing-foundation system using the modified pseudo mode shape method

    NASA Astrophysics Data System (ADS)

    Chen, Yeong-Shu; Cheng, Ye-Dar; Yang, Tachung; Koai, Kwang-Lu

    2010-03-01

    In this paper, an identification technique in the dynamic analyses of rotor-bearing-foundation systems called the pseudo mode shape method (PMSM) was improved in order to enhance the accuracy of the identified dynamic characteristic matrices of its foundation models. Two procedures, namely, phase modification and numerical optimisation, were proposed in the algorithm of PMSM to effectively improve its accuracy. Generally, it is always necessary to build the whole foundation model in studying the dynamics of a rotor system through the finite element analysis method. This is either infeasible or impractical when the foundation is too complicated. Instead, the PMSM uses the frequency response function (FRF) data of joint positions between the rotor and the foundation to establish the equivalent mass, damping, and stiffness matrices of the foundation without having to build the physical model. However, the accuracy of the obtained system's FRF is still unsatisfactory, especially at those higher modes. In order to demonstrate the effectiveness of the presented methods, a solid foundation was solved for its FRF by using both the original and modified PMSM, as well as the finite element (ANSYS) model, for comparison. The results showed that the accuracy of the obtained FRF was improved remarkably with the modified PMSM when compared with the ANSYS results. In addition, an induction motor resembling a rotor-bearing-foundation system, with its housing treated as the foundation, was taken as an example to verify the algorithm experimentally. The FRF curves at the bearing supports of the rotor (armature) were obtained through modal testing to estimate the above-mentioned equivalent matrices of the housing. The FRF of the housing, which was calculated from the equivalent matrices with the modified PMSM, showed satisfactory consistency with that from the modal testing.
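    The core idea of extracting equivalent mass, damping, and stiffness from FRF data can be shown on a single degree of freedom: the dynamic stiffness 1/H(ω) = k - ω²m + iωc is linear in (m, c, k), so least squares recovers them. A toy sketch with invented parameters, far simpler than the multi-DOF matrices of the PMSM:

```python
import numpy as np

def fit_mck(omega, h):
    """Estimate (m, c, k) of a single-DOF oscillator from receptance
    H(w) = 1 / (k - w^2 m + i w c) via linear least squares on the
    dynamic stiffness 1/H."""
    d = 1.0 / h                                   # dynamic stiffness samples
    # Real part: k - w^2 m ; imaginary part: w c
    A = np.column_stack([np.ones_like(omega), -omega**2])
    k_est, m_est = np.linalg.lstsq(A, d.real, rcond=None)[0]
    c_est = np.linalg.lstsq(omega[:, None], d.imag, rcond=None)[0][0]
    return m_est, c_est, k_est

# Synthetic FRF for a hypothetical m = 2 kg, c = 8 N s/m, k = 5e4 N/m system
m, c, k = 2.0, 8.0, 5e4
w = np.linspace(10, 500, 200)
H = 1.0 / (k - w**2 * m + 1j * w * c)
m_est, c_est, k_est = fit_mck(w, H)
```

    With measured (noisy, multi-point) FRFs the same least-squares structure applies, only with matrix-valued unknowns.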

  12. High frequency electromagnetic properties of interstitial-atom-modified Ce2Fe17NX and its composites

    SciTech Connect

    Li, L. Z.; Wei, J. Z.; Xia, Y. H.; Wu, R.; Yun, C.; Yang, Y. B.; Yang, W. Y.; Du, H. L.; Han, J. Z.; Liu, S. Q.; Yang, Y. C.; Wang, C. S. E-mail: jbyang@pku.edu.cn; Yang, J. B. E-mail: jbyang@pku.edu.cn

    2014-07-14

    The magnetic and microwave absorption properties of the interstitial atom modified intermetallic compound Ce2Fe17NX have been investigated. The Ce2Fe17NX compound shows a planar anisotropy with saturation magnetization of 1088 kA/m at room temperature. The Ce2Fe17NX paraffin composite with a mass ratio of 1:1 exhibits a permeability of μ′ = 2.7 at low frequency, together with a reflection loss of −26 dB at 6.9 GHz with a thickness of 1.5 mm and −60 dB at 2.2 GHz with a thickness of 4.0 mm. It was found that this composite increases the Snoek limit and exhibits both high working frequency and permeability due to its high saturation magnetization and high ratio of the c-axis anisotropy field to the basal plane anisotropy field. Hence, it is possible that this composite can be used as a high-performance thin layer microwave absorber.
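    Reflection loss figures like the −26 dB quoted above are normally computed from the transmission-line model of a single metal-backed absorber layer. A sketch with hypothetical complex μr and εr, chosen only to produce a matching dip near 10 GHz for a 1.5 mm layer (these are not the measured material parameters):

```python
import numpy as np

C = 2.998e8   # speed of light, m/s

def reflection_loss_db(f_hz, d_m, mu_r, eps_r):
    """Reflection loss of a single metal-backed absorber layer
    (standard transmission-line model), normalized to free space:
    Z_in = sqrt(mu/eps) * tanh(j*2*pi*f*d*sqrt(mu*eps)/c)."""
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(
        1j * 2 * np.pi * f_hz * d_m * np.sqrt(mu_r * eps_r) / C)
    gamma = (z_in - 1) / (z_in + 1)     # reflection coefficient vs free space
    return 20 * np.log10(np.abs(gamma))

# Hypothetical lossy permeability/permittivity for illustration only
mu_r, eps_r = 2.0 - 0.9j, 12.0 - 2.5j
f = np.linspace(1e9, 18e9, 500)
rl = reflection_loss_db(f, 1.5e-3, mu_r, eps_r)
best = f[np.argmin(rl)]
```

    The dip occurs where the layer acts as a quarter-wave matching section; sweeping thickness moves the dip in frequency, which is why the paper quotes different optimal frequencies for 1.5 mm and 4.0 mm layers.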

  13. Potential-dependent structures investigated at the perchloric acid solution/iodine modified Au(111) interface by electrochemical frequency-modulation atomic force microscopy.

    PubMed

    Utsunomiya, Toru; Tatsumi, Shoko; Yokota, Yasuyuki; Fukui, Ken-ichi

    2015-05-21

    Electrochemical frequency-modulation atomic force microscopy (EC-FM-AFM) was adopted to analyze the electrified interface between an iodine modified Au(111) and a perchloric acid solution. Atomic resolution imaging of the electrode was strongly dependent on the electrode potential within the electrochemical window: each iodine atom was imaged in the cathodic range of the electrode potential, but not in the more anodic range where the tip is retracted by approximately 0.1 nm compared to the cathodic case for the same imaging parameters. The frequency shift versus tip-to-sample distance curves obtained in the electric double layer region on the iodine adlayer indicated that the water structuring became weaker at the anodic potential, where the atomic resolution images could not be obtained, and immediately recovered at the original cathodic potential. The reversible hydration structures were consistent with the reversible topographic images and the cyclic voltammetry results. These results indicate that perchlorate anions concentrated at the anodic potential affect the interface hydration without any irreversible changes to the interface under these conditions.

  14. Exploration of MR-guided head and neck hyperthermia by phantom testing of a modified prototype applicator for use with proton resonance frequency shift thermometry.

    PubMed

    Numan, Wouter C M; Hofstetter, Lorne W; Kotek, Gyula; Bakker, Jurriaan F; Fiveland, Eric W; Houston, Gavin C; Kudielka, Guido; Yeo, Desmond T B; Paulides, Margarethus M

    2014-05-01

    Magnetic resonance thermometry (MRT) offers non-invasive temperature imaging and can greatly contribute to the effectiveness of head and neck hyperthermia. We therefore wish to redesign the HYPERcollar head and neck hyperthermia applicator for simultaneous radio frequency (RF) heating and magnetic resonance thermometry. In this work we tested the feasibility of this goal through an exploratory experiment, in which we used a minimally modified applicator prototype to heat a neck model phantom and used an MR scanner to measure its temperature distribution. We identified several distorting factors of our current applicator design and experimental methods to be addressed during development of a fully MR compatible applicator. To allow MR imaging of the electromagnetically shielded inside of the applicator, only the lower half of the HYPERcollar prototype was used. Two of its antennas radiated a microwave signal (150 W, 434 MHz) for 11 min into the phantom, creating a high gradient temperature profile (ΔTmax = 5.35 °C). Thermal distributions were measured sequentially, using drift-corrected proton resonance frequency shift-based MRT. Measurement accuracy was assessed using optical probe thermometry and found to be about 0.4 °C (0.1-0.7 °C). Thermal distribution size and shape were verified by thermal simulations and found to have a good correlation (r² = 0.76). PMID:24773040
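    Proton resonance frequency shift thermometry converts a gradient-echo phase difference to temperature via ΔT = Δφ / (2π·γ·α·B0·TE), with α ≈ −0.01 ppm/°C. A sketch with hypothetical scanner settings (1.5 T, TE = 20 ms; not the acquisition parameters of this study):

```python
import numpy as np

GAMMA_HZ_PER_T = 42.577e6     # proton gyromagnetic ratio / (2*pi)
ALPHA_PPM_PER_C = -0.01       # PRF thermal coefficient, ppm per degC

def prf_delta_t(delta_phi_rad, b0_tesla, te_s):
    """Temperature change from the phase difference between two
    gradient-echo images: dT = dphi / (2*pi*gamma*alpha*B0*TE)."""
    return delta_phi_rad / (2 * np.pi * GAMMA_HZ_PER_T
                            * ALPHA_PPM_PER_C * 1e-6 * b0_tesla * te_s)

# With alpha < 0, heating appears as a negative phase change
phi = np.array([-0.120, -0.060, 0.0])     # rad
d_temp = prf_delta_t(phi, 1.5, 0.020)     # degC per voxel
```

    The B0-drift correction mentioned in the abstract amounts to subtracting the apparent ΔT of a region known to stay at constant temperature before applying this conversion.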

  16. Surprise and error: common neuronal architecture for the processing of errors and novelty.

    PubMed

    Wessel, Jan R; Danielmeier, Claudia; Morton, J Bruce; Ullsperger, Markus

    2012-05-30

    According to recent accounts, the processing of errors and generally infrequent, surprising (novel) events share a common neuroanatomical substrate. Direct empirical evidence for this common processing network in humans is, however, scarce. To test this hypothesis, we administered a hybrid error-monitoring/novelty-oddball task in which the frequency of novel, surprising trials was dynamically matched to the frequency of errors. Using scalp electroencephalographic recordings and event-related functional magnetic resonance imaging (fMRI), we compared neural responses to errors with neural responses to novel events. In Experiment 1, independent component analysis of scalp ERP data revealed a common neural generator implicated in the generation of both the error-related negativity (ERN) and the novelty-related frontocentral N2. In Experiment 2, this pattern was confirmed by a conjunction analysis of event-related fMRI, which showed significantly elevated BOLD activity following both types of trials in the posterior medial frontal cortex, including the anterior midcingulate cortex (aMCC), the neuronal generator of the ERN. Together, these findings provide direct evidence of a common neural system underlying the processing of errors and novel events. This appears to be at odds with prominent theories of the ERN and aMCC. In particular, the reinforcement learning theory of the ERN may need to be modified because it may not suffice as a fully integrative model of aMCC function. Whenever the course and outcome of an action violate expectancies (not necessarily related to reward), the aMCC seems to be engaged in evaluating the necessity of behavioral adaptation. PMID:22649231

  17. Analyzing the properties of acceptor mode in two-dimensional plasma photonic crystals based on a modified finite-difference frequency-domain method

    SciTech Connect

    Zhang, Hai-Feng; Ding, Guo-Wen; Lin, Yi-Bing; Chen, Yu-Qing

    2015-05-15

    In this paper, the properties of acceptor mode in two-dimensional plasma photonic crystals (2D PPCs) composed of the homogeneous and isotropic dielectric cylinders inserted into nonmagnetized plasma background with square lattices under transverse-magnetic wave are theoretically investigated by a modified finite-difference frequency-domain (FDFD) method with supercell technique, whose symmetry of every supercell is broken by removing a central rod. A new FDFD method is developed to calculate the band structures of such PPCs. The novel FDFD method adopts a general function to describe the distribution of dielectric in the present PPCs, which can easily transform the complicated nonlinear eigenvalue equation to the simple linear equation. The details of convergence and effectiveness of proposed FDFD method are analyzed using a numerical example. The simulated results demonstrate that the proposed FDFD method achieves sufficient accuracy compared to the plane wave expansion method, and that good convergence is obtained if the number of meshed grids is large enough. As a comparison, two different configurations of photonic crystals (PCs) but with similar defect are theoretically investigated. Compared to the conventional dielectric-air PCs, not only the acceptor mode has a higher frequency but also an additional photonic bandgap (PBG) can be found in the low frequency region. The calculated results also show that PBGs of proposed PPCs can be enlarged as the point defect is introduced. The influences of the parameters for present PPCs on the properties of acceptor mode are also discussed in detail. Numerical simulations reveal that the acceptor mode in the present PPCs can be easily tuned by changing those parameters. These results hold promise for designing tunable applications in signal processing or time-delay devices based on the present PPCs.
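    The essence of any FDFD band-structure calculation is to discretize the wave operator over one unit cell with Bloch-periodic boundaries and solve the resulting eigenproblem for ω(k). A 1D scalar toy, far simpler than the paper's 2D plasma case, shows the machinery:

```python
import numpy as np

def fdfd_bands_1d(eps, k_bloch, a=1.0, n_bands=3):
    """Lowest photonic bands of a 1D periodic dielectric via a
    finite-difference frequency-domain eigenproblem. Solves
    -(1/eps) d^2E/dx^2 = (w/c)^2 E with Bloch boundary condition
    E(x + a) = E(x) exp(i k a). Returns w*a/c for the lowest bands."""
    n = len(eps)
    h = a / n
    D = np.zeros((n, n), dtype=complex)
    for i in range(n):
        D[i, i] = -2.0
        D[i, (i + 1) % n] = 1.0
        D[i, (i - 1) % n] = 1.0
    # Bloch phase on the wrap-around couplings
    D[n - 1, 0] *= np.exp(1j * k_bloch * a)
    D[0, n - 1] *= np.exp(-1j * k_bloch * a)
    A = -D / h**2 / np.asarray(eps)[:, None]
    w2 = np.sort(np.linalg.eigvals(A).real)
    return np.sqrt(np.abs(w2[:n_bands]))

# Sanity check on a uniform medium: the lowest band must follow w = c*k
n = 200
eps_uniform = np.ones(n)
k = 0.4 * np.pi                  # Bloch wavenumber inside the first zone
w_lowest = fdfd_bands_1d(eps_uniform, k)[0]
```

    Replacing `eps_uniform` with a patterned profile (or, in the plasma case, a frequency-dependent permittivity, which is what makes the paper's eigenproblem nonlinear) opens bandgaps at the zone boundary.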

  18. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work at each hour of the day.

  19. Phase Errors and the Capture Effect

    SciTech Connect

    Blair, J.; Machorro, E.

    2011-11-01

    This slide show presents analysis of spectrograms and the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero: this is called the capture effect because the largest signal captures the phase and frequency determination.
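    The capture effect is easy to verify numerically: adding a noise phasor that always stays smaller than a unit signal phasor bounds the instantaneous phase error below 90° (in fact below arcsin of the noise-to-signal ratio), and symmetry drives its average to zero. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit signal phasor plus noise phasors of magnitude strictly below 1
n_trials = 100_000
noise_mag = 0.8 * rng.random(n_trials)            # always below the signal
noise_phase = rng.uniform(-np.pi, np.pi, n_trials)
resultant = 1.0 + noise_mag * np.exp(1j * noise_phase)

phase_error = np.angle(resultant)                 # rad, relative to the signal
max_err_deg = np.degrees(np.abs(phase_error).max())
mean_err = phase_error.mean()
```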

  20. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Radar error statistics of C-band and S-band that are recommended for use with the groundtracking programs to process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and due to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.
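    The bias-plus-noise decomposition described here can be simulated directly: a constant (or slowly varying) bias plus an exponentially correlated, first-order Gauss-Markov sequence for the high frequency part. All numerical values below are illustrative, not the recommended shuttle statistics:

```python
import numpy as np

rng = np.random.default_rng(3)

def gauss_markov(n, dt, tau, sigma):
    """First-order Gauss-Markov sequence: exponentially correlated
    noise with stationary standard deviation sigma and correlation
    time tau, a common model for correlated tracking noise."""
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1 - phi**2)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + q * rng.standard_normal()
    return x

# Hypothetical range error: 3 m bias plus correlated 2 m (1-sigma) noise
n, dt = 20000, 0.1                 # samples, seconds
bias_m = 3.0
noise = gauss_markov(n, dt, tau=1.0, sigma=2.0)
range_error = bias_m + noise
```

    Setting tau to zero recovers uncorrelated sample-to-sample noise; a long tau pushes the noise toward the slowly varying "bias" end of the spectrum.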

  1. Combination of modified mixing technique and low frequency ultrasound to control the elution profile of vancomycin-loaded acrylic bone cement

    PubMed Central

    Wendling, A.; Mar, D.; Wischmeier, N.; Anderson, D.

    2016-01-01

    provides a reasonable means for increasing both short- and long-term antibiotic elution without affecting mechanical strength. Cite this article: Dr. T. McIff. Combination of modified mixing technique and low frequency ultrasound to control the elution profile of vancomycin-loaded acrylic bone cement. Bone Joint Res 2016;5:26–32. DOI: 10.1302/2046-3758.52.2000412 PMID:26843512

  2. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the 'worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the 'future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
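    A common way to symmetrize the error distribution is to alternate the scan direction between lines (serpentine Floyd-Steinberg). The sketch below uses that well-known variant as a stand-in, since the paper's exact two-pass filter coefficients are not given here:

```python
import numpy as np

def serpentine_fs(img):
    """Floyd-Steinberg dithering with serpentine (alternating)
    scanline direction, a standard way to make the error spread
    more symmetric and suppress 'worm' artifacts. Stand-in
    illustration, not the paper's two-pass algorithm."""
    h, w = img.shape
    work = img.astype(float).copy()
    out = np.zeros_like(work)
    for y in range(h):
        xs = range(w) if y % 2 == 0 else range(w - 1, -1, -1)
        sgn = 1 if y % 2 == 0 else -1           # scan direction of this line
        for x in xs:
            out[y, x] = 1.0 if work[y, x] >= 0.5 else 0.0
            err = work[y, x] - out[y, x]
            if 0 <= x + sgn < w:
                work[y, x + sgn] += err * 7 / 16
            if y + 1 < h:
                if 0 <= x - sgn < w:
                    work[y + 1, x - sgn] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if 0 <= x + sgn < w:
                    work[y + 1, x + sgn] += err * 1 / 16
    return out

gray = np.full((64, 64), 0.25)      # flat 25% gray patch
halftone = serpentine_fs(gray)
```

    Because the quantization error is fully redistributed (except at the image border), the halftone preserves the mean gray level of the input.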

  3. Using a modified time-reverse imaging technique to locate low-frequency earthquakes on the San Andreas Fault near Cholame, California

    USGS Publications Warehouse

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2015-01-01

    We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes within 5 km uncertainty without relying on high-amplitude P-wave arrivals and that produces similar hypocentral locations to methods that locate events by stacking hundreds of LFEs without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families. We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source–station distance and signal-to-noise ratio.
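    The coherence search at the heart of such methods can be sketched in one dimension: for each candidate source location, remove the predicted travel times and score the alignment of the traces by their stack power. The station geometry, velocity, and wavelet below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

v = 3.5                                              # assumed S-wave speed, km/s
stations = np.array([0.0, 12.0, 25.0, 40.0, 55.0])   # station positions, km
src_true, t0_true = 18.0, 2.0                        # true location (km), origin time (s)

dt = 0.01
t = np.arange(0.0, 30.0, dt)

def pulse(tc):
    """Narrow Gaussian wavelet standing in for a weak S arrival."""
    return np.exp(-((t - tc) / 0.15) ** 2)

traces = np.array([pulse(t0_true + abs(s - src_true) / v) for s in stations])
traces += 0.2 * rng.standard_normal(traces.shape)    # low signal-to-noise ratio

# Back-project: for each candidate, undo the predicted travel times and
# score alignment by the peak power of the stacked traces.
candidates = np.arange(0.0, 60.0, 0.5)
power = np.empty(len(candidates))
for i, x in enumerate(candidates):
    shifts = (np.abs(stations - x) / v / dt).astype(int)
    aligned = [np.roll(tr, -s) for tr, s in zip(traces, shifts)]
    power[i] = (np.sum(aligned, axis=0) ** 2).max()

x_est = candidates[np.argmax(power)]
```

    Only at the true location do all moveout-corrected traces peak at the common origin time, so the stack power maximum picks it out even when individual traces are noisy, which mirrors the amplitude-independent coherence criterion described above.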

  4. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
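
The kind of error the report warns about is easy to reproduce: interpolating a transducer (antenna) factor linearly on a linear frequency axis versus linearly on a logarithmic frequency axis gives noticeably different corrections between calibration points. The table values below are hypothetical:

```python
import numpy as np

# Hypothetical transducer-factor table (dB) at two calibration frequencies
f_pts = np.array([100e6, 1000e6])   # Hz
af_pts = np.array([10.0, 30.0])     # dB

f = 300e6  # frequency at which the analyzer must interpolate

# Linear interpolation on a linear frequency axis
af_lin = np.interp(f, f_pts, af_pts)

# Linear interpolation on a logarithmic frequency axis
af_log = np.interp(np.log10(f), np.log10(f_pts), af_pts)

print(af_lin, af_log)  # the two methods disagree by about 5 dB here
```

If the analyzer interpolates one way and the user assumes the other, the roughly 5 dB discrepancy in this example goes straight into the reported field amplitude.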

  5. Systems Error versus Physicians' Error: Finding the Balance in Medical Education.

    ERIC Educational Resources Information Center

    Casarett, David; Helms, Charles

    1999-01-01

    When physicians ascribe errors to systemic causes, they may be less likely to modify future behaviors and more likely to repeat past errors. Academic medical centers should balance protecting patients from errors that a systems approach can identify against providing optimal education for house officers by teaching them to focus also on personal…

  6. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.

  8. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  9. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  10. Error in radiology.

    PubMed

    Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J

    2001-10-01

    The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.

  11. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.

  13. Immediate error correction process following sleep deprivation.

    PubMed

    Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling

    2007-06-01

    Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulate cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation. PMID:17542943

  14. A Review of Errors in the Journal Abstract

    ERIC Educational Resources Information Center

    Lee, Eunpyo; Kim, Eun-Kyung

    2013-01-01

    (percentage) of abstracts that involved errors, the most erroneous part of the abstract, and the types and frequency of errors. The purpose also expanded to compare the results with those of the previous…

  15. Errors associated with outpatient computerized prescribing systems

    PubMed Central

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428
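
The headline rates can be reproduced directly from the counts the abstract reports; the 95% confidence interval below is our addition (a simple normal approximation), not a figure from the study:

```python
import math

# Counts reported in the abstract
prescriptions = 3850   # computer-generated prescriptions reviewed
with_error = 452       # prescriptions containing at least one error
errors = 466           # total errors found
pot_ade = 163          # errors judged potential adverse drug events

p = with_error / prescriptions                    # 0.117 -> the "11.7%" figure
half = 1.96 * math.sqrt(p * (1 - p) / prescriptions)
print(f"error rate: {p:.1%} (95% CI {p - half:.1%} to {p + half:.1%})")
print(f"potential ADEs among errors: {pot_ade / errors:.1%}")  # the "35.0%" figure
```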

  16. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
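
One way to see why phase-angle error matters for applications like dynamic line rating: active power transferred across a line goes as the sine of the angle difference, so a fixed PMU angle error becomes a large relative error when the true angle is small. The per-unit numbers below are illustrative only, not from the report:

```python
import math

V1 = V2 = 1.0                    # per-unit bus voltage magnitudes (illustrative)
X = 0.5                          # per-unit line reactance (illustrative)
delta = math.radians(5.0)        # true angle difference across the line
angle_err = math.radians(0.5)    # assumed 0.5 degree PMU phase-angle error

# Classic lossless-line power transfer: P = V1*V2*sin(delta)/X
P_true = V1 * V2 * math.sin(delta) / X
P_meas = V1 * V2 * math.sin(delta + angle_err) / X
rel_err = (P_meas - P_true) / P_true
print(f"relative power-flow error: {rel_err:.1%}")  # roughly 10%
```

A half-degree angle error, well within the tolerance of many PMUs, produces a roughly 10% error in the inferred power flow at this operating point.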

  17. Modified TM and TE waveguide modes and reflectivity by graphene layer in coupled-graphene-metal multilayer structure in sub-terahertz frequency

    NASA Astrophysics Data System (ADS)

    Shkerdin, Gennady; Alkorre, Hameda; Stiens, Johan; Vounckx, Roger

    2015-05-01

    In this paper we focus on the dispersion characteristics of TM and TE waveguide modes and the reflectivity of plane waves incident on the four-layer structure consisting of air/graphene monolayer/dielectric buffer layer/metal substrate. The TM waveguide modes split into two branches at small frequencies; one of the branches (the cutoff waveguide branch) undergoes cutoff at a certain buffer thickness. There is no splitting of TE waveguide modes. However, these modes can be converted into short-range waves for smaller buffer thicknesses, with subsequent mode cutoff depending on frequency and graphene electron concentration. Reflection coefficients of TM-polarized electromagnetic waves incident from air on the multilayer structure vanish in the vicinity of the cutoff buffer thicknesses. It is demonstrated that waveguide mode propagation constants and reflectivity in the multilayer structure can be considerably influenced by the presence of a graphene layer in the vicinity of the cutoff thicknesses of the waveguide modes.

  18. Decoding and synchronization of error correcting codes

    NASA Astrophysics Data System (ADS)

    Madkour, S. A.

    1983-01-01

    Decoding devices for hard quantization and soft decision error correcting codes are discussed. A Meggitt decoder for Reed-Solomon polynomial codes was implemented and tested. It uses 49 TTL logic ICs. A maximum bit rate of 30 Mbit/s is demonstrated. A soft decision decoding approach was applied to hard decision decoding, using the principles of threshold decoding. Simulation results indicate that the proposed scheme achieves satisfactory performance using only a small number of parity checks. The combined correction of substitution and synchronization errors is analyzed. The algorithm presented shows the capability of convolutional codes to correct synchronization errors as well as independent additive errors without any additional redundancy.
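
The hard-versus-soft distinction the abstract draws can be demonstrated with the simplest possible code, a repetition code on an AWGN channel: deciding on summed raw channel values (soft) beats majority-voting sliced bits (hard). This is a generic illustration of the principle, not the paper's threshold decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# 5x repetition code over BPSK + AWGN (all parameters illustrative)
n_rep, n_bits, sigma = 5, 20000, 1.2
bits = rng.integers(0, 2, n_bits)
tx = np.repeat(2 * bits - 1, n_rep)              # BPSK symbols, +/-1
rx = tx + sigma * rng.normal(size=tx.size)       # noisy channel output

# Hard decision: slice each received symbol first, then majority-vote
hard = (rx > 0).astype(int).reshape(n_bits, n_rep)
hard_dec = (hard.sum(axis=1) > n_rep // 2).astype(int)

# Soft decision: sum the raw channel values, then slice once
soft_dec = (rx.reshape(n_bits, n_rep).sum(axis=1) > 0).astype(int)

hard_ber = np.mean(hard_dec != bits)
soft_ber = np.mean(soft_dec != bits)
print(f"hard BER = {hard_ber:.4f}, soft BER = {soft_ber:.4f}")
```

At this noise level the soft decoder's bit-error rate is roughly half the hard decoder's, which is why soft-decision principles were worth carrying over to hard-decision hardware.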

  19. Nonlinear amplification of side-modes in frequency combs.

    PubMed

    Probst, R A; Steinmetz, T; Wilken, T; Hundertmark, H; Stark, S P; Wong, G K L; Russell, P St J; Hänsch, T W; Holzwarth, R; Udem, Th

    2013-05-20

    We investigate how suppressed modes in frequency combs are modified upon frequency doubling and self-phase modulation. We find, both experimentally and by using a simplified model, that these side-modes are amplified relative to the principal comb modes. Whereas frequency doubling increases their relative strength by 6 dB, the growth due to self-phase modulation can be much stronger and generally increases with nonlinear propagation length. Upper limits for this effect are derived in this work. This behavior has implications for high-precision calibration of spectrographs with frequency combs used for example in astronomy. For this application, Fabry-Pérot filter cavities are used to increase the mode spacing to exceed the resolution of the spectrograph. Frequency conversion and/or spectral broadening after non-perfect filtering reamplify the suppressed modes, which can lead to calibration errors. PMID:23736390
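
The 6 dB figure for frequency doubling follows from squaring a field that carries a weak side-mode: if E = e^{iωt} + ε e^{i(ω+δ)t}, then E² contains the side-mode term 2ε e^{i(2ω+δ)t}, so its power relative to the principal mode grows by a factor of 4 (+6 dB). A numerical check (the frequencies and ε are arbitrary choices, not values from the paper):

```python
import numpy as np

fs, n = 1000.0, 1000                # 1 Hz bin spacing so the tones sit on FFT bins
t = np.arange(n) / fs
eps = 1e-3                          # weak side-mode: -60 dB relative power

E = np.exp(2j * np.pi * 100 * t) + eps * np.exp(2j * np.pi * 110 * t)
E2 = E ** 2                         # idealized second-harmonic generation

def rel_db(sig, f_main, f_side):
    """Side-mode power relative to the principal mode, in dB."""
    S = np.abs(np.fft.fft(sig)) ** 2
    f = np.fft.fftfreq(n, 1 / fs)
    bin_of = lambda f0: int(np.argmin(np.abs(f - f0)))
    return 10 * np.log10(S[bin_of(f_side)] / S[bin_of(f_main)])

before = rel_db(E, 100, 110)        # about -60 dB
after = rel_db(E2, 200, 210)        # about -54 dB
print(after - before)               # about +6.02 dB
```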

  20. Long-term spinal cord stimulation modifies canine intrinsic cardiac neuronal properties and ganglionic transmission during high-frequency repetitive activation.

    PubMed

    Smith, Frank M; Vermeulen, Michel; Cardinal, René

    2016-07-01

    Long-term spinal cord stimulation (SCS) applied to cranial thoracic SC segments exerts antiarrhythmic and cardioprotective actions in the canine heart in situ. We hypothesized that remodeling of intrinsic cardiac neuronal and synaptic properties occurs in canines subjected to long-term SCS, specifically that synaptic efficacy may be preferentially facilitated at high presynaptic nerve stimulation frequencies. Animals subjected to continuous SCS for 5-8 weeks (long-term SCS: n = 17) or for 1 h (acute SCS: n = 4) were compared with corresponding control animals (long-term: n = 15, acute: n = 4). At termination, animals were anesthetized, the heart was excised, and neurones from the right atrial ganglionated plexus were identified and studied in vitro using a standard intracellular microelectrode technique. Main findings were as follows: (1) a significant reduction in whole cell membrane input resistance and acceleration of the course of AHP decay were identified among phasic neurones from long-term SCS compared with controls, (2) synaptic transmission was significantly more resistant to rundown in long-term SCS during high-frequency (10-40 Hz) presynaptic nerve stimulation while recording from either phasic or accommodating postsynaptic neurones; this was associated with significantly greater posttrain excitatory postsynaptic potential (EPSP) numbers in long-term SCS than control, and (3) synaptic efficacy was significantly decreased by atropine in both groups. Such changes did not occur in acute SCS. In conclusion, modification of intrinsic cardiac neuronal properties and facilitation of synaptic transmission at high stimulation frequency in long-term SCS could improve physiologically modulated vagal inputs to the heart. PMID:27401459

  1. Field error lottery

    NASA Astrophysics Data System (ADS)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.
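
The seed-dependence described here — displaying performance versus error level for several random seeds — can be mimicked with a toy model. The functional form below is invented purely to show the bookkeeping of a multi-seed error-budget sweep; it is not FELEX physics:

```python
import numpy as np

def relative_gain(error_level, seed):
    """Toy stand-in for an FEL performance run with random field errors."""
    rng = np.random.default_rng(seed)
    walk = rng.normal(0, error_level, 50).cumsum()   # random field-error walk
    return max(0.0, 1.0 - np.mean(walk ** 2))        # hypothetical degradation law

levels = [0.0, 0.01, 0.02, 0.05]
for lvl in levels:
    gains = [relative_gain(lvl, seed) for seed in range(8)]
    print(f"error={lvl:.2f}  gain={np.mean(gains):.3f} +/- {np.std(gains):.3f}")
```

The spread across seeds at each error level is exactly the stochastic variation the abstract cautions about: a single-seed run can land anywhere in that band.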

  2. Field error lottery

    NASA Astrophysics Data System (ADS)

    Elliott, C. James; McVey, Brian D.; Quimby, David C.

    1990-11-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement, and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time.

  3. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  4. A Frequency and Error Analysis of the Use of Determiners, the Relationships between Noun Phrases, and the Structure of Discourse in English Essays by Native English Writers and Native Chinese, Taiwanese, and Korean Learners of English as a Second Language

    ERIC Educational Resources Information Center

    Gressang, Jane E.

    2010-01-01

    Second language (L2) learners notoriously have trouble using articles in their target languages (e.g., "a", "an", "the" in English). However, researchers disagree about the patterns and causes of these errors. Past studies have found that L2 English learners: (1) Predominantly omit articles (White 2003, Robertson 2000), (2) Overuse "the" (Huebner…

  5. Accepting error to make less error.

    PubMed

    Einhorn, H J

    1986-01-01

    In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random error and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts error as inevitable and in so doing makes less error in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the errors that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. Each approach represents a gamble with corresponding risks and benefits.
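
The "equal weighting in linear models" point is the classic result that unit weights often predict nearly as well as fitted regression weights when criteria are noisy. A toy simulation of that comparison (all numbers invented, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# Four noisy predictors of a criterion, with modest true weights
n, p = 200, 4
X = rng.normal(size=(n, p))
true_w = np.array([0.9, 0.7, 0.5, 0.3])
y = X @ true_w + rng.normal(scale=2.0, size=n)

# Fitted (least-squares) weights versus equal unit weights
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r_ols = np.corrcoef(X @ w_ols, y)[0, 1]
r_equal = np.corrcoef(X.sum(axis=1), y)[0, 1]
print(r_ols, r_equal)   # the two validity correlations are close
```

The fitted model wins by construction in-sample, but only by a small margin; accepting the error of a crude equal-weight rule costs little here, which is the statistical side of the article's argument.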

  6. Reduced discretization error in HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm2 exposed to both solar particle event and galactic cosmic ray environments.
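
The notion of discretization error shrinking with step size can be illustrated on a much simpler problem than HZETRN's transport equations: a first-order attenuation ODE solved with forward Euler, compared against its exact solution as the step is halved. The equation and parameters are a generic stand-in, not HZETRN's formalism:

```python
import math

# dphi/dx = -sigma * phi, with exact solution phi0 * exp(-sigma * x)
sigma, x_end, phi0 = 0.5, 10.0, 1.0
exact = phi0 * math.exp(-sigma * x_end)

errs = []
for h in [1.0, 0.5, 0.25, 0.125]:
    phi = phi0
    for _ in range(int(x_end / h)):
        phi += -sigma * phi * h          # forward Euler step
    errs.append(abs(phi - exact) / exact)
    print(f"h={h:5.3f}  relative error = {errs[-1]:.1%}")
```

The error falls roughly in proportion to h (first-order convergence); the HZETRN work addresses the analogous regime where particle ranges are smaller than the physical step, so refining the formalism rather than just the step is what removes the residual error.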

  7. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm2 exposed to both solar particle event and galactic cosmic ray environments.

  8. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  9. Drug Errors in Anaesthesiology

    PubMed Central

    Jain, Rajnish Kumar; Katiyar, Sarika

    2009-01-01

    Summary Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden to health care systems apart from the patient losses. Common causes of these errors and their prevention is discussed. PMID:20640103

  10. [The influence of low-frequency pulsed electric and magnetic signals or their combination on the normal and modified fibroblasts (an experimental study)].

    PubMed

    Ulitko, M V; Medvedeva, S Yu; Malakhov, V V

    2016-01-01

    The results of clinical studies give evidence of the beneficial preventive and therapeutic effects of the «Tiline-EM» physiotherapeutic device designed for the combined specific treatment of the skin regions onto which both discomfort and pain sensations are directly projected, reflectively active sites and zones, as well as trigger zones with the use of low-frequency pulsed electric current and magnetic field. The efficient application of the device requires the understanding of the general mechanisms underlying such action on the living systems including those operating at the cellular and subcellular levels. The objective of the present study was the investigation of the specific and complex effects produced by the low-frequency pulses of electric current and magnetic field generated in the physiotherapeutic device «Tiline-EM» on the viability, proliferative activity, and morphofunctional characteristics of normal skin fibroblasts and the transformed fibroblast line K-22. It has been demonstrated that the biological effects of the electric and magnetic signals vary depending on the type of the cell culture and the mode of impact. The transformed fibroblasts proved to be more sensitive to the specific and complex effects of electric and magnetic pulses than the normal skin fibroblasts. The combined action of the electric and magnetic signals was shown to have the greatest influence on both varieties of fibroblasts. It manifests itself in the form of enhanced viability, elevated proliferative and synthetic activity in the cultures of transformed fibroblasts and as the acceleration of cell differentiation in the cultures of normal fibroblasts. The effect of stimulation of dermal fibroblast differentiation in response to the combined treatment by the electric and magnetic signals is of interest from the standpoint of the physiotherapeutic use of the «Tiline-EM» device for the purpose of obtaining fibroblasts cultures to be employed in regenerative therapy and

  12. Does isoniazid chemoprophylaxis increase the frequency of hepatotoxicity in patients receiving anti-TNF-α agent with a disease-modifying antirheumatic drug?

    PubMed Central

    Cansu, Döndü Üsküdar; Güncan, Sabri; Bilge, N. Şule Yaşar; Kaşifoğlu, Timuçin; Korkmaz, Cengiz

    2014-01-01

    Objective The aim of this study is to determine the incidence of isoniazid (INH)-related hepatotoxicity in patients with rheumatologic diseases receiving tumor necrosis factor-α (TNF-α) antagonists along with a disease-modifying antirheumatic drug (DMARD). Material and Methods We retrospectively evaluated 87 patients receiving anti-TNF-α agents who were followed up between June 2005 and February 2010 at our rheumatology department. Sixty-one of the 87 patients received INH prophylaxis for 9 months for latent tuberculosis infection. Results A total of 61 (70.1%) of the 87 patients used INH prophylaxis (Group I), while the remaining 26 (29.9%) (Group II) did not; 53 patients in Group I and 21 patients in Group II had used a DMARD. No significant differences were found between Groups I and II with respect to clinical features. When the two groups were compared, elevations of liver enzymes were detected in five patients (8.1%) in Group I who had normal baseline values; among these, hepatotoxicity developed in two patients. Hepatotoxicity developed in one patient in Group II (p=0.85). Conclusion INH chemoprophylaxis was well tolerated in patients using an anti-TNF-α agent and a DMARD, and it does not appear to be a strong risk factor for hepatotoxicity. However, comorbidities and other drugs used may be additional factors in the elevation of transaminases.

  13. Medication errors: definitions and classification.

    PubMed

    Aronson, Jeffrey K

    2009-06-01

    1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.

  14. Medication errors: definitions and classification

    PubMed Central

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  15. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate the addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationships among a small number of underlying factors, information processing mechanisms, and the error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  16. [Medical errors in obstetrics].

    PubMed

    Marek, Z

    1984-08-01

    Errors in medicine may fall into 3 main categories: 1) medical errors made only by physicians, 2) technical errors made by physicians and other health care specialists, and 3) organizational errors associated with mismanagement of medical facilities. This classification of medical errors, as well as the definition and treatment of them, fully applies to obstetrics. However, the difference between obstetrics and other fields of medicine stems from the fact that an obstetrician usually deals with healthy women. Conversely, professional risk in obstetrics is very high, as errors and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical errors occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. Therefore, the obstetrician should be prepared to use intensive care procedures similar to those used for resuscitation.

  17. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  18. Performance Errors in Weight Training and Their Correction.

    ERIC Educational Resources Information Center

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent- over and seated row…

  19. New Modified Band Limited Impedance (BLIMP) Inversion Method Using Envelope Attribute

    NASA Astrophysics Data System (ADS)

    Maulana, Z. L.; Saputro, O. D.; Latief, F. D. E.

    2016-01-01

    The Earth attenuates the high frequencies of the seismic wavelet, and the low frequencies cannot be recorded by low-quality geophones. The low frequencies (0-10 Hz) that are not present in seismic data are important for obtaining a good result in acoustic impedance (AI) inversion. AI is important for determining reservoir quality, since it can be converted to reservoir properties such as porosity, permeability, and water saturation. The low frequencies can be supplied from impedance logs (AI logs), from velocity analysis, or from the combination of both. In this study, we propose that the low frequencies can be obtained from the envelope seismic attribute. This new proposed method is essentially a modified BLIMP (Band Limited Impedance) inversion method, in which the AI logs used in BLIMP are substituted with the envelope attribute. In the low frequency domain (0-10 Hz), the envelope attribute produces high amplitude. This low-frequency content of the envelope attribute is used to replace the low frequencies from the AI logs in BLIMP; the linear trend in this method is still acquired from the AI logs. In this study, the method is applied to synthetic seismograms created from the impedance log of well ‘X’. The mean squared error from the modified BLIMP inversion is 2-4% for each trace (the variation in error is caused by different normalization constants), lower than that of the conventional BLIMP inversion, which produces an error of 8%. The new method is also applied to the Marmousi2 dataset and shows promising results. The modified BLIMP inversion result from Marmousi2 using one AI log is better than the one produced by the conventional method.
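The substitution described above can be sketched roughly in code: take the trace envelope (magnitude of the Hilbert analytic signal), low-pass it to recover the 0-10 Hz band, and splice that band into a band-limited relative impedance derived from the trace. This is a minimal illustration, not the authors' implementation; the running-sum integration, filter design, and the `blimp_like_inversion` helper are simplifying assumptions, and a real BLIMP workflow would also add the linear trend from the AI logs.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert


def lowfreq_from_envelope(trace, fs, fcut=10.0):
    # Envelope = magnitude of the analytic signal (Hilbert transform).
    env = np.abs(hilbert(trace))
    # Keep only the 0-fcut Hz band of the envelope.
    b, a = butter(4, fcut / (fs / 2.0), btype="low")
    return filtfilt(b, a, env)


def blimp_like_inversion(trace, fs, fcut=10.0):
    # Crude relative impedance: running sum of the reflectivity-like trace.
    rel_imp = np.cumsum(trace) / fs
    # Remove the unreliable low band of the seismic-derived impedance...
    b, a = butter(4, fcut / (fs / 2.0), btype="high")
    band_limited = filtfilt(b, a, rel_imp)
    # ...and splice in the low band taken from the envelope attribute.
    return band_limited + lowfreq_from_envelope(trace, fs, fcut)
```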

  20. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  1. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  2. External laser frequency stabilizer

    SciTech Connect

    Hall, J.L.; Hansch, T.W.

    1987-10-13

    A frequency transducer for controlling or modulating the frequency of a light radiation system is described comprising: a source of radiation having a predetermined frequency; an electro-optic phase modulator for receiving the radiation and for changing the phase of the radiation in proportion to an applied error voltage; an acousto-optic modulator coupled to the electro-optic modulator for shifting the frequency of the output signal of the electro-optic modulator; a signal source for providing an error voltage representing undesirable fluctuations in the frequency of the light radiation; a first channel including a fast integrator coupled between the signal source and the input circuit of the electro-optic modulator; a second channel including a voltage controlled oscillator coupled between the signal source and the acousto-optic modulator; and a network including an electronic delay circuit coupled between the first and second channels for matching the delay of the acousto-optic modulator.

  3. CORRELATED ERRORS IN EARTH POINTING MISSIONS

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve; Patt, Frederick S.

    2005-01-01

    Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. Some error effects will not be obvious from attitude sensor

  4. 21 CFR 17.48 - Harmless error.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Harmless error. 17.48 Section 17.48 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL CIVIL MONEY PENALTIES... of the parties is grounds for vacating, modifying, or otherwise disturbing an otherwise...

  5. 21 CFR 17.48 - Harmless error.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Harmless error. 17.48 Section 17.48 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL CIVIL MONEY PENALTIES... of the parties is grounds for vacating, modifying, or otherwise disturbing an otherwise...

  6. 21 CFR 17.48 - Harmless error.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Harmless error. 17.48 Section 17.48 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL CIVIL MONEY PENALTIES... of the parties is grounds for vacating, modifying, or otherwise disturbing an otherwise...

  7. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  8. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
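The pattern described in this record, running a deterministic stress workload and comparing its output against a known-good result, can be sketched in software. This is a hypothetical illustration, not the patented method: the kernel, its constants, and the function names are invented, and a real implementation would choose workloads tuned to heat and stress specific hardware units.

```python
import hashlib


def stress_kernel(n_iters=100_000, seed=12345):
    """Deterministic, CPU-intensive kernel. Any hardware fault that flips
    a bit at any point during the run perturbs the final digest."""
    x = seed
    h = hashlib.sha256()
    for _ in range(n_iters):
        # 64-bit linear congruential step: cheap, deterministic arithmetic.
        x = (x * 6364136223846793005 + 1442695040888963407) % (1 << 64)
        h.update(x.to_bytes(8, "little"))
    return h.hexdigest()


def detect_hardware_error(reference_digest, n_iters=100_000):
    """Re-run the kernel and compare against a known-good digest computed
    on trusted hardware; a mismatch flags an error during the run."""
    return stress_kernel(n_iters=n_iters) != reference_digest
```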

  9. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
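The contrast between the two assumptions can be made concrete with delta-rule weight updates. A minimal sketch, assuming binary cue-presence vectors and a scalar outcome: `ter_update` follows the familiar Rescorla-Wagner (total error) form, while `ler_update` is a simple reading of the local-error idea, not the authors' exact model.

```python
import numpy as np


def ter_update(w, x, outcome, alpha=0.1):
    # Total error: outcome minus the summed prediction of all present cues;
    # every present cue learns from this one shared discrepancy.
    error = outcome - np.dot(w, x)
    return w + alpha * error * x


def ler_update(w, x, outcome, alpha=0.1):
    # Local error: each present cue compares the outcome with its own
    # individual prediction (elementwise, per-cue discrepancy).
    return w + alpha * (outcome - w * x) * x
```

With two cues always trained in compound with a unit outcome, the TER weights end up sharing the prediction (summing to 1), whereas LER drives each cue's weight to the full outcome value.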

  10. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principle design agent as is increasingly more common for poorly testable high performance space systems.

  11. Frequency-Offset Cartesian Feedback Based on Polyphase Difference Amplifiers

    PubMed Central

    Zanchi, Marta G.; Pauly, John M.; Scott, Greig C.

    2010-01-01

    A modified Cartesian feedback method called “frequency-offset Cartesian feedback” and based on polyphase difference amplifiers is described that significantly reduces the problems associated with quadrature errors and DC-offsets in classic Cartesian feedback power amplifier control systems. In this method, the reference input and feedback signals are down-converted and compared at a low intermediate frequency (IF) instead of at DC. The polyphase difference amplifiers create a complex control bandwidth centered at this low IF, which is typically offset from DC by 200–1500 kHz. Consequently, the loop gain peak does not overlap DC where voltage offsets, drift, and local oscillator leakage create errors. Moreover, quadrature mismatch errors are significantly attenuated in the control bandwidth. Since the polyphase amplifiers selectively amplify the complex signals characterized by a +90° phase relationship representing positive frequency signals, the control system operates somewhat like single sideband (SSB) modulation. However, the approach still allows the same modulation bandwidth control as classic Cartesian feedback. In this paper, the behavior of the polyphase difference amplifier is described through both the results of simulations, based on a theoretical analysis of their architecture, and experiments. We then describe our first printed circuit board prototype of a frequency-offset Cartesian feedback transmitter and its performance in open and closed loop configuration. This approach should be especially useful in magnetic resonance imaging transmit array systems. PMID:20814450

  12. Preventing errors in laterality.

    PubMed

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2015-04-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left, or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in separate colors. This allows the radiologist to correlate all detected laterality terms of the report with the images open in PACS and correct them before the report is finalized. The system logs every instance in which an error in laterality is detected. The system detected 32 errors in laterality over a 7-month period (rate of 0.0007 %), with CT having the highest error detection rate of all modalities. Significantly more errors were detected in male patients compared with female patients. In conclusion, our study demonstrated that with our system, laterality errors can be detected and corrected prior to finalizing reports.
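The highlighting step described above amounts to locating laterality terms and their positions in the report text. A minimal sketch under obvious assumptions: the term list and function name are hypothetical, and a production system would also handle abbreviations, negation, and report-section context.

```python
import re

# Hypothetical term list; real reports also use abbreviations (e.g. "R", "L")
# and compound forms that this toy pattern does not cover.
LATERALITY = re.compile(r"\b(left|right|bilateral)\b", re.IGNORECASE)


def extract_laterality_terms(report_text):
    """Return each laterality term with its character offset so a viewer
    can highlight it for side-by-side comparison with the PACS images."""
    return [(m.group(0), m.start()) for m in LATERALITY.finditer(report_text)]
```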

  13. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  14. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  15. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-01

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  16. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  17. Learned predictions of error likelihood in the anterior cingulate cortex.

    PubMed

    Brown, Joshua W; Braver, Todd S

    2005-02-18

    The anterior cingulate cortex (ACC) and the related medial wall play a critical role in recruiting cognitive control. Although ACC exhibits selective error and conflict responses, it has been unclear how these develop and become context-specific. With use of a modified stop-signal task, we show from integrated computational neural modeling and neuroimaging studies that ACC learns to predict error likelihood in a given context, even for trials in which there is no error or response conflict. These results support a more general error-likelihood theory of ACC function based on reinforcement learning, of which conflict and error detection are special cases.

  18. Asymmetric error field interaction with rotating conducting walls

    SciTech Connect

    Paz-Soldan, C.; Brookhart, M. I.; Hegna, C. C.; Forest, C. B.

    2012-07-15

    The interaction of error fields with a system of differentially rotating conducting walls is studied analytically and compared to experimental data. Wall rotation causes eddy currents to persist indefinitely, attenuating and rotating the original error field. Superposition of error fields from external coils and plasma currents are found to break the symmetry in wall rotation direction. The vacuum and plasma eigenmodes are modified by wall rotation, with the error field penetration time decreased and the kink instability stabilized, respectively. Wall rotation is also predicted to reduce error field amplification by the marginally stable plasma.

  19. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique to achieve the coarse frequency estimation (locating the peak of the FFT amplitude) is more efficient than conventional searching methods. Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the experimental data increase. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
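A generic coarse-plus-fine estimator in the same spirit (not the authors' exact algorithm) can be sketched as follows: a coarse estimate comes from the FFT amplitude peak, and a zero-crossing count over the record refines it. The function name and the fallback logic are illustrative assumptions.

```python
import numpy as np


def estimate_frequency(signal, fs):
    """Coarse-to-fine frequency estimate for a dominant sinusoid sampled
    at rate fs. Assumes the fundamental dominates the record."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0                       # ignore the DC bin
    f_coarse = np.argmax(spectrum) * fs / n  # coarse: FFT peak location
    # Fine step: average zero-crossing spacing of the mean-removed signal.
    x = signal - signal.mean()
    s = np.signbit(x)
    crossings = np.nonzero(s[:-1] != s[1:])[0]
    if len(crossings) < 2:
        return f_coarse                      # too few crossings: keep coarse
    span = (crossings[-1] - crossings[0]) / fs   # seconds, first to last
    # Consecutive crossings of a sinusoid are half a period apart.
    return (len(crossings) - 1) / (2.0 * span)
```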

  20. Frequency synchronization of a frequency-hopped MFSK communication system

    NASA Technical Reports Server (NTRS)

    Huth, G. K.; Polydoros, A.; Simon, M. K.

    1981-01-01

    This paper presents the performance of fine-frequency synchronization. The performance degradation due to imperfect frequency synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of frequency hops used in the estimator. The effect of imperfect fine-time synchronization is also included in the calculation of fine-frequency synchronization performance to obtain the overall performance degradation due to synchronization errors.

  1. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

    One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.

  2. Modified cyanobacteria

    DOEpatents

    Vermaas, Willem F J.

    2014-06-17

    Disclosed is a modified photoautotrophic bacterium comprising genes of interest that are modified in terms of their expression and/or coding region sequence, wherein modification of the genes of interest increases production of a desired product in the bacterium relative to the amount of the desired product production in a photoautotrophic bacterium that is not modified with respect to the genes of interest.

  3. Errors in neuroradiology.

    PubMed

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20% of reports. Fortunately, most of them are minor errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and early in the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcifications and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review a diagnostic study after interpreting it, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.

  4. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  5. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error: a mistake by a programmer or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross-checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect customer service or profitability.

  6. Oral Reading Errors of Average and Superior Reading Ability Children.

    ERIC Educational Resources Information Center

    Geoffrion, Leo David

    Oral reading samples were gathered from a group of twenty normal boys from the fourth through sixth grades. All reading errors were coded and classified using a modified version of the taxonomies of Goodman and Burke. Through cluster analysis two distinct error patterns were found. One group consisted of students whose performance was limited…

  7. Reply to 'Comment on 'Quantum convolutional error-correcting codes''

    SciTech Connect

    Chau, H.F.

    2005-08-15

    In their Comment, de Almeida and Palazzo [Phys. Rev. A 72, 026301 (2005)] discovered an error in my earlier paper concerning the construction of quantum convolutional codes [Phys. Rev. A 58, 905 (1998)]. This error can be repaired by modifying the method of code construction.

  8. Error Detection Processes during Observational Learning

    ERIC Educational Resources Information Center

    Badets, Arnaud; Blandin, Yannick; Wright, David L.; Shea, Charles H.

    2006-01-01

    The purpose of this experiment was to determine whether a faded knowledge of results (KR) frequency during observation of a model's performance enhanced error detection capabilities. During the observation phase, participants observed a model performing a timing task and received KR about the model's performance on each trial or on one of two…

  9. Laser frequency offset synthesizer

    NASA Astrophysics Data System (ADS)

    Lewis, D. A.; Evans, R. M.; Finn, M. A.

    1985-01-01

    A method is reported for locking the frequency difference of two lasers with an accuracy of 0.5 kHz or less over a one-second interval which is simple, stable, and relatively free from systematic errors. Two 633 nm He-Ne lasers are used, one with a fixed frequency and the other tunable. The beat frequency between the lasers is controlled by a voltage applied to a piezoelectric device which varies the cavity length of the tunable laser. This variable beat frequency, scaled by a computer-controlled modulus, is equivalent to a synthesizer. This approach eliminates the need for a separate external frequency synthesizer; furthermore, the phase detection process occurs at a relatively low frequency, making the required electronics simple and straightforward.

  10. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  11. Estimation and Extraction of Radar Signal Features Using Modified B Distribution and Particle Filters

    NASA Astrophysics Data System (ADS)

    Mikluc, Davorin; Bujaković, Dimitrije; Andrić, Milenko; Simić, Slobodan

    2016-09-01

    The research analyses the application of particle filters in estimating and extracting the features of radar signal time-frequency energy distribution. The time-frequency representation is calculated using the modified B distribution, where the estimation process model represents one time bin. An adaptive criterion for the calculation of particle weighting coefficients is proposed, whose main parameters are the frequency integral squared error and the estimated maximum of the mean power spectral density per time bin. The analysis of the suggested estimation approach has been performed on a generated signal in the absence of noise, and subsequently on modelled and on recorded real radar signals. The advantage of the suggested method is that it resolves the problem of interrupted instantaneous-frequency estimates, which arises when these estimates are determined from the maximum of the energy distribution, as in the case of intersecting frequency components in a multicomponent signal.

  12. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  13. Software errors and complexity: An empirical investigation

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Perricone, Berry T.

    1983-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  14. Alcohol and error processing.

    PubMed

    Holroyd, Clay B; Yeung, Nick

    2003-08-01

    A recent study indicates that alcohol consumption reduces the amplitude of the error-related negativity (ERN), a negative deflection in the electroencephalogram associated with error commission. Here, we explore possible mechanisms underlying this result in the context of two recent theories about the neural system that produces the ERN - one based on principles of reinforcement learning and the other based on response conflict monitoring.

  15. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  16. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
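
    Under the usual assumption of independent Gaussian errors, combining several position fixes "in a mathematically optimal way" amounts to inverse-covariance (minimum-variance) weighting. The sketch below illustrates that idea only; it is not the paper's algorithm, and all values are hypothetical.

```python
import numpy as np

def combine_fixes(positions, covariances):
    """Fuse independent 2-D position estimates by inverse-covariance
    (minimum-variance) weighting: a sketch of the idea under independent
    Gaussian errors, not the paper's algorithm."""
    info = np.zeros((2, 2))     # accumulated information matrix
    info_pos = np.zeros(2)      # accumulated information-weighted position
    for p, C in zip(positions, covariances):
        W = np.linalg.inv(C)    # information contributed by this fix
        info += W
        info_pos += W @ p
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_pos, fused_cov

# usage: two fixes of the same emitter with different accuracies
p1, C1 = np.array([10.0, 5.0]), np.diag([4.0, 4.0])
p2, C2 = np.array([12.0, 6.0]), np.diag([1.0, 1.0])
p_hat, C_hat = combine_fixes([p1, p2], [C1, C2])
```

    The fused covariance is always smaller than that of the best individual fix, which is why accumulating fixes reduces location error.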

  17. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  18. Gear Transmission Error Measurement System Made Operational

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2002-01-01

    A system directly measuring the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 μm (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.

  19. Surface errors in the course of machining precision optics

    NASA Astrophysics Data System (ADS)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps of increasing accuracy. Spherical surfaces are finished in a last step with large tools that smooth the surface. The required accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface, and so-called mid-spatial-frequency errors (MSFE) can accumulate with such zonal processes. This work addresses the formation of surface errors from grinding to polishing by analysing the surfaces after each machining step with non-contact interferometric methods. The errors on the surface can be classified as described in DIN 4760, whereby 2nd- to 3rd-order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed so that only defined spatial frequencies are shown in the surface plot. It can be observed that some frequencies may already be formed in the early machining steps such as grinding and main polishing. Additionally, it is known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technologies, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools whose lateral size is below the wavelength of the error structure. The presented considerations may be used to develop proposals for handling surface errors.
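
    The spatial-frequency filtering of measured surface data described above can be sketched as a simple FFT band selection on a 1-D profile. The function name, band limits, and profile composition below are hypothetical illustrations, not the paper's procedure.

```python
import numpy as np

def band_select(profile, dx, f_lo, f_hi):
    """Keep only spatial frequencies in [f_lo, f_hi) cycles/mm of a
    1-D surface profile sampled every dx mm."""
    spec = np.fft.rfft(profile)
    freqs = np.fft.rfftfreq(len(profile), d=dx)
    spec[(freqs < f_lo) | (freqs >= f_hi)] = 0.0   # suppress out-of-band bins
    return np.fft.irfft(spec, n=len(profile))

# usage: a profile made of low-frequency form error plus mid-frequency ripple
n, dx = 1000, 0.1                              # 100 mm trace, 0.1 mm sampling
x = np.arange(n) * dx
form = 5.0 * np.sin(2 * np.pi * 0.02 * x)      # 0.02 cycles/mm form error
ripple = 0.2 * np.sin(2 * np.pi * 1.0 * x)     # 1 cycle/mm mid-frequency ripple
msfe = band_select(form + ripple, dx, f_lo=0.5, f_hi=2.0)
```

    With the band chosen this way, the filtered plot shows only the ripple component, which is the kind of isolation used to attribute MSFE to individual machining steps.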

  20. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  1. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  2. Error monitoring in musicians.

    PubMed

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  3. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  4. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
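
    The paper provides code for Stata and LIMDEP; as a hedged illustration in Python, two of the three methods it discusses, the delta method and a nonparametric bootstrap, can be compared on a simple function of an estimated parameter, g(mu_hat) = exp(mu_hat). The sample and seed below are hypothetical.

```python
import numpy as np

# a hypothetical sample and a nonlinear function of an estimated parameter
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=0.5, size=500)

mu_hat = x.mean()                          # estimated parameter
se_mu = x.std(ddof=1) / np.sqrt(len(x))    # its standard error

# delta method: Var[g(mu_hat)] is approximated by g'(mu_hat)^2 * Var[mu_hat],
# so for g(mu) = exp(mu) the standard error is exp(mu_hat) * se_mu
se_delta = np.exp(mu_hat) * se_mu

# nonparametric bootstrap of the same quantity: resample, re-estimate,
# and take the standard deviation of the bootstrap replicates
boot = np.array([
    np.exp(rng.choice(x, size=len(x), replace=True).mean())
    for _ in range(2000)
])
se_boot = boot.std(ddof=1)
```

    For a smooth function and a moderate sample like this, the two estimates agree closely; as the paper notes, the choice between methods is often a matter of convenience.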

  5. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
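
    The burst and gap statistics named in the first objective are run lengths in a per-byte error-flag stream. The sketch below computes them with a simple run-length pass; it is an illustration of the statistic only, since the real system measures a photoCD read channel in hardware.

```python
import numpy as np

def burst_gap_stats(error_flags):
    """Lengths of error bursts (runs of errored bytes) and good-data gaps
    (runs of error-free bytes) from a per-byte error-flag stream."""
    flags = np.asarray(error_flags, dtype=bool)
    # positions where the flag value changes start a new run
    change = np.flatnonzero(flags[1:] != flags[:-1]) + 1
    starts = np.concatenate(([0], change))
    lengths = np.diff(np.concatenate((starts, [len(flags)])))
    run_is_error = flags[starts]
    return lengths[run_is_error], lengths[~run_is_error]

# usage: 1 = byte in error, 0 = good byte
bursts, gaps = burst_gap_stats([0, 0, 1, 1, 1, 0, 1, 0, 0, 0])
```

    Histograms of these two run-length distributions are exactly the kind of input the project's output-error-rate model would consume.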

  6. Filter induced errors in laser anemometer measurements using counter processors

    NASA Technical Reports Server (NTRS)

    Oberle, L. G.; Seasholtz, R. G.

    1985-01-01

    Simulations of laser Doppler anemometer (LDA) systems have focused primarily on noise studies or biasing errors. Another possible source of error is the choice of filter types and filter cutoff frequencies. Before it is applied to the counter portion of the signal processor, a Doppler burst is filtered to remove the pedestal and to reduce noise in the frequency bands outside the region in which the signal occurs. Filtering, however, introduces errors into the measurement of the frequency of the input signal, which leads to inaccurate results. Errors caused by signal filtering in an LDA counter-processor data acquisition system are evaluated, and filters that reduce these errors are chosen for a specific application.

  7. Dialogues on prediction errors.

    PubMed

    Niv, Yael; Schoenbaum, Geoffrey

    2008-07-01

    The recognition that computational ideas from reinforcement learning are relevant to the study of neural circuits has taken the cognitive neuroscience community by storm. A central tenet of these models is that discrepancies between actual and expected outcomes can be used for learning. Neural correlates of such prediction-error signals have been observed now in midbrain dopaminergic neurons, striatum, amygdala and even prefrontal cortex, and models incorporating prediction errors have been invoked to explain complex phenomena such as the transition from goal-directed to habitual behavior. Yet, like any revolution, the fast-paced progress has left an uneven understanding in its wake. Here, we provide answers to ten simple questions about prediction errors, with the aim of exposing both the strengths and the limitations of this active area of neuroscience research.
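
    The central quantity in these models is the temporal-difference prediction error, delta = r + gamma * V(s') - V(s). The toy state values below are hypothetical, chosen only to show the sign and use of the signal.

```python
# Minimal temporal-difference prediction error, the quantity these models
# associate with dopaminergic signals. States and values are hypothetical.
gamma = 0.9                              # discount factor
V = {"cue": 0.5, "reward_state": 0.0}    # current value estimates

def td_error(reward, state, next_state):
    """delta = r + gamma * V(s') - V(s)"""
    return reward + gamma * V[next_state] - V[state]

# an unexpectedly large reward produces a positive prediction error
delta = td_error(reward=1.0, state="cue", next_state="reward_state")

# a simple learning update then moves V toward the observed outcome
alpha = 0.1
V["cue"] += alpha * delta
```

    A positive delta signals "better than expected" and increases the value of the predicting state; a negative delta signals the reverse, which is the learning mechanism the abstract refers to.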

  8. Diagnostic Errors and Laboratory Medicine – Causes and Strategies

    PubMed Central

    2015-01-01

    While the frequency of laboratory errors varies greatly, depending on the study design and steps of the total testing process (TTP) investigated, a series of papers published in the last two decades drew the attention of laboratory professionals to the pre- and post-analytical phases, which currently appear to be more vulnerable to errors than the analytical phase. In particular, a high frequency of errors and risk of errors that could harm patients has been described in both the pre-pre- and post-post-analytical steps of the cycle that usually are not under the laboratory control. In 2008, the release of a Technical Specification (ISO/TS 22367) by the International Organization for Standardization played a key role in collecting the evidence and changing the perspective on laboratory errors, emphasizing the need for a patient-centred approach to errors in laboratory testing.

  9. Diagnostic Errors and Laboratory Medicine - Causes and Strategies.

    PubMed

    Plebani, Mario

    2015-01-01

    While the frequency of laboratory errors varies greatly, depending on the study design and steps of the total testing process (TTP) investigated, a series of papers published in the last two decades drew the attention of laboratory professionals to the pre- and post-analytical phases, which currently appear to be more vulnerable to errors than the analytical phase. In particular, a high frequency of errors and risk of errors that could harm patients has been described in both the pre-pre- and post-post-analytical steps of the cycle that usually are not under the laboratory control. In 2008, the release of a Technical Specification (ISO/TS 22367) by the International Organization for Standardization played a key role in collecting the evidence and changing the perspective on laboratory errors, emphasizing the need for a patient-centred approach to errors in laboratory testing.

  10. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a technique for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
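    The pair-frequency table described above can be sketched as follows (a minimal illustration, assuming 8-bit data and a hypothetical per-pair error function; not the authors' actual implementation):

```python
import numpy as np

def approximation_error(original, approx, err_fn):
    """Total error between a voxel array and its approximation.

    Instead of evaluating err_fn once per voxel, build a 256x256
    histogram of (original, approx) byte pairs and evaluate err_fn
    only for the unique pairs that actually occur, weighted by
    their frequency.
    """
    original = np.asarray(original, dtype=np.uint8).ravel()
    approx = np.asarray(approx, dtype=np.uint8).ravel()
    counts = np.zeros((256, 256), dtype=np.int64)
    np.add.at(counts, (original, approx), 1)    # pair-frequency table
    total = 0.0
    for o, a in zip(*np.nonzero(counts)):       # unique pairs only
        total += counts[o, a] * err_fn(int(o), int(a))
    return total

# Example: squared-difference error for a noisy approximation.
rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=100_000, dtype=np.uint8)
appr = np.clip(orig.astype(int) + rng.integers(-2, 3, size=orig.size),
               0, 255).astype(np.uint8)
err = approximation_error(orig, appr, lambda o, a: (o - a) ** 2)
```

    When the transfer function changes, only the cheap per-unique-pair evaluation of the error function has to be redone; the frequency table itself stays fixed for a given approximation.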

  11. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  12. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-

  13. Optical linear algebra processors: noise and error-source modeling.

    PubMed

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  14. Parental Reports of Children's Scale Errors in Everyday Life

    ERIC Educational Resources Information Center

    Rosengren, Karl S.; Gutierrez, Isabel T.; Anderson, Kathy N.; Schein, Stevie S.

    2009-01-01

    Scale errors refer to behaviors where young children attempt to perform an action on an object that is too small to effectively accommodate the behavior. The goal of this study was to examine the frequency and characteristics of scale errors in everyday life. To do so, the researchers collected parental reports of children's (age range = 13-21…

  15. Optical linear algebra processors - Noise and error-source modeling

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  16. Speech Errors in Progressive Non-Fluent Aphasia

    ERIC Educational Resources Information Center

    Ash, Sharon; McMillan, Corey; Gunawardena, Delani; Avants, Brian; Morgan, Brianna; Khan, Alea; Moore, Peachie; Gee, James; Grossman, Murray

    2010-01-01

    The nature and frequency of speech production errors in neurodegenerative disease have not previously been precisely quantified. In the present study, 16 patients with a progressive form of non-fluent aphasia (PNFA) were asked to tell a story from a wordless children's picture book. Errors in production were classified as either phonemic,…

  17. [Medical device use errors].

    PubMed

    Friesdorf, Wolfgang; Marsolek, Ingo

    2008-01-01

    Medical devices define our everyday patient treatment processes. But despite their beneficial effects, every use can also lead to harm. Use errors are thus often explained by human failure. But human errors can never be completely eliminated, especially in work processes as complex as those in medicine, which often involve time pressure. Therefore we need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we realise that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures. Thus human failure is pre-programmed. In addition, many medical work places represent a somewhat chaotic accumulation of individual devices with totally different user interaction concepts. There is a lack not only of holistic work place concepts, but of holistic process and system concepts as well. However, this can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model--a key to a safe and efficient healthcare system of the future. PMID:19213452

  18. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  19. Help prevent hospital errors

    MedlinePlus


  20. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH 60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH 60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.

  1. Elimination of error factors, affecting EM and seismic inversions

    NASA Astrophysics Data System (ADS)

    Magomedov, M.; Zuev, M. A.; Korneev, V. A.; Goloshubin, G.; Zuev, J.; Brovman, Y.

    2013-12-01

    EM or seismic data inversions are affected by many factors, which may conceal the responses from target objects. We address here the contributions from the following effects: 1) Pre-survey spectral sensitivity factor. Preliminary information about a target layer can be used for a pre-survey estimation of the required frequency domain and signal level. A universal approach allows making such estimations in real time, helping the survey crew to optimize the acquisition process. 2) Preliminary identification of velocities and their dispersions for all the seismic waves arising in a stratified medium became a fast working tool, based on the exact analytical solution. 3) Vertical gradients effect. For most layers the log data scatter, requiring an averaging pattern. A linear gradient within each representative layer is a reasonable compromise between the required inversion accuracy and forward modeling complexity. 4) The effect from the seismic source's radial component becomes comparable with the vertical part for explosive sources. If this effect is not taken into account, a serious modeling error takes place. This problem has an algorithmic solution. 5) Seismic modeling is often based on different representations of a source, formulated either for a force or for a potential. The wave amplitudes depend on the formulation, making an inversion result sensitive to it. 6) Asymmetrical seismic waves (modified Rayleigh) in symmetrical geometry around a liquid fracture come from the S-wave and merge with the modified Krauklis wave at high frequencies. A detailed analysis of this feature allows a spectral range optimization for the proper wave's extraction. 7) An ultrasonic experiment was conducted to show the appearance of different waves for a super-thin water-saturated fracture between two Plexiglas plates, confirmed by comparison with theoretical computations. 8) A 'sandwich effect' was detected by comparison with an averaged layer's effect. This opens an opportunity of the shale gas direct

  2. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error [image omitted] attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  3. Inborn Errors of Metabolism.

    PubMed

    Ezgu, Fatih

    2016-01-01

    Inborn errors of metabolism are single gene disorders resulting from defects in the biochemical pathways of the body. Although these disorders are individually rare, collectively they account for a significant portion of childhood disability and deaths. Most of the disorders are inherited as autosomal recessive, whereas autosomal dominant and X-linked disorders are also present. The clinical signs and symptoms arise from the accumulation of the toxic substrate, deficiency of the product, or both. Depending on the residual activity of the deficient enzyme, the onset of the clinical picture may vary from the newborn period up until adulthood. Hundreds of disorders have been described to date, and there is considerable clinical overlap between certain inborn errors. As a result, the definite diagnosis of inborn errors depends on enzyme assays or genetic tests. Especially in recent years, significant achievements have been made in the biochemical and genetic diagnosis of inborn errors. Techniques such as tandem mass spectrometry and gas chromatography for biochemical diagnosis, and microarrays and next-generation sequencing for genetic diagnosis, have enabled rapid and accurate diagnosis. These diagnostic achievements have also enabled newborn screening and prenatal diagnosis. Parallel to the development of diagnostic methods, significant progress has also been made in treatment. Treatment approaches such as special diets, enzyme replacement therapy, substrate inhibition, and organ transplantation have been widely used. It is clear that, with the help of the preclinical and clinical research carried out on inborn errors, better diagnostic methods and better treatment approaches will very likely become available.

  4. Errors and mistakes in breast ultrasound diagnostics.

    PubMed

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved breast disease diagnostics. Nevertheless, as in every imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequent errors in ultrasound are presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement, the time gain curve, or range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors are listed. Methods of minimizing the number of errors are discussed, including appropriate examination technique, taking into account data from the case history, and using the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. Examples are presented of errors resulting from the technical conditions of the method, and of examiner-dependent errors related to the great diversity and variation of ultrasound images of pathological breast lesions.

  5. Scientific Impacts of Wind Direction Errors

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert

    2004-01-01

    An assessment of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak wind (less than 7 m/s) conditions was made. These weak winds cover most of the tropical, sub-tropical, and coastal oceans. Introduction of these errors in the semi-daily winds causes, on average, 5% changes in the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms that redistribute heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's function in modifying Earth's climate. Simulation by an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to the conditions during an El Nino episode. Similar wind directional errors cause significant changes in sea-surface temperature and sea-level patterns in coastal oceans in a coastal model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere. When directional information below 7 m/s was withheld, approximately 40% of the improvement was lost.

  6. Tropical errors and convection

    NASA Astrophysics Data System (ADS)

    Bechtold, P.; Bauer, P.; Engelen, R. J.

    2012-12-01

    Tropical convection is analysed in the ECMWF Integrated Forecast System (IFS) through tropical errors and their evolution during the last decade as a function of model resolution and model changes. As the characterization of these errors is particularly difficult over tropical oceans due to sparse in situ upper-air data, more weight than in the middle latitudes is given in the analysis to the underlying forecast model. Therefore, special attention is paid to available near-surface observations and to comparison with analyses from other centers. There is a systematic lack of low-level wind convergence in the Intertropical Convergence Zone (ITCZ) in the IFS, leading to a spindown of the Hadley cell. Critical areas with strong cross-equatorial flow and large wind errors are the Indian Ocean, with large interannual variations in forecast errors, and the East Pacific, with persistent systematic errors that have evolved little during the last decade. The analysis quality in the East Pacific is affected by observation errors inherent to the atmospheric motion vector wind product. The model's tropical climate and its variability and teleconnections are also evaluated, with a particular focus on the Madden-Julian Oscillation (MJO) during the Year of Tropical Convection (YOTC). The model is shown to reproduce the observed tropical large-scale wave spectra and teleconnections, but overestimates the precipitation during the South-East Asian summer monsoon. The recent improvements in tropical precipitation, convectively coupled wave and MJO predictability are shown to be strongly related to improvements in the convection parameterization that realistically represent the convection sensitivity to environmental moisture, and the large-scale forcing due to the use of strong entrainment and a variable adjustment time-scale. There is however a remaining slight moistening tendency and low-level wind imbalance in the model that is responsible for the Asian Monsoon bias and for too

  7. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  8. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  9. Marking Errors: A Simple Strategy

    ERIC Educational Resources Information Center

    Timmons, Theresa Cullen

    1987-01-01

    Indicates that using highlighters to mark errors produced a 76% class improvement in removing comma errors and a 95.5% improvement in removing apostrophe errors. Outlines two teaching procedures, to be followed before introducing this tool to the class, that enable students to remove errors at this effective rate. (JD)

  10. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
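    The interval-arithmetic approach to error propagation that the abstract describes (and that INTLAB implements for MATLAB) can be illustrated with a toy Python class; this is a sketch of the general technique, not of the authors' code:

```python
class Interval:
    """Closed interval [lo, hi]: each measured quantity becomes the
    interval [value - err, value + err], and ordinary arithmetic is
    replaced by interval arithmetic, so the final interval bounds the
    propagated error automatically."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "interval contains zero"
        return self * Interval(1 / other.lo, 1 / other.hi)

def measured(value, err):
    return Interval(value - err, value + err)

# Resistance from measured voltage and current: R = V / I.
V = measured(12.0, 0.1)    # volts, +/- 0.1
I = measured(2.0, 0.05)    # amperes, +/- 0.05
R = V / I
half_width = (R.hi - R.lo) / 2   # automatic error bound for R
```

    Compared with manual propagation formulas, the interval result comes directly from evaluating the same expression, which is what makes the approach attractive for complicated formulas.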

  11. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  12. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  13. Transmission errors and bearing contact of spur, helical and spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Zhang, J.; Lee, H.-T.; Handschuh, R. F.

    1988-01-01

    An investigation of transmission errors and bearing contact of spur, helical and spiral bevel gears was performed. Modified tooth surfaces for these gears have been proposed in order to absorb linear transmission errors caused by gear misalignment and localize the bearing contact. Numerical examples for spur, helical, and spiral bevel gears are presented to illustrate the behavior of the modified gear surfaces under misalignment and errors of assembly. The numerical results indicate that the modified surfaces will perform with a low level of transmission error in a nonideal operating environment.

  14. A Simple Approach to Experimental Errors

    ERIC Educational Resources Information Center

    Phillips, M. D.

    1972-01-01

    Classifies experimental error into two main groups: systematic error (instrument, personal, inherent, and variational errors) and random errors (reading and setting errors) and presents mathematical treatments for the determination of random errors. (PR)

  15. Manson's triple error.

    PubMed

    F, Delaporte

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  16. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
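    As a rough illustration of the general idea (hiding auxiliary bits in the low-order bits of host data, with a key-driven permutation of the processing order), here is a plain LSB-replacement sketch in Python. It deliberately omits the modular refinement that reduces embedding error and doubles capacity; the function names and structure are illustrative only:

```python
import random

def embed_bits(host, bits, key):
    """Hide auxiliary bits in the low-order bit of host byte values,
    visiting the host in a key-dependent permuted order."""
    assert len(bits) <= len(host)
    order = list(range(len(host)))
    random.Random(key).shuffle(order)        # acts as the digital key
    out = list(host)
    for bit, idx in zip(bits, order):
        out[idx] = (out[idx] & ~1) | bit     # replace low-order bit
    return bytes(out)

def extract_bits(stego, nbits, key):
    """Recover the embedded bits using the same key."""
    order = list(range(len(stego)))
    random.Random(key).shuffle(order)
    return [stego[idx] & 1 for idx in order[:nbits]]

payload = [1, 0, 1, 1, 0, 0, 1, 0]
host = bytes(range(64))
stego = embed_bits(host, payload, key=42)
recovered = extract_bits(stego, len(payload), key=42)
```

    Each host byte changes by at most 1, which is what preserves human perception of the host data; without the key, the embedding order (and hence the payload) is not recoverable from the stego bytes alone.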

  17. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  18. Pulse Shaping Entangling Gates and Error Suppression

    NASA Astrophysics Data System (ADS)

    Hucul, D.; Hayes, D.; Clark, S. M.; Debnath, S.; Quraishi, Q.; Monroe, C.

    2011-05-01

    Control of spin dependent forces is important for generating entanglement and realizing quantum simulations in trapped ion systems. Here we propose and implement a composite pulse sequence based on the Molmer-Sorensen gate to decrease gate infidelity due to frequency and timing errors. The composite pulse sequence uses an optical frequency comb to drive Raman transitions simultaneously detuned from trapped ion transverse motional red and blue sideband frequencies. The spin dependent force displaces the ions in phase space, and the resulting spin-dependent geometric phase depends on the detuning. Voltage noise on the rf electrodes changes the detuning between the trapped ions' motional frequency and the laser, decreasing the fidelity of the gate. The composite pulse sequence consists of successive pulse trains from counter-propagating frequency combs with phase control of the microwave beatnote of the lasers to passively suppress detuning errors. We present the theory and experimental data with one and two ions where a gate is performed with a composite pulse sequence. This work was supported by the U.S. ARO, IARPA, the DARPA OLE program, the MURI program; the NSF PIF Program; the NSF Physics Frontier Center at JQI; the European Commission AQUTE program; and the IC postdoc program administered by the NGA.

  19. Frequency division multiplex technique

    NASA Technical Reports Server (NTRS)

    Brey, H. (Inventor)

    1973-01-01

    A system for monitoring a plurality of condition responsive devices is described. It consists of a master control station and a remote station. The master control station is capable of transmitting command signals which includes a parity signal to a remote station which transmits the signals back to the command station so that such can be compared with the original signals in order to determine if there are any transmission errors. The system utilizes frequency sources which are 1.21 multiples of each other so that no linear combination of any harmonics will interfere with another frequency.
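    The 1.21 channel-spacing claim can be checked numerically for simple harmonic coincidences. This sketch tests a narrower property than the abstract's "no linear combination of any harmonics" claim, and treating 1.21 as the exact ratio 121/100 is an assumption:

```python
from fractions import Fraction

RATIO = Fraction(121, 100)   # assumed exact form of the 1.21 spacing

def harmonic_collision(ratio, max_order):
    """Return (k, m) if harmonic k of one channel coincides with
    harmonic m of the adjacent channel (m * ratio == k), searching
    orders up to max_order; None if no coincidence exists."""
    for k in range(1, max_order + 1):
        for m in range(1, max_order + 1):
            if m * ratio == k:
                return (k, m)
    return None
```

    With this ratio, m * 121/100 is an integer only when m is a multiple of 100, so the first harmonic coincidence occurs at order 121 versus 100, far beyond any harmonic order of practical concern in such a telemetry system.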

  20. Cognitive control of conscious error awareness: error awareness and error positivity (Pe) amplitude in moderate-to-severe traumatic brain injury (TBI).

    PubMed

    Logan, Dustin M; Hill, Kyle R; Larson, Michael J

    2015-01-01

    Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212

  2. Improving OFDR spatial resolution by reducing external clock sampling error

    NASA Astrophysics Data System (ADS)

    Feng, Bowen; Liu, Kun; Liu, Tiegen; Jiang, Junfeng; Du, Yang

    2016-03-01

    Utilizing an auxiliary interferometer to produce external clock signals as the data acquisition clock is widely used to compensate the nonlinearity of the tunable laser source (TLS) in optical frequency domain reflectometry (OFDR). However, this method is not always accurate because of the large optical path length difference between the two arms of the auxiliary interferometer. To investigate the deviation, we study the source and influence of the external clock sampling error in OFDR system. Based on the model, we find that the sampling error declines as the TLS's optical frequency tuning rate increases. The spatial resolution can be as high as 4.8 cm and the strain sensing location accuracy can be up to 0.15 m at the measurement length of 310 m under the minimum sampling error with the optical frequency tuning rate of 2500 GHz/s. Hence, the spatial resolution can be improved by reducing external clock sampling error in OFDR system.

  3. EEG oscillatory patterns are associated with error prediction during music performance and are altered in musician's dystonia.

    PubMed

    Ruiz, María Herrojo; Strübing, Felix; Jabusch, Hans-Christian; Altenmüller, Eckart

    2011-04-15

    Skilled performance requires the ability to monitor ongoing behavior, detect errors in advance and modify the performance accordingly. The acquisition of fast predictive mechanisms might be possible due to the extensive training that characterizes expert performance. Recent EEG studies on piano performance reported a negative event-related potential (ERP) triggered in the ACC 70 ms before performance errors (pitch errors due to incorrect keypress). This ERP component, termed pre-error related negativity (pre-ERN), was assumed to reflect advance error detection. However, some questions remained to be addressed: (i) Does the electrophysiological marker prior to errors reflect an error signal itself or is it related instead to the implementation of control mechanisms? (ii) Does the posterior frontomedial cortex (pFMC, including ACC) interact with other brain regions to implement control adjustments following motor prediction of an upcoming error? (iii) Can we gain insight into the electrophysiological correlates of error prediction and control by assessing the local neuronal synchronization and phase interaction among neuronal populations? (iv) Finally, are error detection and control mechanisms defective in pianists with musician's dystonia (MD), a focal task-specific dystonia resulting from dysfunction of the basal ganglia-thalamic-frontal circuits? Consequently, we investigated the EEG oscillatory and phase synchronization correlates of error detection and control during piano performances in healthy pianists and in a group of pianists with MD. In healthy pianists, the main outcomes were increased pre-error theta and beta band oscillations over the pFMC and 13-15 Hz phase synchronization, between the pFMC and the right lateral prefrontal cortex, which predicted corrective mechanisms. In MD patients, the pattern of phase synchronization appeared in a different frequency band (6-8 Hz) and correlated with the severity of the disorder. The present

  4. Position error propagation in the simplex strapdown navigation system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system were documented. Improving the long term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed errors at the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator which provided plots of error time functions in response to various source error values.
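
    For context on the "Schuler frequency" mentioned above: the undamped position error of an inertial navigator oscillates with the Schuler period, 2*pi*sqrt(R/g), roughly 84.4 minutes. A quick check of that standard formula (the constants below are textbook values, not taken from the report):

```python
import math

R_EARTH = 6.371e6   # mean Earth radius, m
G = 9.81            # gravitational acceleration, m/s^2

# Schuler oscillation: undamped position errors in an inertial
# navigator oscillate at omega = sqrt(g / R).
omega = math.sqrt(G / R_EARTH)          # rad/s
period_min = 2 * math.pi / omega / 60   # minutes

print(f"Schuler period = {period_min:.1f} min")  # 84.4 min
```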

  5. Influence of modulation frequency in rubidium cell frequency standards

    NASA Technical Reports Server (NTRS)

    Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.

    1983-01-01

    The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.

  6. Outpatient Prescribing Errors and the Impact of Computerized Prescribing

    PubMed Central

    Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W

    2005-01-01

    Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
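
    The reported interval can be reproduced with the standard normal approximation for a binomial proportion (a sketch; the authors' exact method is not stated in the abstract):

```python
import math

def proportion_ci(k, n, z=1.96):
    """Normal-approximation 95% confidence interval for a
    binomial proportion of k successes in n trials."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# 143 prescribing errors among 1879 screened prescriptions
p, lo, hi = proportion_ci(143, 1879)
print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # 7.6% (95% CI 6.4% to 8.8%)
```

    The result matches the 7.6% (CI 6.4% to 8.8%) quoted in the abstract.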

  7. Reducing medication errors in critical care: a multimodal approach

    PubMed Central

    Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad

    2014-01-01

    The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478

  8. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
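
    The propagation-of-error bookkeeping above can be sketched as a variance sum: for a balance formed from several measured terms, the total variance is the sum of the term variances plus twice the signed covariance contributions. The numbers below are hypothetical, chosen only to illustrate the arithmetic (in the study, body-mass-change error dominated and covariances were under 10% of the total):

```python
import math

def balance_variance(variances, covariances=0.0):
    """Variance of a linear balance B = sum of signed terms:
    var(B) = sum of term variances + 2 * (net signed covariance).
    `covariances` is the already sign-weighted sum over term pairs."""
    return sum(variances) + 2 * covariances

# Hypothetical per-term variances (e.g. body mass change, intake, output)
var_total = balance_variance([0.25, 0.04, 0.01], covariances=0.01)
print(math.sqrt(var_total))  # standard error of the net balance
```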

  9. [Dealing with errors in medicine].

    PubMed

    Schoenenberger, R A; Perruchoud, A P

    1998-12-24

    Iatrogenic disease is probably more commonly than assumed the consequence of errors and mistakes committed by physicians and other medical personnel. Traditionally, strategies to prevent errors in medicine focus on inspection and rely on the professional ethos of health care personnel. The increasingly complex nature of medical practice and the multitude of interventions that each patient receives increase the likelihood of error. More efficient approaches to deal with errors have been developed. The methods include routine identification of errors (critical incident reporting), systematic monitoring of multiple-step processes in medical practice, system analysis, and system redesign. A search for underlying causes of errors (rather than distal causes) will enable organizations to collectively learn without denying the inevitable occurrence of human error. Errors and mistakes may thus become valuable opportunities to increase the quality of medical care.

  10. Multifrequency frequency-domain spectrometer for tissue analysis.

    PubMed

    Spichtig, Sonja; Hornung, René; Brown, Derek W; Haensse, Daniel; Wolf, Martin

    2009-02-01

    In this paper we describe the modification and assessment of a standard multidistance frequency-domain near infrared spectroscopy (NIRS) instrument to perform multifrequency frequency-domain NIRS measurements. The first aim of these modifications was to develop an instrument that enables measurement of small volumes of tissue such as the cervix, which is too small to be measured using a multidistance approach. The second aim was to enhance the spectral resolution to be able to determine the absolute concentrations of oxy-, deoxy- and total hemoglobin, water, and lipids. The third aim was to determine the accuracy and error of measurement of this novel instrument in both in vitro and in vivo environments. The modifications include two frequency synthesizers with variable, freely adjustable frequency, broadband high-frequency amplifiers, the development of a novel avalanche photodiode (APD) detector and demodulation circuit, additional laser diodes with additional wavelengths, and a respective graphic user interface to analyze the measurements. To test the instrument and algorithm, phantoms with optical properties similar to those of biological tissue were measured and analyzed. The results show that the absorption coefficient can be determined with an error of <10%. The error of the scattering coefficient was <31%. Since the accuracy of the chromophore concentrations depends on the absorption coefficient and not on the scattering coefficient, the <10% error is the clinically relevant parameter. In addition, the new APD had similar accuracy as the standard photomultiplier tubes. To determine the accuracy of chromophore concentration measurements we employed liquid Intralipid(R) phantoms that contained 99% water, 1% lipid, and an increasing concentration of hemoglobin in steps of 0.010 mM. Water concentration was measured with an accuracy of 6.5% and hemoglobin concentration with an error of 0.0024 mM independent of the concentration. The measured lipid concentration

  11. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  12. Reducing nurse medicine administration errors.

    PubMed

    Ofosu, Rose; Jarrett, Patricia

    Errors in administering medicines are common and can compromise the safety of patients. This review discusses the causes of drug administration error in hospitals by student and registered nurses, and the practical measures educators and hospitals can take to improve nurses' knowledge and skills in medicines management, and reduce drug errors.

  13. Error Bounds for Interpolative Approximations.

    ERIC Educational Resources Information Center

    Gal-Ezer, J.; Zwas, G.

    1990-01-01

    Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)

  14. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  15. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  16. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment that emits radiation for therapeutic purposes should be checked often to avoid administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administrators should take proper care of this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. A series of radiation overdose cases has recently been reported, and the doctors responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures involving radiation. Taxonomy may also help. PMID:24251304

  17. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  18. Financial errors in dementia: Testing a neuroeconomic conceptual framework

    PubMed Central

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.

    2013-01-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p< 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p< 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884

  19. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  20. Modeling and quality assessment of halftoning by error diffusion.

    PubMed

    Kite, T D; Evans, B L; Bovik, A C

    2000-01-01

    Digital halftoning quantizes a graylevel image to one bit per pixel. Halftoning by error diffusion reduces local quantization error by filtering the quantization error in a feedback loop. In this paper, we linearize error diffusion algorithms by modeling the quantizer as a linear gain plus additive noise. We confirm the accuracy of the linear model in three independent ways. Using the linear model, we quantify the two primary effects of error diffusion: edge sharpening and noise shaping. For each effect, we develop an objective measure of its impact on the subjective quality of the halftone. Edge sharpening is proportional to the linear gain, and we give a formula to estimate the gain from a given error filter. In quantifying the noise, we modify the input image to compensate for the sharpening distortion and apply a perceptually weighted signal-to-noise ratio to the residual of the halftone and modified input image. We compute the correlation between the residual and the original image to show when the residual can be considered signal independent. We also compute a tonality measure similar to total harmonic distortion. We use the proposed measures for edge sharpening, noise shaping, and tonality to evaluate the quality of error diffusion algorithms. PMID:18255461
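
    For readers unfamiliar with the feedback loop being modeled, a minimal error-diffusion halftoner using the classic Floyd-Steinberg error filter might look like the sketch below (one common filter choice; the paper analyzes error diffusion generally, not this specific filter):

```python
def floyd_steinberg(img):
    """Halftone a grayscale image (values in [0, 1]) to {0, 1},
    diffusing each pixel's quantization error to its unprocessed
    neighbors with the Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # working copy
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = 1.0 if old >= 0.5 else 0.0  # one-bit quantizer
            out[y][x] = new
            err = old - new
            # Push the error forward (right and next row only)
            if x + 1 < w:
                out[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1][x - 1] += err * 3 / 16
                out[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1][x + 1] += err * 1 / 16
    return out

# A flat 50% gray patch halftones to an alternating pattern whose
# mean stays close to 0.5, i.e. local quantization error is small.
halftone = floyd_steinberg([[0.5] * 8 for _ in range(8)])
mean = sum(map(sum, halftone)) / 64
print(mean)
```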

  1. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
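
    The abstract does not give the modified-least-squares formulas, but the same variance ratio appears in classic Deming regression, which fits a line when both variables carry error. The sketch below is an illustrative analogue, not the paper's method:

```python
def deming_fit(x, y, delta=1.0):
    """Straight-line fit when both x and y contain measurement error.
    delta is the variance ratio: var(error in y) / var(error in x).
    Uses the standard closed-form Deming-regression slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + ((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2) ** 0.5
             ) / (2 * sxy)
    return slope, my - slope * mx

# Noise-free points on y = 2x + 1 are recovered exactly
slope, intercept = deming_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # slope = 2, intercept = 1
```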

  2. Identifying and Reducing Systematic Errors in Chromosome Conformation Capture Data

    PubMed Central

    Hahn, Seungsoo; Kim, Dongsup

    2015-01-01

    Chromosome conformation capture (3C)-based techniques have recently been used to probe the genomic architecture of the nucleus. These techniques yield indirect data on the distances between genomic loci in the form of contact frequencies that must be normalized to remove various errors. This normalization process determines the quality of data analysis. In this study, we describe two systematic errors that result from the heterogeneous local density of restriction sites and different local chromatin states, methods to identify and remove those artifacts, and three previously described sources of systematic errors in 3C-based data: fragment length, mappability, and local DNA composition. To explain the effect of systematic errors on the results, we used three different published data sets to show the dependence of the results on restriction enzymes and experimental methods. Comparison of the results from different restriction enzymes shows a higher correlation after removing systematic errors. In contrast, using different methods with the same restriction enzymes shows a lower correlation after removing systematic errors. Notably, the improved correlation of the latter case caused by systematic errors indicates that a higher correlation between results does not ensure the validity of the normalization methods. Finally, we suggest a method to analyze random error and provide guidance for the maximum reproducibility of contact frequency maps. PMID:26717152

  3. Register file soft error recovery

    SciTech Connect

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  4. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.

  5. Method and apparatus for reducing quantization error in laser gyro test data through high speed filtering

    SciTech Connect

    Mark, J.G.; Brown, A.K.; Matthews, A.

    1987-01-06

    A method is described for processing ring laser gyroscope test data comprising the steps of: (a) accumulating the data over a preselected sample period; and (b) filtering the data at a predetermined frequency so that non-time dependent errors are reduced by a substantially greater amount than are time dependent errors; then (c) analyzing the random walk error of the filtered data.
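
    The claim that filtering reduces non-time-dependent errors more than time-dependent ones can be illustrated numerically: block-averaging shrinks white noise (such as quantization error) roughly as 1/sqrt(N), while a random-walk component survives averaging almost untouched. A simulation sketch with hypothetical unit-variance noise, not data from the patent:

```python
import random

random.seed(0)  # reproducible illustration

def sample_paths(n_steps, n_trials):
    """Compare how block-averaging suppresses white noise versus a
    random walk (a time-correlated error such as gyro random walk).
    Returns the RMS of the block average for each error type."""
    white_msq = walk_msq = 0.0
    for _ in range(n_trials):
        white = [random.gauss(0, 1) for _ in range(n_steps)]
        walk, s = [], 0.0
        for w in white:
            s += w
            walk.append(s)
        white_msq += (sum(white) / n_steps) ** 2
        walk_msq += (sum(walk) / n_steps) ** 2
    return (white_msq / n_trials) ** 0.5, (walk_msq / n_trials) ** 0.5

w_rms, walk_rms = sample_paths(n_steps=1000, n_trials=200)
# Averaged white noise shrinks toward 1/sqrt(1000) ~ 0.03;
# the averaged random walk stays large (~sqrt(N/3) ~ 18).
print(w_rms, walk_rms)
```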

  6. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1979-01-01

    The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.

  7. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10-3 to 10-4—comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  8. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  9. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function to attempt to eliminate river-breeze contributions in the wind fields.
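The binarization step described above (mapping each grid cell's wind direction to 0 for offshore, 1 for onshore) can be sketched as follows. The onshore sector bounds used here are hypothetical placeholders; the real offshore/onshore split depends on the local coastline orientation.

```python
def binarize_wind(grid_dirs, onshore_min=0.0, onshore_max=180.0):
    """Binarize a grid of wind directions (degrees): 1 = onshore, 0 = offshore.

    The onshore sector [onshore_min, onshore_max) is a hypothetical
    placeholder; the real split depends on the coastline orientation.
    """
    return [[1 if onshore_min <= (d % 360.0) < onshore_max else 0
             for d in row] for row in grid_dirs]

# One 5-minute snapshot on a tiny 2x2 grid of 1.25-km cells
d = binarize_wind([[90.0, 270.0], [45.0, 359.0]])
print(d)  # [[1, 0], [1, 0]]
```

The resulting binary grids d(i,j;n) would then be compared against the forecast grids D(i,j;n) by the CEM correlation step.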

  10. Sepsis: Medical errors in Poland.

    PubMed

    Rorat, Marta; Jurek, Tomasz

    2016-01-01

Health, safety and medical errors are currently the subject of worldwide discussion. The authors analysed medico-legal opinions to determine types of medical errors and their impact on the course of sepsis. The authors carried out a retrospective analysis of 66 medico-legal opinions issued by the Wroclaw Department of Forensic Medicine between 2004 and 2013 (at the request of the prosecutor or court) in cases examined for medical errors. Medical errors were confirmed in 55 of the 66 medico-legal opinions. The age of victims varied from 2 weeks to 68 years; 49 patients died. The analysis revealed medical errors committed by 113 health-care workers: 98 physicians, 8 nurses and 8 emergency medical dispatchers. In 33 cases, an error was made before hospitalisation. Hospital errors occurred in 35 victims. Diagnostic errors were discovered in 50 patients, including 46 cases of sepsis being incorrectly recognised and insufficient diagnoses in 37 cases. Therapeutic errors occurred in 37 victims, organisational errors in 9 and technical errors in 2. In addition to sepsis, 8 patients also had a severe concomitant disease and 8 had a chronic disease. In 45 cases, the authors observed glaring errors, which could incur criminal liability. There is an urgent need to introduce a system for reporting and analysing medical errors in Poland. The development and popularisation of standards for identifying and treating sepsis across basic medical professions is essential to improve patient safety and survival rates. Procedures should be introduced to prevent health-care workers from administering incorrect treatment in such cases.

  11. Skills, rules and knowledge in aircraft maintenance: errors in context

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2002-01-01

    Automatic or skill-based behaviour is generally considered to be less prone to error than behaviour directed by conscious control. However, researchers who have applied Rasmussen's skill-rule-knowledge human error framework to accidents and incidents have sometimes found that skill-based errors appear in significant numbers. It is proposed that this is largely a reflection of the opportunities for error which workplaces present and does not indicate that skill-based behaviour is intrinsically unreliable. In the current study, 99 errors reported by 72 aircraft mechanics were examined in the light of a task analysis based on observations of the work of 25 aircraft mechanics. The task analysis identified the opportunities for error presented at various stages of maintenance work packages and by the job as a whole. Once the frequency of each error type was normalized in terms of the opportunities for error, it became apparent that skill-based performance is more reliable than rule-based performance, which is in turn more reliable than knowledge-based performance. The results reinforce the belief that industrial safety interventions designed to reduce errors would best be directed at those aspects of jobs that involve rule- and knowledge-based performance.

  12. Frequency spectrum analyzer with phase-lock

    DOEpatents

    Boland, Thomas J.

    1984-01-01

A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal comprises a voltage controlled oscillator (VCO), which is driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal, and drives the VCO, locking it in phase momentarily with the input signal. The input signal and the output of the VCO are fed into a correlator, which transforms the input signal to the frequency domain while providing an accurate absolute amplitude measurement of each frequency component of the input signal.
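The correlator stage has a simple digital analogue: correlating the input with quadrature references at the locked frequency and averaging recovers the absolute amplitude of that frequency component. The sketch below illustrates that principle only; it is not the patented analog circuit.

```python
import math

def component_amplitude(samples, freq, sample_rate):
    """Estimate the amplitude of one frequency component by correlating
    the signal with quadrature (sine/cosine) references at `freq`."""
    n = len(samples)
    i_sum = q_sum = 0.0
    for k, x in enumerate(samples):
        t = k / sample_rate
        i_sum += x * math.cos(2 * math.pi * freq * t)
        q_sum += x * math.sin(2 * math.pi * freq * t)
    # The factor 2/n converts the correlation sums into an amplitude.
    return 2.0 * math.hypot(i_sum, q_sum) / n

# A 50 Hz tone of amplitude 1.5, sampled at 1 kHz for one second
fs = 1000
sig = [1.5 * math.sin(2 * math.pi * 50 * k / fs) for k in range(fs)]
print(round(component_amplitude(sig, 50, fs), 3))  # 1.5
```

Sweeping `freq` across a range, as the ramp-driven VCO does in the patent, would trace out the amplitude spectrum of the input.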

  13. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
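The positive/zero/negative prediction-error pattern described above is the same quantity used in Rescorla-Wagner and temporal-difference learning models. A minimal sketch follows; the learning rate and reward values are arbitrary illustrations, not parameters from the paper.

```python
def update_value(value, reward, learning_rate=0.1):
    """One Rescorla-Wagner / TD-style update.

    The prediction error is the difference between received and
    predicted reward: positive when reward exceeds the prediction,
    zero when fully predicted, negative when reward is omitted.
    """
    prediction_error = reward - value
    return value + learning_rate * prediction_error, prediction_error

# Repeated delivery of a reward of 1.0 drives the prediction toward 1.0,
# so the prediction error (the putative dopamine signal) decays to zero.
v, delta = 0.0, 1.0
for _ in range(50):
    v, delta = update_value(v, 1.0)
print(round(v, 3), round(delta, 3))
```

After enough fully predicted rewards the error term vanishes, mirroring the baseline dopamine activity reported for fully predicted rewards.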

  14. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  15. Congenital errors of folate metabolism.

    PubMed

    Zittoun, J

    1995-09-01

    Congenital errors of folate metabolism can be related either to defective transport of folate through various cells or to defective intracellular utilization of folate due to some enzyme deficiencies. Defective transport of folate across the intestine and the blood-brain barrier was reported in the condition 'Congenital Malabsorption of Folate'. This disease is characterized by a severe megaloblastic anaemia of early appearance associated with mental retardation. Anaemia is folate-responsive, but neurological symptoms are only poorly improved because of the inability to maintain adequate levels of folate in the CSF. A familial defect of cellular uptake was described in a family with a high frequency of aplastic anaemia or leukaemia. An isolated defect in folate transport into CSF was identified in a patient suffering from a cerebellar syndrome and pyramidal tract dysfunction. Among enzyme deficiencies, some are well documented, others still putative. Methylenetetrahydrofolate reductase deficiency is the most common. The main clinical findings are neurological signs (mental retardation, seizures, rarely schizophrenic syndromes) or vascular disease, without any haematological abnormality. Low levels of folate in serum, red blood cells and CSF associated with homocystinuria are constant. Methionine synthase deficiency is characterized by a megaloblastic anaemia occurring early in life that is more or less folate-responsive and associated with mental retardation. Glutamate formiminotransferase-cyclodeaminase deficiency is responsible for massive excretion of formiminoglutamic acid but megaloblastic anaemia is not constant. The clinical findings are a more or less severe mental or physical retardation. Dihydrofolate reductase deficiency was reported in three children presenting with a megaloblastic anaemia a few days or weeks after birth, which responded to folinic acid. The possible relationship between congenital disorders such as neural tube defects or

  16. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  17. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct the underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.

  18. Further characterization of the influence of crowding on medication errors

    PubMed Central

    Watts, Hannah; Nasim, Muhammad Umer; Sweis, Rolla; Sikka, Rishi; Kulstad, Erik

    2013-01-01

Study Objectives: Our prior analysis suggested that error frequency increases disproportionately with emergency department (ED) crowding. To characterize this further, we measured the association while controlling for the number of charts reviewed and for ambulance diversion status. We hypothesized that errors would occur significantly more frequently as crowding increased, even after controlling for higher patient volumes. Materials and Methods: We performed a prospective, observational study in a large, community hospital ED from May to October of 2009. Our ED has full-time pharmacists who review orders of patients to help identify errors prior to their causing harm. Research volunteers shadowed our ED pharmacists over discrete 4-hour time periods during their reviews of orders on patients in the ED. The total numbers of charts reviewed and errors identified were documented along with details for each error type, severity, and category. We then measured the correlation between error rate (number of errors divided by total number of charts reviewed) and ED occupancy rate while controlling for diversion status during the observational period. We estimated a sample size requirement of at least 45 errors identified to allow detection of an effect size of 0.6 based on our historical data. Results: During 324 hours of surveillance, 1171 charts were reviewed and 87 errors were identified. Median error rate per 4-hour block was 5.8% of charts reviewed (IQR 0-13). No significant change was seen with ED occupancy rate (Spearman's rho = –.08, P = .49). Median error rate during times on ambulance diversion was almost twice as large (11%, IQR 0-17), but this rate did not reach statistical significance in univariate or multivariate analysis. Conclusions: Error frequency appears to remain relatively constant across the range of crowding in our ED when controlling for patient volume via the quantity of orders reviewed. Error quantity therefore increases with crowding.
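The study's error-rate metric is simply the number of errors divided by the number of charts reviewed in each observation block. A sketch with invented block counts (not the study's data) shows how the per-block rates and their median would be computed:

```python
def error_rate(errors, charts):
    """Error rate for one observation block: errors / charts reviewed."""
    return errors / charts if charts else 0.0

def median(values):
    """Median of a list of numbers (no external dependencies)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Hypothetical 4-hour blocks: (errors identified, charts reviewed)
blocks = [(0, 12), (2, 15), (1, 18), (3, 20), (0, 10)]
rates = [error_rate(e, c) for e, c in blocks]
print([round(r, 3) for r in rates], round(median(rates), 3))
```

Correlating these per-block rates against an occupancy measure (the study used Spearman's rho) is what lets error frequency be separated from raw error quantity.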

  19. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  20. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  1. ISA accelerometer onboard the Mercury Planetary Orbiter: error budget

    NASA Astrophysics Data System (ADS)

    Iafolla, Valerio; Lucchesi, David M.; Nozzoli, Sergio; Santoli, Francesco

    2007-03-01

We have estimated a preliminary error budget for the Italian Spring Accelerometer (ISA) that will be allocated onboard the Mercury Planetary Orbiter (MPO) of the European Space Agency (ESA) space mission to Mercury named BepiColombo. The role of the accelerometer is to remove from the list of unknowns the non-gravitational accelerations that perturb the gravitational trajectory followed by the MPO in the strong radiation environment that characterises the orbit of Mercury around the Sun. Such a role is of fundamental importance in the context of the very ambitious goals of the Radio Science Experiments (RSE) of the BepiColombo mission. We have subdivided the errors on the accelerometer measurements into two main families: (i) the pseudo-sinusoidal errors and (ii) the random errors. The former are characterised by a periodic behaviour with the frequency of the satellite mean anomaly and its higher order harmonic components, i.e., they are deterministic errors. The latter are characterised by an unknown frequency distribution and we assumed for them a noise-like spectrum, i.e., they are stochastic errors. Among the pseudo-sinusoidal errors, the main contribution is due to the effects of the gravity gradients and the inertial forces, while among the random-like errors the main disturbing effect is due to the MPO centre-of-mass displacements produced by the onboard High Gain Antenna (HGA) movements and by the fuel consumption and sloshing. Also subtle are the random errors produced by the MPO attitude corrections necessary to guarantee the nadir pointing of the spacecraft. We have therefore formulated the ISA error budget and the requirements for the satellite in order to guarantee an orbit reconstruction for the MPO spacecraft with an along-track accuracy of about 1 m over the orbital period of the satellite around Mercury, so as to satisfy the RSE requirements.

  2. Error estimates and specification parameters for functional renormalization

    SciTech Connect

    Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof

    2013-07-15

We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions obtained by means of truncations depend not only on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation, the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.

  3. Improving medication administration error reporting systems. Why do errors occur?

    PubMed

    Wakefield, B J; Wakefield, D S; Uden-Holman, T

    2000-01-01

    Monitoring medication administration errors (MAE) is often included as part of the hospital's risk management program. While observation of actual medication administration is the most accurate way to identify errors, hospitals typically rely on voluntary incident reporting processes. Although incident reporting systems are more economical than other methods of error detection, incident reporting can also be a time-consuming process depending on the complexity or "user-friendliness" of the reporting system. Accurate incident reporting systems are also dependent on the ability of the practitioner to: 1) recognize an error has actually occurred; 2) believe the error is significant enough to warrant reporting; and 3) overcome the embarrassment of having committed a MAE and the fear of punishment for reporting a mistake (either one's own or another's mistake).

  4. The infrared spectrum of trimethylenemethane. Predictions of in-plane vibrational frequencies from correlated wave functions

    NASA Astrophysics Data System (ADS)

    Blahous, Charles P., III; Xie, Yaoming; Schaefer, Henry F., III

    1990-01-01

The infrared vibrational spectrum of trimethylenemethane (TMM) is predicted with self-consistent field and configuration interaction methods. The 3A'2 electronic ground state of TMM is described in terms of restricted Hartree-Fock theory and in light of experimental evidence. Analytic gradient methods are employed to optimize theoretical geometries for 3A'2 TMM; vibrational frequencies are evaluated via analytic second-derivative techniques (self-consistent-field) and finite differences of analytic gradients (configuration interaction). The resulting IR-spectral predictions are modified to reflect average errors statistically associated with the two theoretical methods.
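The final adjustment described, modifying raw predictions to reflect each method's average error, is in practice a multiplicative scaling of the harmonic frequencies. The scale factor and frequencies below are illustrative placeholders, not values taken from the paper:

```python
def scale_frequencies(raw_cm1, scale_factor):
    """Apply an empirical scale factor to raw harmonic vibrational
    frequencies (cm^-1) to correct for a method's systematic error."""
    return [scale_factor * f for f in raw_cm1]

# Hypothetical raw SCF frequencies and an illustrative scale factor of
# about 0.90; neither the factor nor the frequencies come from the paper.
raw = [3250.0, 1680.0, 950.0]
print(scale_frequencies(raw, 0.90))
```

The same scheme applies per method: one empirically determined factor for the self-consistent-field frequencies and another for the configuration-interaction frequencies.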

  5. The infrared spectrum of trimethylenemethane. Predictions of in-plane vibrational frequencies from correlated wave functions

    SciTech Connect

    Blahous, C.P. III; Xie, Y.; Schaefer, H.F. III )

    1990-01-15

The infrared vibrational spectrum of trimethylenemethane (TMM) is predicted with self-consistent field and configuration interaction methods. The 3A'2 electronic ground state of TMM is described in terms of restricted Hartree-Fock theory and in light of experimental evidence. Analytic gradient methods are employed to optimize theoretical geometries for 3A'2 TMM; vibrational frequencies are evaluated via analytic second-derivative techniques (self-consistent-field) and finite differences of analytic gradients (configuration interaction). The resulting IR-spectral predictions are modified to reflect average errors statistically associated with the two theoretical methods.

  6. Ultrashort-pulse measurement using noninstantaneous nonlinearities: Raman effects in frequency-resolved optical gating

    SciTech Connect

    DeLong, K.W.; Ladera, C.L.; Trebino, R.; Kohler, B.; Wilson, K.R.

    1995-03-01

    Ultrashort-pulse-characterization techniques generally require instantaneously responding media. We show that this is not the case for frequency-resolved optical gating (FROG). We include, as an example, the noninstantaneous Raman response of fused silica, which can cause errors in the retrieved pulse width of as much as 8% for a 25-fs pulse in polarization-gate FROG. We present a modified pulse-retrieval algorithm that deconvolves such slow effects and use it to retrieve pulses of any width. In experiments with 45-fs pulses this algorithm achieved better convergence and yielded a shorter pulse than previous FROG algorithms.

  7. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
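The innermost (n, n-16) CRC mentioned above is, in the CCSDS recommendation, a 16-bit cyclic redundancy check with polynomial x^16 + x^12 + x^5 + 1 and an all-ones initial register. A bitwise sketch follows (in Python, though the paper's simulator was written in C):

```python
def ccsds_crc16(data: bytes) -> int:
    """CRC-16 with polynomial 0x1021 (x^16 + x^12 + x^5 + 1) and
    initial register 0xFFFF, as used for CCSDS frame error detection."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8           # feed the next byte, MSB first
        for _ in range(8):
            if crc & 0x8000:       # top bit set: shift and apply polynomial
                crc = (crc << 1) ^ 0x1021
            else:
                crc <<= 1
            crc &= 0xFFFF          # keep the register 16 bits wide
    return crc

# Standard check value for this CRC variant over the ASCII digits 1-9
print(hex(ccsds_crc16(b"123456789")))  # 0x29b1
```

The 16 parity bits appended to the n-16 information bits are exactly this register value, which is why the code is described as (n, n-16).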

  8. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  9. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research on flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  10. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  11. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  12. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  13. Quantifying error distributions in crowding.

    PubMed

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  14. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  15. Challenge and error: critical events and attention-related errors.

    PubMed

    Cheyne, James Allan; Carriere, Jonathan S A; Solman, Grayden J F; Smilek, Daniel

    2011-12-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error↔attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention lapses; resource-depleting cognitions interfering with attention to subsequent task challenges. Attention lapses lead to errors, and errors themselves are a potent consequence often leading to further attention lapses potentially initiating a spiral into more serious errors. We investigated this challenge-induced error↔attention-lapse model using the Sustained Attention to Response Task (SART), a GO-NOGO task requiring continuous attention and response to a number series and withholding of responses to a rare NOGO digit. We found response speed and increased commission errors following task challenges to be a function of temporal distance from, and prior performance on, previous NOGO trials. We conclude by comparing and contrasting the present theory and findings to those based on choice paradigms and argue that the present findings have implications for the generality of conflict monitoring and control models.

  16. Human error in recreational boating.

    PubMed

    McKnight, A James; Becker, Wayne W; Pettit, Anthony J; McKnight, A Scott

    2007-03-01

    Each year over 600 people die and more than 4000 are reported injured in recreational boating accidents. As with most other accidents, human error is the major contributor. U.S. Coast Guard reports of 3358 accidents were analyzed to identify errors in each of the boat types by which statistics are compiled: auxiliary (motor) sailboats, cabin motorboats, canoes and kayaks, house boats, personal watercraft, open motorboats, pontoon boats, row boats, sail-only boats. The individual errors were grouped into categories on the basis of similarities in the behavior involved. Those presented here are the categories accounting for at least 5% of all errors when summed across boat types. The most revealing and significant finding is the extent to which the errors vary across types. Since boating is carried out with one or two types of boats for long periods of time, effective accident prevention measures, including safety instruction, need to be geared to individual boat types.

  17. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milliradians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  18. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    Errors from the Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU), which form a complex navigation system with a multitude of error sources, were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  19. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  20. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Lan, Tian; Ni, Guoqiang

    2013-07-01

    A motion error model of 3D coordinates is established and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence on the positioning errors derived from laser point cloud spacing is small. In the model, the positioning errors follow a simple harmonic vibration whose amplitude envelope gradually decreases as the vibration frequency increases. When the vibration period number is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the plane error, and in the plane the error in the scanning direction is less than the error in the flight direction. These conclusions are verified through analysis of flight test data.

  1. Canceling the momentum in a phase-shifting algorithm to eliminate spatially uniform errors.

    PubMed

    Hibino, Kenichi; Kim, Yangjin

    2016-08-10

    In phase-shifting interferometry, phase modulation nonlinearity causes both spatially uniform and nonuniform errors in the measured phase. Conventional linear-detuning error-compensating algorithms only eliminate the spatially variable error component. The uniform error is proportional to the inertial momentum of the data-sampling weight of a phase-shifting algorithm. This paper proposes a design approach to cancel the momentum by using characteristic polynomials in the Z-transform space and shows that an arbitrary M-frame algorithm can be modified to a new (M+2)-frame algorithm that acquires new symmetry to eliminate the uniform error. PMID:27534475
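As context for the momentum-cancellation idea, here is a minimal sketch of the classic four-frame phase-shifting estimate and the bias it acquires under phase-shift miscalibration (illustrative only; the paper's (M+2)-frame construction and its Z-transform design are not reproduced here):

```python
import numpy as np

def four_frame_phase(intensities):
    """Recover phase from four frames with nominal shifts 0, pi/2, pi, 3*pi/2."""
    i0, i1, i2, i3 = intensities
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic interferogram samples: I_k = A + B*cos(phi + k*pi/2).
phi_true = 0.7
frames = [2.0 + np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_frame_phase(frames)          # recovers phi_true exactly

# A 5% miscalibration of the phase step biases the same estimator; this
# detuning error is what error-compensating algorithms are built to remove.
frames_detuned = [2.0 + np.cos(phi_true + k * 1.05 * np.pi / 2) for k in range(4)]
phi_detuned = four_frame_phase(frames_detuned)
```

The uncompensated four-frame formula is exact under ideal phase steps but shows a first-order sensitivity to detuning, which motivates the longer error-compensating designs discussed in the abstract.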

  2. Errors as allies: error management training in health professions education.

    PubMed

    King, Aimee; Holder, Michael G; Ahmed, Rami A

    2013-06-01

    This paper adopts methods from the organisational team training literature to outline how health professions education can improve patient safety. We argue that health educators can improve training quality by intentionally encouraging errors during simulation-based team training. Preventable medical errors are inevitable, but encouraging errors in low-risk settings like simulation can allow teams to have better emotional control and foresight to manage the situation if it occurs again with live patients. Our paper outlines an innovative approach for delivering team training.

  3. Theory and Simulation of Field Error Transport.

    NASA Astrophysics Data System (ADS)

    Dubin, D. H. E.

    2007-11-01

    The rate at which a plasma escapes across an applied magnetic field B due to symmetry-breaking electric or magnetic "field errors" is revisited. Such field errors cause plasma loss (or compression) in stellarators, tokamaks [H.E. Mynick, Phys. Plasmas 13, 058102 (2006)], and nonneutral plasmas [Eggleston, Phys. Plasmas 14, 012302 (2007); Danielson et al., Phys. Plasmas 13, 055706]. We study this process using idealized simulations that follow guiding centers in given trap fields, neglecting their collective effect on the evolution, but including collisions. Also, the Fokker-Planck equation describing the particle distribution is solved, and the predicted transport agrees with simulations in every applicable regime. When a field error of the form δφ(r, θ, z) = ε(r) e^{i(mθ − kz)} is applied to an infinite plasma column, the transport rates fall into the usual banana, plateau and fluid regimes. When the particles are axially confined by applied trap fields, the same three regimes occur. When an added "squeeze" potential produces a separatrix in the axial motion, the transport is enhanced, scaling roughly as (ν/B)^{1/2} δ² when ν < ω_r. For ω_r < ν < ω_b (where ω_r, ν and ω_b are the rotation, collision and axial bounce frequencies) there is also a 1/ν regime similar to that predicted for ripple-enhanced transport [Mynick, op. cit.].

  4. Phonologic error distributions in the Iowa-Nebraska Articulation Norms Project: consonant singletons.

    PubMed

    Smit, A B

    1993-06-01

    The errors on consonant singletons made by children in the Iowa-Nebraska Articulation Norms Project (Smit, Hand, Freilinger, Bernthal, & Bird, 1990) were tabulated by age range and frequency. The prominent error types can usually be described as phonological processes, but there are other common errors as well, especially distortions of liquids and fricatives. Moreover, some of the relevant phonological processes appear to be restricted in the range of consonants or word-positions to which they apply. A metric based on frequency of use is proposed for determining that an error type is or is not atypical. Changes in frequency of error types over the age range are examined to determine if certain atypical error types are likely to be developmental, that is, likely to self-correct as the child matures. Finally, the clinical applications of these data for evaluation and intervention are explored.

  5. Detection of Mendelian Consistent Genotyping Errors in Pedigrees

    PubMed Central

    Cheung, Charles Y. K.; Thompson, Elizabeth A.; Wijsman, Ellen M.

    2014-01-01

    Detection of genotyping errors is a necessary step to minimize false results in genetic analysis. This is especially important when the rate of genotyping errors is high, as has been reported for high-throughput sequence data. To detect genotyping errors in pedigrees, Mendelian inconsistent (MI) error checks exist, as do multi-point methods that flag Mendelian consistent (MC) errors for sparse multi-allelic markers. However, few methods exist for detecting MC genotyping errors, particularly for dense variants on large pedigrees. Here we introduce an efficient method to detect MC errors even for very dense variants (e.g. SNPs and sequencing data) on pedigrees that may be large. Our method first samples inheritance vectors (IVs) using a moderately sparse but informative set of markers using a Markov chain Monte Carlo-based sampler. Using sampled IVs, we considered two test statistics to detect MC genotyping errors: the percentage of IVs inconsistent with observed genotypes (A1) or the posterior probability of error configurations (A2). Using simulations, we show that this method, even with the simpler A1 statistic, is effective for detecting MC genotyping errors in dense variants, with sensitivity almost as high as the theoretical best sensitivity possible. We also evaluate the effectiveness of this method as a function of parameters, including the observed genotype pattern, density of framework markers, error rate, allele frequencies, and number of sampled inheritance vectors. Our approach provides a line of defense against false findings based on the use of dense variants in pedigrees. PMID:24718985

  6. Sensorimotor adaptation error signals are derived from realistic predictions of movement outcomes.

    PubMed

    Wong, Aaron L; Shelhamer, Mark

    2011-03-01

    Neural systems that control movement maintain accuracy by adaptively altering motor commands in response to errors. It is often assumed that the error signal that drives adaptation is equivalent to the sensory error observed at the conclusion of a movement; for saccades, this is typically the visual (retinal) error. However, we instead propose that the adaptation error signal is derived as the difference between the observed visual error and a realistic prediction of movement outcome. Using a modified saccade-adaptation task in human subjects, we precisely controlled the amount of error experienced at the conclusion of a movement by back-stepping the target so that the saccade is hypometric (positive retinal error), but less hypometric than if the target had not moved (smaller retinal error than expected). This separates prediction error from both visual errors and motor corrections. Despite positive visual errors and forward-directed motor corrections, we found an adaptive decrease in saccade amplitudes, a finding that is well-explained by the employment of a prediction-based error signal. Furthermore, adaptive changes in movement size were linearly correlated to the disparity between the predicted and observed movement outcomes, in agreement with the forward-model hypothesis of motor learning, which states that adaptation error signals incorporate predictions of motor outcomes computed using a copy of the motor command (efference copy).

  7. Syntactic and Semantic Errors in Radiology Reports Associated With Speech Recognition Software.

    PubMed

    Ringler, Michael D; Goss, Brian C; Bartholmai, Brian J

    2015-01-01

    Speech recognition software (SRS) has many benefits, but also increases the frequency of errors in radiology reports, which could impact patient care. As part of a quality control project, 13 trained medical transcriptionists proofread 213,977 SRS-generated signed reports from 147 different radiologists over a 40-month time interval. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialty, and time periods using χ² analysis and multiple logistic regression, as appropriate. 20,759 (9.7%) reports contained errors; 3,992 (1.9%) contained material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (P<.001). Error proportion varied significantly among radiologists and between imaging subspecialties (P<.001). Errors were more common in cross-sectional reports (vs. plain radiography) (OR, 3.72), reports reinterpreting results of outside examinations (vs. in-house) (OR, 1.55), and procedural studies (vs. diagnostic) (OR, 1.91) (all P<.001). Dictation microphone upgrade did not affect error rate (P=.06). Error rate decreased over time (P<.001). PMID:26262224

  8. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125
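The blocking phenomenon that motivates prediction error theory can be reproduced with the textbook Rescorla-Wagner delta rule, in which each update is driven by the error between the reward λ and the summed prediction ΣV (a generic sketch of the classical model, not the authors' neural-circuit model of cricket learning):

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Delta-rule updates of associative strengths V over conditioning trials."""
    V = {}
    for stimuli in trials:
        error = lam - sum(V.get(s, 0.0) for s in stimuli)  # prediction error
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error
    return V

# Blocking group: stimulus A is pretrained alone, then the A+B compound.
blocked = rescorla_wagner([("A",)] * 20 + [("A", "B")] * 20)
# Control group: the A+B compound only, with no pretraining.
control = rescorla_wagner([("A", "B")] * 20)
# B acquires almost no strength when A already predicts the reward.
```

Because A alone already drives the prediction error to nearly zero before compound training, B is "blocked" from acquiring associative strength, whereas in the control group A and B share the learning equally.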

  11. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The key difficulties are determining how many temperature sensors are required and where to locate them. This research develops a method to determine the number and location of temperature measurements.
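A linear thermal-error model of the kind described can be sketched as an ordinary least-squares fit of deflection against discrete temperature readings (synthetic data; sensor selection and placement, the hard part of the report, is not addressed here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: deflection depends linearly on two of three sensors.
T = rng.uniform(20.0, 35.0, size=(50, 3))      # temperature readings (deg C)
true_coef = np.array([0.8, -0.3, 0.0])         # microns of deflection per deg C
deflection = T @ true_coef + 1.5 + rng.normal(0.0, 0.01, 50)

# Fit deflection = c0 + sum(c_i * T_i) by linear least squares.
X = np.column_stack([np.ones(len(T)), T])
coef, *_ = np.linalg.lstsq(X, deflection, rcond=None)
# coef[0] is the offset; coef[1:] recovers the per-sensor sensitivities.
```

The fit also reveals which sensors carry no information (near-zero coefficients), which is one simple signal for pruning the sensor set.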

  12. Frequency spirals

    NASA Astrophysics Data System (ADS)

    Ottino-Löffler, Bertrand; Strogatz, Steven H.

    2016-09-01

    We study the dynamics of coupled phase oscillators on a two-dimensional Kuramoto lattice with periodic boundary conditions. For coupling strengths just below the transition to global phase-locking, we find localized spatiotemporal patterns that we call "frequency spirals." These patterns cannot be seen under time averaging; they become visible only when we examine the spatial variation of the oscillators' instantaneous frequencies, where they manifest themselves as two-armed rotating spirals. In the more familiar phase representation, they appear as wobbly periodic patterns surrounding a phase vortex. Unlike the stationary phase vortices seen in magnetic spin systems, or the rotating spiral waves seen in reaction-diffusion systems, frequency spirals librate: the phases of the oscillators surrounding the central vortex move forward and then backward, executing a periodic motion with zero winding number. We construct the simplest frequency spiral and characterize its properties using analytical and numerical methods. Simulations show that frequency spirals in large lattices behave much like this simple prototype.
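The underlying model is the locally coupled Kuramoto lattice. Below is a minimal Euler integration with periodic boundaries that exposes the instantaneous frequencies in which the spirals become visible (illustrative parameters; with identical natural frequencies and near-uniform initial phases the lattice simply relaxes to phase locking, so no spiral forms in this particular run):

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the 2D Kuramoto lattice (periodic boundaries).

    Returns new phases and the instantaneous frequencies d(theta)/dt,
    the quantity in which frequency spirals become visible.
    """
    coupling = sum(
        np.sin(np.roll(theta, shift, axis) - theta)
        for axis in (0, 1) for shift in (1, -1)
    )
    freq = omega + K * coupling
    return theta + dt * freq, freq

rng = np.random.default_rng(1)
theta = 0.3 * rng.standard_normal((32, 32))  # near-uniform initial phases
omega = np.zeros((32, 32))                   # identical natural frequencies

for _ in range(4000):
    theta, freq = kuramoto_step(theta, omega, K=1.0, dt=0.05)
# The lattice phase-locks: instantaneous frequencies decay toward zero.
```

Seeding a phase vortex in `theta` instead of near-uniform phases is the kind of initial condition under which the librating patterns described in the abstract can be looked for.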

  13. Detection and frequency tracking of chirping signals

    SciTech Connect

    Elliott, G.R.; Stearns, S.D.

    1990-08-01

    This paper discusses several methods to detect the presence of and track the frequency of a chirping signal in broadband noise. The dynamic behavior of each of the methods is described and tracking error bounds are investigated in terms of the chirp rate. Frequency tracking and behavior in the presence of varying levels of noise are illustrated in examples. 11 refs., 29 figs.
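A generic way to track a chirp's frequency, in the spirit of the methods compared, is to follow the FFT peak of successive windowed segments (a sketch with assumed parameters, not one of the paper's specific trackers):

```python
import numpy as np

fs = 1000.0                       # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
f0, rate = 50.0, 60.0             # start frequency (Hz) and chirp rate (Hz/s)
signal = np.cos(2 * np.pi * (f0 * t + 0.5 * rate * t ** 2))
signal = signal + 0.2 * np.random.default_rng(2).standard_normal(t.size)

# Track the frequency as the FFT peak of successive windowed segments.
win, hop = 256, 128
tracked, centers = [], []
for start in range(0, t.size - win, hop):
    seg = signal[start:start + win] * np.hanning(win)
    spectrum = np.abs(np.fft.rfft(seg))
    tracked.append(np.argmax(spectrum) * fs / win)
    centers.append(t[start + win // 2])

true_freq = f0 + rate * np.asarray(centers)   # instantaneous frequency f0 + r*t
tracked = np.asarray(tracked)
```

The tracking error of this naive approach is bounded below by the bin spacing fs/win and grows with the chirp rate as the tone smears across bins within a window, which is exactly the trade-off the abstract describes.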

  14. BFC: correcting Illumina sequencing errors

    PubMed Central

    2015-01-01

    Summary: BFC is a free, fast and easy-to-use sequencing error corrector designed for Illumina short reads. It uses a non-greedy algorithm but still maintains a speed comparable to implementations based on greedy methods. In evaluations on real data, BFC appears to correct more errors with fewer overcorrections in comparison to existing tools. It particularly does well in suppressing systematic sequencing errors, which helps to improve the base accuracy of de novo assemblies. Availability and implementation: https://github.com/lh3/bfc Contact: hengli@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25953801

  15. Assessing the impact of differential genotyping errors on rare variant tests of association.

    PubMed

    Mayer-Jochimsen, Morgan; Fast, Shannon; Tintle, Nathan L

    2013-01-01

    Genotyping errors are well-known to impact the power and type I error rate in single marker tests of association. Genotyping errors that happen according to the same process in cases and controls are known as non-differential genotyping errors, whereas genotyping errors that occur with different processes in the cases and controls are known as differential genotype errors. For single marker tests, non-differential genotyping errors reduce power, while differential genotyping errors increase the type I error rate. However, little is known about the behavior of the new generation of rare variant tests of association in the presence of genotyping errors. In this manuscript we use a comprehensive simulation study to explore the effects of numerous factors on the type I error rate of rare variant tests of association in the presence of differential genotyping error. We find that increased sample size, decreased minor allele frequency, and an increased number of single nucleotide variants (SNVs) included in the test all increase the type I error rate in the presence of differential genotyping errors. We also find that the greater the relative difference in case-control genotyping error rates the larger the type I error rate. Lastly, as is the case for single marker tests, genotyping errors classifying the common homozygote as the heterozygote inflate the type I error rate significantly more than errors classifying the heterozygote as the common homozygote. In general, our findings are in line with results from single marker tests. To ensure that type I error inflation does not occur when analyzing next-generation sequencing data careful consideration of study design (e.g. use of randomization), caution in meta-analysis and using publicly available controls, and the use of standard quality control metrics is critical.
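The type I error inflation described can be illustrated with a toy burden-style test under the null hypothesis, where the common homozygote is miscalled as the heterozygote at a higher rate in cases than in controls (a simplified sketch, not the manuscript's simulation design):

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def burden_pvalue(cases, controls):
    """Two-sample z-test on mean rare-allele burden (normal approximation)."""
    diff = cases.mean() - controls.mean()
    se = math.sqrt(cases.var(ddof=1) / cases.size
                   + controls.var(ddof=1) / controls.size)
    return math.erfc(abs(diff / se) / math.sqrt(2))   # two-sided p-value

def type1_rate(case_err, control_err, reps=300, n=500, snvs=10, maf=0.01):
    """Fraction of null-model replicates rejected at alpha = 0.05."""
    hits = 0
    for _ in range(reps):
        # Null: identical genotype distributions in cases and controls.
        cases = rng.binomial(2, maf, (n, snvs)).sum(axis=1).astype(float)
        controls = rng.binomial(2, maf, (n, snvs)).sum(axis=1).astype(float)
        # Error model: common homozygote miscalled as heterozygote.
        cases += rng.binomial(snvs, case_err, n)
        controls += rng.binomial(snvs, control_err, n)
        hits += burden_pvalue(cases, controls) < 0.05
    return hits / reps

differential = type1_rate(case_err=0.01, control_err=0.0)
non_differential = type1_rate(case_err=0.01, control_err=0.01)
# Differential error inflates type I error; non-differential stays near 0.05.
```

Even a 1% differential miscall rate swamps a 1% minor allele frequency, which is why the abstract finds the inflation grows with the number of SNVs in the test and with the case-control error-rate gap.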

  16. Goldmann Tonometer Prism with an Optimized Error Correcting Applanation Surface

    PubMed Central

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko; Schwiegerling, Jim

    2016-01-01

    Purpose We evaluate solutions for an applanating surface modification to the Goldmann tonometer prism, which substantially negates the errors due to patient variability in biomechanics. Methods A modified Goldmann or correcting applanation tonometry surface (CATS) prism is presented which was optimized to minimize the intraocular pressure (IOP) error due to corneal thickness, stiffness, curvature, and tear film. Mathematical modeling with finite element analysis (FEA) and manometric IOP referenced cadaver eyes were used to optimize and validate the design. Results Mathematical modeling of the optimized CATS prism indicates an approximate 50% reduction in each of the corneal biomechanical and tear film errors. Manometric IOP referenced pressure in cadaveric eyes demonstrates substantial equivalence to GAT in nominal eyes with the CATS prism as predicted by modeling theory. Conclusion A CATS modified Goldmann prism is theoretically able to significantly improve the accuracy of IOP measurement without changing Goldmann measurement technique or interpretation. Clinical validation is needed but the analysis indicates a reduction in CCT error alone to less than ±2 mm Hg using the CATS prism in 100% of a standard population compared to only 54% less than ±2 mm Hg error with the present Goldmann prism. Translational Relevance This article presents an easily adopted novel approach and critical design parameters to improve the accuracy of a Goldmann applanating tonometer.

  18. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2008-03-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
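For reference, the Peano Kernel Theorem in its standard form, assuming the error functional E annihilates all polynomials of degree below n (the report's frequency-domain derivation is not reproduced here):

```latex
E(f) = \int_a^b K_n(t)\, f^{(n)}(t)\, dt,
\qquad
K_n(t) = \frac{1}{(n-1)!}\, E_x\!\left[(x - t)_+^{\,n-1}\right],
```

where $E_x$ indicates that $E$ acts on the bracketed expression as a function of $x$, and $(\cdot)_+$ denotes the positive part. Bounding $|K_n|$ then bounds the filtering error in terms of $\|f^{(n)}\|$.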

  20. FORCE: FORtran for Cosmic Errors

    NASA Astrophysics Data System (ADS)

    Colombi, Stéphane; Szapudi, István

    We review the theory of cosmic errors we have recently developed for count-in-cells statistics. The corresponding FORCE package provides a simple and useful way to compute cosmic covariance on factorial moments and cumulants measured in galaxy catalogs.
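Count-in-cells factorial moments, the statistics for which FORCE computes cosmic errors, have a compact definition; for an unclustered (Poisson) field they reduce to powers of the mean count, which allows a quick numerical check (an illustrative sketch, unrelated to the FORCE code itself):

```python
import numpy as np

def factorial_moment(counts, k):
    """k-th factorial moment <N(N-1)...(N-k+1)> of counts-in-cells."""
    counts = np.asarray(counts, dtype=float)
    prod = np.ones_like(counts)
    for j in range(k):
        prod *= counts - j
    return float(prod.mean())

# For an unclustered (Poisson) field, F_k = nbar**k in expectation, so the
# second factorial moment of Poisson(3) counts should be close to 9.
rng = np.random.default_rng(4)
counts = rng.poisson(3.0, size=200_000)
f1 = factorial_moment(counts, 1)
f2 = factorial_moment(counts, 2)
```

Departures of F_k from nbar**k measure clustering; the cosmic error theory quantifies the sampling covariance of exactly these estimators in a finite galaxy catalog.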

  1. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
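One way to see how a residual human-error risk enters an uncertainty budget is a toy Monte Carlo in which an undetected error occurs with probability p and adds a random bias, so the combined standard uncertainty is approximately sqrt(u² + p·d²) (an illustrative sketch with assumed numbers, not the authors' expert-judgment procedure):

```python
import numpy as np

rng = np.random.default_rng(5)

u_measure = 0.02    # standard measurement uncertainty (assumed, e.g. pH units)
p_error = 0.05      # residual probability of an undetected human error
d_error = 0.10      # standard deviation of the bias such an error introduces

n = 200_000
results = rng.normal(0.0, u_measure, n)
occurs = rng.random(n) < p_error
results[occurs] += rng.normal(0.0, d_error, int(occurs.sum()))

u_total = results.std()                              # simulated combined uncertainty
u_expected = np.hypot(u_measure, np.sqrt(p_error) * d_error)   # analytic value
```

With these assumed numbers the human-error term inflates the budget noticeably (0.02 to about 0.03) without dominating it, the same qualitative outcome the abstract reports.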

  2. Quantile Regression With Measurement Error

    PubMed Central

    Wei, Ying; Carroll, Raymond J.

    2010-01-01

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802
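The bias being corrected can be demonstrated with a naive median (LAD) regression on an error-prone covariate: classical measurement error attenuates the fitted slope toward zero (a grid-search sketch of the uncorrected fit, not the paper's joint estimating-equation method):

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta = 4000, 1.0

x = rng.normal(0.0, 1.0, n)              # true covariate
w = x + rng.normal(0.0, 1.0, n)          # observed with classical error
y = beta * x + rng.normal(0.0, 0.5, n)

def lad_slope(w, y, grid=np.linspace(0.0, 2.0, 401)):
    """Median (LAD) regression slope by grid search, intercept profiled out."""
    losses = [np.abs(y - (np.median(y - b * w) + b * w)).sum() for b in grid]
    return float(grid[int(np.argmin(losses))])

naive_slope = lad_slope(w, y)
# With equal signal and error variance the slope attenuates to roughly beta/2.
```

The attenuation mirrors the well-known OLS result: correcting it at every quantile simultaneously is what the proposed joint estimating equations accomplish.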

  3. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  4. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  5. Dual processing and diagnostic errors.

    PubMed

    Norman, Geoff

    2009-09-01

In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and that these examples remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reductions in error rates.

  6. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  7. Orbital and Geodetic Error Analysis

    NASA Technical Reports Server (NTRS)

    Felsentreger, T.; Maresca, P.; Estes, R.

    1985-01-01

Results that previously required several runs are now determined in a more computer-efficient manner. Multiple runs are performed only once with GEODYN and stored on tape. ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.

  8. Quantum error correction for state transfer in noisy spin chains

    NASA Astrophysics Data System (ADS)

    Kay, Alastair

    2016-04-01

    Can robustness against experimental imperfections and noise be embedded into a quantum simulation? In this paper, we report on a special case in which this is possible. A spin chain can be engineered such that, in the absence of imperfections and noise, an unknown quantum state is transported from one end of the chain to the other, due only to the intrinsic dynamics of the system. We show that an encoding into a standard error-correcting code (a Calderbank-Shor-Steane code) can be embedded into this simulation task such that a modified error-correction procedure on readout can recover from sufficiently low rates of noise during transport.

  9. THERP and HEART integrated methodology for human error assessment

    NASA Astrophysics Data System (ADS)

    Castiglia, Francesco; Giardina, Mariarosa; Tomarchio, Elio

    2015-11-01

An integrated THERP and HEART methodology is proposed to investigate accident scenarios that involve operator errors during high-dose-rate (HDR) treatments. The new approach has been modified on the basis of fuzzy set concepts with the aim of prioritizing an exhaustive list of erroneous tasks that can lead to patient radiological overexposures. The results allow for the identification of the human errors that must be understood to achieve a better grasp of the health hazards in the radiotherapy treatment process, so that it can be properly monitored and appropriately managed.

  10. Automatic oscillator frequency control system

    NASA Technical Reports Server (NTRS)

    Smith, S. F. (Inventor)

    1985-01-01

    A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.
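The modulo-sum divider described above is essentially a phase accumulator, the core of direct digital synthesis: an on-frequency oscillator drives the accumulator to an exact multiple of the modulus over the check interval, leaving a zero remainder, while any frequency offset shows up as a nonzero residue. A minimal sketch of that principle (illustrative register width and frequencies, not the patented circuit):

```python
ACC_BITS = 24
MODULUS = 1 << ACC_BITS

def remainder_after_interval(freq_index, clock_hz, interval_s):
    """Contents of the modulo-2^24 accumulator after clocking it with
    the oscillator under test for the check interval; the processing
    unit compares this against a stored zero error constant."""
    cycles = round(clock_hz * interval_s)
    return (freq_index * cycles) % MODULUS

def synthesized_freq(freq_index, clock_hz):
    """Average rollover frequency of the accumulator MSB."""
    return freq_index * clock_hz / MODULUS

# An on-frequency 1.024 MHz clock leaves a zero remainder after 1 s...
ok = remainder_after_interval(16384, 1_024_000, 1.0)
# ...while a 1 Hz clock error leaves a residue proportional to the offset
off = remainder_after_interval(16384, 1_024_001, 1.0)
```

The sign and magnitude of the residue are exactly the information the correction word needs to shift the checked oscillator back on frequency.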

  11. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

Two algorithms compute the error covariance of the difference between optimal estimates of the state of a discrete linear system, based on data acquired during overlapping or disjoint intervals. This provides a quantitative measure of the mutual consistency or inconsistency of the state estimates. The relative-error-covariance concept is applied to determine the degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct a real-time test of the consistency of state estimates based upon recently acquired data.

  12. Extended frequency turbofan model

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Park, J. W.; Jaekel, R. F.

    1980-01-01

    The fan model was developed using two dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With verification of the two dimensional fan model, this program formulated a high frequency F-100(3) engine simulation using row by row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.

  13. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or read back, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail and several associated hazard mitigating measures that might be taken are considered.

  14. System-related factors contributing to diagnostic errors.

    PubMed

    Thammasitboon, Satid; Thammasitboon, Supat; Singhal, Geeta

    2013-10-01

    Several studies in primary care, internal medicine, and emergency departments show that rates of errors in test requests and result interpretations are unacceptably high and translate into missed, delayed, or erroneous diagnoses. Ineffective follow-up of diagnostic test results could lead to patient harm if appropriate therapeutic interventions are not delivered in a timely manner. The frequency of system-related factors that contribute directly to diagnostic errors depends on the types and sources of errors involved. Recent studies reveal that the errors and patient harm in the diagnostic testing loop have occurred mainly at the pre- and post-analytic phases, which are directed primarily by clinicians who may have limited expertise in the rapidly expanding field of clinical pathology. These errors may include inappropriate test requests, failure/delay in receiving results, and erroneous interpretation and application of test results to patient care. Efforts to address system-related factors often focus on technical errors in laboratory testing or failures in delivery of intended treatment. System-improvement strategies related to diagnostic errors tend to focus on technical aspects of laboratory medicine or delivery of treatment after completion of the diagnostic process. System failures and cognitive errors, more often than not, coexist and together contribute to the incidents of errors in diagnostic process and in laboratory testing. The use of highly structured hand-off procedures and pre-planned follow-up for any diagnostic test could improve efficiency and reliability of the follow-up process. Many feedback pathways should be established so that providers can learn if or when a diagnosis is changed. Patients can participate in the effort to reduce diagnostic errors. Providers should educate their patients about diagnostic probabilities and uncertainties. The patient-safety strategies focusing on the interface between diagnostic system and therapeutic

  15. Quantifying errors without random sampling

    PubMed Central

    Phillips, Carl V; LaPole, Luwanna M

    2003-01-01

    Background All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. Discussion We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Summary Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research. PMID:12892568
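The Monte Carlo approach the authors describe can be sketched in a few lines: draw each uncertain input from a distribution encoding its (non-sampling) uncertainty, push the draws through the calculation, and read an interval off the resulting distribution. All distributions and numbers below are illustrative, not the paper's foodborne-illness inputs:

```python
import random
import statistics

random.seed(1)

def incidence_draws(n=20_000):
    """One draw = one plausible value of annual incidence, computed from
    inputs whose systematic uncertainty is modeled by distributions."""
    draws = []
    for _ in range(n):
        rate = 0.05 * random.uniform(0.8, 1.2)        # cases/person-year
        population = random.gauss(1.0e6, 2.0e4)       # persons at risk
        underreport = random.lognormvariate(0.0, 0.2) # correction factor
        draws.append(rate * population * underreport)
    return sorted(draws)

draws = incidence_draws()
low = draws[int(0.05 * len(draws))]
mid = statistics.median(draws)
high = draws[int(0.95 * len(draws))]
```

Reporting the (low, high) interval rather than a bare point estimate is exactly the honesty the paper argues for.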

  16. Medical Error and Moral Luck.

    PubMed

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613

  17. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

As the resolution of TV significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV contents. This rising standard in quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality-check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality-control agents.

  18. An experimental evaluation of error seeding as a program validation technique

    NASA Technical Reports Server (NTRS)

    Knight, J. C.; Ammann, P. E.

    1985-01-01

    A previously reported experiment in error seeding as a program validation technique is summarized. The experiment was designed to test the validity of three assumptions on which the alleged effectiveness of error seeding is based. Errors were seeded into 17 functionally identical but independently programmed Pascal programs in such a way as to produce 408 programs, each with one seeded error. Using mean time to failure as a metric, results indicated that it is possible to generate seeded errors that are arbitrarily but not equally difficult to locate. Examination of indigenous errors demonstrated that these are also arbitrarily difficult to locate. These two results support the assumption that seeded and indigenous errors are approximately equally difficult to locate. However, the assumption that, for each type of error, all errors are equally difficult to locate was not borne out. Finally, since a seeded error occasionally corrected an indigenous error, the assumption that errors do not interfere with each other was proven wrong. Error seeding can be made useful by taking these results into account in modifying the underlying model.
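Under the two assumptions the experiment tested (seeded and indigenous errors equally hard to find, and no interference between errors), the classic seeding estimate of the indigenous error count follows by equating detection rates. A hedged sketch of that textbook estimator, not the experiment's code:

```python
def estimated_indigenous_total(seeded_total, seeded_found, indigenous_found):
    """Classic error-seeding estimate: assume the fraction of seeded
    errors found equals the fraction of indigenous errors found."""
    if seeded_found == 0:
        raise ValueError("no seeded errors found; detection rate unknown")
    detection_rate = seeded_found / seeded_total
    return indigenous_found / detection_rate

# Example: 100 errors seeded, 40 of them found, alongside 12 indigenous errors
total = estimated_indigenous_total(100, 40, 12)
remaining = total - 12   # indigenous errors estimated to remain latent
```

The experiment's finding that errors are unequally difficult to locate is precisely why this simple estimate can mislead unless the model is modified as the authors suggest.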

  19. The Study of Prescribing Errors Among General Dentists

    PubMed Central

    Araghi, Solmaz; Sharifi, Rohollah; Ahmadi, Goran; Esfehani, Mahsa; Rezaei, Fatemeh

    2016-01-01

Introduction: In dentistry, medicines are often prescribed to relieve pain and treat infections. Therefore, a wrong prescription can lead to a range of problems, including inadequate pain relief, antimicrobial treatment failure and the development of resistance to antibiotics. Materials and Methods: The aim of this cross-sectional study was to evaluate common errors in prescriptions written by general dentists in Kermanshah in 2014. Dentists received a questionnaire describing five hypothetical patients and were asked to write the appropriate prescription for each patient. Information about age, gender, work experience and mode of university admission was collected. The frequency of errors in the prescriptions was determined. Data were analyzed with the SPSS 20 statistical software using t-tests, chi-square tests and Pearson correlation (P < 0.05). Results: A total of 180 dentists (62.6% male and 37.4% female) with a mean age of 39.199 ± 8.23 years participated in this study. Prescription errors included wrong pharmaceutical form (11%), omission of the therapeutic dose (13%), wrong dose (14%), typographical errors (15%), erroneous prescriptions (23%) and wrong number of drugs (24%). Errors were most frequent in the administration of antiviral drugs (31%), followed by antifungal drugs (30%), analgesics (23%) and antibiotics (16%). Male dentists showed more frequent errors than female dentists (P = 0.046). Error frequency was also significantly related to a long work history (P > 0.001) and to university admission by routes other than the entrance examination (P = 0.041). Conclusion: This study showed that the prescriptions written by the general dentists examined contained significant errors, and improving prescribing through continuing education of dentists is essential.

  20. Error Reduction in Portable, Low-Speed Weigh-In-Motion (Sub-0.1 Percent Error)

    SciTech Connect

    Abercrombie, Robert K; Hively, Lee M; Scudiere, Matthew B; Sheldon, Frederick T

    2008-01-01

We present breakthrough findings based on significant modifications to the Weigh-in-Motion (WIM) Gen II approach, the so-called modified Gen II. The revisions enable slow-speed weight measurements at least as precise as in-ground static scales, which are certified to 0.1% error. Concomitant software and hardware revisions reflect a philosophical and practical change that enables an order-of-magnitude improvement in low-speed weighing precision. This error-reduction breakthrough is presented within the context of the full range of commercial and governmental applications, including the flexibility to extend information and communication technology for future needs.

  1. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

Morelli, Eugene A.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
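Equation-error estimation is ordinary least squares applied to the state equations: the measured state derivative is regressed onto the measured states and controls. A self-contained sketch with a two-parameter pitch-moment model and illustrative (non-F-16) coefficient values:

```python
import random

random.seed(0)

M_ALPHA_TRUE, M_DELTA_TRUE = -4.0, -11.0   # illustrative stability derivatives

# Simulated "flight data": regressors and a noisy equation-error response
# qdot = M_alpha*alpha + M_delta*delta + noise
records = []
for _ in range(500):
    alpha = random.uniform(-0.1, 0.1)    # angle of attack, rad
    delta = random.uniform(-0.2, 0.2)    # elevator deflection, rad
    qdot = M_ALPHA_TRUE * alpha + M_DELTA_TRUE * delta + random.gauss(0, 0.05)
    records.append((alpha, delta, qdot))

# Closed-form 2x2 normal equations (X'X) theta = X'y
saa = sum(a * a for a, d, q in records)
sad = sum(a * d for a, d, q in records)
sdd = sum(d * d for a, d, q in records)
say = sum(a * q for a, d, q in records)
sdy = sum(d * q for a, d, q in records)
det = saa * sdd - sad * sad
m_alpha = (sdd * say - sad * sdy) / det
m_delta = (saa * sdy - sad * say) / det
```

Here the noise sits only in the response; the biases the paper studies arise when the regressors themselves (the measured states) are noisy, which plain least squares does not account for.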

  2. Food frequency questionnaires.

    PubMed

    Pérez Rodrigo, Carmen; Aranceta, Javier; Salvador, Gemma; Varela-Moreiras, Gregorio

    2015-02-26

Food Frequency Questionnaires are dietary assessment tools widely used in epidemiological studies investigating the relationship between dietary intake and disease or risk factors since the early '90s. The three main components of these questionnaires are the list of foods, the frequency of consumption and the portion size consumed. The food list should reflect the food habits of the study population at the time the data are collected. The frequency of consumption may be asked by open-ended questions or by presenting frequency categories. Qualitative Food Frequency Questionnaires do not ask about the consumed portions; semi-quantitative versions include standard portions, and quantitative questionnaires ask respondents to estimate the portion size consumed either in household measures or grams. The latter implies a greater participant burden. Some versions include only close-ended questions in a standardized format, while others add an open section with questions about some specific food habits and practices and admit additions to the food list for foods and beverages consumed which are not included. The method can be self-administered, on paper or web-based, or interview-administered either face-to-face or by telephone. Due to the standard format, especially of closed-ended versions, and the method of administration, FFQs are highly cost-effective, thus encouraging their widespread use in large-scale epidemiological cohort studies and also in other study designs. Coding and processing the collected data is also less costly and requires less nutrition expertise compared to other dietary intake assessment methods. However, the main limitations are systematic errors and biases in the estimates. Important efforts are being made to improve the quality of the information. The use of FFQs in combination with other methods has been recommended, as this enables the required adjustments.

  3. Explaining errors in children's questions.

    PubMed

    Rowland, Caroline F

    2007-07-01

The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  4. Application of the Modified Urey-Bradley-Shimanouchi Force field of α-D-Glucopyranose and β-D-Fructopyranose to Predict the Vibrational Spectra of Disaccharides

    NASA Astrophysics Data System (ADS)

    Gafour, H. M.; Sekkal-Rahal, M.; Sail, K.

    2014-01-01

The vibrational frequencies of the disaccharide isomaltulose in the solid state have been reproduced in the 50-4000 cm-1 range. The modified Urey-Bradley-Shimanouchi force field was used, combined with an intermolecular potential energy function that includes van der Waals interactions, electrostatic terms, and an explicit hydrogen bond function. The force constants previously established for α-D-glucopyranose and β-D-fructopyranose, as well as the crystallographic data of isomaltulose monohydrate, were the starting parameters for the present work. The vibrational frequencies of isomaltulose were calculated and assigned to the experimentally observed vibrational frequencies. Overall, there was good agreement between the observed and calculated frequencies, with an average error of 4 cm-1. Furthermore, good agreement was found between our calculated results and the vibration spectra of other disaccharides and monosaccharides.

  5. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  6. 1.76Tb/s Nyquist PDM 16QAM signal transmission over 714km SSMF with the modified SCFDE technique.

    PubMed

    Zheng, Zhennan; Ding, Rui; Zhang, Fan; Chen, Zhangyuan

    2013-07-29

Nyquist pulse shaping is a promising technique for high-speed optical fiber transmission. We experimentally demonstrate the generation and transmission of a 1.76 Tb/s, polarization-division-multiplexing (PDM) 16 quadrature amplitude modulation (QAM) Nyquist pulse shaping super-channel over 714 km standard single-mode fiber (SSMF) with Erbium-doped fiber amplifier (EDFA)-only amplification. The super-channel consists of 40 subcarriers tightly spaced at 6.25 GHz with a spectral efficiency of 7.06 b/s/Hz. The experiment is successfully enabled with the modified single-carrier frequency-domain estimation and equalization (SCFDE) scheme by performing training-sequence-based channel estimation in the frequency domain and subsequent channel equalization in the time domain. After 714 km transmission, the bit-error rates (BER) of all subcarriers are lower than the forward error correction limit of 3.8 × 10^-3.
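Single-carrier frequency-domain equalization rests on a simple identity: with a cyclic prefix, the channel acts as a circular convolution, so a known training block yields a per-bin channel estimate H[k] = Y[k]/X[k] that can then equalize data blocks. The toy sketch below uses the classic frequency-domain one-tap equalizer (the paper's modification instead moves the equalization step to the time domain); the naive DFT, the Zadoff-Chu training block and the two-tap channel are all illustrative assumptions:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def channel(x, h):
    """Cyclic-prefix transmission modeled as circular convolution."""
    N = len(x)
    return [sum(h[m] * x[(n - m) % N] for m in range(len(h)))
            for n in range(N)]

N = 8
# Zadoff-Chu training block: constant-magnitude spectrum, so no zero bins
train = [cmath.exp(-1j * cmath.pi * n * n / N) for n in range(N)]
h = [0.9, 0.3]   # illustrative two-tap channel

# Per-bin channel estimate from the training block: H[k] = Y[k]/X[k]
H = [y / x for x, y in zip(dft(train), dft(channel(train, h)))]

# One-tap frequency-domain equalization of a BPSK data block
data = [1, 1, -1, 1, -1, -1, 1, -1]
eq = idft([Y / Hk for Y, Hk in zip(dft(channel(data, h)), H)])
recovered = [1 if s.real > 0 else -1 for s in eq]
```

In the noiseless sketch the estimate is exact and the data block is recovered perfectly; real receivers must additionally average the estimate over noise.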

  7. Frequency-Offset Cartesian Feedback for MRI Power Amplifier Linearization

    PubMed Central

    Zanchi, Marta Gaia; Stang, Pascal; Kerr, Adam; Pauly, John Mark; Scott, Greig Cameron

    2011-01-01

    High-quality magnetic resonance imaging (MRI) requires precise control of the transmit radio-frequency field. In parallel excitation applications such as transmit SENSE, high RF power linearity is essential to cancel aliased excitations. In widely-employed class AB power amplifiers, gain compression, cross-over distortion, memory effects, and thermal drift all distort the RF field modulation and can degrade image quality. Cartesian feedback (CF) linearization can mitigate these effects in MRI, if the quadrature mismatch and DC offset imperfections inherent in the architecture can be minimized. In this paper, we present a modified Cartesian feedback technique called “frequency-offset Cartesian feedback” (FOCF) that significantly reduces these problems. In the FOCF architecture, the feedback control is performed at a low intermediate frequency rather than DC, so that quadrature ghosts and DC errors are shifted outside the control bandwidth. FOCF linearization is demonstrated with a variety of typical MRI pulses. Simulation of the magnetization obtained with the Bloch equation demonstrates that high-fidelity RF reproduction can be obtained even with inexpensive class AB amplifiers. Finally, the enhanced RF fidelity of FOCF over CF is demonstrated with actual images obtained in a 1.5 T MRI system. PMID:20959264

  8. Experimental verification of the modified spring-mass theory of fiber Bragg grating accelerometers using transverse forces.

    PubMed

    Li, Kuo; Chan, Tommy H T; Yau, Man Hong; Thambiratnam, David P; Tam, Hwa Yaw

    2014-02-20

    A fiber Bragg grating (FBG) accelerometer using transverse forces is more sensitive than one using axial forces with the same mass of the inertial object, because a barely stretched FBG fixed at its two ends is much more sensitive to transverse forces than axial ones. The spring-mass theory, with the assumption that the axial force changes little during the vibration, cannot accurately predict its sensitivity and resonant frequency in the gravitational direction because the assumption does not hold due to the fact that the FBG is barely prestretched. It was modified but still required experimental verification due to the limitations in the original experiments, such as the (1) friction between the inertial object and shell; (2) errors involved in estimating the time-domain records; (3) limited data; and (4) large interval ~5  Hz between the tested frequencies in the frequency-response experiments. The experiments presented here have verified the modified theory by overcoming those limitations. On the frequency responses, it is observed that the optimal condition for simultaneously achieving high sensitivity and resonant frequency is at the infinitesimal prestretch. On the sensitivity at the same frequency, the experimental sensitivities of the FBG accelerometer with a 5.71 gram inertial object at 6 Hz (1.29, 1.19, 0.88, 0.64, and 0.31  nm/g at the 0.03, 0.69, 1.41, 1.93, and 3.16 nm prestretches, respectively) agree with the static sensitivities predicted (1.25, 1.14, 0.83, 0.61, and 0.29  nm/g, correspondingly). On the resonant frequency, (1) its assumption that the resonant frequencies in the forced and free vibrations are similar is experimentally verified; (2) its dependence on the distance between the FBG's fixed ends is examined, showing it to be independent; (3) the predictions of the spring-mass theory and modified theory are compared with the experimental results, showing that the modified theory predicts more accurately. The modified theory

  9. Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Niu, Qunjie; Liang, Kun

    2016-09-01

Brillouin lidar systems using a Fabry-Pérot (F-P) etalon and an Intensified Charge Coupled Device (ICCD) are capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major errors, namely the laser frequency instability, the calibration error of the F-P etalon and the random shot noise, are discussed. Theoretical analysis combined with simulation results showed that the laser and the F-P etalon cause about 4 MHz of error in both the Brillouin shift and linewidth, and that random noise brings more error to the linewidth than to the frequency shift. A comprehensive and comparative analysis of the overall errors under various conditions proved that a colder ocean (10 °C) is more accurately measured with the Brillouin linewidth, and a warmer ocean (30 °C) is better measured with the Brillouin shift.

  10. Investigation of error compensation in CGH-based form testing of aspheres

    NASA Astrophysics Data System (ADS)

    Stuerwald, S.; Brill, N.; Schmitt, R.

    2014-05-01

Interferometric form testing using computer generated holograms (CGHs) is one of the main full-field measurement techniques, and various modified measurement setups for optical form testing interferometry have been presented. Currently, typical form deviations of several tens of nanometers occur in widely used CGH-based interferometric form testing. These deviations arise from imperfect alignment of the CGH relative to the transmission sphere (Fizeau objective) and of the asphere relative to the test wavefront. Measurement results are therefore user and setup dependent, which leads to unsatisfactory reproducibility of the measured form errors. Aligning a CGH usually requires an operator to minimize the spatial frequency of the fringe pattern; however, the ideal position often cannot be found with sufficient accuracy because the position of minimum spatial fringe density is usually not unique. The scientific and technical objectives of this paper therefore comprise the development of a simulation-based approach to explain and quantify the experimental errors due to misalignment of the specimen relative to the CGH in an optical form testing measurement system. A further step is the programming of an iterative method that realizes a virtual optimized realignment of the system on the basis of Zernike polynomial decomposition, which allows calculation of the form that would be measured under ideal alignment and thus subtraction of the alignment-induced form error. Different analysis approaches are investigated with regard to the final accuracy and reproducibility. To validate the theoretical models, a series of systematic experiments is performed with hexapod positioning systems, allowing exact and reproducible positioning of the CGH-based optical setup.

  11. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  12. Synthetic aperture interferometry: error analysis

    SciTech Connect

    Biswas, Amiya; Coupland, Jeremy

    2010-07-10

Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003)]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008)]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  13. Orbit IMU alignment: Error analysis

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
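The two quoted numbers can be cross-checked with a small Monte Carlo, assuming independent zero-mean Gaussian errors of 68 arc seconds on each of the three axes (an assumption the abstract does not spell out):

```python
import math
import random

def fraction_within(sigma_per_axis: float, bound: float, n: int, seed: int = 0) -> float:
    """Fraction of simulated 3-axis alignment errors whose magnitude is below `bound`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        err = math.sqrt(sum(rng.gauss(0.0, sigma_per_axis) ** 2 for _ in range(3)))
        if err < bound:
            hits += 1
    return hits / n

# sigma = 68 arcsec per axis, bound = 258 arcsec, as quoted in the abstract.
p = fraction_within(sigma_per_axis=68.0, bound=258.0, n=100_000)
print(f"P(|error| < 258 arcsec) ~ {p:.4f}")   # close to the quoted 99.7%
```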

  14. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  15. Reward positivity: Reward prediction error or salience prediction error?

    PubMed

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. PMID:27184070

  17. 20 Tips to Help Prevent Medical Errors

    MedlinePlus

20 Tips to Help Prevent Medical Errors: Patient Fact Sheet. Available as a PDF download (295 KB). Medical errors can occur anywhere in the health care ...

  18. Analysis of Medication Error Reports

    SciTech Connect

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARX(SM) reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  19. Influence of Tooth Spacing Error on Gears With and Without Profile Modifications

    NASA Technical Reports Server (NTRS)

    Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.

    2000-01-01

A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modifications considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors, but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case, where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving the dynamic performance of gear sets with different tooth spacing errors.
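The two relief shapes compared in the study can be sketched as simple ramp functions of roll angle; the start/tip angles and relief magnitude below are hypothetical, not values from the paper:

```python
def tip_relief(roll: float, start: float, tip: float, max_relief: float,
               shape: str = "linear") -> float:
    """Material removed at roll angle `roll`, ramping from `start` to the tooth tip.

    shape='linear' ramps proportionally; 'parabolic' ramps quadratically,
    removing less material near the start of relief.
    """
    if roll <= start:
        return 0.0
    x = (roll - start) / (tip - start)   # 0 at relief start, 1 at the tooth tip
    return max_relief * (x if shape == "linear" else x ** 2)

# Hypothetical profile: relief starts at 20 deg roll, tip at 30 deg, 0.02 mm max.
for r in (20.0, 25.0, 30.0):
    lin = tip_relief(r, 20.0, 30.0, 0.02, "linear")
    par = tip_relief(r, 20.0, 30.0, 0.02, "parabolic")
    print(f"roll {r:4.1f} deg: linear {lin:.4f} mm, parabolic {par:.4f} mm")
```

Both shapes remove the same amount at the tip; the parabolic ramp is gentler near the start of relief, which is consistent with it tolerating larger spacing errors.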

  20. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
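As a sketch of the basic mechanism surveyed here, the following toy stop-and-wait ARQ pairs CRC-32 error detection with retransmission. The frame format, single-bit-flip channel model, and all names are illustrative assumptions, not details from the survey:

```python
import random
import zlib

def send(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiver can detect corrupted frames."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def corrupt(frame: bytes, p: float, rng: random.Random) -> bytes:
    """Flip one bit with probability p to model channel noise."""
    if rng.random() < p:
        i = rng.randrange(len(frame) * 8)
        b = bytearray(frame)
        b[i // 8] ^= 1 << (i % 8)
        return bytes(b)
    return frame

def receive(frame: bytes):
    """Return the payload if the CRC checks out, else None (triggering a repeat)."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

def stop_and_wait(payload: bytes, p_err: float, seed: int = 1):
    """Retransmit until the receiver accepts the frame; return (payload, attempts)."""
    rng = random.Random(seed)
    attempts = 0
    while True:
        attempts += 1
        got = receive(corrupt(send(payload), p_err, rng))
        if got is not None:
            return got, attempts

data, tries = stop_and_wait(b"telemetry block", p_err=0.5)
print(f"delivered after {tries} attempt(s)")
```

Because CRC-32 detects all single-bit errors, every corrupted frame in this model is caught and repeated, which is the "virtually error-free transmission" property the abstract refers to.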

  1. The Relation of Spelling Errors to Cognitive Variables and Word Type

    ERIC Educational Resources Information Center

    Goyen, J. D.; Martin, M.

    1977-01-01

    Attempts to relate the spelling errors of secondary school students to visual and auditory sequential memory, intelligence, reading, and writing speed. The relation of spelling ability to the frequency and regularity of words is also examined. (Author/RK)

  2. Instantaneous microwave frequency measurement using four-wave mixing in a chalcogenide chip

    NASA Astrophysics Data System (ADS)

    Pagani, Mattia; Vu, Khu; Choi, Duk-Yong; Madden, Steve J.; Eggleton, Benjamin J.; Marpaung, David

    2016-08-01

    We present the first instantaneous frequency measurement (IFM) system using four-wave mixing (FWM) in a compact photonic chip. We exploit the high nonlinearity of chalcogenide to achieve efficient FWM in a short 23 mm As2S3 waveguide. This reduces the measurement latency by orders of magnitude, compared to fiber-based approaches. We demonstrate the tuning of the system response to maximize measurement bandwidth (40 GHz, limited by the equipment used), or accuracy (740 MHz rms error). Additionally, we modify the previous FWM-based IFM system structure to allow for ultra-fast reconfiguration of the bandwidth and resolution of the measurement. This has the potential to become the first IFM system capable of ultra-fast accurate frequency measurement, with no compromise of bandwidth.

  3. Sampling data for OSSEs. [simulating errors for WINDSAT Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross

    1988-01-01

For the sake of realism, an OSSE should incorporate at least some of the high-frequency, small-scale phenomena that are suppressed by atmospheric models; these phenomena should be present in the realistic atmosphere sampled by all observing sensor systems whose data are being used. Errors are presently generated for an OSSE in a way that encompasses representational errors, sampling, geophysical local bias, random error, and sensor filtering.

  4. Improved spectral vector error diffusion by dot gain compensation

    NASA Astrophysics Data System (ADS)

    Nyström, Daniel; Norberg, Ole

    2013-02-01

Spectral Vector Error Diffusion (sVED) is an interesting approach to achieve spectral color reproduction, i.e. reproducing the spectral reflectance of an original, creating a reproduction that will match under any illumination. For each pixel in the spectral image, the colorant combination producing the spectrum closest to the target spectrum is selected, and the spectral error is diffused to surrounding pixels using an error distribution filter. However, since the colorant separation and halftoning are performed in a single step in sVED, compensation for dot gain cannot be made for each color channel independently, as in a conventional workflow where the colorant separation and halftoning are performed sequentially. In this study, we modify the sVED routine to compensate for the dot gain, applying the Yule-Nielsen n-factor to modify the target spectra, i.e. performing the computations in (1/n)-space. A global n-factor, optimal for each print resolution, reduces the spectral reproduction errors by approximately a factor of 4, while an n-factor that is individually optimized for each target spectrum reduces the spectral reproduction error to 7% of that for the unmodified prints. However, the improvements when using global n-values are still not sufficient for the method to be of any real use in practice, and to individually optimize the n-values for each target is not feasible in a real workflow. The results illustrate the necessity to properly account for the dot gain in the printing process, and that further development is needed in order to make Spectral Vector Error Diffusion a realistic alternative for spectral color reproduction.
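The dot-gain compensation described above, comparing spectra in Yule-Nielsen (1/n)-space before selecting a colorant combination, can be sketched as follows. The three-band "spectra" and the n value are toy assumptions for illustration only:

```python
def yn_transform(reflectance, n):
    """Map spectral reflectance into Yule-Nielsen (1/n)-space."""
    return [r ** (1.0 / n) for r in reflectance]

def best_colorant(target, candidates, n):
    """Index of the candidate spectrum closest to the target, compared in (1/n)-space."""
    t = yn_transform(target, n)
    def dist(c):
        ct = yn_transform(c, n)
        return sum((a - b) ** 2 for a, b in zip(t, ct))
    return min(range(len(candidates)), key=lambda i: dist(candidates[i]))

# Toy 3-band "spectra" for two hypothetical colorant combinations.
target = [0.30, 0.50, 0.70]
candidates = [[0.28, 0.52, 0.69], [0.60, 0.20, 0.40]]
idx = best_colorant(target, candidates, n=2.0)
print(f"selected colorant combination: {idx}")
```

In a full sVED loop, the residual (1/n)-space error for the chosen combination would then be diffused to neighboring pixels with an error distribution filter.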

  5. Management of human error by design

    NASA Technical Reports Server (NTRS)

    Wiener, Earl

    1988-01-01

    Design-induced errors and error prevention as well as the concept of lines of defense against human error are discussed. The concept of human error prevention, whose main focus has been on hardware, is extended to other features of the human-machine interface vulnerable to design-induced errors. In particular, it is pointed out that human factors and human error prevention should be part of the process of transport certification. Also, the concept of error tolerant systems is considered as a last line of defense against error.

  6. Reducing medical errors and adverse events.

    PubMed

    Pham, Julius Cuong; Aswani, Monica S; Rosen, Michael; Lee, HeeWon; Huddle, Matthew; Weeks, Kristina; Pronovost, Peter J

    2012-01-01

    Medical errors account for ∼98,000 deaths per year in the United States. They increase disability and costs and decrease confidence in the health care system. We review several important types of medical errors and adverse events. We discuss medication errors, healthcare-acquired infections, falls, handoff errors, diagnostic errors, and surgical errors. We describe the impact of these errors, review causes and contributing factors, and provide an overview of strategies to reduce these events. We also discuss teamwork/safety culture, an important aspect in reducing medical errors.

  7. Quantum Metrology Enhanced by Repetitive Quantum Error Correction.

    PubMed

    Unden, Thomas; Balasubramanian, Priya; Louzon, Daniel; Vinkler, Yuval; Plenio, Martin B; Markham, Matthew; Twitchen, Daniel; Stacey, Alastair; Lovchinsky, Igor; Sushkov, Alexander O; Lukin, Mikhail D; Retzker, Alex; Naydenov, Boris; McGuinness, Liam P; Jelezko, Fedor

    2016-06-10

    We experimentally demonstrate the protection of a room-temperature hybrid spin register against environmental decoherence by performing repeated quantum error correction whilst maintaining sensitivity to signal fields. We use a long-lived nuclear spin to correct multiple phase errors on a sensitive electron spin in diamond and realize magnetic field sensing beyond the time scales set by natural decoherence. The universal extension of sensing time, robust to noise at any frequency, demonstrates the definitive advantage entangled multiqubit systems provide for quantum sensing and offers an important complement to quantum control techniques. PMID:27341218

  9. Error detection and correction unit with built-in self-test capability for spacecraft applications

    NASA Technical Reports Server (NTRS)

    Timoc, Constantin

    1990-01-01

    The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
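A modified Hamming code of the SEC-DED kind used here can be illustrated on 4 data bits: Hamming(7,4) plus an overall parity bit corrects any single-bit error and detects any double-bit error. This is the generic textbook construction, not the EDAC's actual 32-bit layout:

```python
def hamming_encode(nibble):
    """Encode 4 data bits as Hamming(7,4) plus an overall parity bit (SEC-DED)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    code = [0] * 8                        # index 1..7: Hamming code; index 0: overall parity
    code[3], code[5], code[6], code[7] = d
    code[1] = code[3] ^ code[5] ^ code[7]  # parity over positions 1,3,5,7
    code[2] = code[3] ^ code[6] ^ code[7]  # parity over positions 2,3,6,7
    code[4] = code[5] ^ code[6] ^ code[7]  # parity over positions 4,5,6,7
    code[0] = sum(code[1:]) % 2            # overall parity over bits 1..7
    return code

def hamming_decode(code):
    """Return (nibble, status): status is 'ok', 'corrected', or 'double-error'."""
    code = list(code)
    syndrome = 0
    for pos in range(1, 8):                # XOR of positions holding a 1 bit
        if code[pos]:
            syndrome ^= pos
    parity_ok = (sum(code[1:]) % 2) == code[0]
    if syndrome and parity_ok:
        return None, "double-error"        # detectable but not correctable
    status = "ok"
    if syndrome:                           # single error: syndrome is its position
        code[syndrome] ^= 1
        status = "corrected"
    elif not parity_ok:
        status = "corrected"               # error hit the overall parity bit only
    nibble = code[3] | (code[5] << 1) | (code[6] << 2) | (code[7] << 3)
    return nibble, status
```

An exhaustive check over all 16 nibbles, all single-bit flips, and all double-bit flips confirms the SEC-DED property, mirroring the "exhaustive set of test cases" run on the breadboard.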

  10. He's Frequency Formulation for Nonlinear Oscillators

    ERIC Educational Resources Information Center

    Geng, Lei; Cai, Xu-Chu

    2007-01-01

    Based on an ancient Chinese algorithm, J H He suggested a simple but effective method to find the frequency of a nonlinear oscillator. In this paper, a modified version is suggested to improve the accuracy of the frequency; two examples are given, revealing that the obtained solutions are of remarkable accuracy and are valid for the whole solution…

  11. Investigating the Relationship between Conceptual and Procedural Errors in the Domain of Probability Problem-Solving.

    ERIC Educational Resources Information Center

    O'Connell, Ann Aileen

    The relationships among types of errors observed during probability problem solving were studied. Subjects were 50 graduate students in an introductory probability and statistics course. Errors were classified as text comprehension, conceptual, procedural, and arithmetic. Canonical correlation analysis was conducted on the frequencies of specific…

  12. Administration and Scoring Errors of Graduate Students Learning the WISC-IV: Issues and Controversies

    ERIC Educational Resources Information Center

    Mrazik, Martin; Janzen, Troy M.; Dombrowski, Stefan C.; Barford, Sean W.; Krawchuk, Lindsey L.

    2012-01-01

    A total of 19 graduate students enrolled in a graduate course conducted 6 consecutive administrations of the Wechsler Intelligence Scale for Children, 4th edition (WISC-IV, Canadian version). Test protocols were examined to obtain data describing the frequency of examiner errors, including administration and scoring errors. Results identified 511…

  13. Flood frequency in Alaska

    USGS Publications Warehouse

    Childers, J.M.

    1970-01-01

Records of peak discharge at 183 sites were used to study flood frequency in Alaska. The vast size of Alaska, its great ranges of physiography, and the lack of data for much of the State precluded a comprehensive analysis of all flood determinants. Peak stream discharges, where gaging-station records were available, were analyzed for 2-year, 5-year, 10-year, 25-year, and 50-year average-recurrence intervals. A regional analysis of the flood characteristics by multiple-regression methods gave a set of equations that can be used to estimate floods of selected recurrence intervals up to 50 years for any site on any stream in Alaska. The equations relate floods to drainage-basin characteristics. The study indicates that in Alaska the 50-year flood can be estimated from 10-year gaging-station records with a standard error of 22 percent whereas the 50-year flood can be estimated from the regression equation with a standard error of 53 percent. Also, maximum known floods at more than 500 gaging stations and miscellaneous sites in Alaska were related to drainage-area size. An envelope curve of 500 cubic feet per second per square mile covered all but 2 floods in the State.
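The regression approach described, relating floods to drainage-basin characteristics, can be sketched as a log-log least-squares fit of a power law Q = a·A^b; the basin data below are hypothetical, not from the report:

```python
import math

def fit_power_law(areas, flows):
    """Least-squares fit of Q = a * A**b, performed in log-log space."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(q) for q in flows]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

# Hypothetical gaged basins: drainage area (mi^2) vs. 50-year peak flow (cfs).
areas = [10.0, 50.0, 200.0, 1000.0]
flows = [900.0, 3500.0, 11000.0, 42000.0]
a, b = fit_power_law(areas, flows)
print(f"Q50 ~ {a:.0f} * A^{b:.2f}")
```

Real flood-frequency equations of this kind typically include further basin characteristics (precipitation, slope, storage) as additional regressors.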

  14. Frequency-domain Green's functions for radar waves in heterogeneous 2.5D media

    USGS Publications Warehouse

    Ellefsen, K.J.; Croize, D.; Mazzella, A.T.; McKenna, J.R.

    2009-01-01

Green's functions for radar waves propagating in heterogeneous 2.5D media might be calculated in the frequency domain using a hybrid method. The model is defined in the Cartesian coordinate system, and its electromagnetic properties might vary in the x- and z-directions, but not in the y-direction. Wave propagation in the x- and z-directions is simulated with the finite-difference method, and wave propagation in the y-direction is simulated with an analytic function. The absorbing boundaries on the finite-difference grid are perfectly matched layers that have been modified to make them compatible with the hybrid method. The accuracy of these numerical Green's functions is assessed by comparing them with independently calculated Green's functions. For a homogeneous model, the magnitude errors range from -4.16% through 0.44%, and the phase errors range from -0.06% through 4.86%. For a layered model, the magnitude errors range from -2.60% through 2.06%, and the phase errors range from -0.49% through 2.73%. These numerical Green's functions might be used for forward modeling and full waveform inversion. © 2009 Society of Exploration Geophysicists. All rights reserved.

  15. Meteor radar signal processing and error analysis

    NASA Astrophysics Data System (ADS)

    Kang, Chunmei

Meteor wind radar systems are a powerful tool for the study of the horizontal wind field in the mesosphere and lower thermosphere (MLT). While such systems have been operated for many years, virtually no literature has focused on radar system error analysis. The instrumental error may prevent scientists from getting correct conclusions on geophysical variability. The radar system instrumental error comes from different sources, including hardware, software, and algorithms. Radar signal processing plays an important role in a radar system, and advanced signal processing algorithms may dramatically reduce the radar system errors. In this dissertation, radar system error propagation is analyzed and several advanced signal processing algorithms are proposed to optimize the performance of the radar system without increasing the instrument costs. The first part of this dissertation is the development of a time-frequency waveform detector, which is invariant to noise level and stable over a wide range of decay rates. This detector is proposed to discriminate the underdense meteor echoes from the background white Gaussian noise. The performance of this detector is examined using Monte Carlo simulations. The resulting probability of detection is shown to outperform the often used power and energy detectors for the same probability of false alarm. Secondly, estimators to determine the Doppler shift, the decay rate, and the direction of arrival (DOA) of meteors are proposed and evaluated. The performance of these estimators is compared with the analytically derived Cramer-Rao bound (CRB). The results show that the fast maximum likelihood (FML) estimator for determination of the Doppler shift and decay rate and the spatial spectral method for determination of the DOAs perform best among the estimators commonly used on other radar systems. For most cases, the mean square error (MSE) of the estimator meets the CRB above a 10 dB SNR. Thus meteor echoes with an estimated SNR below 10 dB are

  16. Similarities between the target and the intruder in naturally occurring repeated person naming errors

    PubMed Central

    Brédart, Serge; Dardenne, Benoit

    2015-01-01

The present study investigated an intriguing phenomenon that has received little attention so far: repeatedly calling a familiar person by someone else’s name. From participants’ responses to a questionnaire, these repeated naming errors were characterized with respect to a number of properties (e.g., type of names being substituted, error frequency, error longevity) and different features of similarity (e.g., age, gender, type of relationship with the participant, face resemblance, and similarity of the contexts of encounter) between the bearer of the target name and the bearer of the wrong name. Moreover, it was evaluated whether the phonological similarity between names, the participants’ age, the difference in age between the two persons whose names were substituted, and face resemblance between the two persons predicted the frequency of error. Regression analyses indicated that phonological similarity between the target name and the wrong name predicted the frequency of repeated person naming errors. The age of the participant was also a significant predictor of error frequency: the older the participant, the higher the frequency of errors. Consistent with previous research stressing the importance of the age of acquisition of words on lexical access in speech production, results indicated that the bearer of the wrong name had, on average, been known for longer than the bearer of the target name.

  17. Errors in airborne flux measurements

    NASA Astrophysics Data System (ADS)

    Mann, Jakob; Lenschow, Donald H.

    1994-07-01

    We present a general approach for estimating systematic and random errors in eddy correlation fluxes and flux gradients measured by aircraft in the convective boundary layer as a function of the length of the flight leg, or of the cutoff wavelength of a highpass filter. The estimates are obtained from empirical expressions for various length scales in the convective boundary layer and they are experimentally verified using data from the First ISLSCP (International Satellite Land Surface Climatology Experiment) Field Experiment (FIFE), the Air Mass Transformation Experiment (AMTEX), and the Electra Radome Experiment (ELDOME). We show that the systematic flux and flux gradient errors can be important if fluxes are calculated from a set of several short flight legs or if the vertical velocity and scalar time series are high-pass filtered. While the systematic error of the flux is usually negative, that of the flux gradient can change sign. For example, for temperature flux divergence the systematic error changes from negative to positive about a quarter of the way up in the convective boundary layer.

  18. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  19. Measurement error in geometric morphometrics.

    PubMed

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics-a set of methods for the statistical analysis of shape once saluted as a revolutionary advancement in the analysis of morphology -is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically-meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.
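The standard quantification of random measurement error in this literature can be sketched as a one-way variance-component analysis over repeated digitizations; the data below are simulated, and the repeatability formula is the usual intraclass correlation, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated study: 30 specimens, each digitized twice (placeholder data).
n_spec, n_rep = 30, 2
true = rng.normal(10.0, 1.0, n_spec)                          # biological variation
meas = true[:, None] + rng.normal(0.0, 0.3, (n_spec, n_rep))  # + measurement error

# One-way (Model II) ANOVA variance components.
grand = meas.mean()
ms_among = n_rep * ((meas.mean(axis=1) - grand) ** 2).sum() / (n_spec - 1)
ms_within = ((meas - meas.mean(axis=1, keepdims=True)) ** 2).sum() / (n_spec * (n_rep - 1))
s2_among = (ms_among - ms_within) / n_rep

repeatability = s2_among / (s2_among + ms_within)   # intraclass correlation
print(f"repeatability R = {repeatability:.2f}")     # close to 1 => little error
```

A low R warns that measurement error is eating into the "explained" variance available to downstream analyses, which is exactly the loss of power the abstract describes.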

  20. The Errors of Our Ways

    ERIC Educational Resources Information Center

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  1. Typical errors of ESP users

    NASA Astrophysics Data System (ADS)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of errors made by ESP (English for specific purposes) users that can be considered typical. They occur as a result of misuse of the resources of English grammar and tend to resist correction. Their origin and places of occurrence are also discussed.

  2. Amplify Errors to Minimize Them

    ERIC Educational Resources Information Center

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  3. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  4. Error Patterns of Bilingual Readers.

    ERIC Educational Resources Information Center

    Gonzalez, Phillip C.; Elijah, David V.

    1979-01-01

    In a study of developmental reading behaviors, errors of 75 Spanish-English bilingual students (grades 2-9) on the McLeod GAP Comprehension Test were categorized in an attempt to ascertain a pattern of language difficulties. Contrary to previous research, bilingual readers minimally used native language cues in reading second language materials.…

  5. What Is a Reading Error?

    ERIC Educational Resources Information Center

    Labov, William; Baker, Bettina

    2010-01-01

    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  6. Having Fun with Error Analysis

    ERIC Educational Resources Information Center

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…
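The game's winning rule is easy to script; the group names, counts, and uncertainties below are made up for illustration.

```python
# Hypothetical entries: (group, estimate, uncertainty); the true count is
# tallied by the instructor after the game.
entries = [("A", 55, 10), ("B", 58, 3), ("C", 62, 1)]
actual = 57

# A group "brackets" the truth if actual lies in [estimate - u, estimate + u];
# among the bracketing groups, the smallest uncertainty wins.
bracketing = [(g, e, u) for g, e, u in entries if e - u <= actual <= e + u]
winner = min(bracketing, key=lambda t: t[2]) if bracketing else None
print(winner)   # -> ('B', 58, 3): group C's tight bound misses, so B wins
```

The rule rewards honest uncertainty estimates: a too-small uncertainty risks missing the true count entirely, a too-large one cannot win.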

  7. Input/output error analyzer

    NASA Technical Reports Server (NTRS)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. This independent assembly-language utility program is designed to operate under level 27 or 31 of the EXEC 8 Operating System. It scans user-selected portions of the system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  8. A brief history of error.

    PubMed

    Murray, Andrew W

    2011-10-01

    The spindle checkpoint monitors chromosome alignment on the mitotic and meiotic spindle. When the checkpoint detects errors, it arrests progress of the cell cycle while it attempts to correct the mistakes. This perspective will present a brief history summarizing what we know about the checkpoint, and a list of questions we must answer before we understand it. PMID:21968991

  9. Error sources affecting thermocouple thermometry in RF electromagnetic fields.

    PubMed

    Chakraborty, D P; Brezovich, I A

    1982-03-01

    Thermocouple thermometry errors in radiofrequency (typically 13.56 MHz) electromagnetic fields, such as are encountered in hyperthermia, are described. RF currents capacitively or inductively coupled into the thermocouple-detector circuit produce errors which are a combination of interference, i.e., 'pick-up' error, and genuine RF-induced temperature changes at the junction of the thermocouple. The former can be eliminated by adequate filtering and shielding; the latter is due to (a) junction current heating, in which the generally unequal resistances of the thermocouple wires cause a net current flow from the higher- to the lower-resistance wire across the junction, (b) heating in the surrounding resistive material (tissue in hyperthermia), and (c) eddy-current heating of the thermocouple wires in the oscillating magnetic field. Low-frequency theories are used to estimate these errors under given operating conditions, and relevant experiments demonstrating these effects and the precautions necessary to minimize the errors are described. It is shown that at 13.56 MHz and voltage levels below 100 V rms these errors do not exceed 0.1 degrees C if the precautions are observed and thermocouples with adequate insulation (e.g., Bailey IT-18) are used. The results of this study are currently being used in our clinical work with good success.

  10. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  11. A method of minimizing the frequency stabilization sensitivity for four frequency differential laser gyro

    NASA Astrophysics Data System (ADS)

    Yang, Jianqiang; Zhu, Yong; Luo, Yun; Jiang, Tian

    2010-10-01

    The frequency stabilization error is an important error source limiting the precision of the four-frequency differential ring laser gyro (DILAG) in navigation applications. Unlike the traditional approach, which mainly relies on the design of frequency stabilization circuits, this paper introduces a new method to solve the problem. The method essentially minimizes the frequency stabilization sensitivity of the DILAG by applying an external longitudinal magnetic field to its gain region. By adjusting the magnetic field so that the frequency splitting equals the Faraday splitting, the minimum frequency stabilization sensitivity of the DILAG can be achieved. The physical mechanism and mathematical model of the method are analyzed and established, and concrete steps to realize the method are given in detail. Experimental results have verified its validity and shown that it can decrease the startup drift. Hence, this new method improves the performance of the DILAG, which will be helpful in navigation applications.

  12. A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications

    SciTech Connect

    Bronevetsky, G; de Supinski, B; Schulz, M

    2009-02-13

    Understanding the soft error vulnerability of supercomputer applications is critical as these systems use ever larger numbers of devices that have decreasing feature sizes and, thus, an increasing frequency of soft errors. As many large-scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.
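The injection experiment described above can be sketched generically: flip one bit of an input element, re-run a linear-algebra routine, and measure how far the output moves. NumPy's matmul stands in for a BLAS GEMM call here; the matrix sizes and the chosen bit are arbitrary illustrations, not the paper's protocol.

```python
import struct

import numpy as np

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its IEEE-754 float64 pattern flipped."""
    (u,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", u ^ (1 << bit)))
    return y

rng = np.random.default_rng(2)
A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))
ref = A @ B                        # fault-free reference output

# Inject a single upset into one input element, then re-run the routine.
A_bad = A.copy()
A_bad[10, 20] = flip_bit(A_bad[10, 20], 52)   # lowest exponent bit
out = A_bad @ B

rel_err = np.abs(out - ref).max() / np.abs(ref).max()
print(f"max relative output error from one flipped bit: {rel_err:.2e}")
```

Only row 10 of the product is affected in this case; such structured propagation patterns are what the routine-level error profiles capture.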

  13. Ac-dc converter firing error detection

    SciTech Connect

    Gould, O.L.

    1996-07-15

    Each of the twelve Booster Main Magnet Power Supply modules consists of two three-phase, full-wave rectifier bridges in series providing a 560 VDC maximum output. The harmonic contents of the twelve-pulse ac-dc converter output are multiples of the 60 Hz ac power input, with a predominant 720 Hz component more than 14 dB above the closest harmonic components at maximum output. The 720 Hz harmonic is typically more than 20 dB below the 500 VDC output signal under normal operation. Extracting specific harmonics from the rectifier output signal of a 6-, 12-, or 24-pulse ac-dc converter allows the detection of SCR firing-angle errors or complete misfires. A bandpass filter provides the input signal to a frequency-to-voltage converter. Comparing the output of the frequency-to-voltage converter to a reference voltage level provides an indication of the magnitude of the harmonics in the ac-dc converter output signal.
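The harmonic-extraction step can be sketched in software with a single-bin DFT (the Goertzel recurrence) in place of the analog bandpass filter and frequency-to-voltage converter; the signal model, ripple amplitudes, and trip threshold below are assumptions for illustration only.

```python
import math

def goertzel(samples, fs, f):
    """Approximate amplitude of one DFT bin at frequency f (Hz)."""
    n = len(samples)
    k = round(n * f / fs)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    power = s1 * s1 + s2 * s2 - coeff * s1 * s2
    return math.sqrt(max(power, 0.0)) * 2.0 / n

fs, dur = 14400, 0.5     # sample rate and window (assumed)
n = int(fs * dur)
# Assumed 12-pulse rectifier output: 500 V DC plus a small 720 Hz ripple.
dc, ripple = 500.0, 5.0
normal = [dc + ripple * math.sin(2*math.pi*720*i/fs) for i in range(n)]
# A firing-angle error enlarges the 720 Hz component and adds lower harmonics.
faulty = [dc + 6*ripple * math.sin(2*math.pi*720*i/fs)
          + 10.0 * math.sin(2*math.pi*360*i/fs) for i in range(n)]

threshold = 3 * ripple   # trip level (assumed)
for name, sig in [("normal", normal), ("faulty", faulty)]:
    amp = goertzel(sig, fs, 720.0)
    print(f"{name}: 720 Hz amplitude = {amp:.1f} V, alarm={amp > threshold}")
```

Because the window spans an integer number of 60 Hz periods, the DC level and the other harmonics do not leak into the 720 Hz bin.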

  14. Phase measurement error in summation of electron holography series.

    PubMed

    McLeod, Robert A; Bergen, Michael; Malac, Marek

    2014-06-01

    Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double-biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and a Brownian random walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs from the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions.
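The drift model (linear drift plus a Brownian random walk) and the benefit of registering each hologram before summation can be illustrated with a toy phase series; all noise levels below are invented, and the drift is assumed perfectly measurable for the "registered" case.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 400                                       # holograms in the series
true_phase = 0.30                             # object phase to recover (rad)

# Assumed drift model: linear term plus a Brownian random walk (rad).
drift = 0.002 * np.arange(N) + np.cumsum(rng.normal(0.0, 0.01, N))
shot = rng.normal(0.0, 0.25, N)               # per-frame statistical phase noise
measured = true_phase + drift + shot

# Naive summation leaves the drift in; registering each frame (subtracting the
# drift, assumed perfectly measured here) leaves only the statistical noise,
# which averages down roughly as 1/sqrt(N).
naive_err = abs(measured.mean() - true_phase)
registered_err = abs((measured - drift).mean() - true_phase)
print(f"naive: {naive_err:.3f} rad, registered: {registered_err:.4f} rad")
```

In practice the drift must itself be estimated per frame, so the registered error floor sits above the ideal 1/sqrt(N) line, which is what the paper's summation-error model quantifies.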

  15. High-precision coseismic displacement estimation with a single-frequency GPS receiver

    NASA Astrophysics Data System (ADS)

    Guo, Bofeng; Zhang, Xiaohong; Ren, Xiaodong; Li, Xingxing

    2015-07-01

    To improve the performance of Global Positioning System (GPS) in the earthquake/tsunami early warning and rapid response applications, minimizing the blind zone and increasing the stability and accuracy of both the rapid source and rupture inversion, the density of existing GPS networks must be increased in the areas at risk. For economic reasons, low-cost single-frequency receivers would be preferable to make the sparse dual-frequency GPS networks denser. When using single-frequency GPS receivers, the main problem that must be solved is the ionospheric delay, which is a critical factor when determining accurate coseismic displacements. In this study, we introduce a modified Satellite-specific Epoch-differenced Ionospheric Delay (MSEID) model to compensate for the effect of ionospheric error on single-frequency GPS receivers. In the MSEID model, the time-differenced ionospheric delays observed from a regional dual-frequency GPS network to a common satellite are fitted to a plane rather than part of a sphere, and the parameters of this plane are determined by using the coordinates of the stations. When the parameters are known, time-differenced ionospheric delays for a single-frequency GPS receiver could be derived from the observations of those dual-frequency receivers. Using these ionospheric delay corrections, coseismic displacements of a single-frequency GPS receiver can be accurately calculated based on time-differenced carrier-phase measurements in real time. The performance of the proposed approach is validated using 5 Hz GPS data collected during the 2012 Nicoya Peninsula Earthquake (Mw 7.6, 2012 September 5) in Costa Rica. This shows that the proposed approach improves the accuracy of the displacement of a single-frequency GPS station, and coseismic displacements with an accuracy of a few centimetres are achieved over a 10-min interval.
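The plane-fit step of the MSEID idea can be sketched as an ordinary least-squares fit of the epoch-differenced delays against station coordinates; the station layout and delay values below are invented for illustration.

```python
import numpy as np

# Assumed data: epoch-differenced ionospheric delays (m) to one satellite,
# observed at four dual-frequency reference stations with local E/N coords (km).
xy = np.array([[0.0, 0.0],
               [40.0, 5.0],
               [10.0, 35.0],
               [45.0, 42.0]])
delays = np.array([0.012, 0.019, 0.016, 0.024])

# Fit the plane d = a + b*E + c*N by least squares (the MSEID idea, sketched).
A = np.column_stack([np.ones(len(xy)), xy])
coeff, *_ = np.linalg.lstsq(A, delays, rcond=None)

# Predict the delay correction at a single-frequency rover inside the network.
rover = np.array([20.0, 18.0])
correction = coeff @ np.array([1.0, *rover])
print(f"interpolated delay correction: {correction * 100:.2f} cm")
```

With more than three reference stations the plane is over-determined, so the fit also smooths individual station noise before the correction is applied to the rover's time-differenced carrier phases.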

  16. Analysis of double-probe characteristics in low-frequency gas discharges and its improvement

    SciTech Connect

    Liu, DongLin Li, XiaoPing; Xie, Kai; Liu, ZhiWei; Shao, MingXu

    2015-01-15

    The double probe has been used successfully in radio-frequency discharges. In low-frequency discharges, however, the double-probe I-V curve is so seriously distorted by strong plasma-potential fluctuations that it can lead to large errors in the estimated plasma parameters. To suppress the distortion, we investigate the double-probe characteristics in low-frequency gas discharges based on an equivalent circuit model, taking both the plasma sheath and the probe circuit into account. We find that two primary interferences distort the I-V curve: the voltage fluctuation between the two probe tips caused by the filter difference voltage, and the current peak at the negative edge of the plasma potential. Consequently, we propose a modified passive filter to reduce both types of interference simultaneously. Experiments are conducted in a glow-discharge plasma (f = 30 kHz) to test the performance of the improved double probe. The results show that the electron density error is reduced from more than 100% to less than 10%. The proposed method is also suitable in cases where intense potential fluctuations exist.

  17. GP-B error modeling and analysis

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1982-01-01

    Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.

  18. A simple double error correcting BCH codes

    NASA Astrophysics Data System (ADS)

    Sinha, V.

    1983-07-01

    With the availability of various cost-effective digital hardware components, error-correcting codes can be realized in hardware more simply than was hitherto possible. Instead of computing error locations in BCH decoding with the Berlekamp algorithm, this paper describes syndrome-to-error-location mapping using an EPROM for a double-error-correcting BCH code. The processing is parallel instead of serial. Possible applications are given.
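The EPROM-lookup idea can be sketched in software: precompute the syndrome → error-location table for every correctable pattern of the (15,7) double-error-correcting BCH code, then decode with a single table access. The primitive polynomial x⁴ + x + 1 for GF(16) is a common choice, assumed here.

```python
from itertools import combinations

# GF(16) antilog table, primitive polynomial x^4 + x + 1 (assumed choice).
ALOG = []
x = 1
for _ in range(15):
    ALOG.append(x)
    x <<= 1
    if x & 0x10:
        x ^= 0b10011

def syndrome(positions):
    """(S1, S3) for a 15-bit word given as the set of its 1-bit positions."""
    s1 = s3 = 0
    for p in positions:
        s1 ^= ALOG[p % 15]
        s3 ^= ALOG[(3 * p) % 15]
    return s1, s3

# EPROM-style lookup: syndrome -> error locations for every 0-, 1- and 2-bit
# error pattern. Minimum distance 5 guarantees all these syndromes differ.
TABLE = {}
patterns = [()] + [(i,) for i in range(15)] + list(combinations(range(15), 2))
for pat in patterns:
    s = syndrome(pat)
    assert s not in TABLE
    TABLE[s] = pat

def correct(received):
    """Fix up to two bit errors in a word given as a set of 1-positions."""
    locs = TABLE.get(syndrome(received))
    if locs is None:
        raise ValueError("more than two errors")
    return set(received) ^ set(locs)

# The all-zero word is a codeword; corrupt it in two places and decode.
print(correct({3, 11}))   # -> set(): both errors located and cleared
```

In hardware, the 8-bit syndrome (two 4-bit GF(16) elements) simply addresses the EPROM, so decoding takes one memory access regardless of the error pattern.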

  19. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
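The trade-off is easy to demonstrate on y′ = y over [0, 1], whose exact answer is e. Float32 is used deliberately (an assumed choice, not from the article) so the rounding floor becomes visible at small stepsizes.

```python
import numpy as np

# Solve y' = y, y(0) = 1 on [0, 1] with Euler's method; the exact answer is e.
def euler_error(n_steps, dtype=np.float32):
    h = dtype(1.0 / n_steps)
    y = dtype(1.0)
    for _ in range(n_steps):
        y = y + h * y          # one Euler step, rounded to the working precision
    return abs(float(y) - np.e)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {n:>7}: |error| = {euler_error(n):.2e}")
```

At first the error falls like O(h), but as n grows the accumulated float32 rounding overtakes the shrinking discretization error; the same run in float64 keeps improving toward the discretization bound.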

  20. Error Analysis in the Introductory Physics Laboratory.

    ERIC Educational Resources Information Center

    Deacon, Christopher G.

    1992-01-01

    Describes two simple methods of error analysis: (1) combining errors in the measured quantities; and (2) calculating the error or uncertainty in the slope of a straight-line graph. Discusses significance of the error in the comparison of experimental results with some known value. (MDH)
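Method (2), the uncertainty in the slope of a straight-line graph, can be sketched with the standard least-squares formulas; the spring-extension data below are made up for illustration.

```python
import numpy as np

# Assumed lab data: extension of a spring (cm) vs applied mass (g).
x = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
y = np.array([2.1, 4.3, 6.2, 8.6, 10.4])

n = len(x)
m, b = np.polyfit(x, y, 1)                     # least-squares slope, intercept
resid = y - (m * x + b)
s = np.sqrt((resid ** 2).sum() / (n - 2))      # scatter about the line
sxx = ((x - x.mean()) ** 2).sum()
dm = s / np.sqrt(sxx)                          # standard error of the slope
db = s * np.sqrt((x ** 2).sum() / (n * sxx))   # standard error of the intercept

print(f"slope = {m:.4f} +/- {dm:.4f} cm/g")
```

Comparing a known value against the interval m ± dm (or 2·dm) is the significance test the article discusses.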

  1. Medical Errors: Tips to Help Prevent Them

    MedlinePlus

    Medical errors are one of the nation's ... single most important way you can help to prevent errors is to be an active member of ...

  2. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  3. Report on errors in pretransfusion testing from a tertiary care center: A step toward transfusion safety

    PubMed Central

    Sidhu, Meena; Meenia, Renu; Akhter, Naveen; Sawhney, Vijay; Irm, Yasmeen

    2016-01-01

    Introduction: Errors in the process of pretransfusion testing for blood transfusion can occur at any stage, from collection of the sample to administration of the blood component. The present study was conducted to analyze the errors that threaten patients’ transfusion safety and the actual harm/serious adverse events that occurred to patients due to these errors. Materials and Methods: The prospective study was conducted in the Department of Transfusion Medicine, Shri Maharaja Gulab Singh Hospital, Government Medical College, Jammu, India from January 2014 to December 2014, a period of 1 year. Errors were defined as any deviation from established policies and standard operating procedures. A near-miss event was defined as an error that did not reach the patient. The location and time of occurrence of the events/errors were also noted. Results: A total of 32,672 requisitions for the transfusion of blood and blood components were received for typing and cross-matching. Out of these, 26,683 products were issued to the various clinical departments. A total of 2,229 errors were detected over the 1-year period. Near-miss events constituted 53% of the errors, and actual harmful events due to errors occurred in 0.26% of the patients. The most frequent errors in clinical services were sample labeling errors (2.4% of all requisitions received), inappropriate requests for blood components (2%), and information on requisition forms not matching that on the sample (1.5%). In transfusion services, the most common event was accepting a sample in error, with a frequency of 0.5% of all requisitions. ABO-incompatible hemolytic reactions were the most frequent harmful event, with a frequency of 2.2 per 10,000 transfusions. Conclusion: Sample labeling errors, inappropriate requests, and samples received in error were the most frequent high-risk errors. PMID:27011670

  4. Robust Blind Frequency and Transition Time Estimation for Frequency Hopping Systems

    NASA Astrophysics Data System (ADS)

    Fu, Kuo-Ching; Chen, Yung-Fang

    2010-12-01

    In frequency hopping spread spectrum (FHSS) systems, two major problems are timing synchronization and frequency estimation. A blind estimation scheme is presented for estimating frequency and transition time without using reference signals. The scheme is robust in the sense that it can avoid the unbalanced sampling block problem that occurs in existing maximum likelihood-based schemes, which causes large errors in one of the estimates of frequency. The proposed scheme has a lower computational cost than the maximum likelihood-based greedy search method. The estimated parameters are also used for the subsequent time and frequency tracking. The simulation results demonstrate the efficacy of the proposed approach.

  5. Interference signal frequency tracking for extracting phase in frequency scanning interferometry using an extended Kalman filter.

    PubMed

    Liu, Zhe; Liu, Zhigang; Deng, Zhongwen; Tao, Long

    2016-04-10

    Optical frequency scanning nonlinearity seriously affects interference signal phase extraction accuracy in frequency-scanning interferometry systems using external cavity diode lasers. In this paper, an interference signal frequency tracking method using an extended Kalman filter is proposed. The interferometric phase is obtained by integrating the estimated instantaneous frequency over time. The method is independent of the laser's optical frequency scanning nonlinearity. The method is validated through simulations and experiments. The experimental results demonstrate that the relative phase extraction error in the fractional part is <1.5% with the proposed method and the standard deviation of absolute distance measurement is <2.4  μm. PMID:27139864
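A generic version of frequency tracking with an extended Kalman filter can be sketched on a noisy sinusoid, with state [phase, angular frequency] and nonlinear measurement h(x) = sin(phase); all parameters below are illustrative tuning choices, not the paper's FSI configuration.

```python
import numpy as np

rng = np.random.default_rng(4)

fs = 1000.0                       # sample rate, Hz (assumed)
f_true = 37.0                     # true signal frequency, Hz
n = 2000
t = np.arange(n) / fs
z = np.sin(2 * np.pi * f_true * t) + 0.1 * rng.standard_normal(n)

dt = 1.0 / fs
# State: [phase (rad), angular frequency (rad/s)]; constant-frequency model.
x = np.array([0.0, 2 * np.pi * 36.0])        # start 1 Hz off on purpose
P = np.diag([1.0, (2 * np.pi * 5.0) ** 2])
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1e-6, 1e-4])                    # process noise (tuning assumption)
R = 0.1 ** 2                                 # measurement noise variance

for zk in z:
    # Predict with the linear state model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the nonlinear measurement h(x) = sin(phase).
    H = np.array([np.cos(x[0]), 0.0])        # Jacobian of h at the prediction
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (zk - np.sin(x[0]))
    P = (np.eye(2) - np.outer(K, H)) @ P

f_est = x[1] / (2 * np.pi)
print(f"estimated frequency: {f_est:.2f} Hz (true: {f_true} Hz)")
```

Integrating the per-sample frequency estimate over time then yields the phase, which is the step the abstract uses to make phase extraction independent of the scanning nonlinearity.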


  7. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect

    Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  8. One-step error correction for multipartite polarization entanglement

    SciTech Connect

    Deng Fuguo

    2011-06-15

    We present two economical one-step error-correction protocols for multipartite polarization-entangled systems in a Greenberger-Horne-Zeilinger state. One uses spatial entanglement to correct errors in the polarization entanglement of an N-photon system, resorting to linear optical elements. The other uses frequency entanglement to correct errors in the polarization entanglement of an N-photon system. The parties in quantum communication can obtain a maximally entangled state from each N-photon system transmitted with one step in these two protocols, and both of their success probabilities are 100%, in principle. That is, they both work in a deterministic way, and they do not largely consume the less-entangled photon systems, which is far different from conventional multipartite entanglement purification schemes. These features may make these two protocols more useful for practical applications in long-distance quantum communication.

  9. Selection of Error-Less Synthetic Genes in Yeast.

    PubMed

    Hoshida, Hisashi; Yarimizu, Tohru; Akada, Rinji

    2017-01-01

    Conventional gene synthesis is usually accompanied by sequence errors, which are often deletions derived from chemically synthesized oligonucleotides. Such deletions lead to frame shifts and mostly result in premature translational termination. Therefore, in-frame fusion of a marker gene downstream of a synthetic gene is an effective strategy for selecting frame-shift-free synthetic genes. Functional expression of the fused marker gene indicates that the synthetic gene is translated without premature termination, i.e., that it is an error-less synthetic gene. A recently developed nonhomologous end joining (NHEJ)-mediated DNA cloning method in the yeast Kluyveromyces marxianus is suitable for the selection of frame-shift-free synthetic genes. Transformation and NHEJ-mediated in-frame joining of a synthetic gene with a selection marker gene enables colony formation only by yeast cells containing synthetic genes without premature termination. This method increased the selection frequency of error-less synthetic genes by 3- to 12-fold. PMID:27671945

  10. Semantic errors in deep dyslexia: does orthographic depth matter?

    PubMed

    Beaton, Alan A; Davies, Nia Wyn

    2007-05-01

    Semantic errors of oral reading by aphasic patients are said to be comparatively rare in languages with a shallow orthography. The present report concerns three bilingual brain-damaged patients who prior to their stroke were fluent in both English, an orthographically deep language, and Welsh, an orthographically shallow language. On a picture-naming task, each patient made a similar proportion of semantic errors in the two languages. Similarly, in oral reading of the corresponding words, no patient produced proportionally more semantic paralexias in English than in Welsh. The findings are discussed in relation to the summation hypothesis as invoked by Miceli, Capasso, and Caramazza (1994) to explain apparent differences in frequency of semantic errors of reading in languages differing in orthographic depth.

  11. Modified Wilkinson Power Dividers For K And Ka Bands

    NASA Technical Reports Server (NTRS)

    Antsos, Dimitrios

    1995-01-01

    Modified configuration for Wilkinson power dividers devised for operating frequencies in K and Ka bands (18 to 27 and 27 to 40 GHz, respectively). Overcomes some difficulties associated with increasing frequency, making possible to design and accurately predict performances of unequal-split power dividers for frequencies above X-band.

  12. Acoustic evidence for phonologically mismatched speech errors.

    PubMed

    Gormley, Andrea

    2015-04-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated, or mismatch, errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. Such errors could arise during the processing of phonological rules, or they could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.

  13. Robot learning and error correction

    NASA Technical Reports Server (NTRS)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed, known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot into a pre-existing structure whenever it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process, and learning may be applied to avoiding such errors.

  14. Negligence, genuine error, and litigation.

    PubMed

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  15. Brief optogenetic inhibition of dopamine neurons mimics endogenous negative reward prediction errors.

    PubMed

    Chang, Chun Yun; Esber, Guillem R; Marrero-Garcia, Yasmin; Yau, Hau-Jie; Bonci, Antonello; Schoenbaum, Geoffrey

    2016-01-01

    Correlative studies have strongly linked phasic changes in dopamine activity with reward prediction error signaling. But causal evidence that these brief changes in firing actually serve as error signals to drive associative learning is more tenuous. Although there is direct evidence that brief increases can substitute for positive prediction errors, there is no comparable evidence that similarly brief pauses can substitute for negative prediction errors. In the absence of such evidence, the effect of increases in firing could reflect novelty or salience, variables also correlated with dopamine activity. Here we provide evidence in support of the proposed linkage, showing in a modified Pavlovian over-expectation task that brief pauses in the firing of dopamine neurons in rat ventral tegmental area at the time of reward are sufficient to mimic the effects of endogenous negative prediction errors. These results support the proposal that brief changes in the firing of dopamine neurons serve as full-fledged bidirectional prediction error signals. PMID:26642092

  17. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Lauber, J. K.; Cooper, G. E.

    1974-01-01

    This report is a brief description of research being undertaken by the National Aeronautics and Space Administration. The project is designed to seek out factors in the aviation system which contribute to human error, and to search for ways of minimizing the potential threat posed by these factors. The philosophy and assumptions underlying the study are discussed, together with an outline of the research plan.

  18. Clinical review: Medication errors in critical care

    PubMed Central

    Moyen, Eric; Camiré, Eric; Stelfox, Henry Thomas

    2008-01-01

    Medication errors in critical care are frequent, serious, and predictable. Critically ill patients are prescribed twice as many medications as patients outside of the intensive care unit (ICU) and nearly all will suffer a potentially life-threatening error at some point during their stay. The aim of this article is to provide a basic review of medication errors in the ICU, identify risk factors for medication errors, and suggest strategies to prevent errors and manage their consequences. PMID:18373883

  19. [Errors in laboratory daily practice].

    PubMed

    Larrose, C; Le Carrer, D

    2007-01-01

    Legislation set by the GBEA (Guide de bonne exécution des analyses) requires that, before performing an analysis, laboratory directors check both the nature of the samples and the patient's identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and 2002 by the specialized biochemistry laboratory, with the contribution of the reception centre for biological samples. The laboratories follow strict acceptability criteria at reception when checking requisition forms and biological samples. All errors are logged into the laboratory database, and analysis reports are sent to the care unit specifying the problems and their consequences for the analysis. The data are then assessed by the laboratory directors to produce monthly or annual statistical reports. These indicate the number of errors, indexed to patient files to reveal the specific problem areas, thereby allowing the laboratory directors to train the nurses and enable corrective action.

  20. Human error in hospitals and industrial accidents: current concepts.

    PubMed

    Spencer, F C

    2000-10-01

    Most data concerning errors and accidents are from industrial accidents and airline injuries. General Electric, Alcoa, and Motorola, among others, have all reported complex programs that resulted in a marked reduction in the frequency of worker injuries. In the field of medicine, however, with the outstanding exception of anesthesiology, there is a paucity of information, most reports referring to the 1984 Harvard-New York State study, more than 16 years ago. This scarcity of information indicates the complexity of the problem. It seems very unlikely that simple exhortation or additional regulations will help, because the problem lies principally in the multiple human-machine interfaces that constitute modern medical care. The absence of success stories also indicates that the best methods have to be learned by experience. A liaison with industry should be helpful, although the varieties of human illness are far different from a standardized manufacturing process. Concurrent with the studies of industrial and nuclear accidents, cognitive psychologists have intensively studied how the brain stores and retrieves information. Several concepts have emerged. First, errors are not character defects to be treated by the classic approach of discipline and education, but are byproducts of normal thinking that occur frequently. Second, major accidents are rarely caused by a single error; instead, they are often a combination of chronic system errors, termed latent errors. Identifying and correcting these latent errors should be the principal focus for corrective planning rather than searching for an individual culprit. This nonpunitive concept of errors is a key basis for an effective reporting system, brilliantly demonstrated in aviation with the ASRS system developed more than 25 years ago. The ASRS currently receives more than 30,000 reports annually and is credited with the remarkable increase in safety of airplane travel. Adverse drug events constitute about 25% of hospital

  1. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but the significance of certain error-pattern distributions predicted by the model for error correction is noted.
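
    The abstract does not spell out the error model, but burst-error channels of this kind are commonly described by a two-state Markov (Gilbert-Elliott) chain; the following sketch, with purely illustrative parameters, shows how such a model generates the clustered error patterns relevant to code selection:

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.3, e_good=1e-4, e_bad=0.2, seed=1):
    """Simulate n_bits channel uses; return a 0/1 error-indicator list.
    p_gb/p_bg: good->bad and bad->good transition probabilities;
    e_good/e_bad: bit-error probabilities in each state."""
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n_bits):
        p_err = e_bad if state_bad else e_good
        errors.append(1 if rng.random() < p_err else 0)
        # State transition for the next bit.
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        elif rng.random() < p_gb:
            state_bad = True
    return errors

errs = gilbert_elliott(100_000)
error_rate = sum(errs) / len(errs)
```

    The overall error rate is modest, but the errors arrive in bursts while the chain sits in the bad state, which is exactly the statistic a coding strategy must be matched to.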

  2. The Evaluation of the Error Term in Some Gauss-Type Formulae for the Approximation of Cauchy Principal Value Integrals

    ERIC Educational Resources Information Center

    Smith, H. V.

    2008-01-01

    A method is derived for the numerical evaluation of the error term arising in some Gauss-type formulae modified so as to approximate Cauchy Principal Value integrals. The method uses Chebyshev polynomials of the first kind. (Contains 1 table.)

  3. Dynamic X-Y Crosstalk / Aliasing Errors of Multiplexing BPMs

    SciTech Connect

    Straumann, T.; /SLAC

    2005-08-09

    Multiplexing Beam Position Monitors are widely used for their simplicity and inherent drift cancellation property. These systems successively feed the signals of (typically four) RF pickups through one single detector channel. The beam position is calculated from the demultiplexed base band signal. However, as shown below, transverse beam motion results in positional aliasing errors due to the finite multiplexing frequency. Fast vertical motion, for example, can alias into an apparent, slow horizontal position change.
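
    The aliasing mechanism is easy to reproduce numerically (illustrative frequencies, not those of an actual BPM system): beam motion just below the per-channel sampling rate is indistinguishable from slow motion at the alias frequency.

```python
import math

f_s = 1000.0     # effective per-channel sampling rate, Hz (assumed)
f_beam = 990.0   # fast beam-motion frequency, Hz
n = 1000

# Position samples seen by one demultiplexed channel:
samples = [math.sin(2 * math.pi * f_beam * k / f_s) for k in range(n)]

# They are numerically identical to a slow 10 Hz oscillation (the alias):
alias = [-math.sin(2 * math.pi * (f_s - f_beam) * k / f_s) for k in range(n)]
```

    In a multiplexed BPM the four pickups are additionally sampled at slightly different phases of the cycle, which is how fast vertical motion can leak into an apparent slow horizontal position change.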

  4. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
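
    One ingredient of the approach, comparing range profiles across slow time to estimate the residual motion, can be sketched as a cross-correlation shift estimate (an illustrative fragment, not the patented method in full):

```python
import numpy as np

def estimate_shift(p0, p1):
    """Integer range shift of profile p1 relative to p0,
    via the peak of the circular cross-correlation."""
    xc = np.fft.ifft(np.fft.fft(p1) * np.conj(np.fft.fft(p0)))
    lag = int(np.argmax(np.abs(xc)))
    n = len(p0)
    return lag if lag <= n // 2 else lag - n   # map to signed lag

n = 256
profile = np.exp(-0.5 * ((np.arange(n) - 100) / 3.0) ** 2)  # a point target
shifted = np.roll(profile, 7)        # uncompensated motion of 7 range bins

shift = estimate_shift(profile, shifted)
```

    Once the shift (and hence the motion error) is estimated, the corresponding frequency and phase corrections can be applied to the uncompressed data before range and azimuth compression.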

  5. Calibrations of phase and ratio errors of current and voltage channels of energy meter

    NASA Astrophysics Data System (ADS)

    Mlejnek, P.; Kaspar, P.

    2013-06-01

    This paper deals with the measurement of phase and ratio errors of the current and voltage channels of a newly produced energy meter. This fully digitally controlled energy meter combines a classical static energy meter with a power quality analyzer, so calibration of phase and ratio errors over a wide frequency range is necessary. The paper shows the results of the error measurements, introduces the mathematical approximations, and describes the calibration constants. This allows error compensation and calculation of the power of particular harmonics. The electric power of the higher harmonics can provide useful information about the quality of distributed electric energy.

  6. Carriage Error Identification Based on Cross-Correlation Analysis and Wavelet Transformation

    PubMed Central

    Mu, Donghui; Chen, Dongju; Fan, Jinwei; Wang, Xiaofeng; Zhang, Feihu

    2012-01-01

    This paper proposes a novel method for identifying carriage errors. A general mathematical model of a guideway system is developed, based on the multi-body system method. Based on the proposed model, most error sources in the guideway system can be measured. The flatness of a workpiece measured by the PGI1240 profilometer is represented by a wavelet. Cross-correlation analysis is performed to identify the error source of the carriage. The error model is developed based on experimental results on the low-frequency components of the signals. With the use of wavelets, the identification precision of test signals is very high. PMID:23012558

  7. Righting errors in writing errors: the Wing and Baddeley (1980) spelling error corpus revisited.

    PubMed

    Wing, Alan M; Baddeley, Alan D

    2009-03-01

    We present a new analysis of our previously published corpus of handwriting errors (slips) using the proportional allocation algorithm of Machtynger and Shallice (2009). As previously, the proportion of slips is greater in the middle of the word than at the ends; however, in contrast to the earlier analysis, the proportion is greater at the end than at the beginning of the word. The findings are consistent with the hypothesis of memory effects in a graphemic output buffer.

  8. Error growth in operational ECMWF forecasts

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Dalcher, A.

    1985-01-01

    A parameterization scheme used at the European Centre for Medium-Range Weather Forecasts to model the average growth of the difference between forecasts on consecutive days was extended by including the effect of forecast-model deficiencies on error growth. Error was defined as the difference between the forecast and analysis fields at verification time. Systematic and random errors were considered separately in calculating the error variance for a 10 day operational forecast. A good fit was obtained with measured forecast errors, and a satisfactory trend was achieved in the difference between forecasts. Fitting six parameters to forecast errors and differences separately for each wavenumber revealed that the error growth rate increased with wavenumber. The saturation error decreased with the total wavenumber, and the limit of predictability, i.e., the time when error variance reaches 95 percent of saturation, decreased monotonically with the total wavenumber.
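
    The kind of parameterization described, error growth driven by internal amplification plus a model-deficiency source term and saturating at a finite level, can be sketched as follows (the functional form and all coefficients are illustrative assumptions, not the fitted ECMWF values):

```python
def integrate_error(a=0.4, s=0.05, e_inf=1.0, e0=0.01, days=10.0, dt=0.01):
    """Euler-integrate dE/dt = (a*E + s) * (1 - E/e_inf):
    growth rate a, model-deficiency source s, saturation level e_inf."""
    e = e0
    curve = [e]
    for _ in range(int(days / dt)):
        e += dt * (a * e + s) * (1.0 - e / e_inf)
        curve.append(e)
    return curve

curve = integrate_error()
```

    The error variance grows monotonically and levels off as it approaches saturation; the time to reach a fixed fraction of saturation is the corresponding limit of predictability.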

  9. How psychotherapists handle treatment errors – an ethical analysis

    PubMed Central

    2013-01-01

    Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, those of Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503

  10. Controlling qubit drift by recycling error correction syndromes

    NASA Astrophysics Data System (ADS)

    Blume-Kohout, Robin

    2015-03-01

    Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE
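
    As a much-simplified caricature of the idea (not the author's algorithm): syndrome outcomes are Bernoulli events whose rate tracks a drifting parameter, so a running average of the syndromes re-estimates that parameter on the fly, without halting for tomography. All quantities below are illustrative.

```python
import math
import random

rng = random.Random(7)
theta = 0.1          # drifting over-rotation angle (illustrative)
est_rate = 0.0       # controller's running estimate of the syndrome rate
alpha = 0.01         # smoothing constant of the running average

for _ in range(20_000):
    theta += rng.gauss(0.0, 1e-4)               # Brownian parameter drift
    p_err = math.sin(theta / 2.0) ** 2          # syndrome-firing probability
    syndrome = 1 if rng.random() < p_err else 0
    est_rate += alpha * (syndrome - est_rate)   # recalibrate on the fly

true_rate = math.sin(theta / 2.0) ** 2
```

    The running estimate tracks the slowly drifting syndrome rate, which is the information a compensation loop needs; high-frequency (white or 1/f) fluctuations average out and cannot be compensated this way.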

  11. Stochastic modelling and analysis of IMU sensor errors

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Horemuz, M.; Sjöberg, L. E.

    2011-12-01

    The performance of a GPS/INS integration system is greatly determined by the ability of the stand-alone INS system to determine position and attitude within a GPS outage. The positional and attitude precision degrades rapidly during a GPS outage due to INS sensor errors. With the advantages of low price and small volume, Micro-Electro-Mechanical Systems (MEMS) sensors have been widely used in GPS/INS integration. However, standalone MEMS can maintain reasonable positional precision for only a few seconds due to systematic and random sensor errors. General stochastic error sources in inertial sensors can be modelled as (IEEE STD 647, 2006): quantization noise, random walk, bias instability, rate random walk, and rate ramp. Here we apply different methods to analyze the stochastic sensor errors, i.e., autoregressive modelling, the Gauss-Markov process, power spectral density, and the Allan variance. Tests on a MEMS-based inertial measurement unit were then carried out with these methods. The results show that the different methods give similar estimates of the stochastic error model parameters. These values can be used further in the Kalman filter for better navigation accuracy and in the Doppler frequency estimate for faster acquisition after a GPS signal outage.
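
    Of the methods listed, the Allan variance is the most widely used for this characterization. A minimal non-overlapping implementation (real analyses usually use overlapping estimates) behaves as expected on white noise, for which the Allan variance falls off as 1/m:

```python
import random

def allan_variance(y, m):
    """Non-overlapping Allan variance of rate samples y at cluster size m:
    half the mean squared difference of successive cluster averages."""
    k = len(y) // m
    means = [sum(y[i * m:(i + 1) * m]) / m for i in range(k)]
    diffs = [(b - a) ** 2 for a, b in zip(means, means[1:])]
    return sum(diffs) / (2 * len(diffs))

rng = random.Random(0)
white = [rng.gauss(0.0, 1.0) for _ in range(100_000)]

av_1 = allan_variance(white, 1)      # ~1.0 for unit-variance white noise
av_100 = allan_variance(white, 100)  # ~1/100: the -1/2 log-log slope
```

    Plotting the Allan deviation against cluster time on log-log axes separates the error sources by slope: -1/2 for random walk, 0 for bias instability, +1/2 for rate random walk.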

  12. Grid-scale fluctuations and forecast error in wind power

    NASA Astrophysics Data System (ADS)

    Bel, G.; Connaughton, C. P.; Toots, M.; Bandi, M. M.

    2016-02-01

    Wind power fluctuations at the turbine and farm scales are generally not expected to be correlated over large distances. When power from distributed farms feeds the electrical grid, fluctuations from various farms are expected to smooth out. Using data from the Irish grid as a representative example, we analyze wind power fluctuations entering an electrical grid. We find that not only are grid-scale fluctuations temporally correlated up to a day, but they possess a self-similar structure—a signature of long-range correlations in atmospheric turbulence affecting wind power. Using the statistical structure of temporal correlations in fluctuations for generated and forecast power time series, we quantify two types of forecast error: a timescale error (e_τ) that quantifies deviations between the high-frequency components of the forecast and generated time series, and a scaling error (e_ζ) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error (e_τ) and the scaling error (e_ζ).

  13. Sibship reconstruction from genetic data with typing errors.

    PubMed Central

    Wang, Jinliang

    2004-01-01

    Likelihood methods have been developed to partition individuals in a sample into full-sib and half-sib families using genetic marker data without parental information. They invariably make the critical assumption that marker data are free of genotyping errors and mutations and are thus completely reliable in inferring sibships. Unfortunately, however, this assumption is rarely tenable for virtually all kinds of genetic markers in practical use and, if violated, can severely bias sibship estimates as shown by simulations in this article. I propose a new likelihood method with simple and robust models of typing error incorporated into it. Simulations show that the new method can be used to infer full- and half-sibships accurately from marker data with a high error rate and to identify typing errors at each locus in each reconstructed sib family. The new method also improves previous ones by adopting a fresh iterative procedure for updating allele frequencies with reconstructed sibships taken into account, by allowing for the use of parental information, and by using efficient algorithms for calculating the likelihood function and searching for the maximum-likelihood configuration. It is tested extensively on simulated data with a varying number of marker loci, different rates of typing errors, and various sample sizes and family structures and applied to two empirical data sets to demonstrate its usefulness. PMID:15126412

  14. Quantum error correction of photon-scattering errors

    NASA Astrophysics Data System (ADS)

    Akerman, Nitzan; Glickman, Yinnon; Kotler, Shlomi; Ozeri, Roee

    2011-05-01

    Photon scattering by an atomic ground-state superposition is often considered a source of decoherence. The same process also results in atom-photon entanglement, which has been directly observed in various experiments using a single atom, ion, or diamond nitrogen-vacancy center. Here we combine these two aspects to implement a quantum error correction protocol. We encode a qubit in the two Zeeman-split ground states of a single trapped 88Sr+ ion. Photons are resonantly scattered on the S1/2 -> P1/2 transition. We study the process of single-photon scattering, i.e., the excitation of the ion to the excited manifold followed by spontaneous emission and decay. In the absence of any knowledge of the emitted photon, the ion-qubit coherence is lost. However, the joint ion-photon system still maintains coherence. We show that while scattering events in which the spin population is preserved (Rayleigh scattering) do not affect coherence, spin-changing (Raman) scattering events result in coherent amplitude exchange between the two qubit states. By applying a unitary spin rotation that depends on the detected photon polarization, we retrieve the ion-qubit initial state. We characterize this quantum error correction protocol by process tomography and demonstrate an ability to preserve ion-qubit coherence with high fidelity.

  15. Positioning errors in panoramic images in general dentistry in Sörmland County, Sweden.

    PubMed

    Ekströmer, Karin; Hjalmarsson, Lars

    2014-01-01

    The purpose of this study was to evaluate the frequency and severity of positioning errors in panoramic radiography in general dentistry. A total of 1904 digital panoramic radiographs, taken by the Public Dental Service in the county of Sörmland, Sweden, were analysed retrospectively. The study population consisted of all patients who underwent a panoramic examination during the year 2011. One experienced oral radiologist evaluated all radiographs for 10 common errors. Of the 1904 radiographs examined, 79 per cent had errors, with between one and four errors per image. No errors were found in 404 images (21%). Fifty-five images (3%) had severe errors, which made correct diagnostics impossible. The most common error was the tongue not being in contact with the hard palate during exposure; however, this did not greatly affect the diagnostic usefulness of the image, due to the ability to enhance the image. The patient's head was tilted too far upwards in 23 per cent of the images, and the patient's head was rotated during exposure in 15 per cent. The least common error was patient movement during exposure (1%). Panoramic radiographs taken in general dental clinics in a Swedish county thus show several errors, and proper positioning of the patient is necessary to achieve panoramic images with good image quality. Some of the errors could be adjusted with the digital technique used; this allowed assessment of the images, which reduces radiation dose by avoiding retakes. PMID:26995809

  16. Subthreshold muscle twitches dissociate oscillatory neural signatures of conflicts from errors.

    PubMed

    Cohen, Michael X; van Gaal, Simon

    2014-02-01

    We investigated the neural systems underlying conflict detection and error monitoring during rapid online error correction/monitoring mechanisms. We combined data from four separate cognitive tasks and 64 subjects in which EEG and EMG (muscle activity from the thumb used to respond) were recorded. In typical neuroscience experiments, behavioral responses are classified as "error" or "correct"; however, closer inspection of our data revealed that correct responses were often accompanied by "partial errors" - a muscle twitch of the incorrect hand ("mixed-correct trials," ~13% of the trials). We found that these muscle twitches dissociated conflicts from errors in time-frequency domain analyses of the EEG data. In particular, both mixed-correct trials and full-error trials were associated with enhanced theta-band power (4-9 Hz) compared to correct trials. However, full errors were additionally associated with power and frontal-parietal synchrony in the delta band. Single-trial robust multiple regression analyses revealed a significant modulation of theta power as a function of partial-error correction time, thus linking trial-to-trial fluctuations in power to conflict. Furthermore, single-trial correlation analyses revealed a qualitative dissociation between conflict and error processing, such that mixed-correct trials were associated with positive theta-RT correlations whereas full-error trials were associated with negative delta-RT correlations. These findings shed new light on the local and global network mechanisms of conflict monitoring and error detection, and their relationship to online action adjustment.
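
    The band-power comparisons above reduce to integrating spectral power over the canonical frequency ranges; a minimal periodogram-based version (an illustrative stand-in for the authors' wavelet/regression pipeline, on synthetic data) is:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Total periodogram power of signal x (sampled at fs Hz) in [f_lo, f_hi)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum()

fs = 256
t = np.arange(4 * fs) / fs                       # 4 s of synthetic "EEG"
sig = 2.0 * np.sin(2 * np.pi * 6.0 * t) + 0.5 * np.sin(2 * np.pi * 2.5 * t)

theta_power = band_power(sig, fs, 4.0, 9.0)      # theta band, 4-9 Hz
delta_power = band_power(sig, fs, 1.0, 4.0)      # delta band, 1-4 Hz
```

    With amplitudes 2.0 and 0.5 the theta/delta power ratio is 16, since power scales with the square of the amplitude; on real EEG the same comparison is made per trial and then related to behavior.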

  17. Error compensation in computer generated hologram-based form testing of aspheres.

    PubMed

    Stuerwald, Stephan

    2014-12-10

    Computer-generated holograms (CGHs) are used relatively often to test aspheric surfaces at medium and high lot sizes. Various modified measurement setups for optical form-testing interferometry have been presented, such as subaperture stitching interferometry and scanning interferometry; for testing low to medium lot sizes in research and development, a variety of other tactile and nontactile measurement methods have been developed. In CGH-based interferometric form testing, measurement deviations of several tens of nanometers typically occur, especially due to imperfect alignment of the asphere relative to the testing wavefront. The null test is therefore user- and adjustment-dependent, which results in insufficient repeatability and reproducibility of the form errors. When adjusting a CGH, an operator usually minimizes the spatial frequency of the fringe pattern. An adjustment to the ideal position, however, often cannot be performed with sufficient precision by the operator, as the position of minimum spatial fringe density is often not unique and depends on the asphere. The objectives of this paper therefore comprise the development of a simulation-based approach to explain and quantify typical experimental errors due to misalignment of the specimen toward the CGH, and of an iterative method for a virtual optimized realignment of the system on the basis of Zernike polynomial decomposition, which should allow calculation of the measured form for an ideal alignment and thus careful subtraction of a typical alignment-based form error. To validate the simulation-based findings, a series of systematic experiments is performed with a recently developed hexapod positioning system in order to allow exact and reproducible positioning of the optical CGH
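
    The virtual realignment rests on projecting the measured map onto low-order Zernike terms commonly attributed to misalignment (piston, tilt, defocus) and subtracting them. A minimal least-squares sketch on synthetic data (all shapes and coefficients below are illustrative assumptions, not the paper's method in full):

```python
import numpy as np

n = 64
y, x = np.mgrid[-1:1:complex(0, n), -1:1:complex(0, n)]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
mask = rho <= 1.0                       # unit pupil

# Low-order Zernike terms typically attributed to misalignment:
basis = np.stack([
    np.ones_like(rho),                  # piston
    rho * np.cos(theta),                # tilt x
    rho * np.sin(theta),                # tilt y
    2.0 * rho**2 - 1.0,                 # defocus
], axis=-1)

# Synthetic "measurement": a genuine form error (trefoil-like) plus
# alignment-induced tilt and defocus contributions.
true_form = 0.05 * rho**3 * np.cos(3 * theta)
measured = true_form + 0.4 * basis[..., 1] - 0.2 * basis[..., 3]

# Least-squares fit of the misalignment terms inside the pupil, then subtract.
A = basis[mask]
coeffs, *_ = np.linalg.lstsq(A, measured[mask], rcond=None)
corrected = measured[mask] - A @ coeffs
```

    Because the genuine form error is nearly orthogonal to the misalignment terms, the fit recovers the injected tilt and defocus and the corrected map approximates the true form.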

  18. Rate of Medical Errors in Affiliated Hospitals of Mazandaran University of Medical Sciences

    PubMed Central

    Saravi, Benyamin Mohseni; Mardanshahi, Alireza; Ranjbar, Mansour; Siamian, Hasan; Azar, Masoud Shayeste; Asghari, Zolikah; Motamed, Nima

    2015-01-01

    Introduction: Health care organizations are highly specialized and complex, so we may expect that adverse events will inevitably occur. Building a medical error reporting system to analyze reported preventable adverse events and learn from their results can help to prevent the repetition of these events. The medical errors reported to the Clinical Governance office of Mazandaran University of Medical Sciences (MazUMS) in 2011-2012 were analyzed. Methods and Materials: This is a descriptive retrospective study in which 18 public hospitals participated. The instrument of data collection was a checklist designed by the Ministry of Health of Iran. Variables were type of hospital, unit of hospital, season, severity of event, and type of error. The data were analyzed with SPSS software. Results: Of 317,966 admissions, 182 medical errors (about 0.06%) were reported, most of them (51.6%) from non-teaching hospitals. Among hospital units, the highest frequency of medical error was in the surgical unit (42.3%). By type of error, the highest frequencies were inappropriate or absent care (37% in total) and medication error (28%). We also analyzed the data with respect to the effect of the error on the patient; the most frequent outcome was a minor effect (44.5%). Conclusion: The results showed a wide variety of errors. Encouraging reporting and revising the reporting process will yield more data for their prevention. PMID:25870528

  19. A procedure for removing the effect of response bias errors from waterfowl hunter questionnaire responses

    USGS Publications Warehouse

    Atwood, E.L.

    1958-01-01

    Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large as compared to non-response and sampling errors. Good fits were obtained with the seasonal kill distribution of the actual hunting data and the negative binomial distribution and a good fit was obtained with the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictable biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is not as efficient in removing response bias errors in hunter questionnaire responses on seasonal hunting activity where an average of 60 percent was removed.
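
    A negative binomial can be fitted to such seasonal-kill counts by the method of moments, a natural first step before the graphical adjustment described (the count data below are invented for illustration, not from the 1958 surveys):

```python
def neg_binom_moments(counts):
    """Method-of-moments fit of a negative binomial with
    mean = r(1-p)/p and variance = r(1-p)/p**2."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    if var <= mean:
        raise ValueError("no overdispersion; negative binomial unsuitable")
    p = mean / var              # variance/mean ratio fixes p
    r = mean * p / (1.0 - p)
    return r, p

season_kill = [0, 0, 1, 1, 2, 2, 3, 4, 5, 8, 10, 12]   # invented counts
r, p = neg_binom_moments(season_kill)
```

    The overdispersion check matters: hunter-kill data are strongly overdispersed (variance well above the mean), which is why the negative binomial rather than the Poisson fits the seasonal distributions.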

  20. Error awareness revisited: accumulation of multimodal evidence from central and autonomic nervous systems.

    PubMed

    Wessel, Jan R; Danielmeier, Claudia; Ullsperger, Markus

    2011-10-01

    The differences between erroneous actions that are consciously perceived as errors and those that go unnoticed have recently become an issue in the field of performance monitoring. In EEG studies, error awareness has been suggested to influence the error positivity (Pe) of the response-locked event-related brain potential, a positive voltage deflection prominent approximately 300 msec after error commission, whereas the preceding error-related negativity (ERN) seemed to be unaffected by error awareness. Erroneous actions in general have been shown to promote several changes in ongoing autonomic nervous system (ANS) activity, yet such investigations have only rarely taken subjective error awareness into account. In the first part of this study, heart rate, pupillometry, and EEG were recorded during an antisaccade task to measure autonomic arousal and CNS activity separately for perceived and unperceived errors. Contrary to our expectations, we observed differences in both Pe and ERN with respect to subjective error awareness. This was replicated in a second experiment using a modified version of the same task. In line with our predictions, only perceived errors provoke the previously established post-error heart rate deceleration. Pupil size also yields a more prominent dilatory effect after an erroneous saccade, an effect significantly larger for perceived than unperceived errors. On the basis of the ERP and ANS results as well as brain-behavior correlations, we suggest a novel interpretation of the implementation and emergence of error awareness in the brain. In our framework, several systems generate input signals (e.g., ERN, sensory input, proprioception) that influence the emergence of error awareness, which is then accumulated and presumably reflected in later potentials, such as the Pe.

  1. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1980-01-01

    Power measurement errors due to the bandwidth of a power meter and the sampling of the input voltage and current of a power meter were investigated assuming sinusoidal excitation and periodic signals generated by a model of a simple chopper system. Errors incurred in measuring power using a microcomputer with limited data storage were also considered. The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current, and the signal multiplier was studied. Results indicate that this power measurement error can be minimized if the frequency responses of the first order transfer functions are identical. The power error analysis was extended to include the power measurement error for a model of a simple chopper system with a power source and an ideal shunt motor acting as an electrical load for the chopper. The behavior of the power measurement error was determined as a function of the chopper's duty cycle and back EMF of the shunt motor. Results indicate that the error is large when the duty cycle or back EMF is small. Theoretical and experimental results indicate that the power measurement error due to sampling of sinusoidal voltages and currents becomes excessively large when the number of observation periods approaches one-half the size of the microcomputer data memory allocated to the storage of either the input sinusoidal voltage or current.
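
    As a minimal numerical illustration of the sampling question studied here (a generic sketch, not the paper's chopper model), the average power of sampled sinusoidal voltage and current can be compared against the exact value VI cos(φ)/2; sampling over an integer number of observation periods drives the estimation error down to round-off level:

```python
import math

def sampled_average_power(V, I, phi, f, fs, n_samples):
    """Estimate average power by multiplying point samples of v(t) and i(t)."""
    dt = 1.0 / fs
    total = 0.0
    for k in range(n_samples):
        t = k * dt
        v = V * math.sin(2 * math.pi * f * t)
        i = I * math.sin(2 * math.pi * f * t - phi)
        total += v * i
    return total / n_samples

V, I, phi, f = 10.0, 2.0, math.pi / 6, 60.0
true_power = 0.5 * V * I * math.cos(phi)          # exact average power

fs = 6000.0                                       # 100 samples per period
n = int(fs / f) * 50                              # exactly 50 observation periods
estimate = sampled_average_power(V, I, phi, f, fs, n)
rel_error = abs(estimate - true_power) / true_power
```

    With a non-integer number of periods the double-frequency term in v(t)·i(t) no longer averages out exactly, which is one source of the error behavior the paper analyzes.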

  2. Why the distribution of medical errors matters.

    PubMed

    McLean, Thomas R

    2015-07-01

    During the last decade, interventions to reduce the number of medical errors have been largely ineffective. Although it is widely assumed that medical errors follow a Gaussian distribution, they may actually follow a Power Rule distribution. This article presents the evidence in favor of a Power Rule distribution for medical errors and then examines the consequences of such a distribution for medical errors. As the distribution of medical errors has real-world implications, further research is needed to determine whether medical errors follow a Gaussian or Power Rule distribution.
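
    A toy simulation (illustrative only, not data from the article) shows why the distinction matters: under a heavy-tailed "Power Rule" severity distribution (here modeled as Pareto), a tiny fraction of errors carries most of the total harm, whereas under a Gaussian-like distribution harm is spread far more evenly:

```python
import random

random.seed(1)
N = 100_000

# heavy-tailed "Power Rule" severities: Pareto with shape alpha = 1.5
pareto = sorted((random.paretovariate(1.5) for _ in range(N)), reverse=True)
# light-tailed comparison: half-normal severities
gauss = sorted((abs(random.gauss(0.0, 1.0)) for _ in range(N)), reverse=True)

def top_share(severities, frac=0.01):
    """Fraction of total harm carried by the worst `frac` of errors."""
    k = int(len(severities) * frac)
    return sum(severities[:k]) / sum(severities)

pareto_share = top_share(pareto)   # worst 1% dominate the total
gauss_share = top_share(gauss)     # worst 1% contribute only a sliver
```

    If medical errors follow the heavy-tailed pattern, interventions targeting typical errors will barely move the total harm.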

  3. Quantum error correction via robust probe modes

    SciTech Connect

    Yamaguchi, Fumiko; Nemoto, Kae; Munro, William J.

    2006-06-15

    We propose a scheme for quantum error correction using robust continuous variable probe modes, rather than fragile ancilla qubits, to detect errors without destroying data qubits. The use of such probe modes reduces the required number of expensive qubits in error correction and allows efficient encoding, error detection, and error correction. Moreover, the elimination of the need for direct qubit interactions significantly simplifies the construction of quantum circuits. We will illustrate how the approach implements three existing quantum error correcting codes: the three-qubit bit-flip (phase-flip) code, the Shor code, and an erasure code.
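
    The three-qubit bit-flip code mentioned above acts classically on computational-basis errors, so its syndrome logic can be sketched as two parity checks (this illustrates only the underlying code, not the probe-mode detection scheme itself):

```python
def encode(bit):
    """Repetition encoding: logical 0 -> 000, logical 1 -> 111."""
    return [bit, bit, bit]

def syndrome(codeword):
    """The Z1Z2 and Z2Z3 parity checks of the three-qubit bit-flip code."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword):
    """Each nonzero syndrome identifies a unique single bit-flip location."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(codeword))
    if flip is not None:
        codeword[flip] ^= 1
    return codeword

# every single bit-flip error, on either logical value, is corrected
for data in (0, 1):
    for err_pos in range(3):
        cw = encode(data)
        cw[err_pos] ^= 1              # inject one bit-flip error
        assert correct(cw) == [data] * 3
```

    The key point of the paper is that these parities can be read out via robust probe modes rather than ancilla qubits, so the data qubits are never measured directly.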

  4. Intra-Rater and Inter-Rater Reliability of the Balance Error Scoring System in Pre-Adolescent School Children

    ERIC Educational Resources Information Center

    Sheehan, Dwayne P.; Lafave, Mark R.; Katz, Larry

    2011-01-01

    This study was designed to test the intra- and inter-rater reliability of the University of North Carolina's Balance Error Scoring System in 9- and 10-year-old children. Additionally, a modified version of the Balance Error Scoring System was tested to determine if it was more sensitive in this population ("raw scores"). Forty-six normally…

  5. Modeling methodology for MLS range navigation system errors using flight test data

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.
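
    The autoregressive part of such a model can be identified with a simple least-squares fit; the sketch below (a stand-in for the full maximum-likelihood ARMA identification used in the paper) recovers the coefficient of a simulated AR(1) error process:

```python
import random

random.seed(0)

# simulate an AR(1) range-error process: x[n] = a * x[n-1] + w[n]
a_true, sigma = 0.8, 0.1
x = [0.0]
for _ in range(20_000):
    x.append(a_true * x[-1] + random.gauss(0.0, sigma))

# least-squares estimate of the AR coefficient from the residual series
num = sum(x[n] * x[n - 1] for n in range(1, len(x)))
den = sum(x[n - 1] ** 2 for n in range(1, len(x)))
a_hat = num / den
```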

  6. Some effects of quantization on a noiseless phase-locked loop. [sampling phase errors

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1979-01-01

    If the VCO of a phase-locked receiver is to be replaced by a digitally programmed synthesizer, the phase error signal must be sampled and quantized. Effects of quantizing after the loop filter (frequency quantization) or before (phase error quantization) are investigated. Constant Doppler or Doppler rate noiseless inputs are assumed. The main result gives the phase jitter due to frequency quantization for a Doppler-rate input. By itself, however, frequency quantization is impractical because it makes the loop dynamic range too small.
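
    A toy simulation (parameters illustrative, not from the paper) shows the basic mechanism: with a constant Doppler rate, the synthesizer's quantized frequency word tracks the ramping input frequency to within half a quantization step, and the residual frequency error integrates into a rippling phase jitter:

```python
import math

df_step = 0.5        # synthesizer frequency quantization step (Hz)
rate = 2.0           # constant Doppler rate (Hz/s)
dt = 1e-3            # loop update interval (s)

phase_err = 0.0
max_freq_err = 0.0
max_phase_err = 0.0
for k in range(50_000):
    f_true = rate * k * dt                        # ramping input frequency
    f_synth = round(f_true / df_step) * df_step   # quantized frequency word
    ferr = f_true - f_synth                       # bounded by df_step / 2
    max_freq_err = max(max_freq_err, abs(ferr))
    phase_err += 2 * math.pi * ferr * dt          # frequency error integrates
    max_phase_err = max(max_phase_err, abs(phase_err))
```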

  7. Time synchronization of a frequency-hopped MFSK communication system

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Polydoros, A.; Huth, G. K.

    1981-01-01

    In a frequency-hopped (FH) multiple-frequency-shift-keyed (MFSK) communication system, the frequency transitions needed for time synchronization estimation are produced by the frequency hopping rather than by the data sequence, as in a conventional (non-frequency-hopped) system. Making use of this observation, this paper presents a fine synchronization (i.e., time errors of less than a hop duration) technique for estimating FH timing. The performance degradation due to imperfect FH time synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of hops used in the FH timing estimate.

  8. Exact probability of error analysis for FHSS/CDMA communications in the presence of single term Rician fading

    NASA Astrophysics Data System (ADS)

    Turcotte, Randy L.; Wickert, Mark A.

    An exact expression is found for the probability of bit error of an FHSS-BFSK (frequency-hopping spread-spectrum/binary-frequency-shift-keying) multiple-access system in the presence of slow, nonselective, 'single-term' Rician fading. The effects of multiple-access interference and/or continuous tone jamming are considered. Comparisons are made between the error expressions developed here and previously published upper bounds. It is found that under certain channel conditions the upper bounds on the probability of bit error may exceed the actual probability of error by an order of magnitude.

  9. Oligonucleotide frequencies in DNA follow a Yule distribution.

    PubMed

    Martindale, C; Konopka, A K

    1996-03-01

    We show that ranked oligonucleotide frequencies in both protein-coding and non-coding regions from several genomes fit poorly to the Zipf distribution, but that the same frequency data give excellent fit to the Yule distribution. The parameters of the Yule distribution for oligonucleotide frequencies in exons are the same (within error limits) as the parameters for introns. This precludes application of Yule or Zipf distribution of ranked oligonucleotide frequencies to annotating new genomic sequences.
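
    The ranked-frequency data underlying such fits are straightforward to generate; this sketch (using a random sequence rather than real genomic data) counts trinucleotide frequencies, ranks them, and forms the 1/rank curve that a pure Zipf law would predict:

```python
from collections import Counter
import random

random.seed(7)
# toy sequence; the paper's analysis uses real exon and intron data
dna = "".join(random.choice("ACGT") for _ in range(50_000))

k = 3
counts = Counter(dna[i:i + k] for i in range(len(dna) - k + 1))

# ranked frequency table: rank 1 = most common oligonucleotide
ranked = sorted(counts.values(), reverse=True)

# a pure Zipf law predicts f(r) ~ f(1)/r, a straight line of slope -1 on a
# log-log rank-frequency plot; the paper reports that the Yule distribution
# fits ranked oligonucleotide data far better than this
zipf_prediction = [ranked[0] / r for r in range(1, len(ranked) + 1)]
```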

  10. Method for decoupling error correction from privacy amplification

    NASA Astrophysics Data System (ADS)

    Lo, Hoi-Kwong

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.
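
    The core mechanism, encrypting the error-correction parities with pre-shared one-time-pad bits so that the public discussion leaks nothing, can be sketched with a Cascade-style binary search for a single error (a toy protocol for illustration, not the full Cascade or the security proof):

```python
import random

random.seed(3)
n = 64
alice = [random.randint(0, 1) for _ in range(n)]
bob = alice.copy()
bob[23] ^= 1                       # one transmission error in Bob's raw key

pad = [random.randint(0, 1) for _ in range(1000)]   # pre-shared secret bits
used = 0

def send_parity(bits):
    """Alice reveals a block parity, one-time-padded so Eve learns nothing."""
    global used
    p = 0
    for b in bits:
        p ^= b
    enc = p ^ pad[used]            # Alice encrypts the parity
    dec = enc ^ pad[used]          # Bob strips the same pad bit
    used += 1
    return dec

# Cascade-style binary search for the position of the error
lo, hi = 0, n
while hi - lo > 1:
    mid = (lo + hi) // 2
    pa = send_parity(alice[lo:mid])
    pb = 0
    for b in bob[lo:mid]:
        pb ^= b
    if pa != pb:                   # mismatch: error lies in the left half
        hi = mid
    else:
        lo = mid
bob[lo] ^= 1                       # flip the located error bit
```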

  11. Frequency domain FIR and IIR adaptive filters

    NASA Technical Reports Server (NTRS)

    Lynn, D. W.

    1990-01-01

    A discussion of the LMS adaptive filter relating to its convergence characteristics and the problems associated with disparate eigenvalues is presented. This is used to introduce the concept of proportional convergence. An approach is used to analyze the convergence characteristics of block frequency-domain adaptive filters. This leads to a development showing how the frequency-domain FIR adaptive filter is easily modified to provide proportional convergence. These ideas are extended to a block frequency-domain IIR adaptive filter and the idea of proportional convergence is applied. Experimental results illustrating proportional convergence in both FIR and IIR frequency-domain block adaptive filters are presented.
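
    A minimal time-domain LMS sketch (frequency-domain block processing and the proportional-convergence modification are omitted here) shows the basic adaptation loop identifying an unknown FIR system:

```python
import random

random.seed(42)

h_true = [0.5, -0.3, 0.2]     # unknown FIR system to identify
M = len(h_true)

w = [0.0] * M                 # adaptive filter weights
mu = 0.05                     # LMS step size
xbuf = [0.0] * M              # input delay line

for _ in range(20_000):
    xin = random.gauss(0.0, 1.0)              # white excitation
    xbuf = [xin] + xbuf[:-1]
    d = sum(h * x for h, x in zip(h_true, xbuf))    # desired signal
    y = sum(wi * x for wi, x in zip(w, xbuf))       # filter output
    e = d - y                                       # error signal
    w = [wi + mu * e * x for wi, x in zip(w, xbuf)] # LMS weight update
```

    With white input, all eigenvalues of the input correlation matrix are equal, so every weight converges at the same rate; colored input would produce the disparate-eigenvalue slowdown that motivates the proportional-convergence idea.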

  12. Error detection and reduction in blood banking.

    PubMed

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To ensure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keep employees practiced and confident and diminish fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility, a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive always to close the circle.

  13. Human decision error (HUMDEE) trees

    SciTech Connect

    Ostrom, L.T.

    1993-08-01

    Graphical presentations of human actions in incident and accident sequences have been used for many years. However, for the most part, human decision making has been underrepresented in these trees. This paper presents a method of incorporating the human decision process into graphical presentations of incident/accident sequences. This presentation is in the form of logic trees. These trees are called Human Decision Error Trees, or HUMDEE for short. The primary benefit of HUMDEE trees is that they graphically illustrate what else the individuals involved in the event could have done to prevent either the initiation or continuation of the event. HUMDEE trees also present the alternate paths available at the operator decision points in the incident/accident sequence. This is different from the Technique for Human Error Rate Prediction (THERP) event trees. There are many uses of these trees. They can be used for incident/accident investigations to show what other courses of action were available and for training operators. The trees also have a consequence component, so that not only the decision but also its consequences can be explored.

  14. Evaluation of intravenous medication errors with smart infusion pumps in an academic medical center.

    PubMed

    Ohashi, Kumiko; Dykes, Patricia; McIntosh, Kathleen; Buckley, Elizabeth; Wien, Matt; Bates, David W

    2013-01-01

    While some published research indicates a fairly high frequency of intravenous (IV) medication errors associated with the use of smart infusion pumps, the generalizability of these results is uncertain. Additionally, the lack of a standardized methodology for measuring these errors is an issue. In this study we iteratively developed a web-based data collection tool to capture IV medication errors using a participatory design approach with interdisciplinary experts. Using the developed tool, a prevalence study was then conducted in an academic medical center. The results showed that the tool was easy to use and effectively captured all IV medication errors. Through the prevalence study, violations of hospital policy were found that could potentially place patients at risk, but no critical errors known to contribute to patient harm were noted.

  15. Development of an RTK-GPS positioning application with an improved position error model for smartphones.

    PubMed

    Hwang, Jinsang; Yun, Hongsik; Suh, Yongcheol; Cho, Jeongho; Lee, Dongha

    2012-01-01

    This study developed a smartphone application that provides wireless communication, NRTIP client, and RTK processing features, and which can simplify the Network RTK-GPS system while reducing the required cost. A determination method for an error model in Network RTK measurements was proposed, considering both random and autocorrelation errors, to accurately calculate the coordinates measured by the application using state estimation filters. The performance evaluation of the developed application showed that it could perform high-precision real-time positioning, within several centimeters of error range at a frequency of 20 Hz. A Kalman Filter was applied to the coordinates measured from the application, to evaluate the appropriateness of the determination method for an error model, as proposed in this study. The results were more accurate, compared with those of the existing error model, which only considered the random error. PMID:23201981
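
    The state-estimation step can be illustrated with a scalar Kalman filter on a random-walk position model (a simplification of the paper's error model, which additionally treats autocorrelated errors):

```python
import random

random.seed(5)

q, r = 0.01, 0.5        # process and measurement noise variances (illustrative)
x_true, x_est, p = 0.0, 0.0, 1.0

sq_err_raw = sq_err_kf = 0.0
N = 5000
for _ in range(N):
    x_true += random.gauss(0.0, q ** 0.5)        # true position random walk
    z = x_true + random.gauss(0.0, r ** 0.5)     # noisy GPS-like fix

    # Kalman predict/update for the scalar random-walk model
    p += q                                       # predict covariance
    k_gain = p / (p + r)                         # Kalman gain
    x_est += k_gain * (z - x_est)                # measurement update
    p *= (1 - k_gain)                            # update covariance

    sq_err_raw += (z - x_true) ** 2
    sq_err_kf += (x_est - x_true) ** 2
```

    The filtered estimate has a much smaller squared error than the raw fixes; a better error model (including autocorrelation, as proposed in the paper) sharpens this further.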

  17. Debye Entropic Force and Modified Newtonian Dynamics

    NASA Astrophysics Data System (ADS)

    Li, Xin; Chang, Zhe

    2011-04-01

    Verlinde has suggested that gravity has an entropic origin, and that a gravitational system could be regarded as a thermodynamical system. It is well known that the equipartition law of energy is invalid at very low temperature. Therefore, the entropic force should be modified when the temperature of the holographic screen is very low. It is shown that the modified entropic force is proportional to the square of the acceleration when the temperature of the holographic screen is much lower than the Debye temperature TD. The modified entropic force returns to Newton's law of gravitation when the temperature of the holographic screen is much higher than the Debye temperature. The modified entropic force is connected with modified Newtonian dynamics (MOND). The constant a0 involved in MOND is linear in the Debye frequency ωD, which can be regarded as the largest frequency of the bits in the screen. We find that there is indeed a strong connection between MOND and cosmology in the framework of Verlinde's entropic force, if the holographic screen is taken to be the boundary of the Universe. The Debye frequency is linear in the Hubble constant H0.
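
    The two regimes described correspond to standard MOND phenomenology (the interpolation form and numerical values below are textbook MOND conventions, not taken from this abstract):

```latex
\mu\!\left(\frac{a}{a_{0}}\right) a = a_{N} = \frac{GM}{R^{2}},
\qquad \mu(x)\to 1 \;\; (x \gg 1), \qquad \mu(x)\to x \;\; (x \ll 1),
```

    so for accelerations a ≪ a0 the force law becomes a²/a0 = GM/R², matching the statement that the modified entropic force is proportional to the square of the acceleration, while a ≫ a0 recovers Newton. The cosmological coincidence referred to is a0 ≈ cH0/(2π) ≈ 1.2 × 10⁻¹⁰ m s⁻².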

  18. Frequency modulation noise and linewidth reduction in a semiconductor laser by means of negative frequency feedback technique

    SciTech Connect

    Saito, S.; Nilsson, O.; Yamamoto, Y.

    1985-01-01

    Electrical negative frequency feedback control has been shown to reduce frequency modulation (FM) noise linewidth in semiconductor lasers. The method is based on the direct frequency modulation capability of a semiconductor laser. An error signal is extracted through optical heterodyne frequency discrimination detection using a stable master laser. FM noise is reduced by more than 20 dB and linewidth is reduced by one order of magnitude.
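
    In a simple static-gain model of such a loop (an assumption made here for illustration; the real loop response is frequency dependent), closed-loop frequency fluctuations equal the open-loop fluctuations divided by 1 + G, so a loop gain of G = 9 yields exactly the quoted 20 dB of FM-noise suppression:

```python
import math
import random

random.seed(9)

G = 9.0     # loop gain, so 1 + G = 10 -> 20 dB of FM-noise suppression

# white FM-noise samples (instantaneous frequency deviations, arbitrary units)
open_loop = [random.gauss(0.0, 1.0) for _ in range(10_000)]
# negative feedback divides each frequency fluctuation by (1 + G)
closed_loop = [df / (1.0 + G) for df in open_loop]

var_open = sum(v * v for v in open_loop) / len(open_loop)
var_closed = sum(v * v for v in closed_loop) / len(closed_loop)
suppression_db = 10.0 * math.log10(var_open / var_closed)
```

    Since the Lorentzian linewidth of a laser is proportional to its white FM-noise level, suppressing FM noise by this mechanism narrows the linewidth correspondingly.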

  19. Error field and magnetic diagnostic modeling for W7-X

    SciTech Connect

    Lazerson, Sam A.; Gates, David A.; NEILSON, GEORGE H.; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.

    2014-07-01

    The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady-state (30-minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign, as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.

  20. Study of geopotential error models used in orbit determination error analysis

    NASA Technical Reports Server (NTRS)

    Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

    1991-01-01

    The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. 
Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic

  1. Refractive Errors - Multiple Languages: MedlinePlus

    MedlinePlus


  2. Field errors in hybrid insertion devices

    SciTech Connect

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  3. Medication Errors - Multiple Languages: MedlinePlus

    MedlinePlus


  4. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  5. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  6. Dynamic frequency tuning of electric and magnetic metamaterial response

    DOEpatents

    O'Hara, John F; Averitt, Richard; Padilla, Willie; Chen, Hou-Tong

    2014-09-16

    A geometrically modifiable resonator is comprised of a resonator disposed on a substrate, and a means for geometrically modifying the resonator. The geometrically modifiable resonator can achieve active optical and/or electronic control of the frequency response in metamaterials and/or frequency selective surfaces, potentially with sub-picosecond response times. Additionally, the methods taught here can be applied to discrete geometrically modifiable circuit components such as inductors and capacitors. Principally, controlled conductivity regions, using either reversible photodoping or voltage induced depletion activation, are used to modify the geometries of circuit components, thus allowing frequency tuning of resonators without otherwise affecting the bulk substrate electrical properties. The concept is valid over any frequency range in which metamaterials are designed to operate.

  7. Analysis and classification of human error

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.; Rouse, S. H.

    1983-01-01

    The literature on human error is reviewed with emphasis on theories of error and classification schemes. A methodology for analysis and classification of human error is then proposed which includes a general approach to classification. Identification of possible causes and factors that contribute to the occurrence of errors is also considered. An application of the methodology to the use of checklists in the aviation domain is presented for illustrative purposes.

  8. Error Propagation in a System Model

    NASA Technical Reports Server (NTRS)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
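
    One common way to realize such propagation is worst-case interval arithmetic over the block graph; the block names below are purely illustrative and are not taken from the patent:

```python
# worst-case interval propagation of signal value errors through
# functional blocks of a dataflow model

def gain_block(k):
    """A gain block scales both the value and its absolute error bound."""
    def apply(signal):
        value, error = signal
        return k * value, abs(k) * error
    return apply

def sum_block(a, b):
    """Worst-case error of a sum is the sum of the input error bounds."""
    return a[0] + b[0], a[1] + b[1]

sensor = (5.0, 0.1)                      # (signal value, error bound)
amplified = gain_block(3.0)(sensor)      # error bound scales to 0.3
output = sum_block(amplified, (1.0, 0.05))
```

    Propagating bounds this way lets a designer see which downstream blocks are most affected by an upstream signal value error.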

  9. [Medical errors and conflicts in clinical practice].

    PubMed

    Doskin, V A; Dorinova, E A; Kartoeva, R A; Sokolova, M S

    2014-01-01

    The number of medical errors is increasing. Medical errors have negative impact on the professional activities of physicians. Analysis of the causes and incidence of medical errors and conflicts in clinical practice of foreign and domestic doctors is presented based on the author's observations and didactic materials recommended for training doctors to prevent conflict situations in their professional work and for developing a common strategy for the prevention of medical errors.

  10. Optimized entanglement-assisted quantum error correction

    SciTech Connect

    Taghavi, Soraya; Brun, Todd A.; Lidar, Daniel A.

    2010-10-15

    Using convex optimization, we propose entanglement-assisted quantum error-correction procedures that are optimized for given noise channels. We demonstrate through numerical examples that such an optimized error-correction method achieves higher channel fidelities than existing methods. This improved performance, which leads to perfect error correction for a larger class of error channels, is interpreted in at least some cases by quantum teleportation, but for general channels this interpretation does not hold.

  11. Compensation of optode sensitivity and position errors in diffuse optical tomography using the approximation error approach.

    PubMed

    Mozumder, Meghdoot; Tarvainen, Tanja; Arridge, Simon R; Kaipio, Jari; Kolehmainen, Ville

    2013-01-01

    Diffuse optical tomography is highly sensitive to measurement and modeling errors. Errors in the source and detector coupling and positions can cause significant artifacts in the reconstructed images. Recently the approximation error theory has been proposed to handle modeling errors. In this article, we investigate the feasibility of the approximation error approach to compensate for modeling errors due to inaccurately known optode locations and coupling coefficients. The approach is evaluated with simulations. The results show that the approximation error method can be used to recover from artifacts in reconstructed images due to optode coupling and position errors.
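
    The approximation error approach can be sketched for a linear Gaussian inverse problem: the statistics of ε = (A − Ã)x, the discrepancy between an accurate model and a reduced or misspecified one, are estimated over prior samples and folded into the noise covariance (a generic numpy sketch, not the diffuse optical tomography forward model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 30
A = rng.normal(size=(m, n))                  # "accurate" forward model
A_red = A + 0.1 * rng.normal(size=(m, n))    # reduced/misspecified model
noise_var = 0.01

# approximation error eps = (A - A_red) x, statistics estimated over
# draws from the (unit Gaussian) prior
prior_draws = rng.normal(size=(2000, n))
eps = prior_draws @ (A - A_red).T
eps_mean, eps_cov = eps.mean(axis=0), np.cov(eps, rowvar=False)

def map_estimate(y, model, total_cov, offset):
    """Gaussian MAP estimate with a unit prior: solve the normal equations."""
    W = np.linalg.inv(total_cov)
    H = model.T @ W @ model + np.eye(n)
    return np.linalg.solve(H, model.T @ W @ (y - offset))

naive_err = aec_err = 0.0
for _ in range(20):
    x_true = rng.normal(size=n)
    y = A @ x_true + rng.normal(0.0, noise_var ** 0.5, size=m)
    # conventional estimate: reduced model, modeling error ignored
    x_naive = map_estimate(y, A_red, noise_var * np.eye(m), np.zeros(m))
    # approximation error approach: inflate covariance with eps statistics
    x_aec = map_estimate(y, A_red, noise_var * np.eye(m) + eps_cov, eps_mean)
    naive_err += float(np.linalg.norm(x_naive - x_true))
    aec_err += float(np.linalg.norm(x_aec - x_true))
```

    Ignoring the modeling error makes the conventional estimate over-trust the data and fit artifacts; accounting for ε in the covariance recovers accuracy, which is the effect the paper demonstrates for optode position and coupling errors.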

  12. AU-FREDI - AUTONOMOUS FREQUENCY DOMAIN IDENTIFICATION

    NASA Technical Reports Server (NTRS)

    Yam, Y.

    1994-01-01

    The Autonomous Frequency Domain Identification program, AU-FREDI, is a system of methods, algorithms and software that was developed for the identification of structural dynamic parameters and system transfer function characterization for control of large space platforms and flexible spacecraft. It was validated in the CALTECH/Jet Propulsion Laboratory's Large Spacecraft Control Laboratory. Due to the unique characteristics of this laboratory environment, and the environment-specific nature of many of the software's routines, AU-FREDI should be considered to be a collection of routines which can be modified and reassembled to suit system identification and control experiments on large flexible structures. The AU-FREDI software was originally designed to command plant excitation and handle subsequent input/output data transfer, and to conduct system identification based on the I/O data. Key features of the AU-FREDI methodology are as follows: 1. AU-FREDI has on-line digital filter design to support on-orbit optimal input design and data composition. 2. Data composition of experimental data in overlapping frequency bands overcomes finite actuator power constraints. 3. Recursive least squares sine-dwell estimation accurately handles digitized sinusoids and low frequency modes. 4. The system also includes automated estimation of model order using a product moment matrix. 5. A sample-data transfer function parametrization supports digital control design. 6. Minimum variance estimation is assured with a curve fitting algorithm with iterative reweighting. 7. Robust root solvers accurately factorize high order polynomials to determine frequency and damping estimates. 8. Output error characterization of model additive uncertainty supports robustness analysis. The research objectives associated with AU-FREDI were particularly useful in focusing the identification methodology for realistic on-orbit testing conditions. Rather than estimating the entire structure, as is

  13. Ambulatory prescribing errors among community-based providers in two states

    PubMed Central

    Bates, David W; Jenter, Chelsea; Volk, Lynn A; Barrón, Yolanda; Quaresimo, Jill; Seger, Andrew C; Burdick, Elisabeth; Simon, Steven; Kaushal, Rainu

    2011-01-01

    Objective Little is known about the frequency and types of prescribing errors in the ambulatory setting among community-based, primary care providers. Therefore, the rates and types of prescribing errors were assessed among community-based, primary care providers in two states. Material and Methods A non-randomized cross-sectional study was conducted of 48 providers in New York and 30 providers in Massachusetts, all of whom used paper prescriptions, from September 2005 to November 2006. Using standardized methodology, prescriptions and medical records were reviewed to identify errors. Results 9385 prescriptions were analyzed from 5955 patients. The overall prescribing error rate, excluding illegibility errors, was 36.7 per 100 prescriptions (95% CI 30.7 to 44.0) and did not vary significantly between providers from each state (p=0.39). One or more non-illegibility errors were found in 28% of prescriptions. Rates of illegibility errors were very high (175.0 per 100 prescriptions, 95% CI 169.1 to 181.3). Inappropriate abbreviation and direction errors also occurred frequently (13.4 and 4.2 errors per 100 prescriptions, respectively). Reviewers determined that the vast majority of errors could have been eliminated through the use of e-prescribing with clinical decision support. Discussion Prescribing errors appear to occur at very high rates among community-based primary care providers, especially when compared with studies of academic-affiliated providers that have found nearly threefold lower error rates. Illegibility errors are particularly problematic. Conclusions Further characterizing the prescribing errors of community-based providers may inform strategies to improve ambulatory medication safety, especially the adoption of e-prescribing. Trial registration number http://www.clinicaltrials.gov, NCT00225576. PMID:22140209
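    As a back-of-envelope check on figures like the headline 36.7 errors per 100 prescriptions, a point estimate and naive interval can be computed as below. The function name is an assumption, and the naive Poisson interval is deliberately simplistic: the study's reported CI (30.7 to 44.0) is much wider because it accounts for clustering of errors within providers.

```python
import math

def rate_per_100(errors, prescriptions):
    """Error rate per 100 prescriptions with a naive 95% interval.

    Treats the error count as Poisson; a real analysis would widen the
    interval to reflect clustering of errors within providers."""
    rate = 100.0 * errors / prescriptions
    se = 100.0 * math.sqrt(errors) / prescriptions  # Poisson SE of the count
    return rate, rate - 1.96 * se, rate + 1.96 * se
```

    With roughly 3444 errors over 9385 prescriptions (the count implied by the reported rate), this reproduces the 36.7 point estimate, while its naive interval of about ±1.2 illustrates how much of the reported uncertainty comes from the clustered design rather than the raw counts.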

  14. Error field penetration and locking to the backward propagating wave

    SciTech Connect

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that real frequencies arise for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature, and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  15. Error field penetration and locking to the backward propagating wave

    DOE PAGESBeta

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that real frequencies arise for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature, and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  16. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on both self-detection and automatic detection of random and unanticipated errors. For self-detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make vertical flight planning errors equally apparent. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent, and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a database of world terrain elevations. Information is given in viewgraph form.
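    The "reasonableness check on flight plan modifications" described above can be sketched in a few lines. The thresholds, waypoint format, and function names here are illustrative assumptions, not drawn from any actual avionics system.

```python
import math

# Hypothetical thresholds and (lat, lon) waypoint format, for illustration only.
MAX_LEG_NM = 500.0            # flag suspiciously long legs
MAX_COURSE_CHANGE_DEG = 90.0  # flag implausibly sharp turns

def great_circle_nm(a, b):
    """Great-circle distance in nautical miles between (lat, lon) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    cos_d = (math.sin(lat1) * math.sin(lat2) +
             math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    return 3440.1 * math.acos(min(1.0, max(-1.0, cos_d)))

def bearing_deg(a, b):
    """Initial great-circle bearing from a to b, degrees clockwise from north."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    y = math.sin(lon2 - lon1) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2) -
         math.sin(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    return math.degrees(math.atan2(y, x)) % 360.0

def reasonableness_check(waypoints):
    """Warn about legs that are too long or course changes that are too sharp."""
    warnings = []
    for i in range(len(waypoints) - 1):
        leg = great_circle_nm(waypoints[i], waypoints[i + 1])
        if leg > MAX_LEG_NM:
            warnings.append(f"leg {i}: {leg:.0f} nm exceeds {MAX_LEG_NM:.0f} nm")
    for i in range(len(waypoints) - 2):
        turn = abs((bearing_deg(waypoints[i + 1], waypoints[i + 2]) -
                    bearing_deg(waypoints[i], waypoints[i + 1]) + 180.0)
                   % 360.0 - 180.0)
        if turn > MAX_COURSE_CHANGE_DEG:
            warnings.append(f"waypoint {i + 1}: {turn:.0f} deg course change")
    return warnings
```

    A fat-fingered latitude (37.0 entered as 3.7, say) produces a multi-thousand-mile leg that trips the length check — exactly the class of gross entry error such a monitor is meant to surface before its consequences are realized.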

  17. Error Analysis of Quadrature Rules. Classroom Notes

    ERIC Educational Resources Information Center

    Glaister, P.

    2004-01-01

    Approaches to the determination of the error in numerical quadrature rules are discussed and compared. This article considers the problem of the determination of errors in numerical quadrature rules, taking Simpson's rule as the principal example. It suggests an approach based on truncation error analysis of numerical schemes for differential…
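    The truncation-error result such notes build on — composite Simpson's rule has error -(b-a)h⁴f⁗(ξ)/180 — is easy to verify numerically: the rule is exact for cubics, and halving the step should shrink the error by roughly 16×. A minimal sketch (the function name is an assumption):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)  # 4-2-4-... interior weights
    return s * h / 3.0

# Error is -(b - a) * h**4 * f''''(xi) / 180: exact for cubics, and
# doubling n (halving h) cuts the error by about 2**4 = 16.
exact = 1.0 - math.cos(1.0)               # integral of sin on [0, 1]
err_coarse = abs(simpson(math.sin, 0.0, 1.0, 8) - exact)
err_fine = abs(simpson(math.sin, 0.0, 1.0, 16) - exact)
```

    For a smooth integrand like sin, the ratio err_coarse / err_fine comes out close to 16, consistent with the h⁴ truncation-error analysis the article discusses.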

  18. Misclassification Errors and Categorical Data Analysis.

    ERIC Educational Resources Information Center

    Katz, Barry M.; McSweeney, Maryellen

    1979-01-01

    Errors of misclassification and their effects on categorical data analysis are discussed. The chi-square test for equality of two proportions is examined in the context of errorful categorical data. The effects of such errors are illustrated. A correction procedure is developed and discussed. (Author/MH)
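    One standard model behind such corrections (a common textbook form, not necessarily the exact procedure developed in the paper) treats the observed proportion as a mixture of correctly and incorrectly classified cases; when the false-positive and false-negative rates are known, the relation inverts directly:

```python
def observed_proportion(p_true, fp, fn):
    """Proportion classified 'positive' when true positives are missed at
    rate fn and true negatives are misread as positive at rate fp:
    p_obs = p_true * (1 - fn) + (1 - p_true) * fp."""
    return p_true * (1.0 - fn) + (1.0 - p_true) * fp

def corrected_proportion(p_obs, fp, fn):
    """Invert the misclassification model, assuming fp and fn are known
    (e.g., from a validation subsample). Requires fp + fn < 1."""
    return (p_obs - fp) / (1.0 - fp - fn)
```

    For example, a true proportion of 0.30 observed through a classifier with fp = 0.05 and fn = 0.10 appears as 0.305; applying the correction recovers 0.30. Comparing uncorrected proportions with a chi-square test effectively tests the distorted values, which is why such errors can bias the test toward or away from rejection.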

  19. Understanding EFL Students' Errors in Writing

    ERIC Educational Resources Information Center

    Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti

    2015-01-01

    Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. To assist learners in successfully acquiring writing skills, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of errors that occur in the writing of EFL students. It…

  20. Errors in Standardized Tests: A Systemic Problem.

    ERIC Educational Resources Information Center

    Rhoades, Kathleen; Madaus, George

    The nature and extent of human error in educational testing over the past 25 years were studied. In contrast to the random measurement error expected in all tests, the presence of human error is unexpected and brings unknown, often harmful, consequences for students and their schools. Using data from a variety of sources, researchers found 103…