Science.gov

Sample records for modified frequency error

  1. Spatial frequency domain error budget

    SciTech Connect

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. 
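
    The root-sum-square bookkeeping at the core of such an error budget can be sketched per spatial-frequency band. The source names and RMS values below are illustrative assumptions, not figures from the paper:

```python
import math

# Hypothetical per-band error contributions (RMS, nm) from three sources,
# binned into low / mid / high spatial-frequency bands.
error_sources = {
    "spindle":   {"low": 12.0, "mid": 4.0, "high": 1.0},
    "thermal":   {"low": 20.0, "mid": 2.0, "high": 0.5},
    "vibration": {"low": 3.0,  "mid": 6.0, "high": 5.0},
}

def rss_budget(sources):
    """Root-sum-square independent error sources within each frequency band,
    giving a banded prediction instead of a single net RMS number."""
    bands = {}
    for contrib in sources.values():
        for band, rms in contrib.items():
            bands[band] = bands.get(band, 0.0) + rms ** 2
    return {band: math.sqrt(total) for band, total in bands.items()}

budget = rss_budget(error_sources)
```

    Keeping the bands separate is exactly what lets the budget distinguish low-frequency form error from high-frequency finish error.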

  2. Frequency-Tracking-Error Detector

    NASA Technical Reports Server (NTRS)

    Randall, Richard L.

    1990-01-01

    Frequency-tracking-error detector compares average period of output signal from band-pass tracking filter with average period of signal of frequency 100 f(sub 0) that controls center frequency f(sub 0) of tracking filter. Measures difference between f(sub 0) and frequency of one of periodic components in output of bearing sensor. Bearing sensor is accelerometer, strain gauge, or deflectometer mounted on bearing housing. Detector part of system of electronic equipment used to measure vibrations in bearings in rotating machinery.
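
    The core comparison, an average period estimated from the filter output versus a reference period derived from the 100 f(sub 0) control signal, can be sketched as follows; the sampling rate and frequencies are hypothetical:

```python
import math

def average_period(samples, dt):
    """Average period estimated from positive-going zero crossings."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0.0 <= samples[i]]
    if len(crossings) < 2:
        return None
    return (crossings[-1] - crossings[0]) * dt / (len(crossings) - 1)

# Hypothetical numbers: tracking-filter output at 52 Hz, center frequency 50 Hz.
dt = 1e-4
f_signal, f_center = 52.0, 50.0
filtered = [math.sin(2 * math.pi * f_signal * i * dt) for i in range(2000)]
period_ref = 1.0 / f_center               # derived from the 100*f0 control signal
period_sig = average_period(filtered, dt)
tracking_error = period_sig - period_ref  # negative: signal sits above center
```
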

  3. Error Analysis of Modified Langevin Dynamics

    NASA Astrophysics Data System (ADS)

    Redon, Stephane; Stoltz, Gabriel; Trstanova, Zofia

    2016-06-01

    We consider Langevin dynamics associated with a modified kinetic energy vanishing for small momenta. This allows us to freeze slow particles, and hence avoid the re-computation of inter-particle forces, which leads to computational gains. On the other hand, the statistical error may increase since there are a priori more correlations in time. The aim of this work is first to prove the ergodicity of the modified Langevin dynamics (which fails to be hypoelliptic), and next to analyze how the asymptotic variance on ergodic averages depends on the parameters of the modified kinetic energy. Numerical results illustrate the approach, both for low-dimensional systems where we resort to a Galerkin approximation of the generator, and for more realistic systems using Monte Carlo simulations.
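
    A minimal sketch of the idea, assuming a smoothstep-blended kinetic energy whose derivative vanishes below a momentum threshold; the paper's actual functional form and splitting scheme may differ:

```python
import math
import random

def dU_kin(p, p_min=1.0, p_max=2.0):
    """Derivative of a hypothetical modified kinetic energy: zero for |p| < p_min
    (the particle is frozen), the standard p for |p| > p_max, smoothly blended
    in between."""
    a = abs(p)
    if a < p_min:
        return 0.0
    if a > p_max:
        return p
    s = (a - p_min) / (p_max - p_min)
    blend = 3 * s ** 2 - 2 * s ** 3       # C^1 smoothstep
    return math.copysign(a * blend, p)

def langevin_step(q, p, force, dt=0.01, gamma=1.0, beta=1.0):
    """One step of a simple splitting scheme for the modified dynamics:
    half kick, free flight with dq/dt = U'(p), Ornstein-Uhlenbeck, half kick."""
    p += 0.5 * dt * force(q)
    q += dt * dU_kin(p)                   # frozen particles do not move
    c = math.exp(-gamma * dt)
    p = c * p + math.sqrt((1 - c ** 2) / beta) * random.gauss(0.0, 1.0)
    p += 0.5 * dt * force(q)
    return q, p
```

    When the momentum stays below p_min over a step, the position (and hence the forces acting through it) need not be recomputed, which is the source of the computational gain.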

  5. Error control coding for multi-frequency modulation

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.

    1990-06-01

    Multi-frequency modulation (MFM) has been developed at NPS using both quadrature-phase-shift-keyed (QPSK) and quadrature-amplitude-modulated (QAM) signals with good bit error performance at reasonable signal-to-noise ratios. Improved performance can be achieved by the introduction of error control coding. This report documents a FORTRAN simulation of the implementation of error control coding into an MFM communication link with additive white Gaussian noise. Four Reed-Solomon codes were incorporated, two for 16-QAM and two for 32-QAM modulation schemes. The error control codes used were modified from the conventional Reed-Solomon codes in that one information symbol was sacrificed to parity in order to use a simplified decoding algorithm which requires no iteration and enhances error detection capability. Bit error rates as a function of SNR and E(sub b)/N(sub 0) were analyzed, and bit error performance was weighed against reduction in information rate to determine the value of the codes.
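
    The uncoded baseline such a simulation is weighed against has a standard closed form; the sketch below gives the Gray-coded QPSK bit error rate over AWGN and the information-rate penalty of an (n, k) block code. The parameters are illustrative, not taken from the report:

```python
import math

def qpsk_ber(ebn0_db):
    """Theoretical Gray-coded QPSK bit error rate over AWGN:
    Pb = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def coded_info_rate(n, k):
    """Fraction of channel bits that carry information for an (n, k) block code;
    the report's modified RS codes give up one extra symbol to parity, lowering k."""
    return k / n
```

    Weighing coding gain against the rate loss k/n is exactly the trade-off the report evaluates.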

  6. Evaluation and control of spatial frequency errors in reflective telescopes

    NASA Astrophysics Data System (ADS)

    Zhang, Xuejun; Zeng, Xuefeng; Hu, Haixiang; Zheng, Ligong

    2015-08-01

    In this paper, the influence of manufacturing residual errors on image quality was studied. By analyzing the statistical distribution characteristics of the residual errors and their effects on the PSF and MTF, we divided these errors into low-, middle-, and high-frequency domains using the unit of cycles per aperture. Two types of mid-frequency errors, algorithm-intrinsic and tool-path-induced, were analyzed. Control methods in current deterministic polishing processes, such as MRF or IBF, were presented.

  7. Compensation Low-Frequency Errors in TH-1 Satellite

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Renxiang; Hu, Xin

    2016-06-01

    Topographic mapping products at 1:50,000 scale can be produced using satellite photogrammetry without ground control points (GCPs), which requires high accuracy of the exterior orientation elements. Usually, the attitudes among the exterior orientation elements are obtained from the attitude determination system on the satellite. Based on theoretical analysis and practice, the attitude determination system exhibits not only high-frequency errors, but also low-frequency errors related to the latitude of the satellite orbit and to time. The low-frequency errors affect the location accuracy without GCPs, especially the horizontal accuracy. For the SPOT5 satellite, a latitudinal model was proposed to correct attitudes using data from approximately 20 calibration sites, and the location accuracy was improved. Low-frequency errors are also found in the Tian Hui 1 (TH-1) satellite. Accordingly, a method for compensating low-frequency errors is proposed in the ground image processing of TH-1, which can detect and compensate the low-frequency errors automatically without using GCPs. This paper deals with the low-frequency errors in TH-1 as follows: First, the low-frequency errors of the attitude determination system are analyzed. Second, compensation models are proposed for the bundle adjustment. Finally, the method is verified using TH-1 data. The test results show that the low-frequency errors of the attitude determination system can be compensated during bundle adjustment, which improves the location accuracy without GCPs and has played an important role in the consistency of global location accuracy.
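
    A one-frequency sinusoid-plus-bias model is a common way to capture such a periodic low-frequency attitude error; the sketch below fits it by linear least squares. The model form is an assumption for illustration, not TH-1's actual compensation model:

```python
import math

def fit_periodic_bias(times, errors, omega):
    """Least-squares fit of err(t) ~ a*sin(w*t) + b*cos(w*t) + c, a hypothetical
    one-frequency model of a latitude/time-dependent low-frequency attitude error."""
    rows = [[math.sin(omega * t), math.cos(omega * t), 1.0] for t in times]
    # Normal equations A^T A x = A^T y, solved by Gauss-Jordan elimination.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * e for r, e in zip(rows, errors)) for i in range(3)]
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]
```

    Once (a, b, c) are estimated from adjustment residuals, subtracting the model from the measured attitudes plays the role of the low-frequency compensation.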

  8. Frequency of Consonant Articulation Errors in Dysarthric Speech

    ERIC Educational Resources Information Center

    Kim, Heejin; Martin, Katie; Hasegawa-Johnson, Mark; Perlman, Adrienne

    2010-01-01

    This paper analyses consonant articulation errors in dysarthric speech produced by seven American-English native speakers with cerebral palsy. Twenty-three consonant phonemes were transcribed with diacritics as necessary in order to represent non-phoneme misarticulations. Error frequencies were examined with respect to six variables: articulatory…

  9. Evaluation of stress wave propagation through rock mass using a modified dominant frequency method

    NASA Astrophysics Data System (ADS)

    Fan, L. F.; Wu, Z. J.

    2016-09-01

    This paper presents an evaluation of stress wave propagation through a rock mass using a modified dominant frequency method. The effective velocity and transmission coefficient of stress waves propagating through rock masses with different joint stiffnesses are investigated. The results are validated against the theoretical method, and the effects of incident frequency on the calculation accuracy are discussed. The results show that the modified dominant frequency method can be used to predict the effective velocity when the frequency of the stress wave is within the low-frequency or high-frequency range; however, the error cannot be ignored when the frequency is in the transitional range. Likewise, the method can be used to predict the transmission coefficient when the frequency is within the low-frequency or optimal-frequency range; however, the error cannot be ignored when the wave is within the high-frequency range, approaching 40% when the frequency is sufficiently large. Finally, the optimal stiffness-frequency relationships corresponding to the maximum calculation error of the effective velocity and the minimum calculation error of the transmission coefficient are proposed.
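
    For a single joint under normal incidence, the displacement-discontinuity model gives a closed-form transmission coefficient that shows the frequency dependence discussed above; the rock properties below are illustrative assumptions:

```python
import math

def transmission_coefficient(freq_hz, joint_stiffness, density=2650.0, velocity=5000.0):
    """|T| across one linear elastic joint (displacement-discontinuity model):
    |T| = 1 / sqrt(1 + (z*w / (2*k))^2), with z = rho*c the seismic impedance,
    w the angular frequency, and k the joint specific stiffness (Pa/m).
    Default rock properties are illustrative, not from the paper."""
    z = density * velocity
    w = 2.0 * math.pi * freq_hz
    return 1.0 / math.sqrt(1.0 + (z * w / (2.0 * joint_stiffness)) ** 2)
```

    Low frequencies pass almost unattenuated (|T| near 1), while high frequencies are strongly filtered, which is the regime dependence the dominant-frequency method has to track.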

  10. High Frequency of Imprinted Methylation Errors in Human Preimplantation Embryos

    PubMed Central

    White, Carlee R.; Denomme, Michelle M.; Tekpetey, Francis R.; Feyles, Valter; Power, Stephen G. A.; Mann, Mellissa R. W.

    2015-01-01

    Assisted reproductive technologies (ARTs) represent the best chance for infertile couples to conceive, although increased risks for morbidities exist, including imprinting disorders. This increased risk could arise from ARTs disrupting genomic imprints during gametogenesis or preimplantation. The few studies examining ART effects on genomic imprinting primarily assessed poor quality human embryos. Here, we examined day 3 and blastocyst stage, good to high quality, donated human embryos for imprinted SNRPN, KCNQ1OT1 and H19 methylation. Seventy-six percent of day 3 embryos and 50% of blastocysts exhibited perturbed imprinted methylation, demonstrating that extended culture did not pose a greater risk for imprinting errors than short culture. Comparison of embryos with normal and abnormal methylation did not reveal any confounding factors. Notably, two embryos from male factor infertility patients using donor sperm harboured aberrant methylation, suggesting errors in these embryos cannot be explained by infertility alone. Overall, these results indicate that ART human preimplantation embryos possess a high frequency of imprinted methylation errors. PMID:26626153

  11. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope-frequency distributions and two slope-frequency distributions from actual lunar data was examined in order to ensure that these errors do not cause excessive overestimates of the algebraic standard deviations of the slope-frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope-frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.
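
    A quadrature-subtraction correction of this kind can be sketched as follows, assuming two independent elevation readings per slope and noise adding in quadrature to the true slope dispersion; the paper's exact correction may differ:

```python
import math

def corrected_slope_sd(observed_sd_deg, reading_error_m, slope_length_m):
    """Remove photogrammetric reading error from an observed slope standard
    deviation. Assumes each slope uses two independent elevation readings over
    baseline L, so the reading-induced slope noise is sqrt(2)*sigma_r/L (radians),
    and that it adds in quadrature to the true slope dispersion."""
    obs = math.radians(observed_sd_deg)
    noise = math.sqrt(2.0) * reading_error_m / slope_length_m
    if obs <= noise:
        return 0.0   # observed dispersion fully explained by reading error
    return math.degrees(math.sqrt(obs ** 2 - noise ** 2))
```

    The small-reading-error / long-baseline limit reproduces the paper's observation that the correction is then negligible.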

  12. The testing of the aspheric mirror high-frequency band error

    NASA Astrophysics Data System (ADS)

    Wan, JinLong; Li, Bo; Li, XinNan

    2015-08-01

    In recent years, the high-frequency errors of mirror surfaces have gradually been taken seriously, and the manufacture of advanced telescopes carries clear requirements on high-frequency errors. However, the off-axis aspheric sub-mirrors used in such telescopes are large, and measuring the full-aperture surface shape with an interferometer requires a complex optical compensation device. We therefore propose a stitching-based method for measuring the high-frequency errors of aspheric mirrors. The method uses no compensation components and measures only subaperture surface shapes. By analyzing the Zernike polynomial coefficients corresponding to the frequency errors, removing the first 15 Zernike terms, and then stitching the surface maps, the high-frequency errors over the full aperture of the tested mirror are obtained. A 330 mm off-axis aspheric hexagonal mirror was measured with this method, yielding a complete map of its high-frequency surface errors and demonstrating the feasibility of the approach.
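
    On a single subaperture, removing the low-order terms before stitching amounts to a least-squares fit-and-subtract. The 1-D polynomial sketch below stands in for the 2-D removal of the first 15 Zernike terms; the degree and data are illustrative:

```python
def detrend_low_order(xs, profile, degree=3):
    """Least-squares polynomial fit and subtraction: a 1-D stand-in for removing
    low-order Zernike terms from a subaperture map, leaving the mid/high-
    frequency error content."""
    m = degree + 1
    # Normal equations for the monomial basis, solved by Gauss-Jordan elimination.
    rows = [[x ** j for j in range(m)] for x in xs]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    aty = [sum(r[i] * p for r, p in zip(rows, profile)) for i in range(m)]
    a = [ata[i] + [aty[i]] for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(m):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [u - f * v for u, v in zip(a[r], a[col])]
    coeffs = [a[i][m] / a[i][i] for i in range(m)]
    fit = [sum(c * x ** j for j, c in enumerate(coeffs)) for x in xs]
    return [p - f for p, f in zip(profile, fit)]
```

    A low-order input is removed entirely, while a high-frequency component survives nearly untouched, which is the separation the stitching step relies on.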

  13. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
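
    The underlying estimate, the residual frequency of the error signal recovered from quadrature samples, can be sketched with a plain least-squares line fit to the unwrapped phase; this illustrates the idea behind the estimator, not the patented algorithm itself:

```python
import math

def residual_frequency(i_samples, q_samples, dt):
    """Estimate the frequency of the error signal (signal minus reference) from
    quadrature samples: unwrap the phases atan2(Q, I), then least-squares fit a
    straight line; the slope is 2*pi*delta_f."""
    phases, prev, offset = [], 0.0, 0.0
    for i_k, q_k in zip(i_samples, q_samples):
        ph = math.atan2(q_k, i_k)
        if phases:                       # unwrap jumps larger than pi
            d = ph - prev
            if d > math.pi:
                offset -= 2.0 * math.pi
            elif d < -math.pi:
                offset += 2.0 * math.pi
        prev = ph
        phases.append(ph + offset)
    n = len(phases)
    t = [k * dt for k in range(n)]
    t_mean, p_mean = sum(t) / n, sum(phases) / n
    slope = (sum((tk - t_mean) * (pk - p_mean) for tk, pk in zip(t, phases))
             / sum((tk - t_mean) ** 2 for tk in t))
    return slope / (2.0 * math.pi)

# Quadrature samples of a hypothetical 2.5 Hz offset between signal and reference.
dt = 0.01
I = [math.cos(2.0 * math.pi * 2.5 * k * dt) for k in range(200)]
Q = [math.sin(2.0 * math.pi * 2.5 * k * dt) for k in range(200)]
```

    Driving this estimate to zero by retuning the numerically controlled oscillator closes the loop described in the patent.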

  14. Nature and frequency of medication errors in a geriatric ward: an Indonesian experience

    PubMed Central

    Ernawati, Desak Ketut; Lee, Ya Ping; Hughes, Jeffery David

    2014-01-01

    Purpose To determine the nature and frequency of medication errors during medication delivery processes in a public teaching hospital geriatric ward in Bali, Indonesia. Methods A 20-week prospective study on medication errors occurring during the medication delivery process was conducted in a geriatric ward in a public teaching hospital in Bali, Indonesia. Participants selected were inpatients aged more than 60 years. Patients were excluded if they had a malignancy, were undergoing surgery, or receiving chemotherapy treatment. The occurrence of medication errors in prescribing, transcribing, dispensing, and administration were detected by the investigator providing in-hospital clinical pharmacy services. Results Seven hundred and seventy drug orders and 7,662 drug doses were reviewed as part of the study. There were 1,563 medication errors detected among the 7,662 drug doses reviewed, representing an error rate of 20.4%. Administration errors were the most frequent medication errors identified (59%), followed by transcription errors (15%), dispensing errors (14%), and prescribing errors (7%). Errors in documentation were the most common form of administration errors. Of these errors, 2.4% were classified as potentially serious and 10.3% as potentially significant. Conclusion Medication errors occurred in every stage of the medication delivery process, with administration errors being the most frequent. The majority of errors identified in the administration stage were related to documentation. Provision of in-hospital clinical pharmacy services could potentially play a significant role in detecting and preventing medication errors. PMID:24940067

  15. Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth

    PubMed Central

    Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan

    2015-01-01

    Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors and special care should be taken when working on molars. PMID:26347779

  17. Bounding higher-order ionosphere errors for the dual-frequency GPS user

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Walter, T.; Blanch, J.; Enge, P.

    2008-10-01

    Civil signals at L2 and L5 frequencies herald a new phase of Global Positioning System (GPS) performance. Dual-frequency users typically assume a first-order approximation of the ionosphere index of refraction, combining the GPS observables to eliminate most of the ranging delay, on the order of meters, introduced into the pseudoranges. This paper estimates the higher-order group and phase errors that occur from assuming the ordinary first-order dual-frequency ionosphere model using data from the Federal Aviation Administration's Wide Area Augmentation System (WAAS) network on a solar maximum quiet day and an extremely stormy day postsolar maximum. We find that during active periods, when ionospheric storms may introduce slant range delays at L1 as high as 100 m, the higher-order group errors in the L1-L2 or L1-L5 dual-frequency combination can be tens of centimeters. The group and phase errors are no longer equal and opposite, so these errors accumulate in carrier smoothing of the dual-frequency code observable. We show the errors in the carrier-smoothed code are due to higher-order group errors and, to a lesser extent, to higher-order phase rate errors. For many applications, this residual error is sufficiently small as to be neglected. However, such errors can impact geodetic applications as well as the error budgets of GPS Augmentation Systems providing Category III precision approach.
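
    The first-order combination the users start from can be written down directly; the sketch below shows why the 1/f^2 delay cancels while the higher-order terms discussed above remain. The pseudorange numbers are synthetic:

```python
# GPS L1/L2 carrier frequencies (Hz) and the first-order ionosphere-free
# pseudorange combination.
F_L1 = 1575.42e6
F_L2 = 1227.60e6

def iono_free(p1, p2, f1=F_L1, f2=F_L2):
    """First-order ionosphere-free code combination: the first-order delay
    scales as 1/f^2, so (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2) cancels it exactly.
    Higher-order group/phase terms (the tens of centimeters discussed above)
    are not removed."""
    g1, g2 = f1 ** 2, f2 ** 2
    return (g1 * p1 - g2 * p2) / (g1 - g2)

# Synthetic check: 20,000 km geometric range with a 5 m first-order delay at L1;
# the same slant TEC produces a (f1/f2)^2 larger delay at L2.
rho, i1 = 20000e3, 5.0
p1 = rho + i1
p2 = rho + i1 * (F_L1 / F_L2) ** 2
```
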

  18. "Coded and Uncoded Error Feedback: Effects on Error Frequencies in Adult Colombian EFL Learners' Writing"

    ERIC Educational Resources Information Center

    Sampson, Andrew

    2012-01-01

    This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…

  19. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low-frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. First, we introduce the overall flow of on-orbit low-frequency error analysis and calibration, which includes detection of the optical axis angle variation of the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Second, we use the optical axis angle change detection method to analyze the law of low-frequency error variation. Third, we use relative calibration and information fusion among the star sensors to unify the datum and output high-precision attitude. Finally, we construct the low-frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model can well describe the law of low-frequency error variation, and that the uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is obviously improved after the step-wise calibration.

  20. Frequency of medication errors in an emergency department of a large teaching hospital in southern Iran.

    PubMed

    Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid

    2014-01-01

    This study was conducted with the purpose of determining the frequency of medication errors (MEs) occurring in tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over a course of 54 6-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 (68.5%) MEs were recorded in total. In other words, 3.5 errors per patient and almost 0.69 errors per medication are reported to have occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process with a share of 37.6%, followed by errors of prescription and transcription with a share of 21.1% and 10% of errors, respectively. Omission (7.6%) and wrong time error (4.4%) were the most frequent administration errors. The less-experienced nurses (P=0.04), higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors marked the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and employing the more experienced of them in EDs can help reduce nursing errors. Addressing the shortcomings with further research should result in reduction of MEs in EDs. PMID:25525391

  1. Frequency of medication errors in an emergency department of a large teaching hospital in southern Iran

    PubMed Central

    Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid

    2014-01-01

    This study was conducted with the purpose of determining the frequency of medication errors (MEs) occurring in tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over a course of 54 6-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 (68.5%) MEs were recorded in total. In other words, 3.5 errors per patient and almost 0.69 errors per medication are reported to have occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process with a share of 37.6%, followed by errors of prescription and transcription with a share of 21.1% and 10% of errors, respectively. Omission (7.6%) and wrong time error (4.4%) were the most frequent administration errors. The less-experienced nurses (P=0.04), higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors marked the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and employing the more experienced of them in EDs can help reduce nursing errors. Addressing the shortcomings with further research should result in reduction of MEs in EDs. PMID:25525391

  2. Packet error probabilities in frequency-hopped spread spectrum packet radio networks. Markov frequency hopping patterns considered

    NASA Astrophysics Data System (ADS)

    Georgiopoulos, M.; Kazakos, P.

    1987-09-01

    We compute the packet error probability induced in a frequency-hopped spread spectrum packet radio network that utilizes first-order Markov frequency hopping patterns. The frequency spectrum is divided into q frequency bins, and the packets are divided into M bytes each. Every user in the network sends each of the M bytes of his packet in a frequency bin that is different from the bin used for the previous byte, but equally likely to be any one of the remaining q-1 bins (Markov frequency hopping patterns). Furthermore, different users in the network utilize statistically independent frequency hopping patterns. Provided that K users have simultaneously transmitted their packets on the channel and a receiver has locked on to one of these K packets, we present a method for the computation of P sub e (K), i.e., the probability that this packet is incorrectly decoded. Furthermore, we present numerical results (P sub e (K) versus K) for various values of the multiple access interference K, when Reed-Solomon (RS) codes are used for the encoding of packets. Finally, some useful comparisons are made with the packet error probability induced under the assumption that byte errors are independent; based on these comparisons, we can easily evaluate the performance of our spread spectrum system.
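
    The independent-byte benchmark the paper compares against can be sketched directly. The sketch below treats every hit byte as a byte error and assumes uniform (memoryless) hopping over q bins, both simplifying assumptions relative to the Markov analysis:

```python
import math

def byte_hit_probability(k_users, q):
    """Probability that a given byte of the observed packet is hit by at least
    one of the other K-1 users, with each interferer landing uniformly in one
    of q bins (independence approximation)."""
    return 1.0 - (1.0 - 1.0 / q) ** (k_users - 1)

def packet_error_probability(k_users, q, m_bytes, t_correct):
    """P(packet lost) = P(more than t byte errors among M), treating byte hits
    as independent and every hit as a byte error; an RS code correcting t
    symbol errors recovers the packet otherwise (pessimistic sketch)."""
    p = byte_hit_probability(k_users, q)
    p_ok = sum(math.comb(m_bytes, j) * p ** j * (1.0 - p) ** (m_bytes - j)
               for j in range(t_correct + 1))
    return 1.0 - p_ok
```

    Comparing this curve against the Markov-pattern result is precisely the comparison the paper uses to gauge the cost of the independence assumption.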

  3. Disentangling the impacts of outcome valence and outcome frequency on the post-error slowing

    PubMed Central

    Wang, Lijun; Tang, Dandan; Zhao, Yuanfang; Hitchman, Glenn; Wu, Shanshan; Tan, Jinfeng; Chen, Antao

    2015-01-01

    Post-error slowing (PES) reflects efficient outcome monitoring, manifested as slower reaction times after errors. The cognitive control account assumes that PES depends on error information, whereas the orienting account posits that it depends on error frequency. This raises the question of how outcome valence and outcome frequency separably influence the generation of PES. To address this issue, we varied the probability of the observation errors committed by the "partner" (50/50 and 20/80, correct/error) in an observation-execution task and investigated the corresponding behavioral and neural effects. On each trial, participants first viewed the outcome of a flanker-run that was supposedly performed by the "partner", and then performed a flanker-run themselves. We observed PES in both error rate conditions. However, electroencephalographic data suggested that the error-related potentials (oERN and oPe) and the rhythmic oscillations associated with attentional processes (alpha band) were sensitive to outcome valence and outcome frequency, respectively. Importantly, oERN amplitude was positively correlated with PES. Taken together, these findings support the assumption of the cognitive control account, suggesting that outcome valence and outcome frequency are both involved in PES. Moreover, the generation of PES is indexed by the oERN, whereas the modulation of PES size is reflected in the alpha band. PMID:25732237

  4. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1988-01-01

    Low-frequency errors of a commonly used non-stationary stochastic model (uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise modulated by the envelope first, and then filtered).
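
    The two constructions differ only in the order of modulation and filtering. A toy sketch, using a one-pole lowpass and an illustrative envelope rather than the paper's ground-motion filter:

```python
import math
import random

def one_pole_lowpass(x, a=0.9):
    """Simple one-pole lowpass standing in for the ground-motion filter."""
    y, out = 0.0, []
    for v in x:
        y = a * y + (1.0 - a) * v
        out.append(y)
    return out

def envelope(n):
    """Illustrative attack/decay amplitude envelope e(t)."""
    return [min(k / (0.1 * n), 1.0) * math.exp(-3.0 * max(k - 0.3 * n, 0.0) / n)
            for k in range(n)]

def modulated_filtered_white_noise(n, seed=1):
    """Uniformly modulated filtered white noise: filter first, modulate after.
    This ordering carries the built-in low-frequency errors described above."""
    rng = random.Random(seed)
    w = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [e * v for e, v in zip(envelope(n), one_pole_lowpass(w))]

def filtered_shot_noise(n, seed=1):
    """Filtered shot noise: modulate the white noise first, then filter,
    the ordering the authors recommend to eliminate those errors."""
    rng = random.Random(seed)
    w = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return one_pole_lowpass([e * v for e, v in zip(envelope(n), w)])
```
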

  5. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface; the measured data form the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are gathered, and the data error is analyzed by examining error frequency and applying the analysis-of-variance method from mathematical statistics. The paper also addresses determination of measured-data accuracy and the difficulty of measuring particular parts of the human body, further studies the causes of data errors, and summarizes the key points for minimizing errors. By analyzing measured data on the basis of error frequency, the paper provides reference material to promote the development of the garment industry.

  6. Frequency-domain correction of sensor dynamic error for step response.

    PubMed

    Yang, Shuang-Long; Xu, Ke-Jun

    2012-11-01

    To obtain accurate results in dynamic measurements, sensors must have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damping ratios and low natural frequencies. In this case, dynamic error correction methods can be applied to the sensor responses to eliminate the effect of their dynamic characteristics. Frequency-domain correction of sensor dynamic error is a common method. Using the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from step-response calibration experimental data, because of the leakage error and invalid FCF values caused by the cyclic extension of the finite-length step input-output data. To solve these problems, data-splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps as well as the sensor dynamic error correction procedure using the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind tunnel strain gauge balance to verify its effectiveness. The results show that the settling time of the balance step response is shortened to 10 ms after frequency-domain correction (less than 1/30 of its value before correction), and the overshoot falls within 5% (less than 1/10 of its value before correction). The dynamic measurement accuracy of the balance is improved significantly. PMID:23206091
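    The core of a frequency-domain correction can be sketched independently of the paper's balance data. In the toy example below (the sensor parameters and FFT pipeline are illustrative assumptions, not the authors' implementation), a simulated under-damped second-order sensor's step response is differentiated into an impulse response, an FCF is formed as the reciprocal of its spectrum with a magnitude floor guarding against the invalid-value problem the abstract mentions, and a corrected step is recovered:

    ```python
    import numpy as np

    def step_response(wn=40.0, zeta=0.1, dt=1e-3, n=4096):
        """Forward-Euler step response of a hypothetical under-damped
        second-order sensor: y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u."""
        y = v = 0.0
        out = []
        for _ in range(n):
            a = wn * wn * (1.0 - y) - 2.0 * zeta * wn * v
            v += a * dt
            y += v * dt
            out.append(y)
        return np.array(out)

    y = step_response()

    # Differentiating the step response yields an impulse response, sidestepping
    # the leakage of transforming a non-periodic step directly (which the paper
    # handles with data-splicing preprocessing).
    h = np.diff(y, prepend=0.0)
    H = np.fft.rfft(h)

    # FCF = 1/H, floored where the sensor passes almost no energy.
    mask = np.abs(H) > 1e-3 * np.abs(H).max()
    FCF = np.zeros_like(H)
    FCF[mask] = 1.0 / H[mask]

    # Correct the spectrum and rebuild the step: it should settle at 1
    # with far less ringing than the raw response y.
    h_corr = np.fft.irfft(H * FCF, n=len(y))
    step_corr = np.cumsum(h_corr)
    ```

    The raw response overshoots strongly (damping ratio 0.1), while the corrected step settles near 1; the residual ripple comes from the bins zeroed by the floor, which is why the paper interpolates the FCF rather than discarding those values.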

  7. Frequency-domain correction of sensor dynamic error for step response

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Xu, Ke-Jun

    2012-11-01

    To obtain accurate results in dynamic measurements it is required that the sensors should have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damp ratios and low natural frequencies. In this case some dynamic error correction methods can be adopted for dealing with the sensor responses to eliminate the effect of their dynamic characteristics. The frequency-domain correction of sensor dynamic error is a common method. Using the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained according to the step response calibration experimental data. This is because of the leakage error and invalid FCF value caused by the cycle extension of the finite length step input-output intercepting data. In order to solve these problems the data splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps as well as sensor dynamic error correction procedure by the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of the bar-shaped wind tunnel strain gauge balance so as to verify its effectiveness. The dynamic error correction results show that the adjust time of the balance step response is shortened to 10 ms (shorter than 1/30 before correction) after frequency-domain correction, and the overshoot is fallen within 5% (less than 1/10 before correction) as well. The dynamic measurement accuracy of the balance is improved significantly.

  8. Inverse material identification in coupled acoustic-structure interaction using a modified error in constitutive equation functional

    NASA Astrophysics Data System (ADS)

    Warner, James E.; Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc

    2014-09-01

    This work focuses on the identification of heterogeneous linear elastic moduli in the context of frequency-domain, coupled acoustic-structure interaction (ASI), using either solid displacement or fluid pressure measurement data. The approach postulates the inverse problem as an optimization problem where the solution is obtained by minimizing a modified error in constitutive equation (MECE) functional. The latter measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, while incorporating the measurement data as additional quadratic error terms. We demonstrate two strategies for selecting the MECE weighting coefficient to produce regularized solutions to the ill-posed identification problem: 1) the discrepancy principle of Morozov, and 2) an error-balance approach that selects the weight parameter as the minimizer of another functional involving the ECE and the data misfit. Numerical results demonstrate that the proposed methodology can successfully recover elastic parameters in 2D and 3D ASI systems from response measurements taken in either the solid or fluid subdomains. Furthermore, both regularization strategies are shown to produce accurate reconstructions when the measurement data is polluted with noise. The discrepancy principle is shown to produce nearly optimal solutions, while the error-balance approach, although not optimal, remains effective and does not need a priori information on the noise level.

  9. Inverse Material Identification in Coupled Acoustic-Structure Interaction using a Modified Error in Constitutive Equation Functional

    PubMed Central

    Warner, James E.; Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc

    2014-01-01

    This work focuses on the identification of heterogeneous linear elastic moduli in the context of frequency-domain, coupled acoustic-structure interaction (ASI), using either solid displacement or fluid pressure measurement data. The approach postulates the inverse problem as an optimization problem where the solution is obtained by minimizing a modified error in constitutive equation (MECE) functional. The latter measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, while incorporating the measurement data as additional quadratic error terms. We demonstrate two strategies for selecting the MECE weighting coefficient to produce regularized solutions to the ill-posed identification problem: 1) the discrepancy principle of Morozov, and 2) an error-balance approach that selects the weight parameter as the minimizer of another functional involving the ECE and the data misfit. Numerical results demonstrate that the proposed methodology can successfully recover elastic parameters in 2D and 3D ASI systems from response measurements taken in either the solid or fluid subdomains. Furthermore, both regularization strategies are shown to produce accurate reconstructions when the measurement data is polluted with noise. The discrepancy principle is shown to produce nearly optimal solutions, while the error-balance approach, although not optimal, remains effective and does not need a priori information on the noise level. PMID:25339790
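    The Morozov discrepancy principle invoked above generalizes beyond MECE. A minimal sketch on a toy Tikhonov-regularized linear problem (the matrix, noise level, and sweep strategy are all illustrative assumptions, not the authors' ASI formulation) shows the regularization weight being chosen so the residual matches the known noise norm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy ill-posed linear problem standing in for the regularization trade-off:
    # recover x from noisy b = A x + noise, picking the weight alpha by the
    # Morozov discrepancy principle (residual norm drops to the noise norm).
    n = 50
    A = np.array([[np.exp(-0.1 * abs(i - j)) for j in range(n)] for i in range(n)])
    x_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
    noise = 0.01 * rng.standard_normal(n)
    b = A @ x_true + noise
    delta = np.linalg.norm(noise)  # noise level, assumed known (as Morozov requires)

    def solve(alpha):
        """Tikhonov-regularized least squares via the normal equations."""
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    # Sweep alpha downward; keep the first (largest) weight whose residual has
    # reached the noise level -- fitting the data any closer only fits noise.
    alpha_star = None
    for alpha in np.logspace(2, -10, 60):
        if np.linalg.norm(A @ solve(alpha) - b) <= delta:
            alpha_star = alpha
            break
    x_rec = solve(alpha_star)
    ```

    The error-balance alternative described in the abstract would instead minimize a functional mixing the regularization term and the data misfit, trading the need for a known noise level for a modest loss of optimality.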

  10. Design methodology accounting for fabrication errors in manufactured modified Fresnel lenses for controlled LED illumination.

    PubMed

    Shim, Jongmyeong; Kim, Joongeok; Lee, Jinhyung; Park, Changsu; Cho, Eikhyun; Kang, Shinill

    2015-07-27

    The increasing demand for lightweight, miniaturized electronic devices has prompted the development of small, high-performance optical components for light-emitting diode (LED) illumination. As such, the Fresnel lens is widely used in applications due to its compact configuration. However, the vertical groove angle between the optical axis and the groove inner facets in a conventional Fresnel lens creates an inherent Fresnel loss, which degrades optical performance. Modified Fresnel lenses (MFLs) have been proposed in which the groove angles along the optical paths are carefully controlled; however, in practice, the optical performance of MFLs is inferior to the theoretical performance due to fabrication errors, as conventional design methods do not account for fabrication errors as part of the design process. In this study, the Fresnel loss and the loss area due to microscopic fabrication errors in the MFL were theoretically derived to determine optical performance. Based on this analysis, a design method for the MFL accounting for the fabrication errors was proposed. MFLs were fabricated using an ultraviolet imprinting process and an injection molding process, two representative processes with differing fabrication errors. The MFL fabrication error associated with each process was examined analytically and experimentally to investigate our methodology. PMID:26367631

  11. Comparison of Aseptic Compounding Errors Before and After Modified Laboratory and Introductory Pharmacy Practice Experiences

    PubMed Central

    Owora, Arthur H.; Kirkpatrick, Alice E.

    2015-01-01

    Objective. To determine whether aseptic compounding errors were reduced at the end of the third professional year after modifying pharmacy practice laboratories and implementing an institutional introductory pharmacy practice experience (IPPE). Design. An aseptic compounding laboratory, previously occurring during the third-year spring semester, was added to the second-year spring semester. An 80-hour institutional IPPE was also added in the summer between the second and third years. Instructors recorded aseptic compounding errors using a grading checklist for second-year and third-year student assessments. Third-year student aseptic compounding errors were assessed prior to the curricular changes and for 2 subsequent years for students on the Oklahoma City and Tulsa campuses of the University of Oklahoma. Assessment. Both third-year cohorts committed fewer aseptic technique errors than they did during their second years, and the probability was significantly lower for students on the Oklahoma City campus. The probability of committing major aseptic technique errors was significantly lower for 2 consecutive third-year cohorts after the curricular changes. Conclusion. The addition of second-year aseptic compounding laboratory experiences and third-year institutional IPPE content reduced instructor-assessed errors at the end of the third year. PMID:26889070

  12. Online public reactions to frequency of diagnostic errors in US outpatient care

    PubMed Central

    Giardina, Traber Davis; Sarkar, Urmimala; Gourley, Gato; Modi, Varsha; Meyer, Ashley N.D.; Singh, Hardeep

    2016-01-01

    Background Diagnostic errors pose a significant threat to patient safety but little is known about public perceptions of diagnostic errors. A study published in BMJ Quality & Safety in 2014 estimated that diagnostic errors affect at least 5% of US adults (or 12 million) per year. We sought to explore online public reactions to media reports on the reported frequency of diagnostic errors in the US adult population. Methods We searched the World Wide Web for any news article reporting findings from the study. We then gathered all the online comments made in response to the news articles to evaluate public reaction to the newly reported diagnostic error frequency (n=241). Two coders conducted content analyses of the comments and an experienced qualitative researcher resolved differences. Results Overall, there were few comments made regarding the frequency of diagnostic errors. However, in response to the media coverage, 44 commenters shared personal experiences of diagnostic errors. Additionally, commentary centered on diagnosis-related quality of care as affected by two emergent categories: (1) US health care providers (n=79; 63 commenters) and (2) US health care reform-related policies, most commonly the Affordable Care Act (ACA) and insurance/reimbursement issues (n=62; 47 commenters). Conclusion The public appears to have substantial concerns about the impact of the ACA and other reform initiatives on the diagnosis-related quality of care. However, policy discussions on diagnostic errors are largely absent from the current national conversation on improving quality and safety. Because outpatient diagnostic errors have emerged as a major safety concern, researchers and policymakers should consider evaluating the effects of policy and practice changes on diagnostic accuracy. PMID:27347474

  13. Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors

    NASA Astrophysics Data System (ADS)

    Yan, Feifei; Chang, Wenge; Li, Xiangyang

    2015-12-01

    Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is the key technique of bistatic SAR (BiSAR) system, and raw data simulation is an effective tool for verifying the time and frequency synchronization techniques. According to the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through 2-D inverse Stolt transform in 2-D frequency domain and phase compensation in range-Doppler frequency domain, this method can significantly improve the efficiency of scene raw data simulation. Simulation results of point targets and extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.

  14. Sparsity-based moving target localization using multiple dual-frequency radars under phase errors

    NASA Astrophysics Data System (ADS)

    Al Kadry, Khodour; Ahmad, Fauzia; Amin, Moeness G.

    2015-05-01

    In this paper, we consider moving target localization in urban environments using a multiplicity of dual-frequency radars. Dual-frequency radars offer the benefit of reduced complexity and fast computation time, thereby permitting real-time indoor target localization and tracking. The multiple radar units are deployed in a distributed system configuration, which provides robustness against target obscuration. We develop the dual-frequency signal model for the distributed radar system under phase errors and employ a joint sparse scene reconstruction and phase error correction technique to provide accurate target location and velocity estimates. Simulation results are provided that validate the performance of the proposed scheme under both full and reduced data volumes.

  15. Impact of radar systematic error on the orthogonal frequency division multiplexing chirp waveform orthogonality

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Liang, Xingdong; Chen, Longyong; Ding, Chibiao

    2015-01-01

    Orthogonal frequency division multiplexing (OFDM) chirp waveforms, composed of two successive identical linear frequency modulated subpulses, are a newly proposed orthogonal waveform scheme for multi-input multi-output synthetic aperture radar (SAR) systems. However, according to the waveform model, radar systematic error, which introduces phase or amplitude differences between the subpulses of the OFDM waveform, significantly degrades the orthogonality. The impact of radar systematic error on the waveform orthogonality is mainly caused by systematic nonlinearity rather than by thermal noise or frequency-dependent systematic error. Due to the influence of causal filters, the first subpulse leaks into the second one. The leaked signal interacts with the second subpulse in the nonlinear components of the transmitter, producing a dramatic phase distortion at the beginning of the second subpulse. The resultant distortion, which leads to a phase difference between the subpulses, seriously damages the waveform's orthogonality. This paper addresses the impact of radar systematic error on the waveform orthogonality; moreover, the impact of the systematic nonlinearity on the waveform is avoided by adding a standby interval between the subpulses. Theoretical analysis is validated by practical experiments based on a C-band SAR system.

  16. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

    NASA Technical Reports Server (NTRS)

    Elander, Valjean; Koshak, William; Phanord, Dieudonne

    2004-01-01

    The ZEUS long-range VLF arrival-time-difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global-scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing ZEUS sites (as well as potential future sites) to simulate arrival time data between each source and each ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
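    The Monte Carlo portion of such an error analysis is easy to sketch in miniature. The example below (a hypothetical one-dimensional two-receiver geometry, not the IO retrieval itself) draws the same 20-microsecond Gaussian timing errors over 100 trials and reports the resulting RMS location error:

    ```python
    import math
    import random

    random.seed(42)

    C = 299792.458   # speed of light, km/s
    SIGMA_T = 20e-6  # arrival-timing error standard deviation, s (per the abstract)
    L = 4000.0       # receivers at x = -L and x = +L km (hypothetical geometry)

    def locate_1d(dt):
        """Invert the arrival-time difference for a source on the baseline:
        dt = t1 - t2 = 2 x / C for receivers at -L and +L."""
        return C * dt / 2.0

    true_x = 1500.0  # km, hypothetical source position
    errors = []
    for _ in range(100):  # 100 noisy trials per location, as in the abstract
        t1 = (true_x + L) / C + random.gauss(0.0, SIGMA_T)
        t2 = (L - true_x) / C + random.gauss(0.0, SIGMA_T)
        errors.append(locate_1d(t1 - t2) - true_x)

    rms = math.sqrt(sum(e * e for e in errors) / len(errors))
    # Expected RMS is C * SIGMA_T / sqrt(2), on the order of 4 km here.
    ```

    Kilometer-scale errors from tens of microseconds of timing noise are consistent with the long-baseline VLF setting; ZEAL's maps extend this idea to full Earth geometry and many-receiver solutions.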

  17. Error detection and correction for a multiple frequency quaternary phase shift keyed signal

    NASA Astrophysics Data System (ADS)

    Hopkins, Kevin S.

    1989-06-01

    A multiple frequency quaternary phase shift keyed (MFQPSK) signaling system was developed and experimentally tested in a controlled environment. To ensure that the quality of the received signal is such that information recovery is possible, error detection/correction (EDC) must be used. The available EDC coding schemes are reviewed and their application to the MFQPSK signaling system is analyzed. Hamming, Golay, Bose-Chaudhuri-Hocquenghem (BCH), and Reed-Solomon (R-S) block codes, as well as convolutional codes, are presented and analyzed in the context of specific MFQPSK system parameters. A computer program was developed to compute bit error probabilities as a function of signal-to-noise ratio. Results demonstrate that various EDC schemes are suitable for the MFQPSK signal structure, and that significant performance improvements are possible with the use of certain error correction codes.
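    Of the block codes listed, the Hamming code is the simplest to demonstrate. A minimal Hamming(7,4) encoder/decoder (a textbook construction, not the thesis's implementation) corrects any single bit error via its syndrome:

    ```python
    def hamming74_encode(d):
        """Encode 4 data bits into a 7-bit Hamming codeword (parity at 1, 2, 4)."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        """Correct at most one flipped bit and return the 4 data bits."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error, 0 if none
        if syndrome:
            c[syndrome - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]

    data = [1, 0, 1, 1]
    word = hamming74_encode(data)
    word[4] ^= 1  # single channel error
    recovered = hamming74_decode(word)
    ```

    The Golay, BCH, and R-S codes trade more redundancy and decoding complexity for multi-bit (or, for R-S, burst) error correction, which is why the choice depends on the MFQPSK channel parameters.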

  18. Robust nonstationary jammer mitigation for GPS receivers with instantaneous frequency error tolerance

    NASA Astrophysics Data System (ADS)

    Wang, Ben; Zhang, Yimin D.; Qin, Si; Amin, Moeness G.

    2016-05-01

    In this paper, we propose a nonstationary jammer suppression method for GPS receivers when the signals are sparsely sampled. Missing data samples induce noise-like artifacts in the time-frequency (TF) distribution and ambiguity function of the received signals, which lead to reduced capability and degraded performance in jammer signature estimation and excision. In the proposed method, a data-dependent TF kernel is utilized to mitigate the artifacts, and sparse reconstruction methods are then applied to obtain instantaneous frequency (IF) estimates of the jammers. In addition, an error tolerance on the IF estimate is applied to achieve robust jammer suppression performance in the presence of IF estimation inaccuracy.

  19. Minimizing high spatial frequency residual error in active space telescope mirrors

    NASA Astrophysics Data System (ADS)

    Gray, Thomas L.; Smith, Matthew W.; Cohan, Lucy E.; Miller, David W.

    2009-08-01

    The trend in future space telescopes is towards larger apertures, which provide increased sensitivity and improved angular resolution. Lightweight, segmented, rib-stiffened, actively controlled primary mirrors are an enabling technology, permitting large aperture telescopes to meet the mass and volume restrictions imposed by launch vehicles. Such mirrors, however, are limited in the extent to which their discrete surface-parallel electrostrictive actuators can command global prescription changes. Inevitably some amount of high spatial frequency residual error is added to the wavefront due to the discrete nature of the actuators. A parameterized finite element mirror model is used to simulate this phenomenon and determine designs that mitigate high spatial frequency residual errors in the mirror surface figure. Two predominant residual components are considered: dimpling induced by embedded actuators and print-through induced by facesheet polishing. A gradient descent algorithm is combined with the parameterized mirror model to allow rapid trade space navigation and optimization of the mirror design, yielding advanced design heuristics formulated in terms of minimum machinable rib thickness. These relationships produce mirrors that satisfy manufacturing constraints and minimize uncorrectable high spatial frequency error.

  20. Compensation of body shake errors in terahertz beam scanning single frequency holography for standoff personnel screening

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Chao; Sun, Zhao-Yang; Zhao, Yu; Wu, Shi-You; Fang, Guang-You

    2016-08-01

    In the terahertz (THz) band, the inherent shake of the human body may strongly impair the image quality of a beam scanning single frequency holography system for personnel screening. To realize accurate shake compensation in image processing, it is necessary to develop a high-precision measurement system. However, in many cases, different parts of a human body may shake to different extents, greatly increasing the difficulty of measuring body shake errors reasonably for image reconstruction. In this paper, a body shake error compensation algorithm based on the raw data is proposed. To analyze the effect of body shake on the raw data, a model of the echoed signal is rebuilt, considering both the beam scanning mode and the body shake. From the rebuilt signal model, we derive a body shake error estimation method to compensate for the phase error. Simulations on the reconstruction of point targets with shake errors and proof-of-principle experiments on the human body in the 0.2-THz band both confirm the effectiveness of the proposed compensation algorithm. Project supported by the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant No. YYYJ-1123).
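    For a single-frequency system, the phase error injected by body shake, and its compensation, reduce to a conjugate phase rotation. The sketch below (range and shake magnitude are illustrative values, not the paper's data) shows why sub-millimeter motion matters at a 1.5 mm wavelength:

    ```python
    import cmath
    import math

    F = 0.2e12        # 0.2 THz single-frequency system (as in the experiments)
    LAM = 3e8 / F     # wavelength: 1.5 mm

    def echo_phase(rng_m):
        """Two-way phase term of a single-frequency echo from range rng_m (m)."""
        return cmath.exp(-1j * 4.0 * math.pi * rng_m / LAM)

    true_range = 2.0  # m, hypothetical standoff distance
    shake = 0.3e-3    # 0.3 mm of body shake: large against a 1.5 mm wavelength

    raw = echo_phase(true_range + shake)
    ideal = echo_phase(true_range)
    # Given a shake estimate (obtained from the raw data, per the abstract),
    # compensation is a conjugate phase rotation:
    compensated = raw * cmath.exp(1j * 4.0 * math.pi * shake / LAM)
    ```

    A 0.3 mm displacement produces about 2.5 rad of phase error, enough to defocus a holographic reconstruction; the hard part, which the paper addresses, is estimating a different shake for each part of the body from the raw data alone.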

  1. Three-dimensional transient elastodynamic inversion using the modified error in constitutive relation

    NASA Astrophysics Data System (ADS)

    Bonnet, Marc; Aquino, Wilkins

    2014-10-01

    This work is concerned with large-scale three-dimensional inversion under transient elastodynamic conditions by means of the modified error in constitutive relation (MECR), an energy-based cost functional. A peculiarity of time-domain MECR formulations is that each evaluation involves the computation of two elastodynamic states (one forward, one backward) which moreover are coupled. This coupling creates a major computational bottleneck, making MECR-based inversion difficult for spatially 2D or 3D configurations. To overcome this obstacle, we propose an approach whose main ingredients are (a) setting the entire computational procedure in a consistent time-discrete framework that incorporates the chosen time-stepping algorithm, and (b) using an iterative SOR-like method for the resulting stationarity equations. The resulting MECR-based inversion algorithm is demonstrated on a 3D transient elastodynamic example involving over 500,000 unknown elastic moduli.

  2. Correction of phase-error for phase-resolved k-clocked optical frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Mo, Jianhua; Li, Jianan; de Boer, Johannes F.

    2012-01-01

    Phase-resolved optical frequency domain imaging (OFDI) has emerged as a promising technique for blood flow measurement in human tissues. Phase stability is essential for this technique to achieve high accuracy in flow velocity measurement. In OFDI systems that use k-clocking for data acquisition, phase error occurs due to jitter in the data acquisition electronics. We present a statistical analysis of jitter represented as point shifts of the k-clocked spectrum, and demonstrate a real-time phase-error correction algorithm for phase-resolved OFDI. A balanced-detection OFDI system centered at 1310 nm was developed around a 50 kHz wavelength-swept laser (Axsun Technologies). To evaluate the performance of the algorithm, a stationary gold mirror was employed as the sample for phase analysis. Furthermore, we implemented the algorithm for imaging of human skin. Good-quality structural and Doppler images can be observed in real time after phase-error correction. The results show that the algorithm can effectively correct the jitter-induced phase error in an OFDI system.

  3. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    SciTech Connect

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
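    The sensitivity to a misidentified center can be reproduced numerically. In the sketch below (star parameters and sampling are assumptions for illustration, not the paper's closed-form solution), a center offset phase-modulates the angular sinusoid sampled on a circle, reducing the Fourier amplitude at the star's spoke frequency and hence the apparent SFR:

    ```python
    import math

    N_CYCLES = 36  # spoke cycles of a hypothetical sinusoidal Siemens star

    def star_intensity(x, y):
        """Sinusoidal Siemens star: intensity varies sinusoidally with angle."""
        return 0.5 * (1.0 + math.cos(N_CYCLES * math.atan2(y, x)))

    def harmonic_amplitude(radius, dx, m=3600):
        """Fourier amplitude at the spoke frequency of the intensity sampled on
        a circle of given radius around a center misplaced by dx."""
        re = im = 0.0
        for i in range(m):
            a = 2.0 * math.pi * i / m
            v = star_intensity(dx + radius * math.cos(a), radius * math.sin(a))
            re += v * math.cos(N_CYCLES * a)
            im += v * math.sin(N_CYCLES * a)
        return 2.0 * math.hypot(re, im) / m

    a_true = harmonic_amplitude(100.0, 0.0)  # correct center: full amplitude 0.5
    a_off = harmonic_amplitude(100.0, 5.0)   # 5% radial offset: amplitude collapses
    ```

    A 5% radial offset phase-modulates the sampled sinusoid with index roughly N_CYCLES times the relative offset, spreading energy into sidebands; this is the mechanism by which the measured SFR drops below its correctly centered value.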

  4. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGESBeta

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  5. EEG error potentials detection and classification using time-frequency features for robot reinforcement learning.

    PubMed

    Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali

    2015-08-01

    In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using Steady State Visual Evoked Potential (SSVEP) techniques, the ErrP that appear when a classification error occurs are not easily recognizable by examining only the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experimental results on real EEG data show that using the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy of up to 97% for 50 EEG segments using a 2-class SVM classifier. PMID:26736619

  6. Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint

    SciTech Connect

    Florita, A.; Hodge, B. M.; Milligan, M.

    2012-08-01

    The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
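    The goodness-of-fit comparison among candidate frequency distributions can be illustrated on synthetic data. In the sketch below, a Laplace draw stands in for heavy-tailed forecast errors, and the distribution choices and log-likelihood contrast are illustrative assumptions, not the study's metrics:

    ```python
    import math
    import random
    import statistics

    random.seed(7)

    # Synthetic stand-in for site-level wind power forecast errors (MW); a
    # double-exponential (Laplace) draw mimics their heavier-than-Gaussian tails.
    errors = [random.expovariate(1.0) * random.choice([-1.0, 1.0])
              for _ in range(5000)]

    mu = statistics.fmean(errors)
    sigma = statistics.stdev(errors)
    med = statistics.median(errors)
    b = statistics.fmean(abs(e - med) for e in errors)  # Laplace scale (MLE)

    def loglik_normal(xs):
        return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
                   - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

    def loglik_laplace(xs):
        return sum(-math.log(2 * b) - abs(x - med) / b for x in xs)

    # Goodness-of-fit contrast: the heavier-tailed candidate should win here.
    better_fit = "laplace" if loglik_laplace(errors) > loglik_normal(errors) else "normal"
    ```

    The same comparison, repeated across temporal and spatial aggregation levels and with richer candidate families, is the kind of analysis the study uses to recommend distributions for operational reserve studies.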

  7. A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design

    ERIC Educational Resources Information Center

    Wang, Tianyou; Brennan, Robert L.

    2009-01-01

    Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…

  8. Coupling Modified Constitutive Relation Error, Model Reduction and Kalman Filtering Algorithms for Real-Time Parameters Identification

    NASA Astrophysics Data System (ADS)

    Marchand, Basile; Chamoin, Ludovic; Rey, Christian

    2015-11-01

    In this work we propose a new identification strategy based on coupling a probabilistic data assimilation method with a deterministic inverse problem approach that uses the modified Constitutive Relation Error energy functional. The idea is to offer efficient identification for time-dependent systems despite highly corrupted data. In order to perform real-time identification, the modified Constitutive Relation Error is here combined with a model reduction method based on Proper Generalized Decomposition. The proposed strategy is applied to two thermal problems involving identification of time-dependent boundary conditions or material parameters.
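    As a rough, hypothetical illustration of the data assimilation side of such a strategy (a scalar textbook Kalman filter, not the paper's mCRE/PGD formulation), a constant parameter can be recovered in real time from noisy observations:

```python
import random

def kalman_constant(observations, r, q=0.0, x0=0.0, p0=1e6):
    """Scalar Kalman filter estimating a constant state x from y = x + noise.

    r: observation noise variance; q: process noise variance (0 for a
    truly constant parameter). Returns the final state estimate."""
    x, p = x0, p0
    for y in observations:
        p += q                      # predict step
        k = p / (p + r)             # Kalman gain
        x += k * (y - x)            # update with the innovation
        p *= (1 - k)
    return x

random.seed(1)
true_param = 4.2                    # hypothetical material parameter
ys = [true_param + random.gauss(0, 0.5) for _ in range(500)]
est = kalman_constant(ys, r=0.25)
print(round(est, 2))                # converges near the true value
```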

  9. Wrongful Conviction: Perceptions of Criminal Justice Professionals Regarding the Frequency of Wrongful Conviction and the Extent of System Errors

    ERIC Educational Resources Information Center

    Ramsey, Robert J.; Frank, James

    2007-01-01

    Drawing on a sample of 798 Ohio criminal justice professionals (police, prosecutors, defense attorneys, judges), the authors examine respondents' perceptions regarding the frequency of system errors (i.e., professional error and misconduct suggested by previous research to be associated with wrongful conviction), and wrongful felony conviction.…

  10. Influence of nonhomogeneous earth on the rms phase error and beam-pointing errors of large, sparse high-frequency receiving arrays

    NASA Astrophysics Data System (ADS)

    Weiner, M. M.

    1994-01-01

    The performance of ground-based high-frequency (HF) receiving arrays is reduced when the array elements have electrically small ground planes. The array rms phase error and beam-pointing errors, caused by multipath rays reflected from a nonhomogeneous Earth, are determined for a sparse array of elements that are modeled as Hertzian dipoles in close proximity to Earth with no ground planes. Numerical results are presented for cases of randomly distributed and systematically distributed Earth nonhomogeneities where one-half of the vertically polarized array elements are located in proximity to one type of Earth and the remaining half are located in proximity to a second type of Earth. The maximum rms phase errors, for the cases examined, are 18 deg and 9 deg for randomly distributed and systematically distributed nonhomogeneities, respectively. The maximum beam-pointing errors are 0 and 0.3 beam widths for randomly distributed and systematically distributed nonhomogeneities, respectively.

  11. Frequency behaviour of the modified Jiles-Atherton model

    NASA Astrophysics Data System (ADS)

    Chwastek, Krzysztof

    2008-07-01

    In the paper, the behaviour of the recently modified Jiles-Atherton model of hysteresis under a distorted magnetization pattern is examined. The modification is aimed at improving the modelling of reversible processes: the Langevin function in the anhysteretic magnetization equation is replaced with the more general Brillouin function. The structure of the model equation is similar to that of the product Preisach model. Dynamic effects are taken into account in the description by introducing a lagged response with respect to the input.

  12. Flood Frequency Analyses Using a Modified Stochastic Storm Transposition Method

    NASA Astrophysics Data System (ADS)

    Fang, N. Z.; Kiani, M.

    2015-12-01

    Research shows that areas with similar topography and climatic environment have comparable precipitation occurrences. Reproduction and realization of historical rainfall events provide foundations for frequency analysis and the advancement of meteorological studies. Stochastic Storm Transposition (SST) is a method for such a purpose: it enables hydrologic frequency analyses by transposing observed historical storm events to the sites of interest. However, many previous SST studies reveal drawbacks from simplified Probability Density Functions (PDFs) that ignore restrictions on transposing rainfall. The goal of this study is to stochastically examine the impacts of extreme events on all locations in a homogeneity zone. Since storms with the same probability of occurrence over homogeneous areas do not have identical hydrologic impacts, the authors utilize detailed precipitation parameters, including the probability of occurrence of a certain depth and the number of occurrences of extreme events, which are both incorporated into a joint probability function. The new approach reduces the bias from uniformly transposing storms, which erroneously increases the probability of occurrence of storms in areas with higher rainfall depths. This procedure is iterated to simulate storm events for one thousand years as the basis for updating frequency analysis curves such as IDF and FFA. The study area is the Upper Trinity River watershed, including the Dallas-Fort Worth metroplex, with a total area of 6,500 mi². This is the first time the SST method has been examined at such a broad scale, using 20 years of radar rainfall data.

  13. Demonstration of the frequency offset errors introduced by an incorrect setting of the Zeeman/magnetic field adjustment on the cesium beam frequency standard

    NASA Technical Reports Server (NTRS)

    Kaufmann, D. C.

    1976-01-01

    The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in ten to the tenth. The effects of maladjustment are demonstrated and suggestions are discussed on how to avoid the subtle traps associated with C field adjustments.
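    To put the worst-case offset in perspective, a constant fractional frequency error accumulates time error linearly, so 2.5 parts in ten to the tenth corresponds to roughly 21.6 microseconds of clock error per day:

```python
def accumulated_time_error(fractional_offset, seconds):
    """Time error accumulated by a clock running at a constant fractional
    frequency offset for the given interval."""
    return fractional_offset * seconds

# Worst-case offset from mis-setting the C field to the wrong Ramsey
# peak (2.5 parts in 10^10), accumulated over one day (86400 s).
err_us = accumulated_time_error(2.5e-10, 86400) * 1e6
print(round(err_us, 1))  # microseconds of error per day
```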

  14. Modeling work zone crash frequency by quantifying measurement errors in work zone length.

    PubMed

    Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet

    2013-06-01

    Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety while implementing necessary changes on roadways is an important challenge that traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in the explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses, often without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset of 60 work zones in New Jersey. Results showed that the proposed approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover, it is shown that use of the traditional NB approach in this context can lead to overestimation of the effect of work zone length on crash occurrence. PMID:23563145
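    For reference, a minimal sketch of the negative binomial (NB2) probability mass function that underlies such crash frequency models; the mean and dispersion values below are hypothetical, not estimates from the New Jersey dataset:

```python
import math

def nb2_pmf(y, mu, alpha):
    """Negative binomial (NB2) pmf with mean mu and dispersion alpha,
    the count model commonly used for crash frequency."""
    r = 1.0 / alpha
    log_coeff = math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
    logp = log_coeff + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu))
    return math.exp(logp)

# Hypothetical work zone with an expected 2 crashes and dispersion 0.5.
probs = [nb2_pmf(y, mu=2.0, alpha=0.5) for y in range(100)]
print(round(sum(probs), 6))  # pmf sums to ~1 over the support
print(round(probs[0], 3))    # P(zero crashes) = (r/(r+mu))^r = 0.25
```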

  15. Analysis of 454 sequencing error rate, error sources, and artifact recombination for detection of low-frequency drug resistance mutations in HIV-1 DNA

    PubMed Central

    2013-01-01

    Background 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results We constructed two HIV-1 RT clones. Clone A was a wild type sequence. Clone B was identical to clone A except it contained 13 introduced drug resistant mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Although less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. 
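    The figures reported above imply that the modified PCR conditions reduce the recombinant fraction in the 50:50 mixture from 14% to roughly 0.5%:

```python
# Figures from the abstract: 14% of reads from the 50:50 mixture were
# recombinants under standard PCR, and the modified conditions reduced
# PCR-based recombination 27-fold.
standard_fraction = 0.14
improved_fraction = standard_fraction / 27
print(f"{improved_fraction:.2%}")  # about 0.52%
```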

  16. Effect of mid- and high-spatial frequencies on optical performance. [surface error effects on reflecting telescopes

    NASA Technical Reports Server (NTRS)

    Noll, R. J.

    1979-01-01

    In many of today's telescopes the effects of surface errors on image quality and scattered light are very important. The influence of optical fabrication surface errors on the performance of an optical system is discussed. The methods developed by Hopkins (1957) for aberration tolerancing and Barakat (1972) for random wavefront errors are extended to the examination of mid- and high-spatial frequency surface errors. The discussion covers a review of the basic concepts of image quality, an examination of manufacturing errors as a function of image quality performance, a demonstration of mirror scattering effects in relation to surface errors, and some comments on the nature of the correlation functions. Illustrative examples are included.

  17. Frequency, Types, and Potential Clinical Significance of Medication-Dispensing Errors

    PubMed Central

    Bohand, Xavier; Simon, Laurent; Perrier, Eric; Mullot, Hélène; Lefeuvre, Leslie; Plotton, Christian

    2009-01-01

    INTRODUCTION AND OBJECTIVES: Many dispensing errors occur in the hospital, and these can endanger patients. The purpose of this study was to assess the rate of dispensing errors in a unit dose drug dispensing system, to categorize the most frequent types of errors, and to evaluate their potential clinical significance. METHODS: A prospective study using a direct observation method to detect medication-dispensing errors was used. From March 2007 to April 2007, “errors detected by pharmacists” and “errors detected by nurses” were recorded under six categories: unauthorized drug, incorrect form of drug, improper dose, omission, incorrect time, and deteriorated drug errors. The potential clinical significance of the “errors detected by nurses” was evaluated. RESULTS: Among the 734 filled medication cassettes, 179 errors were detected, corresponding to a total of 7249 correctly filled and omitted unit doses. An overall error rate of 2.5% was found. Errors detected by pharmacists and nurses represented 155 (86.6%) and 24 (13.4%) of the 179 errors, respectively. The most frequent types of errors were improper dose (n = 57, 31.8%) and omission (n = 54, 30.2%). Nearly 45% of the 24 errors detected by nurses had the potential to cause a significant (n = 7, 29.2%) or serious (n = 4, 16.6%) adverse drug event. CONCLUSIONS: Even though none of the errors reached patients in this study, a 2.5% error rate indicates the need to improve the unit dose drug-dispensing system. Furthermore, it is almost certain that this study failed to detect some medication errors, further arguing for strategies to prevent their recurrence. PMID:19142545
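    The reported overall rate follows directly from the counts in the abstract:

```python
def percentage(part, whole):
    """Simple percentage helper for the dispensing-error counts."""
    return 100.0 * part / whole

rate = percentage(179, 7249)          # errors per unit-dose opportunity
improper_dose = percentage(57, 179)   # improper-dose share of all errors
print(round(rate, 1), round(improper_dose, 1))  # 2.5 31.8
```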

  18. Error analysis for intrinsic quality factor measurement in superconducting radio frequency resonators

    NASA Astrophysics Data System (ADS)

    Melnychuk, O.; Grassellino, A.; Romanenko, A.

    2014-12-01

    In this paper, we discuss error analysis for intrinsic quality factor (Q0) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27]. Applying this approach to cavity data collected at Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q0 and Eacc to be at the level of approximately 4% for input coupler coupling parameter β1 in the [0.5, 2.5] range. Above 2.5 (below 0.5) Q0 uncertainty increases (decreases) with β1 whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27], is independent of β1. Overall, our estimated Q0 uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27].
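    One common way Q0 is obtained in such vertical tests is from the measured field decay time and the input coupling. A minimal sketch under the simplifying assumption that the transmitted-probe coupling β2 is negligible (the cavity numbers below are hypothetical, not Fermilab data):

```python
import math

def unloaded_q(f0_hz, tau_s, beta1):
    """Unloaded quality factor from the loaded Q (via the measured field
    decay time tau) and the input coupling parameter beta1, assuming any
    other couplings are negligible:
        QL = 2*pi*f0*tau,   Q0 = QL * (1 + beta1)."""
    ql = 2 * math.pi * f0_hz * tau_s
    return ql * (1 + beta1)

# Hypothetical 1.3 GHz cavity with a 1 s decay time and near-unity coupling.
q0 = unloaded_q(1.3e9, 1.0, 1.0)
print(f"{q0:.2e}")  # on the order of 1e10, typical of SRF cavities
```

The error analysis in the paper propagates the uncertainties of each measured input (decay time, power levels, β1) through relations like these, including their correlations.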

  19. Control of mid-spatial frequency errors considering the pad groove feature in smoothing polishing process.

    PubMed

    Nie, Xuqing; Li, Shengyi; Hu, Hao; Li, Qi

    2014-10-01

    Mid-spatial frequency error (MSFR) should be strictly controlled in modern optical systems. As an effective approach to suppress MSFR, the smoothing polishing (SP) process is not easy to handle because it can be affected by many factors. This paper mainly focuses on the influence of the pad groove, which has not been researched yet. The SP process is introduced, and the important role of the pad groove is explained in detail. The relationship between the contact pressure distribution and the groove feature including groove section type, groove width, and groove depth is established, and the optimized result is achieved with the finite element method. The different kinds of groove patterns are compared utilizing the numerical superposition method established scrupulously. The optimal groove is applied in the verification experiment conducted on a self-developed SP machine. The root mean square value of the MSFR after the SP process is diminished from 2.38 to 0.68 nm, which reveals that the selected pad can smooth out the MSFR to a great extent with proper SP parameters, while the newly generated MSFR due to the groove can be suppressed to a very low magnitude. PMID:25322215

  20. Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES

    NASA Astrophysics Data System (ADS)

    Sarkar, B.; Bhunia, C. T.; Maulik, U.

    2012-06-01

    The advanced encryption standard (AES) is a great research challenge. It was developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available: the redundancy-based technique and the bit-based parity technique. The first has a significant advantage over the second in that it can correct any error within a definite term, but at the cost of a higher overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that speeds up the process of reliable encryption and hence secure communication.
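    A generic redundancy-based sketch (bitwise majority voting over three copies, not necessarily the paper's exact scheme) shows how redundancy confines channel errors that AES decryption would otherwise propagate across a whole block:

```python
def majority_vote(a: bytes, b: bytes, c: bytes) -> bytes:
    """Bitwise majority vote over three redundant copies of a ciphertext
    block; a single corrupted copy is outvoted bit by bit."""
    return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

block = b"\x3a\x7f\x00\xc4"          # hypothetical ciphertext block
corrupted = b"\x3a\x7f\xff\xc4"      # one copy hit by channel errors
recovered = majority_vote(block, block, corrupted)
print(recovered == block)            # True: the error is corrected
```

The trade-off named in the abstract is visible here: correction works as long as at most one copy is corrupted, but transmitting three copies triples the overhead, which is what the proposed approach aims to reduce.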

  1. To Err is Normable: The Computation of Frequency-Domain Error Bounds from Time-Domain Data

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard

    1998-01-01

    This paper exploits the relationships among the time-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling errors.
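    A minimal sketch of the idea, under the assumption of a stable SISO system and uniformly sampled step data: the final step value |G(0)| lower-bounds the H-infinity norm, while the L1 norm of the impulse response (the discretized derivative of the step) upper-bounds it, since |G(jw)| <= integral of |g(t)|.

```python
import math

def norm_bounds_from_step(t, y):
    """Bounds on the H-infinity norm of a stable SISO system from sampled
    step-response data: the final value |G(0)| is a lower bound; the L1
    norm of the impulse response (discrete derivative of the step) is an
    upper bound."""
    dt = t[1] - t[0]
    impulse = [(y[i + 1] - y[i]) / dt for i in range(len(y) - 1)]
    lower = abs(y[-1])
    upper = sum(abs(g) * dt for g in impulse)
    return lower, upper

# First-order lag G(s) = 1/(s+1): step response 1 - exp(-t), true norm 1.
ts = [0.01 * k for k in range(2001)]
ys = [1 - math.exp(-t) for t in ts]
lo, hi = norm_bounds_from_step(ts, ys)
print(round(lo, 3), round(hi, 3))  # both bounds close to 1
```

For this monotone response the two bounds coincide; for oscillatory responses they bracket the true norm, which is the kind of model-accuracy information the paper extracts.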

  2. Accurate van der Waals coefficients between fullerenes and fullerene-alkali atoms and clusters: Modified single-frequency approximation

    NASA Astrophysics Data System (ADS)

    Tao, Jianmin; Mo, Yuxiang; Tian, Guocai; Ruzsinszky, Adrienn

    2016-08-01

    Long-range van der Waals (vdW) interaction is critically important for intermolecular interactions in molecular complexes and solids. However, accurate modeling of vdW coefficients presents a great challenge for nanostructures, in particular for fullerene clusters, which have huge vdW coefficients but also display very strong nonadditivity. In this work, we calculate the vdW coefficients between fullerenes, between fullerenes and sodium clusters, and between fullerenes and alkali atoms with the hollow-sphere model within the modified single-frequency approximation (MSFA). In the MSFA, we assume that the electron density is uniform in a molecule and that only valence electrons in the outermost subshell of atoms contribute. The input to the model is the static multipole polarizability, which provides a sharp cutoff for the plasmon contribution outside the effective vdW radius. We find that the model can generate C6 in excellent agreement with expensive wave-function-based ab initio calculations, with a mean absolute relative error of only 3%, without suffering size-dependent error. We show that the nonadditivities of the C6 coefficients between fullerenes, and between C60 and sodium clusters Nan, revealed by the model agree remarkably well with those based on accurate reference values. The great flexibility, simplicity, and high accuracy make the model particularly suitable for studying the nonadditivity of vdW coefficients between nanostructures, advancing the development of better vdW corrections to conventional density functional theory.
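    The classical single-characteristic-frequency (London) formula that such approximations build on can be written down directly; the polarizability and frequency below are illustrative atomic-unit values, not the paper's MSFA inputs:

```python
def london_c6(alpha_a, omega_a, alpha_b, omega_b):
    """London (single-characteristic-frequency) approximation for the C6
    dispersion coefficient, in atomic units:
        C6 = (3/2) * aA * aB * wA * wB / (wA + wB)
    where a is the static dipole polarizability and w the characteristic
    excitation frequency of each species."""
    return 1.5 * alpha_a * alpha_b * omega_a * omega_b / (omega_a + omega_b)

# Identical species: the formula reduces to C6 = (3/4) * alpha^2 * omega.
c6 = london_c6(10.0, 0.5, 10.0, 0.5)
print(c6)  # 37.5
```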

  3. STATISTICAL DISTRIBUTIONS OF PARTICULATE MATTER AND THE ERROR ASSOCIATED WITH SAMPLING FREQUENCY. (R828678C010)

    EPA Science Inventory

    The distribution of particulate matter (PM) concentrations has an impact on human health effects and the setting of PM regulations. Since PM is commonly sampled on less than daily schedules, the magnitude of sampling errors needs to be determined. Daily PM data from Spokane, W...

  4. Modifiable variables affecting interdialytic weight gain include dialysis time, frequency, and dialysate sodium.

    PubMed

    Thomson, Benjamin K A; Dixon, Stephanie N; Huang, Shi-Han S; Leitch, Rosemary E; Suri, Rita S; Chan, Christopher T; Lindsay, Robert M

    2013-10-01

    Interdialytic weight gain (IDWG) is associated with hypertension, left ventricular hypertrophy, and all-cause mortality. The dialysate sodium concentration may create diffusion gradients with plasma sodium and influence subsequent IDWG. Dialysis time and frequency may also influence the outcomes of this Na(+) gradient, but these have been overlooked. Our objective was to identify modifiable factors influencing IDWG. We performed retrospective multivariable regression analyses of data from 86 home hemodialysis patients treated by hemodialysis modalities differing in frequency and session duration to determine the factors that predict IDWG. Age, diabetic status, and residual renal function did not correlate with IDWG in the univariable analysis. However, using a combination of backwards selection and the Akaike information criterion to build our model, we created an equation that predicts IDWG on the basis of serum albumin, age, patient sex, dialysis frequency, and the diffusive balance of sodium, represented by the product of the duration of dialysis and the patient plasma to dialysate Na(+) gradient. This equation was internally validated using bootstrapping, and externally validated in a temporally distinct patient population. We have created an equation to predict IDWG on the basis of independent factors readily available before a dialysis session. The modifiable factors include dialysis time and frequency, and dialysate sodium. Patient sex, age, and serum albumin also correlate with IDWG. Further work is required to establish how improvements in IDWG influence cardiovascular and other clinical outcomes. PMID:23782770

  5. Analytical simulation of water system capacity reliability, 1. Modified frequency-duration analysis

    NASA Astrophysics Data System (ADS)

    Hobbs, Benjamin F.; Beim, Gina K.

    1988-09-01

    The problem addressed is the computation of the unavailability and expected unserved demand of a water supply system having random demand, finished water storage, and unreliable capacity components. Examples of such components include pumps, treatment plants, and aqueducts. Modified frequency-duration analysis estimates these reliability statistics by, first, calculating how often demand exceeds available capacity and, second, comparing the amount of water in storage with how long such capacity deficits last. This approach builds upon frequency-duration methods developed by the power industry for analyzing generation capacity deficits. Three versions of the frequency-duration approach are presented. Two yield bounds to system unavailability and unserved demand and the third gives an estimate of their true values between those bounds.
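    The first step, computing how often demand exceeds available capacity, can be sketched by enumerating the outage states of independent, unreliable components; the capacities and outage rates below are hypothetical:

```python
from itertools import product

def unavailability(units, demand):
    """Probability that available capacity falls below demand, found by
    enumerating up/down states of independent, unreliable units.
    units: list of (capacity, forced_outage_rate) pairs."""
    prob = 0.0
    for states in product((0, 1), repeat=len(units)):
        p = 1.0
        cap = 0.0
        for (c, q), up in zip(units, states):
            p *= (1 - q) if up else q      # state probability
            cap += c if up else 0.0        # available capacity
        if cap < demand:
            prob += p
    return prob

# Hypothetical system: two 50 MGD pumps and one 40 MGD treatment train,
# each with a 5% forced outage rate, serving a 100 MGD demand.
units = [(50, 0.05), (50, 0.05), (40, 0.05)]
print(round(unavailability(units, demand=100), 4))  # 0.0975
```

The frequency-duration refinement described above goes further, comparing how long such deficits last against the finished water storage available to ride them out.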

  6. Application of modified AOGST to study the low frequency shadow zone in a gas reservoir

    NASA Astrophysics Data System (ADS)

    Abdollahi Aghdam, B.; Riahi, M. Ali

    2015-10-01

    The adaptive optimized window generalized S transform (AOGST), whose window varies with frequency and time, is a method for time-frequency mapping of a signal. In the AOGST method, an optimized regulation factor is calculated based on the energy concentration of the S transform. This factor equals 1 for the standard S transform, whereas in the AOGST method it is confined to the interval [0, 1]. However, AOGST may not produce acceptable resolution over all parts of the time-frequency representation. We therefore applied an aggregation of confined-interval adaptive optimized generalized S transforms (ACI-AOGST) instead of the AOGST method. The proposed method applies the modified AOGST to specific frequency and time intervals. By calculating regulation factors for limited frequency and time intervals of the signal, arranging them in a suitable order, and applying the ACI-AOGST, one obtains a transformation with lower distortion and higher resolution than other transformations. The proposed method has been used to analyse the time-frequency distribution of a synthetic signal as well as a real 2D seismic section of a producing gas reservoir located in southern Iran. The results confirm the robustness of the ACI-AOGST method.

  7. [The ability of adolescent hamadryas baboons to solve a modified Piaget A-not-B error test].

    PubMed

    Anikaev, A E; Calian, V G; Meĭshvili, N V

    2014-04-01

    We investigated the ability of hamadryas baboons (Papio hamadryas) to inhibit a forced instrumental food-procuring reflex. The subjects were immature animals of two age groups: an eighteen-month-old group (six males and five females) and a three-year-old group (seven males and seven females). To assess this capability we used a modified version of Piaget's A-not-B error test. Only four monkeys solved the test correctly. Inhibition of the forced conditioned reflex occurred only in females, and to an equal degree in each age group. The findings also show large variation in activity among individuals of different sex and age during performance of the task. Since these animals showed the ability to inhibit the consolidated conditioned reflex, we tend to treat this as a manifestation of conscious choice, though more research is needed. PMID:25272451

  8. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of Spatial Time-Frequency Distributions (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to the method using the data covariance matrix, when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression for the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR at the same SPR both methods perform similarly.

  9. Measurement error and detectable change for the modified Fresno Test in first-year entry-level physical therapy students.

    PubMed

    Miller, Amy H; Cummings, Nydia; Tomlinson, Jamie

    2013-01-01

    Teaching evidence-based practice (EBP) skills is a core component in the education of health care professionals. Methods to assess individual student development of these skills are not well studied. The purpose of this study was to estimate the standard error of measurement (SEM) and minimal detectable change (MDC) for the modified Fresno Test (MFT) of Competence in EBP in first-year physical therapy students. Using a test-retest design, the MFT was administered twice to 35 participating first-year physical therapy students. Tests were scored by two trained physical therapist educators. Mean test scores clustered near the middle of the 232-point scoring range, at 107 points (SD 14.9) and 103 points (SD 18.9). Inter-rater reliability [ICC (2, 1)] for scorers was 0.83 (95%CI 0.74-0.96). Intra-rater reliability was 0.85 (95%CI 0.60-0.97) and 0.94 (95%CI 0.86-0.99). Test-retest reliability [ICC (2, 1)] was 0.46 (95%CI 0.16-0.69), with a calculated SEM of 11 points, a confidence in a single measurement of 18.2 points, and an MDC90 (90% confidence) of 25.7 points. Knowledge about estimates of SEM and MDC for specific student populations is important for assessing change in individual student performance on the modified Fresno Test. PMID:24013248
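    The reported SEM and MDC90 follow from the standard formulas SEM = SD * sqrt(1 - ICC) and MDC90 = 1.645 * sqrt(2) * SEM; with a score SD near the reported 14.9-18.9 range and the test-retest ICC of 0.46, the published values are approximately reproduced:

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the score SD and reliability."""
    return sd * math.sqrt(1 - icc)

def mdc(sem_value, z=1.645):
    """Minimal detectable change at 90% confidence (z = 1.645)."""
    return z * math.sqrt(2) * sem_value

# SD of 15 points (illustrative, close to the reported 14.9 and 18.9)
# with the reported test-retest ICC of 0.46.
s = sem(15.0, 0.46)
print(round(s, 1), round(mdc(s), 1))  # near the reported 11 and 25.7
```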

  10. Frequency and Distribution of Refractive Error in Adult Life: Methodology and Findings of the UK Biobank Study

    PubMed Central

    Cumberland, Phillippa M.; Bao, Yanchun; Hysi, Pirro G.; Foster, Paul J.; Hammond, Christopher J.; Rahi, Jugnoo S.

    2015-01-01

    Purpose To report the methodology and findings of a large scale investigation of the burden and distribution of refractive error, from a contemporary and ethnically diverse study of health and disease in adults in the UK. Methods UK Biobank, a unique contemporary resource for the study of health and disease, recruited more than half a million people aged 40–69 years. A subsample of 107,452 subjects undertook an enhanced ophthalmic examination which provided autorefraction data (a measure of refractive error). Refractive error status was categorised using the mean spherical equivalent refraction measure. Information on socio-demographic factors (age, gender, ethnicity, educational qualifications and accommodation tenure) was reported at the time of recruitment by questionnaire and face-to-face interview. Results Fifty-four percent of participants aged 40–69 years had refractive error. Specifically, 27% had myopia (4% high myopia), which was more common amongst younger people and those of higher socio-economic status, higher educational attainment, or White or Chinese ethnicity. The frequency of hypermetropia increased with age (7% at 40–44 years increasing to 46% at 65–69 years), was higher in women, and its severity was associated with ethnicity (moderate or high hypermetropia at least 30% less likely in non-White ethnic groups compared to White). Conclusions Refractive error is a significant public health issue for the UK and this study provides contemporary data on adults for planning services, health economic modelling and monitoring of secular trends. Further investigation of risk factors is necessary to inform strategies for prevention. There is scope to do this through the planned longitudinal extension of the UK Biobank study. PMID:26430771

  11. Error correction coding for frequency-hopping multiple-access spread spectrum communication systems

    NASA Technical Reports Server (NTRS)

    Healy, T. J.

    1982-01-01

    A communication system which would effect channel coding for frequency-hopped multiple-access is described. It is shown that in theory coding can increase the spectrum utilization efficiency of a system with mutual interference to 100 percent. Various coding strategies are discussed and some initial comparisons are given. Some of the problems associated with implementing the type of system described here are discussed.

  12. An analysis of perceptual errors in reading mammograms using quasi-local spatial frequency spectra.

    PubMed

    Mello-Thoms, C; Dunn, S M; Nodine, C F; Kundel, H L

    2001-09-01

    In this pilot study the authors examined areas on a mammogram that attracted the visual attention of experienced mammographers and mammography fellows, as well as areas that were reported to contain a malignant lesion, and, based on their spatial frequency spectrum, they characterized these areas by the type of decision outcome that they yielded: true-positives (TP), false-positives (FP), true-negatives (TN), and false-negatives (FN). Five 2-view (craniocaudal and medial-lateral oblique) mammogram cases were examined by 8 experienced observers, and the eye position of the observers was tracked. The observers were asked to report the location and nature of any malignant lesions present in the case. The authors analyzed each area in which either the observer made a decision or in which the observer had prolonged (>1,000 ms) visual dwell using wavelet packets, and characterized these areas in terms of the energy contents of each spatial frequency band. It was shown that each decision outcome is characterized by a specific profile in the spatial frequency domain, and that these profiles are significantly different from one another. As a consequence of these differences, the profiles can be used to determine which type of decision a given observer will make when examining the area. Computer-assisted perception correctly predicted up to 64% of the TPs made by the observers, 77% of the FPs, and 70% of the TNs. PMID:11720333

  13. Analysis of frequency-hopped packet radio networks with random signal levels. Part 1: Error-only decoding

    NASA Astrophysics Data System (ADS)

    Mohamed, Khairi Ashour; Pap, Laszlo

    1994-05-01

This paper is concerned with the performance analysis of frequency-hopped packet radio networks with random signal levels. We assume that a hit from an interfering packet causes a symbol error if and only if the energy it delivers exceeds the energy received from the wanted signal. The interdependence between symbol errors of an arbitrary packet is taken into consideration through the joint probability generating function of the so-called effective multiple access interference. Slotted networks, with both random and deterministic hopping patterns, are considered in the case of both synchronous and asynchronous hopping. A general closed-form expression is given for the packet capture probability in the case of Reed-Solomon error-only decoding. After introducing a general description method, the following examples are worked out in detail: (1) networks with random spatial distribution of stations (a model for mobile packet radio networks); (2) networks operating in slow fading channels; (3) networks with different power levels which are chosen randomly according to either a discrete or a continuous probability distribution (created captures).

  14. Stable radio frequency phase delivery by rapid and endless post error cancellation.

    PubMed

    Wu, Zhongle; Dai, Yitang; Yin, Feifei; Xu, Kun; Li, Jianqiang; Lin, Jintong

    2013-04-01

We propose and demonstrate a phase stabilization method for transferring and downconverting a radio frequency (RF) signal from a remote antenna to the center station via a radio-over-fiber (ROF) link. Unlike previous phase-locking-loop-based schemes, we post-correct any phase fluctuation by mixing during the downconversion process at the center station. Rapid and endless operation is predicted. The ROF technique transfers the received RF signal directly, which reduces the electronic complexity at the antenna end. The proposed scheme is experimentally demonstrated, with a phase fluctuation compression factor of about 200. The theory and performance are also discussed. PMID:23546256

  15. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  16. Bit error rate performance of pi/4-DQPSK in a frequency-selective fast Rayleigh fading channel

    NASA Technical Reports Server (NTRS)

    Liu, Chia-Liang; Feher, Kamilo

    1991-01-01

    The bit error rate (BER) performance of pi/4-differential quadrature phase shift keying (DQPSK) modems in cellular mobile communication systems is derived and analyzed. The system is modeled as a frequency-selective fast Rayleigh fading channel corrupted by additive white Gaussian noise (AWGN) and co-channel interference (CCI). The probability density function of the phase difference between two consecutive symbols of M-ary differential phase shift keying (DPSK) signals is first derived. In M-ary DPSK systems, the information is completely contained in this phase difference. For pi/4-DQPSK, the BER is derived in a closed form and calculated directly. Numerical results show that for the 24 kBd (48 kb/s) pi/4-DQPSK operated at a carrier frequency of 850 MHz and C/I less than 20 dB, the BER will be dominated by CCI if the vehicular speed is below 100 mi/h. In this derivation, frequency-selective fading is modeled by two independent Rayleigh signal paths. Only one co-channel is assumed in this derivation. The results obtained are also shown to be valid for discriminator detection of M-ary DPSK signals.
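
    The differential-detection principle described above (the information is carried entirely in the phase difference between consecutive symbols) can be illustrated with a Monte Carlo sketch. This is a simplified AWGN-only model: the paper's frequency-selective fast Rayleigh fading and co-channel interference terms are omitted, and the function and parameter names are hypothetical:

```python
import numpy as np

def pi4dqpsk_ber(ebn0_db, n_sym=5000, seed=1):
    """Monte Carlo BER of pi/4-DQPSK with differential detection,
    AWGN channel only (no fading, no CCI)."""
    rng = np.random.default_rng(seed)
    # Gray-coded dibit -> phase increment from {+-pi/4, +-3pi/4}
    dphi = {(0, 0): np.pi / 4, (0, 1): 3 * np.pi / 4,
            (1, 1): -3 * np.pi / 4, (1, 0): -np.pi / 4}
    bits = rng.integers(0, 2, size=(n_sym, 2))
    inc = np.array([dphi[tuple(b)] for b in bits])
    tx = np.exp(1j * np.cumsum(inc))          # unit-energy symbols, Es = 1
    ebn0 = 10 ** (ebn0_db / 10)
    n0 = 1.0 / (2 * ebn0)                     # Es = 2*Eb -> N0 = Es/(2*Eb/N0)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_sym + 1)
                               + 1j * rng.standard_normal(n_sym + 1))
    rx = np.concatenate(([1.0 + 0j], tx)) + noise   # prepend reference symbol
    delta = np.angle(rx[1:] * np.conj(rx[:-1]))     # phase difference per symbol
    cands = np.array([np.pi / 4, 3 * np.pi / 4, -3 * np.pi / 4, -np.pi / 4])
    cand_bits = np.array([(0, 0), (0, 1), (1, 1), (1, 0)])
    # nearest candidate increment, with angle wrap-around handled
    d = np.angle(np.exp(1j * (delta[:, None] - cands[None, :])))
    decided = cand_bits[np.abs(d).argmin(axis=1)]
    return float(np.mean(decided != bits))
```
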

  17. Improvement of Bit Error Rate in Holographic Data Storage Using the Extended High-Frequency Enhancement Filter

    NASA Astrophysics Data System (ADS)

    Kim, Do-Hyung; Cho, Janghyun; Moon, Hyungbae; Jeon, Sungbin; Park, No-Cheol; Yang, Hyunseok; Park, Kyoung-Su; Park, Young-Pil

    2013-09-01

Optimized image restoration is suggested in angular-multiplexing-page-based holographic data storage. To improve the bit error rate (BER), an extended high-frequency enhancement filter is recalculated from the point spread function (PSF) and a Gaussian mask and used as the image restoration filter. Using the extended restoration filter, the proposed system requires fewer processing steps than the image upscaling method and provides better BER and SNR performance. Numerical simulations and experiments were performed to verify the proposed method. The proposed system exhibited a marked improvement in BER from 0.02 to 0.002 for a Nyquist factor of 1.1, and from 0.006 to 0 for a Nyquist factor of 1.2. Moreover, calculation was more than 3 times faster than image restoration with PSF upscaling, owing to the reduced number of system processes and the lower calculation load.

  18. The use of ionospheric tomography and elevation masks to reduce the overall error in single-frequency GPS timing applications

    NASA Astrophysics Data System (ADS)

    Rose, Julian A. R.; Tong, Jenna R.; Allain, Damien J.; Mitchell, Cathryn N.

    2011-01-01

    Signals from Global Positioning System (GPS) satellites at the horizon or at low elevations are often excluded from a GPS solution because they experience considerable ionospheric delays and multipath effects. Their exclusion can degrade the overall satellite geometry for the calculations, resulting in greater errors; an effect known as the Dilution of Precision (DOP). In contrast, signals from high elevation satellites experience less ionospheric delays and multipath effects. The aim is to find a balance in the choice of elevation mask, to reduce the propagation delays and multipath whilst maintaining good satellite geometry, and to use tomography to correct for the ionosphere and thus improve single-frequency GPS timing accuracy. GPS data, collected from a global network of dual-frequency GPS receivers, have been used to produce four GPS timing solutions, each with a different ionospheric compensation technique. One solution uses a 4D tomographic algorithm, Multi-Instrument Data Analysis System (MIDAS), to compensate for the ionospheric delay. Maps of ionospheric electron density are produced and used to correct the single-frequency pseudorange observations. This method is compared to a dual-frequency solution and two other single-frequency solutions: one does not include any ionospheric compensation and the other uses the broadcast Klobuchar model. Data from the solar maximum year 2002 and October 2003 have been investigated to display results when the ionospheric delays are large and variable. The study focuses on Europe and results are produced for the chosen test site, VILL (Villafranca, Spain). The effects of excluding all of the GPS satellites below various elevation masks, ranging from 5° to 40°, on timing solutions for fixed (static) and mobile (moving) situations are presented. The greatest timing accuracies when using the fixed GPS receiver technique are obtained by using a 40° mask, rather than a 5° mask. The mobile GPS timing solutions are most
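
    The geometry effect discussed above can be made concrete: from satellite azimuth/elevation pairs one builds the geometry matrix G of unit line-of-sight vectors plus a clock column, and GDOP = sqrt(trace((G'G)^-1)). A minimal sketch of that computation only, not tied to the paper's MIDAS processing; the constellation values in the usage note are invented:

```python
import numpy as np

def gdop(az_el_deg):
    """Geometric dilution of precision from (azimuth, elevation) pairs
    in degrees; larger values mean worse geometry, as happens when a
    high elevation mask removes low satellites."""
    az, el = np.radians(np.asarray(az_el_deg, float)).T
    # unit line-of-sight vectors in ENU coordinates, plus a clock column
    g = np.column_stack([np.cos(el) * np.sin(az),
                         np.cos(el) * np.cos(az),
                         np.sin(el),
                         np.ones_like(el)])
    return float(np.sqrt(np.trace(np.linalg.inv(g.T @ g))))
```

    For example, four well-spread low satellites plus one overhead give a much smaller GDOP than five satellites clustered above 70° elevation, which is the trade-off the elevation-mask study balances against ionospheric delay.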

  19. Super-hydrophobicity and oleophobicity of silicone rubber modified by CF4 radio frequency plasma

    NASA Astrophysics Data System (ADS)

    Gao, Song-Hua; Gao, Li-Hua; Zhou, Ke-Sheng

    2011-03-01

Owing to its excellent electrical properties, silicone rubber (SIR) has been widely employed in outdoor insulators. To further improve its hydrophobicity and service life, the SIR samples are treated by CF4 radio frequency (RF) capacitively coupled plasma. The hydrophobic and oleophobic properties are characterized by the static contact angle method. The surface morphology of the modified SIR is observed by atomic force microscopy (AFM). X-ray photoelectron spectroscopy (XPS) is used to follow the changes in the functional groups on the SIR surface caused by the CF4 plasma treatment. The results indicate that the static contact angle of the SIR surface is improved from 100.7° to 150.2° by the CF4 plasma modification, and that the super-hydrophobic surface of the modified SIR, with a static contact angle of 150.2°, appears at an RF power of 200 W for a 5 min treatment time. The super-hydrophobic surface is ascribed to the combined action of the increased roughness created by the ablation and the formation of a [-SiFx(CH3)2-x-O-]n (x = 1, 2) structure produced by the replacement of methyl groups with F atoms; more importantly, the formation of the [-SiF2-O-]n structure is the major factor behind the super-hydrophobic surface. This differs from previous studies, which proposed that fluorocarbon species such as C-F, C-F2, C-F3, CF-CFn, and C-CFn were largely introduced to the polymer surface and were responsible for the low surface energy.

  20. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.

  1. Large Scale Parameter Estimation Problems in Frequency-Domain Elastodynamics Using an Error in Constitutive Equation Functional

    PubMed Central

    Banerjee, Biswanath; Walsh, Timothy F.; Aquino, Wilkins; Bonnet, Marc

    2012-01-01

    This paper presents the formulation and implementation of an Error in Constitutive Equations (ECE) method suitable for large-scale inverse identification of linear elastic material properties in the context of steady-state elastodynamics. In ECE-based methods, the inverse problem is postulated as an optimization problem in which the cost functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses. Furthermore, in a more recent modality of this methodology introduced by Feissel and Allix (2007), referred to as the Modified ECE (MECE), the measured data is incorporated into the formulation as a quadratic penalty term. We show that a simple and efficient continuation scheme for the penalty term, suggested by the theory of quadratic penalty methods, can significantly accelerate the convergence of the MECE algorithm. Furthermore, a (block) successive over-relaxation (SOR) technique is introduced, enabling the use of existing parallel finite element codes with minimal modification to solve the coupled system of equations that arises from the optimality conditions in MECE methods. Our numerical results demonstrate that the proposed methodology can successfully reconstruct the spatial distribution of elastic material parameters from partial and noisy measurements in as few as ten iterations in a 2D example and fifty in a 3D example. We show (through numerical experiments) that the proposed continuation scheme can improve the rate of convergence of MECE methods by at least an order of magnitude versus the alternative of using a fixed penalty parameter. Furthermore, the proposed block SOR strategy coupled with existing parallel solvers produces a computationally efficient MECE method that can be used for large scale materials identification problems, as demonstrated on a 3D example involving about 400,000 unknown moduli. Finally, our numerical results suggest that the proposed MECE

  2. Application of a modified complementary filtering technique for increased aircraft control system frequency bandwidth in high vibration environment

    NASA Technical Reports Server (NTRS)

    Garren, J. F., Jr.; Niessen, F. R.; Abbott, T. S.; Yenni, K. R.

    1977-01-01

    A modified complementary filtering technique for estimating aircraft roll rate was developed and flown in a research helicopter to determine whether higher gains could be achieved. Use of this technique did, in fact, permit a substantial increase in system frequency bandwidth because, in comparison with first-order filtering, it reduced both noise amplification and control limit-cycle tendencies.

  3. Generalization of the model of Hawking radiation with modified high frequency dispersion relation

    NASA Astrophysics Data System (ADS)

    Himemoto, Yoshiaki; Tanaka, Takahiro

    2000-03-01

Hawking radiation is one of the most interesting phenomena predicted by the theory of quantum fields in curved space. The origin of Hawking radiation is closely related to the fact that a particle which marginally escapes from collapsing into a black hole is observed at future infinity with an infinitely large redshift. In other words, such a particle had a very high frequency when it was near the event horizon. Motivated by the possibility that the properties of Hawking radiation may be altered by some unknown physics beyond a critical scale, Unruh proposed a model which has higher-order spatial derivative terms. In his model, the effects of unknown physics are modeled so as to be suppressed for waves with a wavelength much longer than the critical scale k0^-1. Surprisingly, it was shown that the thermal spectrum is recovered for such modified models. To introduce such higher-order spatial derivative terms, Lorentz invariance must be violated because one special direction needs to be chosen. In previous works, the rest frame of freely falling observers was employed as this special reference frame. Here we give an extension by allowing a more general choice of the reference frame. Developing the method taken by Corley, we show that the resulting spectrum of created particles again becomes thermal at the Hawking temperature even when the choice of the reference frame is generalized. Using the technique of matched asymptotic expansion, we also show that the correction to the thermal radiation stays of order k0^-2 or smaller as far as the spectrum of radiated particles around its peak is concerned.

  4. Frequency and Modifiability of Children's Preferences for Sex-Typed Toys, Games, and Occupations.

    ERIC Educational Resources Information Center

    DiLeo, Jean Cohen; And Others

    1979-01-01

    This study investigated the strength and modifiability of young children's sex-typed behavior in toy choices, toy play, and preferences for games and occupations. Subjects were nursery school, kindergarten and first grade children. (CM)

  5. Quantification of landfill methane using modified Intergovernmental Panel on Climate Change's waste model and error function analysis.

    PubMed

    Govindan, Siva Shangari; Agamuthu, P

    2014-10-01

Waste management can be regarded as a cross-cutting environmental 'mega-issue'. Sound waste management practices support the provision of basic needs for general health, such as clean air, clean water and a safe supply of food. In addition, climate change mitigation efforts can be achieved through reduction of greenhouse gas emissions from waste management operations, such as landfills. Landfills generate landfill gas, especially methane, as a result of anaerobic degradation of the degradable components of municipal solid waste. Evaluating the mode of generation and collection of landfill gas has posed a challenge over time. Scientifically, landfill gas generation rates are presently estimated using numerical models. In this study the Intergovernmental Panel on Climate Change's Waste Model is used to estimate the methane generated from a Malaysian sanitary landfill. Key parameters of the model, the decay rate and the degradable organic carbon, are analysed using two different approaches: the bulk waste approach and the waste composition approach. The model is then validated using error function analysis, and the optimum decay rate and degradable organic carbon for both approaches are obtained. The best fitting values for the bulk waste approach are a decay rate of 0.08 y(-1) and a degradable organic carbon value of 0.12; for the waste composition approach the decay rate was found to be 0.09 y(-1) and the degradable organic carbon value 0.08. From this validation exercise, the estimated error was reduced by 81% and 69% for the bulk waste and waste composition approaches, respectively. In conclusion, this type of modelling could constitute a sensible starting point for landfills to introduce careful planning for efficient gas recovery in individual landfills. PMID:25323145
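
    The first-order-decay structure behind the IPCC waste model can be sketched as follows. This is a simplified illustration: the decay rate and degradable organic carbon echo the paper's bulk-waste best-fit values, the remaining parameters (DOCf, MCF, methane fraction F) are generic IPCC-style defaults chosen here for illustration, and oxidation and recovery terms are omitted:

```python
import math

def fod_methane(deposits, k=0.08, doc=0.12, doc_f=0.5, mcf=1.0, f=0.5):
    """First-order-decay (FOD) methane generation sketch. `deposits`
    is waste landfilled per year (tonnes); returns CH4 generated per
    year (tonnes). Emission from a deposit starts the following year."""
    n = len(deposits)
    ch4 = [0.0] * n
    for t0, w in enumerate(deposits):
        # decomposable degradable organic carbon deposited in year t0
        ddoc = w * doc * doc_f * mcf
        for t in range(t0 + 1, n):
            # carbon decayed during year t, converted to CH4 mass (16/12)
            decayed = ddoc * (math.exp(-k * (t - t0 - 1))
                              - math.exp(-k * (t - t0)))
            ch4[t] += decayed * f * (16.0 / 12.0)
    return ch4
```

    Error-function validation as in the paper would then compare such a modelled series against measured gas recovery and search (k, DOC) for the minimum error.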

  6. Analysis and design of modified window shapes for S-transform to improve time-frequency localization

    NASA Astrophysics Data System (ADS)

    Ma, Jianping; Jiang, Jin

    2015-06-01

    This paper deals with window design issues for modified S-transform (MST) to improve the performance of time-frequency analysis (TFA). After analyzing the drawbacks of existing window functions, a window design technique is proposed. The technique uses a sigmoid function to control the window width in frequency domain. By proper selection of certain tuning parameters of a sigmoid function, windows with different width profiles can be obtained for multi-component signals. It is also interesting to note that the MST algorithm can be considered as a special case of a generalized method that adds a tunable shaping function to the standard window in frequency domain to meet specific frequency localization needs. The proposed design technique has been validated on a physical vibration test system using signals with different characteristics. The results have demonstrated that the proposed MST algorithm has superior time-frequency localization capabilities over standard ST, as well as other classical TFA methods. Subsequently, the proposed MST algorithm is applied to vibration monitoring of pipes in a water supply process controlled by a diaphragm pump for fault detection purposes.
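
    The sigmoid-controlled window width described above can be sketched as follows; all parameter values and function names are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

def mst_sigma(f, s_lo=2.0, s_hi=0.2, f0=5.0, c=1.5):
    """Sigmoid-controlled time-domain window width for a modified
    S-transform: wide windows (good frequency resolution) at low f,
    narrow windows (good time resolution) at high f, with a tunable
    transition around f0 of steepness c."""
    s = 1.0 / (1.0 + np.exp(-c * (np.asarray(f, float) - f0)))
    return s_lo + (s_hi - s_lo) * s

def mst_window(t, f):
    """Unit-area Gaussian analysis window whose width follows mst_sigma(f)."""
    sig = mst_sigma(f)
    return np.exp(-0.5 * (np.asarray(t, float) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
```

    In a standard S-transform the width is fixed at 1/|f|; replacing it with a tunable profile like the one above is what lets different width/frequency trade-offs be chosen per component.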

  7. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at the population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error between the single-frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of the BIS methods over the 50 kHz prediction equations at both the population and individual level, the magnitude of the improvement was small. Such a slight improvement in accuracy is suggested to be insufficient to warrant clinical use of the BIS methods where the most accurate predictions of TBW are required, for example, when assessing fluid overload in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
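
    The Mean Absolute Percentage Error used in this comparison is straightforward to state; a minimal sketch, with hypothetical argument names:

```python
def mape(predicted, reference):
    """Mean Absolute Percentage Error (%) between predicted TBW values
    and a reference (e.g. dilution-based TBW)."""
    pairs = list(zip(predicted, reference))
    return 100.0 * sum(abs(p - r) / abs(r) for p, r in pairs) / len(pairs)
```
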

  8. Suppressing gate errors through extra ions coupled to a cavity in frequency-domain quantum computation using rare-earth-ion-doped crystal

    NASA Astrophysics Data System (ADS)

    Nakamura, Satoshi; Goto, Hayato; Kujiraoka, Mamiko; Ichimura, Kouichi; Quantum Computer Team

Rare-earth-ion-doped crystals, such as Pr3+: Y2SiO5, are promising materials for scalable quantum computers, because the crystals contain a large number of ions with long coherence times. Frequency-domain quantum computation (FDQC) enables us to employ individual ions coupled to a common cavity mode as qubits, identifying them by their transition frequencies. In the FDQC, detuned operation light interacts with transitions that are not the intended targets, because the ions are irradiated regardless of their positions. This crosstalk causes serious errors in the quantum gates of the FDQC. The gate errors increase when ``resonance conditions'' between the eigenenergies of the whole system and the transition-frequency differences among the ions are satisfied. Ions used as qubits must therefore have transitions that avoid these conditions for high-fidelity gates. However, when a large number of ions are employed as qubits, the conditions are difficult to avoid because of the many combinations of eigenenergies and transitions. We propose a new implementation that uses extra ions to control the resonance conditions, and we show the effect of the extra ions by numerical simulation. Our implementation is useful for realizing a scalable quantum computer using rare-earth-ion-doped crystals based on the FDQC.

  9. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations.

    PubMed

    Seoane, Fernando; Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar; Ward, Leigh C

    2015-01-01

For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at the population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error between the single-frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of the BIS methods over the 50 kHz prediction equations at both the population and individual level, the magnitude of the improvement was small. Such a slight improvement in accuracy is suggested to be insufficient to warrant clinical use of the BIS methods where the most accurate predictions of TBW are required, for example, when assessing fluid overload in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489

  10. Classification of radiological errors in chest radiographs, using support vector machine on the spatial frequency features of false-negative and false-positive regions

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.

    2011-03-01

Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on spatial frequency features extracted from the local background of selected regions. Background: The majority of unreported pulmonary nodules are visually detected but not recognized, as shown by prolonged dwell times at false-negative regions. Similarly, overestimated nodule locations capture substantial amounts of foveal attention. Spatial frequency properties of selected local backgrounds are correlated with human observer responses, both in the accuracy of indicating abnormality position and in the precision of visual sampling of the medical images. Methods: Seven radiologists participated in eye tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The most dwelled-upon locations were identified and subjected to spatial frequency (SF) analysis. The image-based features of the selected ROI were extracted with an un-decimated Wavelet Packet Transform. An analysis of variance was run to select SF features, and an SVM scheme was implemented to classify False-Negative and False-Positive regions among all ROI. Results: A relatively high overall accuracy was obtained for each individually developed Wavelet-SVM algorithm, with an average correct ratio of over 90% for error recognition across all prolonged dwell locations. Conclusion: The preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological errors with an SVM. The work is still in progress and not all analytical procedures have been completed, which might have an effect on the specificity of the algorithm.
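
    The classification stage can be sketched with a minimal linear SVM trained by sub-gradient descent on the hinge loss, standing in for whatever SVM formulation the authors used (the abstract does not specify a kernel); the features, data, and hyper-parameters here are purely illustrative:

```python
import numpy as np

def train_linear_svm(x, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Minimal linear SVM: stochastic sub-gradient descent on the
    L2-regularised hinge loss. y must contain +1/-1 labels;
    returns the weight vector w and bias b."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (x[i] @ w + b)
            if margin < 1:
                # point inside the margin: hinge gradient + weight decay
                w += lr * (y[i] * x[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w
    return w, b
```

    In the paper's setting, x would hold the ANOVA-selected wavelet-packet band energies of each ROI and y the FN-vs-FP label.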

  11. Low frequency vibrational modes of oxygenated myoglobin, hemoglobins, and modified derivatives.

    PubMed

    Jeyarajah, S; Proniewicz, L M; Bronder, H; Kincaid, J R

    1994-12-01

The low frequency resonance Raman spectra of the dioxygen adducts of myoglobin, hemoglobin, its isolated subunits, mesoheme-substituted hemoglobin, and several deuteriated heme derivatives are reported. The observed oxygen isotopic shifts are used to assign the iron-oxygen stretching (approximately 570 cm⁻¹) and the heretofore unobserved δ(Fe-O-O) bending (approximately 420 cm⁻¹) modes. Although the δ(Fe-O-O) is not enhanced in the case of oxymyoglobin, it is observed for all the hemoglobin derivatives, its exact frequency being relatively invariant among the derivatives. The lack of sensitivity to H2O/D2O buffer exchange is consistent with our previous interpretation of H2O/D2O-induced shifts of ν(O-O) in the resonance Raman spectra of dioxygen adducts of cobalt-substituted heme proteins; namely, that those shifts are associated with alterations in vibrational coupling of ν(O-O) with internal modes of the proximal histidyl imidazole rather than with steric or electronic effects of H/D exchange at the active site. No evidence is obtained for enhancement of the ν(Fe-N) stretching mode of the linkage between the heme iron and the imidazole group of the proximal histidine. PMID:7983043

  12. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    PubMed Central

    Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

Broadband electromagnetic frequency- or time-domain sensor techniques have high potential for quantitative water content monitoring in porous media. Prior to in situ application, the relationship between the broadband electromagnetic properties of the porous material (clay-rock), the water content, and the frequency- or time-domain sensor response must be characterized. For this purpose, experimentally determined dielectric properties of intact clay-rock samples in the frequency range from 1 MHz to 10 GHz were used as input data in 3-D numerical frequency-domain finite element field calculations to model the one-port broadband frequency- or time-domain transfer function of a three-rod sensor embedded in the clay-rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling. PMID:27096865
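
    The two calibration models named above are compact enough to state directly. The Topp et al. (1980) polynomial and the Lichtenecker-Rother mixing rule are standard forms; the function names are illustrative:

```python
def topp_water_content(eps):
    """Volumetric water content (m^3/m^3) from apparent relative
    permittivity via the empirical Topp et al. (1980) polynomial."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps ** 2 + 4.3e-6 * eps ** 3

def lrm_effective_eps(fractions, eps_values, alpha=0.5):
    """Effective permittivity from the Lichtenecker-Rother model:
    eps_eff^alpha = sum_i v_i * eps_i^alpha. alpha = 0.5 recovers the
    CRIM square-root mixing rule."""
    s = sum(v * e ** alpha for v, e in zip(fractions, eps_values))
    return s ** (1.0 / alpha)
```

    In the paper's workflow, the permittivity fed to such models comes from the travel time of the reflected pulse along the sensor rods, which is why an air gap that distorts the reflection corrupts the water content estimate.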

  13. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors--Air Gap Effect.

    PubMed

    Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

Broadband electromagnetic frequency- or time-domain sensor techniques have high potential for quantitative water content monitoring in porous media. Prior to in situ application, the relationship between the broadband electromagnetic properties of the porous material (clay-rock), the water content, and the frequency- or time-domain sensor response must be characterized. For this purpose, experimentally determined dielectric properties of intact clay-rock samples in the frequency range from 1 MHz to 10 GHz were used as input data in 3-D numerical frequency-domain finite element field calculations to model the one-port broadband frequency- or time-domain transfer function of a three-rod sensor embedded in the clay-rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling. PMID:27096865

  14. The effect of verb semantic class and verb frequency (entrenchment) on children's and adults' graded judgements of argument-structure overgeneralization errors.

    PubMed

    Ambridge, Ben; Pine, Julian M; Rowland, Caroline F; Young, Chris R

    2008-01-01

    Participants (aged 5-6 yrs, 9-10 yrs and adults) rated (using a five-point scale) grammatical (intransitive) and overgeneralized (transitive causative) uses of a high frequency, low frequency and novel intransitive verb from each of three semantic classes [Pinker, S. (1989a). Learnability and cognition: The acquisition of argument structure. Cambridge, MA: MIT Press]: "directed motion" (fall, tumble), "going out of existence" (disappear, vanish) and "semivoluntary expression of emotion" (laugh, giggle). In support of Pinker's semantic verb class hypothesis, participants' preference for grammatical over overgeneralized uses of novel (and English) verbs increased between 5-6 yrs and 9-10 yrs, and was greatest for the latter class, which is associated with the lowest degree of direct external causation (the prototypical meaning of the transitive causative construction). In support of Braine and Brooks's [Braine, M.D.S., & Brooks, P.J. (1995). Verb argument structure and the problem of avoiding an overgeneral grammar. In M. Tomasello & W. E. Merriman (Eds.), Beyond names for things: Young children's acquisition of verbs (pp. 352-376). Hillsdale, NJ: Erlbaum] entrenchment hypothesis, all participants showed the greatest preference for grammatical over ungrammatical uses of high frequency verbs, with this preference smaller for low frequency verbs, and smaller again for novel verbs. We conclude that both the formation of semantic verb classes and entrenchment play a role in children's retreat from argument-structure overgeneralization errors. PMID:17316595

  15. Study of Low-Frequency Earth motions from Earthquakes and a Hurricane using a Modified Standard Seismometer

    NASA Astrophysics Data System (ADS)

    Peters, R. D.

    2004-12-01

    The modification of a WWSSN Sprengnether vertical seismometer has resulted in significantly improved performance at low frequencies. Instead of being used as a velocity detector as originally designed, the Faraday subsystem is made to function as an actuator to provide a type of force feedback. An array form of the author's symmetric differential capacitive (SDC) sensor was added to the instrument to detect ground motions. The feedback circuit is not conventional: to eliminate long-term drift, an operational amplifier integrator with a time constant of several thousand seconds is placed between sensor and actuator. The signal-to-noise ratio at low frequencies is increased, since the modified instrument does not suffer from the 20 dB/decade falloff in sensitivity that characterizes conventional force-feedback seismometers. A Hanning-windowed FFT algorithm is employed in the analysis of recorded earthquakes, including the very large Indonesia earthquake (M 7.9) of 25 July 2004. The improved low-frequency response allows the study of the free oscillations of the Earth that accompany large earthquakes. Data will be presented showing oscillations with spectral components in the vicinity of 1 mHz, which have frequently been observed with this instrument both before and after an earthquake. Additionally, microseisms and other interesting data will be shown from records collected by the instrument as Hurricane Charley moved across Florida and up the eastern seaboard.
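The Hanning-windowed FFT analysis mentioned in this record can be sketched as follows (illustrative Python on a synthetic 1 mHz line, not the author's processing chain; record length and amplitude are made up):

```python
import numpy as np

def hann_spectrum(signal, fs):
    """Single-sided amplitude spectrum of a Hann-windowed record; the
    window suppresses spectral leakage from the finite record length."""
    n = len(signal)
    win = np.hanning(n)
    # divide by the window's coherent gain so a sinusoid's amplitude is recovered
    amp = np.abs(np.fft.rfft(signal * win)) * 2.0 / win.sum()
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, amp

# synthetic record: a 0.5-amplitude free-oscillation-like line at 1 mHz,
# sampled at 1 Hz for 20000 s
fs = 1.0
t = np.arange(20000) / fs
x = 0.5 * np.sin(2 * np.pi * 1e-3 * t)
freqs, amp = hann_spectrum(x, fs)
peak_freq = freqs[np.argmax(amp)]  # recovers the 1 mHz spectral line
```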

  16. Cognitive training modifies frequency EEG bands and neuropsychological measures in Rett syndrome.

    PubMed

    Fabio, Rosa Angela; Billeci, Lucia; Crifaci, Giulia; Troise, Emilia; Tortorella, Gaetano; Pioggia, Giovanni

    2016-01-01

    Rett syndrome (RS) is a childhood neurodevelopmental disorder characterized by a primary disturbance in neuronal development. Neurological abnormalities in RS are reflected in several behavioral and cognitive impairments such as stereotypies, loss of speech and hand skills, gait apraxia, irregular breathing with hyperventilation while awake, and frequent seizures. Cognitive training can enhance both neuropsychological and neurophysiological parameters. The aim of this study was to investigate whether behaviors and brain activity were modified by training in RS. The modifications were assessed in two phases: (a) after a short-term training (STT) session, i.e., after 30 min of training, and (b) after long-term training (LTT), i.e., after 5 days of training. Thirty-four girls with RS were divided into two groups: a training group (21 girls) who underwent the LTT and a control group (13 girls) that did not. Gaze and quantitative EEG (QEEG) data were recorded during the administration of the tasks, using a gold-standard eye-tracker and wearable EEG equipment. The results suggest that the participants in the STT task showed a habituation effect, decreased beta activity and increased right asymmetry. The participants in the LTT task looked faster and longer at the target, and showed increased beta activity and decreased theta activity, while a leftward asymmetry was re-established. The overall result of this study indicates a positive effect of long-term cognitive training on brain and behavioral parameters in subjects with RS. PMID:26859707

  17. Modified surface boundary conditions for elastic waveform inversion of low-frequency wide-angle active land seismic data

    NASA Astrophysics Data System (ADS)

    Plessix, René-Édouard; Pérez Solano, Carlos A.

    2015-06-01

    In the presence of large wavelength-scale shear-velocity variations in the Earth, acoustic waveform inversion may not be sufficient, even when inverting long-offset data, to retrieve the long-to-intermediate wavelengths of the compressional velocity. Acoustic modelling does not always correctly represent the compressional/primary waves when tuning effects and energy conversion between compressional and shear waves occur. Elastic waveform inversion with land data is challenging not only because of its computational cost but also because of the presence of the very energetic ground roll. To avoid inverting the ground roll and to focus the inversion on the body waves recorded at long offsets, we propose to modify the surface boundary conditions in the elastic modelling. Zeroing the normal derivatives of the shear stress components parallel to the surface, instead of the shear stress components themselves as with the free-surface boundary conditions, leads to an elastic modelling that does not generate ground roll. These modified elastic surface conditions allow us to invert seismic data that have been pre-processed to remove the ground roll, as we do in acoustic waveform inversion. In this way, the inversion can focus on the retrieval of the long-to-intermediate wavelengths of the compressional velocity, and we can apply the standard frequency continuation approach without having to process out the ground roll in the (elastic) synthetic data. An analysis of the modified surface conditions based on a plane-wave decomposition shows that the reflection coefficients at the surface do not depend on incidence angles and earth parameters. With a moderate shear-to-compressional (S-to-P) velocity ratio at the surface, the PP-reflection coefficients are close to those obtained with the free-surface conditions, but with a high ratio they differ significantly. The approximation is therefore valid when the S-to-P velocity ratio is not too high at the surface in the actual Earth. Based on some

  18. Low-Frequency Tropical Pacific Sea-Surface Temperature over the Past Millennium: Reconstruction and Error Estimates

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Cobb, K.; Mann, M. E.; Rutherford, S. D.; Wittenberg, A. T.

    2009-12-01

    Since surface conditions over the tropical Pacific can organize climate variability at near-global scales, and since there is wide disagreement over their projected course under greenhouse forcing, it is of considerable interest to reconstruct their low-frequency evolution over the past millennium. To this end, we make use of the hybrid RegEM climate reconstruction technique (Mann et al. 2008; Schneider 2001), which aims to reconstruct decadal and longer-scale variations of sea-surface temperature (SST) from an array of climate proxies. We first assemble a database of published and new, high-resolution proxy data from ENSO-sensitive regions, screened for significant correlation with a common ENSO metric (NINO3 index). Proxy observations come primarily from coral, speleothem, marine and lake sediment, and ice core sources, as well as long tree-ring chronologies. The hybrid RegEM methodology is then validated in a pseudoproxy context using two coupled general circulation model simulations of the past millennium's climate: one using the NCAR CSM1.4 model, the other the GFDL CM2.1 model (Ammann et al. 2007; Wittenberg 2009). Validation results are found to be sensitive to the ratio of interannual to lower-frequency variability, with poor reconstruction skill for CM2.1 but good skill for CSM1.4. The latter features prominent changes in NINO3 at decadal-to-centennial timescales, which the network and method detect relatively easily. In contrast, the unforced CM2.1 NINO3 is dominated by interannual variations, and its long-term oscillations are more difficult to reconstruct. These two limit cases bracket the observed NINO3 behavior over the historical period. We then apply the method to the proxy observations and extend the decadal-scale history of tropical Pacific SSTs over the past millennium, analyzing the sensitivity of such reconstruction to the inclusion of various key proxy timeseries and details of the statistical analysis, emphasizing metrics of uncertainty

  19. Modified cytotoxic T lymphocyte precursor frequency assay by measuring released europium in a time resolved fluorometer.

    PubMed

    Haque, K; Truman, C; Dittmer, I; Laundy, G; Denning-Kendall, P; Hows, J; Feest, T; Bradley, B

    1997-01-01

    The frequency of cytotoxic T lymphocyte precursors (CTLpf) can be quantified using the principle of limiting dilution analysis (LDA). Chromium-51 (51Cr) and europium (Eu) release assays are based on the measurement of marker release after lysis of targets by the effector cells. Although 51Cr release has been widely used to quantify cell lysis since its introduction, it has several disadvantages, such as the handling and disposal of radioisotopes and the health risk to the personnel performing the assay. This situation led us to adopt a non-radioactive cytotoxicity assay. After 7 days of culture, the PHA-stimulated targets are labeled with a europium-DTPA chelate. Lysis of labeled targets by effectors releases the highly fluorescent Eu-DTPA complex into the culture medium, where the amount of fluorescence can be measured in a time-resolved fluorometer. We describe here some modifications of the original protocol, which include optimising IL-2 requirements, reducing incubation times, adding an extra spin before the 37 degrees C incubation, readjusting target cells per volume of labeling buffer, and other crucial parameters increasing the specificity and sensitivity of the CTLpf assay. We agree with others that the Eu-release assay is specific and reproducible. It can be used for CTLpf estimation as well as for other T cell and non-T cell cytotoxicity assays. PMID:9090438
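Limiting dilution analysis rests on zero-term Poisson statistics: if each well receives N responder cells and the precursor frequency is f, the expected fraction of non-responding wells is F0 = exp(-f*N). A minimal single-dose sketch (illustrative numbers, not the authors' protocol):

```python
import math

def ctlp_frequency(cells_per_well, wells_total, wells_negative):
    """Zero-term Poisson estimate of precursor frequency from the
    fraction of non-responding wells: F0 = exp(-f*N) => f = -ln(F0)/N."""
    f0 = wells_negative / wells_total
    return -math.log(f0) / cells_per_well

# e.g. 3679 of 10000 wells negative at 50000 cells per well
f = ctlp_frequency(50_000, 10_000, 3_679)  # about 1 precursor per 50000 cells
```

In practice the frequency is estimated by regressing ln(F0) against cell dose over several dilutions rather than from a single dose.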

  20. Photocatalytic characteristic and photodegradation kinetics of toluene using N-doped TiO2 modified by radio frequency plasma.

    PubMed

    Shie, Je-Lueng; Lee, Chiu-Hsuan; Chiou, Chyow-San; Chen, Yi-Hung; Chang, Ching-Yuan

    2014-01-01

    This study investigates the feasibility of plasma surface modification of photocatalysts for the removal of toluene from indoor environments. N-doped TiO2 is prepared by precipitation methods, calcined using a muffle furnace (MF) and modified by radio frequency plasma (RF) at different temperatures, with light sources from a visible light lamp (VLL), a white light-emitting diode (WLED) and an ultraviolet light-emitting diode (UVLED). The operation parameters and influential factors are addressed and prepared for characteristic analysis and photo-decomposition examination. Furthermore, related kinetic models are established and used to simulate the experimental data. The characteristic analysis results show that the RF plasma-calcination method effectively enhanced the Brunauer-Emmett-Teller surface area of the modified photocatalysts. For the elemental analysis, the mass percentage of N for the RF-modified photocatalyst is six times larger than that of MF. The aerodynamic diameters of the RF-modified photocatalyst are all smaller than those of MF. Photocatalytic decompositions of toluene are elucidated according to the Langmuir-Hinshelwood model. Decomposition efficiencies (eta) of toluene for RF-calcined methods are all higher than those of commercial TiO2 (P25). Reaction kinetics of photo-decomposition reactions using RF-calcined methods with WLED are proposed. A comparison of the simulation results with experimental data is also made and indicates good agreement. All the results provide useful information and design specifications. Thus, this study shows the feasibility and potential use of plasma modification via LED in photocatalysis. PMID:24645445
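The Langmuir-Hinshelwood model named in this record relates the photodegradation rate to adsorption equilibrium on the catalyst surface; a minimal sketch with illustrative constants (not the paper's fitted values):

```python
def lh_rate(conc, k_reaction, k_adsorption):
    """Langmuir-Hinshelwood rate law r = k*K*C / (1 + K*C):
    approximately first order in C at low concentration (K*C << 1),
    saturating to the surface reaction rate k at high concentration."""
    return k_reaction * k_adsorption * conc / (1.0 + k_adsorption * conc)

# illustrative constants: k = 2.0 (rate units), K = 1e-3 (1/conc units)
low = lh_rate(1e-3, 2.0, 1e-3)   # linear regime, r ~ k*K*C
high = lh_rate(1e6, 2.0, 1e-3)   # saturated regime, r ~ k
```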

  1. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  2. High frequency electromagnetic properties of interstitial-atom-modified Ce2Fe17NX and its composites

    NASA Astrophysics Data System (ADS)

    Li, L. Z.; Wei, J. Z.; Xia, Y. H.; Wu, R.; Yun, C.; Yang, Y. B.; Yang, W. Y.; Du, H. L.; Han, J. Z.; Liu, S. Q.; Yang, Y. C.; Wang, C. S.; Yang, J. B.

    2014-07-01

    The magnetic and microwave absorption properties of the interstitial atom modified intermetallic compound Ce2Fe17NX have been investigated. The Ce2Fe17NX compound shows a planar anisotropy with saturation magnetization of 1088 kA/m at room temperature. The Ce2Fe17NX paraffin composite with a mass ratio of 1:1 exhibits a permeability of μ′ = 2.7 at low frequency, together with a reflection loss of -26 dB at 6.9 GHz with a thickness of 1.5 mm and -60 dB at 2.2 GHz with a thickness of 4.0 mm. It was found that this composite increases the Snoek limit and exhibits both high working frequency and permeability due to its high saturation magnetization and high ratio of the c-axis anisotropy field to the basal plane anisotropy field. Hence, it is possible that this composite can be used as a high-performance thin layer microwave absorber.
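Reflection losses like those quoted above are conventionally computed with the metal-backed single-layer (transmission-line) absorber model. A hedged sketch follows; the complex permeability and permittivity values are made up for illustration, since the measured Ce2Fe17NX spectra are not given in this record:

```python
import cmath
import math

def reflection_loss_db(mu_r, eps_r, freq_hz, thickness_m):
    """Reflection loss (dB) of a metal-backed absorber layer from the
    standard transmission-line model:
    Z_in = sqrt(mu/eps) * tanh(j*2*pi*f*d*sqrt(mu*eps)/c), RL = 20*log10|Gamma|."""
    c = 2.998e8  # m/s
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * freq_hz * thickness_m * cmath.sqrt(mu_r * eps_r) / c)
    gamma = (z_in - 1) / (z_in + 1)  # input impedance normalized to free space
    return 20 * math.log10(abs(gamma))

# illustrative lossy parameters at 6.9 GHz for a 1.5 mm layer
rl = reflection_loss_db(2.7 - 1.0j, 10.0 - 2.0j, 6.9e9, 1.5e-3)
```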

  3. Resonant frequency and sensitivity of a caliper formed with assembled cantilever probes based on the modified strain gradient theory.

    PubMed

    Abbasi, Mohammad; Afkhami, Seyed E

    2014-12-01

    The resonant frequency and sensitivity of an atomic force microscope (AFM) with an assembled cantilever probe (ACP) are analyzed utilizing strain gradient theory; the governing equation and boundary conditions are derived by combining the basic equations of strain gradient theory with Hamilton's principle. The resonant frequency and sensitivity of the proposed AFM microcantilever are then obtained numerically. The proposed ACP includes a horizontal cantilever, two vertical extensions, and two tips located at the free ends of the extensions that form a caliper. As one of the extensions is located between the clamped and free ends of the AFM microcantilever, the cantilever is modeled as two beams. The results of the current model are compared with those evaluated by both modified couple stress and classical beam theories. The difference between the results of strain gradient theory and those predicted by the couple stress and classical beam theories is significant, especially when the microcantilever thickness is approximately the same as the material length-scale parameters. The results also indicate that at low values of contact stiffness, scanning in the higher cantilever modes decreases the accuracy of the proposed AFM ACP. PMID:25205330
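For context, the classical-beam baseline that the strain gradient results are compared against is the Euler-Bernoulli cantilever resonance. A short sketch with hypothetical silicon microcantilever dimensions (not taken from the paper):

```python
import math

def cantilever_f1_hz(E, I, rho, A, L, lam1=1.8751):
    """First flexural resonance of a classical Euler-Bernoulli cantilever:
    f1 = (lam1^2 / (2*pi)) * sqrt(E*I / (rho*A)) / L^2,
    where lam1 is the first root of the clamped-free frequency equation."""
    return (lam1 ** 2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A)) / L ** 2

# hypothetical Si microcantilever: 200 um long, 30 um wide, 2 um thick
E, rho = 169e9, 2330.0             # Pa, kg/m^3
w, t, L = 30e-6, 2e-6, 200e-6      # m
I, A = w * t ** 3 / 12.0, w * t    # second moment of area, cross-section
f1 = cantilever_f1_hz(E, I, rho, A, L)  # of order tens of kHz
```

Size-dependent theories (couple stress, strain gradient) stiffen this prediction when the thickness approaches the material length-scale parameters, which is exactly the regime the abstract highlights.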

  4. Reducing epistemic errors in water quality modelling through high-frequency data and stakeholder collaboration: the case of an industrial spill

    NASA Astrophysics Data System (ADS)

    Krueger, Tobias; Inman, Alex; Paling, Nick

    2014-05-01

    Catchment management, as driven by legislation such as the EU WFD or grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state-of-the-art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error, an industrial spill ignored in a water quality model, demonstrate the bias of the resulting model simulations, and show how the error was discovered somewhat incidentally through auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new approach to managing water resources in a more decentralised and collaborative way. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and Faecal Coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices. In the process of updating the model for the hydrological years 2008-2012 an over-prediction of the annual average P concentration by the model was found at

  5. In Vitro Culture Increases the Frequency of Stochastic Epigenetic Errors at Imprinted Genes in Placental Tissues from Mouse Concepti Produced Through Assisted Reproductive Technologies

    PubMed Central

    de Waal, Eric; Mak, Winifred; Calhoun, Sondra; Stein, Paula; Ord, Teri; Krapp, Christopher; Coutifaris, Christos; Schultz, Richard M.; Bartolomei, Marisa S.

    2014-01-01

    Assisted reproductive technologies (ART) have enabled millions of couples with compromised fertility to conceive children. Nevertheless, there is a growing concern regarding the safety of these procedures due to an increased incidence of imprinting disorders, premature birth, and low birth weight in ART-conceived offspring. An integral aspect of ART is the oxygen concentration used during in vitro development of mammalian embryos, which is typically either atmospheric (∼20%) or reduced (5%). Both oxygen tension levels have been widely used, but 5% oxygen improves preimplantation development in several mammalian species, including that of humans. To determine whether a high oxygen tension increases the frequency of epigenetic abnormalities in mouse embryos subjected to ART, we measured DNA methylation and expression of several imprinted genes in both embryonic and placental tissues from concepti generated by in vitro fertilization (IVF) and exposed to 5% or 20% oxygen during culture. We found that placentae from IVF embryos exhibit an increased frequency of abnormal methylation and expression profiles of several imprinted genes, compared to embryonic tissues. Moreover, IVF-derived placentae exhibit a variety of epigenetic profiles at the assayed imprinted genes, suggesting that these epigenetic defects arise by a stochastic process. Although culturing embryos in both of the oxygen concentrations resulted in a significant increase of epigenetic defects in placental tissues compared to naturally conceived controls, we did not detect significant differences between embryos cultured in 5% and those cultured in 20% oxygen. Thus, further optimization of ART should be considered to minimize the occurrence of epigenetic errors in the placenta. PMID:24337315

  6. Characterization of magnetic property depth profiles of surface-modified materials using a model-assisted swept frequency modulation field technique

    NASA Astrophysics Data System (ADS)

    Lo, C. C. H.

    2009-04-01

    This paper reports on a model-assisted approach to characterizing surface-modified materials whose magnetic properties vary continuously with depth. The technique involves measuring ac permeability profiles under a quasistatic biasing field superimposed with an ac modulation field of adjustable frequency and amplitude to control field penetration depth. A frequency dependent magnetic hysteresis model was used to model ac permeability profiles at different modulation field frequencies for direct comparison with measurement results. The approach was applied to characterize a series of surface hardened Fe-C samples. The depth dependence of the magnetic properties was determined by obtaining the best fits of the modeled ac permeability profiles to experimental data at multiple modulation frequencies. The midpoints of the inverted magnetic property profiles and the measured hardness profiles were found to be in agreement.

  7. Astigmatism error modification for absolute shape reconstruction using Fourier transform method

    NASA Astrophysics Data System (ADS)

    He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun

    2014-12-01

    A method is proposed to correct astigmatism errors in the absolute shape reconstruction of an optical flat using the Fourier transform method. If a transmission and a reflection flat are used in an absolute test, two translation measurements allow the absolute shapes to be obtained by exploiting the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel to each other after the translations, a tilt error exists in the obtained differential data, which causes power and astigmatism errors in the reconstructed shapes. In order to correct the astigmatism errors, a rotation measurement is added. Based on the rotation invariance of the form of the Zernike polynomials in a circular domain, the astigmatism terms are calculated by solving polynomial coefficient equations related to the rotation differential data, and subsequently the erroneous astigmatism terms are corrected. Computer simulation proves the validity of the proposed method.
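The rotation step exploits how the paired Zernike astigmatism coefficients transform when the part is rotated by an angle phi: the (cos 2θ, sin 2θ) pair rotates through 2·phi. A sketch of that transformation (the sign convention is an assumption, not taken from the paper):

```python
import math

def rotate_astigmatism(a_cos, a_sin, phi):
    """Zernike astigmatism coefficients (cos 2theta and sin 2theta terms)
    after rotating the part by phi; the pair rotates through 2*phi."""
    c, s = math.cos(2.0 * phi), math.sin(2.0 * phi)
    return a_cos * c + a_sin * s, -a_cos * s + a_sin * c

# a 90-degree part rotation flips the sign of both astigmatism terms,
# which is what lets a rotation measurement separate the part's own
# astigmatism from the setup-induced error
ac, asn = rotate_astigmatism(1.0, 0.5, math.pi / 2)
```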

  8. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...

  9. Tunable error-free optical frequency conversion of a 4ps optical short pulse over 25 nm by four-wave mixing in a polarisation-maintaining optical fibre

    NASA Astrophysics Data System (ADS)

    Morioka, T.; Kawanishi, S.; Saruwatari, M.

    1994-05-01

    Error-free, tunable optical frequency conversion of a transform-limited 4.0 ps optical pulse signal is demonstrated at 6.3 Gbit/s using four-wave mixing in a polarisation-maintaining optical fibre. The process generates 4.0-4.6 ps pulses over a 25 nm range with time-bandwidth products of 0.31-0.43 and conversion power penalties of less than 1.5 dB.

  10. Modified impulse method for the measurement of the frequency response of acoustic filters to weakly nonlinear transient excitations

    PubMed

    Payri; Desantes; Broatch

    2000-02-01

    In this paper, a modified impulse method is proposed which allows the determination of the influence of the excitation characteristics on acoustic filter performance. Issues related to nonlinear propagation, namely wave steepening and wave interactions, have been addressed in an approximate way, validated against one-dimensional unsteady nonlinear flow calculations. The results obtained for expansion chambers and extended duct resonators indicate that the amplitude threshold for the onset of nonlinear phenomena is related to the geometry considered. PMID:10687682

  11. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
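The 16-bit CRC mentioned above is commonly implemented as the CCITT variant (generator polynomial 0x1021, initial value 0xFFFF); a minimal bitwise sketch for illustration (consult the CCSDS recommendation for the exact parameters it specifies):

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with polynomial 0x1021 and initial value 0xFFFF."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # shift left; on carry out of bit 15, fold in the polynomial
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

# any corruption of the frame changes the checksum, so the receiver can
# detect the error and discard or re-request the frame
ok = crc16_ccitt(b"123456789")     # standard check value 0x29B1
bad = crc16_ccitt(b"123456780")    # corrupted last byte
```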

  12. Error coding simulations

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1993-11-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.

  13. The Effect of Verb Semantic Class and Verb Frequency (Entrenchment) on Children's and Adults' Graded Judgements of Argument-Structure Overgeneralization Errors

    ERIC Educational Resources Information Center

    Ambridge, Ben; Pine, Julian M.; Rowland, Caroline F.; Young, Chris R.

    2008-01-01

    Participants (aged 5-6 yrs, 9-10 yrs and adults) rated (using a five-point scale) grammatical (intransitive) and overgeneralized (transitive causative) uses of a high frequency, low frequency and novel intransitive verb from each of three semantic classes [Pinker, S. (1989a). "Learnability and cognition: the acquisition of argument structure."…

  14. Infliximab therapy increases the frequency of circulating CD16(+) monocytes and modifies macrophage cytokine response to bacterial infection.

    PubMed

    Nazareth, N; Magro, F; Silva, J; Duro, M; Gracio, D; Coelho, R; Appelberg, R; Macedo, G; Sarmento, A

    2014-09-01

    Crohn's disease (CD) has been correlated with altered macrophage response to microorganisms. Considering the efficacy of infliximab treatment on CD remission, we investigated infliximab effects on circulating monocyte subsets and on macrophage cytokine response to bacteria. Human peripheral blood monocyte-derived macrophages were obtained from CD patients, treated or not with infliximab. Macrophages were infected with Escherichia coli, Enterococcus faecalis, Mycobacterium avium subsp. paratuberculosis (MAP) or M. avium subsp avium, and cytokine levels [tumour necrosis factor (TNF) and interleukin (IL)-10] were evaluated at different time-points. To evaluate infliximab-dependent effects on monocyte subsets, we studied CD14 and CD16 expression by peripheral blood monocytes before and after different infliximab administrations. We also investigated TNF secretion by macrophages obtained from CD16(+) and CD16(-) monocytes and the frequency of TNF(+) cells among CD16(+) and CD16(-) monocyte-derived macrophages from CD patients. Infliximab treatment resulted in elevated TNF and IL-10 macrophage response to bacteria. An infliximab-dependent increase in the frequency of circulating CD16(+) monocytes (particularly the CD14(++) CD16(+) subset) was also observed (before infliximab: 4·65 ± 0·58%; after three administrations: 10·68 ± 2·23%). In response to MAP infection, macrophages obtained from CD16(+) monocytes were higher TNF producers and CD16(+) macrophages from infliximab-treated CD patients showed increased frequency of TNF(+) cells. In conclusion, infliximab treatment increased the TNF production of CD macrophages in response to bacteria, which seemed to depend upon enrichment of CD16(+) circulating monocytes, particularly of the CD14(++) CD16(+) subset. Infliximab treatment of CD patients also resulted in increased macrophage IL-10 production in response to bacteria, suggesting an infliximab-induced shift to M2 macrophages. PMID:24816497

  15. Infliximab therapy increases the frequency of circulating CD16+ monocytes and modifies macrophage cytokine response to bacterial infection

    PubMed Central

    Nazareth, N; Magro, F; Silva, J; Duro, M; Gracio, D; Coelho, R; Appelberg, R; Macedo, G; Sarmento, A

    2014-01-01

    Crohn's disease (CD) has been correlated with altered macrophage response to microorganisms. Considering the efficacy of infliximab treatment on CD remission, we investigated infliximab effects on circulating monocyte subsets and on macrophage cytokine response to bacteria. Human peripheral blood monocyte-derived macrophages were obtained from CD patients, treated or not with infliximab. Macrophages were infected with Escherichia coli, Enterococcus faecalis, Mycobacterium avium subsp. paratuberculosis (MAP) or M. avium subsp avium, and cytokine levels [tumour necrosis factor (TNF) and interleukin (IL)-10] were evaluated at different time-points. To evaluate infliximab-dependent effects on monocyte subsets, we studied CD14 and CD16 expression by peripheral blood monocytes before and after different infliximab administrations. We also investigated TNF secretion by macrophages obtained from CD16+ and CD16− monocytes and the frequency of TNF+ cells among CD16+ and CD16− monocyte-derived macrophages from CD patients. Infliximab treatment resulted in elevated TNF and IL-10 macrophage response to bacteria. An infliximab-dependent increase in the frequency of circulating CD16+ monocytes (particularly the CD14++CD16+ subset) was also observed (before infliximab: 4·65 ± 0·58%; after three administrations: 10·68 ± 2·23%). In response to MAP infection, macrophages obtained from CD16+ monocytes were higher TNF producers and CD16+ macrophages from infliximab-treated CD patients showed increased frequency of TNF+ cells. In conclusion, infliximab treatment increased the TNF production of CD macrophages in response to bacteria, which seemed to depend upon enrichment of CD16+ circulating monocytes, particularly of the CD14++CD16+ subset. Infliximab treatment of CD patients also resulted in increased macrophage IL-10 production in response to bacteria, suggesting an infliximab-induced shift to M2 macrophages. PMID:24816497

  16. Wound healing treatment by high frequency ultrasound, microcurrent, and combined therapy modifies the immune response in rats

    PubMed Central

    Korelo, Raciele I. G.; Kryczyk, Marcelo; Garcia, Carolina; Naliwaiko, Katya; Fernandes, Luiz C.

    2016-01-01

    BACKGROUND: Therapeutic high-frequency ultrasound, microcurrent, and a combination of the two have been used as potential interventions in the soft tissue healing process, but little is known about their effect on the immune system. OBJECTIVE: To evaluate the effects of therapeutic high-frequency ultrasound, microcurrent, and the combined therapy of the two on the size of the wound area, peritoneal macrophage function, CD4+ and CD8+ T lymphocyte populations, and plasma concentration of interleukins (ILs). METHOD: Sixty-five Wistar rats were randomized into five groups, as follows: uninjured control (C, group 1), lesion and no treatment (L, group 2), lesion treated with ultrasound (LU, group 3), lesion treated with microcurrent (LM, group 4), and lesion treated with combined therapy (LUM, group 5). For groups 3, 4 and 5, treatment was initiated 24 hours after surgery under anesthesia, and each group was allocated into three different subgroups (n=5) to allow for the use of the different therapy resources on days 3, 7 and 14. Photoplanimetry was performed daily. After euthanasia, blood was collected for immune analysis. RESULTS: Ultrasound increased the phagocytic capacity and the production of nitric oxide by macrophages and induced a reduction in CD4+ cells, the CD4+/CD8+ ratio, and the plasma concentration of IL-1β. Microcurrent and combined therapy decreased the production of superoxide anion, nitric oxide, CD4+-positive cells, the CD4+/CD8+ ratio, and IL-1β concentration. CONCLUSIONS: Therapeutic high-frequency ultrasound, microcurrent, and combined therapy changed the activity of the innate and adaptive immune system during the healing process but did not accelerate the closure of the wound. PMID:26786082

  17. Modified structural and frequency dependent impedance formalism of nanoscale BaTiO3 due to Tb inclusion

    NASA Astrophysics Data System (ADS)

    Borah, Manjit; Mohanta, Dambarudhar

    2016-05-01

    We report the effect of Tb-doping on the structural and high frequency impedance response of nanoscale BaTiO3 (BT) systems. While exhibiting a mixed phase crystal structure, the nano-BT systems are found to evolve with edges and facets. The interplanar spacing of crystal lattice fringes is ~0.25 nm. The Cole-Cole plots, in the impedance formalism, show semicircles characteristic of a grain boundary resistance of several MΩ. A lowering of ac conductivity with doping is attributed to oxygen vacancies and vacancy ordering.

  18. Influence of low-spatial frequency ripples in machined potassium dihydrogen phosphate crystal surfaces on wavefront errors based on the wavelet method

    NASA Astrophysics Data System (ADS)

    Chen, Wanqun; Sun, Yazhou

    2015-02-01

    When a fly cutter is used to machine potassium dihydrogen phosphate (KDP) crystals, ripples remain in the machined surface that have a significant impact on optical performance. An analysis of these low-spatial-frequency ripples is presented and their influence on the root-mean-squared gradient (GRMS) of the wavefront is discussed. A frequency analysis of the machined KDP crystal surfaces is performed using wavelet transform and power spectral density methods. Based on a classification of the time frequencies for these macroripples, the multimode vibration of the machine tool is found to be the main reason surface ripples are produced. Improvements in the machine design parameters are proposed to limit such effects on the wavefront performance of the KDP crystal.

  19. Exposure to an extremely low-frequency electromagnetic field only slightly modifies the proteome of Chromobacterium violaceum ATCC 12472

    PubMed Central

    Baraúna, Rafael A.; Santos, Agenor V.; Graças, Diego A.; Santos, Daniel M.; Ghilardi, Rubens; Pimenta, Adriano M. C.; Carepo, Marta S. P.; Schneider, Maria P.C.; Silva, Artur

    2015-01-01

    Several studies of the physiological responses of different organisms exposed to extremely low-frequency electromagnetic fields (ELF-EMF) have been described. In this work, we report the minimal effects of in situ exposure to ELF-EMF on the global protein expression of Chromobacterium violaceum using a gel-based proteomic approach. The protein expression profile was only slightly altered, with five differentially expressed proteins detected in the exposed cultures; two of these proteins (DNA-binding stress protein, Dps, and alcohol dehydrogenase) were identified by MS/MS. The enhanced expression of Dps possibly helped to prevent physical damage to DNA. Although small, the changes in protein expression observed here were probably beneficial in helping the bacteria to adapt to the stress generated by the electromagnetic field. PMID:26273227

  20. Exposure to an extremely low-frequency electromagnetic field only slightly modifies the proteome of Chromobacterium violaceum ATCC 12472.

    PubMed

    Baraúna, Rafael A; Santos, Agenor V; Graças, Diego A; Santos, Daniel M; Ghilardi, Rubens; Pimenta, Adriano M C; Carepo, Marta S P; Schneider, Maria P C; Silva, Artur

    2015-05-01

    Several studies of the physiological responses of different organisms exposed to extremely low-frequency electromagnetic fields (ELF-EMF) have been described. In this work, we report the minimal effects of in situ exposure to ELF-EMF on the global protein expression of Chromobacterium violaceum using a gel-based proteomic approach. The protein expression profile was only slightly altered, with five differentially expressed proteins detected in the exposed cultures; two of these proteins (DNA-binding stress protein, Dps, and alcohol dehydrogenase) were identified by MS/MS. The enhanced expression of Dps possibly helped to prevent physical damage to DNA. Although small, the changes in protein expression observed here were probably beneficial in helping the bacteria to adapt to the stress generated by the electromagnetic field. PMID:26273227

  1. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  2. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  3. Evolution of error diffusion

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.

    1999-10-01

    As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm--to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.
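The abstract reviews the algorithm's history without restating the algorithm itself. As a reminder of the mechanism being modified, here is a minimal sketch of the classic Floyd-Steinberg variant (the kernel with 7/16, 3/16, 5/16, 1/16 error weights); this is illustrative background, not code from the paper.

```python
import numpy as np

def floyd_steinberg(image):
    """Halftone a grayscale image (values 0-255) to black/white.

    Each pixel is thresholded; the quantization error is diffused to
    the unprocessed neighbors with the 7/16, 3/16, 5/16, 1/16 weights.
    """
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16      # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16  # below-left
                img[y + 1, x] += err * 5 / 16          # below
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16  # below-right
    return out

# A flat mid-gray patch halftones to an alternating black/white pattern
# whose local mean tracks the input gray level.
gray = np.full((64, 64), 127.0)
halftone = floyd_steinberg(gray)
```

Because the quantization error is carried forward rather than discarded, the average of the output matches the input tone, which is the source of the algorithm's high image quality mentioned in the abstract.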

  4. Evolution of error diffusion

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.

    1998-12-01

    As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm - to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.

  5. New Gear Transmission Error Measurement System Designed

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2001-01-01

    The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/mm, giving a resolution in the time domain of better than 0.1 mm, and discrimination in the frequency domain of better than 0.01 mm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
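The differential arrangement described above can be illustrated with a toy model. The 40 mV/mm scale factor is taken from the abstract; the sign convention (relative grating rotation modulating the two light paths with opposite sign, transverse vibration modulating both with the same sign) is an assumption made for illustration, not a detail from the source.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000)

# Hypothetical motions, in mm: the quantity of interest and the noise.
te = 0.05 * np.sin(2 * np.pi * 10 * t)    # transmission error
vib = 0.20 * np.sin(2 * np.pi * 37 * t)   # transverse shaft vibration

GAIN = 40.0                               # mV per mm (from the abstract)

# Assumed model: rotation enters the two paths with opposite sign,
# translation with the same sign (common mode).
v_top = GAIN * (te + vib)                 # mV
v_bot = GAIN * (-te + vib)                # mV

# Differential amplifier: subtraction doubles the rotation signal
# and cancels the common-mode vibration term.
te_recovered = (v_top - v_bot) / (2 * GAIN)   # back to mm
```

Under these assumptions the subtraction rejects the transverse vibration exactly, which is the stated purpose of using parallel light paths at the top and bottom of the gears.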

  6. Frequency of Aminoglycoside-Modifying Enzymes and ArmA Among Different Sequence Groups of Acinetobacter baumannii in Iran.

    PubMed

    Hasani, Alka; Sheikhalizadeh, Vajihe; Ahangarzadeh Rezaee, Mohammad; Rahmati-Yamchi, Mohammad; Hasani, Akbar; Ghotaslou, Reza; Goli, Hamid Reza

    2016-07-01

    We evaluated aminoglycoside resistance in 87 Acinetobacter baumannii strains isolated from four hospitals located in the North West region of Iran and typed them into sequence groups (SGs) using a trilocus sequence-based scheme to compare their clonal relationships with international clones. Resistance toward aminoglycosides was assayed by minimum inhibitory concentration (MIC), and the presence of aminoglycoside-modifying enzyme (AME)- and ArmA-encoding genes was evaluated in the different SGs. The majority of isolates belonged to SG1 (39%), SG2 (33.3%), and SG3 (12.6%), whereas the remaining ones were assigned to six novel variants of SGs. MIC determination revealed netilmicin as the most and kanamycin as the least active aminoglycosides against all groups. Among the varied SGs, isolates of SG2 showed more susceptibility toward all tested aminoglycosides. The APH(3')-VIa-encoding gene was predominant in SG1 (47%), SG2 (62%), and SG6-9 (100%). However, AAC(3')-Ia (100%) and ANT(2')-Ia (90.9%) were the dominant AMEs in SG3. There was a significant association between harboring of aminoglycoside resistance genes and specific aminoglycosides: the gene encoding APH(3')-VIa was associated with resistance to amikacin and kanamycin, whereas ANT(2')-Ia was related to resistance toward gentamicin and tobramycin in SG2. In SG1, tobramycin resistance was correlated with harboring of AAC(6')-Ib. Screening of armA demonstrated the presence of this gene in SG1 (58.8%), SG2 (10.3%), as well as SG3 (9%). Our results revealed a definite correlation between the phenotypes and genotypes of aminoglycoside resistance in different clonal lineages of A. baumannii. PMID:26779992

  7. The Efficiency of a Modified Real-time Wireless Brain Electric Activity Calculator to Reveal the Subliminal Psychological Instability of Surgeons that Possibly Leads to Errors in Surgical Procedures.

    PubMed

    Akimoto, Saori; Ohdaira, Takeshi; Nakamura, Seiji; Yamazaki, Tokihisa; Yano, Shinichiro; Higashihara, Nobuhiko

    2015-05-01

    We know that experienced endoscopic surgeons, despite having extensive training, may make a rare but fatal mistake. Prof. Takeshi Ohdaira developed a device visualizing brain action potential to reflect the latent psychological instability of the surgeon. The Ohdaira system consists of three components: a real-time brain action potential measurement unit, a simulated abdominal cavity, and an intra-abdominal monitor. We conducted two psychological stress tests by using an artificial laparoscopic simulator and an animal model. There were five male subjects aged between 41 and 61 years. The psychological instability scores were considered to reflect, to some extent, the number of years of experience of the surgeon in medical care. However, very high inter-individual variability was noted. Furthermore, we discovered the following: 1) bleeding during simulated laparoscopic surgery--an episode generally considered to be psychological stress for the surgeon--was not the greatest psychological stress; 2) the greatest psychological stress was elicited at the moment when the surgeon was faced with a setting in which his anatomical knowledge was lacking or a setting in which he presumed imminent bleeding; and 3) the excessively activated action potential of the brain possibly leads to a procedural error during surgery. A modified brain action potential measurement unit can reveal the latent psychological instability of surgeons that possibly leads to errors in surgical procedures. PMID:26054987

  8. Design and implementation of a new modified sliding mode controller for grid-connected inverter to controlling the voltage and frequency.

    PubMed

    Ghanbarian, Mohammad Mehdi; Nayeripour, Majid; Rajaei, Amirhossein; Mansouri, Mohammad Mahdi

    2016-03-01

    As the output power of a microgrid with renewable energy sources should be regulated based on the grid conditions, using robust controllers to share and balance the power in order to regulate the voltage and frequency of the microgrid is critical. Therefore a proper control system is necessary for updating the reference signals and determining the proportion of each inverter in the microgrid control. This paper proposes a new adaptive method which remains robust while the conditions are changing. The controller is based on a modified sliding mode controller which adapts to linear and nonlinear loads. The performance of the proposed method is validated by presenting simulation results and experimental laboratory results. PMID:26704720

  9. Structure and properties of the surface of a laminate polyimide-fluoropolymer film modified in a low-frequency glow discharge plasma

    SciTech Connect

    Gil'man, A.B.; Kuznetsov, A.A.; Vengerskaya, L.E.

    1995-07-01

    Variations in the wettability of a PMR-351 laminate (polyimide and tetrafluoroethylene-hexafluoropropylene copolymer) film under the action of a low-frequency glow discharge plasma, depending on the duration and conditions of the treatment, were studied. For the modified films, the stability of contact angles θ upon film storage in air was studied. Experimental values of θ were used for calculations of the total free energy of the surface and its polar and dispersion components. Chemical and structural changes on the polyimide and the fluoropolymer surface of the film under the action of plasma were studied using the technique of multiple internal-reflection IR spectroscopy. A possible mechanism of these changes is suggested.

  10. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  11. High frequency electromagnetic properties of interstitial-atom-modified Ce₂Fe₁₇Nₓ and its composites

    SciTech Connect

    Li, L. Z.; Wei, J. Z.; Xia, Y. H.; Wu, R.; Yun, C.; Yang, Y. B.; Yang, W. Y.; Du, H. L.; Han, J. Z.; Liu, S. Q.; Yang, Y. C.; Wang, C. S.; Yang, J. B. E-mail: jbyang@pku.edu.cn

    2014-07-14

    The magnetic and microwave absorption properties of the interstitial atom modified intermetallic compound Ce₂Fe₁₇Nₓ have been investigated. The Ce₂Fe₁₇Nₓ compound shows a planar anisotropy with saturation magnetization of 1088 kA/m at room temperature. The Ce₂Fe₁₇Nₓ paraffin composite with a mass ratio of 1:1 exhibits a permeability of μ′ = 2.7 at low frequency, together with a reflection loss of −26 dB at 6.9 GHz with a thickness of 1.5 mm and −60 dB at 2.2 GHz with a thickness of 4.0 mm. It was found that this composite increases the Snoek limit and exhibits both high working frequency and permeability due to its high saturation magnetization and high ratio of the c-axis anisotropy field to the basal plane anisotropy field. Hence, it is possible that this composite can be used as a high-performance thin layer microwave absorber.

  12. Analyzing the properties of acceptor mode in two-dimensional plasma photonic crystals based on a modified finite-difference frequency-domain method

    SciTech Connect

    Zhang, Hai-Feng; Ding, Guo-Wen; Lin, Yi-Bing; Chen, Yu-Qing

    2015-05-15

    In this paper, the properties of the acceptor mode in two-dimensional plasma photonic crystals (2D PPCs) composed of homogeneous and isotropic dielectric cylinders inserted into a nonmagnetized plasma background with square lattices under transverse-magnetic waves are theoretically investigated by a modified finite-difference frequency-domain (FDFD) method with a supercell technique, in which the symmetry of every supercell is broken by removing a central rod. A new FDFD method is developed to calculate the band structures of such PPCs. The novel FDFD method adopts a general function to describe the distribution of dielectric in the present PPCs, which can easily transform the complicated nonlinear eigenvalue equation into a simple linear equation. The details of convergence and effectiveness of the proposed FDFD method are analyzed using a numerical example. The simulated results demonstrate that the proposed FDFD method achieves sufficient accuracy compared to the plane wave expansion method, and that good convergence is obtained if the number of meshed grids is large enough. As a comparison, two different configurations of photonic crystals (PCs) with a similar defect are theoretically investigated. Compared to the conventional dielectric-air PCs, not only does the acceptor mode have a higher frequency but an additional photonic bandgap (PBG) can also be found in the low frequency region. The calculated results also show that the PBGs of the proposed PPCs can be enlarged as the point defect is introduced. The influences of the parameters of the present PPCs on the properties of the acceptor mode are also discussed in detail. Numerical simulations reveal that the acceptor mode in the present PPCs can be easily tuned by changing those parameters. These results hold promise for designing tunable applications in signal processing or time delay devices based on the present PPCs.

  13. A non-paraxial scattering theory for specifying and analyzing fabrication errors in optical surfaces

    NASA Astrophysics Data System (ADS)

    Vernold, Cynthia Louise

    There are three fundamental mechanisms in optical systems that contribute to image degradation: aperture diffraction, geometrical aberrations caused by residual design errors, and scattering effects due to optical fabrication errors. Diffraction effects, as well as optical design errors and fabrication errors that are laterally large in nature (generally referred to as figure errors), are accurately modeled using conventional ray trace analysis codes. However, these ray-trace codes fall short of providing a complete picture of image degradation; they routinely ignore fabrication-induced errors with spatial periods that are too small to be considered figure errors. These errors are typically referred to as mid-spatial-frequency (ripple) and high-spatial-frequency (micro-roughness) surface errors. These overlooked, but relevant, fabrication-induced errors affect image quality in different ways. Mid-spatial-frequency errors produce small-angle scatter that tends to widen the diffraction-limited image core (i.e. for a system with a circular exit pupil, this is the central lobe of the Airy pattern), and in doing so, reduces the optical resolution of a system. High-spatial-frequency errors tend to scatter energy out of the image core into a wide-angle halo, causing a reduction in image contrast. Micro-roughness and ripple are inherent aspects of the less conventional, small-tool-based optical fabrication approaches. It is especially important in these cases to specify these errors accurately during the design phase of a project, and deterministically monitor and control them during the fabrication phase of a project. Surprisingly, most current approaches to this issue employ some guessing and "gut feel" based on past experience, because accurate theories and analysis tools are not readily available. This dissertation takes the first step towards solving this problem by describing a Fourier-based approach for classifying and quantifying surface errors that can be

  14. Combination of modified mixing technique and low frequency ultrasound to control the elution profile of vancomycin-loaded acrylic bone cement

    PubMed Central

    Wendling, A.; Mar, D.; Wischmeier, N.; Anderson, D.

    2016-01-01

    provides a reasonable means for increasing both short- and long-term antibiotic elution without affecting mechanical strength. Cite this article: Dr. T. McIff. Combination of modified mixing technique and low frequency ultrasound to control the elution profile of vancomycin-loaded acrylic bone cement. Bone Joint Res 2016;5:26–32. DOI: 10.1302/2046-3758.52.2000412 PMID:26843512

  15. Phase Errors and the Capture Effect

    SciTech Connect

    Blair, J., and Machorro, E.

    2011-11-01

    This slide-show presents analysis of spectrograms and the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero: this is called the capture effect because the largest signal captures the phase and frequency determination.
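The bound described in the summary can be checked numerically: for a unit signal phasor plus a noise phasor of relative amplitude r < 1 and uniformly random phase, the phase error arg(1 + r*exp(i*theta)) is bounded by arcsin(r), well under 90 degrees, and it averages to zero by symmetry. A minimal sketch (illustrative numbers, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(42)
r = 0.5                                   # noise amplitude relative to signal
theta = rng.uniform(-np.pi, np.pi, 200_000)

# Phase error of (signal + noise) relative to the signal alone.
phase_err = np.degrees(np.angle(1.0 + r * np.exp(1j * theta)))

max_err = np.abs(phase_err).max()   # bounded by arcsin(0.5) = 30 degrees
mean_err = phase_err.mean()         # ~0: the larger signal "captures" the phase
```

Since the noise phasor can rotate the sum by at most arcsin(r) before the signal dominates again, the per-cycle error stays bounded and cancels over many cycles, which is the capture effect in miniature.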

  16. Using a modified time-reverse imaging technique to locate low-frequency earthquakes on the San Andreas Fault near Cholame, California

    NASA Astrophysics Data System (ADS)

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2015-11-01

    We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes within 5 km uncertainty without relying on high-amplitude P-wave arrivals and that produces similar hypocentral locations to methods that locate events by stacking hundreds of LFEs without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families. We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source-station distance and signal-to-noise ratio.

  17. Using a modified time-reverse imaging technique to locate low-frequency earthquakes on the San Andreas Fault near Cholame, California

    USGS Publications Warehouse

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2015-01-01

    We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes within 5 km uncertainty without relying on high-amplitude P-wave arrivals and that produces similar hypocentral locations to methods that locate events by stacking hundreds of LFEs without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families. We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source–station distance and signal-to-noise ratio.

  18. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    C-band and S-band radar error statistics recommended for use with the ground-tracking programs that process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line-of-sight scintillations were identified.
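The bias/high-frequency decomposition used in the report can be written as a simple measurement-error model: a slowly varying (here constant) bias term plus rapidly varying noise that may be correlated from sample to sample. The sketch below models the correlated noise as a first-order autoregressive process; all numerical values are illustrative, not the report's.

```python
import numpy as np

rng = np.random.default_rng(7)

N = 5000
bias = 12.0      # constant bias error, m (illustrative)
sigma_q = 3.0    # high-frequency noise standard deviation, m (illustrative)
rho = 0.6        # sample-to-sample correlation of the noise

# AR(1) noise: q[k] = rho * q[k-1] + w[k], with w scaled so that the
# stationary standard deviation of q equals sigma_q.
w = rng.normal(0.0, sigma_q * np.sqrt(1 - rho**2), N)
q = np.zeros(N)
for k in range(1, N):
    q[k] = rho * q[k - 1] + w[k]

# Total simulated radar range error: bias plus correlated noise.
range_error = bias + q
```

Averaging many samples recovers the bias (the noise cancels), while the lag-one correlation of the residual distinguishes correlated noise from white noise, which is why the report tracks the two error types separately.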

  19. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the 'worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the 'future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.

  20. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
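The mechanism the report warns about is easy to reproduce. Suppose a transducer factor is entered at two frequencies and the true factor varies linearly with log-frequency (common for antenna factors), while the analyzer interpolates linearly in frequency. All values below are hypothetical, chosen only to show the size such an error can reach.

```python
import math

# Transducer factor table entered into the analyzer (hypothetical values).
f1, tf1 = 100e6, 10.0    # Hz, dB
f2, tf2 = 1e9, 30.0      # Hz, dB

f = 300e6                # measurement frequency between the two entries

# "True" factor: linear in log10(f) between the table points.
frac_log = (math.log10(f) - math.log10(f1)) / (math.log10(f2) - math.log10(f1))
true_tf = tf1 + (tf2 - tf1) * frac_log

# Analyzer interpolating linearly in frequency instead.
lin_tf = tf1 + (tf2 - tf1) * (f - f1) / (f2 - f1)

# Amplitude error introduced purely by the interpolation mismatch, in dB.
error_db = lin_tf - true_tf
```

With these (made-up) table points the mismatch is roughly 5 dB, which is why the report stresses knowing which interpolation method the analyzer applies, or entering the factor at enough closely spaced frequencies that either method agrees.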

  1. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  2. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

    Windowing of design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) uses a region of interest by setting a requirement on response level and checks it with global RS predictions over the design space. This approach, however, is vulnerable since RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  3. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  4. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low-frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. PMID:23999403

  5. Modified hyper-Ramsey methods for the elimination of probe shifts in optical clocks

    NASA Astrophysics Data System (ADS)

    Hobson, R.; Bowden, W.; King, S. A.; Baird, P. E. G.; Hill, I. R.; Gill, P.

    2016-01-01

    We develop a method of modified hyper-Ramsey spectroscopy in optical clocks, achieving complete immunity to the frequency shifts induced by the probing fields themselves. Using particular pulse sequences with tailored phases, frequencies, and durations, we can derive an error signal centered exactly at the unperturbed atomic resonance with a steep discriminant which is robust against variations in the probe shift. We experimentally investigate the scheme using the magnetically induced ¹S₀-³P₀ transition in ⁸⁸Sr, demonstrating automatic suppression of a sizable 2 × 10⁻¹³ probe Stark shift to below 1 × 10⁻¹⁶ even with very large errors in shift compensation.

  6. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  7. Immediate error correction process following sleep deprivation.

    PubMed

    Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling

    2007-06-01

    Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulate cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation. PMID:17542943

  8. Error analysis of quartz crystal resonator applications

    SciTech Connect

    Lucklum, R.; Behling, C.; Hauptmann, P.; Cernosek, R.W.; Martin, S.J.

    1996-12-31

    Quartz crystal resonators in chemical sensing applications are usually configured as the frequency-determining element of an electrical oscillator. By contrast, the shear modulus determination of a polymer coating requires a complete impedance analysis. The first part of this contribution reports the errors made when common approximations are used to relate the frequency shift to the sorbed mass. In the second part the authors discuss different error sources in the procedure used to determine the shear parameters.
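    The "common approximation" relating frequency shift to sorbed mass in QCM work is usually the Sauerbrey relation; the sketch below uses standard quartz material constants and an illustrative 5 MHz crystal (the abstract does not give specific numbers). The relation assumes a thin, rigid film, which is precisely the assumption that fails for the viscoelastic polymer coatings whose shear parameters the authors analyze.

```python
import math

# Quartz material constants (CGS units) for AT-cut quartz.
RHO_Q = 2.648        # density of quartz, g/cm^3
MU_Q = 2.947e11      # shear modulus of AT-cut quartz, g/(cm*s^2)

def sauerbrey_shift(f0_hz, delta_m_g, area_cm2):
    """Frequency shift (Hz) predicted by the Sauerbrey relation for a
    thin, rigid mass layer on the crystal surface."""
    return -2.0 * f0_hz**2 * delta_m_g / (area_cm2 * math.sqrt(RHO_Q * MU_Q))

# Illustrative case: 5 MHz crystal, 1 microgram sorbed over 1 cm^2.
df = sauerbrey_shift(5e6, 1e-6, 1.0)
print(f"predicted shift: {df:.1f} Hz")  # about -56.6 Hz
```

    The resulting sensitivity (about 56.6 Hz per µg/cm² at 5 MHz) is the textbook figure; deviations from it for soft films are the errors the first part of the paper quantifies.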

  9. A Review of Errors in the Journal Abstract

    ERIC Educational Resources Information Center

    Lee, Eunpyo; Kim, Eun-Kyung

    2013-01-01

    (percentage) of abstracts that involved errors, the most error-prone part of the abstract, and the types and frequency of errors. The purpose was also expanded to compare the results with those of the previous…

  10. First results from HF (High-Frequency) oblique backscatter soundings to the northwest of College, Alaska using a modified ULCAR digisonde D-256

    NASA Astrophysics Data System (ADS)

    Hunsucker, Robert D.; Delana, Brett S.

    1989-03-01

    The modified Air Weather Service Digital Ionospheric Sounding System (DISS, AN/FMQ12) is described in detail. The HF sounding system at College, Alaska is used to investigate the behavior of ground scatter, plus E- and F-region direct backscatter caused by high-latitude ionospheric irregularities. These echoes may manifest themselves as clutter in the proposed Alaskan OTH radar system. Soundings to the northwest of College were identified as ground scatter plus several types of auroral clutter echoes.

  11. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), which provide synchrophasor measurements, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is the application most likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
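    A back-of-the-envelope sketch of why dynamic line rating is so sensitive to phase-angle error: line power flow follows the standard lossless relation P = V1·V2·sin(δ)/X, and since the angle difference δ across a transmission line is typically small, even a fraction-of-a-degree measurement error produces a large relative error in the inferred power. All numbers below are illustrative, not taken from the report.

```python
import math

# Illustrative per-unit line parameters (not from the report).
V1 = V2 = 1.0                  # bus voltage magnitudes, per unit
X = 0.1                        # line reactance, per unit
delta = math.radians(2.0)      # true angle difference across the line
err = math.radians(0.5)        # hypothetical PMU phase-angle error

# Lossless power-transfer relation: P = V1*V2*sin(delta)/X.
p_true = V1 * V2 * math.sin(delta) / X
p_meas = V1 * V2 * math.sin(delta + err) / X

print(f"relative power error: {(p_meas - p_true) / p_true:.1%}")
```

    A 0.5-degree angle error against a 2-degree true angle difference inflates the estimated flow by roughly 25%, which illustrates why applications built on absolute angle differences are far more exposed than event or islanding detection.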

  12. Topology of modified helical gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Zhang, J.; Handschuh, R. F.; Coy, J. J.

    1989-01-01

    The topology of several types of modified surfaces for helical gears is proposed. The modified surfaces allow the absorption of a linear or almost linear function of transmission errors, which are caused by gear misalignment, and improve the contact of the gear tooth surfaces. Principles and corresponding programs for computer-aided simulation of the meshing and contact of gears have been developed. The results of this investigation are illustrated with numerical examples.

  13. How does human error affect safety in anesthesia?

    PubMed

    Gravenstein, J S

    2000-01-01

    Anesthesia morbidity and mortality, while acceptable, are not zero. Most mishaps have a multifactorial cause in which human error plays a significant part. Good design of anesthesia machines, ventilators, and monitors can prevent some, but not all, human error. Attention to the system in which the errors occur is important. Modern training with simulators is designed to reduce the frequency of human errors and to teach anesthesiologists how to deal with the consequences of such errors. PMID:10601526

  14. Nonlinear amplification of side-modes in frequency combs.

    PubMed

    Probst, R A; Steinmetz, T; Wilken, T; Hundertmark, H; Stark, S P; Wong, G K L; Russell, P St J; Hänsch, T W; Holzwarth, R; Udem, Th

    2013-05-20

    We investigate how suppressed modes in frequency combs are modified upon frequency doubling and self-phase modulation. We find, both experimentally and by using a simplified model, that these side-modes are amplified relative to the principal comb modes. Whereas frequency doubling increases their relative strength by 6 dB, the growth due to self-phase modulation can be much stronger and generally increases with nonlinear propagation length. Upper limits for this effect are derived in this work. This behavior has implications for high-precision calibration of spectrographs with frequency combs used for example in astronomy. For this application, Fabry-Pérot filter cavities are used to increase the mode spacing to exceed the resolution of the spectrograph. Frequency conversion and/or spectral broadening after non-perfect filtering reamplify the suppressed modes, which can lead to calibration errors. PMID:23736390

  15. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  16. Meal frequencies modify the effect of common genetic variants on body mass index in adolescents of the northern Finland birth cohort 1986.

    PubMed

    Jääskeläinen, Anne; Schwab, Ursula; Kolehmainen, Marjukka; Kaakinen, Marika; Savolainen, Markku J; Froguel, Philippe; Cauchi, Stéphane; Järvelin, Marjo-Riitta; Laitinen, Jaana

    2013-01-01

    Recent studies suggest that meal frequencies influence the risk of obesity in children and adolescents. It has also been shown that multiple genetic loci predispose to obesity as early as in youth. However, it is unknown whether meal frequencies could modulate the association between single nucleotide polymorphisms (SNPs) and the risk of obesity. We examined the effect of two weekday meal patterns, 5 meals including breakfast (regular) and ≤ 4 meals with or without breakfast (meal skipping), on the genetic susceptibility to increased body mass index (BMI) in Finnish adolescents. Eight variants representing 8 early-life obesity-susceptibility loci, including FTO and MC4R, were genotyped in 2215 boys and 2449 girls aged 16 years from the population-based Northern Finland Birth Cohort 1986. A genetic risk score (GRS) was calculated for each individual by summing the number of BMI-increasing alleles across the 8 loci. Weight and height were measured and dietary data were collected using self-administered questionnaires. Among meal skippers, the difference in BMI between the high-GRS and low-GRS (≥ 8 and < 8 BMI-increasing alleles) groups was 0.90 (95% CI 0.63,1.17) kg/m(2), whereas in regular eaters, this difference was 0.32 (95% CI 0.06,0.57) kg/m(2) (p interaction = 0.003). The effect of each MC4R rs17782313 risk allele on BMI in meal skippers (0.47 [95% CI 0.22,0.73] kg/m(2)) was nearly three-fold compared with regular eaters (0.18 [95% CI -0.06,0.41] kg/m(2)) (p interaction = 0.016). Further, the per-allele effect of the FTO rs1421085 was 0.24 (95% CI 0.05,0.42) kg/m(2) in regular eaters and 0.46 (95% CI 0.27,0.66) kg/m(2) in meal skippers but the interaction between FTO genotype and meal frequencies on BMI was significant only in boys (p interaction = 0.015). In summary, the regular five-meal pattern attenuated the increasing effect of common SNPs on BMI in adolescents. Considering the epidemic of obesity in youth, the promotion of regular eating may have
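    The genetic risk score described above is an unweighted allele count; a minimal sketch under that assumption (the genotype values are invented for illustration, and the dichotomization cut-off of 8 alleles comes from the abstract):

```python
# Sketch of the study's genetic risk score (GRS): count BMI-increasing
# alleles (0, 1, or 2 per SNP) across the 8 loci, then dichotomize at 8.

def genetic_risk_score(allele_counts):
    """allele_counts: list of 8 ints, each the number of BMI-increasing
    alleles (0-2) at one locus (e.g. FTO, MC4R, ...)."""
    assert len(allele_counts) == 8
    assert all(0 <= c <= 2 for c in allele_counts)
    return sum(allele_counts)

def risk_group(grs, cutoff=8):
    """High-GRS means >= cutoff BMI-increasing alleles."""
    return "high-GRS" if grs >= cutoff else "low-GRS"

example = [1, 2, 0, 1, 1, 2, 1, 1]  # hypothetical individual's genotypes
grs = genetic_risk_score(example)
print(grs, risk_group(grs))  # 9 high-GRS
```

    The reported interaction then compares the high-GRS vs. low-GRS BMI difference separately within meal skippers and regular eaters.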

  17. A Frequency and Error Analysis of the Use of Determiners, the Relationships between Noun Phrases, and the Structure of Discourse in English Essays by Native English Writers and Native Chinese, Taiwanese, and Korean Learners of English as a Second Language

    ERIC Educational Resources Information Center

    Gressang, Jane E.

    2010-01-01

    Second language (L2) learners notoriously have trouble using articles in their target languages (e.g., "a", "an", "the" in English). However, researchers disagree about the patterns and causes of these errors. Past studies have found that L2 English learners: (1) Predominantly omit articles (White 2003, Robertson 2000), (2) Overuse "the" (Huebner…

  18. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.

  19. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  20. [The influence of low-frequency pulsed electric and magnetic signals or their combination on the normal and modified fibroblasts (an experimental study)].

    PubMed

    Ulitko, M V; Medvedeva, S Yu; Malakhov, V V

    2016-01-01

    Clinical studies give evidence of the beneficial preventive and therapeutic effects of the «Tiline-EM» physiotherapeutic device, which applies low-frequency pulsed electric current and magnetic field to the skin regions onto which discomfort and pain sensations are directly projected, to reflexively active sites and zones, and to trigger zones. Efficient application of the device requires an understanding of the general mechanisms underlying such action on living systems, including those operating at the cellular and subcellular levels. The objective of the present study was to investigate the separate and combined effects produced by the low-frequency pulses of electric current and magnetic field generated by the «Tiline-EM» device on the viability, proliferative activity, and morphofunctional characteristics of normal skin fibroblasts and the transformed fibroblast line K-22. It was demonstrated that the biological effects of the electric and magnetic signals vary depending on the type of cell culture and the mode of impact. The transformed fibroblasts proved to be more sensitive to the separate and combined effects of electric and magnetic pulses than the normal skin fibroblasts. The combined action of the electric and magnetic signals had the greatest influence on both varieties of fibroblasts, manifesting itself as enhanced viability and elevated proliferative and synthetic activity in the cultures of transformed fibroblasts, and as accelerated cell differentiation in the cultures of normal fibroblasts. The stimulation of dermal fibroblast differentiation in response to the combined electric and magnetic treatment is of interest from the standpoint of the physiotherapeutic use of the «Tiline-EM» device for the purpose of obtaining fibroblast cultures to be employed in regenerative therapy and

  1. Motion error compensation of multi-legged walking robots

    NASA Astrophysics Data System (ADS)

    Wang, Liangwen; Chen, Xuedong; Wang, Xinjie; Tang, Weigang; Sun, Yi; Pan, Chunmei

    2012-07-01

    Because of errors in the structure and kinematic parameters of multi-legged walking robots, the motion trajectory of a robot diverges from the ideal trajectory during movement. Since existing error compensation is usually applied to the control of manipulator arms, error compensation for multi-legged robots has seldom been explored. In order to reduce the kinematic error of such robots, a feedforward-based motion error compensation method for multi-legged mobile robots is proposed to improve motion precision. The locus error of the robot body is measured while the robot moves along a given track. The error of the driven joint variables is then obtained from an error calculation model in terms of the locus error of the robot body. This error value is used to compensate the driven joint variables and to modify the control model of the robot, which then drives the robot according to the modified control model. A model of the relation between the robot's locus errors and its kinematic variable errors is set up to achieve the kinematic error compensation. On the basis of the inverse kinematics of a multi-legged walking robot, the relation between the error of the motion trajectory and the driven joint variables is discussed, and an equation set is obtained that expresses the relation among the error of the driven joint variables, the structure parameters, and the error of the robot's locus. Taking MiniQuad as an example, motion error compensation is studied as the robot moves along a straight-line track. The actual locus errors of the robot body were measured before and after compensation in the test. According to the test, the variations of the actual coordinate values of the robot centroid in the x-direction and z-direction are reduced by more than half. The kinematic errors of the robot body are thus effectively reduced by the feedforward-based motion error compensation method.
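    The feedforward idea, measure the body locus error, map it back to driven-joint errors through a linearized model, then correct the joint commands, can be sketched as follows. The matrix J and all numeric values are illustrative placeholders, not the MiniQuad kinematics.

```python
import numpy as np

# Linearized model J relating joint-variable errors to body locus errors:
# e_body ≈ J @ dq. Values are illustrative only.
J = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.1],
              [0.0, 0.3, 1.0]])

e_body = np.array([0.004, -0.002, 0.001])   # measured body locus error (m)

# Invert the model to estimate the driven-joint error behind the locus error.
dq = np.linalg.solve(J, e_body)

# Feedforward compensation: subtract the estimated error from the commands.
q_cmd = np.array([0.50, 0.30, 0.20])        # nominal joint commands
q_corrected = q_cmd - dq

# Predicted residual body error after compensation (ideally ~0).
residual = e_body - J @ dq
print(np.allclose(residual, 0.0))  # True
```

    In practice J is obtained from the robot's inverse kinematics and structure parameters, and the compensation is only as good as that linearized model, hence the need for the measured before/after comparison reported in the abstract.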

  2. Systemic factors of errors in the case identification process of the national routine health information system: A case study of Modified Field Health Services Information System in the Philippines

    PubMed Central

    2011-01-01

    Background The quality of data in national health information systems has been questionable in most developing countries. However, the mechanisms of errors in the case identification process are not fully understood. This study aimed to investigate the mechanisms of errors in the case identification process in the existing routine health information system (RHIS) in the Philippines by measuring the risk of committing errors for health program indicators used in the Field Health Services Information System (FHSIS 1996), and characterizing those indicators accordingly. Methods A structured questionnaire on the definitions of 12 selected indicators in the FHSIS was administered to 132 health workers in 14 selected municipalities in the province of Palawan. The proportion of correct answers (difficulty index) and the disparity between the proportions of correct answers in the higher- and lower-scoring groups (discrimination index) were calculated, and the patterns of wrong answers for each of the 12 items were abstracted from 113 valid responses. Results None of the 12 items reached a difficulty index of 1.00. The average difficulty index of the 12 items was 0.266 and the discrimination index that showed a significant difference was 0.216 and above. Compared with these two cut-offs, six items showed non-discrimination against lower difficulty indices of 0.035 (4/113) to 0.195 (22/113), two items showed a positive discrimination against lower difficulty indices of 0.142 (16/113) and 0.248 (28/113), and four items showed a positive discrimination against higher difficulty indices of 0.469 (53/113) to 0.673 (76/113). 
Conclusions The results suggest three characteristics of definitions of indicators such as those that are (1) unsupported by the current conditions in the health system, i.e., (a) data are required from a facility that cannot directly generate the data and, (b) definitions of indicators are not consistent with its corresponding program; (2) incomplete or ambiguous, which allow
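    The two item statistics in the Methods can be sketched with classical test-theory definitions. The grouping rule used for the discrimination index below (upper and lower 27% of total scorers) is a common convention and an assumption here; the paper's exact split of "higher and lower scored groups" may differ, and the data are invented.

```python
# Item-analysis sketch: difficulty and discrimination indices.

def difficulty_index(item_responses):
    """Proportion of respondents answering the item correctly (1 = correct)."""
    return sum(item_responses) / len(item_responses)

def discrimination_index(item_responses, total_scores, frac=0.27):
    """Difference in the item's proportion correct between the
    highest- and lowest-scoring groups on the whole questionnaire."""
    n = len(item_responses)
    k = max(1, int(n * frac))
    order = sorted(range(n), key=lambda i: total_scores[i])
    low, high = order[:k], order[-k:]
    p_high = sum(item_responses[i] for i in high) / k
    p_low = sum(item_responses[i] for i in low) / k
    return p_high - p_low

# Hypothetical data: 10 respondents, their answers on one item and totals.
item = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]
totals = [9, 3, 4, 8, 2, 7, 10, 1, 5, 6]
print(difficulty_index(item))             # 0.5
print(discrimination_index(item, totals)) # 1.0
```

    An item with a low difficulty index and no discrimination, the pattern found for six of the twelve FHSIS indicators, suggests that health workers misunderstand the definition systematically rather than randomly.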

  3. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  4. New Modified Band Limited Impedance (BLIMP) Inversion Method Using Envelope Attribute

    NASA Astrophysics Data System (ADS)

    Maulana, Z. L.; Saputro, O. D.; Latief, F. D. E.

    2016-01-01

    The earth attenuates the high frequencies of the seismic wavelet, and the lowest frequencies cannot be recorded by low-quality geophones. The low frequencies (0-10 Hz) that are consequently absent from seismic data are important for obtaining a good result in acoustic impedance (AI) inversion. AI is important for determining reservoir quality, since AI can be converted to reservoir properties such as porosity, permeability, and water saturation. The low frequencies can be supplied from impedance logs (AI logs), from velocity analysis, or from a combination of both. In this study, we propose that the low frequencies can instead be obtained from the envelope seismic attribute. The new method is essentially a modified BLIMP (Band Limited Impedance) inversion, in which the AI log used by BLIMP is substituted with the envelope attribute. In the low-frequency band (0-10 Hz), the envelope attribute produces high amplitude, and this low-frequency content is used to replace the low frequencies taken from AI logs in BLIMP. The linear trend in this method is still acquired from the AI logs. The method is applied to synthetic seismograms created from the impedance log of well ‘X’. The mean squared error of the modified BLIMP inversion is 2-4% per trace (the variation in error is caused by different normalization constants), lower than the 8% error of the conventional BLIMP inversion. The new method is also applied to the Marmousi2 dataset with promising results; the modified BLIMP inversion of Marmousi2 using a single AI log is better than that produced by the conventional method.
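    The central observation, that the envelope of a band-limited trace carries low-frequency (0-10 Hz) energy the trace itself lacks, can be verified with a small sketch. The FFT-based Hilbert transform and the synthetic amplitude-modulated trace below are illustrative, not the paper's data or algorithm.

```python
import numpy as np

def envelope(trace):
    """Instantaneous amplitude via the analytic signal (FFT-based Hilbert)."""
    n = len(trace)
    spec = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0      # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0    # Nyquist bin kept as-is
    return np.abs(np.fft.ifft(spec * h))

dt = 0.004                              # 4 ms sampling
t = np.arange(0, 2.0, dt)
# Band-limited "trace": a 30 Hz carrier with a slowly varying (2 Hz) amplitude.
trace = (1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)) * np.cos(2 * np.pi * 30.0 * t)

env = envelope(trace)
freqs = np.fft.rfftfreq(len(t), dt)
trace_low = np.abs(np.fft.rfft(trace))[freqs < 10].sum()
env_low = np.abs(np.fft.rfft(env))[freqs < 10].sum()
print(env_low > 10 * trace_low)  # True: envelope carries the 0-10 Hz energy
```

    The trace's spectrum sits around 30 Hz, but its envelope recovers the 2 Hz modulation (plus a DC level), which is exactly the band the modified BLIMP scheme needs to fill in.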

  5. Refractive errors in children.

    PubMed

    Tongue, A C

    1987-12-01

    Optical correction of refractive errors in infants and young children is indicated when the refractive errors are sufficiently large to cause unilateral or bilateral amblyopia, if they are impairing the child's ability to function normally, or if the child has accommodative strabismus. Screening for refractive errors is important and should be performed as part of the annual physical examination in all verbal children. Screening for significant refractive errors in preverbal children is more difficult; however, the red reflex test of Bruckner is useful for the detection of anisometropic refractive errors. The photorefraction test, which is an adaptation of Bruckner's red reflex test, may prove to be a useful screening device for detecting bilateral as well as unilateral refractive errors. Objective testing as well as subjective testing enables ophthalmologists to prescribe proper optical correction for refractive errors for infants and children of any age. PMID:3317238

  6. Error-prone signalling.

    PubMed

    Johnstone, R A; Grafen, A

    1992-06-22

    The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361

  7. Derivational Morphophonology: Exploring Errors in Third Graders' Productions

    ERIC Educational Resources Information Center

    Jarmulowicz, Linda; Hay, Sarah E.

    2009-01-01

    Purpose: This study describes a post hoc analysis of segmental, stress, and syllabification errors in third graders' productions of derived English words with the stress-changing suffixes "-ity" and "-ic." We investigated whether (a) derived word frequency influences error patterns, (b) stress and syllabification errors always co-occur, and (c)…

  8. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  9. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  10. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  11. CORRELATED ERRORS IN EARTH POINTING MISSIONS

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve; Patt, Frederick S.

    2005-01-01

    Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. 
    Some error effects will not be obvious from attitude sensor…

  12. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES..., modifying or otherwise disturbing an otherwise appropriate ruling or order or act, unless refusal to...

  13. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood that hardware errors will manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
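The compare-at-the-end idea in this abstract can be sketched as below. The workload (a simple deterministic arithmetic loop) and the digest-based comparison are assumptions for illustration; a real implementation would size the workload to heat and stress the processor for an extended period. Any bit flip anywhere during the run changes the final digest, so a single comparison at the end catches faults that occurred at any point.

```python
import hashlib

def stress_workload(n=10000, seed=1):
    """Deterministic compute-heavy loop; the running SHA-256 digest makes the
    final output sensitive to an error in any single iteration."""
    h = hashlib.sha256()
    x = seed
    for _ in range(n):
        # 64-bit linear congruential step (constants are illustrative).
        x = (x * 6364136223846793005 + 1442695040888963407) % (1 << 64)
        h.update(x.to_bytes(8, "little"))
    return h.hexdigest()

reference = stress_workload()   # e.g., obtained on known-good hardware
candidate = stress_workload()   # the run under thermal/stress conditions

hardware_error_detected = (candidate != reference)
print(hardware_error_detected)  # False on a correctly functioning machine
```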

  14. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
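The contrast between the two update rules can be made concrete with a minimal sketch. TER below follows the familiar Rescorla-Wagner form (error computed against the summed prediction of all present cues); LER computes each cue's error against that cue's own prediction only. The learning rate and trial structure are illustrative assumptions, not the paper's simulations.

```python
def train(rule, trials, n_cues=2, lr=0.2):
    """Train associative weights under a TER or LER update rule.
    trials: list of ((cue activations), outcome) pairs."""
    w = [0.0] * n_cues
    for cues, outcome in trials:
        total = sum(w[j] * c for j, c in enumerate(cues))  # compound prediction
        for i, x in enumerate(cues):
            if x:
                error = (outcome - total) if rule == "TER" else (outcome - w[i])
                w[i] += lr * error
    return w

# Compound conditioning: two cues always presented together with the outcome.
trials = [((1, 1), 1.0)] * 50
w_ter = train("TER", trials)
w_ler = train("LER", trials)
print(w_ter)  # the cues share the outcome (weights sum to about 1)
print(w_ler)  # each cue independently approaches 1.0
```

The divergence shown, cue competition under TER versus independent acquisition under LER, is exactly the kind of behavioral prediction the paper uses to compare the two model families.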

  15. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principle design agent as is increasingly more common for poorly testable high performance space systems.

  16. A Review of the Literature on Computational Errors With Whole Numbers. Mathematics Education Diagnostic and Instructional Centre (MEDIC).

    ERIC Educational Resources Information Center

    Burrows, J. K.

    Research on error patterns associated with whole number computation is reviewed. Details of the results of some of the individual studies cited are given in the appendices. In Appendix A, 33 addition errors, 27 subtraction errors, 41 multiplication errors, and 41 division errors are identified, and the frequency of these errors made by 352…

  17. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  18. Medical error and disclosure.

    PubMed

    White, Andrew A; Gallagher, Thomas H

    2013-01-01

    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. PMID:24182370

  19. Modified cyanobacteria

    DOEpatents

    Vermaas, Willem F J.

    2014-06-17

    Disclosed is a modified photoautotrophic bacterium comprising genes of interest that are modified in terms of their expression and/or coding region sequence, wherein modification of the genes of interest increases production of a desired product in the bacterium relative to the amount of the desired product produced in a photoautotrophic bacterium that is not modified with respect to the genes of interest.

  20. Frequency-Offset Cartesian Feedback Based on Polyphase Difference Amplifiers

    PubMed Central

    Zanchi, Marta G.; Pauly, John M.; Scott, Greig C.

    2010-01-01

    A modified Cartesian feedback method called “frequency-offset Cartesian feedback” and based on polyphase difference amplifiers is described that significantly reduces the problems associated with quadrature errors and DC-offsets in classic Cartesian feedback power amplifier control systems. In this method, the reference input and feedback signals are down-converted and compared at a low intermediate frequency (IF) instead of at DC. The polyphase difference amplifiers create a complex control bandwidth centered at this low IF, which is typically offset from DC by 200–1500 kHz. Consequently, the loop gain peak does not overlap DC where voltage offsets, drift, and local oscillator leakage create errors. Moreover, quadrature mismatch errors are significantly attenuated in the control bandwidth. Since the polyphase amplifiers selectively amplify the complex signals characterized by a +90° phase relationship representing positive frequency signals, the control system operates somewhat like single sideband (SSB) modulation. However, the approach still allows the same modulation bandwidth control as classic Cartesian feedback. In this paper, the behavior of the polyphase difference amplifier is described through both the results of simulations, based on a theoretical analysis of their architecture, and experiments. We then describe our first printed circuit board prototype of a frequency-offset Cartesian feedback transmitter and its performance in open and closed loop configuration. This approach should be especially useful in magnetic resonance imaging transmit array systems. PMID:20814450
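A discrete-time toy illustration of why the low-IF comparison helps (an idealized sketch, not the paper's circuit; sample rate, frequencies, and offset values are invented): once the wanted signal is mixed up to the IF, a DC offset introduced by the comparison electronics sits at 0 Hz, outside the band where the loop gain is centered.

```python
import cmath

fs, f_if, n = 1000.0, 100.0, 1000   # sample rate, low IF, record length

# A 5 Hz complex baseband tone, mixed up to f_if (now at 105 Hz).
baseband = [0.5 * cmath.exp(2j * cmath.pi * 5.0 * k / fs) for k in range(n)]
mixed = [baseband[k] * cmath.exp(2j * cmath.pi * f_if * k / fs) for k in range(n)]

dc_offset = 0.2 + 0.1j              # offset/LO-leakage error added after mixing
observed = [s + dc_offset for s in mixed]

def dft_bin(x, f):
    """Normalized single-frequency DFT coefficient at frequency f."""
    return sum(v * cmath.exp(-2j * cmath.pi * f * k / fs)
               for k, v in enumerate(x)) / len(x)

# The offset energy stays at DC; the wanted signal sits at f_if + 5 Hz. A
# control bandwidth centered at the IF sees the signal but not the offset.
print(abs(dft_bin(observed, 0.0)))     # magnitude of the DC error term
print(abs(dft_bin(observed, 105.0)))   # magnitude of the wanted signal
```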

  1. Frequency-Offset Cartesian Feedback Based on Polyphase Difference Amplifiers.

    PubMed

    Zanchi, Marta G; Pauly, John M; Scott, Greig C

    2010-05-01

    A modified Cartesian feedback method called "frequency-offset Cartesian feedback" and based on polyphase difference amplifiers is described that significantly reduces the problems associated with quadrature errors and DC-offsets in classic Cartesian feedback power amplifier control systems. In this method, the reference input and feedback signals are down-converted and compared at a low intermediate frequency (IF) instead of at DC. The polyphase difference amplifiers create a complex control bandwidth centered at this low IF, which is typically offset from DC by 200-1500 kHz. Consequently, the loop gain peak does not overlap DC where voltage offsets, drift, and local oscillator leakage create errors. Moreover, quadrature mismatch errors are significantly attenuated in the control bandwidth. Since the polyphase amplifiers selectively amplify the complex signals characterized by a +90° phase relationship representing positive frequency signals, the control system operates somewhat like single sideband (SSB) modulation. However, the approach still allows the same modulation bandwidth control as classic Cartesian feedback. In this paper, the behavior of the polyphase difference amplifier is described through both the results of simulations, based on a theoretical analysis of their architecture, and experiments. We then describe our first printed circuit board prototype of a frequency-offset Cartesian feedback transmitter and its performance in open and closed loop configuration. This approach should be especially useful in magnetic resonance imaging transmit array systems. PMID:20814450

  2. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  3. Asymmetric error field interaction with rotating conducting walls

    SciTech Connect

    Paz-Soldan, C.; Brookhart, M. I.; Hegna, C. C.; Forest, C. B.

    2012-07-15

    The interaction of error fields with a system of differentially rotating conducting walls is studied analytically and compared to experimental data. Wall rotation causes eddy currents to persist indefinitely, attenuating and rotating the original error field. Superposition of error fields from external coils and plasma currents are found to break the symmetry in wall rotation direction. The vacuum and plasma eigenmodes are modified by wall rotation, with the error field penetration time decreased and the kink instability stabilized, respectively. Wall rotation is also predicted to reduce error field amplification by the marginally stable plasma.

  4. Design and analysis of vector color error diffusion halftoning systems.

    PubMed

    Damera-Venkata, N; Evans, B L

    2001-01-01

    Traditional error diffusion halftoning is a high quality method for producing binary images from digital grayscale images. Error diffusion shapes the quantization noise power into the high frequency regions where the human eye is the least sensitive. Error diffusion may be extended to color images by using error filters with matrix-valued coefficients to take into account the correlation among color planes. For vector color error diffusion, we propose three contributions. First, we analyze vector color error diffusion based on a new matrix gain model for the quantizer, which linearizes vector error diffusion. The model predicts the key characteristics of color error diffusion, esp. image sharpening and noise shaping. The proposed model includes linear gain models for the quantizer by Ardalan and Paulos (1987) and by Kite et al. (1997) as special cases. Second, based on our model, we optimize the noise shaping behavior of color error diffusion by designing error filters that are optimum with respect to any given linear spatially-invariant model of the human visual system. Our approach allows the error filter to have matrix-valued coefficients and diffuse quantization error across color channels in an opponent color representation. Thus, the noise is shaped into frequency regions of reduced human color sensitivity. To obtain the optimal filter, we derive a matrix version of the Yule-Walker equations which we solve by using a gradient descent algorithm. Finally, we show that the vector error filter has a parallel implementation as a polyphase filterbank. PMID:18255498
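The scalar baseline that this paper generalizes can be sketched in a few lines. The code below is classic Floyd-Steinberg error diffusion on a grayscale image (the standard weights 7/16, 3/16, 5/16, 1/16 are well established); the paper's vector color extension replaces the scalar quantization error with an error vector and the scalar weights with matrix-valued filter coefficients. The tiny flat-gray input is an illustrative assumption.

```python
def floyd_steinberg(img):
    """img: 2-D list of floats in [0, 1]; binarizes it in place and returns it."""
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y][x] = new
            err = old - new                      # quantization error
            # Diffuse the error to not-yet-processed neighbors; this shapes
            # the noise into high frequencies, where the eye is least sensitive.
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16),
                                (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    img[ny][nx] += err * wgt
    return img

flat_gray = [[0.5] * 8 for _ in range(8)]
halftone = floyd_steinberg(flat_gray)
density = sum(map(sum, halftone)) / 64
print(density)   # average tone is approximately preserved
```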

  5. Uncorrected refractive errors

    PubMed Central

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  6. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  7. Insulin use: preventable errors.

    PubMed

    2014-01-01

    Insulin is vital for patients with type 1 diabetes and useful for certain patients with type 2 diabetes. The serious consequences of insulin-related medication errors are overdose, resulting in severe hypoglycaemia, causing seizures, coma and even death; or underdose, resulting in hyperglycaemia and sometimes ketoacidosis. Errors associated with the preparation and administration of insulin are often reported, both outside and inside the hospital setting. These errors are preventable. By analysing reports from organisations devoted to medication error prevention and from poison control centres, as well as a few studies and detailed case reports of medication errors, various types of error associated with insulin use have been identified, especially in the hospital setting. Generally, patients know more about the practicalities of their insulin treatment than healthcare professionals with intermittent involvement. Medication errors involving insulin can occur at each step of the medication-use process: prescribing, data entry, preparation, dispensing and administration. When prescribing insulin, wrong-dose errors have been caused by the use of abbreviations, especially "U" instead of the word "units" (often resulting in a 10-fold overdose because the "U" is read as a zero), or by failing to write the drug's name correctly or in full. In electronic prescribing, the sheer number of insulin products is a source of confusion and, ultimately, wrong-dose errors, and often overdose. Prescribing, dispensing or administration software is rarely compatible with insulin prescriptions in which the dose is adjusted on the basis of the patient's subsequent capillary blood glucose readings, and can therefore generate errors. When preparing and dispensing insulin, a tuberculin syringe is sometimes used instead of an insulin syringe, leading to overdose. Other errors arise from confusion created by similar packaging, between different insulin products or between insulin and other

  8. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  9. Error Detection Processes during Observational Learning

    ERIC Educational Resources Information Center

    Badets, Arnaud; Blandin, Yannick; Wright, David L.; Shea, Charles H.

    2006-01-01

    The purpose of this experiment was to determine whether a faded knowledge of results (KR) frequency during observation of a model's performance enhanced error detection capabilities. During the observation phase, participants observed a model performing a timing task and received KR about the model's performance on each trial or on one of two…

  10. Verb-Form Errors in EAP Writing

    ERIC Educational Resources Information Center

    Wee, Roselind; Sim, Jacqueline; Jusoff, Kamaruzaman

    2010-01-01

    This study was conducted to identify and describe the written verb-form errors found in the EAP writing of 39 second year learners pursuing a three-year Diploma Programme from a public university in Malaysia. Data for this study, which were collected from a written 350-word discursive essay, were analyzed to determine the types and frequency of…

  11. Facts about Refractive Errors

    MedlinePlus

    ... the lens can cause refractive errors. What is refraction? Refraction is the bending of light as it passes ... rays entering the eye, causing a more precise refraction or focus. In many cases, contact lenses provide ...

  12. Errors in prenatal diagnosis.

    PubMed

    Anumba, Dilly O C

    2013-08-01

    Prenatal screening and diagnosis are integral to antenatal care worldwide. Prospective parents are offered screening for common fetal chromosomal and structural congenital malformations. In most developed countries, prenatal screening is routinely offered in a package that includes ultrasound scan of the fetus and the assay in maternal blood of biochemical markers of aneuploidy. Mistakes can arise at any point of the care pathway for fetal screening and diagnosis, and may involve individual or corporate systemic or latent errors. Special clinical circumstances, such as maternal size, fetal position, and multiple pregnancy, contribute to the complexities of prenatal diagnosis and to the chance of error. Clinical interventions may lead to adverse outcomes not caused by operator error. In this review I discuss the scope of the errors in prenatal diagnosis, and highlight strategies for their prevention and diagnosis, as well as identify areas for further research and study to enhance patient safety. PMID:23725900

  13. Error mode prediction.

    PubMed

    Hollnagel, E; Kaarstad, M; Lee, H C

    1999-11-01

    The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035

  14. Pronominal Case-Errors

    ERIC Educational Resources Information Center

    Kaper, Willem

    1976-01-01

    Contradicts a previous assertion by C. Tanz that children commit substitution errors usually using objective pronoun forms for nominative ones. Examples from Dutch and German provide evidence that substitutions are made in both directions. (CHK)

  15. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  16. Software errors and complexity: An empirical investigation

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Perricone, Berry T.

    1983-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  17. Error-Compensated Telescope

    NASA Technical Reports Server (NTRS)

    Meinel, Aden B.; Meinel, Marjorie P.; Stacy, John E.

    1989-01-01

    Proposed reflecting telescope includes large, low-precision primary mirror stage and small, precise correcting mirror. Correcting mirror machined under computer control to compensate for error in primary mirror. Correcting mirror machined by diamond cutting tool. Computer analyzes interferometric measurements of primary mirror to determine shape of surface of correcting mirror needed to compensate for errors in wave front reflected from primary mirror and commands position and movement of cutting tool accordingly.

  18. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
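The abstract does not spell out its combination algorithm, but the standard "mathematically optimal" way to fuse independent position fixes with known uncertainties is inverse-variance weighting, which yields the minimum-variance unbiased combination for independent Gaussian estimates. The sketch below (one coordinate, invented numbers) illustrates that idea rather than the paper's specific algorithm.

```python
def combine_fixes(estimates):
    """estimates: list of (position, variance) pairs for one coordinate.
    Returns the inverse-variance-weighted position and its fused variance."""
    weights = [1.0 / var for _, var in estimates]
    fused_var = 1.0 / sum(weights)                      # always <= smallest input variance
    fused_pos = fused_var * sum(w * p for w, (p, _) in zip(weights, estimates))
    return fused_pos, fused_var

# Two fixes of the same emitter coordinate: (position in km, variance in km^2).
fixes = [(100.0, 4.0), (103.0, 1.0)]
pos, var = combine_fixes(fixes)
print(pos, var)   # 102.4 0.8 -- fused result is closer to the better fix,
                  # with lower variance than either input alone
```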

  19. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  20. Gear Transmission Error Measurement System Made Operational

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2002-01-01

    A system directly measuring the transmission error between the meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 μm (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with the gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.

  1. [Occurrence and prevention of errors in intensive care units].

    PubMed

    Valentin, A

    2012-05-01

    Recognition and analysis of error constitutes an essential tool for quality improvement in intensive care units (ICUs). The potential for error is considerable in ICUs. Although errors will never be completely preventable, it is necessary to reduce the frequency and consequences of errors. A system approach needs to consider human limitations and to design working conditions, workplace, and processes in ICUs in a way that promotes reduction of error. The development of a preventive safety culture must be seen as an essential task for ICUs. PMID:22476763

  2. SAR image quality effects of damped phase and amplitude errors

    NASA Astrophysics Data System (ADS)

    Zelenka, Jerry S.; Falk, Thomas

    The effects of damped multiplicative, amplitude, or phase errors on the image quality of synthetic-aperture radar systems are considered. These types of errors can result from aircraft maneuvers or the mechanical steering of an antenna. The proper treatment of damped multiplicative errors can lead to related design specifications and possibly an enhanced collection capability. Only small, high-frequency errors are considered. Expressions for the average intensity and energy associated with a damped multiplicative error are presented and used to derive graphic results. A typical example is used to show how to apply the results of this effort.

  3. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  4. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  5. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  6. Frequency stabilized laser

    NASA Astrophysics Data System (ADS)

    Mongeon, R. J.; Henschke, R. W.

    1984-08-01

    The document describes a frequency control system for a laser for compensating for thermally-induced laser resonator length changes. The frequency control loop comprises a frequency reference for producing an error signal and electrical means to move a length-controlling transducer in response thereto. The transducer has one of the laser mirrors attached thereto. The effective travel of the transducer is multiplied severalfold by circuitry for sensing when the transducer is running out of extension and in response thereto rapidly moving the transducer and its attached mirror toward its midrange position.

  7. Quantification of model error via an interval model with nonparametric error bound

    NASA Technical Reports Server (NTRS)

    Lew, Jiann-Shiun; Keel, Lee H.; Juang, Jer-Nan

    1993-01-01

    The quantification of model uncertainty is becoming increasingly important as robust control is an important tool for control system design and analysis. This paper presents an algorithm that effectively characterizes the model uncertainty in terms of parametric and nonparametric uncertainties. The algorithm utilizes the frequency domain model error, which is estimated from the spectra of output error and input data. The parametric uncertainty is represented as an interval transfer function, while the nonparametric uncertainty is bounded by a designed error bound transfer function. Both discrete and continuous systems are discussed in this paper. The algorithm is applied to the Mini-Mast example, and a detailed analysis is given.

  8. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
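    In the standard optimal-estimation notation (averaging kernel matrix A and a-priori covariance S_a; this notation is assumed here, not taken from the record), the smoothing error discussed above is conventionally written as:

    ```latex
    % Deviation of the retrieval \hat{x} from the state x due to
    % a-priori smoothing (noise terms omitted):
    \hat{x} - x = (\mathbf{A} - \mathbf{I})(x - x_a)
    % Its covariance, the "smoothing error" term of the error budget:
    \mathbf{S}_s = (\mathbf{A} - \mathbf{I})\,\mathbf{S}_a\,(\mathbf{A} - \mathbf{I})^{\mathrm{T}}
    ```

    The paper's argument turns on the fact that S_s is only meaningful if S_a describes true atmospheric variability on the grid in question, which is exactly what the author shows is not well defined for arbitrarily fine sampling.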

  9. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  10. Measurement error revisited

    NASA Astrophysics Data System (ADS)

    Henderson, Robert K.

    1999-12-01

    It is widely accepted in the electronics industry that measurement gauge error variation should be no larger than 10% of the related specification window. In a previous paper, 'What Amount of Measurement Error is Too Much?', the author used a framework from the process industries to evaluate the impact of measurement error variation in terms of both customer and supplier risk (i.e., Non-conformance and Yield Loss). Application of this framework in its simplest form suggested that in many circumstances the 10% criterion might be more stringent than is reasonably necessary. This paper reviews the framework and results of the earlier work, then examines some of the possible extensions to this framework suggested in that paper, including variance component models and sampling plans applicable in the photomask and semiconductor businesses. The potential impact of imperfect process control practices will be examined as well.
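    The 10% criterion discussed above is commonly expressed as a precision-to-tolerance (P/T) ratio: the fraction of the specification window consumed by gauge variation. A minimal sketch (the function name and numbers are illustrative, not from the paper):

    ```python
    def pt_ratio(sigma_gauge: float, lsl: float, usl: float) -> float:
        """Precision-to-tolerance ratio: fraction of the spec window
        (usl - lsl) consumed by +/-3 sigma of gauge measurement error."""
        return 6.0 * sigma_gauge / (usl - lsl)

    # A gauge with sigma = 0.5 against a spec window of 40 units:
    ratio = pt_ratio(0.5, 10.0, 50.0)   # 6 * 0.5 / 40 = 0.075
    print(f"P/T = {ratio:.3f} -> {'OK' if ratio <= 0.10 else 'too much gauge error'}")
    ```

    The paper's point is precisely that a fixed 10% threshold on this ratio can be more stringent than the actual customer/supplier risk justifies.
    
    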

  11. Investigation of Measurement Errors in Doppler Global Velocimetry

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Lee, Joseph W.

    1999-01-01

    While the initial development phase of Doppler Global Velocimetry (DGV) has been successfully completed, there remains a critical next phase to be conducted, namely the determination of an error budget to provide quantitative bounds for measurements obtained by this technology. This paper describes a laboratory investigation that consisted of a detailed interrogation of potential error sources to determine their contribution to the overall DGV error budget. A few sources of error were obvious; e.g., iodine vapor absorption lines, optical systems, and camera characteristics. However, additional non-obvious sources were also discovered; e.g., laser frequency and single-frequency stability, media scattering characteristics, and interference fringes. This paper describes each identified error source, its effect on the overall error budget, and where possible, corrective procedures to reduce or eliminate its effect.

  12. Surprise beyond prediction error

    PubMed Central

    Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst

    2014-01-01

    Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400

  13. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  14. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
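    The frequency-table idea described above can be sketched in a few lines of Python (names are hypothetical; the key point is that for byte data the table has at most 256 x 256 entries, so re-evaluating the error after a transfer-function change is cheap regardless of voxel count):

    ```python
    from collections import Counter

    def total_error(original, approx, err_fn):
        """Sum err_fn over corresponding voxels by tabulating unique
        (original, approx) byte pairs with their occurrence counts,
        instead of evaluating err_fn once per voxel."""
        pair_counts = Counter(zip(original, approx))   # built once per approximation
        return sum(err_fn(o, a) * n for (o, a), n in pair_counts.items())

    # 5000 voxels, but only 5 unique (original, approx) pairs:
    orig = [0, 0, 255, 255, 128] * 1000
    appr = [0, 1, 255, 254, 128] * 1000
    assert total_error(orig, appr, lambda o, a: abs(o - a)) == 2000
    ```

    Changing the transfer function only changes `err_fn`, so the expensive `Counter` pass over the voxels is reused.
    
    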

  15. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, roughly 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-

  16. Speech Errors in Progressive Non-Fluent Aphasia

    ERIC Educational Resources Information Center

    Ash, Sharon; McMillan, Corey; Gunawardena, Delani; Avants, Brian; Morgan, Brianna; Khan, Alea; Moore, Peachie; Gee, James; Grossman, Murray

    2010-01-01

    The nature and frequency of speech production errors in neurodegenerative disease have not previously been precisely quantified. In the present study, 16 patients with a progressive form of non-fluent aphasia (PNFA) were asked to tell a story from a wordless children's picture book. Errors in production were classified as either phonemic,…

  17. Parental Reports of Children's Scale Errors in Everyday Life

    ERIC Educational Resources Information Center

    Rosengren, Karl S.; Gutierrez, Isabel T.; Anderson, Kathy N.; Schein, Stevie S.

    2009-01-01

    Scale errors refer to behaviors where young children attempt to perform an action on an object that is too small to effectively accommodate the behavior. The goal of this study was to examine the frequency and characteristics of scale errors in everyday life. To do so, the researchers collected parental reports of children's (age range = 13-21…

  18. Optical linear algebra processors - Noise and error-source modeling

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  19. Help prevent hospital errors

    MedlinePlus


  20. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  1. Elimination of error factors, affecting EM and seismic inversions

    NASA Astrophysics Data System (ADS)

    Magomedov, M.; Zuev, M. A.; Korneev, V. A.; Goloshubin, G.; Zuev, J.; Brovman, Y.

    2013-12-01

    EM or seismic data inversions are affected by many factors, which may conceal the responses from target objects. We address here the contributions from the following effects: 1) Pre-survey spectral sensitivity factor. Preliminary information about a target layer can be used for a pre-survey estimation of the required frequency domain and signal level. A universal approach allows making such estimations in real time, helping the survey crew to optimize an acquisition process. 2) Preliminary identification of velocities and their dispersions for all the seismic waves arising in a stratified medium became a fast working tool, based on the exact analytical solution. 3) Vertical gradients effect. For most layers the log data scatter, requiring an averaging pattern. A linear gradient within each representative layer is a reasonable compromise between required inversion accuracy and forward modeling complexity. 4) An effect from the seismic source's radial component becomes comparable with the vertical part for explosive sources. If this effect is not taken into account, a serious modeling error takes place. This problem has an algorithmic solution. 5) Seismic modeling is often based on different representations for a source, formulated either for a force or for a potential. The wave amplitudes depend on the formulation, making an inversion result sensitive to it. 6) Asymmetrical seismic waves (modified Rayleigh) in symmetrical geometry around a liquid fracture come from the S-wave and merge with the modified Krauklis wave at high frequencies. A detailed analysis of this feature allows a spectral range optimization for the proper wave's extraction. 7) An ultrasonic experiment was conducted to show the appearance of different waves for a super-thin water-saturated fracture between two Plexiglas plates, confirmed by comparison with theoretical computations. 8) A 'sandwich effect' was detected by comparison with the averaged layer's effect. This opens an opportunity of the shale gas direct

  2. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ⇄ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  3. The characteristics of key analysis errors

    NASA Astrophysics Data System (ADS)

    Caron, Jean-Francois

    This thesis investigates the characteristics of the corrections to the initial state of the atmosphere. The technique employed is the key analysis error algorithm, recently developed to estimate the initial state errors responsible for poor short-range to medium-range numerical weather prediction (NWP) forecasts. The main goal of this work is to determine to which extent the initial corrections obtained with this method can be associated with analysis errors. A secondary goal is to understand their dynamics in improving the forecast. In the first part of the thesis, we examine the realism of the initial corrections obtained from the key analysis error algorithm in terms of dynamical balance and closeness to the observations. The result showed that the initial corrections are strongly out of balance and systematically increase the departure between the control analysis and the observations, suggesting that the key analysis error algorithm produced initial corrections that represent more than analysis errors. Significant artificial correction to the initial state seems to be present. The second part of this work examines a few approaches to isolate the balanced component of the initial corrections from the key analysis error method. The best results were obtained with the nonlinear balance potential vorticity (PV) inversion technique. The removal of the imbalance part of the initial corrections makes the corrected analysis slightly closer to the observations, but it remains systematically further away as compared to the control analysis. Thus the balanced part of the key analysis errors cannot justifiably be associated with analysis errors. In light of the results presented, some recommendations to improve the key analysis error algorithm were proposed. In the third and last part of the thesis, a diagnosis of the evolution of the initial corrections from the key analysis error method is presented using a PV approach. The initial corrections tend to grow rapidly in time

  4. Scientific Impacts of Wind Direction Errors

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert

    2004-01-01

    An assessment of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak wind (less than 7 m/s) conditions was made. These weak winds cover most of the tropical, sub-tropical, and coastal oceans. Introduction of these errors in the semi-daily winds causes, on average, 5% changes of the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms to redistribute heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's function in modifying Earth's climate. Simulation by an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to the conditions during an El Niño episode. Similar wind directional errors cause significant change in sea-surface temperature and sea-level patterns in coastal oceans in a coastal model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere. When directional information below 7 m/s was withheld, approximately 40% of the improvement was lost.
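    For reference, the wind-driven transports mentioned above are computed directly from the wind stress τ via the standard relations (textbook formulas, not taken from the record):

    ```latex
    % Ekman mass transport (per unit width); f is the Coriolis parameter:
    \mathbf{M}_E = \frac{1}{f}\,\boldsymbol{\tau} \times \hat{\mathbf{z}}
    % Sverdrup relation for the depth-integrated meridional mass transport M_y,
    % with \beta = df/dy:
    \beta\, M_y = \hat{\mathbf{z}} \cdot \left( \nabla \times \boldsymbol{\tau} \right)
    ```

    Both expressions depend on the direction of τ (directly in the cross product, and through the curl), which is why directional errors in retrieved winds propagate into the computed transports.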

  5. [The error, source of learning].

    PubMed

    Joyeux, Stéphanie; Bohic, Valérie

    2016-05-01

    The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. PMID:27155272

  6. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  7. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  8. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
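    A minimal interval type illustrating the approach described above (the paper works in MATLAB with INTLAB; this Python sketch is only illustrative, and a rigorous implementation would additionally use outward rounding at each operation):

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            # The product of two intervals is bounded by the four endpoint products.
            p = (self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi)
            return Interval(min(p), max(p))

        def width(self):
            return self.hi - self.lo

    # A measurement of 2.0 +/- 0.1 multiplied by one of 3.0 +/- 0.2:
    x = Interval(1.9, 2.1)
    y = Interval(2.8, 3.2)
    z = x * y
    # z encloses every possible product; its width bounds the propagated error.
    assert z == Interval(1.9 * 2.8, 2.1 * 3.2)
    ```

    The appeal noted in the abstract is that a complicated formula evaluated on intervals yields an error bound automatically, with no manual partial-derivative bookkeeping.
    
    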

  9. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  10. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  11. Manson's triple error.

    PubMed

    Delaporte, F.

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  12. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
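    For context, the key-permuted low-order-bit replacement that the patent improves upon can be sketched as follows. This is NOT the patented modular-embedding algorithm, only the baseline bit-replacement scheme it refers to; all names here are illustrative:

    ```python
    import random

    def embed_lsb(host, bits, key):
        """Replace the least-significant bit of key-selected host samples
        with auxiliary bits (plain bit replacement, the baseline whose
        introduced error the modular method reduces)."""
        order = list(range(len(host)))
        random.Random(key).shuffle(order)      # key-dependent permutation
        out = list(host)
        for bit, idx in zip(bits, order):
            out[idx] = (out[idx] & ~1) | bit
        return out

    def extract_lsb(stego, nbits, key):
        """Recover the auxiliary bits; only the key holder knows the order."""
        order = list(range(len(stego)))
        random.Random(key).shuffle(order)
        return [stego[idx] & 1 for idx in order[:nbits]]

    host = [200, 37, 91, 154, 66, 13, 250, 8]   # e.g. 8-bit image samples
    secret = [1, 0, 1, 1]
    stego = embed_lsb(host, secret, key=42)
    assert extract_lsb(stego, 4, key=42) == secret
    ```

    Each embedded bit perturbs a sample by at most 1, which is why such schemes target host data whose low-order bits are already noise.
    
    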

  13. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultrareliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  14. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  15. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with a particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  16. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    This report presents results of research aimed at understanding the causes of human error in complex systems such as aircraft, nuclear power plants, and chemical processing plants. The research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  17. Pulse Shaping Entangling Gates and Error Suppression

    NASA Astrophysics Data System (ADS)

    Hucul, D.; Hayes, D.; Clark, S. M.; Debnath, S.; Quraishi, Q.; Monroe, C.

    2011-05-01

    Control of spin dependent forces is important for generating entanglement and realizing quantum simulations in trapped ion systems. Here we propose and implement a composite pulse sequence based on the Molmer-Sorensen gate to decrease gate infidelity due to frequency and timing errors. The composite pulse sequence uses an optical frequency comb to drive Raman transitions simultaneously detuned from trapped ion transverse motional red and blue sideband frequencies. The spin dependent force displaces the ions in phase space, and the resulting spin-dependent geometric phase depends on the detuning. Voltage noise on the rf electrodes changes the detuning between the trapped ions' motional frequency and the laser, decreasing the fidelity of the gate. The composite pulse sequence consists of successive pulse trains from counter-propagating frequency combs with phase control of the microwave beatnote of the lasers to passively suppress detuning errors. We present the theory and experimental data with one and two ions where a gate is performed with a composite pulse sequence. This work was supported by the U.S. ARO, IARPA, the DARPA OLE program, the MURI program; the NSF PIF Program; the NSF Physics Frontier Center at JQI; the European Commission AQUTE program; and the IC postdoc program administered by the NGA.

  18. Frequency curves

    USGS Publications Warehouse

    Riggs, H.C.

    1968-01-01

    This manual describes graphical and mathematical procedures for preparing frequency curves from samples of hydrologic data. It also discusses the theory of frequency curves, compares advantages of graphical and mathematical fitting, suggests methods of describing graphically defined frequency curves analytically, and emphasizes the correct interpretations of a frequency curve.
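    As a minimal illustration of the graphical fitting the manual describes, the standard Weibull plotting position assigns each ranked observation an exceedance probability (the data values below are hypothetical):

```python
def weibull_plotting_positions(annual_peaks):
    """Exceedance probability and return period for each ranked observation.

    Uses the Weibull plotting position P = m / (n + 1), a standard choice
    when preparing a graphical frequency curve from hydrologic data.
    """
    ranked = sorted(annual_peaks, reverse=True)      # rank 1 = largest flood
    n = len(ranked)
    table = []
    for m, q in enumerate(ranked, start=1):
        p = m / (n + 1)                  # probability of exceedance per year
        table.append((q, p, 1.0 / p))    # (discharge, P, return period in yr)
    return table

peaks = [310, 480, 150, 890, 560]        # hypothetical annual peak flows
curve = weibull_plotting_positions(peaks)
assert curve[0][0] == 890                # largest flood has rank 1
assert abs(curve[0][2] - 6.0) < 1e-9     # its return period is (n+1)/1 = 6 yr
```

    Plotting discharge against these probabilities on probability paper gives the graphically defined frequency curve the manual discusses.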

  19. Cognitive control of conscious error awareness: error awareness and error positivity (Pe) amplitude in moderate-to-severe traumatic brain injury (TBI)

    PubMed Central

    Logan, Dustin M.; Hill, Kyle R.; Larson, Michael J.

    2015-01-01

    Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest that attention difficulties and limited improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212

  20. Frequency division multiplex technique

    NASA Technical Reports Server (NTRS)

    Brey, H. (Inventor)

    1973-01-01

    A system for monitoring a plurality of condition responsive devices is described. It consists of a master control station and a remote station. The master control station is capable of transmitting command signals, which include a parity signal, to a remote station, which transmits the signals back to the command station so that they can be compared with the original signals in order to determine if there are any transmission errors. The system utilizes frequency sources which are 1.21 multiples of each other so that no linear combination of any harmonics will interfere with another frequency.
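    The 1.21-ratio claim can be spot-checked numerically for low harmonic orders (the channel count and harmonic order below are arbitrary choices, not from the patent):

```python
def min_harmonic_separation(n_channels=8, max_harmonic=10, f0=1.0):
    """Smallest gap between any two harmonics of 1.21-spaced carriers.

    The claim: with carrier frequencies that are 1.21 multiples of each
    other, no low-order harmonic of one carrier coincides with a harmonic
    of another. (Because 1.21 = 121/100, collisions do appear at very high
    orders, e.g. the 121st harmonic of f0 equals the 100th of 1.21 * f0.)
    """
    carriers = [f0 * 1.21 ** i for i in range(n_channels)]
    harmonics = sorted(n * f for f in carriers
                       for n in range(1, max_harmonic + 1))
    return min(b - a for a, b in zip(harmonics, harmonics[1:]))

sep = min_harmonic_separation()
assert sep > 0.01   # all harmonics up to 10th order stay well separated
```

    So for the low-order harmonics that matter in practice, the channels remain cleanly separated.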

  1. On the Routh approximation technique and least squares errors

    NASA Technical Reports Server (NTRS)

    Aburdene, M. F.; Singh, R.-N. P.

    1979-01-01

    A new method for calculating the coefficients of the numerator polynomial of the direct Routh approximation method (DRAM) using the least square error criterion is formulated. The necessary conditions have been obtained in terms of algebraic equations. The method is useful for low frequency as well as high frequency reduced-order models.

  2. Evaluating a medical error taxonomy.

    PubMed Central

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting. PMID:12463789

  3. Position error propagation in the simplex strapdown navigation system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system were documented. Improving the long term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed errors containing the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator which provided plots of error time functions in response to various source error values.

  4. Error analysis and data reduction for interferometric surface measurements

    NASA Astrophysics Data System (ADS)

    Zhou, Ping

    High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. The interferometer phase modulation transfer function (MTF) is another intrinsic error source. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram determines the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in the holograms and hence in the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.
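    A Wiener filter of the kind used to recover mid-spatial-frequency content attenuated by an instrument MTF can be sketched in one dimension (the MTF samples and SNR below are made up for illustration):

```python
def wiener_filter(H, snr):
    """One-dimensional Wiener deconvolution filter, frequency by frequency.

    H: sampled transfer function (e.g. the interferometer phase MTF);
    snr: signal-to-noise power ratio. W = conj(H) / (|H|^2 + 1/snr)
    boosts attenuated mid spatial frequencies without dividing by zero
    where the instrument response (and hence the signal) vanishes.
    """
    return [h.conjugate() / (abs(h) ** 2 + 1.0 / snr) for h in H]

# A made-up low-pass instrument MTF sampled at four spatial frequencies:
H = [1.0 + 0j, 0.8 + 0j, 0.4 + 0j, 0.05 + 0j]
W = wiener_filter(H, snr=100.0)
restored = [w * h for w, h in zip(W, H)]   # net transfer after correction
assert abs(restored[1] - 1) < 0.02         # mid frequencies nearly restored
assert abs(W[-1]) < 5.0                    # near-zero response is not blown up
```

    The regularizing 1/snr term is what distinguishes this from naive inverse filtering, which would divide by the nearly-zero high-frequency response and amplify noise.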

  5. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  6. Time Interval Errors of a Flicker-noise Generator

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1984-01-01

    Time interval error (TIE) is the error of a clock at time t after it is synchronized and syntonized at time zero. Previous simulations of Flicker FM noise yielded a mean-square TIE proportional to t^2. It is shown that the order of growth is actually t^2 log t. The earlier t^2 result is explained and a modified version of the Barnes-Jarvis simulation algorithm is given.

  7. Influence of modulation frequency in rubidium cell frequency standards

    NASA Technical Reports Server (NTRS)

    Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.

    1983-01-01

    The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.
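    For square-wave frequency modulation, the shape of the error signal near line center can be sketched with a simple Lorentzian response (a simplified model that ignores the optical-pumping and saturation effects the paper analyzes; the linewidth and modulation depth are arbitrary):

```python
def error_signal(delta, d, fwhm=1.0):
    """Discriminator signal for square-wave frequency modulation.

    The interrogation frequency alternates between delta + d and delta - d
    about line center; synchronous detection differences the two Lorentzian
    responses. delta: offset of the mean interrogation frequency from the
    atomic line center; d: modulation depth; both in units of the linewidth.
    """
    def lorentzian(x):
        return 1.0 / (1.0 + (2.0 * x / fwhm) ** 2)
    return lorentzian(delta + d) - lorentzian(delta - d)

# The signal crosses zero at line center, and its slope there sets the
# sensitivity of the frequency lock:
eps = 1e-6
slope = (error_signal(eps, d=0.35) - error_signal(-eps, d=0.35)) / (2 * eps)
assert error_signal(0.0, d=0.35) == 0.0   # zero crossing at line center
assert slope < 0                          # restoring sign for the servo loop
```

    The paper's result concerns how this slope degrades as the modulation frequency grows, and how saturation and modulation amplitude can be adjusted to preserve it.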

  8. Reducing medication errors in critical care: a multimodal approach

    PubMed Central

    Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad

    2014-01-01

    The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478

  9. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
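    The propagation-of-error bookkeeping described above, individual variances plus interaction (covariance) terms, can be sketched as follows (the numbers are hypothetical, not Skylab data):

```python
def combined_variance(variances, covariances=()):
    """Propagated variance of a sum or difference of measured terms.

    var(B) = sum_i var_i + 2 * sum_{i<j} s_ij * cov_ij, where s_ij is +1
    when terms i and j enter the balance with the same sign and -1
    otherwise. The abstract reports that such interaction terms
    contributed less than 10% of the total error.
    """
    total = sum(variances)
    total += 2 * sum(sign * cov for sign, cov in covariances)
    return total

# Hypothetical daily water-balance terms (variances in g^2):
var_terms = [400.0, 250.0, 90.0, 60.0]   # body mass, intake, urine, evap.
cov_terms = [(+1, 15.0), (-1, 10.0)]     # two interaction (covariance) terms
total_var = combined_variance(var_terms, cov_terms)
assert total_var == 810.0
assert 2 * (15.0 - 10.0) / total_var < 0.10   # covariances < 10% of total
```

    In this toy example the body-mass variance dominates, mirroring the study's finding that body mass changes account for nearly all of the water balance error.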

  10. Standard Errors for Matrix Correlations.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  11. A binary spelling interface with random errors.

    PubMed

    Perelmouter, J; Birbaumer, N

    2000-06-01

    An algorithm for the design of a spelling interface based on a modification of Huffman's algorithm is presented. The algorithm builds a full binary tree that maximizes the average probability of reaching the leaf where a required character is located when the choice at each node may be made with errors. A means to correct errors (a delete-function) and an optimization method for building this delete-function into the binary tree are also discussed. Such a spelling interface could be successfully applied to any menu-orientated alternative communication system when a user (typically, a patient with a devastating neuromuscular handicap) is not able to express an intended single binary response with absolute reliability, either through motor responses or by use of brain-computer interfaces. PMID:10896195
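    The unmodified Huffman construction that the interface builds on can be sketched as follows; the paper's modification for unreliable binary responses and its built-in delete-function are not reproduced here:

```python
import heapq

def huffman_codes(char_probs):
    """Binary choice tree over characters, built by Huffman's algorithm.

    Frequent characters end up near the root, so the user reaches them in
    few yes/no selections. Each code string is the sequence of binary
    choices leading from the root to that character's leaf.
    """
    heap = [(p, i, {c: ""}) for i, (c, p) in enumerate(char_probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p0, _, left = heapq.heappop(heap)
        p1, _, right = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in left.items()}
        merged.update({c: "1" + code for c, code in right.items()})
        heapq.heappush(heap, (p0 + p1, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes({"e": 0.5, "t": 0.25, "a": 0.15, "q": 0.10})
assert len(codes["e"]) == 1   # most likely character: a single binary choice
assert len(codes["q"]) == 3   # rarest character sits at the deepest leaf
```

    The counter in each heap entry breaks probability ties so that the dictionaries are never compared directly.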

  12. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  13. Grammatical Errors and Communication Breakdown.

    ERIC Educational Resources Information Center

    Tomiyama, Machiko

    This study investigated the relationship between grammatical errors and communication breakdown by examining native speakers' ability to correct grammatical errors. The assumption was that communication breakdown exists to a certain degree if a native speaker cannot correct the error or if the correction distorts the information intended to be…

  14. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment that delivers radiation for therapeutic purposes should be checked often for the possibility of administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administrators should take proper care of this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. A series of cases of radiation overdose has recently been reported, and the doctors who were responsible received heavy punishments. It is much better to prevent an error or a disease than to treat it. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures that use radiation. Taxonomy may also help. PMID:24251304

  15. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  16. Medical device error.

    PubMed

    Goodman, Gerald R

    2002-12-01

    This article discusses principal concepts for the analysis, classification, and reporting of problems involving medical device technology. We define a medical device in regulatory terminology and define and discuss concepts and terminology used to distinguish the causes and sources of medical device problems. Database classification systems for medical device failure tracking are presented, as are sources of information on medical device failures. The importance of near-accident reporting is discussed to alert users that reported medical device errors are typically limited to those that have caused an injury or death. This can represent only a fraction of the true number of device problems. This article concludes with a summary of the most frequently reported medical device failures by technology type, clinical application, and clinical setting. PMID:12400632

  17. Financial errors in dementia: Testing a neuroeconomic conceptual framework

    PubMed Central

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.

    2013-01-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p < 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p < 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884

  18. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors. PMID:26592783

  19. Numerical error in groundwater flow and solute transport simulation

    NASA Astrophysics Data System (ADS)

    Woods, Juliette A.; Teubner, Michael D.; Simmons, Craig T.; Narayan, Kumar A.

    2003-06-01

    Models of groundwater flow and solute transport may be affected by numerical error, leading to quantitative and qualitative changes in behavior. In this paper we compare and combine three methods of assessing the extent of numerical error: grid refinement, mathematical analysis, and benchmark test problems. In particular, we assess the popular solute transport code SUTRA [Voss, 1984] as being a typical finite element code. Our numerical analysis suggests that SUTRA incorporates a numerical dispersion error and that its mass-lumped numerical scheme increases the numerical error. This is confirmed using a Gaussian test problem. A modified SUTRA code, in which the numerical dispersion is calculated and subtracted, produces better results. The much more challenging Elder problem [Elder, 1967; Voss and Souza, 1987] is then considered. Calculation of its numerical dispersion coefficients and numerical stability show that the Elder problem is prone to error. We confirm that Elder problem results are extremely sensitive to the simulation method used.
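    The idea of computing and subtracting a scheme's numerical dispersion is easiest to see in the classic finite-difference case (first-order upwind advection), rather than SUTRA's finite-element scheme:

```python
def upwind_numerical_dispersion(v, dx, dt):
    """Leading-order numerical dispersion of first-order upwind advection.

    A Taylor expansion shows the scheme for c_t + v c_x = 0 actually solves
    c_t + v c_x = D_num c_xx with D_num = (v * dx / 2) * (1 - v * dt / dx).
    The paper's fix is analogous in spirit: calculate the scheme's
    artificial dispersion and subtract it from the physical coefficient.
    """
    courant = v * dt / dx
    return v * dx / 2.0 * (1.0 - courant)

# Example: 10 m grid spacing, 1 day time step, 1 m/day velocity
d_num = upwind_numerical_dispersion(v=1.0, dx=10.0, dt=1.0)
assert d_num == 4.5                                          # m^2/day of spurious mixing
assert upwind_numerical_dispersion(1.0, 10.0, 10.0) == 0.0   # Courant = 1: exact
```

    A spurious 4.5 m^2/day of mixing can easily rival the physical dispersion coefficient, which is why density-coupled problems like Elder's are so sensitive to it.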

  20. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. PMID:25649961

  1. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
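    The modified-least-squares estimator itself is not given in the abstract, but Deming regression is the classic estimator built on the same variance-ratio idea and shows how the ratio enters:

```python
def deming_slope(x, y, variance_ratio):
    """Slope estimate when both x and y carry measurement error.

    variance_ratio = (error variance in y) / (error variance in x). This is
    Deming regression, shown here as the standard example of folding a
    known variance ratio into a least-squares fit; it is not the paper's
    modified least squares method itself.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    lam = variance_ratio
    return (syy - lam * sxx
            + ((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2) ** 0.5) / (2 * sxy)

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 2.1, 3.9, 6.1, 7.8]     # roughly y = 2x, with noise on both axes
slope = deming_slope(x, y, variance_ratio=1.0)
assert 1.8 < slope < 2.2
```

    As variance_ratio grows large (all error attributed to y), the estimate approaches the ordinary least squares slope.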

  2. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  3. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
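    Step (3), fitting an error model to length measurements, can be sketched with a deliberately minimal per-axis scale-error model (the data values are invented):

```python
def fit_scale_errors(measurements):
    """Least-squares fit of a per-axis linear scale error from length data.

    measurements: (axis, nominal_length, measured_length) triples, e.g.
    from a laser interferometer or ball bar. Returns {axis: scale_error}
    with measured = (1 + scale_error) * nominal. A real volumetric map
    adds straightness, squareness and angular terms to this model.
    """
    sums = {}
    for axis, nominal, measured in measurements:
        num, den = sums.get(axis, (0.0, 0.0))
        # minimizing sum (measured - (1+e)*nominal)^2 over e gives
        # e = sum(nominal * (measured - nominal)) / sum(nominal^2)
        sums[axis] = (num + nominal * (measured - nominal), den + nominal ** 2)
    return {axis: num / den for axis, (num, den) in sums.items()}

data = [("x", 100.0, 100.02), ("x", 200.0, 200.04), ("y", 100.0, 99.99)]
errors = fit_scale_errors(data)
assert abs(errors["x"] - 2e-4) < 1e-9    # x axis reads 200 ppm long
assert abs(errors["y"] + 1e-4) < 1e-9    # y axis reads 100 ppm short
```

    Once fitted, the model is inverted at run time to compensate the commanded positions.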

  4. Estimating the Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    1995-01-01

    The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
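    The modified Allan variance that the edf theory applies to can be computed from phase data as follows (the standard definition, not the paper's estimator-confidence analysis):

```python
def mod_allan_variance(x, m, tau0=1.0):
    """Modified Allan variance from phase data x (seconds), averaging factor m.

    Mod sigma_y^2(m * tau0) averages the second difference of the phase over
    m adjacent starting points before squaring; this extra averaging is what
    distinguishes it from the ordinary Allan variance and lets it separate
    white phase noise from flicker phase noise.
    """
    n = len(x)
    tau = m * tau0
    terms = []
    for j in range(n - 3 * m + 1):
        inner = sum(x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(j, j + m))
        terms.append(inner ** 2)
    return sum(terms) / (2.0 * tau ** 2 * m ** 2 * len(terms))

# Phase of a perfect clock with a constant frequency offset: x(t) = y0 * t.
x = [0.5 * t for t in range(32)]
assert mod_allan_variance(x, m=2) == 0.0   # a pure frequency offset gives zero
```

    The second difference removes both a fixed time offset and a fixed frequency offset, which is why only genuine noise contributes.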

  5. Identifying and Reducing Systematic Errors in Chromosome Conformation Capture Data

    PubMed Central

    Hahn, Seungsoo; Kim, Dongsup

    2015-01-01

    Chromosome conformation capture (3C)-based techniques have recently been used to uncover the mystic genomic architecture in the nucleus. These techniques yield indirect data on the distances between genomic loci in the form of contact frequencies that must be normalized to remove various errors. This normalization process determines the quality of data analysis. In this study, we describe two systematic errors that result from the heterogeneous local density of restriction sites and different local chromatin states, methods to identify and remove those artifacts, and three previously described sources of systematic errors in 3C-based data: fragment length, mappability, and local DNA composition. To explain the effect of systematic errors on the results, we used three different published data sets to show the dependence of the results on restriction enzymes and experimental methods. Comparison of the results from different restriction enzymes shows a higher correlation after removing systematic errors. In contrast, using different methods with the same restriction enzymes shows a lower correlation after removing systematic errors. Notably, the improved correlation of the latter case caused by systematic errors indicates that a higher correlation between results does not ensure the validity of the normalization methods. Finally, we suggest a method to analyze random error and provide guidance for the maximum reproducibility of contact frequency maps. PMID:26717152
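    A widely used normalization of this kind, though not the specific method of this paper, is iterative (ICE-style) matrix balancing, which absorbs multiplicative per-locus biases such as fragment length and mappability:

```python
def iterative_correction(matrix, n_iter=200):
    """Matrix balancing for a symmetric contact-frequency map.

    Repeatedly rescales rows and columns until every locus has equal total
    contact count, absorbing multiplicative biases (fragment length,
    mappability, local composition) into per-locus factors. Sketch of the
    ICE-style approach; real pipelines also mask low-coverage bins.
    """
    m = [row[:] for row in matrix]
    n = len(m)
    for _ in range(n_iter):
        totals = [sum(row) for row in m]
        mean = sum(totals) / n
        scale = [t / mean if t > 0 else 1.0 for t in totals]
        for i in range(n):
            for j in range(n):
                m[i][j] /= scale[i] * scale[j]
    return m

raw = [[10.0, 4.0, 1.0],
       [4.0, 20.0, 2.0],
       [1.0, 2.0, 40.0]]                     # toy 3-bin contact map
balanced = iterative_correction(raw)
row_sums = [sum(row) for row in balanced]
assert max(row_sums) - min(row_sums) < 1e-6  # equal coverage after balancing
```

    Balancing of this sort assumes biases factorize per locus; the systematic errors described in the abstract are precisely deviations from that assumption.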

  7. Frequency Combs

    NASA Astrophysics Data System (ADS)

    Hänsch, Theodor W.; Picqué, Nathalie

    Much of modern research in the field of atomic, molecular, and optical science relies on lasers, which were invented some 50 years ago and perfected in five decades of intense research and development. Today, lasers and photonic technologies impact most fields of science and have become indispensable in our daily lives. Laser frequency combs were conceived a decade ago as tools for the precision spectroscopy of atomic hydrogen. Through the development of optical frequency comb techniques, a setup of size 1 × 1 m², suitable for precision measurements of any frequency and even commercially available, has replaced the elaborate frequency-chain schemes previously used for optical frequency measurements, which worked only for selected frequencies. A true revolution in optical frequency measurements has occurred, paving the way for the creation of all-optical clocks with a precision that might approach 10⁻¹⁸. A decade later, frequency combs are now common equipment in all frequency metrology-oriented laboratories. They are also becoming enabling tools for an increasing number of applications, from the calibration of astronomical spectrographs to molecular spectroscopy. This chapter first describes the principle of an optical frequency comb synthesizer. Some of the key technologies used to generate such a frequency comb are then presented. Finally, a non-exhaustive overview of the growing applications is given.
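
    The principle rests on the comb equation f_n = n·f_rep + f_ceo: any optical frequency can be measured as a beat note against the nearest comb line. A small sketch with illustrative (hypothetical) comb parameters:

```python
def nearest_comb_line(f_opt, f_rep, f_ceo):
    """Mode number n and beat frequency between an optical frequency and
    the nearest line of a comb f_n = n * f_rep + f_ceo (all in Hz)."""
    n = round((f_opt - f_ceo) / f_rep)
    beat = f_opt - (n * f_rep + f_ceo)
    return n, beat

# hypothetical 1 GHz comb with a 35 MHz offset measuring a ~474 THz laser line
n, beat = nearest_comb_line(473.612512e12, 1.0e9, 35.0e6)
print(n, beat)  # the beat always lies within half the repetition rate
```

    Measuring f_rep, f_ceo, and the beat with radio-frequency counters is what ties an optical frequency to a microwave reference in a single step.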

  8. Social aspects of clinical errors.

    PubMed

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors. PMID:19201405

  9. Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.

    ERIC Educational Resources Information Center

    Monagle, E. Brette

    The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…
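
    The noting-classifying-counting workflow described above is essentially a frequency table; a minimal sketch with hypothetical error categories:

```python
from collections import Counter

# hypothetical per-page error log: (page, category) pairs an editor might record
error_log = [
    (1, "comma splice"), (1, "passive voice"), (1, "comma splice"),
    (2, "subject-verb agreement"), (2, "comma splice"), (3, "passive voice"),
]

pattern = Counter(category for _page, category in error_log)
for category, count in pattern.most_common():
    print(f"{category}: {count}")
# comma splice: 3
# passive voice: 2
# subject-verb agreement: 1
```

    Sorting by frequency makes the writer's dominant error pattern obvious, which is the point of the editor's tally.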

  10. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1979-01-01

    The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.
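
    That conclusion can be checked numerically by modeling each channel as a first-order low-pass filter H(jω) = 1/(1 + jω/ω_c); the measured average power then picks up the inter-channel gain and phase differences. A sketch with illustrative cutoff frequencies:

```python
import cmath
import math

def avg_power_error(f, fc_v, fc_i, phi):
    """Relative error in the measured average power of a sinusoid at
    frequency f into a load with phase angle phi, when the voltage and
    current channels ahead of the multiplier are first-order low-pass
    filters with cutoff frequencies fc_v and fc_i."""
    h_v = 1 / (1 + 1j * f / fc_v)
    h_i = 1 / (1 + 1j * f / fc_i)
    true_power = math.cos(phi)  # in units of V*I/2
    measured = abs(h_v) * abs(h_i) * math.cos(phi + cmath.phase(h_v) - cmath.phase(h_i))
    return (measured - true_power) / true_power

# identical channels: a pure gain error, independent of the load angle phi;
# mismatched channels: an extra phi-dependent error from the phase difference
print(avg_power_error(1e3, 1e5, 1e5, 0.5))
print(avg_power_error(1e3, 1e5, 2e5, 0.5))
```

    With identical responses the phase terms cancel exactly, which is the paper's minimization condition; only a calibratable gain factor remains.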

  11. Examining IFOV error and demodulation strategies for infrared microgrid polarimeter imagery

    NASA Astrophysics Data System (ADS)

    Ratliff, Bradley M.; Tyo, J. Scott; LaCasse, Charles F.; Black, Wiley T.

    2009-08-01

    For the past several years we have been working on strategies to mitigate the effects of IFOV errors on LWIR microgrid polarimeters. In this paper we present a detailed, theoretical analysis of the source of IFOV error in the frequency domain, and show a frequency domain strategy to mitigate those effects.

  12. Sub-nanometer periodic nonlinearity error in absolute distance interferometers.

    PubMed

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting the accuracy of absolute distance measurement. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus the main cause of the periodic nonlinearity error, i.e., the frequency and/or polarization mixing and leakage of beams, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°. PMID:26026510

  13. Parametric registration of cross test error maps for optical surfaces

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Dai, Yifan; Nie, Xuqing; Li, Shengyi

    2015-07-01

    It is necessary to quantitatively compare two measurement results, typically in the form of error maps of the same surface figure, for the purpose of cross testing. The error maps are obtained by different methods or even different instruments, and misalignment exists between them, including tip-tilt, lateral shift, clocking and scaling. A fast registration algorithm is proposed to correct the misalignment before the pixel-to-pixel difference of the two maps is calculated. It is formulated as simply a linear least-squares problem. Sensitivity of the registration error to the misalignment is simulated with low-frequency and mid-frequency features in the surface error maps, represented by Zernike polynomials and spatially correlated functions, respectively. Finally, by applying it to two cases of real datasets, the algorithm is validated to be comparable in accuracy to a general nonlinear optimization method based on sequential quadratic programming, while requiring far less computation time.
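
    The piston/tip-tilt part of such a registration is indeed a plain linear least-squares problem; a minimal pure-Python sketch (hypothetical maps, normal-equations solve) fits a plane to the difference of two error maps:

```python
def fit_tip_tilt(map_a, map_b):
    """Least-squares fit of map_b - map_a to a plane p + tx*x + ty*y
    (piston, tip, tilt): the linear core of cross-test map registration."""
    pts = [(float(x), float(y), map_b[y][x] - map_a[y][x])
           for y in range(len(map_a)) for x in range(len(map_a[0]))]
    # normal equations (A^T A) c = A^T d for design-matrix rows [1, x, y]
    ata = [[0.0] * 3 for _ in range(3)]
    atd = [0.0] * 3
    for x, y, d in pts:
        row = (1.0, x, y)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atd[i] += row[i] * d
    # solve the 3x3 system by Gauss-Jordan elimination with pivoting
    m = [ata[i] + [atd[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# hypothetical maps: identical surface, second map misaligned by a plane
flat = [[0.0] * 5 for _ in range(4)]
tilted = [[0.5 + 0.1 * x - 0.2 * y for x in range(5)] for y in range(4)]
p, tx, ty = fit_tip_tilt(flat, tilted)
print(p, tx, ty)  # ≈ 0.5, 0.1, -0.2
```

    The paper's formulation adds columns for lateral shift, clocking and scaling; all remain linear in the unknowns, which is why a single least-squares solve suffices.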

  14. The 13 errors.

    PubMed

    Flower, J

    1998-01-01

    The reality is that most change efforts fail. McKinsey & Company carried out a fascinating research project on change to "crack the code" on creating and managing change in large organizations. One of the questions they asked--and answered--is why most organizations fail in their efforts to manage change. They found that 80 percent of these failures could be traced to 13 common errors. They are: (1) No winning strategy; (2) failure to make a compelling and urgent case for change; (3) failure to distinguish between decision-driven and behavior-dependent change; (4) over-reliance on structure and systems to change behavior; (5) lack of skills and resources; (6) failure to experiment; (7) leaders' inability or unwillingness to confront how they and their roles must change; (8) failure to mobilize and engage pivotal groups; (9) failure to understand and shape the informal organization; (10) inability to integrate and align all the initiatives; (11) no performance focus; (12) excessively open-ended process; and (13) failure to make the whole process transparent and meaningful to individuals. PMID:10351717

  15. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
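
    The comparison at the heart of CEM operates on binarized grids; a toy sketch of a simple gridwise agreement score between a forecast field D and an observed field d (the actual CEM contour matching is more elaborate):

```python
def agreement(D, d):
    """Fraction of grid cells where the forecast binary field D and the
    observed binary field d agree at one time index."""
    cells = [(Dij, dij) for Drow, drow in zip(D, d)
             for Dij, dij in zip(Drow, drow)]
    return sum(Dij == dij for Dij, dij in cells) / len(cells)

D = [[0, 0, 1], [0, 1, 1]]   # forecast: onshore (1) / offshore (0)
d = [[0, 1, 1], [0, 1, 1]]   # observed
print(agreement(D, d))  # 5 of 6 cells agree -> 0.833...
```

    CEM goes further by locating and verifying the sea-breeze boundary contour itself rather than scoring cells independently, but the binarized-grid input shown here is the same.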

  16. Improved Quantum Metrology Using Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Dür, W.; Skotiniotis, M.; Fröwis, F.; Kraus, B.

    2014-02-01

    We consider quantum metrology in noisy environments, where the effect of noise and decoherence limits the achievable gain in precision by quantum entanglement. We show that by using tools from quantum error correction this limitation can be overcome. This is demonstrated in two scenarios, including a many-body Hamiltonian with single-qubit dephasing or depolarizing noise and a single-body Hamiltonian with transversal noise. In both cases, we show that Heisenberg scaling, and hence a quadratic improvement over the classical case, can be retained. Moreover, for the case of frequency estimation we find that the inclusion of error correction allows, in certain instances, for a finite optimal interrogation time even in the asymptotic limit.

  17. Measurement of errors in clinical laboratories.

    PubMed

    Agarwal, Rachna

    2013-07-01

    Laboratories have a major impact on patient safety, as 80-90% of all diagnoses are made on the basis of laboratory tests. Laboratory errors have a reported frequency of 0.012-0.6% of all test results. Patient safety is a managerial issue which can be enhanced by implementing an active system to identify and monitor quality failures. This can be facilitated by a reactive method, which includes incident reporting followed by root cause analysis; this leads to identification and correction of weaknesses in policies and procedures in the system. Another way is a proactive method such as Failure Mode and Effect Analysis, in which the focus is on the entire examination process, anticipating major adverse events and pre-emptively preventing them from occurring. It is used for prospective risk analysis of high-risk processes to reduce the chance of errors in the laboratory and other patient care areas. PMID:24426216

  18. Error analysis in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

    1998-06-01

    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework, using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which are often brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  19. A QUANTITATIVE MODEL OF ERROR ACCUMULATION DURING PCR AMPLIFICATION

    PubMed Central

    Pienaar, E; Theron, M; Nelson, M; Viljoen, HJ

    2006-01-01

    The amplification of target DNA by the polymerase chain reaction (PCR) produces copies which may contain errors. Two sources of errors are associated with the PCR process: (1) editing errors that occur during DNA polymerase-catalyzed enzymatic copying and (2) errors due to DNA thermal damage. In this study a quantitative model of error frequencies is proposed and the role of reaction conditions is investigated. The errors which are ascribed to the polymerase depend on the efficiency of its editing function as well as the reaction conditions; specifically the temperature and the dNTP pool composition. Thermally induced errors stem mostly from three sources: A+G depurination, oxidative damage of guanine to 8-oxoG and cytosine deamination to uracil. The post-PCR modifications of sequences are primarily due to exposure of nucleic acids to elevated temperatures, especially if the DNA is in a single-stranded form. The proposed quantitative model predicts the accumulation of errors over the course of a PCR cycle. Thermal damage contributes significantly to the total errors; therefore consideration must be given to thermal management of the PCR process. PMID:16412692
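
    A back-of-envelope version of such error accumulation — a toy model, not the paper's rate equations — follows from linearity of expectation: with per-base, per-duplication error probability ε and amplicon length L, a strand whose lineage has been copied c times carries about εLc errors on average:

```python
def expected_errors_per_strand(eps, length, cycles):
    """Expected number of polymerase-induced errors in a strand whose
    lineage has been copied `cycles` times (linearity of expectation)."""
    return eps * length * cycles

def error_free_fraction(eps, length, cycles):
    """Probability that such a strand carries no error at all."""
    return (1.0 - eps) ** (length * cycles)

# illustrative Taq-like eps ~ 1e-5 per base, 1 kb amplicon, 30 cycles
print(expected_errors_per_strand(1e-5, 1000, 30))  # ≈ 0.3 errors on average
```

    The paper's full model additionally tracks temperature-dependent thermal damage (depurination, 8-oxoG, deamination), which this copying-only toy omits.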

  20. Skills, rules and knowledge in aircraft maintenance: errors in context

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2002-01-01

    Automatic or skill-based behaviour is generally considered to be less prone to error than behaviour directed by conscious control. However, researchers who have applied Rasmussen's skill-rule-knowledge human error framework to accidents and incidents have sometimes found that skill-based errors appear in significant numbers. It is proposed that this is largely a reflection of the opportunities for error which workplaces present and does not indicate that skill-based behaviour is intrinsically unreliable. In the current study, 99 errors reported by 72 aircraft mechanics were examined in the light of a task analysis based on observations of the work of 25 aircraft mechanics. The task analysis identified the opportunities for error presented at various stages of maintenance work packages and by the job as a whole. Once the frequency of each error type was normalized in terms of the opportunities for error, it became apparent that skill-based performance is more reliable than rule-based performance, which is in turn more reliable than knowledge-based performance. The results reinforce the belief that industrial safety interventions designed to reduce errors would best be directed at those aspects of jobs that involve rule- and knowledge-based performance.

  1. Random errors in egocentric networks.

    PubMed

    Almquist, Zack W

    2012-10-01

    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on Facebook friendships. Results show that 5-20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412
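
    The false-positive/false-negative perturbation scheme can be reproduced directly: flip each potential tie with the corresponding error probability and compare network measures before and after (a toy reimplementation of the idea, not the authors' code):

```python
import random

def perturb_edges(edges, nodes, fp, fn, rng):
    """Return an 'observed' tie set: each true tie is missed with
    probability fn (false negative), each absent tie appears with
    probability fp (false positive)."""
    observed = set()
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            tie = (nodes[i], nodes[j])
            present = tie in edges
            keep = (rng.random() >= fn) if present else (rng.random() < fp)
            if keep:
                observed.add(tie)
    return observed

rng = random.Random(42)
nodes = list(range(30))
true_edges = {(i, j) for i in nodes for j in nodes if i < j and rng.random() < 0.2}
noisy = perturb_edges(true_edges, nodes, fp=0.05, fn=0.10, rng=rng)
print(len(true_edges), len(noisy))  # tie counts before and after perturbation
```

    Repeating the perturbation many times and recomputing degree, density, or regression coefficients gives the misestimation distributions the paper studies.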

  3. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377
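
    The positive/zero/negative pattern of the dopamine signal falls out of simple delta-rule value learning; a minimal sketch:

```python
def train(reward, trials=200, alpha=0.1):
    """Delta-rule value learning: the prediction V moves toward the received
    reward, and the prediction error delta = reward - V shrinks as the
    reward becomes fully predicted."""
    v = 0.0
    delta = reward
    for _ in range(trials):
        delta = reward - v   # positive if reward exceeds prediction
        v += alpha * delta
    return v, delta

v, delta = train(reward=1.0)
print(round(v, 6), round(delta, 6))  # fully predicted reward -> error near 0
# omitting the reward after training would give delta = 0 - v, i.e. negative
```

    This mirrors the three cases in the abstract: activation for unpredicted reward, baseline for predicted reward, depression for omitted reward.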

  4. [Error factors in spirometry].

    PubMed

    Quadrelli, S A; Montiel, G C; Roncoroni, A J

    1994-01-01

    Spirometry is the most frequently used method to estimate pulmonary function in the clinical laboratory. It is important to comply with technical requisites to approximate the real values sought, as well as to interpret the results adequately. Recommendations are made to: (1) establish quality control; (2) define abnormality; (3) classify the change from normal and its degree; (4) define reversibility. In relation to quality control, several criteria are pointed out, such as end of the test, back-extrapolation and extrapolated volume, in order to delineate the most common errors. Daily calibration is advised. Inspection of graphical records of the test is mandatory. The limitations of the common use of 80% of predicted values to establish abnormality are stressed, and the reasons for employing 95% confidence limits are detailed. It is important to select the reference-values equation (in view of the differences in predicted values), and it is advisable to validate the selection against normal values from the local population. In relation to the definition of the defect as restrictive or obstructive, the limitations of vital capacity (VC) to establish restriction when obstruction is also present are defined, as well as the limitations of maximal mid-expiratory flow 25-75 (FMF 25-75) as an isolated marker of obstruction. Finally, the qualities of forced expiratory volume in 1 sec (VEF1) and the difficulties with other indicators (CVF, FMF 25-75, VEF1/CVF) in estimating reversibility after bronchodilators are evaluated, and the value of different methods used to define reversibility (% of change in initial value, absolute change, or % of predicted) is commented upon. Clinical spirometric studies, in order to be valuable, should be performed with the same technical rigour as any other more complex studies. PMID:7990690

  5. Statistical errors in Monte Carlo estimates of systematic errors

    NASA Astrophysics Data System (ADS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one, which reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
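
    For a linear model, the two procedures are easy to compare in simulation; a toy sketch with illustrative sensitivities (not from the paper):

```python
import math
import random

# sensitivities of the observable to each systematic parameter (sigma units)
c = [0.5, 1.2, 0.3]
true_sigma = math.sqrt(sum(ck * ck for ck in c))

# unisim: shift one parameter at a time by +1 sigma, combine in quadrature
unisim = math.sqrt(sum(ck ** 2 for ck in c))

# multisim: draw all parameters at once and take the spread of the observable
rng = random.Random(0)
shifts = [sum(ck * rng.gauss(0.0, 1.0) for ck in c) for _ in range(20000)]
mean = sum(shifts) / len(shifts)
multisim = math.sqrt(sum((s - mean) ** 2 for s in shifts) / (len(shifts) - 1))

print(true_sigma, unisim, multisim)  # all three agree in the linear regime
```

    In this noiseless linear toy both methods recover the true spread; the paper's point is how MC statistical noise in each run changes their relative efficiency.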

  6. Frequency spectrum analyzer with phase-lock

    DOEpatents

    Boland, Thomas J.

    1984-01-01

    A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal is composed of a voltage-controlled oscillator (VCO), which is driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal and drives the VCO, locking it in phase momentarily with the input signal. The input signal and the output of the VCO are fed into a correlator which transfers the input signal to the frequency domain while providing an accurate absolute amplitude measurement of each frequency component of the input signal.

  7. Development of transmission error tester for face gears

    NASA Astrophysics Data System (ADS)

    Shi, Zhao-yao; Lu, Xiao-ning; Chen, Chang-he; Lin, Jia-chun

    2013-10-01

    A tester for measuring the transmission error of face gears was developed based on the single-flank rolling principle. The mechanical host is a hybrid of vertical and horizontal configurations, and the tester mainly consists of a base, precision spindles, a grating measurement system and a control unit. The structure of the precision spindles was designed, and the rotation accuracy of the spindles was improved. Key techniques such as clamping, positioning and adjustment of the gears were investigated. In order to collect the transmission-error data, a high-frequency clock-pulse subdivision counting method with higher measurement resolution was proposed. The developed tester can inspect errors such as the transmission error of the gear pair, the tangential composite deviation of the measured face gear, pitch deviation and eccentricity error. The measurement results can be analyzed by the tester, which meets face-gear quality-testing requirements for accuracy up to grade 5.

  8. Nonresponse Error in Mail Surveys: Top Ten Problems

    PubMed Central

    Daly, Jeanette M.; Jones, Julie K.; Gereau, Patricia L.; Levy, Barcey T.

    2011-01-01

    Conducting mail surveys can result in nonresponse error, which occurs when the potential participant is unwilling to participate or impossible to contact. Nonresponse can result in a reduction in precision of the study and may bias results. The purpose of this paper is to describe and make readers aware of a top ten list of mailed survey problems affecting the response rate encountered over time with different research projects, while utilizing the Dillman Total Design Method. Ten nonresponse error problems were identified, such as inserter machine gets sequence out of order, capitalization in databases, and mailing discarded by postal service. These ten mishaps can potentiate nonresponse errors, but there are ways to minimize their frequency. Suggestions offered stem from our own experiences during research projects. Our goal is to increase researchers' knowledge of nonresponse error problems and to offer solutions which can decrease nonresponse error in future projects. PMID:21994846

  9. Teratogenic inborn errors of metabolism.

    PubMed Central

    Leonard, J. V.

    1986-01-01

    Most children with inborn errors of metabolism are born healthy without malformations as the fetus is protected by the metabolic activity of the placenta. However, certain inborn errors of the fetus have teratogenic effects although the mechanisms responsible for the malformations are not generally understood. Inborn errors in the mother may also be teratogenic. The adverse effects of these may be reduced by improved metabolic control of the biochemical disorder. PMID:3540927

  10. Coherent error suppression in multiqubit entangling gates.

    PubMed

    Hayes, D; Clark, S M; Debnath, S; Hucul, D; Inlek, I V; Lee, K W; Quraishi, Q; Monroe, C

    2012-07-13

    We demonstrate a simple pulse shaping technique designed to improve the fidelity of spin-dependent force operations commonly used to implement entangling gates in trapped ion systems. This extension of the Mølmer-Sørensen gate can theoretically suppress the effects of certain frequency and timing errors to any desired order and is demonstrated through Walsh modulation of a two qubit entangling gate on trapped atomic ions. The technique is applicable to any system of qubits coupled through collective harmonic oscillator modes. PMID:23030141

  11. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

  12. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    A method of computing the position of a user station receiving signals from the Global Positioning System (GPS) of navigational satellites compensates for most of the GPS ephemeris error, enabling the user station to reduce the error in its computed position substantially. The user station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in the neighborhood of the reference stations. The method is based on the fact that when GPS data are used to compute the baseline between a reference station and the user station, the vector error in the computed baseline is proportional to the ephemeris error and to the length of the baseline.
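
    That proportionality supports a quick back-of-envelope bound; a sketch with illustrative numbers (satellite range taken as roughly 20,000 km):

```python
def baseline_error(baseline_m, ephemeris_error_m, sat_range_m=2.0e7):
    """Rule-of-thumb bound on the baseline error induced by a satellite
    ephemeris error: roughly (baseline / range to satellite) * error."""
    return baseline_m / sat_range_m * ephemeris_error_m

# 300 km between reference and user stations, 5 m ephemeris error
print(baseline_error(300e3, 5.0))  # ~0.075 m residual baseline error
```

    This geometric dilution is why differencing against nearby reference stations cancels most of the ephemeris error: the shorter the baseline relative to the satellite range, the smaller the residual.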

  13. Retransmission error control with memory

    NASA Technical Reports Server (NTRS)

    Sindhu, P. S.

    1977-01-01

    In this paper, an error control technique that is a basic improvement over automatic repeat request (ARQ) is presented. Erroneously received blocks in an ARQ system are used for error control. The technique is termed ARQ-with-memory (MRQ). The general MRQ system is described, and simple upper and lower bounds are derived on the throughput achievable by MRQ. The performance of MRQ with respect to throughput, message delay and probability of error is compared to that of ARQ by simulating both systems using error data from a VHF satellite channel operated in the ALOHA packet broadcasting mode.
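
    One simple way to exploit erroneously received blocks — in the spirit of MRQ, though not necessarily the paper's exact combining rule — is bitwise majority voting across retransmissions:

```python
def majority_combine(copies):
    """Combine several received copies of a block by bitwise majority vote."""
    n = len(copies)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*copies)]

sent = [1, 0, 1, 1, 0, 0, 1, 0]
# three noisy receptions, each with a single bit error in a different place
rx = [
    [0, 0, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 1, 1, 0],
]
print(majority_combine(rx) == sent)  # True
```

    A plain ARQ receiver would have discarded all three copies; combining them recovers the block without a further retransmission, which is the source of MRQ's throughput gain.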

  14. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  15. Physical examination. Frequently observed errors.

    PubMed

    Wiener, S; Nathanson, M

    1976-08-16

    A method allowing for direct observation of intern and resident physicians while interviewing and examining patients has been in use on our medical wards for the last five years. A large number of errors in the performance of the medical examination by young physicians were noted and a classification of these errors into those of technique, omission, detection, interpretation, and recording was made. An approach to detection and correction of each of these kinds of errors is presented, as well as a discussion of possible reasons for the occurrence of these errors in physician performance. PMID:947266

  16. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  17. Quality and Safety in Health Care, Part XIII: Detecting and Analyzing Diagnostic Errors.

    PubMed

    Harolds, Jay A

    2016-08-01

    There are many ways to help determine the incidence of errors in diagnosis including reviewing autopsy data, health insurance and malpractice claims, patient health records, and surveys of doctors and patients. However, all of these methods have positive and negative points. There are also a variety of ways to analyze diagnostic errors and many recommendations about how to decrease the frequency of errors in diagnosis. Overdiagnosis is an important quality and safety issue but is not considered an error. PMID:27163458

  18. Error estimates and specification parameters for functional renormalization

    SciTech Connect

    Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof

    2013-07-15

    We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.

  19. A posteriori error estimator and error control for contact problems

    NASA Astrophysics Data System (ADS)

    Weiss, Alexander; Wohlmuth, Barbara I.

    2009-09-01

    In this paper, we consider two error estimators for one-body contact problems. The first error estimator is defined in terms of H(div)-conforming stress approximations and equilibrated fluxes, while the second is a standard edge-based residual error estimator without any modification with respect to the contact. We show reliability and efficiency for both estimators. Moreover, the error is bounded by the first estimator with constant one, plus a higher-order data oscillation term and a term arising from the contact that is shown numerically to be of higher order. The second estimator is used in a control-based AFEM refinement strategy, and the decay of the error in the energy is shown. Several numerical tests demonstrate the performance of both estimators.

  20. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  1. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
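The interleaving step described in both records above can be demonstrated in a few lines: a depth-I block interleaver spreads a burst of channel errors across I codewords, so each Reed-Solomon decoder sees only a short, correctable error pattern. The parameters below are toy values for illustration, not the CCSDS (255,223) frame.

```python
DEPTH = 5          # interleave depth (number of codewords)
CODEWORD_LEN = 10  # toy codeword length

def interleave(codewords):
    """Write row by row (one codeword per row), read column by column."""
    return [codewords[r][c] for c in range(CODEWORD_LEN) for r in range(DEPTH)]

def deinterleave(stream):
    """Invert the interleaver: stream index i maps to row i % DEPTH,
    column i // DEPTH."""
    rows = [[None] * CODEWORD_LEN for _ in range(DEPTH)]
    for i, sym in enumerate(stream):
        rows[i % DEPTH][i // DEPTH] = sym
    return rows

codewords = [[(r, c) for c in range(CODEWORD_LEN)] for r in range(DEPTH)]
stream = interleave(codewords)

# Hit the channel with a burst of 5 consecutive symbol errors.
burst_start = 12
corrupted = list(stream)
for i in range(burst_start, burst_start + DEPTH):
    corrupted[i] = "ERR"

received = deinterleave(corrupted)
errors_per_codeword = [row.count("ERR") for row in received]
print(errors_per_codeword)  # each codeword absorbs exactly one error
```

A burst no longer than the interleave depth lands in distinct codewords, which is exactly why the burst-correction capability grows in proportion to the depth.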

  2. Ultrashort-pulse measurement using noninstantaneous nonlinearities: Raman effects in frequency-resolved optical gating

    SciTech Connect

    DeLong, K.W.; Ladera, C.L.; Trebino, R.; Kohler, B.; Wilson, K.R.

    1995-03-01

    Ultrashort-pulse-characterization techniques generally require instantaneously responding media. We show that this is not the case for frequency-resolved optical gating (FROG). We include, as an example, the noninstantaneous Raman response of fused silica, which can cause errors in the retrieved pulse width of as much as 8% for a 25-fs pulse in polarization-gate FROG. We present a modified pulse-retrieval algorithm that deconvolves such slow effects and use it to retrieve pulses of any width. In experiments with 45-fs pulses this algorithm achieved better convergence and yielded a shorter pulse than previous FROG algorithms.

  3. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  4. Dyslexia and Oral Reading Errors

    ERIC Educational Resources Information Center

    Singleton, Chris

    2005-01-01

    Thomson was the first of very few researchers to have studied oral reading errors as a means of addressing the question: Are dyslexic readers different to other readers? Using the Neale Analysis of Reading Ability and Goodman's taxonomy of oral reading errors, Thomson concluded that dyslexic readers are different, but he found that they do not…

  5. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  6. Robustness and modeling error characterization

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.; Castanon, D. A.; Sandell, N. R., Jr.; Levy, B. C.; Athans, M.; Stein, G.

    1984-01-01

    The results on robustness theory presented here are extensions of those given in Lehtomaki et al., (1981). The basic innovation in these new results is that they utilize minimal additional information about the structure of the modeling error, as well as its magnitude, to assess the robustness of feedback systems for which robustness tests based on the magnitude of modeling error alone are inconclusive.

  7. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  8. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  9. Measurement Errors in Organizational Surveys.

    ERIC Educational Resources Information Center

    Dutka, Solomon; Frankel, Lester R.

    1993-01-01

    Describes three classes of measurement techniques: (1) interviewing methods; (2) record retrieval procedures; and (3) observation methods. Discusses primary reasons for measurement error. Concludes that, although measurement error can be defined and controlled for, there are other design factors that also must be considered. (CFR)

  10. Barriers to Medical Error Reporting

    PubMed Central

    Poorolajal, Jalal; Rezaie, Shirin; Aghighi, Negar

    2015-01-01

    Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staffs of radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), age of 40–50 years (67.6%), less-experienced personnel (58.7%), educational level of MSc (87.5%), and staff of radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement. PMID:26605018

  11. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  12. Reducing latent errors, drift errors, and stakeholder dissonance.

    PubMed

    Samaras, George M

    2012-01-01

    Healthcare information technology (HIT) is being offered as a transformer of modern healthcare delivery systems. Some believe that it has the potential to improve patient safety, increase the effectiveness of healthcare delivery, and generate significant cost savings. In other industrial sectors, information technology has dramatically influenced quality and profitability - sometimes for the better and sometimes not. Quality improvement efforts in healthcare delivery have not yet produced the dramatic results obtained in other industrial sectors. This may be because previously successful quality improvement experts do not possess the requisite domain knowledge (clinical experience and expertise). It also appears related to a continuing misconception regarding the origins and meaning of work errors in healthcare delivery. The focus here is on system use errors rather than individual user errors. System use errors originate in both the development and the deployment of technology. Not recognizing stakeholders and their conflicting needs, wants, and desires (NWDs) may lead to stakeholder dissonance. Mistakes translating stakeholder NWDs into development or deployment requirements may lead to latent errors. Mistakes translating requirements into specifications may lead to drift errors. At the sharp end, workers encounter system use errors or, recognizing the risk, expend extensive and unanticipated resources to avoid them. PMID:22317001

  13. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  14. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  15. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Lan, Tian; Ni, Guoqiang

    2013-07-01

    A motion error model of 3D coordinates is established and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence of laser point cloud spacing on the positioning errors is small. In the model, the positioning errors obey a simple harmonic vibration whose amplitude envelope gradually decreases as the vibration frequency increases. When the vibration period number is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the plane error, and in the plane the error in the scanning direction is less than the error in the flight direction. The conclusion is verified through the analysis of flight test data.

  16. Canceling the momentum in a phase-shifting algorithm to eliminate spatially uniform errors.

    PubMed

    Hibino, Kenichi; Kim, Yangjin

    2016-08-10

    In phase-shifting interferometry, phase modulation nonlinearity causes both spatially uniform and nonuniform errors in the measured phase. Conventional linear-detuning error-compensating algorithms only eliminate the spatially variable error component. The uniform error is proportional to the inertial momentum of the data-sampling weight of a phase-shifting algorithm. This paper proposes a design approach to cancel the momentum by using characteristic polynomials in the Z-transform space and shows that an arbitrary M-frame algorithm can be modified to a new (M+2)-frame algorithm that acquires new symmetry to eliminate the uniform error. PMID:27534475

  17. Waveform error analysis for bistatic synthetic aperture radar systems

    NASA Astrophysics Data System (ADS)

    Adams, J. W.; Schifani, T. M.

    The signal phase histories at the transmitter, receiver, and radar signal processor in bistatic SAR systems are described. The fundamental problem of mismatches in the waveform generators for the illuminating and receiving radar systems is analyzed. The effects of errors in carrier frequency and chirp slope are analyzed for bistatic radar systems which use linear FM waveforms. It is shown that the primary effect of a mismatch in carrier frequencies is an azimuth displacement of the image.

  18. Accumulation of infectious mutants in stocks during the propagation of fiber-modified recombinant adenoviruses

    SciTech Connect

    Ugai, Hideyo; Inabe, Kumiko; Yamasaki, Takahito; Murata, Takehide; Obata, Yuichi; Hamada, Hirofumi; Yokoyama, Kazunari K. . E-mail: kazu@brc.riken.jp

    2005-11-25

    In infected cells, replication errors during viral proliferation generate mutations in adenoviruses (Ads), and the mutant Ads proliferate and evolve in the intracellular environment. Genetically fiber-modified recombinant Ads (rAd variants) were generated, by modification of the fiber gene, for therapeutic applications in host cells that lack or express reduced levels of the Coxsackievirus and adenovirus receptor. To assess the genetic modifications of rAd variants that might induce the instability of Ad virions, we examined the frequencies of mutants that accumulated in propagated stocks. Seven of 41 lines of Ad variants generated mutants in the stocks and all mutants were infectious. Moreover, all the mutations occurred in the modified region that had been added at the 3' end of the fiber gene. Our results show that some genetic modifications at the carboxyl terminus of Ad fiber protein lead to the instability of Ad virions.

  19. Critical evidence for the prediction error theory in associative learning

    PubMed Central

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an “auto-blocking”, which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125

  20. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125

  1. Quantum rms error and Heisenberg's error-disturbance relation

    NASA Astrophysics Data System (ADS)

    Busch, Paul

    2014-09-01

    Reports on experiments recently performed in Vienna [Erhard et al, Nature Phys. 8, 185 (2012)] and Toronto [Rozema et al, Phys. Rev. Lett. 109, 100404 (2012)] include claims of a violation of Heisenberg's error-disturbance relation. In contrast, a Heisenberg-type tradeoff relation for joint measurements of position and momentum has been formulated and proven in [Phys. Rev. Lett. 111, 160405 (2013)]. Here I show how the apparent conflict is resolved by a careful consideration of the quantum generalization of the notion of root-mean-square error. The claim of a violation of Heisenberg's principle is untenable as it is based on a historically wrong attribution of an incorrect relation to Heisenberg, which is in fact trivially violated. We review a new general trade-off relation for the necessary errors in approximate joint measurements of incompatible qubit observables that is in the spirit of Heisenberg's intuitions. The experiments mentioned may directly be used to test this new error inequality.

  2. Syntactic and Semantic Errors in Radiology Reports Associated With Speech Recognition Software.

    PubMed

    Ringler, Michael D; Goss, Brian C; Bartholmai, Brian J

    2015-01-01

    Speech recognition software (SRS) has many benefits, but also increases the frequency of errors in radiology reports, which could impact patient care. As part of a quality control project, 13 trained medical transcriptionists proofread 213,977 SRS-generated signed reports from 147 different radiologists over a 40-month time interval. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialty, and time periods using χ² analysis and multiple logistic regression, as appropriate. 20,759 (9.7%) reports contained errors; 3,992 (1.9%) contained material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (P<.001). Error proportion varied significantly among radiologists and between imaging subspecialties (P<.001). Errors were more common in cross-sectional reports (vs. plain radiography) (OR, 3.72), reports reinterpreting results of outside examinations (vs. in-house) (OR, 1.55), and procedural studies (vs. diagnostic) (OR, 1.91) (all P<.001). Dictation microphone upgrade did not affect error rate (P=.06). Error rate decreased over time (P<.001). PMID:26262224

  3. Frequency synthesizers for telemetry receivers

    NASA Astrophysics Data System (ADS)

    Stirling, Ronald C.

    1990-07-01

    The design of a frequency synthesizer is presented for telemetry receivers. The synthesizer contains two phase-locked loops, each with a programmable frequency counter, and incorporates fractional frequency synthesis but does not use a phase accumulator. The selected receiver design has a variable reference loop operating as a part of the output loop. Within the synthesizer, a single VTO generates the output frequency that is voltage-tunable from 375-656 MHz. The single-sideband phase noise is measured with an HP 8566B spectrum analyzer, and the receiver's bit error rate (BER) is measured with a carrier frequency of 250 MHz, synthesized LO at 410 MHz, and the conditions of BPSK, NRZ-L, and 2.3 kHz bit rate. The phase noise measurement limits and the BER performance data are presented in tabular form.

  4. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment create machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is how to locate the temperature sensors and to determine the number of required temperature sensors. This research develops a method to determine the number and location of temperature measurements.
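The simple model this record selects - deflection linearly related to discrete temperature measurements - can be sketched as an ordinary least-squares fit. The sensor count, calibration points, and "true" coefficients below are invented for illustration; the record's method for choosing sensor number and location is not reproduced here.

```python
def lstsq(X, y):
    """Solve the normal equations (X^T X) w = X^T y by Gaussian elimination.
    X^T X is symmetric positive definite here, so no pivoting is needed."""
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):                      # forward elimination
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for c in range(i, n):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

# Synthetic calibration data: deflection = 0.8*T1 - 0.3*T2 + 2.0 (micrometers)
temps = [(20.0, 21.0), (25.0, 22.0), (30.0, 24.0), (35.0, 27.0), (22.0, 20.0)]
deflection = [0.8 * t1 - 0.3 * t2 + 2.0 for t1, t2 in temps]
X = [[t1, t2, 1.0] for t1, t2 in temps]    # last column = intercept term
w = lstsq(X, deflection)

# Predict (and hence compensate) the deflection at a new temperature state.
predicted = sum(wi * xi for wi, xi in zip(w, (28.0, 23.0, 1.0)))
print([round(v, 3) for v in w], round(predicted, 3))
```

Once the weights are calibrated, the machine controller can subtract the predicted deflection from the commanded position in real time.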

  5. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often accidents and systems failure are traced to human errors. Therefore, in order to have meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. Therefore, the objective of this research effort is to establish stochastic models substantiated by sound theoretic foundation to address the occurrence of human errors in the processing of the space shuttle.

  6. Detection and frequency tracking of chirping signals

    SciTech Connect

    Elliott, G.R.; Stearns, S.D.

    1990-08-01

    This paper discusses several methods to detect the presence of and track the frequency of a chirping signal in broadband noise. The dynamic behavior of each of the methods is described and tracking error bounds are investigated in terms of the chirp rate. Frequency tracking and behavior in the presence of varying levels of noise are illustrated in examples. 11 refs., 29 figs.
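    One common frequency-tracking approach can be sketched briefly. This is an illustrative short-time FFT peak picker on a synthetic chirp in noise, under assumed parameters; it is not one of the specific trackers analyzed in the report.

```python
# Minimal sketch: track the frequency of a linear chirp in broadband noise
# by picking the FFT peak in sliding windows. All parameters are assumed
# for illustration.
import numpy as np

fs = 1000.0                          # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f0, rate = 50.0, 100.0               # start frequency (Hz) and chirp rate (Hz/s)
phase = 2 * np.pi * (f0 * t + 0.5 * rate * t**2)
x = np.cos(phase) + 0.5 * np.random.default_rng(1).normal(size=t.size)

win, hop = 128, 64
track = []
for start in range(0, x.size - win, hop):
    seg = x[start:start + win] * np.hanning(win)
    spec = np.abs(np.fft.rfft(seg))
    track.append(np.argmax(spec) * fs / win)   # peak bin index -> Hz

track = np.array(track)
# The tracked frequency rises from near 50 Hz toward 150 Hz with the chirp.
print(track[0], track[-1])
```

    The quantization of the track to FFT bin width (fs/win, here about 7.8 Hz) is one source of the tracking error bounds discussed in the paper.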

  7. Frequency spirals

    NASA Astrophysics Data System (ADS)

    Ottino-Löffler, Bertrand; Strogatz, Steven H.

    2016-09-01

    We study the dynamics of coupled phase oscillators on a two-dimensional Kuramoto lattice with periodic boundary conditions. For coupling strengths just below the transition to global phase-locking, we find localized spatiotemporal patterns that we call "frequency spirals." These patterns cannot be seen under time averaging; they become visible only when we examine the spatial variation of the oscillators' instantaneous frequencies, where they manifest themselves as two-armed rotating spirals. In the more familiar phase representation, they appear as wobbly periodic patterns surrounding a phase vortex. Unlike the stationary phase vortices seen in magnetic spin systems, or the rotating spiral waves seen in reaction-diffusion systems, frequency spirals librate: the phases of the oscillators surrounding the central vortex move forward and then backward, executing a periodic motion with zero winding number. We construct the simplest frequency spiral and characterize its properties using analytical and numerical methods. Simulations show that frequency spirals in large lattices behave much like this simple prototype.
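    The underlying model can be sketched in a few lines. This toy run only sets up the 2D Kuramoto lattice with periodic boundaries and verifies phase locking from near-uniform initial phases; it does not construct the spiral (vortex) solutions themselves, and all parameter values are assumed.

```python
# Minimal sketch of a 2D Kuramoto lattice with periodic boundary conditions.
# Instantaneous frequencies are dtheta/dt. Near-uniform initial phases are
# used so the run relaxes to the globally locked state rather than trapping
# a vortex (the regime where the paper's frequency spirals live).
import numpy as np

rng = np.random.default_rng(2)
n = 16                                        # lattice is n x n
omega = rng.normal(0.0, 0.1, size=(n, n))     # natural frequencies
theta = 0.1 * rng.uniform(0, 2 * np.pi, size=(n, n))
K, dt = 2.0, 0.05                             # coupling above locking threshold

def dtheta(theta):
    """Sine coupling to the four nearest neighbors, periodic boundaries."""
    s = np.zeros_like(theta)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        s += np.sin(np.roll(theta, shift, axis=axis) - theta)
    return omega + K * s

for _ in range(4000):                         # forward Euler integration
    theta = theta + dt * dtheta(theta)

inst_freq = dtheta(theta)                     # instantaneous frequencies
print(inst_freq.std())                        # near zero once phase-locked
```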

  8. Canonical Correlation Analysis that Incorporates Measurement and Sampling Error Considerations.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Daniel, Larry

    Multivariate methods are being used with increasing frequency in educational research because these methods control "experimentwise" error rate inflation, and because the methods best honor the nature of the reality to which the researcher wishes to generalize. This paper: explains the basic logic of canonical analysis; illustrates that canonical…

  9. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2009-02-20

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
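    The abstract does not state the theorem itself; for reference, the common textbook form (not necessarily the report's notation or its frequency-domain derivation) is:

```latex
% Standard statement of the Peano Kernel Theorem.
% If a linear functional $E$ annihilates all polynomials of degree $\le n$,
% then for $f \in C^{n+1}[a,b]$:
\[
  E(f) \;=\; \int_a^b K(t)\, f^{(n+1)}(t)\, dt,
  \qquad
  K(t) \;=\; \frac{1}{n!}\, E_x\!\left[(x - t)_+^{\,n}\right],
\]
% where $(x-t)_+^n = (x-t)^n$ for $x \ge t$ and $0$ otherwise, and $E_x$
% indicates that $E$ acts on the variable $x$. The kernel $K$ bounds the
% error of the filtering (or quadrature) rule represented by $E$.
```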

  10. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2008-03-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  11. Shape error analysis for reflective nano focusing optics

    SciTech Connect

    Modi, Mohammed H.; Idir, Mourad

    2010-06-23

    Focusing performance of reflective x-ray optics is determined by surface figure accuracy. Any surface imperfection present on such optics introduces a phase error in the outgoing wave fields, so the converging beam at the focal spot will differ from the desired one. The effect of these errors on focusing performance can be calculated by a wave-optical approach, considering coherent wave field illumination of the optical elements. We have developed a wave optics simulator using the Fresnel-Kirchhoff diffraction integral to calculate the mirror pupil function. Both analytically calculated and measured surface topography data can be taken as an aberration source for the outgoing wave fields. Simulations are performed to study the effect of surface height fluctuations on focusing performance over a wide frequency range covering the high-, mid-, and low-frequency bands. The results using a real shape profile measured with a long trace profilometer (LTP) suggest that a shape error of λ/4 PV (peak to valley) is tolerable to achieve diffraction-limited performance. It is desirable to remove shape errors at very low frequencies, around 0.1 mm⁻¹, which otherwise generate beam waist or satellite peaks. Frequencies above this limit do not affect the focused beam profile but only cause a loss in intensity.

  12. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  13. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  14. Orbital and Geodetic Error Analysis

    NASA Technical Reports Server (NTRS)

    Felsentreger, T.; Maresca, P.; Estes, R.

    1985-01-01

    Results that previously required several runs are now determined in a more computer-efficient manner. Multiple runs are performed only once with GEODYN and stored on tape. ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.

  15. Prospective errors determine motor learning.

    PubMed

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model's novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  16. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.

  17. Quantum error correction beyond qubits

    NASA Astrophysics Data System (ADS)

    Aoki, Takao; Takahashi, Go; Kajiya, Tadashi; Yoshikawa, Jun-Ichi; Braunstein, Samuel L.; van Loock, Peter; Furusawa, Akira

    2009-08-01

    Quantum computation and communication rely on the ability to manipulate quantum states robustly and with high fidelity. To protect fragile quantum-superposition states from corruption through so-called decoherence noise, some form of error correction is needed. Therefore, the discovery of quantum error correction (QEC) was a key step to turn the field of quantum information from an academic curiosity into a developing technology. Here, we present an experimental implementation of a QEC code for quantum information encoded in continuous variables, based on entanglement among nine optical beams. This nine-wave-packet adaptation of Shor's original nine-qubit scheme enables, at least in principle, full quantum error correction against an arbitrary single-beam error.

  18. Quantum error correction for state transfer in noisy spin chains

    NASA Astrophysics Data System (ADS)

    Kay, Alastair

    2016-04-01

    Can robustness against experimental imperfections and noise be embedded into a quantum simulation? In this paper, we report on a special case in which this is possible. A spin chain can be engineered such that, in the absence of imperfections and noise, an unknown quantum state is transported from one end of the chain to the other, due only to the intrinsic dynamics of the system. We show that an encoding into a standard error-correcting code (a Calderbank-Shor-Steane code) can be embedded into this simulation task such that a modified error-correction procedure on readout can recover from sufficiently low rates of noise during transport.

  19. THERP and HEART integrated methodology for human error assessment

    NASA Astrophysics Data System (ADS)

    Castiglia, Francesco; Giardina, Mariarosa; Tomarchio, Elio

    2015-11-01

    An integrated THERP and HEART methodology is proposed to investigate accident scenarios that involve operator errors during high-dose-rate (HDR) treatments. The new approach has been modified on the basis of the fuzzy set concept with the aim of prioritizing an exhaustive list of erroneous tasks that can lead to patient radiological overexposures. The results allow for the identification of human errors that are necessary to achieve a better understanding of health hazards in the radiotherapy treatment process, so that it can be properly monitored and appropriately managed.

  20. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximate four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or readback, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail and several associated hazard mitigating measures that might be taken are considered.

  1. Dissipation-induced continuous quantum error correction for superconducting circuits

    NASA Astrophysics Data System (ADS)

    Cohen, Joachim; Mirrahimi, Mazyar

    2014-12-01

    Quantum error correction (QEC) is a crucial step towards the long coherence times required for efficient quantum information processing. One major challenge in this direction concerns the fast real-time analysis of error syndrome measurements and the associated feedback control. Recent proposals on autonomous QEC (AQEC) have opened new perspectives to overcome this difficulty. Here, we design an AQEC scheme based on quantum reservoir engineering adapted to superconducting qubits. We focus on a three-qubit bit-flip code, where three transmon qubits are dispersively coupled to a few low-Q resonator modes. By applying only continuous-wave drives of fixed but well-chosen frequencies and amplitudes, we engineer an effective interaction Hamiltonian to evacuate the entropy created by bit-flip errors as they occur. We provide a full analytical and numerical study of the protocol while introducing the main limitations on the achievable error correction rates.
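    The three-qubit bit-flip code that the scheme protects can be sketched classically. This toy shows only the code's logic, that a two-parity syndrome (equivalently a majority vote) corrects any single bit flip; the dissipative, continuous-time machinery of the paper is not modeled.

```python
# Classical sketch of the three-qubit bit-flip code: encode one logical bit
# into three physical bits, locate a single flip from the two parity checks,
# and decode by majority vote. Illustrative only.
def encode(bit):
    return [bit, bit, bit]

def syndrome(bits):
    """Parities (b0 xor b1, b1 xor b2) locate a single flipped bit."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    flip_at = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip_at is not None:
        bits = bits.copy()
        bits[flip_at] ^= 1
    return bits

def decode(bits):
    return int(sum(bits) >= 2)

# Any single bit-flip error is corrected:
for logical in (0, 1):
    for err in range(3):
        noisy = encode(logical)
        noisy[err] ^= 1
        assert decode(correct(noisy)) == logical
print("all single bit-flip errors corrected")
```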

  2. Adaptive periodic error correction for Heidenhain tape encoders

    NASA Astrophysics Data System (ADS)

    Warner, Michael; Krabbendam, Victor; Schumacher, German

    2008-07-01

    Heidenhain position tape encoders are in use on almost all modern telescopes with excellent results. Performance of these systems can be limited by minor mechanical misalignments between the tape and read head, causing errors at the grating period. The first and second harmonics of the measured signal are the dominant errors and have a varying frequency dependent on axis rate. When the error spectrum is within the mount servo bandwidth, it results in periodic telescope pointing jitter. This paper will describe an adaptive error correction using elliptic interpolation of the raw signals, based on the well-known compensation technique developed by Heydemann [1]. The approach allows the compensation to track in real time with no need of a large static look-up table or frequent calibrations. This paper also presents the results obtained after applying this approach to data measured on the SOAR telescope.
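    The core of Heydemann-style elliptic correction can be sketched as follows. This sketch assumes the ellipse parameters (offsets p, q, amplitude ratio r, phase error alpha) are already known; the adaptive, real-time estimation of those parameters described in the paper is not shown, and all values are illustrative.

```python
# Minimal sketch of Heydemann elliptic correction of a quadrature signal
# pair: map the distorted ellipse back to a circle, then recover the phase
# with atan2. Distortion parameters are assumed known here.
import numpy as np

p, q, r, alpha = 0.05, -0.03, 1.2, 0.1      # assumed distortion parameters
A = 1.0
theta = np.linspace(0, 2 * np.pi, 500, endpoint=False)
x = p + A * np.cos(theta)                    # distorted quadrature pair
y = q + (A / r) * np.sin(theta - alpha)

def heydemann_correct(x, y, p, q, r, alpha):
    """Undo offsets, gain mismatch, and quadrature phase error."""
    u = x - p
    v = (r * (y - q) + u * np.sin(alpha)) / np.cos(alpha)
    return np.arctan2(v, u)

theta_hat = heydemann_correct(x, y, p, q, r, alpha)
err = np.angle(np.exp(1j * (theta_hat - theta)))   # wrapped phase error
print(np.abs(err).max())                           # ~0 with exact parameters
```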

  3. Medical Error and Moral Luck.

    PubMed

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613

  4. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing) system, which is a familiar tool for quality control agents.

  5. Automatic oscillator frequency control system

    NASA Technical Reports Server (NTRS)

    Smith, S. F. (Inventor)

    1985-01-01

    A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.
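    The check-and-correct cycle described above can be sketched arithmetically. This is a toy illustration of counting cycles over a gate interval and deriving a proportional correction word; the gate time, step size, and function are assumptions, not values from the patent.

```python
# Minimal sketch of the check-and-correct idea: count cycles of the
# oscillator under test over a known gate interval, compare against the
# expected count, and derive an integer correction word to retune the
# oscillator. Numbers and gain are illustrative.
def frequency_correction(f_actual_hz, f_target_hz, gate_s=0.1, hz_per_lsb=10.0):
    """Return the cycle count over the gate and an integer correction word."""
    counted = round(f_actual_hz * gate_s)            # accumulator contents
    expected = round(f_target_hz * gate_s)           # stored reference count
    error_hz = (counted - expected) / gate_s
    correction_word = -round(error_hz / hz_per_lsb)  # steps to retune the VCO
    return counted, correction_word

counted, word = frequency_correction(10_000_250.0, 10_000_000.0)
print(counted, word)   # 250 Hz high -> retune down by 25 steps
```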

  6. Extended frequency turbofan model

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Park, J. W.; Jaekel, R. F.

    1980-01-01

    The fan model was developed using two dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With verification of the two dimensional fan model, this program formulated a high frequency F-100(3) engine simulation using row by row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.

  7. The Study of Prescribing Errors Among General Dentists

    PubMed Central

    Araghi, Solmaz; Sharifi, Rohollah; Ahmadi, Goran; Esfehani, Mahsa; Rezaei, Fatemeh

    2016-01-01

    Introduction: In dentistry, medications are often prescribed to relieve pain and treat infections. An incorrect prescription can therefore lead to a range of problems, including inadequate pain relief, antimicrobial treatment failure, and the development of antibiotic resistance. Materials and Methods: The aim of this cross-sectional study was to evaluate common errors in prescriptions written by general dentists in Kermanshah in 2014. Dentists received a questionnaire describing five hypothetical patients and were asked to write the appropriate prescription for each. Information about age, gender, work experience, and university admission route was collected, and the frequency of errors in the prescriptions was determined. Data were analyzed with SPSS 20 using the t-test, chi-square test, and Pearson correlation (P < 0.05). Results: A total of 180 dentists (62.6% male and 37.4% female) with a mean age of 39.19 ± 8.23 years participated in this study. Prescription errors included the wrong pharmaceutical form (11%), failure to write the therapeutic dose (13%), wrong dose (14%), typos (15%), erroneous prescriptions (23%), and the wrong number of drugs (24%). Errors were most frequent in the prescription of antiviral drugs (31%), followed by antifungal drugs (30%), analgesics (23%), and antibiotics (16%). Male dentists made errors more frequently than female dentists (P=0.046). Error frequency had a statistically significant relationship with long work history (P<0.001) and with university admission by routes other than the entrance examination (P=0.041). Conclusion: This study showed that the prescriptions written by the general dentists examined contained significant errors, and improving prescribing through continuing education of dentists is essential. PMID:26573049

  8. Sensitivity of actively damped structures to imperfections and modeling errors

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Kapania, Rakesh K.

    1989-01-01

    The sensitivity of actively damped response of structures with respect to errors in the structural modeling is studied. Two ways of representing errors are considered. The first approach assumes errors in the form of spatial variations (or imperfections) in the assumed mass and stiffness properties of the structures. The second approach assumes errors due to such factors as unknown joint stiffnesses, discretization errors, and nonlinearities. These errors are represented here as discrepancies between experimental and analytical mode shapes and frequencies. The actively damped system considered here is a direct-rate feedback regulator based on a number of colocated velocity sensors and force actuators. The response of the controlled structure is characterized by the eigenvalues of the closed-loop system. The effects of the modeling errors are thus presented as the sensitivity of the eigenvalues of the closed-loop system. Results are presented for two examples: (1) a three-span simply supported beam controlled by three sensors and actuators, and (2) a laboratory structure consisting of a cruciform beam supported by cables.

  9. Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca

    NASA Astrophysics Data System (ADS)

    Matteo, N. A.; Morton, Y. T.

    2010-12-01

    The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.
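    The first-order cancellation that the paper takes as its starting point can be sketched numerically. This is a standard illustration of the ionosphere-free pseudorange combination, with assumed TEC and range values; the higher-order terms studied in the paper (which also involve the magnetic field) do not share the 1/f² form and are not modeled here.

```python
# Minimal sketch: the first-order ionospheric group delay is
# 40.3 * TEC / f**2 meters (TEC in electrons/m^2), so the standard
# ionosphere-free combination of dual-frequency pseudoranges removes it
# exactly. Higher-order terms would survive this combination.
F1, F2 = 1575.42e6, 1227.60e6      # GPS L1/L2 carrier frequencies, Hz
TEC = 3.0e17                       # assumed slant total electron content
true_range = 2.0e7                 # assumed geometric range, meters

def first_order_delay(f_hz, tec):
    return 40.3 * tec / f_hz**2    # meters

p1 = true_range + first_order_delay(F1, TEC)
p2 = true_range + first_order_delay(F2, TEC)

# Ionosphere-free combination of the two pseudoranges:
p_if = (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)
print(p_if - true_range)           # ~0: first-order error removed
```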

  10. Application of the Modified Urey-Bradley-Shimanouchi Force field of α-D-Glucopyranose and β-D-Fructopyranose to Predict the Vibrational Spectra of Disaccharides

    NASA Astrophysics Data System (ADS)

    Gafour, H. M.; Sekkal-Rahal, M.; Sail, K.

    2014-01-01

    The vibrational frequencies of the disaccharide isomaltulose in the solid state have been reproduced in the 50-4000 cm⁻¹ range. The modified Urey-Bradley-Shimanouchi force field was used, combined with an intermolecular potential energy function that includes van der Waals interactions, electrostatic terms, and an explicit hydrogen bond function. The force constants previously established for α-D-glucopyranose and β-D-fructopyranose, as well as the crystallographic data of isomaltulose monohydrate, were the starting parameters for the present work. The vibrational frequencies of isomaltulose were calculated and assigned to the experimentally observed vibrational frequencies. Overall, there was good agreement between the observed and calculated frequencies, with an average error of 4 cm⁻¹. Furthermore, good agreement was found between our calculated results and the vibration spectra of other disaccharides and monosaccharides.

  11. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
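    The equation-error idea itself can be sketched on a toy system. This fits the state equation of a simulated first-order linear model by ordinary least squares on a numerically differentiated state; the data and model are synthetic, not the F-16 simulation used in the paper.

```python
# Minimal sketch of equation-error parameter estimation: with state x and
# control u measured, fit xdot = a*x + b*u by ordinary least squares on the
# differentiated state. Synthetic data; illustrative only.
import numpy as np

a_true, b_true, dt = -0.8, 2.0, 0.01
n = 2000
u = np.sin(0.5 * np.arange(n) * dt)       # control input
x = np.zeros(n)
for k in range(n - 1):                    # simulate the true dynamics (Euler)
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

xdot = np.gradient(x, dt)                 # numerically differentiated state
X = np.column_stack([x, u])               # regressor matrix
a_hat, b_hat = np.linalg.lstsq(X, xdot, rcond=None)[0]
print(a_hat, b_hat)                       # close to -0.8 and 2.0
```

    With noise added to x before differentiation, the regressors become noisy and the estimates acquire the bias discussed in the paper; that is the practical issue the equation-error literature addresses.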

  12. Precision Analysis Based on Complicated Error Simulation for the Orbit Determination with the Space Tracking Ship

    NASA Astrophysics Data System (ADS)

    Lei, YANG; Caifa, GUO; Zhengxu, DAI; Xiaoyong, LI; Shaolin, WANG

    2016-02-01

    The space tracking ship is a moving platform in the TT&C network, and its orbit determination precision plays a key role in TT&C missions. Based on measurement data obtained by the ship-borne equipment, this paper presents mathematical models of the complicated error of the space tracking ship, which separate the random error and the low-frequency correction residual error from the complicated error. An error simulation algorithm is proposed to analyze the orbit determination precision for two sets of different equipment. With this algorithm, a group of complicated errors can be simulated from a measured sample. The simulated error groups provide sufficient complicated-error data for equipment tests before mission execution, which is helpful in practical application.

  13. 1.76Tb/s Nyquist PDM 16QAM signal transmission over 714km SSMF with the modified SCFDE technique.

    PubMed

    Zheng, Zhennan; Ding, Rui; Zhang, Fan; Chen, Zhangyuan

    2013-07-29

    Nyquist pulse shaping is a promising technique for high-speed optical fiber transmission. We experimentally demonstrate the generation and transmission of a 1.76Tb/s polarization-division-multiplexing (PDM) 16 quadrature amplitude modulation (QAM) Nyquist-pulse-shaping super-channel over 714km of standard single-mode fiber (SSMF) with Erbium-doped fiber amplifier (EDFA)-only amplification. The super-channel consists of 40 subcarriers tightly spaced at 6.25GHz with a spectral efficiency of 7.06b/s/Hz. The experiment is successfully enabled by the modified single-carrier frequency-domain estimation and equalization (SCFDE) scheme, which performs training-sequence-based channel estimation in the frequency domain and subsequent channel equalization in the time domain. After 714km transmission, the bit error rates (BER) of all subcarriers are lower than the forward error correction limit of 3.8 × 10⁻³. PMID:23938621

  14. Error Detection Processes in Problem Solving.

    ERIC Educational Resources Information Center

    Allwood, Carl Martin

    1984-01-01

    Describes a study which analyzed problem solvers' error detection processes by instructing subjects to think aloud when solving statistical problems. Effects of evaluative episodes on error detection, detection of different error types, error detection processes per se, and relationship of error detection behavior to problem-solving proficiency…

  15. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  16. Measurement error caused by spatial misalignment in environmental epidemiology

    PubMed Central

    Gryparis, Alexandros; Paciorek, Christopher J.; Zeka, Ariana; Schwartz, Joel; Coull, Brent A.

    2009-01-01

    In many environmental epidemiology studies, the locations and/or times of exposure measurements and health assessments do not match. In such settings, health effects analyses often use the predictions from an exposure model as a covariate in a regression model. Such exposure predictions contain some measurement error as the predicted values do not equal the true exposures. We provide a framework for spatial measurement error modeling, showing that smoothing induces a Berkson-type measurement error with nondiagonal error structure. From this viewpoint, we review the existing approaches to estimation in a linear regression health model, including direct use of the spatial predictions and exposure simulation, and explore some modified approaches, including Bayesian models and out-of-sample regression calibration, motivated by measurement error principles. We then extend this work to the generalized linear model framework for health outcomes. Based on analytical considerations and simulation results, we compare the performance of all these approaches under several spatial models for exposure. Our comparisons underscore several important points. First, exposure simulation can perform very poorly under certain realistic scenarios. Second, the relative performance of the different methods depends on the nature of the underlying exposure surface. Third, traditional measurement error concepts can help to explain the relative practical performance of the different methods. We apply the methods to data on the association between levels of particulate matter and birth weight in the greater Boston area. PMID:18927119
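    The classical-versus-Berkson distinction the paper builds on can be sketched with a simulation. This toy (synthetic data, assumed unit variances) shows the textbook result: classical error in the exposure attenuates the regression slope, while Berkson-type error leaves it essentially unbiased.

```python
# Minimal sketch: classical measurement error (observe w = x + noise,
# regress y on w) attenuates the slope by sigma_x^2/(sigma_x^2+sigma_u^2),
# while Berkson error (true x = assigned z + independent noise) does not
# bias the slope. Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(4)
n, beta = 50_000, 1.0
x_true = rng.normal(0, 1, n)
y = beta * x_true + rng.normal(0, 1, n)

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

# Classical error: observed exposure w, regress y on w.
w = x_true + rng.normal(0, 1, n)          # attenuation factor ~0.5 here
# Berkson error: assigned exposure z, with true exposure z + noise.
z = rng.normal(0, 1, n)
y_b = beta * (z + rng.normal(0, 1, n)) + rng.normal(0, 1, n)

print(slope(w, y), slope(z, y_b))         # ~0.5 vs ~1.0
```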

  17. An analysis of the effects of secondary reflections on dual-frequency reflectometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.; Cockrell, C. R.; Harrah, S. D.

    1990-01-01

    The error-producing mechanism involving secondary reflections in a dual-frequency, distance measuring reflectometer is examined analytically. Equations defining the phase, and hence distance, error are derived. The error-reducing potential of frequency-sweeping is demonstrated. It is shown that a single spurious return can be completely nullified by optimizing the sweep width.
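    The geometric principle behind dual-frequency ranging can be sketched numerically. This is an illustrative model only, not the authors' derivation: it assumes a simple round-trip phase of 4πfd/c per tone, so the phase difference between two tones separated by Δf encodes the distance, and any spurious phase error maps linearly into a distance error.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase(delta_phi_rad, delta_f_hz):
    """Distance from the round-trip phase difference between two tones.

    Round-trip phase at frequency f over distance d is 4*pi*f*d/c, so the
    phase difference of two tones separated by delta_f is
    delta_phi = 4*pi*delta_f*d/c.
    """
    return C * delta_phi_rad / (4.0 * math.pi * delta_f_hz)

def distance_error(phase_error_rad, delta_f_hz):
    """A spurious phase error maps linearly to a distance error."""
    return C * phase_error_rad / (4.0 * math.pi * delta_f_hz)

# Example (hypothetical numbers): 100 MHz tone spacing,
# one degree of spurious phase error from a secondary reflection
err_m = distance_error(math.radians(1.0), 100e6)
```

    With a 100 MHz tone spacing, a single degree of spurious phase corresponds to roughly 4 mm of distance error, which illustrates why nullifying secondary returns matters.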

  18. Analysis of coupling errors in a physically-based integrated surface water-groundwater model

    NASA Astrophysics Data System (ADS)

    Dagès, Cécile; Paniconi, Claudio; Sulis, Mauro

    2012-12-01

    Several physically-based models that couple 1D or 2D surface and 3D subsurface flow have recently been developed, but few studies have evaluated the errors directly associated with the different coupling schemes. In this paper we analyze the causes of mass balance error for a conventional and a modified sequential coupling scheme in worst-case scenario simulations of Hortonian runoff generation on a sloping plane catchment. The conventional scheme is noniterative, whereas for the modified scheme the surface-subsurface exchange fluxes are determined via a boundary condition switching procedure that is performed iteratively during resolution of the nonlinear subsurface flow equation. It is shown that the modified scheme generates much lower coupling mass balance errors than the conventional sequential scheme. While both coupling schemes are sensitive to time discretization, the iterative control of infiltration in the modified scheme greatly limits its sensitivity to temporal resolution. Little sensitivity to spatial discretization is observed for both schemes. For the modified scheme the different factors contributing to coupling error are isolated, and the error is observed to be highly correlated to the flood recession duration. More testing, under broader hydrologic contexts and including other coupling schemes, is recommended so that the findings from this first analysis of coupling errors can be extended to other surface water-groundwater models.

  19. Sensitivity in error detection of patient specific QA tools for IMRT plans

    NASA Astrophysics Data System (ADS)

    Lat, S. Z.; Suriyapee, S.; Sanghangthum, T.

    2016-03-01

    The high complexity of dose calculation in treatment planning and the accurate delivery of IMRT plans require a high-precision verification method. The purpose of this study is to investigate the error detection capability of patient-specific QA tools for IMRT plans. Two H&N and two prostate IMRT plans were studied with the MapCHECK2 and portal dosimetry QA tools. Measurements were undertaken for the original plans and for modified plans with intentionally introduced errors, comprising prescribed dose errors (±2 to ±6%) and position shifts in the X-axis and Y-axis (±1 to ±5 mm). After measurement, gamma pass rates of the original and modified plans were compared. The average gamma pass rates for the original H&N and prostate plans were 98.3% and 100% for MapCHECK2, and 95.9% and 99.8% for portal dosimetry, respectively. In the H&N plans, MapCHECK2 detected position shift errors starting from 3 mm, while portal dosimetry detected them starting from 2 mm. Both devices showed similar sensitivity to position shift errors in the prostate plans. For the H&N plans, MapCHECK2 detected dose errors starting at ±4%, whereas portal dosimetry detected them from ±2%. For the prostate plans, both devices identified dose errors starting from ±4%. Sensitivity of error detection depends on the type of error and the plan complexity.
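    The gamma pass rates compared in this study follow the standard gamma-analysis concept. As a hedged illustration (a generic 1D global gamma with hypothetical 3 mm / 3% criteria, not the exact algorithm used by MapCHECK2 or portal dosimetry):

```python
import math

def gamma_index(ref_dose, meas_dose, positions_mm, dta_mm=3.0, dd_frac=0.03):
    """1D global gamma analysis: for each reference point, take the minimum
    combined distance-to-agreement / dose-difference metric over all
    measured points. A point passes when gamma <= 1."""
    d_norm = dd_frac * max(ref_dose)  # global dose-difference criterion
    gammas = []
    for x_r, d_r in zip(positions_mm, ref_dose):
        g = min(
            math.sqrt(((x_r - x_m) / dta_mm) ** 2 + ((d_r - d_m) / d_norm) ** 2)
            for x_m, d_m in zip(positions_mm, meas_dose)
        )
        gammas.append(g)
    return gammas

def gamma_pass_rate(gammas):
    """Percentage of points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

    Identical distributions give a 100% pass rate; introduced dose or position errors lower it, which is exactly the sensitivity probed in the study.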

  20. Frequency-offset Cartesian feedback for MRI power amplifier linearization.

    PubMed

    Zanchi, Marta G; Stang, Pascal; Kerr, Adam; Pauly, John M; Scott, Greig C

    2011-02-01

    High-quality magnetic resonance imaging (MRI) requires precise control of the transmit radio-frequency (RF) field. In parallel excitation applications such as transmit SENSE, high RF power linearity is essential to cancel aliased excitations. In widely-employed class AB power amplifiers, gain compression, cross-over distortion, memory effects, and thermal drift all distort the RF field modulation and can degrade image quality. Cartesian feedback (CF) linearization can mitigate these effects in MRI, if the quadrature mismatch and dc offset imperfections inherent in the architecture can be minimized. In this paper, we present a modified Cartesian feedback technique called "frequency-offset Cartesian feedback" (FOCF) that significantly reduces these problems. In the FOCF architecture, the feedback control is performed at a low intermediate frequency rather than dc, so that quadrature ghosts and dc errors are shifted outside the control bandwidth. FOCF linearization is demonstrated with a variety of typical MRI pulses. Simulation of the magnetization obtained with the Bloch equation demonstrates that high-fidelity RF reproduction can be obtained even with inexpensive class AB amplifiers. Finally, the enhanced RF fidelity of FOCF over CF is demonstrated with actual images obtained in a 1.5 T MRI system. PMID:20959264

  1. Frequency-Offset Cartesian Feedback for MRI Power Amplifier Linearization

    PubMed Central

    Zanchi, Marta Gaia; Stang, Pascal; Kerr, Adam; Pauly, John Mark; Scott, Greig Cameron

    2011-01-01

    High-quality magnetic resonance imaging (MRI) requires precise control of the transmit radio-frequency field. In parallel excitation applications such as transmit SENSE, high RF power linearity is essential to cancel aliased excitations. In widely-employed class AB power amplifiers, gain compression, cross-over distortion, memory effects, and thermal drift all distort the RF field modulation and can degrade image quality. Cartesian feedback (CF) linearization can mitigate these effects in MRI, if the quadrature mismatch and DC offset imperfections inherent in the architecture can be minimized. In this paper, we present a modified Cartesian feedback technique called “frequency-offset Cartesian feedback” (FOCF) that significantly reduces these problems. In the FOCF architecture, the feedback control is performed at a low intermediate frequency rather than DC, so that quadrature ghosts and DC errors are shifted outside the control bandwidth. FOCF linearization is demonstrated with a variety of typical MRI pulses. Simulation of the magnetization obtained with the Bloch equation demonstrates that high-fidelity RF reproduction can be obtained even with inexpensive class AB amplifiers. Finally, the enhanced RF fidelity of FOCF over CF is demonstrated with actual images obtained in a 1.5 T MRI system. PMID:20959264

  2. Modified solar flux index for upper atmospheric applications

    NASA Astrophysics Data System (ADS)

    Maruyama, Takashi

    2011-08-01

    The F10.7 solar flux index was modified in order to better describe short-term variations in solar extreme ultraviolet (EUV) irradiance for application in ionosphere and thermosphere studies. Several parameters were computed from the F10.7 time series with the assistance of an artificial neural network (ANN) technique, and the daily F10.7 index value was converted to a new solar flux index, MEI10.7. The ANN consists of an input layer that includes an experimental solar input and the day of the year to take seasonal factors into account, one hidden layer, and a target layer of ionospheric total electron content (TEC). The ANN training and validation data set covered one solar cycle from 1997 to 2008. The parameter set that yielded the smallest root-mean-square errors (RMSEs) between the observed and ANN-predicted TECs was adopted for modifying the solar flux index. The MEI10.7 index was evaluated via the same ANN technique. MEI10.7 yielded a smaller RMSE than the magnesium index (Mg II core-to-wing ratio) and a similar RMSE to the EUV index based on the integrated 26-34 nm emission measured by the Solar and Heliospheric Observatory. An advantage of MEI10.7 is long-term availability since the 1940s, unlike satellite measurements. A long-term trend analysis of the ionospheric critical frequency (foF2) at Kokubunji, Japan, conducted for the period from 1957 to 2010 examined the difference between the ANN-modeled and measured foF2 values. The linear regression error when foF2 was modeled by MEI10.7 was appreciably smaller than when it was modeled by F10.7.

  3. Error analysis for relay type satellite-aided search and rescue systems

    NASA Technical Reports Server (NTRS)

    Marini, J. W.

    1977-01-01

    An analysis was made of the errors in the determination of the position of an emergency transmitter in a satellite aided search and rescue system. The satellite was assumed to be at a height of 820 km in a near circular near polar orbit. Short data spans of four minutes or less were used. The error sources considered were measurement noise, transmitter frequency drift, ionospheric effects and error in the assumed height of the transmitter. The errors were calculated for several different transmitter positions, data rates and data spans. The only transmitter frequency used was 406 MHz, but the results can be scaled to different frequencies. In a typical case, in which four Doppler measurements were taken over a span of two minutes, the position error was about 1.2 km.

  4. Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Niu, Qunjie; Liang, Kun

    2016-09-01

    A Brillouin lidar system using a Fabry-Pérot (F-P) etalon and an intensified charge-coupled device (ICCD) is capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major error sources, namely laser frequency instability, the calibration error of the F-P etalon, and random shot noise, are discussed. Theoretical analysis combined with simulation results showed that the laser and F-P etalon contribute about 4 MHz of error to both the Brillouin shift and linewidth, and that random noise introduces more error into the linewidth than into the frequency shift. A comprehensive comparative analysis of the overall errors under various conditions showed that colder ocean water (10 °C) is more accurately measured with the Brillouin linewidth, and warmer ocean water (30 °C) is better measured with the Brillouin shift.

  5. Investigation of error compensation in CGH-based form testing of aspheres

    NASA Astrophysics Data System (ADS)

    Stuerwald, S.; Brill, N.; Schmitt, R.

    2014-05-01

    Interferometric form testing using computer generated holograms (CGHs) is one of the main full-field measurement techniques, and various modified measurement setups for optical form testing interferometry have been presented. Currently, form deviations in the region of several tens of nanometers typically occur in the widely used CGH-based interferometric form testing. These deviations arise from non-perfect alignment of the CGH relative to the transmission sphere (Fizeau objective) and of the asphere relative to the testing wavefront. Measurement results are therefore user and setup dependent, which leads to unsatisfactory reproducibility of the form errors. Aligning a CGH usually requires an operator to minimize the spatial frequency of the fringe pattern; however, the position of minimum spatial fringe density is usually not unique, so the ideal position often cannot be found with sufficient accuracy. The scientific and technical objectives of this paper therefore comprise the development of a simulation-based approach to explain and quantify the experimental errors due to misalignment of the specimen with respect to the CGH in an optical form testing measurement system. A further step is the programming of an iterative method to realize a virtual optimized realignment of the system on the basis of Zernike polynomial decomposition, which should allow calculation of the form that would be measured under ideal alignment and thus subtraction of the alignment-induced form error. Different analysis approaches are investigated with regard to the final accuracy and reproducibility. To validate the theoretical models, a series of systematic experiments is performed with hexapod positioning systems to allow exact and reproducible positioning of the optical CGH-based setup.

  6. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  7. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  8. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  9. Synthetic aperture interferometry: error analysis

    SciTech Connect

    Biswas, Amiya; Coupland, Jeremy

    2010-07-10

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003), doi:10.1364/AO.42.000701]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008), doi:10.1364/AO.47.001705]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  10. Reward positivity: Reward prediction error or salience prediction error?

    PubMed

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. PMID:27184070

  11. 20 Tips to Help Prevent Medical Errors

    MedlinePlus

    20 Tips to Help Prevent Medical Errors: Patient Fact Sheet. This information is for ... current information. Medical errors can occur anywhere in the health care ...

  12. Analysis of Medication Error Reports

    SciTech Connect

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  13. Ligation errors in DNA computing.

    PubMed

    Aoi, Y; Yoshinobu, T; Tanizawa, K; Kinoshita, K; Iwasaki, H

    1999-10-01

    DNA computing is a novel method of computing proposed by Adleman (1994), in which the data is encoded in the sequences of oligonucleotides. Massively parallel reactions between oligonucleotides are expected to make it possible to solve huge problems. In this study, reliability of the ligation process employed in the DNA computing is tested by estimating the error rate at which wrong oligonucleotides are ligated. Ligation of wrong oligonucleotides would result in a wrong answer in the DNA computing. The dependence of the error rate on the number of mismatches between oligonucleotides and on the combination of bases is investigated. PMID:10636043

  14. Image pre-filtering for measurement error reduction in digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy at high frequencies and therefore raises the systematic error; it also elevates the random error, which increases with the noise power. In order to reduce both the systematic and random errors of the measurements, we apply pre-filtering to the images prior to the correlation so that the high-frequency content is suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. The random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian, and Butterworth) slightly increase the random error, with the Butterworth filter producing the lowest random error among them. By using a Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of random error.
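    As a rough illustration of the binomial pre-filter mentioned above (a minimal sketch, not the authors' implementation), a separable [1, 2, 1]/4 kernel suppresses the high-frequency content that drives the interpolation-induced systematic error:

```python
def binomial_filter_1d(signal):
    """One pass of the [1, 2, 1]/4 binomial low-pass kernel (edges clamped).
    Repeated passes approach a Gaussian response."""
    n = len(signal)
    out = []
    for i in range(n):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, n - 1)]
        out.append(0.25 * left + 0.5 * signal[i] + 0.25 * right)
    return out

def binomial_filter_2d(image, passes=1):
    """Separable 2D filtering of a list-of-lists image: rows, then columns."""
    for _ in range(passes):
        image = [binomial_filter_1d(row) for row in image]
        cols = [binomial_filter_1d(col) for col in zip(*image)]
        image = [list(row) for row in zip(*cols)]
    return image
```

    Because the kernel weights sum to one, uniform regions pass through unchanged while sharp intensity transitions are smoothed, attenuating exactly the high spatial frequencies where interpolation bias is worst.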

  15. Influence of Tooth Spacing Error on Gears With and Without Profile Modifications

    NASA Technical Reports Server (NTRS)

    Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.

    2000-01-01

    A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modification considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving dynamic performance of gear sets with different tooth spacing errors.

  16. Robust, Error-Tolerant Photometric Projector Compensation.

    PubMed

    Grundhöfer, Anselm; Iwai, Daisuke

    2015-12-01

    We propose a novel error tolerant optimization approach to generate a high-quality photometric compensated projection. The application of a non-linear color mapping function does not require radiometric pre-calibration of cameras or projectors. This characteristic improves the compensation quality compared with related linear methods if this approach is used with devices that apply complex color processing, such as single-chip digital light processing projectors. Our approach consists of a sparse sampling of the projector's color gamut and non-linear scattered data interpolation to generate the per-pixel mapping from the projector to camera colors in real time. To avoid out-of-gamut artifacts, the input image's luminance is automatically adjusted locally in an optional offline optimization step that maximizes the achievable contrast while preserving smooth input gradients without significant clipping errors. To minimize the appearance of color artifacts at high-frequency reflectance changes of the surface due to usually unavoidable slight projector vibrations and movement (drift), we show that a drift measurement and analysis step, when combined with per-pixel compensation image optimization, significantly decreases the visibility of such artifacts. PMID:26390454

  17. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
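    A minimal stop-and-wait ARQ loop can illustrate the idea of error detection plus retransmission. This sketch is generic (CRC-32 error detection, an alternating 1-bit sequence number, and a simulated channel callback are all illustrative choices, not details from the survey):

```python
import zlib

def make_frame(seq, payload):
    """Frame = 1-byte sequence number + payload + CRC-32 trailer."""
    body = bytes([seq]) + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def check_frame(frame):
    """Return (seq, payload) if the CRC verifies, else None (error detected)."""
    body, trailer = frame[:-4], frame[-4:]
    if zlib.crc32(body) != int.from_bytes(trailer, "big"):
        return None
    return body[0], body[1:]

def send_stop_and_wait(payloads, channel):
    """Transmit each payload until the (simulated) channel delivers it intact."""
    received = []
    seq = 0
    for p in payloads:
        while True:
            result = check_frame(channel(make_frame(seq, p)))
            if result is None:
                continue          # NAK / detected error -> retransmit
            received.append(result[1])
            seq ^= 1              # alternate 1-bit sequence number (ACK)
            break
    return received
```

    With a strong detection code, corrupted frames are simply discarded and repeated, which is why ARQ achieves virtually error-free delivery at the cost of retransmission delay.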

  18. Assessment of load-frequency-control impacts caused by small wind turbines

    NASA Astrophysics Data System (ADS)

    Curtice, D. H.; Reddoch, T. W.

    A method is presented to analyze the effects that the output of small wind turbines (WTs) may have on the load-frequency-control process. A simulation model of a utility's automatic generation control (AGC) process is used with recorded real-time system load data modified by synthesized data characterizing the aggregate output of small WTs. A series of WT output scenarios is defined for various WT penetrations of the total system load. WT output scenarios, varying in frequency and magnitude, are combined with system load variations to test the effectiveness of present AGC control strategies. The change in system performance from the base case is assessed using area control error (ACE) values, time between zero crossings, inadvertent interchange accumulation, and control pulses sent to regulating units.
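    The ACE metric used in such studies has a standard tie-line bias form; a hedged sketch follows (the textbook formula with hypothetical parameter values, not this study's simulation model):

```python
def area_control_error(tie_actual_mw, tie_sched_mw, freq_actual_hz,
                       freq_sched_hz=60.0, bias_mw_per_01hz=-50.0):
    """Tie-line bias area control error (standard textbook form):

        ACE = (NI_A - NI_S) - 10 * B * (F_A - F_S)

    where NI is net tie-line interchange (MW), F is frequency (Hz), and B
    is the frequency bias in MW per 0.1 Hz (negative by convention).
    A nonzero ACE drives control pulses to the regulating units."""
    return (tie_actual_mw - tie_sched_mw) \
        - 10.0 * bias_mw_per_01hz * (freq_actual_hz - freq_sched_hz)

# Example (hypothetical): exporting 20 MW over schedule while
# frequency sags 0.02 Hz below the 60 Hz schedule
ace = area_control_error(120.0, 100.0, 59.98)
```

    Fluctuating aggregate WT output perturbs both the interchange and frequency terms, which is why its frequency and magnitude matter to the AGC response.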

  19. Instantaneous microwave frequency measurement using four-wave mixing in a chalcogenide chip

    NASA Astrophysics Data System (ADS)

    Pagani, Mattia; Vu, Khu; Choi, Duk-Yong; Madden, Steve J.; Eggleton, Benjamin J.; Marpaung, David

    2016-08-01

    We present the first instantaneous frequency measurement (IFM) system using four-wave mixing (FWM) in a compact photonic chip. We exploit the high nonlinearity of chalcogenide to achieve efficient FWM in a short 23 mm As2S3 waveguide. This reduces the measurement latency by orders of magnitude, compared to fiber-based approaches. We demonstrate the tuning of the system response to maximize measurement bandwidth (40 GHz, limited by the equipment used), or accuracy (740 MHz rms error). Additionally, we modify the previous FWM-based IFM system structure to allow for ultra-fast reconfiguration of the bandwidth and resolution of the measurement. This has the potential to become the first IFM system capable of ultra-fast accurate frequency measurement, with no compromise of bandwidth.

  20. Quantum Metrology Enhanced by Repetitive Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Unden, Thomas; Balasubramanian, Priya; Louzon, Daniel; Vinkler, Yuval; Plenio, Martin B.; Markham, Matthew; Twitchen, Daniel; Stacey, Alastair; Lovchinsky, Igor; Sushkov, Alexander O.; Lukin, Mikhail D.; Retzker, Alex; Naydenov, Boris; McGuinness, Liam P.; Jelezko, Fedor

    2016-06-01

    We experimentally demonstrate the protection of a room-temperature hybrid spin register against environmental decoherence by performing repeated quantum error correction whilst maintaining sensitivity to signal fields. We use a long-lived nuclear spin to correct multiple phase errors on a sensitive electron spin in diamond and realize magnetic field sensing beyond the time scales set by natural decoherence. The universal extension of sensing time, robust to noise at any frequency, demonstrates the definitive advantage entangled multiqubit systems provide for quantum sensing and offers an important complement to quantum control techniques.

  1. Quantum Metrology Enhanced by Repetitive Quantum Error Correction.

    PubMed

    Unden, Thomas; Balasubramanian, Priya; Louzon, Daniel; Vinkler, Yuval; Plenio, Martin B; Markham, Matthew; Twitchen, Daniel; Stacey, Alastair; Lovchinsky, Igor; Sushkov, Alexander O; Lukin, Mikhail D; Retzker, Alex; Naydenov, Boris; McGuinness, Liam P; Jelezko, Fedor

    2016-06-10

    We experimentally demonstrate the protection of a room-temperature hybrid spin register against environmental decoherence by performing repeated quantum error correction whilst maintaining sensitivity to signal fields. We use a long-lived nuclear spin to correct multiple phase errors on a sensitive electron spin in diamond and realize magnetic field sensing beyond the time scales set by natural decoherence. The universal extension of sensing time, robust to noise at any frequency, demonstrates the definitive advantage entangled multiqubit systems provide for quantum sensing and offers an important complement to quantum control techniques. PMID:27341218

  2. Error detection and correction unit with built-in self-test capability for spacecraft applications

    NASA Technical Reports Server (NTRS)

    Timoc, Constantin

    1990-01-01

    The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
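    The modified-Hamming idea behind such an EDAC unit can be illustrated at small scale. The following sketch implements SEC-DED on 4 data bits (Hamming(7,4) plus an overall parity bit); the chip's actual 32-bit code is not described in the abstract, so this is a hedged toy version of the general technique:

```python
def encode(data4):
    """Hamming(7,4) plus an overall parity bit -> 8-bit SEC-DED codeword.
    Positions 1..7 hold the Hamming code (parity bits at 1, 2, 4);
    position 0 holds the overall parity over positions 1..7."""
    code = [0] * 8
    code[3], code[5], code[6], code[7] = data4
    code[1] = code[3] ^ code[5] ^ code[7]
    code[2] = code[3] ^ code[6] ^ code[7]
    code[4] = code[5] ^ code[6] ^ code[7]
    code[0] = code[1] ^ code[2] ^ code[3] ^ code[4] ^ code[5] ^ code[6] ^ code[7]
    return code

def decode(code):
    """Return (data4, status), status in {'ok', 'corrected', 'double'}."""
    syndrome = 0
    for i in range(1, 8):
        if code[i]:
            syndrome ^= i          # XOR of positions of set bits
    overall = 0
    for b in code:
        overall ^= b               # recomputed overall parity
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:             # odd parity flip -> single-bit error
        code = code[:]
        code[syndrome] ^= 1        # syndrome 0 means the parity bit itself
        status = "corrected"
    else:                          # even parity but nonzero syndrome
        return None, "double"      # uncorrectable double-bit error
    return [code[3], code[5], code[6], code[7]], status
```

    A single flipped bit changes the overall parity and is corrected from the syndrome; two flips leave overall parity even, so the error is detected but flagged uncorrectable, matching the single-correct/double-detect requirement.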

  3. He's Frequency Formulation for Nonlinear Oscillators

    ERIC Educational Resources Information Center

    Geng, Lei; Cai, Xu-Chu

    2007-01-01

    Based on an ancient Chinese algorithm, J H He suggested a simple but effective method to find the frequency of a nonlinear oscillator. In this paper, a modified version is suggested to improve the accuracy of the frequency; two examples are given, revealing that the obtained solutions are of remarkable accuracy and are valid for the whole solution…
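    The flavor of such amplitude-frequency formulations can be seen on the Duffing oscillator u'' + u + eps*u^3 = 0. The sketch below uses the standard first-order result omega ~ sqrt(1 + 3*eps*A^2/4) and checks it against a numerical period obtained by Runge-Kutta integration; it illustrates the kind of problem the method targets, not the modified formulation of this paper.

```python
# Amplitude-frequency estimate for the Duffing oscillator vs. a numerical
# reference. The closed form is the standard first-order result.
import math

def duffing_freq_numeric(eps, A, dt=1e-4):
    """Frequency via quarter period: integrate with RK4 from u=A, v=0
    until u first crosses zero (by symmetry, a quarter of the period)."""
    def acc(u):
        return -(u + eps * u**3)
    u, v, t = A, 0.0, 0.0
    while True:
        # classical 4th-order Runge-Kutta step for (u' = v, v' = acc(u))
        k1u, k1v = v, acc(u)
        k2u, k2v = v + 0.5*dt*k1v, acc(u + 0.5*dt*k1u)
        k3u, k3v = v + 0.5*dt*k2v, acc(u + 0.5*dt*k2u)
        k4u, k4v = v + dt*k3v, acc(u + dt*k3u)
        un = u + dt*(k1u + 2*k2u + 2*k3u + k4u)/6
        vn = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        if un <= 0.0:                        # first zero crossing found
            t_cross = t + dt * u / (u - un)  # linear interpolation
            return 2*math.pi / (4*t_cross)
        u, v, t = un, vn, t + dt

eps, A = 0.1, 1.0
w_approx = math.sqrt(1 + 0.75*eps*A**2)   # amplitude-frequency formula
w_num = duffing_freq_numeric(eps, A)      # numerical reference
```

    For eps = 0.1 the closed-form frequency agrees with the numerical period to well under one percent, consistent with the "remarkable accuracy" claimed above.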

  4. Error field penetration and locking to the backward wave

    NASA Astrophysics Data System (ADS)

    Finn, John; Cole, Andrew; Brennan, Dylan

    2015-11-01

    Error field penetration involves driving a stable tearing mode in a rotating toroidal plasma. In this paper it is shown that locking for modes with real frequencies ωr differs from conventional results. The reconnected flux for modes with frequencies +/-ωr in the plasma frame is maximized when the frequency of the stable backward mode (-ωr) in the lab frame is zero, i.e. when v = ωr/k. Notably, the locking torque is exactly zero at v = ωr/k, with a pronounced peak at just higher rotation, leading to a locked state with plasma velocity just above ωr/k. Real frequencies are known to occur due to the Glasser effect for modes in the resistive-inertial (RI) regime. This therefore leads to locking of the plasma velocity to just above v = ωr/k. Also, similar real frequencies can occur in the visco-resistive (VR) regime with pressure, and the locking torque is similar to the RI result. Real frequencies occur due to diamagnetic effects in other tearing mode regimes and also show this effect. Nonlinear effects on the mode amplitude and torque for weakly stable modes or large error fields are discussed. We discuss the possibility of applying external fields of different helicities to drive sheared flows.

  5. Graduate Students' Administration and Scoring Errors on the Woodcock-Johnson III Tests of Cognitive Abilities

    ERIC Educational Resources Information Center

    Ramos, Erica; Alfonso, Vincent C.; Schermerhorn, Susan M.

    2009-01-01

    The interpretation of cognitive test scores often leads to decisions concerning the diagnosis, educational placement, and types of interventions used for children. Therefore, it is important that practitioners administer and score cognitive tests without error. This study assesses the frequency and types of examiner errors that occur during the…

  6. Stitching-error reduction in gratings by shot-shifted electron-beam lithography

    NASA Technical Reports Server (NTRS)

    Dougherty, D. J.; Muller, R. E.; Maker, P. D.; Forouhar, S.

    2001-01-01

    Calculations of the grating spatial-frequency spectrum and the filtering properties of multiple-pass electron-beam writing demonstrate a tradeoff between stitching-error suppression and minimum pitch separation. High-resolution measurements of optical-diffraction patterns show a 25-dB reduction in stitching-error side modes.

  7. Administration and Scoring Errors of Graduate Students Learning the WISC-IV: Issues and Controversies

    ERIC Educational Resources Information Center

    Mrazik, Martin; Janzen, Troy M.; Dombrowski, Stefan C.; Barford, Sean W.; Krawchuk, Lindsey L.

    2012-01-01

    A total of 19 graduate students enrolled in a graduate course conducted 6 consecutive administrations of the Wechsler Intelligence Scale for Children, 4th edition (WISC-IV, Canadian version). Test protocols were examined to obtain data describing the frequency of examiner errors, including administration and scoring errors. Results identified 511…

  8. 78 FR 45479 - Frequency Response and Frequency Bias Setting Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-29

    ... frequency response, and encourages coordinated automatic generation control (AGC) operation.\\6\\ These... in MW/0.1 Hz, included in a Balancing Authority's Area Control Error equation to account for the Balancing Authority's inverse Frequency Response contribution to the Interconnection, and...
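    The frequency-bias term described in this rule enters the standard Area Control Error equation, ACE = (NIA - NIS) - 10B(FA - FS), where B is the bias in MW/0.1 Hz (negative by convention) and the factor of 10 converts it to MW/Hz. A minimal sketch with illustrative numbers, not values taken from the standard:

```python
def area_control_error(tie_actual_mw, tie_sched_mw, freq_actual_hz,
                       freq_sched_hz, bias_mw_per_tenth_hz):
    """Standard ACE equation: ACE = (NIa - NIs) - 10*B*(Fa - Fs).
    B is in MW/0.1 Hz and is negative by convention; the factor of 10
    converts it to MW/Hz."""
    delta_tie = tie_actual_mw - tie_sched_mw
    delta_f = freq_actual_hz - freq_sched_hz
    return delta_tie - 10.0 * bias_mw_per_tenth_hz * delta_f

# Illustrative values: 30 MW of unscheduled export, frequency 0.02 Hz low,
# bias B = -50 MW/0.1 Hz.  delta_tie = 30 MW; 10*B*delta_f = 10 MW.
ace = area_control_error(130.0, 100.0, 59.98, 60.0, -50.0)
```

    A positive ACE tells the Balancing Authority's AGC to reduce generation; zero tie-line and frequency deviations give ACE = 0.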

  9. Having Fun with Error Analysis

    ERIC Educational Resources Information Center

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  10. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  11. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  12. RM2: rms error comparisons

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1976-01-01

    The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.

  13. What Is a Reading Error?

    ERIC Educational Resources Information Center

    Labov, William; Baker, Bettina

    2010-01-01

    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  14. Amplify Errors to Minimize Them

    ERIC Educational Resources Information Center

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  15. Typical errors of ESP users

    NASA Astrophysics Data System (ADS)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users that can be considered typical. They arise from misuse of the resources of English grammar and tend to persist. Their origin and places of occurrence are also discussed.

  16. Cascade Error Projection Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning frame work. This frame work can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  17. Input/output error analyzer

    NASA Technical Reports Server (NTRS)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. Independent assembly-language utility program is designed to operate under level 27 or 31 of EXEC 8 Operating System. It scans user-selected portions of system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  18. A brief history of error.

    PubMed

    Murray, Andrew W

    2011-10-01

    The spindle checkpoint monitors chromosome alignment on the mitotic and meiotic spindle. When the checkpoint detects errors, it arrests progress of the cell cycle while it attempts to correct the mistakes. This perspective will present a brief history summarizing what we know about the checkpoint, and a list of questions we must answer before we understand it. PMID:21968991

  19. Measurement error in geometric morphometrics.

    PubMed

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics - a set of methods for the statistical analysis of shape once hailed as a revolutionary advancement in the analysis of morphology - is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically-meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset. PMID:27038025

  20. Multiscale Systematic Error Correction via Wavelet-Based Band Splitting and Bayesian Error Modeling in Kepler Light Curves

    NASA Astrophysics Data System (ADS)

    Stumpe, Martin C.; Smith, J. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

    2012-05-01

    Kepler photometric data contain significant systematic and stochastic errors as they come from the Kepler Spacecraft. The main cause for the systematic errors are changes in the photometer focus due to thermal changes in the instrument, and also residual spacecraft pointing errors. It is the main purpose of the Presearch-Data-Conditioning (PDC) module of the Kepler Science processing pipeline to remove these systematic errors from the light curves. While PDC has recently seen a dramatic performance improvement by means of a Bayesian approach to systematic error correction and improved discontinuity correction, there is still room for improvement. One problem of the current (Kepler 8.1) implementation of PDC is that injection of high frequency noise can be observed in some light curves. Although this high frequency noise does not negatively impact the general cotrending, an increased noise level can make detection of planet transits or other astrophysical signals more difficult. The origin of this noise-injection is that high frequency components of light curves sometimes get included into detrending basis vectors characterizing long term trends. Similarly, small scale features like edges can sometimes get included in basis vectors which otherwise describe low frequency trends. As a side effect to removing the trends, detrending with these basis vectors can then also mistakenly introduce these small scale features into the light curves. A solution to this problem is to perform a separation of scales, such that small scale features and large scale features are described by different basis vectors. We present our new multiscale approach that employs wavelet-based band splitting to decompose small scale from large scale features in the light curves. The PDC Bayesian detrending can then be performed on each band individually to correct small and large scale systematics independently. Funding for the Kepler Mission is provided by the NASA Science Mission Directorate.

  1. Frequency-domain Green's functions for radar waves in heterogeneous 2.5D media

    USGS Publications Warehouse

    Ellefsen, K.J.; Croize, D.; Mazzella, A.T.; McKenna, J.R.

    2009-01-01

    Green's functions for radar waves propagating in heterogeneous 2.5D media can be calculated in the frequency domain using a hybrid method. The model is defined in the Cartesian coordinate system, and its electromagnetic properties may vary in the x- and z-directions, but not in the y-direction. Wave propagation in the x- and z-directions is simulated with the finite-difference method, and wave propagation in the y-direction is simulated with an analytic function. The absorbing boundaries on the finite-difference grid are perfectly matched layers that have been modified to make them compatible with the hybrid method. The accuracy of these numerical Green's functions is assessed by comparing them with independently calculated Green's functions. For a homogeneous model, the magnitude errors range from -4.16% through 0.44%, and the phase errors range from -0.06% through 4.86%. For a layered model, the magnitude errors range from -2.60% through 2.06%, and the phase errors range from -0.49% through 2.73%. These numerical Green's functions can be used for forward modeling and full waveform inversion. © 2009 Society of Exploration Geophysicists. All rights reserved.

  2. Flood frequency in Alaska

    USGS Publications Warehouse

    Childers, J.M.

    1970-01-01

    Records of peak discharge at 183 sites were used to study flood frequency in Alaska. The vast size of Alaska, its great ranges of physiography, and the lack of data for much of the State precluded a comprehensive analysis of all flood determinants. Peak stream discharges, where gaging-station records were available, were analyzed for 2-year, 5-year, 10-year, 25-year, and 50-year average-recurrence intervals. A regional analysis of the flood characteristics by multiple-regression methods gave a set of equations that can be used to estimate floods of selected recurrence intervals up to 50 years for any site on any stream in Alaska. The equations relate floods to drainage-basin characteristics. The study indicates that in Alaska the 50-year flood can be estimated from 10-year gaging-station records with a standard error of 22 percent, whereas the 50-year flood can be estimated from the regression equation with a standard error of 53 percent. Also, maximum known floods at more than 500 gaging stations and miscellaneous sites in Alaska were related to drainage-area size. An envelope curve of 500 cubic feet per second per square mile covered all but 2 floods in the State.
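    The regional-regression approach can be sketched as ordinary least squares in log space: the T-year flood is regressed on drainage-basin characteristics, then evaluated at an ungaged site. The data and the power-law coefficients below are synthetic, not the Alaska equations.

```python
# Sketch of a regional flood-frequency regression: fit log Q50 against
# log drainage area, then predict the 50-year flood at an ungaged site.
import math

def ols(x, y):
    """Ordinary least squares slope/intercept for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Synthetic "gaging stations": Q50 = 500 * A**0.8 (cfs), A = drainage area (mi^2).
areas = [10.0, 50.0, 120.0, 400.0, 1500.0]
q50 = [500.0 * A ** 0.8 for A in areas]

# Fit log Q50 = log c + b log A, the usual form of such regional equations.
log_c, b = ols([math.log(A) for A in areas], [math.log(q) for q in q50])
c = math.exp(log_c)

# Estimate the 50-year flood at an ungaged site with A = 250 square miles.
q50_est = c * 250.0 ** b
```

    With noise-free synthetic data the fit recovers the generating coefficients exactly; with real station records the scatter about the line is what produces the 53 percent standard error quoted above.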

  3. Toward a cognitive taxonomy of medical errors.

    PubMed Central

    Zhang, Jiajie; Patel, Vimla L.; Johnson, Todd R.; Shortliffe, Edward H.

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions. PMID:12463962

  4. A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications

    SciTech Connect

    Bronevetsky, G; de Supinski, B; Schulz, M

    2009-02-13

    Understanding the soft error vulnerability of supercomputer applications is critical as these systems use ever larger numbers of devices that have decreasing feature sizes and, thus, an increasing frequency of soft errors. As many large-scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.
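    The error-injection idea can be illustrated by flipping one bit of an input operand and measuring the perturbation of a linear-algebra kernel's output. The sketch below uses a plain Python matrix multiply standing in for a BLAS routine; which bit is flipped determines whether the injected fault is benign or catastrophic.

```python
# Single-bit fault injection into a matrix product: compare the effect of
# flipping a low mantissa bit vs. an exponent bit of one input element.
import struct

def flip_bit(x, bit):
    """Flip one bit of a float64's IEEE-754 representation."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    (y,) = struct.unpack('<d', struct.pack('<Q', bits ^ (1 << bit)))
    return y

def matmul(a, b):
    """Plain triple-loop matrix product (stand-in for a BLAS GEMM)."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rel_error(c, c_ref):
    """Relative Frobenius-norm error of c against c_ref."""
    num = sum((x - y) ** 2 for r, s in zip(c, c_ref) for x, y in zip(r, s))
    den = sum(y ** 2 for s in c_ref for y in s)
    return (num / den) ** 0.5

a = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]]
b = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0]]
c_ref = matmul(a, b)

# Inject a fault into a[0][0]: bit 0 is the lowest mantissa bit (tiny
# perturbation); bit 52 is the lowest exponent bit (turns 1.0 into 0.5).
errs = {}
for bit in (0, 52):
    a_faulty = [row[:] for row in a]
    a_faulty[0][0] = flip_bit(a[0][0], bit)
    errs[bit] = rel_error(matmul(a_faulty, b), c_ref)
```

    Sweeping the flipped bit position over many inputs is one way to build the kind of error profile the paper describes.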

  5. Ac-dc converter firing error detection

    SciTech Connect

    Gould, O.L.

    1996-07-15

    Each of the twelve Booster Main Magnet Power Supply modules consist of two three-phase, full-wave rectifier bridges in series to provide a 560 VDC maximum output. The harmonic contents of the twelve-pulse ac-dc converter output are multiples of the 60 Hz ac power input, with a predominant 720 Hz signal greater than 14 dB in magnitude above the closest harmonic components at maximum output. The 720 Hz harmonic is typically greater than 20 dB below the 500 VDC output signal under normal operation. Extracting specific harmonics from the rectifier output signal of a 6, 12, or 24 pulse ac-dc converter allows the detection of SCR firing angle errors or complete misfires. A bandpass filter provides the input signal to a frequency-to-voltage converter. Comparing the output of the frequency-to-voltage converter to a reference voltage level provides an indication of the magnitude of the harmonics in the ac-dc converter output signal.

  6. Space Saving Statistics: An Introduction to Constant Error, Variable Error, and Absolute Error.

    ERIC Educational Resources Information Center

    Guth, David

    1990-01-01

    Article discusses research on orientation and mobility (O&M) for individuals with visual impairments, examining constant, variable, and absolute error (descriptive statistics that quantify fundamentally different characteristics of distributions of spatially directed behavior). It illustrates the statistics with examples, noting their application…
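    The three statistics have compact definitions: constant error is the signed mean of the deviations (systematic bias), variable error is the standard deviation about that mean (inconsistency), and absolute error is the mean unsigned magnitude (overall inaccuracy). A minimal sketch with made-up deviations:

```python
# Constant, variable, and absolute error for a set of signed deviations
# of spatially directed responses from a target.
import math

def constant_error(errors):
    """Signed mean: systematic bias in one direction."""
    return sum(errors) / len(errors)

def variable_error(errors):
    """Population standard deviation about the mean: response inconsistency.
    (Some authors divide by n - 1 instead of n.)"""
    ce = constant_error(errors)
    return math.sqrt(sum((e - ce) ** 2 for e in errors) / len(errors))

def absolute_error(errors):
    """Mean unsigned magnitude: inaccuracy regardless of direction."""
    return sum(abs(e) for e in errors) / len(errors)

# Signed endpoint deviations (e.g. in cm) from a target over five trials.
errors = [2.0, -1.0, 3.0, 0.0, -2.0]
```

    Note that the three measures capture different things: a walker could have zero constant error (no bias) yet large variable and absolute error (wide scatter).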

  7. High-precision coseismic displacement estimation with a single-frequency GPS receiver

    NASA Astrophysics Data System (ADS)

    Guo, Bofeng; Zhang, Xiaohong; Ren, Xiaodong; Li, Xingxing

    2015-07-01

    To improve the performance of Global Positioning System (GPS) in the earthquake/tsunami early warning and rapid response applications, minimizing the blind zone and increasing the stability and accuracy of both the rapid source and rupture inversion, the density of existing GPS networks must be increased in the areas at risk. For economic reasons, low-cost single-frequency receivers would be preferable to make the sparse dual-frequency GPS networks denser. When using single-frequency GPS receivers, the main problem that must be solved is the ionospheric delay, which is a critical factor when determining accurate coseismic displacements. In this study, we introduce a modified Satellite-specific Epoch-differenced Ionospheric Delay (MSEID) model to compensate for the effect of ionospheric error on single-frequency GPS receivers. In the MSEID model, the time-differenced ionospheric delays observed from a regional dual-frequency GPS network to a common satellite are fitted to a plane rather than part of a sphere, and the parameters of this plane are determined by using the coordinates of the stations. When the parameters are known, time-differenced ionospheric delays for a single-frequency GPS receiver could be derived from the observations of those dual-frequency receivers. Using these ionospheric delay corrections, coseismic displacements of a single-frequency GPS receiver can be accurately calculated based on time-differenced carrier-phase measurements in real time. The performance of the proposed approach is validated using 5 Hz GPS data collected during the 2012 Nicoya Peninsula Earthquake (Mw 7.6, 2012 September 5) in Costa Rica. This shows that the proposed approach improves the accuracy of the displacement of a single-frequency GPS station, and coseismic displacements with an accuracy of a few centimetres are achieved over a 10-min interval.
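    The core step of such a model, fitting the epoch-differenced ionospheric delays observed at surrounding dual-frequency stations to a plane in station coordinates and then evaluating that plane at the single-frequency site, can be sketched as a small least-squares fit. The station coordinates and delay values below are invented for illustration.

```python
# Fit epoch-differenced ionospheric delays to a plane d = a + b*x + c*y
# over a regional network, then predict the delay at another station.

def fit_plane(points, delays):
    """Least-squares plane fit via the 3x3 normal equations."""
    # Accumulate A^T A and A^T d for design-matrix rows [1, x, y].
    ata = [[0.0] * 3 for _ in range(3)]
    atd = [0.0] * 3
    for (x, y), d in zip(points, delays):
        row = (1.0, x, y)
        for i in range(3):
            atd[i] += row[i] * d
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination with partial pivoting on the 3x3 system.
    m = [ata[i] + [atd[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [m[r][k] - f * m[col][k] for k in range(4)]
    return [m[i][3] / m[i][i] for i in range(3)]

def true_delay(x, y):                      # synthetic planar delay field (m)
    return 0.02 + 0.001 * x - 0.0005 * y

# Four dual-frequency stations (coordinates in km) observing a common satellite.
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
delays = [true_delay(x, y) for x, y in stations]

a, b, c = fit_plane(stations, delays)
# Predicted correction at a single-frequency station inside the network.
pred = a + b * 4.0 + c * 6.0
```

    With four or more well-distributed dual-frequency stations the plane is overdetermined, so noisy delays are smoothed rather than interpolated exactly.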

  8. Analysis of double-probe characteristics in low-frequency gas discharges and its improvement

    SciTech Connect

    Liu, DongLin; Li, XiaoPing; Xie, Kai; Liu, ZhiWei; Shao, MingXu

    2015-01-15

    The double probe has been used successfully in radio-frequency discharges. In low-frequency discharges, however, the double-probe I-V curve is so severely distorted by the strong plasma potential fluctuations that it can lead to large errors in the estimated plasma parameters. To suppress the distortion, we investigate the double-probe characteristics in low-frequency gas discharges based on an equivalent circuit model, taking both the plasma sheath and the probe circuit into account. We find two primary sources of interference behind the I-V curve distortion: the voltage fluctuation between the two probe tips caused by the filter difference voltage, and the current peak at the negative edge of the plasma potential. Consequently, we propose a modified passive filter to reduce both types of interference simultaneously. Experiments are conducted in a glow-discharge plasma (f = 30 kHz) to test the performance of the improved double probe. The results show that the electron density error is reduced from more than 100% to less than 10%. The proposed method is also suitable in cases where intensive potential fluctuations exist.

  9. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
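    The tradeoff can be demonstrated on y' = y, y(0) = 1, integrated to t = 1: discretization error shrinks as the stepsize decreases, while a reduced-precision run (Python's decimal at 4 significant digits, standing in for machine rounding) accumulates rounding error over the many small steps. The setup below is an illustrative sketch.

```python
# Discretization vs. rounding error in Euler's method for y' = y, y(0) = 1.
import math
from decimal import Decimal, getcontext

def euler(h_inv):
    """Euler's method to t = 1 in full double precision; h = 1/h_inv."""
    y, h = 1.0, 1.0 / h_inv
    for _ in range(h_inv):
        y += h * y
    return y

def euler_lowprec(h_inv, digits=4):
    """Same scheme, but every arithmetic result is rounded to `digits`
    significant digits (mutates the global decimal context), mimicking
    rounding-error accumulation at small step sizes."""
    getcontext().prec = digits
    y = Decimal(1)
    h = Decimal(1) / Decimal(h_inv)
    for _ in range(h_inv):
        y = y + h * y          # each operation rounded by the context
    return float(y)

exact = math.e
# Pure discretization error: decreases as h decreases.
disc_errors = {n: abs(euler(n) - exact) for n in (10, 100, 1000)}
# Low-precision run: rounding error now contributes at every one of the
# 1000 steps, so the total error no longer tracks the discretization error.
total_error_lowprec = abs(euler_lowprec(1000) - exact)
```

    In full precision, halving h roughly halves the error; in the 4-digit run, further stepsize reductions stop paying off because each extra step contributes another rounding error.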

  10. Medical Errors: Tips to Help Prevent Them

    MedlinePlus

    Medical errors are one of the nation's ... single most important way you can help to prevent errors is to be an active member of ...

  11. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  12. Modified seasonal factors in exponential smoothing

    SciTech Connect

    Armstrong, J.S. . Wharton School of Finance and Commerce); Hwang, Ho-Ling ); Bandy, J. )

    1990-09-01

    Current practice uses statistical tests to determine whether seasonal factors should be applied in a given forecasting situation. Research suggests that an optimal policy might lie somewhere between using full seasonal factors and using no seasonal factors on series. This research proposes and tests use of a modified seasonality factor. Modified seasonal factors reduce the emphasis on the seasonal adjustments when forecasts are made. The adjustments account for errors in the estimation of the factors and for possible changes in the factors over the forecast horizon. An analysis of data from US Navy personnel inventories was conducted to test the use of a modified seasonality factor. Modified seasonal factors led to improved accuracy for predictions of inventories by paygrade using quarterly data from the Navy Personnel Research and Development Center (NPRDC). Under certain selections of factors, the mean absolute percent error (MAPE) was reduced by 4.4%. No gain was obtained, however, for the inventories by length of service. It is expected, but not shown here, that the modified seasonal factors will only be of value for series where the estimated seasonal factors show a substantial variation across the year. 3 refs., 6 tabs.
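    The modified-seasonal-factor idea can be sketched as shrinking each estimated factor toward 1 before applying it. The quarterly series below is synthetic and deliberately constructed so that the estimated seasonality overstates the true seasonality (as noisy estimates tend to do); it is not the Navy data, and the damping weight is illustrative.

```python
# Damped ("modified") seasonal factors in a simple seasonal forecast,
# compared by mean absolute percent error (MAPE).

def seasonal_factors(history, period=4):
    """Ratio-to-overall-mean seasonal factors from full cycles of data."""
    overall = sum(history) / len(history)
    return [(sum(history[q::period]) / len(history[q::period])) / overall
            for q in range(period)]

def damp(factors, weight):
    """Modified seasonal factors: shrink each factor toward 1."""
    return [1.0 + weight * (f - 1.0) for f in factors]

def mape(actual, forecast):
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) \
           / len(actual)

# Two years of quarterly data whose apparent seasonality (about +/-5%)
# overstates the true seasonality of the next year (about +/-2%).
history = [105.0, 95.0, 106.0, 94.0, 105.0, 95.0, 106.0, 94.0]
level = sum(history) / len(history)
actual_next_year = [101.0, 99.0, 102.0, 98.0]

factors = seasonal_factors(history)
errs = {}
for w in (1.0, 0.5, 0.0):      # full, damped, and no seasonal adjustment
    forecast = [level * f for f in damp(factors, w)]
    errs[w] = mape(actual_next_year, forecast)
```

    Here the damped factors (w = 0.5) beat both the full factors and no seasonal adjustment, which is the intermediate-policy effect the abstract describes.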

  13. Report on errors in pretransfusion testing from a tertiary care center: A step toward transfusion safety

    PubMed Central

    Sidhu, Meena; Meenia, Renu; Akhter, Naveen; Sawhney, Vijay; Irm, Yasmeen

    2016-01-01

    Introduction: Errors in the process of pretransfusion testing for blood transfusion can occur at any stage from collection of the sample to administration of the blood component. The present study was conducted to analyze the errors that threaten patients' transfusion safety and the actual harm/serious adverse events that occurred due to these errors. Materials and Methods: The prospective study was conducted in the Department of Transfusion Medicine, Shri Maharaja Gulab Singh Hospital, Government Medical College, Jammu, India from January 2014 to December 2014, a period of 1 year. Errors were defined as any deviation from established policies and standard operating procedures. A near-miss event was defined as an error that did not reach the patient. Location and time of occurrence of the events/errors were also noted. Results: A total of 32,672 requisitions for the transfusion of blood and blood components were received for typing and cross-matching. Of these, 26,683 products were issued to the various clinical departments. A total of 2,229 errors were detected over the year. Near-miss events constituted 53% of the errors, and actual harmful events due to errors occurred in 0.26% of the patients. The most frequent errors in clinical services were sample labeling errors (2.4% of all requisitions received), inappropriate requests for blood components (2%), and information on requisition forms not matching that on the sample (1.5%). In transfusion services, the most common event was accepting a sample in error, with a frequency of 0.5% of all requisitions. ABO-incompatible hemolytic reactions were the most frequent harmful event, with a frequency of 2.2/10,000 transfusions. Conclusion: Sample labeling, inappropriate requests, and samples received in error were the most frequent high-risk errors. PMID:27011670

  14. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect

    Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  15. Optimization design and error analysis of photoelectric autocollimator

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Bixi; Hu, Mingjun; Dong, Mingli

    2012-11-01

    A photoelectric autocollimator employing an area charge-coupled device (CCD) as its target receiver, specially designed for numerical stage calibration, is optimized, and the various error factors are analyzed. Using the ZEMAX software, the image quality is optimized to ensure that the spherical and coma aberrations of the collimating system are less than 0.27 mm and 0.035 mm respectively; the root mean square (RMS) radius is close to 6.45 microns, which matches the resolution of the CCD, and the modulation transfer function (MTF) is greater than 0.3 over the full field of view and 0.5 in the center field at the corresponding frequency. The errors originate mainly from fabrication and alignment, each of which is about 0.4". The error synthesis shows that the instrument can meet the demands of the design accuracy, which is also consistent with the experiment.

  16. Entropic error-disturbance relations

    NASA Astrophysics Data System (ADS)

    Coles, Patrick; Furrer, Fabian

    2014-03-01

    We derive an entropic error-disturbance relation for a sequential measurement scenario as originally considered by Heisenberg, and we discuss how our relation could be tested using existing experimental setups. Our relation is valid for discrete observables, such as spin, as well as continuous observables, such as position and momentum. The novel aspect of our relation compared to earlier versions is its clear operational interpretation and the quantification of error and disturbance using entropic quantities. This directly relates the measurement uncertainty, a fundamental property of quantum mechanics, to information theoretical limitations and offers potential applications in for instance quantum cryptography. PC is funded by National Research Foundation Singapore and Ministry of Education Tier 3 Grant ``Random numbers from quantum processes'' (MOE2012-T3-1-009). FF is funded by Japan Society for the Promotion of Science, KAKENHI grant No. 24-02793.

  17. Robot learning and error correction

    NASA Technical Reports Server (NTRS)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot into a pre-existing structure when it detects either accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process, and learning may be applied to avoiding the errors.

  18. Negligence, genuine error, and litigation.

    PubMed

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  19. Interference signal frequency tracking for extracting phase in frequency scanning interferometry using an extended Kalman filter.

    PubMed

    Liu, Zhe; Liu, Zhigang; Deng, Zhongwen; Tao, Long

    2016-04-10

    Optical frequency scanning nonlinearity seriously affects interference signal phase extraction accuracy in frequency-scanning interferometry systems using external cavity diode lasers. In this paper, an interference signal frequency tracking method using an extended Kalman filter is proposed. The interferometric phase is obtained by integrating the estimated instantaneous frequency over time. The method is independent of the laser's optical frequency scanning nonlinearity. The method is validated through simulations and experiments. The experimental results demonstrate that the relative phase extraction error in the fractional part is <1.5% with the proposed method and the standard deviation of absolute distance measurement is <2.4  μm. PMID:27139864
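As an illustration of the idea, a minimal EKF frequency tracker for a noisy sinusoid might look like the following sketch (the state model, tuning constants, and the name `ekf_frequency_track` are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def ekf_frequency_track(y, dt, f0=1.0, q_phi=1e-6, q_f=1e-2, r=0.1, amp=1.0):
    """Track the instantaneous frequency of a noisy sinusoid y[k] ~ amp*cos(phi_k)
    with an extended Kalman filter over the state x = [phase, angular frequency].
    Illustrative sketch only; the paper's state model and tuning are not given here."""
    x = np.array([0.0, 2 * np.pi * f0])       # initial phase and angular frequency
    P = np.diag([1.0, 10.0])                  # initial state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-frequency phase evolution
    Q = np.diag([q_phi, q_f])                 # process-noise covariance
    freqs = []
    for yk in y:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update: linearize h(x) = amp*cos(phase) around the prediction
        H = np.array([[-amp * np.sin(x[0]), 0.0]])
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (yk - amp * np.cos(x[0]))).ravel()
        P = (np.eye(2) - K @ H) @ P
        freqs.append(x[1] / (2 * np.pi))      # instantaneous frequency estimate (Hz)
    return np.array(freqs)
```

Integrating the estimated instantaneous frequency over time then yields the interferometric phase, which is the step the abstract describes as independent of the scan nonlinearity.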

  20. Robust Blind Frequency and Transition Time Estimation for Frequency Hopping Systems

    NASA Astrophysics Data System (ADS)

    Fu, Kuo-Ching; Chen, Yung-Fang

    2010-12-01

In frequency hopping spread spectrum (FHSS) systems, two major problems are timing synchronization and frequency estimation. A blind estimation scheme is presented for estimating frequency and transition time without using reference signals. The scheme is robust in the sense that it avoids the unbalanced sampling block problem that occurs in existing maximum likelihood-based schemes and causes large errors in one of the frequency estimates. The proposed scheme has a lower computational cost than the maximum likelihood-based greedy search method. The estimated parameters are also used for subsequent time and frequency tracking. The simulation results demonstrate the efficacy of the proposed approach.
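A toy sketch of the estimation problem (not the authors' blind estimator) shows how a sliding-window spectral peak exposes both the hop frequencies and the transition time; `estimate_hop`, the window length, and the FFT-peak rule are all illustrative choices, with resolution limited to one window and one FFT bin:

```python
import numpy as np

def estimate_hop(x, fs, win=64):
    """Dominant frequency per window and the first transition time.
    A toy illustration of blind hop detection, not the paper's estimator."""
    peaks = []
    for i in range(0, len(x) - win + 1, win):
        X = np.fft.rfft(x[i:i + win] * np.hanning(win))
        peaks.append(np.argmax(np.abs(X)) * fs / win)        # peak bin -> Hz
    peaks = np.array(peaks)
    change = np.flatnonzero(peaks != peaks[0])
    t_hop = change[0] * win / fs if change.size else None    # first changed window
    return peaks, t_hop
```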

  1. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

Billings, C. E.; Lauber, J. K.; Cooper, G. E.

    1974-01-01

    This report is a brief description of research being undertaken by the National Aeronautics and Space Administration. The project is designed to seek out factors in the aviation system which contribute to human error, and to search for ways of minimizing the potential threat posed by these factors. The philosophy and assumptions underlying the study are discussed, together with an outline of the research plan.

  2. Error analysis of the articulated flexible arm CMM based on geometric method

    NASA Astrophysics Data System (ADS)

    Wang, Xueying; Liu, Shugui; Zhang, Guoxiong; Wang, Bin

    2006-11-01

In order to overcome the disadvantages of the traditional CMM (Coordinate Measuring Machine), a new type of CMM with rotational joints and flexible arms, named the articulated arm flexible CMM, has been developed, in which linear measurements are replaced by angular ones. First, a quasi-spherical coordinate system is put forward and the ideal mathematical model of the articulated arm flexible CMM is established. On the basis of a full analysis of the factors affecting measurement accuracy, the ideal mathematical model is modified into an error model according to the structural parameters and geometric errors. A geometric method is proposed to verify the feasibility of the error model, and the results convincingly show its validity. Position errors caused by different types of error sources are analyzed, establishing a theoretical basis for introducing error compensation and improving the accuracy of the articulated arm flexible CMM.

  3. Clinical review: Medication errors in critical care

    PubMed Central

    Moyen, Eric; Camiré, Eric; Stelfox, Henry Thomas

    2008-01-01

    Medication errors in critical care are frequent, serious, and predictable. Critically ill patients are prescribed twice as many medications as patients outside of the intensive care unit (ICU) and nearly all will suffer a potentially life-threatening error at some point during their stay. The aim of this article is to provide a basic review of medication errors in the ICU, identify risk factors for medication errors, and suggest strategies to prevent errors and manage their consequences. PMID:18373883

  4. Error Location in Structural Dynamic Model of a Rocket Structure

    NASA Astrophysics Data System (ADS)

    Sundararajan, T.; Sam, C.

    2012-06-01

Structural dynamic characteristics of aerospace structures are essential for obtaining the structural responses to dynamic loads during a mission. The structural dynamic parameters of aerospace structures are the frequencies, the associated mode shapes, and damping. Usually, finite element (FE) models of aerospace structures are generated to estimate the frequencies and the associated mode shapes. These FE models are validated by modal survey/ground resonance tests to ensure their completeness and correctness. The modeling deficiencies, if any, in these FE models have to be corrected. This paper describes a method for locating FE modeling errors using the residual force method.
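The residual force idea can be illustrated on a small lumped-parameter model (an assumed example, not the paper's rocket structure): for a measured eigenpair (λ, φ), the residual R = (K_FE − λM)φ vanishes except at the degrees of freedom touched by a modeling error, because (K_true − λM)φ = 0.

```python
import numpy as np

def spring_chain(k):
    """Stiffness matrix of a fixed-free chain: spring i connects mass i to
    mass i-1 (spring 0 to ground); unit masses are assumed."""
    n = len(k)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k[i]
        if i + 1 < n:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return K

K_fe = spring_chain([100.0, 100.0, 100.0, 100.0])    # nominal FE model
K_true = spring_chain([100.0, 100.0, 80.0, 100.0])   # real structure: spring 2 softer
M = np.eye(4)                                        # unit masses
lam, vecs = np.linalg.eigh(K_true)                   # "measured" modal data
R = (K_fe - lam[0] * M) @ vecs[:, 0]                 # residual force vector
# |R| is non-negligible only at DOFs 1 and 2, which the modified spring connects
```

The large entries of R point at the rows of the stiffness matrix that the FE model gets wrong, which is exactly the localization the abstract describes.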

  5. Frequency Synthesizer For Tracking Filter

    NASA Technical Reports Server (NTRS)

    Randall, Richard L.

    1990-01-01

Digital frequency-synthesizing subsystem generates trains of pulses, free of jitter, for use as frequency-control signals in tracking filters. Part of assembly of electronic equipment used to measure vibrations in bearings in rotating machinery. Designed to meet requirements for tracking narrow-band cage-rotation and ball-pass components of vibrations, as discussed in "Frequency-Tracking Error Detector" (MFS-29538) and "Ball-Pass Cage-Modulation Detector" (MFS-29539). Synthesizer includes preset counter, output of which controls signal for ball-pass filter. Input to this preset counter is updated every 2 microseconds: synthesizer responds almost immediately, effectively eliminating relatively long response time (lock-in time) and phase jitter.

  6. Human error in hospitals and industrial accidents: current concepts.

    PubMed

    Spencer, F C

    2000-10-01

Most data concerning errors and accidents are from industrial accidents and airline injuries. General Electric, Alcoa, and Motorola, among others, all have reported complex programs that resulted in a marked reduction in frequency of worker injuries. In the field of medicine, however, with the outstanding exception of anesthesiology, there is a paucity of information, most reports referring to the 1984 Harvard-New York State Study, more than 16 years ago. This scarcity of information indicates the complexity of the problem. It seems very unlikely that simple exhortation or additional regulations will help because the problem lies principally in the multiple human-machine interfaces that constitute modern medical care. The absence of success stories also indicates that the best methods have to be learned by experience. A liaison with industry should be helpful, although the varieties of human illness are far different from a standardized manufacturing process. Concurrent with the studies of industrial and nuclear accidents, cognitive psychologists have intensively studied how the brain stores and retrieves information. Several concepts have emerged. First, errors are not character defects to be treated by the classic approach of discipline and education, but are byproducts of normal thinking that occur frequently. Second, major accidents are rarely caused by a single error; instead, they are often a combination of chronic system errors, termed latent errors. Identifying and correcting these latent errors should be the principal focus for corrective planning rather than searching for an individual culprit. This nonpunitive concept of errors is a key basis for an effective reporting system, brilliantly demonstrated in aviation with the ASRS system developed more than 25 years ago. The ASRS currently receives more than 30,000 reports annually and is credited with the remarkable increase in safety of airplane travel.
Adverse drug events constitute about 25% of hospital

  7. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but the significance of certain error-pattern distributions predicted by the model for error correction is noted.

  8. JPEG2000-coded image error concealment exploiting convex sets projections.

    PubMed

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the signal's spatial correlation, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering brought some undesired side effects that negatively compensated the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach. PMID:15825483
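A minimal sketch of the alternating procedure, assuming a one-level Haar transform and a fully damaged HH subband (the paper's actual wavelet, damaged bit-planes, and adaptive edge-map filtering are not reproduced here, and a plain box blur only approximates a smoothness projection):

```python
import numpy as np

def haar2(a):
    """One-level 2-D orthonormal Haar transform (rows, then columns)."""
    s = (a[0::2] + a[1::2]) / np.sqrt(2)
    d = (a[0::2] - a[1::2]) / np.sqrt(2)
    return ((s[:, 0::2] + s[:, 1::2]) / np.sqrt(2),   # LL
            (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2),   # LH
            (s[:, 0::2] - s[:, 1::2]) / np.sqrt(2),   # HL
            (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2))   # HH

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2."""
    s = np.empty((LL.shape[0], 2 * LL.shape[1]))
    d = np.empty_like(s)
    s[:, 0::2], s[:, 1::2] = (LL + HL) / np.sqrt(2), (LL - HL) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (LH + HH) / np.sqrt(2), (LH - HH) / np.sqrt(2)
    a = np.empty((2 * s.shape[0], s.shape[1]))
    a[0::2], a[1::2] = (s + d) / np.sqrt(2), (s - d) / np.sqrt(2)
    return a

def pocs_conceal(corrupted, iters=10):
    """Alternate a spatial smoothing step with restoration of the trusted
    subbands. Here the whole HH band is assumed lost; LL, LH, HL are trusted."""
    LL0, LH0, HL0, _ = haar2(corrupted)
    x = corrupted.copy()
    for _ in range(iters):
        xp = np.pad(x, 1, mode='edge')               # 3x3 box blur (low-pass step)
        x = sum(xp[i:i + x.shape[0], j:j + x.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
        _, _, _, HH = haar2(x)                       # keep only the re-estimated HH
        x = ihaar2(LL0, LH0, HL0, HH)                # restore trusted coefficients
    return x
```

Each iteration moves the estimate toward the smooth-image set, then projects it back onto the set of images whose uncorrupted coefficients match the received ones.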

  9. Carriage Error Identification Based on Cross-Correlation Analysis and Wavelet Transformation

    PubMed Central

    Mu, Donghui; Chen, Dongju; Fan, Jinwei; Wang, Xiaofeng; Zhang, Feihu

    2012-01-01

This paper proposes a novel method for identifying carriage errors. A general mathematical model of a guideway system is developed based on the multi-body system method. Based on the proposed model, most error sources in the guideway system can be measured. The flatness of a workpiece measured by the PGI1240 profilometer is represented by a wavelet decomposition. Cross-correlation analysis is performed to identify the error sources of the carriage. The error model is developed based on experimental results on the low-frequency components of the signals. With the use of wavelets, the identification precision of the test signals is very high. PMID:23012558
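A simple version of the cross-correlation step might look like this (the function name and candidate signals are hypothetical; the paper additionally uses a wavelet decomposition to isolate the low-frequency components before correlating):

```python
import numpy as np

def identify_source(profile, candidates):
    """Pick the candidate error signal with the highest normalized
    cross-correlation peak against the measured profile. Illustrative only."""
    a = (profile - profile.mean()) / profile.std()
    scores = {}
    for name, sig in candidates.items():
        b = (sig - sig.mean()) / sig.std()
        c = np.correlate(a, b, mode='full') / len(a)   # normalized cross-correlation
        scores[name] = np.abs(c).max()
    return max(scores, key=scores.get), scores
```

A candidate whose spatial signature matches the measured flatness profile (up to a shift) produces a correlation peak near 1; unrelated sources score near 0.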

  10. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    SciTech Connect

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  11. Rotationally shearing interferometer employing modified Dove prisms

    NASA Astrophysics Data System (ADS)

    Paez, Gonzalo; Strojnik, Marija; Moreno, Ivan

    2003-12-01

We describe a rotationally shearing interferometer (RSI) employing modified Dove prisms, designed with a widened aperture to increase throughput and with larger base angles to minimize the wave-front tilt introduced by manufacturing errors. Experimental results obtained with the RSI confirm the feasibility of the design. This work demonstrates that rotationally shearing interferometry may be used to perform some functions of traditional astronomical instruments.

  12. Comparing measurement errors for formants in synthetic and natural vowels.

    PubMed

    Shadle, Christine H; Nam, Hosung; Whalen, D H

    2016-02-01

The measurement of formant frequencies of vowels is among the most common measurements in speech studies, but measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing the errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths, and higher formant frequencies, were constant. Input formant values were compared to manual measurements and automatic measures using the linear prediction coding-Burg algorithm, linear prediction closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295-1313], and spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occurred with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for the natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry. PMID:26936555
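A bare-bones sketch of the LPC-Burg route to formants (illustrative only; the study's actual implementations, model orders, windowing, and settings are not reproduced here) estimates formants as the angles of the LPC polynomial roots:

```python
import numpy as np

def burg_lpc(x, order):
    """Burg's method: reflection-coefficient recursion for LPC coefficients."""
    f = np.asarray(x, float).copy()   # forward prediction error
    b = f.copy()                      # backward prediction error
    a = np.array([1.0])
    for _ in range(order):
        fk, bk = f[1:], b[:-1]
        k = -2.0 * np.dot(fk, bk) / (np.dot(fk, fk) + np.dot(bk, bk))
        f, b = fk + k * bk, bk + k * fk
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
    return a

def formants(x, fs, order=4):
    """Formant frequency estimates (Hz) from the angles of the LPC roots."""
    r = np.roots(burg_lpc(x, order))
    r = r[np.imag(r) > 0]                      # one root per conjugate pair
    return np.sort(np.angle(r) * fs / (2 * np.pi))
```

Because the root angles shift toward the strongest harmonic when F0 is high, this is exactly the kind of estimator whose bias the study measures.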

  13. Quantum error correction of photon-scattering errors

    NASA Astrophysics Data System (ADS)

    Akerman, Nitzan; Glickman, Yinnon; Kotler, Shlomi; Ozeri, Roee

    2011-05-01

Photon scattering by an atomic ground-state superposition is often considered a source of decoherence. The same process also results in atom-photon entanglement, which has been directly observed in various experiments using a single atom, ion, or diamond nitrogen-vacancy center. Here we combine these two aspects to implement a quantum error correction protocol. We encode a qubit in the two Zeeman-split ground states of a single trapped 88Sr+ ion. Photons are resonantly scattered on the S1/2 → P1/2 transition. We study the process of single-photon scattering, i.e., the excitation of the ion to the excited manifold followed by spontaneous emission and decay. In the absence of any knowledge of the emitted photon, the ion-qubit coherence is lost. However, the joint ion-photon system still maintains coherence. We show that while scattering events in which spin population is preserved (Rayleigh scattering) do not affect coherence, spin-changing (Raman) scattering events result in coherent amplitude exchange between the two qubit states. By applying a unitary spin rotation that depends on the detected photon polarization, we retrieve the ion-qubit initial state. We characterize this quantum error correction protocol by process tomography and demonstrate an ability to preserve ion-qubit coherence with high fidelity.

  14. Error analysis of friction drive elements

    NASA Astrophysics Data System (ADS)

    Wang, Guomin; Yang, Shihai; Wang, Daxing

    2008-07-01

Friction drive has been used in some large astronomical telescopes in recent years. Compared to direct drive, a friction drive train consists of more built-up parts. Usually, the friction drive train consists of a motor-tachometer unit, coupling, reducer, driving roller, big wheel, encoder, and encoder coupling. Normally, these built-up parts introduce some errors into the drive system. Some of them are random errors and some are systematic errors. For the random errors, the effective way is to estimate their contributions and try to find a proper way to decrease their influence. For the systematic errors, the useful way is to analyse and test them quantitatively, and then feed the errors back to the control system to correct them. The main task of this paper is to analyse these error sources and find out their characteristics, such as random error, systematic error, and contributions. The methods and equations used in the analysis are also presented in detail in this paper.

  15. The subthalamic nucleus contributes to post-error slowing.

    PubMed

    Cavanagh, James F; Sanguinetti, Joseph L; Allen, John J B; Sherman, Scott J; Frank, Michael J

    2014-11-01

pFC is proposed to implement cognitive control via directed "top-down" influence over behavior. But how is this feat achieved? The virtue of such a descriptive model is contingent on a mechanistic understanding of how motor execution is altered in specific circumstances. In this report, we provide evidence that the well-known phenomenon of slowed RTs following mistakes (post-error slowing) is directly influenced by the degree of subthalamic nucleus (STN) activity. The STN is proposed to act as a brake on motor execution following conflict or errors, buying time so a more cautious response can be made on the next trial. STN local field potentials from nine Parkinson disease patients undergoing deep brain stimulation surgery were recorded while they performed a response conflict task. In a 2.5- to 5-Hz frequency range previously associated with conflict and error processing, the degree of phase consistency preceding the response was associated with increasingly slower RTs specifically following errors. These findings provide compelling evidence that post-error slowing is in part mediated by a corticosubthalamic "hyperdirect" pathway for increased response caution. PMID:24800632

  16. Sibship reconstruction from genetic data with typing errors.

    PubMed Central

    Wang, Jinliang

    2004-01-01

    Likelihood methods have been developed to partition individuals in a sample into full-sib and half-sib families using genetic marker data without parental information. They invariably make the critical assumption that marker data are free of genotyping errors and mutations and are thus completely reliable in inferring sibships. Unfortunately, however, this assumption is rarely tenable for virtually all kinds of genetic markers in practical use and, if violated, can severely bias sibship estimates as shown by simulations in this article. I propose a new likelihood method with simple and robust models of typing error incorporated into it. Simulations show that the new method can be used to infer full- and half-sibships accurately from marker data with a high error rate and to identify typing errors at each locus in each reconstructed sib family. The new method also improves previous ones by adopting a fresh iterative procedure for updating allele frequencies with reconstructed sibships taken into account, by allowing for the use of parental information, and by using efficient algorithms for calculating the likelihood function and searching for the maximum-likelihood configuration. It is tested extensively on simulated data with a varying number of marker loci, different rates of typing errors, and various sample sizes and family structures and applied to two empirical data sets to demonstrate its usefulness. PMID:15126412

  17. How psychotherapists handle treatment errors – an ethical analysis

    PubMed Central

    2013-01-01

Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503

  18. Controlling qubit drift by recycling error correction syndromes

    NASA Astrophysics Data System (ADS)

    Blume-Kohout, Robin

    2015-03-01

    Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE

  19. Grid-scale fluctuations and forecast error in wind power

    NASA Astrophysics Data System (ADS)

    Bel, G.; Connaughton, C. P.; Toots, M.; Bandi, M. M.

    2016-02-01

Wind power fluctuations at the turbine and farm scales are generally not expected to be correlated over large distances. When power from distributed farms feeds the electrical grid, fluctuations from various farms are expected to smooth out. Using data from the Irish grid as a representative example, we analyze wind power fluctuations entering an electrical grid. We find that not only are grid-scale fluctuations temporally correlated up to a day, but they possess a self-similar structure, a signature of long-range correlations in atmospheric turbulence affecting wind power. Using the statistical structure of temporal correlations in fluctuations for generated and forecast power time series, we quantify two types of forecast error: a timescale error (e_τ) that quantifies deviations between the high-frequency components of the forecast and generated time series, and a scaling error (e_ζ) that quantifies the degree to which the models fail to predict temporal correlations in the fluctuations of generated power. With no a priori knowledge of the forecast models, we suggest a simple memory kernel that reduces both the timescale error (e_τ) and the scaling error (e_ζ).

  20. Error analysis of sub-aperture stitching interferometry

    NASA Astrophysics Data System (ADS)

    Jia, Xin; Xu, Fuchao; Xie, Weimin; Xing, Tingwen

    2012-10-01

Large-aperture optical elements are widely employed in high-power laser systems, astronomy, and outer-space technology. Sub-aperture stitching is an effective way to extend the lateral and vertical dynamic range of a conventional interferometer. With the aim of assessing the accuracy of such equipment, this paper simulates the stitching algorithm to analyze its errors. The selection of the stitching mode and the setting of the number of sub-apertures are given, and stitching is simulated according to the programmed algorithms in order to test them. The simulations of the sub-aperture stitching algorithm are implemented in Matlab. The sub-aperture stitching method can also be used to test free-form surfaces; here the free-form surface is generated from Zernike polynomials. The stitching accuracy depends on the tilting and positioning errors, and through stitching the mid-spatial-frequency content of the surface can be tested. The results of the Matlab error analysis show how the tilting and positioning errors influence the testing accuracy. This error analysis can also be applied to other interferometer systems.
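In one dimension, the core stitching step, removing the piston and tilt of one sub-aperture relative to its neighbor using their overlap region, can be sketched as follows (a simplified illustration, not the simulated algorithm of the paper):

```python
import numpy as np

def stitch(p1, p2, overlap):
    """Stitch two 1-D sub-aperture profiles by removing the piston and tilt of
    p2 estimated from the overlap region (a 1-D sketch of the 2-D problem)."""
    i = np.arange(overlap)
    d = p1[-overlap:] - p2[:overlap]            # mismatch in the overlap
    A = np.vstack([np.ones(overlap), i]).T      # fit d ~ piston + tilt*i
    piston, tilt = np.linalg.lstsq(A, d, rcond=None)[0]
    p2c = p2 + piston + tilt * np.arange(len(p2))
    return np.concatenate([p1, p2c[overlap:]])
```

Residual errors in this piston/tilt estimate are exactly the tilting and positioning errors whose influence on stitching accuracy the paper analyzes.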

  1. Error compensation in computer generated hologram-based form testing of aspheres.

    PubMed

    Stuerwald, Stephan

    2014-12-10

Computer-generated holograms (CGHs) are used relatively often to test aspheric surfaces in the case of medium and high lot sizes. Until now, various modified measurement setups for optical form testing interferometry have been presented, such as subaperture stitching interferometry and scanning interferometry. In contrast, for testing low to medium lot sizes in research and development, a variety of other tactile and nontactile measurement methods have been developed. In the case of CGH-based interferometric form testing, measurement deviations in the region of several tens of nanometers typically occur. Deviations arise especially due to imperfect alignment of the asphere relative to the testing wavefront. Therefore, the null test is user- and adjustment-dependent, which results in insufficient repeatability and reproducibility of the form errors. When adjusting a CGH, an operator usually performs a minimization of the spatial frequency of the fringe pattern. An adjustment to the ideal position, however, often cannot be performed with sufficient precision by the operator, as the position of minimum spatial fringe density is often not unique, depending on the asphere. Thus, the scientific and technical objectives of this paper comprise the development of a simulation-based approach to explain and quantify typical experimental errors due to misalignment of the specimen toward a CGH in an optical form testing measurement system. A further step is the programming of an iterative method to realize a virtual optimized realignment of the system on the basis of Zernike polynomial decomposition, which should allow for the calculation of the measured form for an ideal alignment and thus a careful subtraction of a typical alignment-based form error.
To validate the simulation-based findings, a series of systematic experiments is performed with a recently developed hexapod positioning system in order to allow an exact and reproducible positioning of the optical CGH
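The Zernike-based removal of alignment-induced form error can be sketched by least-squares fitting the low-order terms that misalignment predominantly excites (the paper's iterative virtual realignment is more involved; this illustrative version fits only piston, tip, tilt, and defocus):

```python
import numpy as np

def remove_alignment_terms(W, x, y, mask):
    """Fit and subtract piston, tip, tilt, and defocus (the low-order Zernike
    terms dominated by misalignment) from a wavefront W sampled on grids x, y,
    valid inside the unit-disk mask. A sketch; the paper fits a fuller set."""
    basis = np.stack([np.ones_like(x),            # piston
                      x,                          # tip
                      y,                          # tilt
                      2 * (x**2 + y**2) - 1])     # defocus (Zernike Z4)
    A = basis[:, mask].T                          # (n_pixels, 4) design matrix
    coef = np.linalg.lstsq(A, W[mask], rcond=None)[0]
    return W - np.tensordot(coef, basis, axes=1), coef
```

Subtracting the fitted terms leaves the alignment-independent part of the measured form, which is the quantity the paper aims to recover reproducibly.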

  2. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    ERIC Educational Resources Information Center

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  3. Positioning errors in panoramic images in general dentistry in Sörmland County, Sweden.

    PubMed

    Ekströmer, Karin; Hjalmarsson, Lars

    2014-01-01

The purpose of this study was to evaluate the frequency and severity of positioning errors in panoramic radiography in general dentistry. A total of 1904 digital panoramic radiographs, taken by the Public Dental Service in the county of Sörmland, Sweden, were analysed retrospectively. The study population consisted of all patients who underwent a panoramic examination during the year 2011. One experienced oral radiologist evaluated all radiographs for 10 common errors. Of the 1904 radiographs examined, 79 per cent had errors. The number of errors varied between 1-4 errors per image. No errors were found in 404 images (21%). Fifty-five images (3%) had severe errors, which made it impossible to make correct diagnostics. The most common error was the tongue not being in contact with the hard palate during exposure. However, this did not greatly affect the diagnostic usefulness of the image due to the ability to enhance the image. The patient's head was tilted too far upwards in 23 per cent of the images and the patient's head was rotated during exposure in 15 per cent. The least common error was due to patient movement during exposure (1%). Panoramic radiographs taken in general dental clinics in a Swedish county show several errors. Proper positioning of the patient is necessary to achieve panoramic images with good image quality. Some of the errors could be adjusted with the digital technique used. This allowed assessment of the images, which reduces radiation dose by avoiding retakes. PMID:26995809

  4. A procedure for removing the effect of response bias errors from waterfowl hunter questionnaire responses

    USGS Publications Warehouse

    Atwood, E.L.

    1958-01-01

    Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large as compared to non-response and sampling errors. Good fits were obtained with the seasonal kill distribution of the actual hunting data and the negative binomial distribution and a good fit was obtained with the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictable biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is not as efficient in removing response bias errors in hunter questionnaire responses on seasonal hunting activity where an average of 60 percent was removed.
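Atwood's graphical adjustment can be approximated numerically: fit the semi-logarithmic curve to the unbiased frequency classes and replace the classes at multiples of five and of the daily bag limit (an illustrative reconstruction; the paper fits the curve by hand through group averages):

```python
import numpy as np

def adjust_biased_classes(counts, bag_limit=4):
    """Replace frequency classes prone to prestige/memory bias (multiples of 5
    and of the daily bag limit) with values from a semi-logarithmic curve
    fitted to the remaining classes. Illustrative reconstruction only."""
    k = np.arange(1, len(counts) + 1)            # seasonal-kill class (1, 2, ...)
    biased = (k % 5 == 0) | (k % bag_limit == 0)
    slope, intercept = np.polyfit(k[~biased], np.log(counts[~biased]), 1)
    adjusted = counts.astype(float).copy()
    adjusted[biased] = np.exp(intercept + slope * k[biased])
    return adjusted
```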

  5. Error awareness revisited: accumulation of multimodal evidence from central and autonomic nervous systems.

    PubMed

    Wessel, Jan R; Danielmeier, Claudia; Ullsperger, Markus

    2011-10-01

    The differences between erroneous actions that are consciously perceived as errors and those that go unnoticed have recently become an issue in the field of performance monitoring. In EEG studies, error awareness has been suggested to influence the error positivity (Pe) of the response-locked event-related brain potential, a positive voltage deflection prominent approximately 300 msec after error commission, whereas the preceding error-related negativity (ERN) seemed to be unaffected by error awareness. Erroneous actions, in general, have been shown to promote several changes in ongoing autonomic nervous system (ANS) activity, yet such investigations have only rarely taken into account the question of subjective error awareness. In the first part of this study, heart rate, pupillometry, and EEG were recorded during an antisaccade task to measure autonomic arousal and activity of the CNS separately for perceived and unperceived errors. Contrary to our expectations, we observed differences in both Pe and ERN with respect to subjective error awareness. This was replicated in a second experiment, using a modified version of the same task. In line with our predictions, only perceived errors provoke the previously established post-error heart rate deceleration. Also, pupil size yields a more prominent dilatory effect after an erroneous saccade, which is also significantly larger for perceived than unperceived errors. On the basis of the ERP and ANS results as well as brain-behavior correlations, we suggest a novel interpretation of the implementation and emergence of error awareness in the brain. In our framework, several systems generate input signals (e.g., ERN, sensory input, proprioception) that influence the emergence of error awareness, which is then accumulated and presumably reflected in later potentials, such as the Pe. PMID:21268673

  6. Debye Entropic Force and Modified Newtonian Dynamics

    NASA Astrophysics Data System (ADS)

    Li, Xin; Chang, Zhe

    2011-04-01

Verlinde has suggested that gravity has an entropic origin and that a gravitational system can be regarded as a thermodynamical system. It is well known that the equipartition law of energy is invalid at very low temperature. Therefore, the entropic force should be modified when the temperature of the holographic screen is very low. It is shown that the modified entropic force is proportional to the square of the acceleration when the temperature of the holographic screen is much lower than the Debye temperature TD. The modified entropic force reduces to Newton's law of gravitation when the temperature of the holographic screen is much higher than the Debye temperature. The modified entropic force is connected with modified Newtonian dynamics (MOND). The constant a0 involved in MOND is linear in the Debye frequency ωD, which can be regarded as the largest frequency of the bits on the screen. We find a strong connection between MOND and cosmology in the framework of Verlinde's entropic force if the holographic screen is taken to be the boundary of the Universe. The Debye frequency is linear in the Hubble constant H0.
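The claimed chain a0 ∝ ωD ∝ H0 can be sanity-checked at the order-of-magnitude level. The sketch below, using standard values for c, H0, and the empirical MOND scale a0, only verifies that c·H0 lands within an order of magnitude of a0; the precise proportionality constant depends on the Debye-model details and is not taken from the paper.

```python
# Order-of-magnitude check of the a0 ~ H0 link (illustrative only; the
# exact constant depends on the Debye-frequency model, not computed here).
c = 2.998e8            # speed of light, m/s
H0 = 2.27e-18          # Hubble constant, 1/s (about 70 km/s/Mpc)
a0_observed = 1.2e-10  # empirical MOND acceleration scale, m/s^2

a0_scale = c * H0               # same order of magnitude as a0_observed
ratio = a0_scale / a0_observed  # a modest O(1)-to-O(10) factor
```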

  7. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
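The retransmission logic described here can be sketched with toy codes standing in for the real inner and outer codes: a 3x repetition inner code (correction) and a single parity bit as the outer code (detection only), over a binary symmetric channel. The codes, channel error rate, and frame size are illustrative, not the ones analyzed in the paper.

```python
import random

random.seed(1)

def bsc(bits, p):
    """Binary symmetric channel: flip each bit independently with prob p."""
    return [b ^ (random.random() < p) for b in bits]

def inner_encode(bits):   # toy inner code: 3x repetition (corrects 1 flip/triplet)
    return [b for b in bits for _ in range(3)]

def inner_decode(coded):  # majority vote over each triplet
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

def outer_check(frame):   # toy outer code: single parity bit, detection only
    return sum(frame) % 2 == 0

def send_with_arq(data, p, max_tries=50):
    """Retransmit until the outer detection check accepts the frame."""
    frame = data + [sum(data) % 2]              # append outer parity bit
    for attempt in range(1, max_tries + 1):
        received = inner_decode(bsc(inner_encode(frame), p))
        if outer_check(received):
            return received[:-1], attempt       # accepted frame, tries used
    raise RuntimeError("max retransmissions exceeded")

data = [1, 0, 1, 1, 0, 0, 1, 0]
decoded, attempts = send_with_arq(data, p=0.05)
```

A frame that fails the outer check is simply re-sent, mirroring the selective-repeat ARQ strategy; an undetected error occurs only when an even number of residual bit errors preserves parity.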

  8. Intra-Rater and Inter-Rater Reliability of the Balance Error Scoring System in Pre-Adolescent School Children

    ERIC Educational Resources Information Center

    Sheehan, Dwayne P.; Lafave, Mark R.; Katz, Larry

    2011-01-01

    This study was designed to test the intra- and inter-rater reliability of the University of North Carolina's Balance Error Scoring System in 9- and 10-year-old children. Additionally, a modified version of the Balance Error Scoring System was tested to determine if it was more sensitive in this population ("raw scores"). Forty-six normally…

  9. Some effects of quantization on a noiseless phase-locked loop. [sampling phase errors

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1979-01-01

    If the VCO of a phase-locked receiver is to be replaced by a digitally programmed synthesizer, the phase error signal must be sampled and quantized. Effects of quantizing after the loop filter (frequency quantization) or before (phase error quantization) are investigated. Constant Doppler or Doppler rate noiseless inputs are assumed. The main result gives the phase jitter due to frequency quantization for a Doppler-rate input. By itself, however, frequency quantization is impractical because it makes the loop dynamic range too small.

  10. Modeling methodology for MLS range navigation system errors using flight test data

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.
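The final modeling step, fitting the extracted high-frequency residual as an ARMA process, can be sketched for the simplest special case: an AR(1) model estimated by conditional least squares rather than the paper's full maximum-likelihood machinery. The AR coefficient and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# High-frequency range residual modeled as AR(1), the simplest member of
# the ARMA family; parameters are invented for illustration.
phi_true, sigma = 0.8, 0.05     # AR coefficient; driving-noise std, meters
n = 5000
e = rng.normal(0.0, sigma, n)
x = np.zeros(n)
for k in range(1, n):
    x[k] = phi_true * x[k - 1] + e[k]   # simulated residual series

# Conditional least squares: regress x[k] on x[k-1] to recover phi.
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
```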

  11. Exact probability of error analysis for FHSS/CDMA communications in the presence of single term Rician fading

    NASA Astrophysics Data System (ADS)

    Turcotte, Randy L.; Wickert, Mark A.

    An exact expression is found for the probability of bit error of an FHSS-BFSK (frequency-hopping spread-spectrum/binary-frequency-shift-keying) multiple-access system in the presence of slow, nonselective, 'single-term' Rician fading. The effects of multiple-access interference and/or continuous tone jamming are considered. Comparisons are made between the error expressions developed here and previously published upper bounds. It is found that under certain channel conditions the upper bounds on the probability of bit error may exceed the actual probability of error by an order of magnitude.

  12. Method for decoupling error correction from privacy amplification

    NASA Astrophysics Data System (ADS)

    Lo, Hoi-Kwong

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.
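The core mechanical step, encrypting the error-correction communication with a one-time pad drawn from a pre-shared secret string, is simple to sketch. The syndrome bytes and pad length below are placeholders; in the actual protocol the encrypted messages are the Cascade parity exchanges.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """One-time-pad operation: byte-wise XOR of equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Alice and Bob hold a pre-shared secret string (the pad) and spend it to
# encrypt the error-correction messages. The 16-byte syndrome is a
# placeholder standing in for Cascade parity exchanges.
shared_pad = secrets.token_bytes(16)
syndrome = bytes([0b10110010] * 16)

ciphertext = xor_bytes(syndrome, shared_pad)   # sent over the public channel
recovered = xor_bytes(ciphertext, shared_pad)  # receiver strips the pad
```

Because the pad is consumed, the scheme trades pre-shared secret bits for the ability to decouple error correction from privacy amplification in the security analysis.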

  13. Error detection and reduction in blood banking.

    PubMed

    Motschman, T L; Moore, S B

    1996-12-01

Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keep employees practiced and confident and diminish fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility, a functional error classification method and a quality-system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle.

  14. Frequency domain FIR and IIR adaptive filters

    NASA Technical Reports Server (NTRS)

    Lynn, D. W.

    1990-01-01

A discussion of the LMS adaptive filter relating to its convergence characteristics and the problems associated with disparate eigenvalues is presented. This is used to introduce the concept of proportional convergence. An approach is used to analyze the convergence characteristics of block frequency-domain adaptive filters. This leads to a development showing how the frequency-domain FIR adaptive filter is easily modified to provide proportional convergence. These ideas are extended to a block frequency-domain IIR adaptive filter and the idea of proportional convergence is applied. Experimental results illustrating proportional convergence in both FIR and IIR frequency-domain block adaptive filters are presented.
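The per-bin normalization that yields proportional convergence can be sketched with a circular-convolution frequency-domain LMS identifying an unknown FIR system: dividing each bin's update by that bin's input power equalizes the convergence rate across frequencies regardless of the input's eigenvalue spread. This omits the gradient constraint of a practical overlap-save implementation; the dimensions and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Frequency-domain block LMS with per-bin step normalization. Dividing each
# bin's update by its input power gives every frequency the same effective
# convergence rate ("proportional convergence"). Circular-convolution
# sketch; N, the system length, and mu are illustrative.
N = 64
h_true = rng.normal(size=8)                  # unknown FIR system to identify
H_true = np.fft.fft(h_true, N)
W = np.zeros(N, dtype=complex)               # adaptive weights, frequency domain
mu, eps = 0.5, 1e-8

for _ in range(200):
    x = rng.normal(size=N)                   # input block
    X = np.fft.fft(x)
    d = np.real(np.fft.ifft(H_true * X))     # desired response (circular)
    e = d - np.real(np.fft.ifft(W * X))      # block error signal
    E = np.fft.fft(e)
    W += mu * np.conj(X) * E / (np.abs(X) ** 2 + eps)   # normalized update

misalignment = np.linalg.norm(W - H_true) / np.linalg.norm(H_true)
```

With the power normalization, each bin contracts toward the true response by roughly the same factor per block, which is exactly the proportional-convergence behavior the paper describes.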

  15. Human decision error (HUMDEE) trees

    SciTech Connect

    Ostrom, L.T.

    1993-08-01

Graphical presentations of human actions in incident and accident sequences have been used for many years. However, for the most part, human decision making has been underrepresented in these trees. This paper presents a method of incorporating the human decision process into graphical presentations of incident/accident sequences. This presentation is in the form of logic trees, called Human Decision Error Trees, or HUMDEE for short. The primary benefit of HUMDEE trees is that they graphically illustrate what else the individuals involved in the event could have done to prevent either the initiation or continuation of the event. HUMDEE trees also present the alternate paths available at the operator decision points in the incident/accident sequence. This is different from the Technique for Human Error Rate Prediction (THERP) event trees. These trees have many uses. They can be used in incident/accident investigations to show what other courses of action were available and for training operators. The trees also have a consequence component, so that not only the decision but also its consequences can be explored.

  16. Evaluation of Intravenous Medication Errors with Smart Infusion Pumps in an Academic Medical Center

    PubMed Central

    Ohashi, Kumiko; Dykes, Patricia; McIntosh, Kathleen; Buckley, Elizabeth; Wien, Matt; Bates, David W.

    2013-01-01

While some published research indicates a fairly high frequency of intravenous (IV) medication errors associated with the use of smart infusion pumps, the generalizability of these results is uncertain. Additionally, the lack of a standardized methodology for measuring these errors is an issue. In this study we iteratively developed a web-based data collection tool to capture IV medication errors, using a participatory design approach with interdisciplinary experts. Using the developed tool, a prevalence study was then conducted in an academic medical center. The results showed that the tool was easy to use and effectively captured all IV medication errors. Through the prevalence study, violations of hospital policy were found that could potentially place patients at risk, but no critical errors known to contribute to patient harm were noted. PMID:24551395

  17. Development of an RTK-GPS positioning application with an improved position error model for smartphones.

    PubMed

    Hwang, Jinsang; Yun, Hongsik; Suh, Yongcheol; Cho, Jeongho; Lee, Dongha

    2012-01-01

    This study developed a smartphone application that provides wireless communication, NRTIP client, and RTK processing features, and which can simplify the Network RTK-GPS system while reducing the required cost. A determination method for an error model in Network RTK measurements was proposed, considering both random and autocorrelation errors, to accurately calculate the coordinates measured by the application using state estimation filters. The performance evaluation of the developed application showed that it could perform high-precision real-time positioning, within several centimeters of error range at a frequency of 20 Hz. A Kalman Filter was applied to the coordinates measured from the application, to evaluate the appropriateness of the determination method for an error model, as proposed in this study. The results were more accurate, compared with those of the existing error model, which only considered the random error. PMID:23201981
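The paper's key point, modeling the measurement error as autocorrelated rather than purely random, can be sketched in one dimension by augmenting a Kalman filter's state with an AR(1) measurement-bias term. The dynamics and noise levels below are invented for illustration and are not the paper's error model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Truth: slowly drifting 1-D position; measurements carry an AR(1)
# (autocorrelated) bias plus white noise. All values are illustrative.
n = 400
true_pos = 10.0 + 0.05 * np.arange(n)                # meters
phi, q_bias, r_white = 0.95, 0.01**2, 0.02**2        # AR coeff, variances

bias = np.zeros(n)
for k in range(1, n):
    bias[k] = phi * bias[k - 1] + rng.normal(0.0, np.sqrt(q_bias))
z = true_pos + bias + rng.normal(0.0, np.sqrt(r_white), n)

# Augmented state [position, measurement bias]: position as a random walk,
# bias as the AR(1) process, both observed through H = [1, 1].
F = np.array([[1.0, 0.0], [0.0, phi]])
Q = np.diag([0.05**2, q_bias])
H = np.array([[1.0, 1.0]])
x = np.array([z[0], 0.0])
P = np.eye(2)

for k in range(1, n):
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + r_white                        # innovation variance
    K = (P @ H.T) / S                                # Kalman gain, shape (2, 1)
    x = x + (K * (z[k] - H @ x)).ravel()             # update state
    P = (np.eye(2) - K @ H) @ P                      # update covariance

pos_error = abs(x[0] - true_pos[-1])                 # final position error, m
```

Treating the colored component as state, rather than lumping it into the white measurement noise, is what lets the filter separate the slowly varying bias from the true position.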

  18. Development of an RTK-GPS Positioning Application with an Improved Position Error Model for Smartphones

    PubMed Central

    Hwang, Jinsang; Yun, Hongsik; Suh, Yongcheol; Cho, Jeongho; Lee, Dongha

    2012-01-01

    This study developed a smartphone application that provides wireless communication, NRTIP client, and RTK processing features, and which can simplify the Network RTK-GPS system while reducing the required cost. A determination method for an error model in Network RTK measurements was proposed, considering both random and autocorrelation errors, to accurately calculate the coordinates measured by the application using state estimation filters. The performance evaluation of the developed application showed that it could perform high-precision real-time positioning, within several centimeters of error range at a frequency of 20 Hz. A Kalman Filter was applied to the coordinates measured from the application, to evaluate the appropriateness of the determination method for an error model, as proposed in this study. The results were more accurate, compared with those of the existing error model, which only considered the random error. PMID:23201981

  19. Error field and magnetic diagnostic modeling for W7-X

    SciTech Connect

    Lazerson, Sam A.; Gates, David A.; NEILSON, GEORGE H.; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.

    2014-07-01

The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high-beta (β = 5%), steady-state (30-minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign, as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.

  20. The role of variation, error, and complexity in manufacturing defects

    SciTech Connect

    Hinckley, C.M.; Barkan, P.

    1994-03-01

    Variation in component properties and dimensions is a widely recognized factor in product defects which can be quantified and controlled by Statistical Process Control methodologies. Our studies have shown, however, that traditional statistical methods are ineffective in characterizing and controlling defects caused by error. The distinction between error and variation becomes increasingly important as the target defect rates approach extremely low values. Motorola data substantiates our thesis that defect rates in the range of several parts per million can only be achieved when traditional methods for controlling variation are combined with methods that specifically focus on eliminating defects due to error. Complexity in the product design, manufacturing processes, or assembly increases the likelihood of defects due to both variation and error. Thus complexity is also a root cause of defects. Until now, the absence of a sound correlation between defects and complexity has obscured the importance of this relationship. We have shown that assembly complexity can be quantified using Design for Assembly (DFA) analysis. High levels of correlation have been found between our complexity measures and defect data covering tens of millions of assembly operations in two widely different industries. The availability of an easily determined measure of complexity, combined with these correlations, permits rapid estimation of the relative defect rates for alternate design concepts. This should prove to be a powerful tool since it can guide design improvement at an early stage when concepts are most readily modified.

  1. Study of geopotential error models used in orbit determination error analysis

    NASA Technical Reports Server (NTRS)

    Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

    1991-01-01

    The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. 
Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic
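The uncorrelated error modeling approach described above can be sketched in a few lines: each spherical-harmonic coefficient error is treated as an independent source, its orbit effect is scaled by a sensitivity, and the aggregate is the root-sum-square of the individual effects. The sensitivities and coefficient sigmas below are placeholders, not values from the GEM models.

```python
import numpy as np

# Uncorrelated error modeling: treat each spherical-harmonic coefficient
# error as an independent source, scale by its orbit-error sensitivity,
# and combine root-sum-square. Numbers are placeholders, not GEM values.
sigma_coeff = np.array([3e-9, 2e-9, 1e-9, 5e-10])     # coefficient std devs
sensitivity = np.array([1.2e8, 0.8e8, 1.5e8, 0.6e8])  # m orbit error per unit coeff

individual_effects = sensitivity * sigma_coeff        # meters
aggregate_rss = np.sqrt(np.sum(individual_effects**2))
# A lumped model would instead apply one calibrated covariance (or the
# weighted difference of two geopotential models) in a single step.
```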

  2. Medication Errors - Multiple Languages: MedlinePlus

    MedlinePlus

    ... Are Here: Home → Multiple Languages → All Health Topics → Medication Errors URL of this page: https://medlineplus.gov/languages/ ... V W XYZ List of All Topics All Medication Errors - Multiple Languages To use the sharing features on ...

  3. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Season Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance...

  4. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  5. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  6. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...

  7. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...

  8. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...

  9. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...

  10. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  11. 40 CFR 60.4156 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Generating Units Hg Allowance Tracking System § 60.4156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Hg Allowance Tracking...

  12. 40 CFR 96.256 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Tracking System § 96.256 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR SO2 Allowance Tracking System account. Within...

  13. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  14. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  15. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Tracking System § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  16. 40 CFR 60.4156 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Generating Units Hg Allowance Tracking System § 60.4156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Hg Allowance Tracking...

  17. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  18. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  19. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  20. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...

  1. 40 CFR 96.356 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Allowance Tracking System § 96.356 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Ozone Season Allowance Tracking...

  2. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Tracking System § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  3. Refractive Errors - Multiple Languages: MedlinePlus

    MedlinePlus

    ... Are Here: Home → Multiple Languages → All Health Topics → Refractive Errors URL of this page: https://www.nlm.nih. ... V W XYZ List of All Topics All Refractive Errors - Multiple Languages To use the sharing features on ...

  4. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  5. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  6. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  7. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  8. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  9. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  10. Field errors in hybrid insertion devices

    SciTech Connect

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  11. Analysis and classification of human error

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.; Rouse, S. H.

    1983-01-01

    The literature on human error is reviewed with emphasis on theories of error and classification schemes. A methodology for analysis and classification of human error is then proposed which includes a general approach to classification. Identification of possible causes and factors that contribute to the occurrence of errors is also considered. An application of the methodology to the use of checklists in the aviation domain is presented for illustrative purposes.

  12. Optimized entanglement-assisted quantum error correction

    SciTech Connect

    Taghavi, Soraya; Brun, Todd A.; Lidar, Daniel A.

    2010-10-15

Using convex optimization, we propose entanglement-assisted quantum error-correction procedures that are optimized for given noise channels. We demonstrate through numerical examples that such an optimized error-correction method achieves higher channel fidelities than existing methods. This improved performance, which leads to perfect error correction for a larger class of error channels, can be interpreted in at least some cases in terms of quantum teleportation, but for general channels this interpretation does not hold.

  13. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Lin, S.

    1985-01-01

    A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.

  14. Error Propagation in a System Model

    NASA Technical Reports Server (NTRS)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
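
    A toy sketch of the propagation idea (a generic dataflow graph, not the patented analysis itself): mark a source signal as erroneous, then flood the error flag through every downstream functional block it can reach.

    ```python
    from collections import defaultdict, deque

    def propagate_errors(edges, error_sources):
        """Given directed edges (producer -> consumer) between functional
        blocks, return every block reachable from an erroneous signal."""
        graph = defaultdict(list)
        for src, dst in edges:
            graph[src].append(dst)
        tainted, queue = set(error_sources), deque(error_sources)
        while queue:
            block = queue.popleft()
            for nxt in graph[block]:
                if nxt not in tainted:
                    tainted.add(nxt)
                    queue.append(nxt)
        return tainted

    # Hypothetical avionics-style model: sensor -> filter -> {autopilot, display}
    edges = [("sensor", "filter"), ("filter", "autopilot"), ("filter", "display")]
    print(sorted(propagate_errors(edges, {"sensor"})))
    # → ['autopilot', 'display', 'filter', 'sensor']
    ```
    
    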

  15. Dynamic frequency tuning of electric and magnetic metamaterial response

    DOEpatents

    O'Hara, John F; Averitt, Richard; Padilla, Willie; Chen, Hou-Tong

    2014-09-16

    A geometrically modifiable resonator comprises a resonator disposed on a substrate and a means for geometrically modifying the resonator. The geometrically modifiable resonator can achieve active optical and/or electronic control of the frequency response in metamaterials and/or frequency selective surfaces, potentially with sub-picosecond response times. Additionally, the methods taught here can be applied to discrete geometrically modifiable circuit components such as inductors and capacitors. Principally, controlled conductivity regions, using either reversible photodoping or voltage-induced depletion activation, are used to modify the geometries of circuit components, thus allowing frequency tuning of resonators without otherwise affecting the bulk substrate electrical properties. The concept is valid over any frequency range in which metamaterials are designed to operate.
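
    As a rough illustration of the tuning principle (a generic LC model with made-up values, not the patented device): if photodoping or depletion effectively changes a resonator's capacitance, the resonant frequency shifts as f = 1/(2π√(LC)).

    ```python
    from math import pi, sqrt

    def resonant_frequency(L: float, C: float) -> float:
        """Resonant frequency (Hz) of an ideal LC circuit."""
        return 1.0 / (2.0 * pi * sqrt(L * C))

    # Hypothetical split-ring-style values: L = 1 pH, C = 1 fF.
    f0 = resonant_frequency(1e-12, 1e-15)  # baseline, ~5 THz
    f1 = resonant_frequency(1e-12, 2e-15)  # capacitance doubled by photodoping
    print(f0 / 1e12, f1 / 1e12)            # in THz; f1 = f0 / sqrt(2)
    ```

    Doubling the effective capacitance lowers the resonance by a factor of √2, which is the kind of continuous frequency shift the geometric modification is meant to provide.
    
    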

  16. Error field penetration and locking to the backward propagating wave

    DOE PAGES Beta

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  17. Error field penetration and locking to the backward propagating wave

    SciTech Connect

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
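
    The locking velocity quoted in this abstract, v = ω_r/k, is simply the phase velocity of the backward propagating mode. A minimal numerical sketch with illustrative values (not taken from the paper):

    ```python
    from math import pi

    def locking_velocity(omega_r: float, k: float) -> float:
        """Phase velocity v = omega_r / k at which the equilibrium flow
        locks to the backward propagating tearing mode."""
        return omega_r / k

    # Illustrative values: a 1 kHz real frequency and a wavenumber of
    # k = 1 per metre for a hypothetical device geometry.
    omega_r = 2 * pi * 1e3     # rad/s
    k = 1.0                    # 1/m
    print(locking_velocity(omega_r, k))  # ~6.3e3 m/s
    ```
    
    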

  18. A fresh look at the predictors of naming accuracy and errors in Alzheimer's disease.

    PubMed

    Cuetos, Fernando; Rodríguez-Ferreiro, Javier; Sage, Karen; Ellis, Andrew W

    2012-09-01

    In recent years, a considerable number of studies have tried to establish which characteristics of objects and their names predict the responses of patients with Alzheimer's disease (AD) in the picture-naming task. The frequency of use of words and their age of acquisition (AoA) have been implicated as two of the most influential variables, with naming being best preserved for objects with high-frequency, early-acquired names. The present study takes a fresh look at the predictors of naming success in Spanish and English AD patients using a range of measures of word frequency and AoA along with visual complexity, imageability, and word length as predictors. Analyses using generalized linear mixed modelling found that naming accuracy was better predicted by AoA ratings taken from older adults than conventional ratings from young adults. Older frequency measures based on written language samples predicted accuracy better than more modern measures based on the frequencies of words in film subtitles. Replacing adult frequency with an estimate of cumulative (lifespan) frequency did not reduce the impact of AoA. Semantic error rates were predicted by both written word frequency and senior AoA while null response errors were only predicted by frequency. Visual complexity, imageability, and word length did not predict naming accuracy or errors. PMID:22284909

  19. Acoustic Evidence for Phonologically Mismatched Speech Errors

    ERIC Educational Resources Information Center

    Gormley, Andrea

    2015-01-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of…

  20. 40 CFR 97.56 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Account error. 97.56 Section 97.56... Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10 business days of making...