Science.gov

Sample records for modified frequency error

  1. A Modified Error in Constitutive Equation Approach for Frequency-Domain Viscoelasticity Imaging Using Interior Data

    PubMed Central

    Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc

    2015-01-01

    This paper presents a methodology for the inverse identification of linearly viscoelastic material parameters in the context of steady-state dynamics using interior data. The inverse problem of viscoelasticity imaging is solved by minimizing a modified error in constitutive equation (MECE) functional, subject to the conservation of linear momentum. The treatment is applicable to configurations where boundary conditions may be partially or completely underspecified. The MECE functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, and also incorporates the measurement data in a quadratic penalty term. Regularization of the problem is achieved through a penalty parameter in combination with the discrepancy principle due to Morozov. Numerical results demonstrate the robust performance of the method in situations where the available measurement data is incomplete and corrupted by noise of varying levels. PMID:26388656
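Schematically, and with illustrative notation not taken from the paper (C(κ) a constitutive tensor parameterized by the sought material parameters κ, H an observation operator, d the interior data, α the penalty weight), an MECE-type formulation minimizes

```latex
\min_{u,\,\sigma,\,\kappa}\;
\frac{1}{2}\int_{\Omega}
\big(\sigma-\mathcal{C}(\kappa):\varepsilon(u)\big)
:\mathcal{C}(\kappa)^{-1}:
\big(\sigma-\mathcal{C}(\kappa):\varepsilon(u)\big)\,\mathrm{d}\Omega
\;+\;\frac{\alpha}{2}\,\lVert \mathcal{H}u-d\rVert^{2}
\quad\text{s.t.}\quad
\nabla\cdot\sigma+\rho\,\omega^{2}u=0,
```

where the constraint enforces conservation of linear momentum at frequency ω; Morozov's discrepancy principle then guides the choice of α so that the data-penalty term matches the noise level.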

  2. Spatial frequency domain error budget

    SciTech Connect

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. 
If the machine…
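The shift from a single worst-case or RMS number to a frequency-resolved budget can be illustrated with a power spectral density. A minimal sketch, with an invented profile and invented sampling parameters (nothing here is from the paper):

```python
import numpy as np

# Sketch: describe a surface error profile by its spatial-frequency content
# rather than one RMS figure. Profile, sampling, and frequencies are invented.
def profile_psd(z, dx):
    """One-sided power spectral density of a 1-D profile z sampled every dx."""
    n = len(z)
    spectrum = np.fft.rfft(z - z.mean())
    freqs = np.fft.rfftfreq(n, d=dx)          # spatial frequencies, cycles/mm
    psd = (np.abs(spectrum) ** 2) * dx / n    # power per unit spatial frequency
    return freqs, psd

dx = 0.01                                      # sample spacing, mm
x = np.arange(0, 10, dx)                       # 10 mm trace, 1000 samples
form = 0.5 * np.sin(2 * np.pi * 0.1 * x)       # low-frequency form error
finish = 0.02 * np.sin(2 * np.pi * 20 * x)     # high-frequency finish error
freqs, psd = profile_psd(form + finish, dx)

# A frequency-resolved budget separates the two regimes that a single
# worst-case or RMS figure merges together.
low_power = psd[freqs < 1.0].sum()
high_power = psd[freqs >= 1.0].sum()
```

Here the form error dominates the low-frequency band while the finish error sits two decades higher in spatial frequency, exactly the distinction a scalar error budget cannot make.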

  3. Impacts of frequency increment errors on frequency diverse array beampattern

    NASA Astrophysics Data System (ADS)

    Gao, Kuandong; Chen, Hui; Shao, Huaizong; Cai, Jingye; Wang, Wen-Qin

    2015-12-01

    Unlike a conventional phased array, which provides only an angle-dependent beampattern, a frequency diverse array (FDA) employs a small frequency increment across the antenna elements and thus produces a range-angle-dependent beampattern. However, because of imperfect electronic devices it is difficult to ensure accurate frequency increments, and the array performance is consequently degraded by unavoidable frequency increment errors. In this paper, we investigate the impacts of frequency increment errors on the FDA beampattern. We derive the beampattern errors caused by deterministic frequency increment errors. For stochastic frequency increment errors, the corresponding upper and lower bounds of the FDA beampattern error are derived and verified by numerical results. Furthermore, the statistical characteristics of the FDA beampattern with random frequency increment errors obeying Gaussian and uniform distributions are also investigated.
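The degradation can be seen in a simplified FDA array-factor model. A hedged sketch, assuming a uniform linear array with element frequencies f0 + n·Δf; the array size, frequencies, and the 300 Hz Gaussian error level are invented for illustration, not taken from the paper:

```python
import numpy as np

# Simplified FDA beampattern model; parameters are illustrative only.
C = 3e8  # speed of light, m/s

def fda_af(theta, r, t, n_elems, f0, delta_f, d, df_err=None):
    """Normalized |array factor| of an FDA at angle theta, range r, time t."""
    n = np.arange(n_elems)
    errs = np.zeros(n_elems) if df_err is None else df_err
    fn = f0 + n * delta_f + errs                  # per-element carrier frequencies
    phase = 2 * np.pi * (fn * t - fn * r / C + fn * n * d * np.sin(theta) / C)
    return np.abs(np.exp(1j * phase).sum()) / n_elems

n_elems, f0, delta_f = 16, 10e9, 3e3
d = C / f0 / 2                                     # half-wavelength spacing
r0 = C / delta_f                                   # a range where the ideal pattern peaks

ideal = fda_af(0.0, r0, 0.0, n_elems, f0, delta_f, d)
rng = np.random.default_rng(0)
perturbed = fda_af(0.0, r0, 0.0, n_elems, f0, delta_f, d,
                   df_err=rng.normal(0.0, 300.0, n_elems))  # increment errors
```

At the chosen range the ideal pattern sums coherently to 1; random frequency-increment errors decorrelate the element phases and pull the mainlobe level down, which is the effect the paper bounds analytically.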

  4. Operational Single-Frequency GPS Error Maps

    NASA Astrophysics Data System (ADS)

    Bishop, G. J.; Doherty, P.; Decker, D.; Delay, S.; Sexton, E.; Citrone, P.; Scro, K.; Wilkes, R.

    2001-12-01

    The Air Force Research Laboratory and Detachment 11, Space & Missile Systems Center have implemented a new system of graphical products that provide easy-to-visualize displays of space weather effects on theater-based radio systems operating through the ionosphere. This system, the Operational Space Environment Network Display (OpSEND), is now producing its first four products at the 55th Space Weather Squadron (55SWXS) in Colorado Springs. One of these products, the OpSEND Estimated GPS Single-Frequency Error Map, provides a current specification (nowcast) and one-hour forecast of estimated positioning errors that result from inaccurate ionospheric correction and GPS constellation geometry. Two-frequency GPS receivers can measure ionospheric range errors due to ionospheric total electron content (TEC), but single-frequency receivers depend on a built-in Ionospheric Correction Algorithm (ICA) for ionospheric error mitigation. The ICA, developed at AFRL in the 1970s, corrects for roughly half of the ionospheric error. In the OpSEND GPS Single-Frequency Error Map, position error due to the ionosphere is based on the differences between ionospheric estimates from ICA and those generated by more accurate global ionospheric specification from the PRISM model, updated by real-time TEC data from a global set of monitor stations. Details and examples of the OpSEND system and the GPS Error Map will be presented, as well as results of initial GPS Error Map validation studies, comparing GPS error predictions and PRISM TEC specifications with observational data.

  5. Helical Gears Modified To Decrease Transmission Errors

    NASA Technical Reports Server (NTRS)

    Handschuh, R. F.; Coy, J. J.; Litvin, F. L.; Zhang, J.

    1993-01-01

    Tooth surfaces of helical gears modified, according to proposed design concept, to make gears more tolerant of misalignments and to improve distribution of contact stresses. Results in smaller transmission errors, with concomitant decreases in vibrations and noise and, possibly, increases in service lives.

  7. Frequency analysis of nonlinear oscillations via the global error minimization

    NASA Astrophysics Data System (ADS)

    Kalami Yazdi, M.; Hosseini Tehrani, P.

    2016-06-01

    The capacity and effectiveness of a modified variational approach, namely global error minimization (GEM), are illustrated in this study. For this purpose, the free oscillations of a rod rocking on a cylindrical surface and the Duffing-harmonic oscillator are treated. In order to validate and exhibit the merit of the method, the obtained result is compared with both the exact frequency and the outcomes of other well-known analytical methods. The comparison reveals that the first-order approximation leads to an acceptable relative error, especially for large initial conditions. The procedure can be promisingly applied to conservative nonlinear problems.
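The GEM idea, minimizing the integrated squared equation residual over one period of a trial solution, can be sketched numerically for the Duffing-harmonic oscillator u'' + u³/(1+u²) = 0. The one-term trial, amplitude, grid search, and quadrature below are a hedged illustration, not the paper's procedure:

```python
import numpy as np

# GEM sketch for u'' + u^3/(1+u^2) = 0 with trial u = A*cos(w*t).
def gem_error(w, A, n=2000):
    """Squared equation residual integrated over one period of the trial."""
    t = np.linspace(0.0, 2.0 * np.pi / w, n)
    u = A * np.cos(w * t)
    u_dd = -A * w**2 * np.cos(w * t)            # second derivative of the trial
    resid = u_dd + u**3 / (1.0 + u**2)
    return np.sum(resid**2) * (t[1] - t[0])     # simple Riemann-sum quadrature

A = 1.0
omegas = np.linspace(0.1, 1.2, 1101)
errors = np.array([gem_error(w, A) for w in omegas])
w_gem = omegas[np.argmin(errors)]               # first-order GEM frequency
```

The global error E(ω) has a clear interior minimum; the minimizing frequency plays the role of the first-order approximate frequency that the paper compares against exact and other analytical results.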

  8. Frequency of pediatric medication administration errors and contributing factors.

    PubMed

    Ozkan, Suzan; Kocaman, Gulseren; Ozturk, Candan; Seren, Seyda

    2011-01-01

    This study examined the frequency of pediatric medication administration errors and contributing factors. This research used the undisguised observation method and Critical Incident Technique. Errors and contributing factors were classified through the Organizational Accident Model. Errors were made in 36.5% of the 2344 doses that were observed. The most frequent errors were those associated with administration at the wrong time. According to the results of this study, errors arise from problems within the system.

  9. The Relative Frequency of Spanish Pronunciation Errors.

    ERIC Educational Resources Information Center

    Hammerly, Hector

    Types of hierarchies of pronunciation difficulty are discussed, and a hierarchy based on contrastive analysis plus informal observation is proposed. This hierarchy is less one of initial difficulty than of error persistence. One feature of this hierarchy is that, because of lesser learner awareness and very limited functional load, errors…

  10. Compensation Low-Frequency Errors in TH-1 Satellite

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Renxiang; Hu, Xin

    2016-06-01

    Topographic mapping products at 1:50,000 scale can be produced by satellite photogrammetry without ground control points (GCPs), which requires highly accurate exterior orientation elements. Usually, the attitudes among the exterior orientation elements are obtained from the attitude determination system on the satellite. Theoretical analysis and practice show that the attitude determination system exhibits not only high-frequency errors but also low-frequency errors related to the latitude of the satellite orbit and to time. The low-frequency errors degrade the location accuracy without GCPs, especially the horizontal accuracy. For the SPOT5 satellite, a latitudinal model was proposed to correct attitudes using data from approximately 20 calibration sites, and the location accuracy was improved. Low-frequency errors are also found in the Tian Hui 1 (TH-1) satellite. A method for compensating low-frequency errors is therefore proposed for the ground image processing of TH-1; it can detect and compensate the low-frequency errors automatically without using GCPs. This paper deals with the low-frequency errors in TH-1 as follows: first, the low-frequency errors of the attitude determination system are analyzed; second, compensation models are proposed within the bundle adjustment; finally, the method is verified using TH-1 data. The results show that the low-frequency errors of the attitude determination system can be compensated during bundle adjustment, improving the location accuracy without GCPs and contributing to consistent global location accuracy.
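The core of compensating an orbit-periodic, low-frequency attitude error can be sketched as fitting a slowly varying model to attitude residuals by linear least squares. Everything below (the a + b·sin + c·cos model, the period, the noise levels) is an invented illustration, not TH-1's actual bundle-adjustment model:

```python
import numpy as np

# Sketch: fit an orbit-periodic low-frequency error model to residuals.
def fit_low_frequency_error(t, resid, period):
    """Least-squares fit of resid ~ a + b*sin(w*t) + c*cos(w*t)."""
    w = 2.0 * np.pi / period
    A = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    return coef, A @ coef                       # model coefficients, fitted error

T = 5400.0                                       # assumed orbital period, s
t = np.linspace(0.0, 2.0 * T, 400)
true = 2e-5 + 5e-5 * np.sin(2 * np.pi * t / T + 0.3)   # synthetic slow drift, rad
rng = np.random.default_rng(1)
resid = true + rng.normal(0.0, 1e-5, t.size)     # plus high-frequency noise
coef, fitted = fit_low_frequency_error(t, resid, T)
compensated = resid - fitted                     # low-frequency part removed
```

Subtracting the fitted low-frequency component leaves only the high-frequency noise floor, mirroring how removing the slow attitude error improves GCP-free location accuracy.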

  11. Preventing medication errors in community pharmacy: frequency and seriousness of medication errors

    PubMed Central

    Knudsen, P; Herborg, H; Mortensen, A R; Knudsen, M; Hellebek, A

    2007-01-01

    Background Medication errors are a widespread problem which can, in the worst case, cause harm to patients. Errors can be corrected if documented and evaluated as a part of quality improvement. The Danish community pharmacies are committed to recording prescription corrections, dispensing errors and dispensing near misses. This study investigated the frequency and seriousness of these errors. Methods 40 randomly selected Danish community pharmacies collected data for a defined period. The data included four types of written report of incidents, three of which already existed at the pharmacies: prescription correction, dispensing near misses and dispensing errors. Data for the fourth type of report, on adverse drug events, were collected through a web‐based reporting system piloted for the project. Results There were 976 cases of prescription corrections, 229 cases of near misses, 203 cases of dispensing errors and 198 cases of adverse drug events. The error rate was 23/10 000 prescriptions for prescription corrections, 1/10 000 for dispensing errors and 2/10 000 for near misses. The errors that reached the patients were pooled for separate analysis. Most of these errors, and the potentially most serious ones, occurred in the transcription stage of the dispensing process. Conclusion Prescribing errors were the most frequent type of error reported. Errors that reached the patients were not frequent, but most of them were potentially harmful, and the absolute number of medication errors was high, as provision of medicine is a frequent event in primary care in Denmark. Patient safety could be further improved by optimising the opportunity to learn from the incidents described. PMID:17693678

  12. Digital frequency error detectors for OQPSK satellite modems

    NASA Astrophysics Data System (ADS)

    Ahmad, J.; Jeans, T. G.; Evans, B. G.

    1991-09-01

    Two algorithms for frequency error detection in OQPSK satellite modems are presented. The results of computer simulations in respect of acquisition and noise performance are given. These algorithms are suitable for DSP implementation and are applicable to mobile satellite systems in which significant Doppler shift is experienced.

  13. A modified Klobuchar model for single-frequency GNSS users over the polar region

    NASA Astrophysics Data System (ADS)

    Bi, Tong; An, Jiachun; Yang, Jian; Liu, Shulun

    2017-02-01

    For single-frequency Global Navigation Satellite System (GNSS) users, it is necessary to select a simple and effective broadcast ionospheric model to mitigate the ionospheric delay, which is one of the most serious error sources in GNSS measurement. The widely used Global Positioning System (GPS) Klobuchar model performs well in mid-latitudes; however, it is not applicable at high latitudes because of the more complex ionospheric structure over the polar region. Without introducing additional coefficients, a modified Klobuchar model is established for single-frequency GNSS users over the polar region by improving the nighttime term and the amplitude of the cosine term. The performance of the new model is validated against different ionospheric models and through their application to single-frequency single-point positioning (SPP), during different seasons and different levels of solar activity. The new model reduces the ionospheric error by 60% over the polar region, whereas GPS-Klobuchar even increases the ionospheric error in many cases. Over the polar region, the single-frequency SPP error using the new model is approximately 3 m in the vertical direction and 1 m in the horizontal direction, which is superior to GPS-Klobuchar. This study suggests that the modified Klobuchar model depicts the polar ionosphere more accurately and can achieve better positioning accuracy for single-frequency GNSS users over the polar region.
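The structure being modified, a flat nighttime term plus a daytime half-cosine, can be sketched in a few lines. The coefficients below are invented placeholders, not broadcast values, and this is the generic Klobuchar-style shape rather than the paper's polar modification:

```python
import math

# Klobuchar-style vertical ionospheric delay: night constant + daytime cosine.
def klobuchar_style_delay(t_local_s, amplitude_s, period_s,
                          night_s=5e-9, peak_local_s=50400.0):
    """Vertical ionospheric delay in seconds at local time t_local_s."""
    x = 2.0 * math.pi * (t_local_s - peak_local_s) / period_s
    if abs(x) < math.pi / 2.0:                   # daytime half-cosine window
        return night_s + amplitude_s * math.cos(x)
    return night_s                               # flat nighttime term

C = 299792458.0
noon = klobuchar_style_delay(50400.0, 20e-9, 72000.0)   # at the 14:00 peak
midnight = klobuchar_style_delay(7200.0, 20e-9, 72000.0)
range_error_m = (noon - midnight) * C            # day-night range difference, m
```

The paper's polar model keeps this no-extra-coefficients structure but retunes the nighttime term and the cosine amplitude, the two knobs visible in this sketch.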

  14. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope-frequency distributions and on two slope-frequency distributions from actual lunar data was examined in order to ensure that these errors do not cause excessive overestimates of the algebraic standard deviations of the slope-frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope-frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.

  15. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
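The estimation step, mixing the input against a reference oscillator and least-squares fitting the residual phase, can be sketched with a batch fit. This is a hedged illustration of the idea only (a simple phase-slope fit, not the patent's recursive adaptive algorithm); the sample rate and frequencies are invented:

```python
import numpy as np

# Sketch: estimate a frequency error from quadrature samples by mixing
# against a reference NCO and least-squares fitting the residual phase ramp.
fs = 1e4                                         # sample rate, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
f_true, f_ref = 1234.0, 1200.0                   # input and NCO frequencies
x = np.exp(1j * (2 * np.pi * f_true * t + 0.7))  # signal of interest
nco = np.exp(1j * 2 * np.pi * f_ref * t)         # reference (I) + 90-deg (Q)

# Quadrature mixing leaves a slow rotation at the frequency error.
baseband = x * np.conj(nco)
phase = np.unwrap(np.angle(baseband))            # residual phase ramp

# Least-squares line fit: slope = 2*pi*(frequency error), intercept = phase.
slope, intercept = np.polyfit(t, phase, 1)
f_err_hat = slope / (2 * np.pi)                  # estimated frequency error, Hz
```

Driving the NCO so that this estimated error goes to zero is the closed-loop step the patent describes; the residual error signal then doubles as a real-time confidence measure.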

  16. Improving transient performance of adaptive control architectures using frequency-limited system error dynamics

    NASA Astrophysics Data System (ADS)

    Yucelen, Tansel; De La Torre, Gerardo; Johnson, Eric N.

    2014-11-01

    Although adaptive control theory offers mathematical tools to achieve system performance without excessive reliance on dynamical system models, its applications to safety-critical systems can be limited due to poor transient performance and robustness. In this paper, we develop an adaptive control architecture to achieve stabilisation and command following of uncertain dynamical systems with improved transient performance. Our framework consists of a new reference system and an adaptive controller. The proposed reference system captures a desired closed-loop dynamical system behaviour modified by a mismatch term representing the high-frequency content between the uncertain dynamical system and this reference system, i.e., the system error. In particular, this mismatch term allows the frequency content of the system error dynamics to be limited, which is used to drive the adaptive controller. It is shown that this key feature of our framework yields fast adaptation without incurring high-frequency oscillations in the transient performance. We further show the effects of design parameters on the system performance, analyse closeness of the uncertain dynamical system to the unmodified (ideal) reference system, discuss robustness of the proposed approach with respect to time-varying uncertainties and disturbances, and make connections to gradient minimisation and classical control theory. A numerical example is provided to demonstrate the efficacy of the proposed architecture.
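The mechanism at the heart of the architecture, letting only the low-frequency content of the system error drive adaptation, can be isolated in a toy filter example. This is a hedged, standalone illustration of frequency-limiting an error signal, not the paper's reference-model construction; signals, cutoff, and time constants are invented:

```python
import numpy as np

# Toy illustration: a first-order low-pass keeps the slow mismatch that
# should drive adaptation and strips the high-frequency oscillation that
# would otherwise excite the adaptive law.
def lowpass(e, dt, tau):
    """Discrete first-order low-pass with time constant tau."""
    out = np.zeros_like(e)
    a = dt / (tau + dt)
    for k in range(1, len(e)):
        out[k] = out[k - 1] + a * (e[k] - out[k - 1])
    return out

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
slow = np.sin(2 * np.pi * 0.5 * t)              # low-frequency system error
fast = 0.5 * np.sin(2 * np.pi * 50.0 * t)       # high-frequency oscillation
e = slow + fast
e_limited = lowpass(e, dt, tau=0.1)             # frequency-limited system error
```

Feeding e_limited rather than e to the adaptive law is what allows high adaptation gains (fast adaptation) without the high-frequency transients the paper sets out to suppress.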

  17. Medication errors in ED: Do patient characteristics and the environment influence the nature and frequency of medication errors?

    PubMed

    Mitchell Scott, Belinda; Considine, Julie; Botti, Mari

    2014-11-01

    Medication safety is of increasing importance and understanding the nature and frequency of medication errors in the Emergency Department (ED) will assist in tailoring interventions which will make patient care safer. The challenge with the literature to date is the wide variability in the frequency of errors reported and the reliance on incident reporting practices of busy ED staff. A prospective, exploratory descriptive design using point prevalence surveys was used to establish the frequency of observed medication errors in the ED. In addition, data related to contextual factors such as ED patients, staffing and workload were also collected during the point prevalence surveys to enable the analysis of relationships between the frequency and nature of specific error types and patient and ED characteristics at the time of data collection. A total of 172 patients were included in the study: 125 of whom patients had a medication chart. The prevalence of medication errors in the ED studied was 41.2% for failure to apply patient ID bands, 12.2% for failure to document allergy status and 38.4% for errors of omission. The proportion of older patients in the ED did not affect the frequency of medication errors. There was a relationship between high numbers of ATS 1, 2 and 3 patients (indicating high levels of clinical urgency) and increased rates of failure to document allergy status. Medication errors were affected by ED occupancy, when cubicles in the ED were over 50% occupied, medication errors occurred more frequently. ED staffing affects the frequency of medication errors, there was an increase in failure to apply ID bands and errors of omission when there were unfilled nursing deficits and lower levels of senior medical staff were associated with increased errors of omission. 
Medication errors related to patient identification, allergy status and medication omissions occur more frequently in the ED when the ED is busy, has sicker patients and when the staffing is

  18. On the Standard Error of the Modified Biserial Correlation.

    ERIC Educational Resources Information Center

    Koopman, Raymond F.

    1983-01-01

    A paradoxical implication of Kraemer's expression for the large-sample standard error of Brogden's form of the biserial correlation is identified, and a new expression is given which does not imply the paradox. However, numerical evidence is presented which calls into question the correctness of the expression. (Author)

  19. Fehlerhaeufigkeit im Englischunterricht (Error Frequency in English Teaching)

    ERIC Educational Resources Information Center

    Heyder, Egon

    1976-01-01

    Research conducted at a German teachers' college revealed that in English instruction at a "Comprehensive" School, equal amounts of corrective measures were devoted to each of the various types of errors. It is recommended that differentiation be made between the importance of the categories of errors. (Text is in German.) (IFS/WGA)

  20. Single trial time-frequency domain analysis of error processing in post-traumatic stress disorder.

    PubMed

    Clemans, Zachary A; El-Baz, Ayman S; Hollifield, Michael; Sokhadze, Estate M

    2012-09-13

    Error processing studies in psychology and psychiatry are relatively common. Event-related potentials (ERPs) are often used as measures of error processing, two such response-locked ERPs being the error-related negativity (ERN) and the error-related positivity (Pe). The ERN and Pe occur following a committed error in reaction time tasks as low-frequency (4-8 Hz) electroencephalographic (EEG) oscillations registered at the midline fronto-central sites. We created an alternative method for analyzing error processing using time-frequency analysis in the form of a wavelet transform. A study was conducted in which subjects with PTSD and healthy controls completed a forced-choice task. Single-trial EEG data from errors in the task were processed using a continuous wavelet transform. Coefficients from the transform that corresponded to the theta range were averaged to isolate a theta waveform in the time-frequency domain. Measures called the time-frequency ERN and Pe were obtained from these waveforms for five different channels and then averaged to obtain a single time-frequency ERN and Pe for each error trial. The amplitude and latency of the time-frequency ERN and Pe were compared between the PTSD and control groups. A significant group effect was found on the amplitude of both measures. These results indicate that the developed single-trial time-frequency error analysis method is suitable for examining error processing in PTSD and possibly other psychiatric disorders.
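The pipeline, wavelet-transform a single error trial, average theta-band coefficients, and read off an amplitude/latency measure, can be sketched with a complex Morlet convolution. The sample rate, wavelet width, and the synthetic "EEG" burst are invented for illustration; the study's actual parameters and channels are not reproduced here:

```python
import numpy as np

# Sketch: single-trial theta-band (4-8 Hz) power via Morlet wavelets.
def morlet_power(sig, fs, freq, n_cycles=3):
    """Time course of power at `freq` Hz via complex Morlet convolution."""
    dur = n_cycles / freq
    tw = np.arange(-dur, dur, 1.0 / fs)
    sigma = n_cycles / (2.0 * np.pi * freq)
    wav = np.exp(2j * np.pi * freq * tw) * np.exp(-tw**2 / (2.0 * sigma**2))
    wav /= np.abs(wav).sum()
    return np.abs(np.convolve(sig, wav, mode="same")) ** 2

fs = 250.0
t = np.arange(-1.0, 1.0, 1.0 / fs)               # response-locked epoch, s
# Synthetic single trial: a theta burst ~50 ms after the (simulated) error.
eeg = np.sin(2 * np.pi * 6.0 * t) * np.exp(-((t - 0.05) / 0.1) ** 2)
theta = np.mean([morlet_power(eeg, fs, f) for f in (4, 5, 6, 7, 8)], axis=0)
ern_latency = t[np.argmax(theta)]                # latency of the theta power peak
```

The peak amplitude and latency of the averaged theta power trace are the single-trial "time-frequency ERN"-style measures that would then be compared across groups.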

  1. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.

    PubMed

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-11-24

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of the carrier phase measurement, and processing the carrier phase error properly is important for improving GNSS compass accuracy. In this work, we propose a dual-frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase errors in dual-frequency double-differenced carrier phase measurements according to the error difference between the two frequencies. The advantage of the proposed algorithm is that it needs no additional environment information and performs well against multiple large errors compared with previous research. The core of the proposed algorithm is removing the geometric distance from the dual-frequency carrier phase measurement; the carrier phase error is then separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the carrier phase measurements at different frequencies contain the same geometric distance. Then, we propose the DDGF detection to detect a large carrier phase error difference between the two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open-sky test, a man-made multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual-frequency carrier phase measurement by checking the error difference between the two frequencies. After the DDGF detection, the accuracy of the baseline vector in the GNSS compass is improved.
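The geometry-free principle, differencing the two frequencies so the shared geometric range cancels and a per-frequency error stands out, can be sketched with synthetic data. This is a hedged, single-difference stand-in for the paper's double-differenced DDGF formulation; all numbers (noise level, ionosphere, injected error, threshold) are invented:

```python
import numpy as np

# Sketch: geometry-free consistency check between dual-frequency phases.
F1, F2 = 1575.42e6, 1227.60e6                    # GPS L1/L2 carriers, Hz
GAMMA = (F1 / F2) ** 2                           # iono scaling between frequencies

rng = np.random.default_rng(3)
n = 200
geom = 2.0e7 + 100.0 * np.sin(np.linspace(0.0, 1.0, n))  # shared geometric range, m
iono = 3.0                                       # L1 iono delay, m (assumed constant)
noise = 0.003                                    # carrier phase noise, m
phi1 = geom - iono + rng.normal(0.0, noise, n)   # phase measurements in meters
phi2 = geom - GAMMA * iono + rng.normal(0.0, noise, n)
phi1[120] += 0.19                                # inject one large L1 error (~1 cycle)

gf = phi1 - phi2                                 # geometric distance cancels here
resid = np.abs(gf - np.median(gf))
flags = resid > 6.0 * noise * np.sqrt(2.0)       # flag the large error difference
```

Because the geometric term is identical at both frequencies, the injected large error is the only sample that breaks the cross-frequency consistency, which is what the DDGF detection exploits.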

  3. Research on controlling middle spatial frequency error of high gradient precise aspheric by pitch tool

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan; Zhong, Xianyun

    2016-09-01

    Extreme optical fabrication projects such as EUV and X-ray optical systems, which represent today's most advanced optical manufacturing technology, place special requirements on optical surface quality. In synchrotron radiation (SR) beamlines, mirrors of high shape accuracy are always used at grazing incidence. In nanolithography systems, middle-spatial-frequency errors always lead to small-angle scattering or flare that reduces the contrast of the image. The slope error is defined over a given horizontal length: the increase or decrease in form error at the end point relative to the starting point is measured. The quality of reflective optical elements can be described by their deviation from the ideal shape at different spatial frequencies. Usually one distinguishes between the figure error, the low-spatial-frequency part ranging from the aperture length down to 1 mm, and the mid- and high-spatial-frequency parts, from 1 mm to 1 μm and from 1 μm to some 10 nm, respectively. This paper first discusses the relationship between the slope error and the middle-spatial-frequency error, both of which describe the optical surface error along the form profile. Experiments are then conducted on a high-gradient precision asphere with a pitch tool, aiming to restrain the middle-spatial-frequency error.

  4. A Study of the Frequency and Communicative Effects of Errors in Spanish

    ERIC Educational Resources Information Center

    Guntermann, Gail

    1978-01-01

    A study conducted in El Salvador was designed to: determine which kinds of errors may be most frequently committed by learners who have reached a basic level of proficiency; discover which high-frequency errors most impede comprehension; and develop a procedure for eliciting evaluational reactions to errors from native listeners. (SW)

  5. Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.

    PubMed

    Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan

    2015-01-01

    Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors and special care should be taken when working on molars.

  6. Bounding higher-order ionosphere errors for the dual-frequency GPS user

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Walter, T.; Blanch, J.; Enge, P.

    2008-10-01

    Civil signals at L2 and L5 frequencies herald a new phase of Global Positioning System (GPS) performance. Dual-frequency users typically assume a first-order approximation of the ionosphere index of refraction, combining the GPS observables to eliminate most of the ranging delay, on the order of meters, introduced into the pseudoranges. This paper estimates the higher-order group and phase errors that occur from assuming the ordinary first-order dual-frequency ionosphere model using data from the Federal Aviation Administration's Wide Area Augmentation System (WAAS) network on a solar maximum quiet day and an extremely stormy day postsolar maximum. We find that during active periods, when ionospheric storms may introduce slant range delays at L1 as high as 100 m, the higher-order group errors in the L1-L2 or L1-L5 dual-frequency combination can be tens of centimeters. The group and phase errors are no longer equal and opposite, so these errors accumulate in carrier smoothing of the dual-frequency code observable. We show the errors in the carrier-smoothed code are due to higher-order group errors and, to a lesser extent, to higher-order phase rate errors. For many applications, this residual error is sufficiently small as to be neglected. However, such errors can impact geodetic applications as well as the error budgets of GPS Augmentation Systems providing Category III precision approach.
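The first-order model the abstract says dual-frequency users assume can be sketched numerically. In this sketch the L1/L2 frequencies are the real GPS carrier values, but the range and TEC numbers are illustrative assumptions, not taken from the paper:

```python
# Sketch of the standard first-order ionosphere-free combination.
F_L1 = 1575.42e6  # L1 carrier (Hz)
F_L2 = 1227.60e6  # L2 carrier (Hz)
K = 40.3          # first-order ionosphere constant (m * Hz^2 per el/m^2)

def iono_free(p1, p2, f1=F_L1, f2=F_L2):
    """Combine L1/L2 pseudoranges to cancel the first-order ionospheric delay."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

rho = 21_000_000.0  # true geometric range (m), made up
tec = 1.0e18        # slant total electron content (el/m^2), a stormy-day order of magnitude

p1 = rho + K * tec / F_L1**2  # L1 pseudorange: ~16 m of first-order delay
p2 = rho + K * tec / F_L2**2  # L2 pseudorange: larger delay at the lower frequency

print(p1 - rho)                 # first-order L1 delay, metres
print(iono_free(p1, p2) - rho)  # residual after combination: ~0 (first order cancels)
```

The centimetre-level higher-order group and phase terms the paper bounds are exactly what this first-order model leaves out.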

  7. "Coded and Uncoded Error Feedback: Effects on Error Frequencies in Adult Colombian EFL Learners' Writing"

    ERIC Educational Resources Information Center

    Sampson, Andrew

    2012-01-01

    This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…

  9. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    Low-frequency error is a key factor affecting the uncontrolled geometric positioning accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. First, we introduce the overall flow of on-orbit low-frequency error analysis and calibration, which includes detection of optical axis angle variation of the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Second, we use the optical axis angle change detection method to analyze how the low-frequency error varies. Third, we use relative calibration and information fusion among star sensors to unify the datum and output high-precision attitude. Finally, we construct the low-frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain satellite type are used. Test results demonstrate that the calibration model describes the variation of the low-frequency error well, and the uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is clearly improved after the step-wise calibration.

  10. Lock-in amplifier error prediction and correction in frequency sweep measurements.

    PubMed

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2007-01-01

    This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
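The paper's analytical prediction and double-sweep correction are not reproduced here, but the underlying effect, a lock-in amplifier's output filter lagging behind a swept reference, can be illustrated with a toy digital LIA. All system, filter, and sweep parameters below are made-up illustrative values:

```python
import math

def sweep_measurement(T, fs=10000, f0=10.0, f1=200.0, fc=50.0, f_lpf=2.0):
    """Toy digital lock-in amplifier driven by a linear frequency sweep.

    A first-order test system (cutoff fc) is excited by a swept sine; the
    LIA demodulates with the sweep phase through one-pole output filters
    (cutoff f_lpf), and the amplitude estimate at the final sweep
    frequency is returned.
    """
    n = int(T * fs)
    a_sys = math.exp(-2 * math.pi * fc / fs)     # test-system pole
    a_lpf = math.exp(-2 * math.pi * f_lpf / fs)  # LIA output-filter pole
    y = i_out = q_out = 0.0
    for k in range(n):
        t = k / fs
        phase = 2 * math.pi * (f0 * t + (f1 - f0) * t * t / (2 * T))
        x = math.sin(phase)                  # swept excitation
        y = a_sys * y + (1 - a_sys) * x      # system response
        i_out = a_lpf * i_out + (1 - a_lpf) * (y * math.sin(phase))
        q_out = a_lpf * q_out + (1 - a_lpf) * (y * math.cos(phase))
    return 2 * math.hypot(i_out, q_out)      # amplitude estimate

true_gain = 1 / math.sqrt(1 + (200 / 50) ** 2)   # |H| of the RC model at 200 Hz
err_slow = abs(sweep_measurement(10.0) - true_gain)
err_fast = abs(sweep_measurement(0.5) - true_gain)
print(err_slow, err_fast)  # the faster sweep shows a larger measurement error
```

The error grows with sweep speed and with the sluggishness of the LIA low-pass filters relative to the measured system, the dependence the paper's final-value-theorem analysis captures analytically.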

  11. Frequency, types, and direct related costs of medication errors in an academic nephrology ward in Iran.

    PubMed

    Gharekhani, Afshin; Kanani, Negin; Khalili, Hossein; Dashti-Khavidaki, Simin

    2014-09-01

    Medication errors are an ongoing problem among hospitalized patients, especially those with multiple comorbidities and polypharmacy, such as patients with renal diseases. This study evaluated the frequency, types, and direct related costs of medication errors in a nephrology ward and the role played by clinical pharmacists. During the study, clinical pharmacists detected, managed, and recorded the medication errors. Prescribing errors, including inappropriate drug, dose, or treatment duration, were gathered. To assess transcription errors, nursing charts were checked against physicians' orders. Administration errors were assessed by observing drug preparation, storage, and administration by nurses. The change in medication costs after implementing the clinical pharmacists' interventions was compared with the medication costs calculated as if the errors had continued up to the patients' discharge. More than 85% of patients experienced a medication error; the rate was 3.5 errors per patient and 0.18 errors per ordered medication. More than 95% of medication errors occurred at the prescription node. The most common prescribing errors were omission (26.9%), unauthorized drugs (18.3%), and low drug dosage or frequency (17.3%). Most medication errors involved cardiovascular drugs (24%), followed by vitamins and electrolytes (22.1%) and antimicrobials (18.5%). The number of medication errors correlated with the number of ordered medications and the length of hospital stay. Clinical pharmacists' interventions decreased patients' direct medication costs by 4.3%. About 22% of medication errors led to patient harm. In conclusion, clinical pharmacists' contributions in nephrology wards were of value in preventing medication errors and reducing medication costs.

  12. The frequency and potential causes of dispensing errors in a hospital pharmacy.

    PubMed

    Beso, Adnan; Franklin, Bryony Dean; Barber, Nick

    2005-06-01

    To determine the frequency and types of dispensing errors identified both at the final check stage and outside of a UK hospital pharmacy, to explore the reasons why they occurred, and to make recommendations for their prevention. A definition of a dispensing error and a classification system were developed. To study the frequency and types of errors, pharmacy staff recorded details of all errors identified at the final check stage during a two-week period; all errors identified outside of the department and reported during a one-year period were also recorded. During a separate six-week period, pharmacy staff making dispensing errors identified at the final check stage were interviewed to explore the causes; the findings were analysed using a model of human error. The main outcome measures were the percentage of dispensed items for which one or more dispensing errors were identified at the final check stage; the percentage for which an error was reported outside of the pharmacy department; and the active failures, error-producing conditions, and latent conditions that result in dispensing errors. One or more dispensing errors were identified at the final check stage in 2.1% of 4849 dispensed items, and outside of the pharmacy department in 0.02% of 194,584 items. The majority of errors identified at the final check stage involved slips in picking products, or mistakes in making assumptions about the products concerned. Factors contributing to the errors included labelling and storage of containers in the dispensary, interruptions and distractions, a culture where errors are seen as inevitable, and reliance on others to identify and rectify errors. Dispensing errors occur in about 2% of all dispensed items, and about 1 in 100 of these is missed by the final check. The impact on dispensing errors of developments such as automated dispensing systems should be evaluated.
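The abstract's "about 1 in 100" figure follows from its two reported rates. A quick check, under the assumption that both percentages can be read as per-item error rates:

```python
# Quick check of the "about 1 in 100 missed by the final check" claim.
internal_rate = 2.1 / 100    # errors caught at the final check, per dispensed item
escaped_rate = 0.02 / 100    # errors reported outside the pharmacy, per item

# Share of all dispensing errors that slipped past the final check:
missed = escaped_rate / (internal_rate + escaped_rate)
print(round(1 / missed))  # about 1 in 106, i.e. roughly "1 in 100"
```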

  13. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

    Full-field heterodyne interferometric measurement is becoming more practical as low-frequency heterodyne acousto-optic modulators replace complex electro-mechanical scanning devices. The optical element surface can be acquired directly by synchronously detecting the signal phase at each pixel, because standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer. Instead of the traditional four-step phase-shifting calculation, Fourier spectral analysis is used for phase extraction, which is less sensitive to sources of uncertainty and yields higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described and their advantages and disadvantages specified. A heterodyne interferometer must combine two beams of different frequencies to produce interference, which introduces a variety of optical heterodyne frequency errors; frequency mixing error and beat frequency error are two unavoidable kinds. The effects of frequency mixing error on surface measurement are derived, and the relationship between phase-extraction accuracy and these errors is calculated. The tolerances for the extinction ratio of the polarization splitting prism and the signal-to-noise ratio of stray light are given. The phase-extraction error caused by beat frequency shifting in the Fourier analysis is also derived and calculated. We further propose an improved phase-extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window corrects the heterodyne signal phase extraction. Simulation results show that this method effectively suppresses the degradation of phase extraction caused by beat frequency error and reduces the measurement uncertainty of the full-field heterodyne interferometer.

  14. Frequency of medication errors in an emergency department of a large teaching hospital in southern Iran

    PubMed Central

    Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid

    2014-01-01

    This study was conducted to determine the frequency of medication errors (MEs) occurring in the tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over a course of 54 six-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, the frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 (68.5%) MEs were recorded in total; in other words, 3.5 errors per patient and almost 0.69 errors per medication dose occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process (37.6%), followed by prescription errors (21.1%) and transcription errors (10%). Omission (7.6%) and wrong time error (4.4%) were the most frequent administration errors. Less-experienced nurses (P=0.04), a higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors accounted for the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and assigning the more experienced of them to EDs can help reduce nursing errors, and addressing the remaining shortcomings with further research should result in a reduction of MEs in EDs. PMID:25525391

  15. Error-free demodulation of pixelated carrier frequency interferograms.

    PubMed

    Servin, M; Estrada, J C

    2010-08-16

    Recently, pixelated spatial carrier interferograms have come into use in optical metrology and are an industry standard nowadays. The main feature of these interferometers is that each pixel of the video camera may be phase-modulated by any (however fixed) desired angle within [0, 2π] radians. The phase at each pixel is shifted without cross-talk from its immediate neighborhood. This has opened new possibilities for experimental spatial wavefront modulation not dreamed of before, because we are no longer constrained to introduce a spatial carrier using a tilted plane: any useful mathematical model that phase-modulates the testing wavefront on a pixel-wise basis can be used. However, we are nowadays faced with the problem that these pixelated interferograms have not been correctly demodulated to obtain an error-free (exact) wavefront estimation. The purpose of this paper is to offer the general theory that allows one to demodulate, in an exact way, pixelated spatial-carrier interferograms modulated by any thinkable two-dimensional phase carrier.

  16. Disentangling the impacts of outcome valence and outcome frequency on the post-error slowing

    PubMed Central

    Wang, Lijun; Tang, Dandan; Zhao, Yuanfang; Hitchman, Glenn; Wu, Shanshan; Tan, Jinfeng; Chen, Antao

    2015-01-01

    Post-error slowing (PES) reflects efficient outcome monitoring, manifested as slower reaction times after errors. The cognitive control account assumes that PES depends on error information, whereas the orienting account posits that it depends on error frequency. This raises the question of how outcome valence and outcome frequency separately influence the generation of PES. To address this issue, we varied the probability of observed errors committed by a "partner" (50/50 and 20/80, correct/error) in an observation-execution task and investigated the corresponding behavioral and neural effects. On each trial, participants first viewed the outcome of a flanker run supposedly performed by the partner, and then performed a flanker run themselves. We observed PES in both error rate conditions. However, electroencephalographic data suggested that error-related potentials (oERN and oPe) and rhythmic oscillations associated with attentional processes (alpha band) were sensitive to outcome valence and outcome frequency, respectively. Importantly, oERN amplitude was positively correlated with PES. Taken together, these findings support the assumption of the cognitive control account, suggesting that outcome valence and outcome frequency are both involved in PES. Moreover, the generation of PES is indexed by the oERN, whereas the modulation of PES size is reflected in the alpha band. PMID:25732237

  17. Phase-modulation method for AWG phase-error measurement in the frequency domain.

    PubMed

    Takada, Kazumasa; Hirose, Tomohiro

    2009-12-15

    We report a phase-modulation method for measuring arrayed waveguide grating (AWG) phase error in the frequency domain. By combining the method with a digital sampling technique that we have already reported, we can measure the phase error to an accuracy of ±0.055 rad for the central 90% of the waveguides in the array, even when no carrier frequencies are generated in the beat signal from the interferometer.

  18. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error concealment (EC) is a technique applied at the decoder side to hide transmission errors by analyzing the spatial or temporal information in the available video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but they add delay and redundant data, so error concealment is often the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both evaluations use video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The original video frames and the error frames were processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.
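PSNR, the first quality metric the abstract uses, takes only a few lines to compute. This is the standard definition, applied here to tiny made-up "frames" for illustration:

```python
import math

def psnr(reference, test, peak=255):
    """Peak signal-to-noise ratio between two equal-length 8-bit pixel lists."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

frame = [100, 120, 130, 140]            # a tiny made-up "frame"
concealed_good = [101, 119, 131, 139]   # small residual error after concealment
concealed_poor = [90, 135, 110, 160]    # larger residual error

print(psnr(frame, concealed_good))  # higher PSNR = better concealment
print(psnr(frame, concealed_poor))
```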

  19. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1988-01-01

    Low-frequency errors of a commonly used non-stationary stochastic model (the uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise modulated by the envelope first and then filtered).
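The two model constructions the abstract contrasts differ only in the order of modulation and filtering. A minimal sketch with an assumed envelope and a one-pole filter (both illustrative stand-ins, not the paper's models):

```python
import math
import random

random.seed(1)

def envelope(t, rise=1.0, total=5.0):
    """Made-up ground-motion envelope: linear rise, then exponential decay."""
    return t / rise if t < rise else math.exp(-(t - rise) / (total - rise))

def one_pole(xs, alpha):
    """First-order low-pass filter standing in for the ground filter."""
    y, out = 0.0, []
    for x in xs:
        y = alpha * y + (1 - alpha) * x
        out.append(y)
    return out

fs, T = 200, 8.0
n = int(fs * T)
white = [random.gauss(0.0, 1.0) for _ in range(n)]
env = [envelope(i / fs) for i in range(n)]
alpha = math.exp(-2 * math.pi * 2.0 / fs)  # ~2 Hz filter pole

# Uniformly modulated filtered white noise: filter first, then modulate.
uniform = [e * y for e, y in zip(env, one_pole(white, alpha))]

# Filtered shot-noise type (the construction the paper recommends):
# modulate by the envelope first, then filter.
shot = one_pole([e * x for e, x in zip(env, white)], alpha)

print(len(uniform), len(shot))
```

Swapping the order matters because filtering after modulation lets the filter shape the low-frequency content of the modulated process itself, which is where the abstract locates the bias of the uniformly modulated model.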

  20. The frequency of intravenous medication administration errors related to smart infusion pumps: a multihospital observational study.

    PubMed

    Schnock, Kumiko O; Dykes, Patricia C; Albert, Jennifer; Ariosto, Deborah; Call, Rosemary; Cameron, Caitlin; Carroll, Diane L; Drucker, Adrienne G; Fang, Linda; Garcia-Palm, Christine A; Husch, Marla M; Maddox, Ray R; McDonald, Nicole; McGuire, Julie; Rafie, Sally; Robertson, Emilee; Saine, Deb; Sawyer, Melinda D; Smith, Lisa P; Stinger, Kristy Dixon; Vanderveen, Timothy W; Wade, Elizabeth; Yoon, Catherine S; Lipsitz, Stuart; Bates, David W

    2017-02-01

    Intravenous medication errors persist despite the use of smart pumps. This suggests the need for a standardised methodology for measuring errors and highlights the importance of identifying issues around smart pump medication administration in order to improve patient safety. We conducted a multisite study to investigate the types and frequency of intravenous medication errors associated with smart pumps in the USA. Ten hospitals of various sizes using smart pumps from a range of vendors participated. Data were collected using a prospective point prevalence approach to capture errors associated with medications administered via smart pumps and to evaluate their potential for harm. A total of 478 patients and 1164 medication administrations were assessed. Of the observed infusions, 699 (60%) had one or more errors associated with their administration. Identified errors such as labelling errors and bypassing the smart pump and the drug library were predominantly associated with violations of hospital policy; such practices can result in medication errors. Errors were classified according to the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index: 1 error of category E (0.1%), 4 of category D (0.3%), and 492 of category C (excluding deviations from hospital policy) (42%) were identified. Of these, unauthorised medication, bypassing the smart pump, and wrong rate were the most frequent errors. We identified a high rate of error in the administration of intravenous medications despite the use of smart pumps, although relatively few errors were potentially harmful. The results of this study will be useful in developing interventions to eliminate errors in the intravenous medication administration process. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  1. Factors modifying the frequency of spontaneous activity in gastric muscle.

    PubMed

    Suzuki, H; Kito, Y; Hashitani, H; Nakamura, E

    2006-11-01

    The cellular mechanisms that determine the frequency of spontaneous activity were investigated in gastric smooth muscles isolated from the guinea-pig. Intact antral muscle generated slow waves periodically; the interval between slow waves decreased exponentially with depolarization of the membrane, reaching a steady value of about 7 s. Isolated circular muscle bundles produced slow potentials spontaneously or in response to depolarizing current stimuli. Evoked slow potentials appeared in an all-or-none fashion, with a refractory period of approximately 2-3 s. Low concentrations of chemicals that modify intracellular signalling revealed that the refractory period was causally related to the activity of protein kinase C (PKC): activation of PKC increased, and inhibition of PKC decreased, the frequency of slow potentials. Chemicals that inhibit mitochondrial function reduced the frequency of slow waves. Inhibition of internal Ca(2+)-store activity decreased the amplitude, but not the frequency, of slow potentials, suggesting that the amplitude is causally related to Ca(2+) release from the internal store. The results suggest that changes in [Ca(2+)](i) caused by mitochondrial activity may play a key role in determining the frequency of spontaneous activity in gastric pacemaker cells.

  2. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry measures all parts of the human body surface; the measured data form the basis for analysis and study of the human body, for establishing and modifying garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are collected, and the data errors are analyzed by examining error frequency and by applying the analysis-of-variance method from mathematical statistics. The paper also determines the accuracy of the measured data and the difficulty of measuring particular parts of the human body, studies the causes of data errors, and summarizes the key points for minimizing errors. By analyzing measured data on the basis of error frequency, the paper provides reference material for the development of the garment industry.

  3. Analysis of bit error rate for modified T-APPM under weak atmospheric turbulence channel

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Zhang, Qi; Wang, Yong-jun; Liu, Bo; Zhang, Li-jia; Wang, Kai-min; Xiao, Fei; Deng, Chao-gong

    2013-12-01

    T-APPM combines TCM (trellis-coded modulation) with APPM (amplitude pulse-position modulation) and has broad application prospects in space optical communication. Set partitioning, used in the standard T-APPM algorithm, is optimal for multi-carrier systems, but whether it remains optimal for APPM, a single-carrier scheme, is unknown. To address this question, we first study the atmospheric channel model under weak turbulence; we then propose a modified T-APPM algorithm that uses Gray-code mapping instead of set-partitioning mapping; finally, we simulate both algorithms with the Monte Carlo method. Simulation results show that, at a bit error rate of 10^-4, the modified T-APPM algorithm achieves a 0.4 dB gain in SNR, effectively improving the system's error performance.
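The Gray-code mapping substituted for set partitioning has a simple defining property: adjacent symbol indices differ in exactly one bit, so the most likely demodulation error (a neighbouring slot) costs only a single bit error. A minimal sketch, with 8-PPM assumed purely for illustration:

```python
def gray(n):
    """Map a natural binary index to its Gray code."""
    return n ^ (n >> 1)

# For 8-PPM symbol indices (an assumed illustration, not the paper's full
# T-APPM constellation), adjacent Gray-mapped symbols differ in one bit.
codes = [gray(i) for i in range(8)]
print([format(c, "03b") for c in codes])  # 000 001 011 010 110 111 101 100

for a, b in zip(codes, codes[1:]):
    assert bin(a ^ b).count("1") == 1  # Hamming distance 1 between neighbours
```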

  4. Reducing the error of geoid undulation computations by modifying Stokes' function

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1980-01-01

    The truncation theory as it pertains to the calculation of geoid undulations based on Stokes' integral, but from limited gravity data, is reexamined. Specifically, the improved procedures of Molodenskii et al. are shown through numerical investigations to yield substantially smaller errors than the conventional method that is often applied in practice. In this improved method, as well as in a simpler alternative to the conventional approach, the Stokes' kernel is suitably modified in order to accelerate the rate of convergence of the error series. These modified methods, however, effect a reduction in the error only if a set of low-degree potential harmonic coefficients is utilized in the computation. Consider, for example, the situation in which gravity anomalies are given in a cap of radius 10 deg and the GEM 9 (20,20) potential field is used. Then, typically, the error in the computed undulation (aside from the spherical approximation and errors in the gravity anomaly data) according to the conventional truncation theory is 1.09 m; with Meissl's modification it reduces to 0.41 m, while Molodenskii's improved method gives 0.45 m. A further alteration of Molodenskii's method is developed and yields an RMS error of 0.33 m. These values reflect the effect of the truncation, as well as the errors in the GEM 9 harmonic coefficients. The considerable improvement, suggested by these results, of the modified methods over the conventional procedure is verified with actual gravity anomaly data in two oceanic regions, where the GEOS-3 altimeter geoid serves as the basis for comparison. The optimal method of truncation, investigated by Colombo, is extremely ill-conditioned. It is shown that with no corresponding regularization, this procedure is inapplicable.
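For reference, the standard form of Stokes' integral and the truncation coefficients the modified-kernel methods aim to shrink can be written as follows (textbook formulation with assumed notation, not reproduced from the paper):

```latex
% Stokes' formula for the geoid undulation from gravity anomalies \Delta g:
N = \frac{R}{4\pi\gamma}\iint_{\sigma} \Delta g \, S(\psi)\, d\sigma .
% With data available only inside a spherical cap of radius \psi_0, the
% neglected remainder (the truncation error) expands in a Legendre series
% with coefficients
Q_n(\psi_0) = \int_{\psi_0}^{\pi} S(\psi)\, P_n(\cos\psi)\, \sin\psi \, d\psi ,
% and the modified methods replace S(\psi) by a kernel S^{*}(\psi) whose
% coefficients Q_n^{*}(\psi_0) decay faster, accelerating convergence of the
% error series when low-degree potential coefficients are available.
```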

  5. How allele frequency and study design affect association test statistics with misrepresentation errors.

    PubMed

    Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael

    2014-04-01

    We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central $\chi^2$ distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct $p$-values.
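The mechanism the abstract describes, a scaled non-central chi-square statistic judged against the nominal central chi-square (1 df) critical value, can be illustrated by Monte Carlo. The scaling factor and non-centrality used below are made-up values, not the paper's formulae:

```python
import math
import random

random.seed(7)

CHI2_1_CRIT = 3.841  # 95th percentile of the central chi-square with 1 df

def simulated_type1(scale, noncentrality, trials=200_000):
    """Monte Carlo rate at which a scale * noncentral-chi-square(1 df, nc)
    statistic exceeds the nominal central chi-square critical value."""
    delta = math.sqrt(noncentrality)
    hits = 0
    for _ in range(trials):
        z = random.gauss(0.0, 1.0)
        if scale * (z + delta) ** 2 > CHI2_1_CRIT:
            hits += 1
    return hits / trials

rate_clean = simulated_type1(1.0, 0.0)  # no genotyping error: ~nominal 5% level
rate_error = simulated_type1(1.1, 0.5)  # hypothetical error-induced scale/shift
print(rate_clean, rate_error)           # the second rate is clearly inflated
```

A scale slightly above 1 combined with a modest non-centrality already more than doubles the false-positive rate, the inflation the paper quantifies exactly.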

  6. Correction of Frequency-Dependent Nonlinear Errors in Direct-Conversion Transceivers

    DTIC Science & Technology

    2016-03-31

    Correction of Frequency-Dependent Nonlinear Errors in Direct-Conversion Transceivers. Blake James & Caleb Fulton, Advanced Radar Research Center, University of Oklahoma, Norman, Oklahoma, USA, 73019. pyraminxrox@ou.edu, fulton@ou.edu. Abstract: Correction of nonlinear and frequency dependent... frequency-dependent nonlinear distortion in modern highly digital phased arrays. The work presented here is done in the context of calibrating the

  7. Crosslinking EEG time-frequency decomposition and fMRI in error monitoring.

    PubMed

    Hoffmann, Sven; Labrenz, Franziska; Themann, Maria; Wascher, Edmund; Beste, Christian

    2014-03-01

    Recent studies implicate a common response monitoring system that is active during both erroneous and correct responses. Converging evidence from time-frequency decompositions of the response-related ERP reveals that evoked theta activity at fronto-central electrode positions differentiates correct from erroneous responses in simple as well as more complex tasks. However, it is still unclear how different electrophysiological parameters of error processing, especially at the level of neural oscillations, are related to, or predictive of, BOLD signal changes reflecting error processing at a functional-neuroanatomical level. The present study provides crosslinks between time-domain information, time-frequency information, the fMRI BOLD signal, and behavioral parameters in a task examining error monitoring of mistakes in a mental rotation task. The results show that BOLD signal changes reflecting error processing at a functional-neuroanatomical level are best predicted by evoked oscillations in the theta frequency band. Although the fMRI results account for an involvement of the anterior cingulate cortex, middle frontal gyrus, and insula in error processing, the correlation of evoked oscillations and BOLD signal was restricted to a coupling of evoked theta and anterior cingulate cortex BOLD activity. The current results indicate that although a distributed functional-neuroanatomical network mediates error processing, only distinct parts of this network modulate the electrophysiological properties of error monitoring.

  8. Frequency-domain correction of sensor dynamic error for step response.

    PubMed

    Yang, Shuang-Long; Xu, Ke-Jun

    2012-11-01

    To obtain accurate results in dynamic measurements, sensors must have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damping ratios and low natural frequencies. In this case, dynamic error correction methods can be applied to the sensor responses to eliminate the effect of their dynamic characteristics. Frequency-domain correction of sensor dynamic error is a common method. Using the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from step response calibration data, because of the leakage error and invalid FCF values caused by the periodic extension of the finite-length step input-output data. To solve these problems, data splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps as well as the sensor dynamic error correction procedure using the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind tunnel strain gauge balance to verify its effectiveness. The results show that the settling time of the balance step response is shortened to 10 ms (less than 1/30 of its value before correction) after frequency-domain correction, and the overshoot falls within 5% (less than 1/10 of its value before correction). The dynamic measurement accuracy of the balance is improved significantly.
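The core idea of frequency-domain correction, estimating a correction function as the ratio of input to output spectra from a calibration run and then multiplying measured spectra by it, can be sketched in a few lines. This toy uses a first-order sensor model and a pulse (rather than step) calibration input to sidestep the leakage problem the abstract discusses; none of the parameters or signals are the paper's actual procedure:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (fine for small n)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def sensor(u, a=0.9):
    """Sluggish first-order sensor model: y[n] = a*y[n-1] + (1-a)*u[n]."""
    y, out = 0.0, []
    for x in u:
        y = a * y + (1 - a) * x
        out.append(y)
    return out

N = 128

# Calibration run: a unit-pulse input (it returns to zero, so the periodic
# extension causes no leakage) and the sensor's recorded response.
u_cal = [1.0] + [0.0] * (N - 1)
fcf = [u / y for u, y in zip(dft(u_cal), dft(sensor(u_cal)))]  # correction function

# Measurement run: a rectangular bump, smeared by the sensor, then corrected
# by multiplying its spectrum by the FCF and transforming back.
true_u = [1.0 if 20 <= i < 30 else 0.0 for i in range(N)]
y_meas = sensor(true_u)
corrected = idft([f * Y for f, Y in zip(fcf, dft(y_meas))])

residual = max(abs(c - t) for c, t in zip(corrected, true_u))
distortion = max(abs(y - t) for y, t in zip(y_meas, true_u))
print(distortion, residual)  # large dynamic error before correction, small after
```

The step input the paper deals with does not return to zero within the record, which is exactly why its periodic extension causes the leakage and invalid FCF values the proposed data-splicing and interpolation steps are designed to fix.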

  9. Dependence of error sensitivity of frequency on bias voltage in force-balanced micro accelerometer

    NASA Astrophysics Data System (ADS)

    Chen, Lili; Zhou, Wu

    2013-06-01

    To predict more precisely the frequency of a force-balanced micro accelerometer under different bias voltages, the effects of bias voltage on the error sensitivity of frequency are studied. The resonance frequency of the accelerometer under closed-loop control is derived according to its operating principle, and its error sensitivity is derived and analyzed for over-etched structures according to the characteristics of deep reactive ion etching (DRIE). Based on the theoretical results, a micro accelerometer was fabricated and tested to study the influences of the AC and DC bias voltages on sensitivity, respectively. Experimental results indicate that the relative errors between test data and theory are less than 7%, and the fluctuation of the error sensitivity over the voltage adjustment range is less than 0.01 μm⁻¹. It is concluded that the error sensitivity computed from the designed structure and circuit parameters and the process error can be used to predict the frequency of the accelerometer without needing to consider the influence of bias voltage.

  10. Direct measurement of the poliovirus RNA polymerase error frequency in vitro

    SciTech Connect

    Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B.

    1988-02-01

    The fidelity of RNA replication by the poliovirus-RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of a noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high, depending on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.
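    The error frequency defined in the abstract is a simple ratio; a minimal sketch (the incorporation amounts below are illustrative placeholders, not the paper's measurements) might look like:

    ```python
    # Error frequency as defined in the abstract: noncomplementary incorporation
    # divided by total (complementary + noncomplementary) incorporation.
    def error_frequency(noncomplementary, complementary):
        """Fraction of noncomplementary nucleotide among all incorporated."""
        return noncomplementary/(noncomplementary + complementary)

    # Hypothetical incorporation amounts (e.g. pmol) under two reaction conditions
    low = error_frequency(0.7, 4999.3)    # milder condition (illustrative)
    high = error_frequency(3.5, 4996.5)   # 7.0 mM Mg2+, pH 8.0 (illustrative)
    fold_increase = high/low              # the abstract reports a fivefold increase
    ```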

  11. Inverse Material Identification in Coupled Acoustic-Structure Interaction using a Modified Error in Constitutive Equation Functional

    PubMed Central

    Warner, James E.; Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc

    2014-01-01

    This work focuses on the identification of heterogeneous linear elastic moduli in the context of frequency-domain, coupled acoustic-structure interaction (ASI), using either solid displacement or fluid pressure measurement data. The approach postulates the inverse problem as an optimization problem where the solution is obtained by minimizing a modified error in constitutive equation (MECE) functional. The latter measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, while incorporating the measurement data as additional quadratic error terms. We demonstrate two strategies for selecting the MECE weighting coefficient to produce regularized solutions to the ill-posed identification problem: 1) the discrepancy principle of Morozov, and 2) an error-balance approach that selects the weight parameter as the minimizer of another functional involving the ECE and the data misfit. Numerical results demonstrate that the proposed methodology can successfully recover elastic parameters in 2D and 3D ASI systems from response measurements taken in either the solid or fluid subdomains. Furthermore, both regularization strategies are shown to produce accurate reconstructions when the measurement data is polluted with noise. The discrepancy principle is shown to produce nearly optimal solutions, while the error-balance approach, although not optimal, remains effective and does not need a priori information on the noise level. PMID:25339790

  12. Inverse material identification in coupled acoustic-structure interaction using a modified error in constitutive equation functional

    NASA Astrophysics Data System (ADS)

    Warner, James E.; Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc

    2014-09-01

    This work focuses on the identification of heterogeneous linear elastic moduli in the context of frequency-domain, coupled acoustic-structure interaction (ASI), using either solid displacement or fluid pressure measurement data. The approach postulates the inverse problem as an optimization problem where the solution is obtained by minimizing a modified error in constitutive equation (MECE) functional. The latter measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, while incorporating the measurement data as additional quadratic error terms. We demonstrate two strategies for selecting the MECE weighting coefficient to produce regularized solutions to the ill-posed identification problem: 1) the discrepancy principle of Morozov, and 2) an error-balance approach that selects the weight parameter as the minimizer of another functional involving the ECE and the data misfit. Numerical results demonstrate that the proposed methodology can successfully recover elastic parameters in 2D and 3D ASI systems from response measurements taken in either the solid or fluid subdomains. Furthermore, both regularization strategies are shown to produce accurate reconstructions when the measurement data is polluted with noise. The discrepancy principle is shown to produce nearly optimal solutions, while the error-balance approach, although not optimal, remains effective and does not need a priori information on the noise level.

  13. English word frequency and recognition in bilinguals: Inter-corpus comparison and error analysis.

    PubMed

    Shi, Lu-Feng

    2015-01-01

    This study is the second of a two-part investigation of lexical effects on bilinguals' performance on a clinical English word recognition test, focusing on word-frequency effects using counts provided by four corpora. Frequency of occurrence was obtained for 200 NU-6 words from the Hoosier mental lexicon (HML) and three contemporary corpora: the American National Corpus, the Hyperspace Analogue to Language (HAL), and SUBTLEXus. Correlation analysis was performed between word frequency and error rate. Ten monolinguals and 30 bilinguals participated; bilinguals were further grouped according to their age of English acquisition and length of schooling/working in English. Word frequency significantly affected word recognition in bilinguals who acquired English late and had limited schooling/working in English. When making errors, bilinguals tended to replace the target word with a word of higher frequency. Overall, the newer corpora outperformed the HML in predicting error rate. Frequency counts provided by contemporary corpora thus predict bilinguals' recognition of English monosyllabic words, and word frequency helps explain the top replacement words for misrecognized targets. Word-frequency effects are especially prominent for bilinguals who were born and educated abroad.

  14. A modified technique to reduce tibial keel cutting errors during an Oxford unicompartmental knee arthroplasty.

    PubMed

    Inui, Hiroshi; Taketomi, Shuji; Tahara, Keitarou; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2017-03-01

    Bone cutting errors can cause malalignment of unicompartmental knee arthroplasties (UKA). Although the extent of tibial malalignment due to horizontal cutting errors has been well reported, there is a lack of studies evaluating malalignment as a consequence of keel cutting errors, particularly in the Oxford UKA. The purpose of this study was to examine keel cutting errors during Oxford UKA placement using a navigation system and to clarify whether two different tibial keel cutting techniques would have different error rates. The alignment of the tibial cut surface after a horizontal osteotomy and the surface of the tibial trial component was measured with a navigation system. Cutting error was defined as the angular difference between these measurements. The following two techniques were used: the standard "pushing" technique in 83 patients (group P) and a modified "dolphin" technique in 41 patients (group D). In all 123 patients studied, the mean absolute keel cutting error was 1.7° and 1.4° in the coronal and sagittal planes, respectively. In group P, there were 22 outlier patients (27 %) in the coronal plane and 13 (16 %) in the sagittal plane. Group D had three outlier patients (8 %) in the coronal plane and none (0 %) in the sagittal plane. Significant differences were observed in the outlier ratio of these techniques in both the sagittal (P = 0.014) and coronal (P = 0.008) planes. Our study demonstrated overall keel cutting errors of 1.7° in the coronal plane and 1.4° in the sagittal plane. The "dolphin" technique was found to significantly reduce keel cutting errors on the tibial side. This technique will be useful for accurate component positioning and therefore improve the longevity of Oxford UKAs. Retrospective comparative study, Level III.

  15. Real-time drift error compensation in a self-reference frequency-scanning fiber interferometer

    NASA Astrophysics Data System (ADS)

    Tao, Long; Liu, Zhigang; Zhang, Weibo; Liu, Zhe; Hong, Jun

    2017-01-01

    In order to eliminate fiber drift errors in a frequency-scanning fiber interferometer, we propose a self-reference frequency-scanning fiber interferometer composed of two fiber Michelson interferometers sharing common fiber optical paths. One, the reference interferometer, monitors the optical path length drift in real time and establishes a fixed measurement origin. The other, the measurement interferometer, acquires the measurement information from the target. Because the optical path differences measured by the reference and measurement interferometers via frequency-scanning interferometry include the same fiber drift errors, those errors can be eliminated by subtracting the former optical path difference from the latter. A prototype interferometer was developed in our research, and experimental results demonstrate its robustness and stability.

  16. Online public reactions to frequency of diagnostic errors in US outpatient care

    PubMed Central

    Giardina, Traber Davis; Sarkar, Urmimala; Gourley, Gato; Modi, Varsha; Meyer, Ashley N.D.; Singh, Hardeep

    2016-01-01

    Background Diagnostic errors pose a significant threat to patient safety but little is known about public perceptions of diagnostic errors. A study published in BMJ Quality & Safety in 2014 estimated that diagnostic errors affect at least 5% of US adults (or 12 million) per year. We sought to explore online public reactions to media reports on the reported frequency of diagnostic errors in the US adult population. Methods We searched the World Wide Web for any news article reporting findings from the study. We then gathered all the online comments made in response to the news articles to evaluate public reaction to the newly reported diagnostic error frequency (n=241). Two coders conducted content analyses of the comments and an experienced qualitative researcher resolved differences. Results Overall, there were few comments made regarding the frequency of diagnostic errors. However, in response to the media coverage, 44 commenters shared personal experiences of diagnostic errors. Additionally, commentary centered on diagnosis-related quality of care as affected by two emergent categories: (1) US health care providers (n=79; 63 commenters) and (2) US health care reform-related policies, most commonly the Affordable Care Act (ACA) and insurance/reimbursement issues (n=62; 47 commenters). Conclusion The public appears to have substantial concerns about the impact of the ACA and other reform initiatives on the diagnosis-related quality of care. However, policy discussions on diagnostic errors are largely absent from the current national conversation on improving quality and safety. Because outpatient diagnostic errors have emerged as a major safety concern, researchers and policymakers should consider evaluating the effects of policy and practice changes on diagnostic accuracy. PMID:27347474

  17. Design methodology accounting for fabrication errors in manufactured modified Fresnel lenses for controlled LED illumination.

    PubMed

    Shim, Jongmyeong; Kim, Joongeok; Lee, Jinhyung; Park, Changsu; Cho, Eikhyun; Kang, Shinill

    2015-07-27

    The increasing demand for lightweight, miniaturized electronic devices has prompted the development of small, high-performance optical components for light-emitting diode (LED) illumination. As such, the Fresnel lens is widely used in applications due to its compact configuration. However, the vertical groove angle between the optical axis and the groove inner facets in a conventional Fresnel lens creates an inherent Fresnel loss, which degrades optical performance. Modified Fresnel lenses (MFLs) have been proposed in which the groove angles along the optical paths are carefully controlled; however, in practice, the optical performance of MFLs is inferior to the theoretical performance due to fabrication errors, as conventional design methods do not account for fabrication errors as part of the design process. In this study, the Fresnel loss and the loss area due to microscopic fabrication errors in the MFL were theoretically derived to determine optical performance. Based on this analysis, a design method for the MFL accounting for the fabrication errors was proposed. MFLs were fabricated using an ultraviolet imprinting process and an injection molding process, two representative processes with differing fabrication errors. The MFL fabrication error associated with each process was examined analytically and experimentally to investigate our methodology.

  18. Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors

    NASA Astrophysics Data System (ADS)

    Yan, Feifei; Chang, Wenge; Li, Xiangyang

    2015-12-01

    Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is the key technique of bistatic SAR (BiSAR) system, and raw data simulation is an effective tool for verifying the time and frequency synchronization techniques. According to the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through 2-D inverse Stolt transform in 2-D frequency domain and phase compensation in range-Doppler frequency domain, this method can significantly improve the efficiency of scene raw data simulation. Simulation results of point targets and extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.

  19. Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS

    NASA Astrophysics Data System (ADS)

    Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin

    2015-08-01

    Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors, and quantifying the smoothing effect allows efficiency improvements when finishing precision optics. A series of experiments in spin motion was performed to study smoothing effects in correcting mid-spatial-frequency errors: some used the same pitch tool at different spinning speeds, and others used different tools at the same spinning speed. Shu's model was introduced and improved to describe and compare the smoothing efficiency for different spinning speeds and different tools. The experimental results show that the mid-spatial-frequency errors on the initial surface were nearly smoothed out after processing in spin motion, and the number of smoothing passes can be estimated by the model before processing. The method was also applied to smooth an aspherical component that exhibited obvious mid-spatial-frequency error after magnetorheological finishing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.

  20. Random Numbers Demonstrate the Frequency of Type I Errors: Three Spreadsheets for Class Instruction

    ERIC Educational Resources Information Center

    Duffy, Sean

    2010-01-01

    This paper describes three spreadsheet exercises demonstrating the nature and frequency of type I errors using random number generation. The exercises are designed specifically to address issues related to testing multiple relations using correlation (Demonstration I), t tests varying in sample size (Demonstration II) and multiple comparisons…
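    The spreadsheets themselves are not reproduced here, but the underlying idea of Demonstration II — running many t tests on samples drawn from the same population and counting rejections — can be sketched in Python (all numbers are illustrative):

    ```python
    import numpy as np

    # Run many two-sample t tests on pure noise and count how often the null
    # hypothesis is (wrongly) rejected. The rejection rate should hover near alpha.
    rng = np.random.default_rng(42)
    n_tests, n = 10_000, 30                 # number of tests, samples per group

    a = rng.standard_normal((n_tests, n))   # both groups drawn from the SAME
    b = rng.standard_normal((n_tests, n))   # population: every rejection is a type I error

    se = np.sqrt(a.var(axis=1, ddof=1)/n + b.var(axis=1, ddof=1)/n)
    t_stat = (a.mean(axis=1) - b.mean(axis=1))/se

    # ~2.0 is the two-sided 5% critical value for df ~ 58
    type1_rate = np.mean(np.abs(t_stat) > 2.0)
    ```

    Varying `n` shows, as in the paper's second demonstration, that the type I error rate stays near alpha regardless of sample size — it is the false-positive *rate*, not something that shrinks with more data.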

  1. Pyranometer frequency response measurement and general correction scheme for time response error

    SciTech Connect

    Shen, B.; Robinson, A.M.

    1992-10-01

    A simple sinusoidal function radiation generator was designed to examine the frequency response of a Kipp and Zonen CM-5 pyranometer in the frequency range 0.014-0.073 Hz. Applying the thermal model of the pyranometer and its two time constants, which were acquired from a step response measurement, the authors obtained the theoretical frequency response of the pyranometer. Analysis of the experimental results determined an unknown constant in the relationship derived between the pyranometer input and output. This relationship was then used to correct the time response error of the pyranometer subject to an arbitrary radiation signal.

  2. A statistical comparison of EEG time- and time-frequency domain representations of error processing.

    PubMed

    Munneke, Gert-Jan; Nap, Tanja S; Schippers, Eveline E; Cohen, Michael X

    2015-08-27

    Successful behavior relies on error detection and subsequent remedial adjustment of behavior. Researchers have identified two electrophysiological signatures of error processing: the time-domain error-related negativity (ERN), and the time-frequency domain increased power in the delta/theta frequency bands (~2-8 Hz). The relationship between these two signatures is not entirely clear: on the one hand they occur after the same type of event and with similar latency, but on the other hand, the time-domain ERP component contains only phase-locked activity whereas the time-frequency response additionally contains non-phase-locked dynamics. Here we examined the ERN and error-related delta/theta activity in relation to each other, focusing on within-subject analyses that utilize single-trial data. Using logistic regression, we constructed three statistical models in which the accuracy of each trial was predicted from the ERN, delta/theta power, or both. We found that both the ERN and delta/theta power worked roughly equally well as predictors of single-trial accuracy (~70% accurate prediction). Furthermore, a model including both measures provided a stronger overall prediction compared to either model alone. Based on these findings two conclusions are drawn: first, the phase-locked part of the EEG signal appears to be roughly as predictive of single-trial response accuracy as the non-phase-locked part; second, the single-trial ERP and delta/theta power contain both overlapping and independent information.
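    The single-trial logistic-regression comparison can be sketched with synthetic data. This is not the study's EEG data; the correlation structure (two noisy views of a shared "error signal" standing in for the ERN and delta/theta power) and the hand-rolled fitting routine are assumptions for illustration:

    ```python
    import numpy as np

    # Synthetic single-trial data: a latent error-processing signal drives trial
    # accuracy; "ern" and "theta" are partially overlapping noisy measurements of it.
    rng = np.random.default_rng(0)
    n = 4000
    latent = rng.standard_normal(n)
    ern = latent + 0.8*rng.standard_normal(n)       # phase-locked measure (assumed)
    theta = latent + 0.8*rng.standard_normal(n)     # total-power measure (assumed)
    y = (latent + 0.5*rng.standard_normal(n) > 0).astype(float)  # trial accuracy

    def prediction_accuracy(X, y, lr=0.1, steps=2000):
        """Fit logistic regression by gradient ascent; return in-sample accuracy."""
        X = np.column_stack([np.ones(len(y)), X])
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = 1.0/(1.0 + np.exp(-X @ w))
            w += lr*(X.T @ (y - p))/len(y)
        p = 1.0/(1.0 + np.exp(-X @ w))
        return np.mean((p > 0.5) == y)

    acc_ern = prediction_accuracy(ern[:, None], y)
    acc_theta = prediction_accuracy(theta[:, None], y)
    acc_both = prediction_accuracy(np.column_stack([ern, theta]), y)
    ```

    Because each predictor carries independent noise around the shared signal, either one alone predicts accuracy roughly equally well, and the joint model does at least as well — mirroring the pattern of overlapping plus independent information reported in the abstract.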

  3. Modifying cognitive errors promotes cognitive well being: a new approach to bias modification.

    PubMed

    Lester, Kathryn J; Mathews, Andrew; Davison, Phil S; Burgess, Jennifer L; Yiend, Jenny

    2011-09-01

    Cognitive Bias Modification (CBM) procedures have been used to train individuals to interpret ambiguous information in a negative or benign direction and have provided evidence that negative biases causally contribute to emotional vulnerability. Here we present the development and validation of a new form of CBM designed to manipulate the cognitive errors known to characterize both depression and anxiety. Our manipulation was designed to modify the biased cognitions identified by Beck's cognitive error categories (e.g. arbitrary inference, overgeneralisation) and typically targeted during therapy. In a later test of spontaneous inferences, unselected (Experiment 1) and vulnerable participants (Experiment 2) who had generated positive alternatives rather than errors perceived novel hypothetical events, their causes and outcomes in a non-distorted manner. These groups were also less vulnerable to two different types of emotional stressor (video clips; and an imagined social situation). Furthermore participants' interpretation of their own performance on a problem-solving task was improved by the manipulation, despite actual performance showing no significant change. These findings demonstrate that Cognitive Error Modification can promote positive inferences, reduce vulnerability to stress and improve self-perceptions of performance. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. The impact of manufacturing errors of domain structure on frequency doubling efficiency in PPLN waveguides

    NASA Astrophysics Data System (ADS)

    Liu, Zhengying; Ren, Aihong; Zhang, Rongzhu; Liu, Jinglun; Sun, Nianchun; Chen, Jianguo

    2010-10-01

    When the length of the polarization period in periodically poled (PP) waveguides carries manufacturing errors (MEs), these errors affect the quasi-phase-matched (QPM) frequency doubling efficiency (FDE). The impact of MEs on FDE, and the influence of the polarization period Λ0 and the waveguide length along the beam propagation direction on the ME tolerance, are analyzed theoretically. The results show that as the ME increases, the FDE decreases rapidly, and that the ME tolerance of PP waveguides is inversely proportional to the waveguide length and directly proportional to the polarization period Λ0. These results provide a theoretical basis for choosing periodically poled crystal (PPC) materials and controlling MEs.

  5. Where is the effect of frequency in word production? Insights from aphasic picture naming errors

    PubMed Central

    Kittredge, Audrey K.; Dell, Gary S.; Verkuilen, Jay; Schwartz, Myrna F.

    2010-01-01

    Some theories of lexical access in production locate the effect of lexical frequency at the retrieval of a word’s phonological characteristics, as opposed to the prior retrieval of a holistic representation of the word from its meaning. Yet there is evidence from both normal and aphasic individuals that frequency may influence both of these retrieval processes. This inconsistency is especially relevant in light of recent attempts to determine the representation of another lexical property, age of acquisition or AoA, whose effect is similar to that of frequency. To further explore the representations of these lexical variables in the word retrieval system, we performed hierarchical, multinomial logistic regression analyses of 50 aphasic patients’ picture-naming responses. While both log frequency and AoA had a significant influence on patient accuracy and led to fewer phonologically related errors and omissions, only log frequency had an effect on semantically related errors. These results provide evidence for a lexical access process sensitive to frequency at all stages, but with AoA having a more limited effect. PMID:18704797

  6. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, frequency values are measured when the transmitted and received signals are in phase, and the distance is calculated from these values. This method achieves much higher theoretical measuring accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly fabricated or installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, together with its impact on measuring performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation on ranging accuracy. According to the system design index, element tolerances and an error-correcting method are proposed; a ranging system was built and a ranging experiment performed. Experimental results show that, with the proposed tolerances, the system satisfies the accuracy requirement. The present work provides guidance for further research on system design and error distribution.

  7. Estimate error of frequency-dependent Q introduced by linear regression and its nonlinear implementation

    NASA Astrophysics Data System (ADS)

    Li, Guofa; Huang, Wei; Zheng, Hao; Zhang, Baoqing

    2016-02-01

    The spectral ratio method (SRM) is widely used to estimate the quality factor Q via linear regression of seismic attenuation under the assumption of a constant Q. However, an estimation error is introduced when this assumption is violated. For a frequency-dependent Q described by a power-law function, we derived an analytical expression for the estimation error as a function of the power-law exponent γ and the ratio σ of the bandwidth to the central frequency. Based on the theoretical analysis, we found that the estimation errors are dominated mainly by the exponent γ and are less affected by the ratio σ. This implies that the accuracy of the Q estimate can hardly be improved by adjusting the width and range of the frequency band. Hence, we propose a two-parameter regression method to estimate the frequency-dependent Q from nonlinear seismic attenuation. The proposed method was tested using direct waves acquired in a near-surface cross-hole survey, and its reliability was evaluated by comparison with the result of the SRM.
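    For the constant-Q case the SRM reduces to a linear fit of the log spectral ratio against frequency. A small sketch (standard SRM formulation with synthetic spectra; the symbols are not necessarily the authors' notation) also shows the bias that appears when Q actually follows a power law:

    ```python
    import numpy as np

    # Attenuation over travel time dt multiplies the spectrum by exp(-pi*f*dt/Q),
    # so ln(A2/A1) is linear in f and Q follows from the regression slope.
    dt = 0.5                               # travel time between receivers, s
    f = np.linspace(10.0, 60.0, 200)       # usable bandwidth, Hz
    A1 = np.exp(-(f - 35.0)**2/400.0)      # reference spectrum (arbitrary shape)

    # Constant Q: the SRM recovers it exactly from noiseless data
    Q_true = 100.0
    A2 = A1*np.exp(-np.pi*f*dt/Q_true)
    slope, _ = np.polyfit(f, np.log(A2/A1), 1)
    Q_est = -np.pi*dt/slope

    # Power-law Q(f) = Q0*f**gamma: forcing a straight-line (constant-Q) fit
    # yields an apparent Q far from Q at the central frequency -- the kind of
    # bias the paper quantifies via the exponent gamma.
    gamma, Q0 = 0.3, 20.0
    A3 = A1*np.exp(-np.pi*f*dt/(Q0*f**gamma))
    slope3, _ = np.polyfit(f, np.log(A3/A1), 1)
    Q_apparent = -np.pi*dt/slope3
    Q_at_fc = Q0*35.0**gamma               # "true" Q at the central frequency
    ```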

  8. Phoneme frequency effects in jargon aphasia: a phonological investigation of nonword errors.

    PubMed

    Robson, Jo; Pring, Tim; Marshall, Jane; Chiat, Shula

    2003-04-01

    This study investigates the nonwords produced by a jargon speaker, LT. Despite presenting with severe neologistic jargon, LT can produce discrete responses in picture naming tasks thus allowing the properties of his jargon to be investigated. This ability was exploited in two naming tasks. The first showed that LT's nonword errors are related to their targets despite being generally unrecognizable. This relatedness appears to be a general property of his errors suggesting that they are produced by lexical rather than nonlexical means. The second naming task used a set of stimuli controlled for their phonemic content. This allowed an investigation of target phonology at the level of individual phonemes. Nonword responses maintained the English distribution of consonants and showed a significant relationship to the target phonologies. A strong influence of phoneme frequency was identified. High frequency consonants showed a pattern of frequent but indiscriminate use. Low frequency consonants were realised less often but were largely restricted to target related contexts rarely appearing as error phonology. The findings are explained within a lexical activation network with the proposal that the resting levels of phoneme nodes are frequency sensitive. Predictions for the recovery of jargon aphasia and suggestions for future investigations are made.

  9. Systematic Errors in Peptide and Protein Identification and Quantification by Modified Peptides*

    PubMed Central

    Bogdanow, Boris; Zauber, Henrik; Selbach, Matthias

    2016-01-01

    The principle of shotgun proteomics is to use peptide mass spectra in order to identify corresponding sequences in a protein database. The quality of peptide and protein identification and quantification critically depends on the sensitivity and specificity of this assignment process. Many peptides in proteomic samples carry biochemical modifications, and a large fraction of unassigned spectra arise from modified peptides. Spectra derived from modified peptides can erroneously be assigned to wrong amino acid sequences. However, the impact of this problem on proteomic data has not yet been investigated systematically. Here we use combinations of different database searches to show that modified peptides can be responsible for 20–50% of false positive identifications in deep proteomic data sets. These false positive hits are particularly problematic as they have significantly higher scores and higher intensities than other false positive matches. Furthermore, these wrong peptide assignments lead to hundreds of false protein identifications and systematic biases in protein quantification. We devise a “cleaned search” strategy to address this problem and show that this considerably improves the sensitivity and specificity of proteomic data. In summary, we show that modified peptides cause systematic errors in peptide and protein identification and quantification and should therefore be considered to further improve the quality of proteomic data annotation. PMID:27215553

  10. The role of movement errors in modifying spatiotemporal gait asymmetry post stroke: a randomized controlled trial.

    PubMed

    Lewek, Michael D; Braun, Carty H; Wutzke, Clint; Giuliani, Carol

    2017-07-01

    Current rehabilitation to improve gait symmetry following stroke is based on one of two competing motor learning strategies: minimizing or augmenting symmetry errors. We sought to determine which of those motor learning strategies best improves overground spatiotemporal gait symmetry. Randomized controlled trial. Rehabilitation research lab. In all, 47 participants (59 ± 12 years old) with chronic hemiparesis post stroke and spatiotemporal gait asymmetry were randomized to error augmentation, error minimization, or conventional treadmill training (control) groups. To augment or minimize asymmetry on a step-by-step basis, we developed a responsive, "closed-loop" control system, using a split-belt instrumented treadmill that continuously adjusted the difference in belt speeds to be proportional to the patient's current asymmetry. Overground spatiotemporal asymmetries and gait speeds were collected prior to and following 18 training sessions. Step length asymmetry reduced after training, but stance time did not. There was no group × time interaction. Gait speed improved after training, but was not affected by type of asymmetry, or group. Of those who trained to modify step length asymmetry, there was a moderately strong linear relationship between the change in step length asymmetry and the change in gait speed. Augmenting errors was not superior to minimizing errors or providing only verbal feedback during conventional treadmill walking. Therefore, the use of verbal feedback to target spatiotemporal asymmetry, which was common to all participants, appears to be sufficient to reduce step length asymmetry. Alterations in stance time asymmetry were not elicited in any group.

  11. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

    NASA Technical Reports Server (NTRS)

    Elander, Valjean; Koshak, William; Phanord, Dieudonne

    2004-01-01

    The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing (as well as potential future) ZEUS sites to simulate arrival time data between each source and site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
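    The Monte Carlo recipe described above (normally distributed timing errors with zero mean and a 20 microsecond standard deviation) is straightforward to reproduce. The sketch below uses a hypothetical 100,000-draw sample rather than the full 5000-location grid; the speed-of-light conversion at the end shows why microsecond-level timing errors matter for location retrieval.

```python
import numpy as np

rng = np.random.default_rng(42)

SIGMA_T = 20e-6      # timing-error standard deviation: 20 microseconds
C = 299792458.0      # speed of light (m/s)

# Hypothetical scale-down of the study's 5000 locations x 100 sources:
# draw timing errors for 100,000 simulated arrival-time measurements.
errors = rng.normal(0.0, SIGMA_T, size=100_000)

print(abs(errors.mean()))   # close to 0 s
print(errors.std())         # close to 2e-5 s
print(C * SIGMA_T)          # one-sigma path-length equivalent, ~6 km
```

    A single 20 microsecond timing error thus corresponds to roughly 6 km of equivalent path-length uncertainty, which the multi-station retrieval must beat down geometrically.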

  12. Relevant reduction effect with a modified thermoplastic mask of rotational error for glottic cancer in IMRT

    NASA Astrophysics Data System (ADS)

    Jung, Jae Hong; Jung, Joo-Young; Cho, Kwang Hwan; Ryu, Mi Ryeong; Bae, Sun Hyun; Moon, Seong Kwon; Kim, Yong Ho; Choe, Bo-Young; Suh, Tae Suk

    2017-02-01

    The purpose of this study was to analyze the glottis rotational error (GRE) associated with a thermoplastic mask in patients with glottic cancer undergoing intensity-modulated radiation therapy (IMRT). We selected 20 patients with glottic cancer who had received IMRT using tomotherapy. Both kilovoltage computed tomography (planning kVCT) and megavoltage CT (daily MVCT) images were used to evaluate the error. Six anatomical landmarks in the images were defined to evaluate the correlation between the absolute GRE (°) and the length of the mask's contact with the patient's underlying skin (mask, mm). The results were analyzed statistically using Pearson's correlation coefficient and linear regression (P < 0.05). Mask contact and the absolute GRE were verified to be statistically correlated (P < 0.01), and each parameter was significant in the linear regression analysis (mask versus absolute roll: P = 0.004; mask versus 3D error: P = 0.000). The range of 3D errors attributable to mask contact was 1.2%-39.7% between the maximum- and no-contact cases in this study. A thermoplastic mask with a tight, increased contact area may contribute to reproducibility uncertainty, seen as variation in the absolute GRE. We therefore suggest that a modified mask, such as one covering only the glottis area, can significantly reduce patient setup errors during treatment.
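    The statistical analysis described above (Pearson correlation plus linear regression of mask contact length against absolute GRE) can be sketched with SciPy. The data below are synthetic stand-ins with an assumed positive trend, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data for 20 patients: mask contact length (mm) and
# absolute glottis rotational error (degrees), with a positive trend.
contact = rng.uniform(10, 60, size=20)
gre = 0.05 * contact + rng.normal(0, 0.3, size=20)

r, p_corr = stats.pearsonr(contact, gre)
fit = stats.linregress(contact, gre)

print(f"Pearson r = {r:.2f}, p = {p_corr:.4g}")
print(f"slope = {fit.slope:.3f} deg/mm, p = {fit.pvalue:.4g}")
```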

  14. Error detection and correction for a multiple frequency quaternary phase shift keyed signal

    NASA Astrophysics Data System (ADS)

    Hopkins, Kevin S.

    1989-06-01

    A multiple frequency quaternary phase shift keyed (MFQPSK) signaling system was developed and experimentally tested in a controlled environment. To ensure that the quality of the received signal permits information recovery, error detection/correction (EDC) must be used. The available EDC coding schemes are reviewed and their application to the MFQPSK signaling system is analyzed. Hamming, Golay, Bose-Chaudhuri-Hocquenghem (BCH), and Reed-Solomon (R-S) block codes, as well as convolutional codes, are presented and analyzed in the context of specific MFQPSK system parameters. A computer program was developed to compute bit error probabilities as a function of signal-to-noise ratio. Results demonstrate that various EDC schemes are suitable for the MFQPSK signal structure, and that significant performance improvements are possible with the use of certain error correction codes.
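    As a rough illustration of the kind of coded-performance computation described, the sketch below evaluates the uncoded, Gray-coded QPSK bit error probability and the standard post-decoding approximation for a Hamming(7,4) code. It ignores the code-rate energy penalty and is not the thesis's actual program.

```python
from math import comb, erfc, sqrt

def qpsk_bit_error(eb_n0_db):
    """Uncoded, Gray-coded QPSK bit error probability in AWGN."""
    eb_n0 = 10 ** (eb_n0_db / 10)
    return 0.5 * erfc(sqrt(eb_n0))

def hamming_7_4_bit_error(p):
    """Standard post-decoding BER approximation for Hamming(7,4),
    which corrects any single bit error in a 7-bit block."""
    n, t = 7, 1
    return sum(i * comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(t + 1, n + 1)) / n

p_raw = qpsk_bit_error(7.0)            # channel bit error probability
p_dec = hamming_7_4_bit_error(p_raw)   # after decoding
print(p_raw, p_dec)
```

    Even this single-error-correcting code drops the bit error rate by more than an order of magnitude at moderate signal-to-noise ratio, consistent with the improvements the abstract reports.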

  15. Performance evaluation of pitch lap in correcting mid-spatial-frequency errors under different smoothing parameters

    NASA Astrophysics Data System (ADS)

    Xu, Lichao; Wan, Yongjian; Liu, Haitao; Wang, Jia

    2016-10-01

    Smoothing is a convenient and efficient way to restrain mid-spatial-frequency (MSF) errors. Experience shows that lap diameter, rotation speed, lap pressure, and the hardness of the pitch layer are important to correcting MSF errors. Nine groups of experiments were therefore designed with the orthogonal method to confirm the significance of these parameters. Based on Zhang's model, PV (peak-to-valley) and RMS (root mean square) errors versus processing cycles are analyzed before and after smoothing, together with the smoothing limit and smoothing rate achieved by different parameter settings in correcting MSF errors. Combined with the deviation analysis, we distinguish the dominant parameters from the subordinate ones and identify the optimal combination and governing law of the various parameters, so as to guide further research and fabrication.

  16. Robust nonstationary jammer mitigation for GPS receivers with instantaneous frequency error tolerance

    NASA Astrophysics Data System (ADS)

    Wang, Ben; Zhang, Yimin D.; Qin, Si; Amin, Moeness G.

    2016-05-01

    In this paper, we propose a nonstationary jammer suppression method for GPS receivers when the signals are sparsely sampled. Missing data samples induce noise-like artifacts in the time-frequency (TF) distribution and ambiguity function of the received signals, which lead to reduced capability and degraded performance in jammer signature estimation and excision. In the proposed method, a data-dependent TF kernel is utilized to mitigate the artifacts, and sparse reconstruction methods are then applied to obtain instantaneous frequency (IF) estimates of the jammers. In addition, an error tolerance on the IF estimate is applied to achieve robust jammer suppression performance in the presence of IF estimation inaccuracy.
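    A heavily simplified, fully sampled stand-in for the IF-estimation step (no sparse sampling and no data-dependent kernel, unlike the paper's method): track the spectrogram peak of a synthetic linear-chirp "jammer".

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)

# Hypothetical linear-chirp jammer: IF sweeps 100 Hz -> 300 Hz over 1 s
x = np.cos(2 * np.pi * (100 * t + 100 * t ** 2))

# Crude IF estimate: peak frequency of a short-time FFT in each window
win, hop = 128, 64
if_est = []
for start in range(0, len(x) - win, hop):
    seg = x[start:start + win] * np.hanning(win)
    spec = np.abs(np.fft.rfft(seg))
    if_est.append(np.argmax(spec) * fs / win)

print(if_est[0], if_est[-1])  # rises from ~110 Hz toward ~280 Hz
```

    Once the IF track is known, excision amounts to notching the estimated frequency (plus the error tolerance the abstract mentions) at each time instant.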

  17. Driving errors of learner teens: frequency, nature and their association with practice.

    PubMed

    Durbin, Dennis R; Mirman, Jessica H; Curry, Allison E; Wang, Wenli; Fisher Thiel, Megan C; Schultheis, Maria; Winston, Flaura K

    2014-11-01

    Despite demonstrating basic vehicle operations skills sufficient to pass a state licensing test, novice teen drivers demonstrate several deficits in tactical driving skills during the first several months of independent driving. Improving our knowledge of the types of errors made by teen permit holders early in the learning process would assist in the development of novel approaches to driver training and resources for parent supervision. The purpose of the current analysis was to describe driving performance errors made by teens during the permit period, and to determine if there were differences in the frequency and type of errors made by teens: (1) in comparison to licensed, safe, and experienced adult drivers; (2) by teen and parent-supervisor characteristics; and (3) by teen-reported quantity of practice driving. Data for this analysis were combined from two studies: (1) the control group of teens in a randomized clinical trial evaluating an intervention to improve parent-supervised practice driving (n=89 parent-teen dyads) and (2) a sample of 37 adult drivers (mean age 44.2 years), recruited and screened as an experienced and competent reference standard in a validation study of an on-road driving assessment for teens (tODA). Three measures of performance: drive termination (i.e., the assessment was discontinued for safety reasons), safety-relevant critical errors, and vehicle operation errors were evaluated at the approximate mid-point (12 weeks) and end (24 weeks) of the learner phase. Differences in driver performance were compared using the Wilcoxon rank sum test for continuous variables and Pearson's Chi-square test for categorical variables. 10.4% of teens had their early assessment terminated for safety reasons and 15.4% had their late assessment terminated, compared to no adults. These teens reported substantially fewer behind the wheel practice hours compared with teens that did not have their assessments terminated: tODAearly (9.0 vs. 20.0, p<0
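    The two tests named above (Wilcoxon rank sum for continuous variables, Pearson's chi-square for categorical ones) can be run with SciPy. The numbers below are hypothetical, loosely echoing the reported 9.0 vs. 20.0 practice-hour medians, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical practice hours: teens whose assessment was terminated
# vs. those who completed it (cf. the reported 9.0 vs. 20.0 medians).
terminated = rng.normal(9, 3, size=12)
completed = rng.normal(20, 5, size=60)

u = stats.ranksums(terminated, completed)
print(f"Wilcoxon rank-sum: stat = {u.statistic:.2f}, p = {u.pvalue:.3g}")

# Pearson chi-square on a hypothetical 2x2 table:
# rows = teens / adults, columns = terminated / not terminated
table = np.array([[9, 80], [0, 37]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```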

  18. Chemotherapy medication errors in a pediatric cancer treatment center: prospective characterization of error types and frequency and development of a quality improvement initiative to lower the error rate.

    PubMed

    Watts, Raymond G; Parsons, Kerry

    2013-08-01

    Chemotherapy medication errors occur in all cancer treatment programs. Such errors have potentially severe consequences: either enhanced toxicity or impaired disease control. Understanding and limiting chemotherapy errors are imperative. A multi-disciplinary team developed and implemented a prospective pharmacy surveillance system of chemotherapy prescribing and administration errors from 2008 to 2011 at a Children's Oncology Group-affiliated, pediatric cancer treatment program. Every chemotherapy order was prospectively reviewed for errors at the time of order submission. All chemotherapy errors were graded using standard error severity codes. Error rates were calculated by number of patient encounters and chemotherapy doses dispensed. Process improvement was utilized to develop techniques to minimize errors, with a goal of zero errors reaching the patient. Over the duration of the study, more than 20,000 chemotherapy orders were reviewed. Error rates were low (6/1,000 patient encounters and 3.9/1,000 medications dispensed) at the start of the project and were reduced by 50%, to 3/1,000 patient encounters and 1.8/1,000 medications dispensed, during the initiative. Error types included chemotherapy dosing or prescribing errors (42% of errors), treatment roadmap errors (26%), supportive care errors (15%), timing errors (12%), and pharmacy dispensing errors (4%). Ninety-two percent of errors were intercepted before reaching the patient. No error caused identified patient harm. Efforts to lower rates were successful but have not succeeded in preventing all errors. Chemotherapy medication errors are possibly unavoidable, but can be minimized by thoughtful, multispecialty review of current policies and procedures. Pediatr Blood Cancer 2013;60:1320-1324. © 2013 Wiley Periodicals, Inc.

  19. Compensation of body shake errors in terahertz beam scanning single frequency holography for standoff personnel screening

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Chao; Sun, Zhao-Yang; Zhao, Yu; Wu, Shi-You; Fang, Guang-You

    2016-08-01

    In the terahertz (THz) band, the inherent shake of the human body may strongly impair the image quality of a beam scanning single frequency holography system for personnel screening. To realize accurate shake compensation in image processing, it is necessary to develop a high-precision measurement system. However, in many cases different parts of a human body shake to different extents, which greatly increases the difficulty of reasonably measuring body shake errors for image reconstruction. In this paper, a body shake error compensation algorithm based on the raw data is proposed. To analyze the effect of body shake on the raw data, a model of the echoed signal is rebuilt that considers both the beam scanning mode and the body shake. From this signal model, we derive a body shake error estimation method to compensate for the phase error. Simulations on the reconstruction of point targets with shake errors and proof-of-principle experiments on the human body in the 0.2-THz band both confirm the effectiveness of the proposed body shake compensation algorithm. Project supported by the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant No. YYYJ-1123).
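    The core of any such compensation is removing a motion-induced two-way phase term of the form exp(-j2kΔz). A toy sketch with an assumed displacement profile (this is not the paper's estimation algorithm, which works from the raw data):

```python
import numpy as np

f = 0.2e12                   # 0.2 THz, as in the experiments
c = 3e8
k = 2 * np.pi * f / c        # wavenumber (rad/m)

# Hypothetical shake displacement per scan position (m); even 0.1 mm
# is a large fraction of the ~1.5 mm wavelength at 0.2 THz.
dz = 1e-4 * np.sin(np.linspace(0, 2 * np.pi, 256))

signal = np.ones(256, dtype=complex)         # idealized echo
shaken = signal * np.exp(-2j * k * dz)       # motion-induced phase error
compensated = shaken * np.exp(2j * k * dz)   # remove the estimated error

print(np.max(np.abs(np.angle(shaken))))      # ~0.84 rad of phase error
print(np.max(np.abs(compensated - signal)))  # ~0 after compensation
```

    The point of the numbers: a tenth-of-a-millimeter shake already produces nearly a radian of phase error at 0.2 THz, which is why uncompensated body shake visibly degrades the reconstruction.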

  20. PULSAR TIMING ERRORS FROM ASYNCHRONOUS MULTI-FREQUENCY SAMPLING OF DISPERSION MEASURE VARIATIONS

    SciTech Connect

    Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Dolch, T.

    2015-03-10

    Free electrons in the interstellar medium cause frequency-dependent delays in pulse arrival times due to both scattering and dispersion. Multi-frequency measurements are used to estimate and remove dispersion delays. In this paper, we focus on the effect of any non-simultaneity of multi-frequency observations on dispersive delay estimation and removal. Interstellar density variations combined with changes in the line of sight from pulsar and observer motions cause dispersion measure (DM) variations with an approximately power-law power spectrum, augmented in some cases by linear trends. We simulate time series, estimate the magnitude and statistical properties of timing errors that result from non-simultaneous observations, and derive prescriptions for data acquisition that are needed in order to achieve a specified timing precision. For nearby, highly stable pulsars, measurements need to be simultaneous to within about one day in order for the timing error from asynchronous DM correction to be less than about 10 ns. We discuss how timing precision improves when increasing the number of dual-frequency observations used in DM estimation for a given epoch. For a Kolmogorov wavenumber spectrum, we find about a factor of two improvement in precision timing when increasing from two to three observations but diminishing returns thereafter.
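    The frequency-dependent dispersive delay underlying the paper is the standard cold-plasma relation, delay ≈ 4.149 ms × DM [pc cm⁻³] × (ν/GHz)⁻² (the constant here is the common pulsar-timing convention). A quick calculation shows why nanosecond timing demands extremely precise DM estimates; the DM value is hypothetical.

```python
# Cold-plasma dispersion: delay ≈ 4.149 ms x DM [pc cm^-3] x (nu/GHz)^-2
K_DM_MS = 4.149  # dispersion constant in ms (common pulsar convention)

def dm_delay_ms(dm, freq_ghz):
    return K_DM_MS * dm * freq_ghz ** -2

dm = 10.0                     # pc cm^-3, hypothetical nearby pulsar
d_l = dm_delay_ms(dm, 1.4)    # 1.4 GHz (L band)
d_lo = dm_delay_ms(dm, 0.43)  # 430 MHz
print(d_l, d_lo, d_lo - d_l)  # lower frequency arrives much later

# DM error equivalent to a 10 ns timing error at 1.4 GHz:
ddm = (10e-9 * 1e3) / (K_DM_MS * 1.4 ** -2)
print(ddm)                    # a few parts in 10^6 pc cm^-3
```

    A 10 ns error budget at 1.4 GHz corresponds to a DM uncertainty of only a few times 10⁻⁶ pc cm⁻³, which is why asynchronous sampling of a varying DM matters at the level the paper quantifies.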

  1. The second-order Rytov approximation and residual error in dual-frequency satellite navigation systems

    NASA Astrophysics Data System (ADS)

    Kim, B. C.; Tinin, M. V.

    The second-order Rytov approximation has been used to determine ionospheric corrections for the phase path up to third order. We show the transition of the derived expressions to previous results obtained within the ray approximation using the second-order approximation of perturbation theory by solving the eikonal equation. The resulting equation for the phase path is used to determine the residual ionospheric first-, second- and third-order errors of a dual-frequency navigation system, with diffraction effects taken into account. Formulas are derived for the biases and variances of these errors, and these formulas are analyzed and modeled for a turbulent ionosphere. The modeling results show that the third-order error that is determined by random irregularities can be dominant in the residual errors. In particular, the role of random irregularities is enhanced for small elevation angles. Furthermore, in the case of small angles the role of diffraction effects increases. It is pointed out that a need to pass on to diffraction formulas arises when the Fresnel radius exceeds the inner scale of turbulence.
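    For reference, the first-order correction that dual-frequency systems apply (leaving the higher-order residual errors analyzed above) is the ionosphere-free combination. A sketch with GPS L1/L2 frequencies and a hypothetical range and delay:

```python
# First-order ionospheric delay scales as 1/f^2, so two frequencies
# can eliminate it; higher-order (residual) terms remain.
F1 = 1575.42e6  # GPS L1 (Hz)
F2 = 1227.60e6  # GPS L2 (Hz)

def iono_free(p1, p2, f1=F1, f2=F2):
    """Ionosphere-free pseudorange combination (first order only)."""
    g = (f1 / f2) ** 2
    return (g * p1 - p2) / (g - 1)

rho = 22_000_000.0               # hypothetical geometric range (m)
i1 = 5.0                         # first-order iono delay at L1 (m)
p1 = rho + i1
p2 = rho + i1 * (F1 / F2) ** 2   # delay grows as 1/f^2 at L2

print(iono_free(p1, p2) - rho)   # ~0: first-order term removed
```

    The residual first-, second-, and third-order errors treated in the paper are precisely what this combination does not remove.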

  2. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGES

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
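    The effect described, reduced measured modulation when the star center is misidentified, is easy to reproduce numerically. This sketch samples an ideal sinusoidal Siemens star on a circle about an offset center; all parameters (36 cycles, 50-pixel radius, 3-pixel offset) are illustrative, not the paper's.

```python
import numpy as np

def star_intensity(x, y, cycles=36):
    """Sinusoidal Siemens star: intensity varies sinusoidally with angle."""
    theta = np.arctan2(y, x)
    return 0.5 + 0.5 * np.cos(cycles * theta)

def measured_contrast(center_err, radius=50.0, cycles=36, n=4096):
    """Sample a circle of given radius about an (offset) assumed center
    and return the modulation of the cycles-per-revolution component."""
    phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = radius * np.cos(phi) + center_err   # offset along +x
    y = radius * np.sin(phi)
    i = star_intensity(x, y, cycles)
    c = np.fft.rfft(i - i.mean())           # DFT bin at 'cycles' per rev
    return 2 * np.abs(c[cycles]) / n

print(measured_contrast(0.0))   # full modulation with the true center
print(measured_contrast(3.0))   # strongly reduced with a 3-px offset
```

    The angular sampling error introduced by the offset scrambles the phase of the sinusoid around the circle, which is exactly the SFR-reducing mechanism the closed-form solution captures.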

  5. EEG error potentials detection and classification using time-frequency features for robot reinforcement learning.

    PubMed

    Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali

    2015-08-01

    In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier.
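    A minimal stand-in for the pipeline described (time-frequency features feeding an SVM classifier), using synthetic two-class epochs rather than real EEG and simple sub-band energies rather than the paper's full feature set (IF, t-f complexity, SVD information, energy concentration):

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
fs = 128.0
t = np.arange(0, 2.0, 1 / fs)

def make_epoch(has_errp):
    # Synthetic stand-in for an EEG epoch: the "error" class carries
    # an extra low-frequency component on top of noise.
    x = rng.normal(0.0, 1.0, t.size)
    if has_errp:
        x += 2.0 * np.sin(2 * np.pi * 5 * t)
    return x

def tf_features(x):
    # Simple t-f features: sub-band energies of the spectrogram
    f, _, s = spectrogram(x, fs=fs, nperseg=64)
    bands = [(1, 8), (8, 16), (16, 32), (32, 64)]
    return [s[(f >= lo) & (f < hi)].sum() for lo, hi in bands]

X = np.array([tf_features(make_epoch(i % 2 == 1)) for i in range(100)])
y = np.array([i % 2 for i in range(100)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

    Real ErrP detection is much harder than this separable toy problem; the sketch only illustrates the feature-extraction-plus-SVM structure of the approach.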

  6. Effects of diffraction by ionospheric electron density irregularities on the range error in GNSS dual-frequency positioning and phase decorrelation

    NASA Astrophysics Data System (ADS)

    Gherm, Vadim E.; Zernov, Nikolay N.; Strangeways, Hal J.

    2011-06-01

    It can be important to determine the correlation of different frequency signals in L band that have followed transionospheric paths. In the future, both GPS and the new Galileo satellite system will broadcast three frequencies, enabling more advanced three-frequency correction schemes, so knowledge of the correlations of different frequency pairs under scintillation conditions is desirable. Even at present, it would be helpful to know how dual-frequency Global Navigation Satellite System (GNSS) positioning can be affected by lack of correlation between the L1 and L2 signals. To treat this problem of signal correlation for the case of strong scintillation, a previously constructed simulator program, based on the hybrid method, has been further modified to simulate the fields for both frequencies on the ground, taking account of their cross correlation. The errors in the two-frequency range-finding method caused by scintillation have then been estimated for particular ionospheric conditions and for a realistic, fully three-dimensional model of the ionospheric turbulence. The results, presented for five different frequency pairs (L1/L2, L1/L3, L1/L5, L2/L3, and L2/L5), show the dependence of diffractional errors on the scintillation index S4; the errors diverge from a linear relationship the stronger the scintillation effects are, and may reach up to ten centimeters or more. The correlation of the phases at spaced frequencies has also been studied; the correlation coefficients for different pairs of frequencies were found to depend on the phase-retrieval procedure and to reduce slowly as both the variance of the electron density fluctuations and cycle slips increase.

  7. A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design

    ERIC Educational Resources Information Center

    Wang, Tianyou; Brennan, Robert L.

    2009-01-01

    Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…

  9. Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint

    SciTech Connect

    Florita, A.; Hodge, B. M.; Milligan, M.

    2012-08-01

    The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
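    The goodness-of-fit comparison of candidate error distributions described above can be sketched with SciPy. The synthetic errors here are Laplace-distributed, a common heavy-tailed stand-in for site-level forecast errors, not the Western Wind and Solar Integration Study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical site-level forecast errors (normalized power): Laplace,
# a common heavy-tailed stand-in for wind forecast error data.
errors = rng.laplace(0.0, 0.05, size=5000)

ks_stats = {}
for name, dist in [("normal", stats.norm), ("laplace", stats.laplace)]:
    params = dist.fit(errors)                    # fit candidate family
    ks_stats[name] = stats.kstest(errors, dist.cdf, args=params).statistic
    print(f"{name}: KS statistic = {ks_stats[name]:.4f}")
```

    The lower Kolmogorov-Smirnov statistic identifies the better-fitting family, which is the kind of contrast among frequency-distribution alternatives the preprint draws.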

  10. Error Probability of MRC in Frequency Selective Nakagami Fading in the Presence of CCI and ACI

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sum, Chin-Sean; Funada, Ryuhei; Sasaki, Shigenobu; Baykas, Tuncer; Wang, Junyi; Harada, Hiroshi; Kato, Shuzo

    An exact expression for the error rate is developed for maximal ratio combining (MRC) in an independent but not necessarily identically distributed frequency-selective Nakagami fading channel, taking into account inter-symbol, co-channel, and adjacent-channel interference (ISI, CCI, and ACI, respectively). The characteristic function (CF) method is adopted. While no accurate analysis of MRC performance in a frequency-selective channel accounting for ISI (and CCI) is available, the ACI case has not been addressed at all. The general analysis presented in this paper solves a problem of past and present interest that has so far been studied only approximately or in simulations. The exact method presented also yields an approximate error rate expression based on a Gaussian approximation (GA) of the interferences. It is shown that the GA may be substantially inaccurate at high signal-to-noise ratio, especially when the channel is lightly faded, has few multipath components, and has a decaying delay profile. However, the exact results also reveal an important finding: there is a range of parameters where the simpler GA is reasonably accurate, so the more involved exact expression is not always needed.

  11. Ionospheric error contribution to GNSS single-frequency navigation at the 2014 solar maximum

    NASA Astrophysics Data System (ADS)

    Orus Perez, Raul

    2017-04-01

    For single-frequency users of the global navigation satellite system (GNSS), one of the main error contributors is the ionospheric delay, which impacts the received signals. As is well known, GPS and Galileo transmit global models to correct the ionospheric delay, while the international GNSS service (IGS) computes precise post-process global ionospheric maps (GIM) that are considered reference ionospheres. Moreover, accurate ionospheric maps have recently been introduced that allow fast convergence of real-time precise point positioning (PPP) globally. Testing of the ionospheric models is therefore a key issue for code-based single-frequency users, who constitute the main user segment. The testing proposed in this paper is straightforward and applies PPP modeling to single- and dual-frequency code observations worldwide for 2014. The usage of PPP modeling allows us to quantify, for dual-frequency users, the degradation of the navigation solutions caused by noise and multipath with respect to the different ionospheric modeling solutions, and allows us, in turn, to obtain an independent assessment of the ionospheric models. Compared to the dual-frequency solutions, the GPS and Galileo ionospheric models present worse global performance, with horizontal root mean square (RMS) differences of 1.04 and 0.49 m and vertical RMS differences of 0.83 and 0.40 m, respectively. While very precise global ionospheric models can improve on the dual-frequency solution globally, resulting in a horizontal RMS difference of 0.60 m and a vertical RMS difference of 0.74 m, they exhibit a strong dependence on geographical location and ionospheric activity.

  13. Suppressing gate errors in frequency-domain quantum computation through extra physical systems coupled to a cavity

    NASA Astrophysics Data System (ADS)

    Nakamura, Satoshi; Goto, Hayato; Kujiraoka, Mamiko; Ichimura, Kouichi

    2016-12-01

    We propose a scheme for frequency-domain quantum computation (FDQC) in which the errors due to crosstalk are suppressed using extra physical systems coupled to a cavity. FDQC is a promising method to realize large-scale quantum computation, but crosstalk is a major problem. When physical systems employed as qubits satisfy specific resonance conditions, gate errors due to crosstalk increase. In our scheme, the errors are suppressed by controlling the resonance conditions using extra physical systems.

  14. Spatial Distribution of the Errors in Modeling the Mid-Latitude Critical Frequencies by Different Models

    NASA Astrophysics Data System (ADS)

    Kilifarska, N. A.

    There are several models that describe the spatial distribution of the F2-layer critical frequency (foF2), the greatest frequency yielding reflection from the ionospheric F2 layer. However, how the models' errors are distributed over the globe, and how they depend on season, solar activity, etc., has been unknown until now. The aim of the present paper is therefore to compare the accuracy of the CCIR and URSI models and a newly created theoretical model in describing the latitudinal and longitudinal variation of the mid-latitude maximum electron density. A comparison was made between the above-mentioned models and all vertical-incidence (VI) data available from the Boulder data bank (between 35 deg and 70 deg). Data for three whole years with different solar activity - 1976 (F_10.7 = 73.6), 1981 (F_10.7 = 20.6), 1983 (F_10.7 = 119.6) - were compared. The final results show that: 1. the areas with the greatest and smallest errors depend on UT, season, and solar activity; 2. the error distributions of the CCIR and URSI models are very similar to each other but do not coincide with those of the theoretical model. The latter result indicates that the theoretical model, described briefly below, may be a real alternative to the empirical CCIR and URSI models. The different spatial distributions of the models' errors give users a chance to choose the most appropriate model, depending on their needs. Taking into account that theoretical models have equal accuracy in regions with many ionosonde stations or none at all, this result shows that our model can be used to improve the global mapping of the mid-latitude ionosphere. Moreover, if real values of the input aeronomical parameters (neutral composition, temperatures, and winds) are used, it may be expected that this theoretical model can be applied for real-time or near-real-time mapping of the main ionospheric parameters (foF2 and hmF2).

  15. Wrongful Conviction: Perceptions of Criminal Justice Professionals Regarding the Frequency of Wrongful Conviction and the Extent of System Errors

    ERIC Educational Resources Information Center

    Ramsey, Robert J.; Frank, James

    2007-01-01

    Drawing on a sample of 798 Ohio criminal justice professionals (police, prosecutors, defense attorneys, judges), the authors examine respondents' perceptions regarding the frequency of system errors (i.e., professional error and misconduct suggested by previous research to be associated with wrongful conviction), and wrongful felony conviction.…

  16. Flood Frequency Analyses Using a Modified Stochastic Storm Transposition Method

    NASA Astrophysics Data System (ADS)

    Fang, N. Z.; Kiani, M.

    2015-12-01

    Research shows that areas with similar topography and climatic environment have comparable precipitation occurrences. Reproduction and realization of historical rainfall events provide foundations for frequency analysis and the advancement of meteorological studies. Stochastic Storm Transposition (SST) is a method for this purpose and enables hydrologic frequency analyses by transposing observed historical storm events to the sites of interest. However, many previous SST studies reveal drawbacks arising from simplified probability density functions (PDFs) that do not consider restrictions on transposing rainfall. The goal of this study is to stochastically examine the impacts of extreme events on all locations in a homogeneity zone. Since storms with the same probability of occurrence over homogeneous areas do not have identical hydrologic impacts, the authors utilize detailed precipitation parameters, including the probability of occurrence of a certain depth and the number of occurrences of extreme events, which are both incorporated into a joint probability function. The new approach can reduce the bias from uniformly transposing storms, which erroneously increases the probability of occurrence of storms in areas with higher rainfall depths. This procedure is iterated to simulate storm events for one thousand years as the basis for updating frequency analysis curves such as IDF and FFA. The study area is the Upper Trinity River watershed, including the Dallas-Fort Worth metroplex, with a total area of 6,500 mi². This is the first time the SST method has been examined at such a large scale, with 20 years of radar rainfall data.
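    The weighted-transposition idea can be sketched with a toy Monte Carlo simulation; the storm depths, occurrence weights, and storm counts below are invented for illustration, not the paper's data.

```python
import random

# Toy SST sketch: draw storms from an observed catalogue using a joint
# occurrence probability as the sampling weight, rather than transposing
# every storm uniformly, then build annual maxima for frequency analysis.
storm_depths = [2.1, 3.4, 5.0, 7.8]        # observed storm depths (in), assumed
occurrence_weight = [0.4, 0.3, 0.2, 0.1]   # joint prob. of depth & occurrence

random.seed(42)
n_years, storms_per_year = 1000, 3
annual_max = []
for _ in range(n_years):
    depths = random.choices(storm_depths, weights=occurrence_weight,
                            k=storms_per_year)
    annual_max.append(max(depths))

# Empirical 100-year depth: the 99th percentile of simulated annual maxima.
annual_max.sort()
depth_100yr = annual_max[int(0.99 * n_years)]
```

The thousand-year simulation in the paper plays the same role: the simulated annual maxima become the basis for updated IDF and flood-frequency curves.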

  17. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations

    PubMed Central

    Singh, Hardeep; Meyer, Ashley N D; Thomas, Eric J

    2014-01-01

    Background The frequency of outpatient diagnostic errors is challenging to determine due to varying error definitions and the need to review data across multiple providers and care settings over time. We estimated the frequency of diagnostic errors in the US adult population by synthesising data from three previous studies of clinic-based populations that used conceptually similar definitions of diagnostic error. Methods Data sources included two previous studies that used electronic triggers, or algorithms, to detect unusual patterns of return visits after an initial primary care visit or lack of follow-up of abnormal clinical findings related to colorectal cancer, both suggestive of diagnostic errors. A third study examined consecutive cases of lung cancer. In all three studies, diagnostic errors were confirmed through chart review and defined as missed opportunities to make a timely or correct diagnosis based on available evidence. We extrapolated the frequency of diagnostic error obtained from our studies to the US adult population, using the primary care study to estimate rates of diagnostic error for acute conditions (and exacerbations of existing conditions) and the two cancer studies to conservatively estimate rates of missed diagnosis of colorectal and lung cancer (as proxies for other serious chronic conditions). Results Combining estimates from the three studies yielded a rate of outpatient diagnostic errors of 5.08%, or approximately 12 million US adults every year. Based upon previous work, we estimate that about half of these errors could potentially be harmful. Conclusions Our population-based estimate suggests that diagnostic errors affect at least 1 in 20 US adults. This foundational evidence should encourage policymakers, healthcare organisations and researchers to start measuring and reducing diagnostic errors. PMID:24742777
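    The headline extrapolation is simple arithmetic; a quick check follows, where the adult-population figure is an assumption (roughly the census value for the period, not a number stated in this abstract).

```python
# Back-of-envelope check of the paper's headline numbers.
error_rate = 0.0508            # combined outpatient diagnostic error rate
us_adults = 237_000_000        # approximate US adult population (assumed)

affected = error_rate * us_adults
print(round(affected / 1e6, 1))   # → 12.0 (million adults per year)
```

That is the "approximately 12 million US adults every year" figure, and 5.08% is indeed about 1 in 20.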

  18. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations.

    PubMed

    Singh, Hardeep; Meyer, Ashley N D; Thomas, Eric J

    2014-09-01

    The frequency of outpatient diagnostic errors is challenging to determine due to varying error definitions and the need to review data across multiple providers and care settings over time. We estimated the frequency of diagnostic errors in the US adult population by synthesising data from three previous studies of clinic-based populations that used conceptually similar definitions of diagnostic error. Data sources included two previous studies that used electronic triggers, or algorithms, to detect unusual patterns of return visits after an initial primary care visit or lack of follow-up of abnormal clinical findings related to colorectal cancer, both suggestive of diagnostic errors. A third study examined consecutive cases of lung cancer. In all three studies, diagnostic errors were confirmed through chart review and defined as missed opportunities to make a timely or correct diagnosis based on available evidence. We extrapolated the frequency of diagnostic error obtained from our studies to the US adult population, using the primary care study to estimate rates of diagnostic error for acute conditions (and exacerbations of existing conditions) and the two cancer studies to conservatively estimate rates of missed diagnosis of colorectal and lung cancer (as proxies for other serious chronic conditions). Combining estimates from the three studies yielded a rate of outpatient diagnostic errors of 5.08%, or approximately 12 million US adults every year. Based upon previous work, we estimate that about half of these errors could potentially be harmful. Our population-based estimate suggests that diagnostic errors affect at least 1 in 20 US adults. This foundational evidence should encourage policymakers, healthcare organisations and researchers to start measuring and reducing diagnostic errors. Published by the BMJ Publishing Group Limited. 

  19. Demonstration of the frequency offset errors introduced by an incorrect setting of the Zeeman/magnetic field adjustment on the cesium beam frequency standard

    NASA Technical Reports Server (NTRS)

    Kaufmann, D. C.

    1976-01-01

    The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in 10(10). The effects of maladjustment are demonstrated, and suggestions on how to avoid the subtle traps associated with C field adjustments are discussed.

  20. Low-Frequency Error Extraction and Compensation for Attitude Measurements from STECE Star Tracker.

    PubMed

    Lai, Yuwang; Gu, Defeng; Liu, Junhong; Li, Wenping; Yi, Dongyun

    2016-10-12

    The low-frequency errors (LFE) of star trackers are the most penalizing errors for high-accuracy satellite attitude determination. Two test star trackers have been mounted on the Space Technology Experiment and Climate Exploration (STECE) satellite, a small satellite mission developed by China. To extract and compensate the LFE of the attitude measurements for the two test star trackers, a new approach called Fourier analysis combined with the Vondrak filter method (FAVF) is proposed in this paper. Firstly, the LFE of the two test star trackers' attitude measurements are analyzed and extracted by the FAVF method. Remarkable orbital reproducibility features are found in both test star trackers' attitude measurements. Then, by using the reproducibility feature of the LFE, the two star trackers' LFE patterns are estimated effectively. Finally, based on the actual LFE pattern results, this paper presents a new LFE compensation strategy. The validity and effectiveness of the proposed LFE compensation algorithm are demonstrated by the significant improvement in the consistency between the two test star trackers. The root mean square (RMS) of the relative Euler angle residuals is reduced from [27.95'', 25.14'', 82.43''], 3σ to [16.12'', 15.89'', 53.27''], 3σ.
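    The harmonic-extraction step can be sketched as follows. The synthetic residual series, orbital period, and cutoff are assumptions, and the Vondrak smoothing stage of FAVF is omitted; this only illustrates isolating an orbit-periodic low-frequency component with a Fourier low-pass.

```python
import numpy as np

# Build a synthetic attitude residual: an orbit-periodic LFE plus white noise.
n = 1000
t = np.arange(n)
orbit_period = 200.0                                     # samples/orbit, assumed
lfe_true = 30.0 * np.sin(2 * np.pi * t / orbit_period)   # arcsec
noise = np.random.default_rng(0).normal(0.0, 5.0, n)
residual = lfe_true + noise

# Keep only the lowest Fourier harmonics to estimate the LFE pattern.
spec = np.fft.rfft(residual)
cutoff = 20                    # retain bins below this index (assumed)
spec[cutoff:] = 0.0
lfe_est = np.fft.irfft(spec, n)

# Compensation: subtracting the estimated pattern shrinks the residual RMS.
rms_before = np.sqrt(np.mean(residual ** 2))
rms_after = np.sqrt(np.mean((residual - lfe_est) ** 2))
```

The paper's RMS improvement (e.g. 27.95'' to 16.12'' for the first Euler angle) reflects the same mechanism: the orbit-reproducible part of the error is estimated once and removed.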

  1. Modeling work zone crash frequency by quantifying measurement errors in work zone length.

    PubMed

    Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet

    2013-06-01

    Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety, while implementing necessary changes on roadways, is an important challenge traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed improved approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover it is shown that the use of the traditional NB approach in this context can lead to the overestimation of the effect of work zone length on the crash occurrence.
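    A sketch of why a mismeasured work zone length biases the estimate: the mean function below is the standard negative binomial (log-link) crash-frequency form, but the coefficients and traffic volumes are invented, not the paper's estimates.

```python
import math

# Illustrative NB crash-frequency mean:
#   E[crashes] = exp(b0 + b1*ln(length) + b2*AADT/1e4)
b0, b1, b2 = -1.2, 0.8, 0.5    # assumed coefficients

def expected_crashes(length_mi, aadt):
    return math.exp(b0 + b1 * math.log(length_mi) + b2 * aadt / 1e4)

# A work zone logged at 2.0 mi that actually averaged 1.5 mi over the
# construction schedule: the recorded length inflates the prediction.
biased = expected_crashes(2.0, 30000)
truth = expected_crashes(1.5, 30000)
```

The ratio `biased / truth` equals `(2.0 / 1.5) ** b1`, so any systematic over-recording of length feeds directly into the estimated length effect, which is the overestimation the ME-augmented model corrects for.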

  2. Low-Frequency Error Extraction and Compensation for Attitude Measurements from STECE Star Tracker

    PubMed Central

    Lai, Yuwang; Gu, Defeng; Liu, Junhong; Li, Wenping; Yi, Dongyun

    2016-01-01

    The low-frequency errors (LFE) of star trackers are the most penalizing errors for high-accuracy satellite attitude determination. Two test star trackers have been mounted on the Space Technology Experiment and Climate Exploration (STECE) satellite, a small satellite mission developed by China. To extract and compensate the LFE of the attitude measurements for the two test star trackers, a new approach called Fourier analysis combined with the Vondrak filter method (FAVF) is proposed in this paper. Firstly, the LFE of the two test star trackers’ attitude measurements are analyzed and extracted by the FAVF method. Remarkable orbital reproducibility features are found in both test star trackers’ attitude measurements. Then, by using the reproducibility feature of the LFE, the two star trackers’ LFE patterns are estimated effectively. Finally, based on the actual LFE pattern results, this paper presents a new LFE compensation strategy. The validity and effectiveness of the proposed LFE compensation algorithm are demonstrated by the significant improvement in the consistency between the two test star trackers. The root mean square (RMS) of the relative Euler angle residuals is reduced from [27.95′′, 25.14′′, 82.43′′], 3σ to [16.12′′, 15.89′′, 53.27′′], 3σ. PMID:27754320

  3. Frequency and Type of Situational Awareness Errors Contributing to Death and Brain Damage: A Closed Claims Analysis.

    PubMed

    Schulz, Christian M; Burden, Amanda; Posner, Karen L; Mincer, Shawn L; Steadman, Randolph; Wagner, Klaus J; Domino, Karen B

    2017-08-01

    Situational awareness errors may play an important role in the genesis of patient harm. The authors examined closed anesthesia malpractice claims for death or brain damage to determine the frequency and type of situational awareness errors. Surgical and procedural anesthesia death and brain damage claims in the Anesthesia Closed Claims Project database were analyzed. Situational awareness error was defined as failure to perceive relevant clinical information, failure to comprehend the meaning of available information, or failure to project, anticipate, or plan. Patient and case characteristics, primary damaging events, and anesthesia payments in claims with situational awareness errors were compared to other death and brain damage claims from 2002 to 2013. Anesthesiologist situational awareness errors contributed to death or brain damage in 198 of 266 claims (74%). Respiratory system damaging events were more common in claims with situational awareness errors (56%) than other claims (21%, P < 0.001). The most common specific respiratory events in error claims were inadequate oxygenation or ventilation (24%), difficult intubation (11%), and aspiration (10%). Payments were made in 85% of situational awareness error claims compared to 46% in other claims (P = 0.001), with no significant difference in payment size. Among 198 claims with anesthesia situational awareness error, perception errors were most common (42%), whereas comprehension errors (29%) and projection errors (29%) were relatively less common. Situational awareness error definitions were operationalized for reliable application to real-world anesthesia cases. Situational awareness errors may have contributed to catastrophic outcomes in three quarters of recent anesthesia malpractice claims.Situational awareness errors resulting in death or brain damage remain prevalent causes of malpractice claims in the 21st century.

  4. Spindle error motion measurement using concentric circle grating and sinusoidal frequency-modulated semiconductor lasers

    NASA Astrophysics Data System (ADS)

    Higuchi, Masato; Vu, Thanh-Tung; Aketagawa, Masato

    2016-11-01

    The conventional method of measuring radial, axial, and angular spindle motion is complicated and requires a large space; a smaller instrument is preferable for accurate and practical measurement. A method of measuring spindle error motion using sinusoidal phase modulation and a concentric circle grating was described previously. In that method, a concentric circle grating with a fine pitch is attached to the spindle. Three optical sensors are fixed under the grating and observe appropriate positions on it. Each optical sensor consists of a sinusoidally frequency-modulated semiconductor laser as the light source and two interferometers. One interferometer measures the axial spindle motion by detecting the interference fringe between the beam reflected from a fixed mirror and the 0th-order diffracted beam. The other interferometer measures the radial spindle motion by detecting the interference fringe between the ±2nd-order diffracted beams. With these optical sensors, three axial and three radial displacements of the grating can be measured, from which the axial, radial, and angular spindle motions are calculated concurrently. A previous experiment demonstrated concurrent measurement of one axial and one radial spindle displacement at 4 rpm. In this paper, sinusoidal frequency modulation realized by modulating the injection current is used instead of sinusoidal phase modulation, which simplifies the instrument. Furthermore, concurrent measurement of the 5-axis (one axial, two radial, and two angular displacements) spindle motion at 4000 rpm is described.
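    The radial sensitivity of the ±2nd-order arrangement can be illustrated with the standard grating-interferometer relation: a lateral shift x of a grating with pitch p adds a phase 2πmx/p to the m-th diffraction order, so the fringe between the +2nd and -2nd orders moves by 8πx/p. The pitch and phase values below are assumptions, not parameters from the paper.

```python
import math

# Standard grating-interferometer relation for interfering +/-2nd orders:
# fringe phase = 8*pi*x/p, where x is the lateral shift and p the pitch.
p = 1.0e-6            # grating pitch (m), assumed
phase = math.pi / 2   # measured fringe phase (rad), assumed
x = phase * p / (8 * math.pi)
print(x * 1e9)        # → 62.5 (nm of lateral displacement)
```

The factor of 8π (rather than 2π) is why higher diffraction orders give finer displacement resolution for the same fringe-detection accuracy.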

  5. Gamma frequency entrainment attenuates amyloid load and modifies microglia.

    PubMed

    Iaccarino, Hannah F; Singer, Annabelle C; Martorell, Anthony J; Rudenko, Andrii; Gao, Fan; Gillingham, Tyler Z; Mathys, Hansruedi; Seo, Jinsoo; Kritskiy, Oleg; Abdurrob, Fatema; Adaikkan, Chinnakkaruppan; Canter, Rebecca G; Rueda, Richard; Brown, Emery N; Boyden, Edward S; Tsai, Li-Huei

    2016-12-07

    Changes in gamma oscillations (20-50 Hz) have been observed in several neurological disorders. However, the relationship between gamma oscillations and cellular pathologies is unclear. Here we show reduced, behaviourally driven gamma oscillations before the onset of plaque formation or cognitive decline in a mouse model of Alzheimer's disease. Optogenetically driving fast-spiking parvalbumin-positive (FS-PV) interneurons at gamma (40 Hz), but not other frequencies, reduces levels of the amyloid-β (Aβ) 1-40 and Aβ 1-42 isoforms. Gene expression profiling revealed induction of genes associated with morphological transformation of microglia, and histological analysis confirmed increased microglia co-localization with Aβ. Subsequently, we designed a non-invasive 40 Hz light-flickering regime that reduced Aβ 1-40 and Aβ 1-42 levels in the visual cortex of pre-depositing mice and mitigated plaque load in aged, depositing mice. Our findings uncover a previously unappreciated function of gamma rhythms in recruiting both neuronal and glial responses to attenuate Alzheimer's-disease-associated pathology.

  6. Frequency domain analysis of errors in cross-correlations of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri

    2016-12-01

    We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, which differ from previous time domain methods. Extending previous theoretical results on ensemble averaged cross-spectrum, we estimate confidence interval of stacked cross-spectrum of finite amount of data at each frequency using non-overlapping windows with fixed length. The extended theory also connects amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of stacked cross-spectrum obtained with different length of noise data corresponding to different number of evenly spaced windows of the same duration. This method allows estimating Signal/Noise Ratio (SNR) of noise cross-correlation in the frequency domain, without specifying filter bandwidth or signal/noise windows that are needed for time domain SNR estimations. Based on synthetic ambient noise data, we also compare the probability distributions, causal part amplitude and SNR of stacked cross-spectrum function using one-bit normalization or pre-whitening with those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of amplitude and phase velocity of stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (˜35 km) and dense linear array (˜20 m) across the plate-boundary faults. A block bootstrap resampling method
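    The windowed stacking described above can be sketched as follows. The window length, synthetic noise model, and the SNR definition (stack magnitude over the standard error of the mean across windows) are assumptions for illustration.

```python
import numpy as np

# Two synthetic noise records sharing a common ambient component; the shared
# part survives stacking while the incoherent parts average down.
rng = np.random.default_rng(1)
win, n_win = 256, 40
common = rng.normal(size=win * n_win)
a = common + 0.5 * rng.normal(size=win * n_win)
b = np.roll(common, 3) + 0.5 * rng.normal(size=win * n_win)  # shifted copy

# Cross-spectrum of each non-overlapping window, then stack.
cross = []
for k in range(n_win):
    sl = slice(k * win, (k + 1) * win)
    cross.append(np.fft.rfft(a[sl]) * np.conj(np.fft.rfft(b[sl])))
cross = np.array(cross)

stack = cross.mean(axis=0)                   # stacked cross-spectrum
stderr = cross.std(axis=0) / np.sqrt(n_win)  # per-frequency standard error
snr = np.abs(stack) / stderr                 # frequency-domain SNR
```

Because the uncertainty is tracked per frequency bin, no filter bandwidth or signal/noise windows are needed, which is the advantage over time-domain SNR estimates noted in the abstract.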

  7. Frequency, Types, and Potential Clinical Significance of Medication-Dispensing Errors

    PubMed Central

    Bohand, Xavier; Simon, Laurent; Perrier, Eric; Mullot, Hélène; Lefeuvre, Leslie; Plotton, Christian

    2009-01-01

    INTRODUCTION AND OBJECTIVES: Many dispensing errors occur in the hospital, and these can endanger patients. The purpose of this study was to assess the rate of dispensing errors by a unit dose drug dispensing system, to categorize the most frequent types of errors, and to evaluate their potential clinical significance. METHODS: A prospective study using a direct observation method to detect medication-dispensing errors was used. From March 2007 to April 2007, “errors detected by pharmacists” and “errors detected by nurses” were recorded under six categories: unauthorized drug, incorrect form of drug, improper dose, omission, incorrect time, and deteriorated drug errors. The potential clinical significance of the “errors detected by nurses” was evaluated. RESULTS: Among the 734 filled medication cassettes, 179 errors were detected corresponding to a total of 7249 correctly fulfilled and omitted unit doses. An overall error rate of 2.5% was found. Errors detected by pharmacists and nurses represented 155 (86.6%) and 24 (13.4%) of the 179 errors, respectively. The most frequent types of errors were improper dose (n = 57, 31.8%) and omission (n = 54, 30.2%). Nearly 45% of the 24 errors detected by nurses had the potential to cause a significant (n = 7, 29.2%) or serious (n = 4, 16.6%) adverse drug event. CONCLUSIONS: Even if none of the errors reached the patients in this study, a 2.5% error rate indicates the need for improving the unit dose drug-dispensing system. Furthermore, it is almost certain that this study failed to detect some medication errors, further arguing for strategies to prevent their recurrence. PMID:19142545

  8. Frequency, types, and potential clinical significance of medication-dispensing errors.

    PubMed

    Bohand, Xavier; Simon, Laurent; Perrier, Eric; Mullot, Hélène; Lefeuvre, Leslie; Plotton, Christian

    2009-01-01

    Many dispensing errors occur in the hospital, and these can endanger patients. The purpose of this study was to assess the rate of dispensing errors by a unit dose drug dispensing system, to categorize the most frequent types of errors, and to evaluate their potential clinical significance. A prospective study using a direct observation method to detect medication-dispensing errors was used. From March 2007 to April 2007, 'errors detected by pharmacists' and 'errors detected by nurses' were recorded under six categories: unauthorized drug, incorrect form of drug, improper dose, omission, incorrect time, and deteriorated drug errors. The potential clinical significance of the 'errors detected by nurses' was evaluated. Among the 734 filled medication cassettes, 179 errors were detected corresponding to a total of 7249 correctly fulfilled and omitted unit doses. An overall error rate of 2.5% was found. Errors detected by pharmacists and nurses represented 155 (86.6%) and 24 (13.4%) of the 179 errors, respectively. The most frequent types of errors were improper dose (n = 57, 31.8%) and omission (n = 54, 30.2%). Nearly 45% of the 24 errors detected by nurses had the potential to cause a significant (n = 7, 29.2%) or serious (n = 4, 16.6%) adverse drug event. Even if none of the errors reached the patients in this study, a 2.5% error rate indicates the need for improving the unit dose drug-dispensing system. Furthermore, it is almost certain that this study failed to detect some medication errors, further arguing for strategies to prevent their recurrence.
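    The overall rate quoted above follows directly from the counts in the abstract:

```python
# Quick check of the reported overall dispensing-error rate.
errors, unit_doses = 179, 7249
rate = errors / unit_doses
print(round(100 * rate, 1))   # → 2.5 (%)
```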

  9. The frequency of translational misreading errors in E. coli is largely determined by tRNA competition.

    PubMed

    Kramer, Emily B; Farabaugh, Philip J

    2007-01-01

    Estimates of missense error rates (misreading) during protein synthesis vary from 10(-3) to 10(-4) per codon. The experiments reporting these rates have measured several distinct errors using several methods and reporter systems. Variation in reported rates may reflect real differences in rates among the errors tested or in sensitivity of the reporter systems. To develop a more accurate understanding of the range of error rates, we developed a system to quantify the frequency of every possible misreading error at a defined codon in Escherichia coli. This system uses an essential lysine in the active site of firefly luciferase. Mutations in Lys529 result in up to a 1600-fold reduction in activity, but the phenotype varies with amino acid. We hypothesized that residual activity of some of the mutant genes might result from misreading of the mutant codons by tRNA(Lys) (UUUU), the cognate tRNA for the lysine codons, AAA and AAG. Our data validate this hypothesis and reveal details about relative missense error rates of near-cognate codons. The error rates in E. coli do, in fact, vary widely. One source of variation is the effect of competition by cognate tRNAs for the mutant codons; higher error frequencies result from lower competition from low-abundance tRNAs. We also used the system to study the effect of ribosomal protein mutations known to affect error rates and the effect of error-inducing antibiotics, finding that they affect misreading on only a subset of near-cognate codons and that their effect may be less general than previously thought.

  10. Method for measuring the phase error distribution of a wideband arrayed waveguide grating in the frequency domain.

    PubMed

    Takada, Kazumasa; Satoh, Shin-ichi

    2006-02-01

    We describe a method for measuring the phase error distribution of an arrayed waveguide grating (AWG) in the frequency domain when the free spectral range (FSR) of the AWG is so wide that it cannot be covered by one tunable laser source. Our method is to sweep the light frequency in the neighborhoods of two successive peaks in the AWG transmission spectrum by using two laser sources with different tuning ranges. The method was confirmed experimentally by applying it to a 160 GHz spaced AWG with a FSR of 11 THz. The variations in the derived phase error data were very small at +/-0.02 rad around the central arrayed waveguides.

  11. Error analysis for intrinsic quality factor measurement in superconducting radio frequency resonators

    DOE PAGES

    Melnychuk, O.; Grassellino, A.; Romanenko, A.

    2014-12-19

    In this paper, we discuss error analysis for intrinsic quality factor (Q₀) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24–27]. Applying this approach to cavity data collected at the Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q₀ and Eacc to be at the level of approximately 4% for input coupler coupling parameter β₁ in the [0.5, 2.5] range. Above 2.5 (below 0.5) Q₀ uncertainty increases (decreases) with β₁ whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24–27], is independent of β₁. Overall, our estimated Q₀ uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24–27].
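    For intuition only, an uncorrelated quadrature budget is sketched below; the four input terms and their 2% values are assumptions, and the paper's contribution is precisely that correlations between such terms must be treated more carefully than this.

```python
import math

# Naive quadrature combination of relative uncertainties (assumed inputs):
u_pf = 0.02    # forward power
u_pr = 0.02    # reflected power
u_pt = 0.02    # transmitted power
u_tau = 0.02   # decay-time constant

u_q0 = math.sqrt(u_pf**2 + u_pr**2 + u_pt**2 + u_tau**2)
print(round(u_q0, 3))   # → 0.04, i.e. the ~4% level quoted above
```

With correlated terms the cross-covariances no longer vanish, which is why the paper's Q₀ uncertainty differs from the earlier treatment and varies with β₁ outside the [0.5, 2.5] range.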

  12. Error analysis for intrinsic quality factor measurement in superconducting radio frequency resonators.

    PubMed

    Melnychuk, O; Grassellino, A; Romanenko, A

    2014-12-01

    In this paper, we discuss error analysis for intrinsic quality factor (Q0) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27]. Applying this approach to cavity data collected at Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q0 and Eacc to be at the level of approximately 4% for input coupler coupling parameter β1 in the [0.5, 2.5] range. Above 2.5 (below 0.5) Q0 uncertainty increases (decreases) with β1 whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27], is independent of β1. Overall, our estimated Q0 uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27].

  14. Generalized numerical pressure distribution model for smoothing polishing of irregular midspatial frequency errors.

    PubMed

    Nie, Xuqing; Li, Shengyi; Shi, Feng; Hu, Hao

    2014-02-20

    The smoothing effect of the rigid lap plays an important role in controlling midspatial frequency errors (MSFRs). At present, the pressure distribution between the polishing pad and the processed surface is mainly calculated by Mehta's bridging model. However, this classic model does not work for irregular MSFRs. In this paper, a generalized numerical model based on the finite element method (FEM) is proposed to solve this problem. First, the smoothing polishing (SP) process is transformed into a 3D elastic structural FEM model, and the governing matrix equation is obtained. By applying the boundary conditions to the governing matrix equation, the nodal displacement vector and nodal force vector of the pad can be attained, from which the pressure distribution can be extracted. In the partial-contact condition, an iterative method is needed. The algorithmic routine is shown, and the applicability of the generalized numerical model is discussed. A detailed simulation is given for the lap in contact with irregular surfaces of different morphologies. A well-designed SP experiment was conducted in our lab to verify the model. The small difference between the experimental data and the simulated result shows that the model is fully practicable. The generalized numerical model is applied to a Φ500 mm parabolic surface. The calculated result and the measured data after the SP process have been compared, which indicates that the model established in this paper is an effective method to predict the SP process.

  15. A Modified Normalization Technique for Frequency-Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Hwang, J.; Jeong, G.; Min, D. J.; KIM, S.; Heo, J. Y.

    2016-12-01

    Full waveform inversion (FWI) is a technique to estimate subsurface material properties by minimizing a misfit function built from the residuals between field and modeled data. To achieve computational efficiency, FWI has been performed in the frequency domain by carrying out modeling in the frequency domain, whereas the observed data (time series) are Fourier-transformed. One of the main drawbacks of seismic FWI is that it easily gets stuck in local minima because of the lack of low-frequency data. To compensate for this limitation, damped wavefields are used, as in Laplace-domain waveform inversion. Using damped wavefields in FWI generates low-frequency components and helps recover long-wavelength structures. With these newly generated low-frequency components, we propose a modified frequency-normalization technique, which boosts the contribution of low-frequency components to the model parameter update. In this study, we introduce this modified frequency-normalization technique, which effectively amplifies the low-frequency components of damped wavefields. Our method is demonstrated on synthetic data for the SEG/EAGE salt model. Acknowledgements: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea (No. 20168510030830) and by the Dual Use Technology Program, funded by the Ministry of Trade, Industry & Energy, Republic of Korea.
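
    The damped-wavefield idea can be sketched numerically: multiplying a trace by a decaying exponential before the Fourier transform (i.e., a Laplace-Fourier transform evaluated at s = damping + iω) spreads a band-limited signal's energy toward low frequencies. The trace and parameters below are hypothetical, and this is only the generation of low-frequency content, not the normalization or inversion itself.

```python
import numpy as np

fs, T = 500.0, 4.0                        # sampling rate (Hz), record length (s)
t = np.arange(0, T, 1 / fs)
trace = np.sin(2 * np.pi * 12.0 * t)      # band-limited "seismic" trace: no low freqs

def damped_spectrum(x, t, damping):
    """DFT of the damped wavefield x(t) * exp(-damping * t)."""
    return np.fft.rfft(x * np.exp(-damping * t))

freqs = np.fft.rfftfreq(len(t), 1 / fs)
plain = np.abs(damped_spectrum(trace, t, 0.0))
damped = np.abs(damped_spectrum(trace, t, 4.0))

low = freqs < 5.0                          # energy fraction below 5 Hz
frac_plain = plain[low].sum() / plain.sum()
frac_damped = damped[low].sum() / damped.sum()
# Damping broadens the 12 Hz line toward zero frequency,
# creating usable low-frequency components.
print(frac_plain, frac_damped)
```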

  16. Frequency of and risk factors for medication errors by pharmacists during order verification in a tertiary care medical center.

    PubMed

    Gorbach, Christy; Blanton, Linda; Lukawski, Beverly A; Varkey, Alex C; Pitman, E Paige; Garey, Kevin W

    2015-09-01

    The frequency of and risk factors for medication errors by pharmacists during order verification in a tertiary care medical center were reviewed. This retrospective, secondary database study was conducted at a large tertiary care medical center in Houston, Texas. Inpatient and outpatient medication orders and medication errors recorded between July 1, 2011, and June 30, 2012, were reviewed. Independent variables assessed as risk factors for medication errors included workload (mean number of orders verified per pharmacist per shift), work environment (type of day, type of shift, and mean number of pharmacists per shift), and nonmodifiable characteristics of the pharmacist (type of pharmacy degree obtained, age, number of years practicing, and number of years at the institution). A total of 1,887,751 medication orders, 92 medication error events, and 50 pharmacists were included in the study. The overall error rate was 4.87 errors per 100,000 verified orders. An increasing medication error rate was associated with an increased number of orders verified per pharmacist (p = 0.007), the type of shift (p = 0.021), the type of day (p = 0.002), and the mean number of pharmacists per shift (p = 0.001). Pharmacist demographic variables were not associated with risk of error. The number of orders per shift was identified as a significant independent risk factor for medication errors (p = 0.019). An increase in the number of orders verified per shift was associated with an increased rate of pharmacist errors during order verification in a tertiary care medical center. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
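
    The reported rate follows directly from the counts in the abstract; a quick arithmetic check:

```python
# Counts reported in the abstract
orders, errors = 1_887_751, 92
rate_per_100k = errors / orders * 100_000
print(f"{rate_per_100k:.2f} errors per 100,000 verified orders")
```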

  17. Time-frequency representation of a highly nonstationary signal via the modified Wigner distribution

    NASA Technical Reports Server (NTRS)

    Zoladz, T. F.; Jones, J. H.; Jong, J.

    1992-01-01

    A new signal analysis technique called the modified Wigner distribution (MWD) is presented. The new signal processing tool has been very successful in determining time-frequency representations of highly non-stationary multicomponent signals in both simulations and trials involving actual Space Shuttle Main Engine (SSME) high-frequency data. The MWD departs from the classic Wigner distribution (WD) in that it effectively eliminates the cross-coupling among positive-frequency components in a multicomponent signal. This attribute of the MWD, which prevents the generation of 'phantom' spectral peaks, will undoubtedly increase the utility of the WD for real-world signal analysis applications, which more often than not involve multicomponent signals.

  18. Every photon counts: improving low, mid, and high-spatial frequency errors on astronomical optics and materials with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul

    2016-07-01

    Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher-contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low-order errors as well as quilting/print-through errors left over in light-weighted optics from conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever-expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5 to 2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS when paired with careful metrology setups. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in the low, mid and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce 'perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size. Furthermore, a novel MRF

  19. Frequency and Severity of Parenteral Nutrition Medication Errors at a Large Children's Hospital After Implementation of Electronic Ordering and Compounding.

    PubMed

    MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery

    2016-04-01

    The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication and has the potential of causing harm. Three organizations, the American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), the American Society of Health-System Pharmacists, and the National Advisory Group, have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance with the guidelines and the risk of errors. The purpose of this article is to report a large pediatric institution's total compliance with the ordering, transcription, compounding, and administration guidelines, together with its error rate. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft- and hard-stop recommendations while simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with the A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with the published literature on error rates, harm rates, and cost reductions to determine whether our process showed lower error rates than national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors per 84,503 PN prescriptions, or 0.27%, compared with national data in which 74 of 4,730 prescriptions (1.6%) over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process

  20. Time-frequency analysis of spike-wave discharges using a modified wavelet transform.

    PubMed

    Bosnyakova, Daria; Gabova, Alexandra; Kuznetsova, Galina; Obukhov, Yuri; Midzyanovskaya, Inna; Salonin, Dmitrij; van Rijn, Clementina; Coenen, Anton; Tuomisto, Leene; van Luijtelaar, Gilles

    2006-06-30

    The continuous Morlet wavelet transform was used for the analysis of the time-frequency pattern of spike-wave discharges (SWD) as can be recorded in a genetic animal model of absence epilepsy (rats of the WAG/Rij strain). We developed a new wavelet transform that allows one to obtain the time-frequency dynamics of the dominating rhythm during the discharges. SWD were analyzed pre- and post-administration of certain drugs. SWD recorded predrug demonstrate quite uniform time-frequency dynamics of the dominant rhythm. The beginning of the discharge has a short period with the highest frequency value (up to 15 Hz). The frequency then decreases to 7-9 Hz, and frequency modulation occurs during the discharge in this range with a period of 0.5-0.7 s. Specific changes in SWD time-frequency dynamics were found after the administration of psychoactive drugs addressing different brain mediator and modulator systems. Short multiple SWD appeared under low (0.5 mg/kg) doses of haloperidol; they are characterized by a fast frequency decrease to 5-6 Hz at the end of every discharge. The dominant frequency of SWD was not stable in long-lasting SWD after 1.0 mg/kg or more of haloperidol, where two periodicities were found. Long-lasting SWD seen after the administration of vigabatrin showed a stable discharge frequency. The EEG after ketamine showed a distinct 5 s quasiperiodicity. No clear changes in the time-frequency dynamics of SWD were found after perilamine. It can be concluded that the use of the modified Morlet wavelet transform makes it possible to describe significant parameters of the dynamics in the time-frequency domain of the dominant rhythm of SWD that were not previously detected.
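
    A plain Morlet CWT ridge (not the authors' modified transform) already illustrates how a dominant-frequency trajectory, such as a high-frequency onset followed by a 7-9 Hz rhythm, can be extracted. The synthetic signal and all parameters below are hypothetical.

```python
import numpy as np

def morlet_cwt(sig, fs, freqs, w0=6.0):
    """Continuous Morlet wavelet transform magnitude: one complex bandpass per frequency."""
    n = len(sig)
    t = (np.arange(n) - n // 2) / fs
    out = np.empty((len(freqs), n))
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                      # wavelet scale for centre frequency f
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-t**2 / (2 * s**2)) / np.sqrt(s)
        out[i] = np.abs(np.convolve(sig, wavelet, mode="same"))
    return out

# Synthetic "SWD-like" signal: short 15 Hz onset, then a sustained 8 Hz rhythm
fs = 200.0
t = np.arange(0, 3, 1 / fs)
sig = np.where(t < 0.5, np.sin(2 * np.pi * 15 * t), np.sin(2 * np.pi * 8 * t))

freqs = np.arange(4.0, 20.1, 0.5)
power = morlet_cwt(sig, fs, freqs)
ridge = freqs[np.argmax(power, axis=0)]               # dominant frequency per time point

onset_freq = np.median(ridge[(t > 0.1) & (t < 0.4)])
sustained_freq = np.median(ridge[t > 1.0])
print(onset_freq, sustained_freq)
```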

  1. Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES

    NASA Astrophysics Data System (ADS)

    Sarkar, B.; Bhunia, C. T.; Maulik, U.

    2012-06-01

    Advanced encryption standard (AES) is a great research challenge. It was developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available: the redundancy-based technique and the bit-based parity technique. The first has a significant advantage over the second in that it can correct any error in a definite term, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that speeds up the process of reliable encryption and hence secured communication.
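
    The abstract does not spell out the scheme, but the general redundancy idea can be sketched as transmitting each ciphertext block several times and taking a bitwise majority vote before decryption, so that a single channel bit flip does not propagate through the cipher's chaining mode. This is a generic illustration, not the paper's proposed method.

```python
import numpy as np

def majority_vote(copies):
    """Bitwise majority vote over an odd number of received copies of the same
    ciphertext block, correcting isolated channel bit flips before decryption
    (uncorrected, one flipped bit garbles a whole block under CBC-style chaining)."""
    arr = np.unpackbits(
        np.frombuffer(b"".join(copies), dtype=np.uint8).reshape(len(copies), -1),
        axis=1)
    voted = (arr.sum(axis=0) > len(copies) // 2).astype(np.uint8)
    return np.packbits(voted).tobytes()

ct = bytes(range(16))                               # one 128-bit ciphertext block
corrupted = bytearray(ct)
corrupted[3] ^= 0x40                                # single bit flip in one copy
received = [ct, bytes(corrupted), ct]
recovered = majority_vote(received)
print(recovered == ct)
```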

  2. Numerical Predictions of Static-Pressure-Error Corrections for a Modified T-38C Aircraft

    DTIC Science & Technology

    2014-12-15

    Inc. [18] employed in related USAF aeroelastic-simulation research [19–22]. The outer mold line of the standard T-38C aircraft is modified with the... Aeroelastic Dynamics Analysis of a Full F-16 Configuration for Various Flight Conditions," AIAA Journal, Vol. 41, No. 3, 2003, pp. 363–371. doi

  3. Active vibration control using optimized modified acceleration feedback with Adaptive Line Enhancer for frequency tracking

    NASA Astrophysics Data System (ADS)

    Nima Mahmoodi, S.; Craft, Michael J.; Southward, Steve C.; Ahmadian, Mehdi

    2011-03-01

    Modified acceleration feedback (MAF) control, an active vibration control method that uses collocated piezoelectric actuators and an accelerometer, is developed, and its gains are optimized using an optimal controller. The control system consists of two main parts: (1) frequency adaptation using an Adaptive Line Enhancer (ALE) and (2) an optimized controller. The frequency adaptation method tracks the frequency of vibrations using the ALE. The obtained frequency is then fed to the MAF compensators. This provides a unique feature for MAF, extending its domain of capability from controlling a certain mode of vibration to any excited mode. The optimized MAF controller can provide optimal sets of gains for a wide range of frequencies, based on the characteristics of the system. The experimental results show that the frequency tracking method works quite well and fast enough to be used in a real-time controller. ALE parameters are numerically and experimentally investigated and tuned for optimized frequency tracking. The results also indicate that MAF can provide significant vibration reduction using the optimized controller. The control power varies for vibration suppression at different resonance frequencies; however, it is always optimized.
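
    The ALE stage can be sketched as an LMS adaptive filter that predicts the current sample from a delayed copy of the signal: narrowband (periodic) content is enhanced while broadband noise is not, and the excitation frequency can then be read off the enhanced output's spectrum. The vibration signal and tuning parameters below are hypothetical, and the MAF compensator itself is not reproduced.

```python
import numpy as np

def ale_track(x, fs, delay=10, taps=64, mu=0.005):
    """Adaptive Line Enhancer: LMS predictor on a delayed input copy; returns
    the peak frequency of the enhanced (narrowband) output's spectrum."""
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for n in range(taps + delay, len(x)):
        u = x[n - delay - taps + 1:n - delay + 1][::-1]   # delayed reference vector
        y[n] = w @ u
        e = x[n] - y[n]
        w += 2 * mu * e * u                               # LMS weight update
    tail = y[len(y) // 2:]                                # post-convergence output
    spec = np.abs(np.fft.rfft(tail * np.hanning(len(tail))))
    return np.fft.rfftfreq(len(tail), 1 / fs)[np.argmax(spec)]

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(3)
vib = np.sin(2 * np.pi * 87.0 * t) + 0.5 * rng.standard_normal(len(t))
f_hat = ale_track(vib, fs)
print(f_hat)
```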

  4. System for simultaneously measuring 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser.

    PubMed

    Cui, Cunxing; Feng, Qibo; Zhang, Bin; Zhao, Yuqiong

    2016-03-21

    A novel method for simultaneously measuring six degree-of-freedom (6DOF) geometric motion errors is proposed in this paper, and the corresponding measurement instrument is developed. Simultaneous measurement of 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser is accomplished for the first time to the best of the authors' knowledge. Dual-frequency laser beams that are orthogonally linear polarized were adopted as the measuring datum. Positioning error measurement was achieved by heterodyne interferometry, and other 5DOF geometric motion errors were obtained by fiber collimation measurement. A series of experiments was performed to verify the effectiveness of the developed instrument. The experimental results showed that the stability and accuracy of the positioning error measurement are 31.1 nm and 0.5 μm, respectively. For the straightness error measurements, the stability and resolution are 60 and 40 nm, respectively, and the maximum deviation of repeatability is ± 0.15 μm in the x direction and ± 0.1 μm in the y direction. For pitch and yaw measurements, the stabilities are 0.03″ and 0.04″, the maximum deviations of repeatability are ± 0.18″ and ± 0.24″, and the accuracies are 0.4″ and 0.35″, respectively. The stability and resolution of roll measurement are 0.29″ and 0.2″, respectively, and the accuracy is 0.6″.

  5. To Err is Normable: The Computation of Frequency-Domain Error Bounds from Time-Domain Data

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard

    1998-01-01

    This paper exploits the relationships among the time-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling errors.
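
    A minimal numerical sketch of the idea, using a known first-order system as stand-in "data": differentiating the step response gives the impulse response, whose DFT magnitude approximates the frequency response and hence norm estimates such as the peak gain (H-infinity) and the L1 norm of the impulse response. This is an illustration of the general relationships, not the paper's procedure.

```python
import numpy as np

# "Measured" step response of a first-order lag G(s) = 1/(s + 1)
dt = 0.001
t = np.arange(0, 20, dt)
step = 1 - np.exp(-t)

# Impulse response = derivative of the step response; frequency response = its DFT
h = np.gradient(step, dt)
H = np.fft.rfft(h) * dt
w = 2 * np.pi * np.fft.rfftfreq(len(h), dt)

hinf = np.abs(H).max()            # ~ sup_w |G(jw)|; true value 1 (attained at w = 0)
l1 = np.abs(h).sum() * dt         # L1 norm of impulse response; integral of e^-t = 1
print(hinf, l1)
```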

  6. New creep-fatigue damage model based on the frequency modified strain range method

    SciTech Connect

    Kim, Y.J.; Seok, C.S.; Park, J.J.

    1996-12-01

    For mechanical systems operating at high temperature, damage due to the interaction effect of creep and fatigue plays an important role. The objective of this paper is to propose a modified creep-fatigue damage model which separately analyzes the pure creep damage due to the hold time and the creep-fatigue interaction damage during the startup and the shutdown period. The creep damage was calculated by the general creep damage equation and the creep-fatigue interaction damage was calculated by the modified equation which is based on the frequency modified strain range method with strain rate term. In order to verify the proposed model, a series of high temperature low cycle fatigue tests were performed. The test specimens were made from Inconel-718 superalloy and the test parameters were wave form and hold time. A good agreement between the predicted lives based on the proposed model and experimentally obtained ones was obtained.

  7. Impact of Frequency of Load Changes in Fatigue Tests on the Temperature of the Modified Polymer

    NASA Astrophysics Data System (ADS)

    Komorek, Andrzej; Komorek, Zenon; Krzyzak, Aneta; Przybylek, Pawel; Szczepaniak, Robert

    2017-08-01

    In the article, the authors describe the analysis of the impact of the frequency of fatigue tests on the temperature of the modified polymer. The base material for producing the samples was the epoxy resin Epidian 57 cured with Z1 hardener. The presented study is part of a research program on adhesive joints made with the composition Epidian 57/Z1, aiming to determine the effect of physical modification of the adhesive composition on its properties and on the properties of adhesive joints made with such a modified composition. The tested adhesive compositions were modified by additions of micro- and nanoparticles in an amount of 1.85% (micro- and nanoparticles) or 10% (microparticles) by weight, depending on the type of the particles. In the studies, the authors used tungsten particles, microspheres and carbon nanotubes. The polymer samples produced by casting were loaded with identical one-sided compressive loads at two different frequencies of load change. During the tests, the authors recorded the temperature changes as a function of the number of cycles. The changes in the temperature field on the surface of the samples during the tests were observed with an infrared camera. As a result of the studies, a significant impact of the composition of the polymer and of the frequency of load changes during the test on the temperature of the sample was observed.

  8. Measurement of the spatial frequency response (SFR) of digital still-picture cameras using a modified slanted-edge method

    NASA Astrophysics Data System (ADS)

    Hsu, Wei-Feng; Hsu, Yun C.; Chuang, Kai W.

    2000-06-01

    Spatial resolution is one of the main characteristics of electronic imaging devices such as the digital still-picture camera. It describes the capability of a device to resolve the spatial details of an image formed by the incoming optical information. The overall resolving capability is of great interest, although there are various factors, contributed by camera components and signal processing algorithms, that affect the spatial resolution. The spatial frequency response (SFR), analogous to the MTF of an optical imaging system, is one of the four measurements for the analysis of spatial resolution defined in ISO/FDIS 12233, and it provides a complete profile of the spatial response of digital still-picture cameras. In that document, a test chart is employed to estimate the spatial resolving capability. The calculations of the SFR are conducted using the slanted-edge method, in which a scene with a black-to-white or white-to-black edge tilted at a specified angle is captured. An algorithm is used to find the line spread function as well as the SFR. We present a modified algorithm in which no prior information about the angle of the tilted black-to-white edge is needed. The tilt angle is estimated by assuming that a region around the center of the transition between the black and white regions is linear. At a tilt angle of 8 degrees, the minimum estimation error is about 3%. The advantages of the modified slanted-edge method are high accuracy, flexible use, and low cost.
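
    A compact sketch of the slanted-edge pipeline, with the tilt angle estimated from the data (per-row subpixel edge centroids plus a line fit) rather than supplied a priori, in the spirit of the abstract. The synthetic edge, Gaussian blur, and tolerances are illustrative; this is not the ISO 12233 reference implementation.

```python
import numpy as np
from math import erf

# Synthetic slanted edge: ideal step along x = x0 + tan(theta)*y, blurred by a
# Gaussian PSF of known sigma so the recovered SFR has an analytic reference.
sigma, theta = 1.2, np.deg2rad(8.0)
ny, nx = 100, 64
y, x = np.mgrid[0:ny, 0:nx]
d = x - (nx / 2 + np.tan(theta) * (y - ny / 2))       # horizontal distance to edge (px)
img = 0.5 * (1 + np.vectorize(erf)(d / (sigma * np.sqrt(2))))

# 1) Estimate the edge angle from the data: subpixel edge position per row is
#    the centroid of the row derivative; a line fit gives the tilt.
rows = np.arange(ny)
deriv = np.diff(img, axis=1)
pos = (deriv * (np.arange(nx - 1) + 0.5)).sum(axis=1) / deriv.sum(axis=1)
slope, intercept = np.polyfit(rows, pos, 1)           # slope ~ tan(theta)
tilt_deg = np.degrees(np.arctan(slope))

# 2) Project every pixel onto the fitted edge -> 4x supersampled ESF
dist = x - (intercept + slope * y)
mask = np.abs(dist) < 12
bins = np.round(dist[mask] * 4).astype(int)
bins -= bins.min()
esf = np.bincount(bins, weights=img[mask]) / np.bincount(bins)

# 3) ESF -> LSF -> SFR (normalized DFT magnitude of the line-spread function)
lsf = np.gradient(esf)
sfr = np.abs(np.fft.rfft(lsf))
sfr /= sfr[0]
f = np.fft.rfftfreq(len(lsf), d=0.25)                 # cycles/pixel (0.25 px bins)

k = np.argmin(np.abs(f - 0.2))                        # compare to Gaussian MTF at ~0.2 cy/px
model = np.exp(-2 * (np.pi * sigma * f[k]) ** 2)
print(tilt_deg, sfr[k], model)
```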

  9. STATISTICAL DISTRIBUTIONS OF PARTICULATE MATTER AND THE ERROR ASSOCIATED WITH SAMPLING FREQUENCY. (R828678C010)

    EPA Science Inventory

    The distribution of particulate matter (PM) concentrations has an impact on human health effects and the setting of PM regulations. Since PM is commonly sampled on less than daily schedules, the magnitude of sampling errors needs to be determined. Daily PM data from Spokane, W...
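
    A toy simulation (synthetic lognormal daily data, not the Spokane record) shows how 1-in-6-day sampling leaves a schedule-dependent error in the estimated annual mean; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
daily = rng.lognormal(mean=2.5, sigma=0.5, size=365)   # synthetic daily PM, ug/m3
true_annual = daily.mean()

# 1-in-6-day sampling: the relative error depends on which day the schedule starts
errors = [abs(daily[offset::6].mean() - true_annual) / true_annual
          for offset in range(6)]
print([round(e, 4) for e in errors])
```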

  10. Angular Stable, Dual-Polarized and Multiband Modified Circular Ring Frequency Selective Surface

    NASA Astrophysics Data System (ADS)

    Bharti, Garima; Jha, Kumud Ranjan; Singh, G.; Jyoti, Rajeev

    2015-05-01

    In this paper, a single-layer multiband slot-type frequency selective surface (FSS), which consists of a modified circular ring loaded with a concentric conventional circular ring, is discussed. We have emphasized the design of a multiband FSS structure that is stable with respect to both incidence angle and polarization, with reflection characteristics in the S-band (2-4 GHz)/Ku-band (12-18 GHz) and transmission characteristics in the X-band (8-12 GHz)/Ka-band (24-28 GHz). A novel synthesis technique is used to obtain the geometrical parameters of the proposed multiband FSS structure, which reduces the number of iterations in the computation process. The proposed multiband FSS structure satisfies the design requirements on the frequency response in the chosen frequency bands of the electromagnetic spectrum and provides significant frequency stability as well as 3-dB bandwidth for both perpendicular and parallel polarized wave incidence up to 50°. The slot-type modified circular ring FSS structure has been experimentally tested at X-band to validate the synthesis approach.

  11. Analytical simulation of water system capacity reliability, 1. Modified frequency-duration analysis

    NASA Astrophysics Data System (ADS)

    Hobbs, Benjamin F.; Beim, Gina K.

    1988-09-01

    The problem addressed is the computation of the unavailability and expected unserved demand of a water supply system having random demand, finished water storage, and unreliable capacity components. Examples of such components include pumps, treatment plants, and aqueducts. Modified frequency-duration analysis estimates these reliability statistics by, first, calculating how often demand exceeds available capacity and, second, comparing the amount of water in storage with how long such capacity deficits last. This approach builds upon frequency-duration methods developed by the power industry for analyzing generation capacity deficits. Three versions of the frequency-duration approach are presented. Two yield bounds to system unavailability and unserved demand and the third gives an estimate of their true values between those bounds.
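
    The core unavailability computation can be sketched by enumerating component outage states and comparing available capacity against a discrete demand distribution. The capacities, availabilities, and demand levels below are hypothetical, and the frequency-duration bookkeeping of deficit durations against storage is omitted.

```python
import itertools
import numpy as np

# Hypothetical components: (capacity, availability probability)
components = [(30.0, 0.95), (30.0, 0.95), (20.0, 0.90)]
# Hypothetical demand distribution: (demand level, probability)
demand = [(40.0, 0.6), (55.0, 0.3), (70.0, 0.1)]

unavail = 0.0            # P(demand exceeds available capacity)
expected_unserved = 0.0  # E[max(demand - capacity, 0)]
for states in itertools.product([0, 1], repeat=len(components)):
    p_state = np.prod([a if up else 1 - a
                       for (_, a), up in zip(components, states)])
    cap = sum(c for (c, _), up in zip(components, states) if up)
    for dmd, p_dmd in demand:
        if dmd > cap:
            unavail += p_state * p_dmd
            expected_unserved += p_state * p_dmd * (dmd - cap)

print(unavail, expected_unserved)
```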

  12. Application of modified AOGST to study the low frequency shadow zone in a gas reservoir

    NASA Astrophysics Data System (ADS)

    Abdollahi Aghdam, B.; Riahi, M. Ali

    2015-10-01

    The adaptive optimized window generalized S transform (AOGST), whose window varies with frequency and time, is a method for the time-frequency mapping of a signal. In the AOGST method, an optimized regulation factor is calculated based on the energy concentration of the S transform. The value of this factor is 1 for the standard S transform, whereas in the AOGST method it is confined to the interval [0, 1]. However, the AOGST may not produce an acceptable resolution for all parts of the time-frequency representation. We therefore applied an aggregation of confined-interval adaptive optimized generalized S transforms (ACI-AOGST) instead of the AOGST method. The proposed method applies the modified AOGST to specific frequency and time intervals. By calculating regulation factors for limited frequency and time intervals of the signal, arranging them in a suitable order and applying the ACI-AOGST, one can obtain a transformation with the lowest distortion and highest resolution in comparison to other transformations. The proposed method has been used to analyse the time-frequency distribution of a synthetic signal as well as a real 2D seismic section of a producing gas reservoir located in the south of Iran. The results confirmed the robustness of the ACI-AOGST method.

  13. Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2013-09-30

    Structural Instability and Model Error. Andrew J. Majda, New York University, Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY. ... Majda and his DRI postdoc Sapsis have achieved a potential major breakthrough with a new class of methods for UQ. Turbulent dynamical systems are ... uncertain initial data. These key physical quantities are often characterized by the degrees of freedom which carry the largest energy or variance and

  14. Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2012-09-30

    Instability and Model Error. Principal Investigator: Andrew J. Majda. Institution: New York University, Courant Institute of Mathematical Sciences. ... for the Special Volume of Communications on Pure and Applied Mathematics for the 75th Anniversary of the Courant Institute, April 12, 2012, doi: 10.1002

  15. Modifying the size distribution of microbubble contrast agents for high-frequency subharmonic imaging

    PubMed Central

    Shekhar, Himanshu; Rychak, Joshua J.; Doyley, Marvin M.

    2013-01-01

    Purpose: Subharmonic imaging is of interest for high frequency (>10 MHz) nonlinear imaging, because it can specifically detect the response of ultrasound contrast agents (UCA). However, conventional UCA produce a weak subharmonic response at high frequencies, which limits the sensitivity of subharmonic imaging. We hypothesized that modifying the size distribution of the agent can enhance its high-frequency subharmonic response. The overall goal of this study was to investigate size-manipulated populations of the agent to determine the range of sizes that produce the strongest subharmonic response at high frequencies (in this case, 20 MHz). A secondary goal was to assess whether the number or the volume-weighted size distribution better represents the efficacy of the agent for high-frequency subharmonic imaging. Methods: The authors created six distinct agent size distributions from the native distribution of a commercially available UCA (Targestar-P®). The median (number-weighted) diameter of the native agent was 1.63 μm, while the median diameters of the size-manipulated populations ranged from 1.35 to 2.99 μm. The authors conducted acoustic measurements with native and size-manipulated agent populations to assess their subharmonic response to 20 MHz excitation (pulse duration 1.5 μs, pressure amplitudes 100–398 kPa). Results: The results showed a considerable difference between the subharmonic response of the agent populations that were investigated. The subharmonic response peaked for the agent population with a median diameter of 2.15 μm, which demonstrated a subharmonic signal that was 8 dB higher than the native agent. Comparing the subharmonic response of different UCA populations indicated that microbubbles with diameters between 1.3 and 3 μm are the dominant contributors to the subharmonic response at 20 MHz. Additionally, a better correlation was observed between the subharmonic response of the agent and the number-weighted size-distribution (R2
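
    The number-versus-volume weighting question the study raises can be illustrated directly: for a skewed size distribution, a few large bubbles dominate the gas volume, so the volume-weighted median sits well above the number-weighted one. The lognormal diameters below are synthetic, not Targestar-P data.

```python
import numpy as np

rng = np.random.default_rng(1)
diameters = rng.lognormal(mean=0.4, sigma=0.45, size=100_000)  # bubble diameters, um

def weighted_median(values, weights):
    """Median of `values` under the given weights (here, weight ~ d^3 for volume)."""
    order = np.argsort(values)
    cdf = np.cumsum(weights[order]) / weights.sum()
    return values[order][np.searchsorted(cdf, 0.5)]

num_median = np.median(diameters)                      # number-weighted median
vol_median = weighted_median(diameters, diameters**3)  # volume ~ d^3
print(num_median, vol_median)
```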

  16. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

This article deals with the application of Spatial Time-Frequency Distributions (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is carried out between the method under consideration and the conventional approach based on the data covariance matrix, when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods perform similarly.
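For readers unfamiliar with the baseline the analysis compares against, here is a minimal sketch of conventional covariance-based MUSIC for a uniform linear array; the STFD variant analysed in the article replaces the sample covariance with a spatial time-frequency distribution matrix. Array size, source angles, and SNR below are illustrative, not taken from the article:

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """Conventional covariance-based MUSIC for a uniform linear array.
    X: (n_sensors, n_snapshots) complex data; d: element spacing in wavelengths."""
    M, N = X.shape
    R = X @ X.conj().T / N                    # sample covariance matrix
    _, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = vecs[:, :M - n_sources]              # noise-subspace eigenvectors
    grid = np.linspace(-90, 90, 721)
    a = np.exp(-2j * np.pi * d * np.arange(M)[:, None]
               * np.sin(np.deg2rad(grid))[None, :])
    P = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2   # pseudo-spectrum
    peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
    best = peaks[np.argsort(P[peaks])[-n_sources:]]
    return np.sort(grid[best])

# Two uncorrelated sources at -20 and +30 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(1)
M, N = 8, 400
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * 0.5 * np.arange(M)[:, None] * np.sin(doas)[None, :])
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
est = music_doa(A @ S + noise, n_sources=2)
print(est)  # close to [-20, 30]
```

Calibration errors of the kind the article analyses would perturb the steering matrix `A` relative to the scan vectors `a`, degrading the null depth of the pseudo-spectrum.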

  17. Dry powder inhalers: which factors determine the frequency of handling errors?

    PubMed

    Wieshammer, Siegfried; Dreyhaupt, Jens

    2008-01-01

    Dry powder inhalers are often used ineffectively, resulting in a poor level of disease control. To determine how often essential mistakes are made in the use of Aerolizer, Discus, HandiHaler and Turbuhaler and to study the effects of age, severity of airflow obstruction and previous training in inhalational technique by medical personnel on the error rate. Two hundred and twenty-four newly referred outpatients (age 55.1 +/- 20 years) were asked how they had been acquainted with the inhaler and to demonstrate their inhalational technique. The inhaler-specific error rates were as follows: Aerolizer 9.1%, Discus 26.7%, HandiHaler 53.1% and Turbuhaler 34.9%. Compared to Aerolizer, the odds ratio of an ineffective inhalation was higher for HandiHaler (9.82, p < 0.01) and Turbuhaler (4.84, p < 0.05). The error rate increased with age and with the severity of airway obstruction (p < 0.01). When training had been given as opposed to no training, the odds ratio of ineffective inhalation was 0.22 (p < 0.01). If Turbuhaler is used, the estimated risks range from 9.8% in an 18-year-old patient with normal lung function and previous training to 83.2% in an 80-year-old patient with moderate or severe obstruction who had not received any training. Dry powder inhalers are useful in the management of younger patients with normal lung function or mild airway obstruction. In older patients with advanced chronic obstructive pulmonary disease, the risk of ineffective inhalation remains high despite training in inhalational technique. A metered-dose inhaler with a spacer might be a valuable treatment alternative in a substantial proportion of these patients. (c) 2007 S. Karger AG, Basel.

  18. Frequency Domain Errors in Variables Approach for Two Channel SIMO System Identification

    DTIC Science & Technology

    2009-06-24

Signal et Image, ENSEIRB/UMR CNRS 5218 IMS, Dpt. LAPS, Université Bordeaux 1, France (william.bobillet@etu.u-bordeaux1.fr); Dipartimento di Fisica e… […] without loss of generality. [Figure 1: two-channel SIMO model — a common source s(k) drives channels h1(k) and h2(k); their outputs y1(k), y2(k) are observed as x1(k), x2(k) after additive noises b1(k), b2(k) with variances σ1², σ2².] … developed in the fields of statistics and identification, assume that the available data are disturbed by additive error terms. Given a generic process…

  19. Lexicality and Frequency in Specific Language Impairment: Accuracy and Error Data from Two Nonword Repetition Tests

    ERIC Educational Resources Information Center

    Jones, Gary; Tamburelli, Marco; Watson, Sarah E.; Gobet, Fernand; Pine, Julian M.

    2010-01-01

    Purpose: Deficits in phonological working memory and deficits in phonological processing have both been considered potential explanatory factors in specific language impairment (SLI). Manipulations of the lexicality and phonotactic frequency of nonwords enable contrasting predictions to be derived from these hypotheses. Method: Eighteen typically…

  20. Osteoblast behavior on polytetrafluoroethylene modified by long pulse, high frequency oxygen plasma immersion ion implantation.

    PubMed

    Wang, Huaiyu; Kwok, Dixon T K; Wang, Wei; Wu, Zhengwei; Tong, Liping; Zhang, Yumei; Chu, Paul K

    2010-01-01

Polytetrafluoroethylene (PTFE) is a commonly used medical polymer due to its biological stability and other attractive properties such as high hardness and wear resistance. However, the low surface energy and lack of functional groups to interact with the cellular environment have severely limited its applications in bone or cartilage replacements. Plasma immersion ion implantation (PIII) is a proven effective surface modification technique. However, when conducted on polymeric substrates, conventional PIII experiments typically employ a low pulsing frequency and short pulse duration in order to avoid sample overheating, charging, and plasma sheath extension. In this paper, a long pulse, high frequency O2 PIII process is described to modify PTFE substrates by implementing a shielded grid in the PIII equipment without these aforementioned adverse effects. X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM), and contact angle measurements are carried out to reveal the surface effects on PTFE after long pulse, high frequency O2 PIII, and the results are compared to those obtained from conventional short pulse, low frequency O2 PIII, O2 plasma immersion, and the untreated control samples. Our results show that less oxygen-containing, rougher, and more hydrophobic surfaces are produced on PTFE after long pulse, high frequency O2 PIII compared to the other two treatments. Cell viability assay, ALP activity test, and real-time PCR analysis are also performed to investigate the osteoblast behavior. All three surface modification techniques promote osteoblast adhesion and proliferation on the PTFE substrates. Improvements in the ALP, OPN, and ON expression of the seeded osteoblasts are also evident. However, among these treatments, only long pulse, high frequency O2 PIII can promote the OCN expression of osteoblasts when the incubation time is 12 days. Our data unequivocally disclose that the long pulse, high frequency O2 PIII

  1. Frequency and Distribution of Refractive Error in Adult Life: Methodology and Findings of the UK Biobank Study

    PubMed Central

    Cumberland, Phillippa M.; Bao, Yanchun; Hysi, Pirro G.; Foster, Paul J.; Hammond, Christopher J.; Rahi, Jugnoo S.

    2015-01-01

Purpose To report the methodology and findings of a large-scale investigation of the burden and distribution of refractive error, from a contemporary and ethnically diverse study of health and disease in adults in the UK. Methods UK Biobank, a unique contemporary resource for the study of health and disease, recruited more than half a million people aged 40–69 years. A subsample of 107,452 subjects undertook an enhanced ophthalmic examination which provided autorefraction data (a measure of refractive error). Refractive error status was categorised using the mean spherical equivalent refraction measure. Information on socio-demographic factors (age, gender, ethnicity, educational qualifications and accommodation tenure) was reported at the time of recruitment by questionnaire and face-to-face interview. Results Fifty-four percent of participants aged 40–69 years had refractive error. Specifically, 27% had myopia (4% high myopia), which was more common amongst younger people, those of higher socio-economic status, higher educational attainment, or of White or Chinese ethnicity. The frequency of hypermetropia increased with age (7% at 40–44 years increasing to 46% at 65–69 years), was higher in women, and its severity was associated with ethnicity (moderate or high hypermetropia at least 30% less likely in non-White ethnic groups compared to White). Conclusions Refractive error is a significant public health issue for the UK and this study provides contemporary data on adults for planning services, health economic modelling and monitoring of secular trends. Further investigation of risk factors is necessary to inform strategies for prevention. There is scope to do this through the planned longitudinal extension of the UK Biobank study. PMID:26430771

  2. Lower Bounds on the Frequency Estimation Error in Magnetically Coupled MEMS Resonant Sensors.

    PubMed

    Paden, Brad E

    2016-02-01

    MEMS inductor-capacitor (LC) resonant pressure sensors have revolutionized the treatment of abdominal aortic aneurysms. In contrast to electrostatically driven MEMS resonators, these magnetically coupled devices are wireless so that they can be permanently implanted in the body and can communicate to an external coil via pressure-induced frequency modulation. Motivated by the importance of these sensors in this and other applications, this paper develops relationships among sensor design variables, system noise levels, and overall system performance. Specifically, new models are developed that express the Cramér-Rao lower bound for the variance of resonator frequency estimates in terms of system variables through a system of coupled algebraic equations, which can be used in design and optimization. Further, models are developed for a novel mechanical resonator in addition to the LC-type resonators.
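The paper's bound is specific to its coupled resonator model, but the flavor of a Cramér-Rao frequency bound can be shown with the classical single-tone result for one complex sinusoid in white Gaussian noise. The sampling rate, record length, and SNR below are illustrative, not taken from the paper:

```python
import numpy as np

def crlb_freq_hz2(snr_linear, n_samples, fs):
    """Cramer-Rao lower bound (Hz^2) on the variance of a frequency estimate
    for one complex sinusoid in white Gaussian noise (classical single-tone
    result, not the paper's coupled resonator model)."""
    var_omega = 6.0 / (snr_linear * n_samples * (n_samples**2 - 1))  # (rad/sample)^2
    return var_omega * (fs / (2.0 * np.pi)) ** 2

# Illustrative: 1 ms record sampled at 1 MHz with 20 dB SNR
fs, n, snr = 1.0e6, 1000, 10.0 ** (20 / 10)
sigma_f = np.sqrt(crlb_freq_hz2(snr, n, fs))
print(f"lower bound on frequency std: {sigma_f:.2f} Hz")
```

Note the strong dependence on record length: the bound on the standard deviation scales roughly as N^(-3/2), so doubling the observation time lowers it by about 2.8x, which is why readout duration is a key design variable alongside noise level.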

  3. Error correction coding for frequency-hopping multiple-access spread spectrum communication systems

    NASA Technical Reports Server (NTRS)

    Healy, T. J.

    1982-01-01

    A communication system which would effect channel coding for frequency-hopped multiple-access is described. It is shown that in theory coding can increase the spectrum utilization efficiency of a system with mutual interference to 100 percent. Various coding strategies are discussed and some initial comparisons are given. Some of the problems associated with implementing the type of system described here are discussed.

  4. An analysis of perceptual errors in reading mammograms using quasi-local spatial frequency spectra.

    PubMed

    Mello-Thoms, C; Dunn, S M; Nodine, C F; Kundel, H L

    2001-09-01

    In this pilot study the authors examined areas on a mammogram that attracted the visual attention of experienced mammographers and mammography fellows, as well as areas that were reported to contain a malignant lesion, and, based on their spatial frequency spectrum, they characterized these areas by the type of decision outcome that they yielded: true-positives (TP), false-positives (FP), true-negatives (TN), and false-negatives (FN). Five 2-view (craniocaudal and medial-lateral oblique) mammogram cases were examined by 8 experienced observers, and the eye position of the observers was tracked. The observers were asked to report the location and nature of any malignant lesions present in the case. The authors analyzed each area in which either the observer made a decision or in which the observer had prolonged (>1,000 ms) visual dwell using wavelet packets, and characterized these areas in terms of the energy contents of each spatial frequency band. It was shown that each decision outcome is characterized by a specific profile in the spatial frequency domain, and that these profiles are significantly different from one another. As a consequence of these differences, the profiles can be used to determine which type of decision a given observer will make when examining the area. Computer-assisted perception correctly predicted up to 64% of the TPs made by the observers, 77% of the FPs, and 70% of the TNs.

  5. Further investigations on fixed abrasive diamond pellets used for diminishing mid-spatial frequency errors of optical mirrors.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-01-20

As a further application study of fixed abrasive diamond pellets (FADPs), this work demonstrates their potential for diminishing mid-spatial frequency errors (MSFEs, i.e., periodic small structures) of optical surfaces. Benefiting from its high surface rigidity, the FADPs tool has a natural smoothing effect on periodic small-scale errors. Compared with the previous design, the proposed new tool offers greater compliance to aspherical surfaces because the pellets are mutually separated and bonded to a steel plate with an elastic backing of silica rubber adhesive. Moreover, a unicursal Peano-like path is presented for improving MSFEs, which enhances the multidirectionality and uniformity of the tool's motion. Experiments were conducted to validate the effectiveness of FADPs for diminishing MSFEs. In the lapping of a Φ=420 mm Zerodur paraboloid workpiece, the grinding ripples were quickly diminished (210 min), as confirmed by visual inspection, profile metrology, and power spectral density (PSD) analysis; the RMS was reduced from 4.35 to 0.55 μm. In the smoothing of a Φ=101 mm fused silica workpiece, MSFEs were clearly improved, as seen from the surface form maps, interferometric fringe patterns, and PSD analysis. The mid-spatial frequency RMS was diminished from 0.017λ to 0.014λ (λ=632.8 nm).
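The PSD-based bookkeeping used in such work can be sketched in a few lines: compute the profile's Fourier content and report the RMS restricted to a chosen spatial-frequency band. The profile below is synthetic (a low-order form term plus a 0.2 mm⁻¹ ripple), not measurement data from the paper:

```python
import numpy as np

def band_rms(profile, dx, f_lo, f_hi):
    """RMS of the surface-height content between spatial frequencies f_lo and
    f_hi (1/mm if dx is in mm), via the discrete Fourier amplitudes (Parseval)."""
    n = len(profile)
    h = profile - np.mean(profile)
    H = np.fft.rfft(h)
    f = np.fft.rfftfreq(n, d=dx)          # spatial frequency axis
    power = (np.abs(H) ** 2) / n**2       # per-bin contribution to mean square
    power[1:] *= 2                        # fold in negative frequencies (rfft)
    if n % 2 == 0:
        power[-1] /= 2                    # Nyquist bin is not duplicated
    band = (f >= f_lo) & (f <= f_hi)
    return np.sqrt(power[band].sum())

# Hypothetical 100 mm scan, dx = 0.1 mm: slow form term + 0.5-amplitude ripple
x = np.arange(0, 100, 0.1)
prof = 2.0 * np.sin(2 * np.pi * 0.01 * x) + 0.5 * np.sin(2 * np.pi * 0.2 * x)
print(band_rms(prof, 0.1, 0.1, 1.0))      # isolates the mid-frequency ripple RMS
```

Restricting the sum to a band is what lets before/after smoothing comparisons quote a "mid-spatial frequency RMS" separately from overall form error.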

  6. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  7. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. The aim was to evaluate the frequency and nature of non-clinical transcription errors using VR dictation software, through a retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', and 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' was the most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports of >25 sentences containing an average of 1.23 errors per report compared to 0.09 for reports of 0–5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences of errors with the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.

  8. Performance analysis of modified Asymmetrically-Clipped Optical Orthogonal Frequency-Division Multiplexing systems

    NASA Astrophysics Data System (ADS)

    Mohamed, Salma D.; Shalaby, Hossam M. H.; Andonovic, Ivan; Aly, Moustafa H.

    2016-12-01

A modification to the Asymmetrically-Clipped Optical Orthogonal Frequency-Division Multiplexing (ACO-OFDM) technique is proposed through unipolar encoding. A performance analysis of the Bit Error Rate (BER) is developed and Monte Carlo simulations are carried out to verify the analysis. Results are compared to those of the corresponding ACO-OFDM system under the same bit energy and transmission rate; an improvement of 1 dB is obtained at a BER of 10⁻⁴. In addition, the performance of the proposed system in the presence of atmospheric turbulence is investigated using a single-input multiple-output (SIMO) configuration and compared to that of ACO-OFDM. Energy improvements of 4 dB and 2.2 dB are obtained at a BER of 10⁻⁴ for SIMO systems with 1 and 2 photodetectors at the receiver, respectively, for the case of strong turbulence.
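As background to the record above, the baseline ACO-OFDM transmitter is easy to sketch: data ride only on the odd subcarriers, Hermitian symmetry makes the IFFT output real, and clipping at zero keeps the intensity waveform non-negative while leaving the odd subcarriers undistorted (only halved). This is the standard scheme, not the proposed unipolar-encoded modification; the FFT size and constellation are illustrative:

```python
import numpy as np

def aco_ofdm_symbol(qam, nfft=64):
    """One real, non-negative ACO-OFDM time-domain symbol (no cyclic prefix)."""
    assert len(qam) == nfft // 4
    X = np.zeros(nfft, dtype=complex)
    X[1:nfft // 2:2] = qam                              # odd subcarriers only
    X[nfft // 2 + 1:] = np.conj(X[1:nfft // 2][::-1])   # Hermitian symmetry
    x = np.fft.ifft(X).real
    return np.maximum(x, 0.0)                           # asymmetric clipping at zero

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=32)
qam = (2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)  # 4-QAM on 16 odd subcarriers
tx = aco_ofdm_symbol(qam)

# Clipping noise falls only on the even subcarriers, so demodulation is just an
# FFT of the clipped waveform with a factor-of-2 gain on the odd subcarriers.
recovered = 2.0 * np.fft.fft(tx)[1:32:2]
print(np.allclose(recovered, qam))  # True: odd subcarriers survive clipping intact
```

The halving of the odd-subcarrier amplitude is why ACO-OFDM pays a fixed power penalty, which is the quantity schemes like the proposed unipolar encoding try to reduce.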

  9. Bit error rate performance of pi/4-DQPSK in a frequency-selective fast Rayleigh fading channel

    NASA Technical Reports Server (NTRS)

    Liu, Chia-Liang; Feher, Kamilo

    1991-01-01

    The bit error rate (BER) performance of pi/4-differential quadrature phase shift keying (DQPSK) modems in cellular mobile communication systems is derived and analyzed. The system is modeled as a frequency-selective fast Rayleigh fading channel corrupted by additive white Gaussian noise (AWGN) and co-channel interference (CCI). The probability density function of the phase difference between two consecutive symbols of M-ary differential phase shift keying (DPSK) signals is first derived. In M-ary DPSK systems, the information is completely contained in this phase difference. For pi/4-DQPSK, the BER is derived in a closed form and calculated directly. Numerical results show that for the 24 kBd (48 kb/s) pi/4-DQPSK operated at a carrier frequency of 850 MHz and C/I less than 20 dB, the BER will be dominated by CCI if the vehicular speed is below 100 mi/h. In this derivation, frequency-selective fading is modeled by two independent Rayleigh signal paths. Only one co-channel is assumed in this derivation. The results obtained are also shown to be valid for discriminator detection of M-ary DPSK signals.
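The statement that "the information is completely contained in this phase difference" is the essence of differential detection, and can be illustrated with a noiseless π/4-DQPSK encode/decode round trip. This is a hedged sketch with one common dibit-to-phase mapping; mappings vary between standards:

```python
import numpy as np

def pi4_dqpsk_mod(dibits):
    """pi/4-DQPSK: each dibit selects a phase increment from {+-pi/4, +-3pi/4};
    the information lives entirely in the symbol-to-symbol phase difference."""
    inc = {(0, 0): np.pi / 4, (0, 1): 3 * np.pi / 4,
           (1, 1): -3 * np.pi / 4, (1, 0): -np.pi / 4}
    phase, out = 0.0, []
    for d in dibits:
        phase += inc[tuple(d)]
        out.append(np.exp(1j * phase))
    return np.array(out)

def pi4_dqpsk_demod(symbols):
    """Differential detection: compare consecutive phases; no carrier reference."""
    inv = {1: (0, 0), 3: (0, 1), -3: (1, 1), -1: (1, 0)}
    prev, dibits = 1.0 + 0j, []
    for s in symbols:
        dphi = np.angle(s * np.conj(prev))   # phase difference in (-pi, pi]
        prev = s
        dibits.append(inv[int(np.round(dphi / (np.pi / 4)))])
    return np.array(dibits)

rng = np.random.default_rng(3)
tx = rng.integers(0, 2, size=(50, 2))
rx = pi4_dqpsk_demod(pi4_dqpsk_mod(tx))
print((rx == tx).all())  # True on a noiseless channel
```

Because detection depends only on the difference of adjacent phases, fast fading that rotates both symbols by nearly the same amount largely cancels, which is why the analysis in the record is framed around the pdf of this phase difference.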

  10. Modified reverse tapering method to prevent frequency shift of the radiation in the planar undulator

    NASA Astrophysics Data System (ADS)

    Shim, Chi Hyun; Ko, In Soo; Parc, Yong Woon

    2017-03-01

This paper presents a modified reverse tapering method to generate polarized soft x rays in x-ray free-electron lasers (XFELs) with higher photon power and a shorter undulator length than the simple linear reverse tapering method. In the proposed method, a few untapered planar undulators are added before the simple linear reverse tapering section of the undulator line. This simple modification prevents the frequency shift of the radiation that occurs when the simple linear reverse tapering method is applied to planar undulators. In the proposed method, the total length of the planar undulators decreases in spite of the additional untapered undulators: with four untapered planar undulators, the total length of the planar undulators is 64.6 m, whereas the required length is 94.6 m when the simple linear reverse tapering method is used. The proposed method provides a way to generate a soft x-ray pulse (1.24 keV) with a high degree of polarization (>0.99) and radiation power (>30 GW) at the new undulator line with a 10-GeV electron beam in the Pohang Accelerator Laboratory X-ray Free-Electron Laser. This method can be applied in existing XFELs worldwide without any change to the undulator lines.

  11. Electromagnetic characteristics of surface modified iron nanowires at x-band frequencies

    NASA Astrophysics Data System (ADS)

    Liang, W. F.; Yang, R. B.; Lin, W. S.; Jian, Z. J.; Tsay, C. Y.; Wu, S. H.; Lin, H. M.; Choi, S. T.; Lin, C. K.

    2012-04-01

Surface-modified iron nanowires were prepared via reduction of an iron salt (FeCl3·6H2O) under an applied magnetic field. To minimize surface oxidation, dextran (0.05 and 0.25 wt. %) was added during the process, forming a thin passive layer over the iron nanowires, which were then washed with alcohol or acetone. The complex permittivity (ε′ − jε″) and permeability (μ′ − jμ″) of the absorbers were measured by a cavity perturbation method from 7 to 14 GHz. In the present study, the iron nanowires prepared with 0.25 wt. % dextran and washed with acetone (D25AC) exhibited the best microwave absorption performance. Depending on the test frequency, D25AC possessed the largest permittivity loss, ranging from 0.14 to 0.17, and a relatively small permeability loss (<0.05). Its high permittivity dissipation is responsible for the excellent microwave absorption performance, where the reflection loss was −7.7 dB at a matching frequency of 9.0 GHz.

  12. Super-hydrophobicity and oleophobicity of silicone rubber modified by CF 4 radio frequency plasma

    NASA Astrophysics Data System (ADS)

    Gao, Song-Hua; Gao, Li-Hua; Zhou, Ke-Sheng

    2011-03-01

Owing to its excellent electrical properties, silicone rubber (SIR) has been widely employed in outdoor insulators. To further improve its hydrophobicity and service life, SIR samples were treated by a CF4 radio frequency (RF) capacitively coupled plasma. The hydrophobic and oleophobic properties are characterized by the static contact angle method. The surface morphology of the modified SIR is observed by atomic force microscopy (AFM), and X-ray photoelectron spectroscopy (XPS) is used to follow the changes in the functional groups on the SIR surface caused by the CF4 plasma treatment. The results indicate that the static contact angle of the SIR surface is improved from 100.7° to 150.2° by the CF4 plasma modification; this super-hydrophobic surface (contact angle 150.2°) appears at an RF power of 200 W for a 5 min treatment time. The super-hydrophobicity is ascribed to the combined action of the increased roughness created by the ablation and the formation of an [-SiFx(CH3)2-x-O-]n (x = 1, 2) structure produced by the replacement of methyl groups with F atoms; more importantly, the formation of the [-SiF2-O-]n structure is the major factor for the super-hydrophobic surface. This differs from previous studies, which proposed that fluorocarbon species such as C-F, C-F2, C-F3, CF-CFn, and C-CFn were largely introduced to the polymer surface and were responsible for the low surface energy.

  13. A statistical approach to quantification of genetically modified organisms (GMO) using frequency distributions.

    PubMed

    Gerdes, Lars; Busch, Ulrich; Pecoraro, Sven

    2014-12-14

According to Regulation (EU) No 619/2011, trace amounts of non-authorised genetically modified organisms (GMO) in feed are tolerated within the EU if certain prerequisites are met. Tolerable traces must not exceed the so-called 'minimum required performance limit' (MRPL), which was defined according to the mentioned regulation to correspond to 0.1% mass fraction per ingredient. Therefore, not yet authorised GMO (and some GMO whose approvals have expired) have to be quantified at a very low level following the qualitative detection in genomic DNA extracted from feed samples. As the results of quantitative analysis can imply severe legal and financial consequences for producers or distributors of feed, the quantification results need to be utterly reliable. We developed a statistical approach to investigate the experimental measurement variability within one 96-well PCR plate. This approach visualises the frequency distribution of the zygosity-corrected relative content of genetically modified material resulting from different combinations of transgene and reference gene Cq values. One application is the simulation of the consequences of varying parameters on measurement results; parameters could be, for example, replicate numbers or baseline and threshold settings, and measurement results could be, for example, the median (class) and relative standard deviation (RSD). All calculations can be done using the built-in functions of Excel without any need for programming, and the developed Excel spreadsheets are available (see section 'Availability of supporting data' for details). In most cases, the combination of four PCR replicates for each of the two DNA isolations already resulted in a relative standard deviation of 15% or less. The aims of the study are scientifically based suggestions for minimisation of the uncertainty of measurement, especially in (but not limited to) the field of GMO quantification at low concentration levels.
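The plate-level frequency-distribution idea can be mimicked outside Excel: pair every transgene Cq replicate with every reference-gene Cq replicate and convert each ΔCq into a relative content via 2^−ΔCq, assuming 100 % PCR efficiency. The Cq values and the zygosity factor below are hypothetical, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical Cq replicates (2 DNA isolations x 4 PCR replicates each),
# not data from the cited study; assumes 100 % PCR efficiency (factor 2/cycle)
cq_transgene = rng.normal(33.0, 0.25, size=(2, 4))
cq_reference = rng.normal(23.0, 0.20, size=(2, 4))
zygosity_factor = 1.0   # assumed hemizygosity correction, illustrative only

# All transgene/reference pairings within the plate -> frequency distribution
delta_cq = cq_transgene.reshape(-1, 1) - cq_reference.reshape(1, -1)
content = zygosity_factor * 2.0 ** (-delta_cq) * 100.0   # % relative content

print(f"median : {np.median(content):.4f} %")
print(f"RSD    : {100 * np.std(content) / np.mean(content):.1f} %")
```

With a ΔCq near 10 cycles the simulated contents cluster around the 0.1 % MRPL, which is exactly the regime where replicate-to-replicate Cq scatter dominates the reported RSD.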

  14. Allele frequency misspecification: effect on power and Type I error of model-dependent linkage analysis of quantitative traits under random ascertainment.

    PubMed

    Mandal, Diptasri M; Sorant, Alexa J M; Atwood, Larry D; Wilson, Alexander F; Bailey-Wilson, Joan E

    2006-04-20

    Studies of model-based linkage analysis show that trait or marker model misspecification leads to decreasing power or increasing Type I error rate. An increase in Type I error rate is seen when marker related parameters (e.g., allele frequencies) are misspecified and ascertainment is through the trait, but lod-score methods are expected to be robust when ascertainment is random (as is often the case in linkage studies of quantitative traits). In previous studies, the power of lod-score linkage analysis using the "correct" generating model for the trait was found to increase when the marker allele frequencies were misspecified and parental data were missing. An investigation of Type I error rates, conducted in the absence of parental genotype data and with misspecification of marker allele frequencies, showed that an inflation in Type I error rate was the cause of at least part of this apparent increased power. To investigate whether the observed inflation in Type I error rate in model-based LOD score linkage was due to sampling variation, the trait model was estimated from each sample using REGCHUNT, an automated segregation analysis program used to fit models by maximum likelihood using many different sets of initial parameter estimates. The Type I error rates observed using the trait models generated by REGCHUNT were usually closer to the nominal levels than those obtained when assuming the generating trait model. This suggests that the observed inflation of Type I error upon misspecification of marker allele frequencies is at least partially due to sampling variation. Thus, with missing parental genotype data, lod-score linkage is not as robust to misspecification of marker allele frequencies as has been commonly thought.

  15. Allele frequency misspecification: effect on power and Type I error of model-dependent linkage analysis of quantitative traits under random ascertainment

    PubMed Central

    Mandal, Diptasri M; Sorant, Alexa JM; Atwood, Larry D; Wilson, Alexander F; Bailey-Wilson, Joan E

    2006-01-01

    Background Studies of model-based linkage analysis show that trait or marker model misspecification leads to decreasing power or increasing Type I error rate. An increase in Type I error rate is seen when marker related parameters (e.g., allele frequencies) are misspecified and ascertainment is through the trait, but lod-score methods are expected to be robust when ascertainment is random (as is often the case in linkage studies of quantitative traits). In previous studies, the power of lod-score linkage analysis using the "correct" generating model for the trait was found to increase when the marker allele frequencies were misspecified and parental data were missing. An investigation of Type I error rates, conducted in the absence of parental genotype data and with misspecification of marker allele frequencies, showed that an inflation in Type I error rate was the cause of at least part of this apparent increased power. To investigate whether the observed inflation in Type I error rate in model-based LOD score linkage was due to sampling variation, the trait model was estimated from each sample using REGCHUNT, an automated segregation analysis program used to fit models by maximum likelihood using many different sets of initial parameter estimates. Results The Type I error rates observed using the trait models generated by REGCHUNT were usually closer to the nominal levels than those obtained when assuming the generating trait model. Conclusion This suggests that the observed inflation of Type I error upon misspecification of marker allele frequencies is at least partially due to sampling variation. Thus, with missing parental genotype data, lod-score linkage is not as robust to misspecification of marker allele frequencies as has been commonly thought. PMID:16618369

  16. Serotonergic hallucinogens differentially modify gamma and high frequency oscillations in the rat nucleus accumbens.

    PubMed

    Goda, Sailaja A; Piasecka, Joanna; Olszewski, Maciej; Kasicki, Stefan; Hunt, Mark J

    2013-07-01

The nucleus accumbens (NAc) is a site critical for the actions of many drugs of abuse. Psychoactive compounds, such as N-methyl-D-aspartate receptor (NMDAR) antagonists, modify gamma (40-90 Hz) and high frequency oscillations (HFO, 130-180 Hz) in local field potentials (LFPs) recorded in the NAc. Lysergic acid diethylamide (LSD) and 2,5-dimethoxy-4-iodoamphetamine (DOI) are serotonergic hallucinogens, and activation of 5HT2A receptors likely underlies their hallucinogenic effects. Whether these compounds can also modulate LFP oscillations in the NAc is unclear. This study aims to examine the effect of serotonergic hallucinogens on gamma and HFO recorded in the NAc and to test whether 5HT2A receptors mediate the effects observed. LFPs were recorded from the NAc of freely moving rats. Drugs were administered intraperitoneally. LSD (0.03-0.3 mg/kg) and DOI (0.5-2.0 mg/kg) increased the power and reduced the frequency of HFO. In contrast, the hallucinogens produced a robust reduction in the power of low (40-60 Hz), but not high, gamma oscillations (70-90 Hz). MDL 11939 (1.0 mg/kg), a 5HT2A receptor antagonist, fully reversed the changes induced by DOI on HFO but only partially for the low gamma band. Equivalent increases in HFO power were observed after TCB-2 (a 5HT2A receptor agonist, 0.1-1.5 mg/kg), but not CP 809101 (a 5HT2C receptor agonist, 0.1-3 mg/kg). Notably, hallucinogen-induced increases in HFO power were smaller than those produced by ketamine (25 mg/kg). Serotonergic hallucinogen-induced changes in HFO and gamma are mediated, at least in part, by stimulation of 5HT2A receptors. A comparison of the oscillatory changes produced by serotonergic hallucinogens and NMDAR antagonists is also discussed.

  17. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
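The trade-off the authors describe, natural variability versus instrument noise, is often quantified with the Weatherhead et al. (1998) approximation for trend-detection time. The sketch below uses that published formula with illustrative numbers, not values from this study; it shows that adding an instrumental random error in quadrature to a dominant natural variability barely lengthens the detection time:

```python
import numpy as np

def years_to_detect(trend_per_year, noise_std, phi):
    """Approximate years of monthly-mean data needed to detect a linear trend
    with ~90 % probability (Weatherhead et al. 1998); phi is the lag-1
    autocorrelation of the monthly noise. Inputs here are illustrative."""
    return (3.3 * (noise_std / abs(trend_per_year))
            * np.sqrt((1 + phi) / (1 - phi))) ** (2.0 / 3.0)

# Natural variability dominates: instrument noise adds in quadrature
base = years_to_detect(trend_per_year=0.01, noise_std=0.10, phi=0.2)
noisy = years_to_detect(trend_per_year=0.01,
                        noise_std=np.hypot(0.10, 0.05), phi=0.2)
print(f"perfect instrument : {base:.1f} yr")
print(f"with 50 % extra random error in quadrature: {noisy:.1f} yr")
```

Because the two error sources combine in quadrature and the detection time grows only as the 2/3 power of the total noise, a sizeable instrumental random error changes the answer by only a few percent, whereas halving the sampling interval directly reduces the monthly-mean noise.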

  19. Modified Balance Error Scoring System (M-BESS) test scores in athletes wearing protective equipment and cleats

    PubMed Central

    Azad, Aftab Mohammad; Al Juma, Saad; Bhatti, Junaid Ahmad; Delaney, J Scott

    2016-01-01

    Background Balance testing is an important part of the initial concussion assessment. There is no research on the differences in Modified Balance Error Scoring System (M-BESS) scores when tested in real-world conditions as compared to control conditions. Objective To assess the difference in M-BESS scores in athletes wearing their protective equipment and cleats on different surfaces as compared to control conditions. Methods This cross-sectional study examined university North American football and soccer athletes. Three observers independently rated athletes performing the M-BESS test in three different conditions: (1) wearing shorts and T-shirt in bare feet on firm surface (control); (2) wearing athletic equipment with cleats on FieldTurf; and (3) wearing athletic equipment with cleats on firm surface. Mean M-BESS scores were compared between conditions. Results 60 participants were recruited: 39 from football (all males) and 21 from soccer (11 males and 10 females). Average age was 21.1 years (SD=1.8). Mean M-BESS scores were significantly lower (p<0.001) for cleats on FieldTurf (mean=26.3; SD=2.0) and for cleats on firm surface (mean=26.6; SD=2.1) as compared to the control condition (mean=28.4; SD=1.5). Females had lower scores than males for the cleats on FieldTurf condition (24.9 (SD=1.9) vs 27.3 (SD=1.6), p=0.005). Players who had taping or bracing on their ankles/feet had lower scores when tested in the cleats on firm surface condition (24.6 (SD=1.7) vs 26.9 (SD=2.0), p=0.002). Conclusions Total M-BESS scores for athletes wearing protective equipment and cleats standing on FieldTurf or a firm surface are around two points lower than M-BESS scores performed on the same athletes under control conditions. PMID:27900181

  20. Application of a modified complementary filtering technique for increased aircraft control system frequency bandwidth in high vibration environment

    NASA Technical Reports Server (NTRS)

    Garren, J. F., Jr.; Niessen, F. R.; Abbott, T. S.; Yenni, K. R.

    1977-01-01

    A modified complementary filtering technique for estimating aircraft roll rate was developed and flown in a research helicopter to determine whether higher gains could be achieved. Use of this technique did, in fact, permit a substantial increase in system frequency bandwidth because, in comparison with first-order filtering, it reduced both noise amplification and control limit-cycle tendencies.
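
    The paper's modified scheme is not detailed in this record; as background, a classic first-order complementary filter blends two sensors so that their low-pass and high-pass contributions sum to unity. A minimal generic sketch (not the flight implementation):

    ```python
    def complementary_estimate(angle_meas, rate_meas, dt, alpha=0.98):
        """Classic first-order complementary filter (background illustration,
        not the paper's modified scheme): integrate the rate sensor, which is
        trustworthy at high frequency, and correct its drift with the angle
        sensor, which is trustworthy at low frequency.  The two paths act as
        complementary high-pass and low-pass filters that sum to unity."""
        est = angle_meas[0]
        out = [est]
        for k in range(1, len(angle_meas)):
            est = alpha * (est + rate_meas[k] * dt) + (1.0 - alpha) * angle_meas[k]
            out.append(est)
        return out

    # With consistent sensors (angle is the integral of rate) the blend tracks exactly
    dt = 0.01
    angle = [k * dt for k in range(1000)]   # 1 rad/s ramp
    rate = [1.0] * 1000
    print(complementary_estimate(angle, rate, dt)[-1])
    ```

    Raising alpha pushes the crossover frequency down, leaning harder on the rate path and attenuating high-frequency noise in the angle path; trading off these two noise sources is the kind of bandwidth gain the abstract describes.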

  1. Large Scale Parameter Estimation Problems in Frequency-Domain Elastodynamics Using an Error in Constitutive Equation Functional

    PubMed Central

    Banerjee, Biswanath; Walsh, Timothy F.; Aquino, Wilkins; Bonnet, Marc

    2012-01-01

    This paper presents the formulation and implementation of an Error in Constitutive Equations (ECE) method suitable for large-scale inverse identification of linear elastic material properties in the context of steady-state elastodynamics. In ECE-based methods, the inverse problem is postulated as an optimization problem in which the cost functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses. Furthermore, in a more recent modality of this methodology introduced by Feissel and Allix (2007), referred to as the Modified ECE (MECE), the measured data is incorporated into the formulation as a quadratic penalty term. We show that a simple and efficient continuation scheme for the penalty term, suggested by the theory of quadratic penalty methods, can significantly accelerate the convergence of the MECE algorithm. Furthermore, a (block) successive over-relaxation (SOR) technique is introduced, enabling the use of existing parallel finite element codes with minimal modification to solve the coupled system of equations that arises from the optimality conditions in MECE methods. Our numerical results demonstrate that the proposed methodology can successfully reconstruct the spatial distribution of elastic material parameters from partial and noisy measurements in as few as ten iterations in a 2D example and fifty in a 3D example. We show (through numerical experiments) that the proposed continuation scheme can improve the rate of convergence of MECE methods by at least an order of magnitude versus the alternative of using a fixed penalty parameter. Furthermore, the proposed block SOR strategy coupled with existing parallel solvers produces a computationally efficient MECE method that can be used for large scale materials identification problems, as demonstrated on a 3D example involving about 400,000 unknown moduli. Finally, our numerical results suggest that the proposed MECE
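
    In the notation common to the MECE literature (symbols here are generic, not copied from the paper), the functional being minimized combines the constitutive discrepancy with a quadratic data-misfit penalty:

    ```latex
    \mathcal{E}_{\kappa}(u,\sigma)
      = \tfrac{1}{2}\int_{\Omega}
          \bigl(\sigma - C:\varepsilon(u)\bigr) : C^{-1} :
          \bigl(\sigma - C:\varepsilon(u)\bigr)\,\mathrm{d}\Omega
        \;+\; \tfrac{\kappa}{2}\,\bigl\lVert Q u - d \bigr\rVert^{2}
    ```

    with u a kinematically admissible displacement field, sigma a dynamically admissible stress field (satisfying momentum balance), C the elasticity tensor being identified, Q the observation operator, d the measured data, and kappa the penalty parameter that the continuation scheme drives upward across iterations.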

  2. Modified Ashworth scale and spasm frequency score in spinal cord injury: reliability and correlation.

    PubMed

    Baunsgaard, C B; Nissen, U V; Christensen, K B; Biering-Sørensen, F

    2016-09-01

    Intra- and inter-rater reliability study. To assess intra- and inter-rater reliability of the Modified Ashworth Scale (MAS) and Spasm Frequency Score (SFS) in lower extremities in a population of spinal cord-injured persons, as well as correlations between the two scales. Clinic for Spinal Cord Injuries, Rigshospitalet, Hornbaek, Denmark. Thirty-one persons participated in the study and were tested four times in total with MAS and SFS by three experienced raters. Cohen's kappa (κ), simple and quadratic weighted (nominal and ordinal scale level of measurement), was used as a measure of reliability, and Spearman's rank correlation coefficient for correlation between MAS and SFS. Neurological level ranged from C2 to L2 and American Spinal Injury Association impairment scale A to D. Time since injury was (mean±s.d.) 3.4±6.5 years. Age was 48.3±20.2 years. Cause of injury was traumatic for 55% and non-traumatic for 45% of the participants. Antispastic medication was used by 61%. MAS showed intra-rater κsimple=-0.11 to 0.46 and κweighted=-0.11 to 0.83. Inter-rater κsimple=-0.06 to 0.32 and κweighted=0.08 to 0.74. SFS showed intra-rater κweighted=0.94 and inter-rater κweighted=0.93. Correlation between MAS and SFS showed non-significant correlation coefficients from -0.11 to 0.90. The reliability of MAS is highly affected by the weighting scheme: with a weighted κ it was overall reliable, whereas with a simple κ it was overall unreliable. Repeated tests should always be performed by the same rater and in a very standardized manner. SFS was found reliable. MAS and SFS are poorly correlated and their ratings were inversely distributed, suggesting that the two scales assess different aspects of spasticity.
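
    The sensitivity to the weighting scheme reported above follows directly from the definition of weighted κ: quadratic weights penalize large ordinal disagreements far more than adjacent-category ones. A self-contained sketch of quadratic-weighted Cohen's κ (the standard definition, not the study's software):

    ```python
    def quadratic_weighted_kappa(r1, r2, categories):
        """Quadratic-weighted Cohen's kappa for two raters on an ordinal scale.

        r1, r2     : equal-length lists of ratings
        categories : ordered list of the possible ratings
        Returns 1 for perfect agreement, 0 for chance-level agreement."""
        k = len(categories)
        idx = {c: i for i, c in enumerate(categories)}
        n = len(r1)
        # Observed joint rating proportions and the two marginal distributions
        obs = [[0.0] * k for _ in range(k)]
        for a, b in zip(r1, r2):
            obs[idx[a]][idx[b]] += 1.0 / n
        p1 = [sum(row) for row in obs]
        p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
        # Quadratic disagreement weights: zero on the diagonal, growing as (i-j)^2
        w = [[((i - j) / (k - 1)) ** 2 for j in range(k)] for i in range(k)]
        d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
        d_exp = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
        return 1.0 - d_obs / d_exp
    ```

    Replacing the quadratic weights with an identity-vs-not scheme recovers simple (unweighted) κ, which is why the two variants can disagree so sharply on the same ratings.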

  3. Propagation of Forecast Errors from the Sun to LEO Trajectories: How Does Drag Uncertainty Affect Conjunction Frequency?

    DTIC Science & Technology

    2014-09-01

    hour; for our initial computation, we used a solar EUV irradiance uncertainty of 7% at a forecast time of 7 days, so that the forecast error at the... We developed approximate expressions for how solar irradiance forecast errors propagate to atmospheric density forecasts and to in-track... trajectories of most objects in low-Earth orbit, and solar variability is the largest source of error in upper atmospheric density forecasts. There is

  4. Dynamic optimization of stimulation frequency to reduce isometric muscle fatigue using a modified Hill-Huxley model.

    PubMed

    Doll, Brian D; Kirsch, Nicholas A; Bao, Xuefeng; Dicianno, Brad E; Sharma, Nitin

    2017-08-18

    Optimal frequency modulation during functional electrical stimulation (FES) may minimize or delay the onset of FES-induced muscle fatigue. An offline dynamic optimization method, constrained to a modified Hill-Huxley model, was used to determine the minimum number of pulses that would maintain a constant desired isometric contraction force. Six able-bodied participants were recruited for the experiments, and their quadriceps muscles were stimulated while they sat on a leg extension machine. The force-time (F-T) integrals and peak forces after the pulse train was delivered were found to be statistically significantly greater than the force-time integrals and peak forces obtained after a constant frequency train was delivered. Experimental results indicated that the optimized pulse trains induced lower levels of muscle fatigue compared with constant frequency pulse trains. This could have a potential advantage over current FES methods that often choose a constant frequency stimulation train. Muscle Nerve, 2017. © 2017 Wiley Periodicals, Inc.

  5. Analysis on error of laser frequency locking for fiber optical receiver in direct detection wind lidar based on Fabry-Perot interferometer and improvements

    NASA Astrophysics Data System (ADS)

    Zhang, Feifei; Dou, Xiankang; Sun, Dongsong; Shu, Zhifeng; Xia, Haiyun; Gao, Yuanyuan; Hu, Dongdong; Shangguan, Mingjia

    2014-12-01

    Direct detection Doppler wind lidar (DWL) has been demonstrated for its capability of atmospheric wind detection ranging from the troposphere to the stratosphere with high temporal and spatial resolution. We design and describe a fiber-based optical receiver for direct detection DWL. The locking error of the relative laser frequency is then analyzed, and the dependent variables turn out to be the relative error of the calibrated constant and the slope of the transmission function. For high-accuracy measurement of the calibrated constant in a fiber-based system, an integrating sphere is employed for its uniform scattering. Moreover, temporally widening the laser pulse allows more samples to be acquired by an analog-to-digital card of the same sampling rate. The result shows a relative error of 0.7% for the calibrated constant. For the latter, a new improved locking filter for a Fabry-Perot interferometer was designed with a larger slope. With these two strategies, the locking error for the relative laser frequency is calculated to be about 3 MHz, which is equivalent to a radial velocity of about 0.53 m/s and demonstrates the effective improvement of frequency locking for a robust DWL.
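
    The stated equivalence of 3 MHz to about 0.53 m/s follows from the lidar Doppler relation Δν = 2v/λ. A sketch assuming a 355 nm operating wavelength (a common tripled Nd:YAG line for direct-detection DWL; the abstract does not state the wavelength):

    ```python
    def radial_velocity_error(freq_error_hz, wavelength_m=355e-9):
        """Radial-velocity equivalent of a laser frequency error for a Doppler
        lidar.  A target moving at radial velocity v shifts the backscattered
        light by df = 2*v/lambda, so v = lambda * df / 2.  The 355 nm default
        is an assumption, not taken from the abstract."""
        return wavelength_m * freq_error_hz / 2.0

    print(radial_velocity_error(3e6))  # ~0.53 m/s, consistent with the abstract
    ```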

  6. Description of transdermal transport of hydrophilic solutes during low-frequency sonophoresis based on a modified porous pathway model.

    PubMed

    Tezel, Ahmet; Sens, Ashley; Mitragotri, Samir

    2003-02-01

    Application of low-frequency ultrasound has been shown to increase skin permeability, thereby facilitating delivery of macromolecules (low-frequency sonophoresis). In this study, we sought to determine a theoretical description of transdermal transport of hydrophilic permeants induced by low-frequency sonophoresis. Parameters such as pore size distribution, absolute porosity, and dependence of effective tortuosity on solute characteristics were investigated. Pig skin was exposed to low-frequency ultrasound at 58 kHz to achieve different skin resistivities. Transdermal delivery of four permeants [mannitol, luteinizing hormone releasing hormone (LHRH), inulin, dextran] in the presence and absence of ultrasound was measured. The porous pathway model was modified to incorporate the permeant characteristics into the model and to achieve a detailed understanding of the pathways responsible for hydrophilic permeant delivery. The slopes of the log kp versus log R graphs for individual solutes changed with solute molecular area, suggesting that the permeability-resistivity correlation for each permeant is related to its size. The tortuosity that a permeant experiences within the skin also depends on its size, where larger molecules experience a less tortuous path. With the modified porous pathway model, the effective tortuosities and skin porosity were calculated independently. The results of this study show that low-frequency sonophoresis creates pathways for permeant delivery with a wide range of pore sizes. The optimum pore size utilized by solutes is related to their molecular radii.

  7. Spatial-carrier phase-shifting digital holography utilizing spatial frequency analysis for the correction of the phase-shift error.

    PubMed

    Tahara, Tatsuki; Shimozato, Yuki; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Matoba, Osamu; Kubota, Toshihiro

    2012-01-15

    We propose a single-shot digital holography technique in which the complex amplitude distribution is obtained by spatial-carrier phase-shifting (SCPS) interferometry together with correction of the inherent phase-shift error that occurs in this interferometry. The 0th order diffraction wave and the conjugate image are removed by phase-shifting interferometry and the Fourier transform technique, respectively. The inherent error is corrected in the spatial frequency domain. The proposed technique does not require an iteration process to remove the unwanted images and has an advantage in the field of view in comparison to a conventional SCPS technique.

  8. A new modified differential evolution algorithm scheme-based linear frequency modulation radar signal de-noising

    NASA Astrophysics Data System (ADS)

    Dawood Al-Dabbagh, Mohanad; Dawoud Al-Dabbagh, Rawaa; Raja Abdullah, R. S. A.; Hashim, F.

    2015-06-01

    The main intention of this study was to investigate the development of a new optimization technique based on the differential evolution (DE) algorithm, for the purpose of linear frequency modulation radar signal de-noising. As the standard DE algorithm is a fixed length optimizer, it is not suitable for solving signal de-noising problems that call for variability. A modified crossover scheme called rand-length crossover was designed to fit the proposed variable-length DE, and the new DE algorithm is referred to as the random variable-length crossover differential evolution (rvlx-DE) algorithm. The measurement results demonstrate a highly efficient capability for target detection in terms of frequency response and peak forming that was isolated from noise distortion. The modified method showed significant improvements in performance over traditional de-noising techniques.
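
    The abstract names but does not define the rand-length crossover operator. Purely as an illustration of the idea of a variable-length trial vector, one might sketch it as follows (the operator details here are assumptions, not the paper's algorithm):

    ```python
    import random

    def rand_length_crossover(target, donor, cr=0.9):
        """Hypothetical sketch of a variable-length DE crossover: the trial
        vector adopts the length of one parent at random, then mixes genes
        positionwise (with crossover rate cr) where both parents are defined.
        All details are illustrative assumptions."""
        length = random.choice((len(target), len(donor)))
        trial = []
        for i in range(length):
            if i < len(donor) and (i >= len(target) or random.random() < cr):
                trial.append(donor[i])   # take the donor's gene
            else:
                trial.append(target[i])  # keep the target's gene
        return trial
    ```

    A variable-length trial vector is what lets the optimizer adapt the number of coefficients in the de-noising problem, which a fixed-length DE cannot do.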

  9. The Impact of a Modified Cooperative Learning Technique on the Grade Frequencies Observed in a Preparatory Chemistry Course

    NASA Astrophysics Data System (ADS)

    Hayes Russell, Bridget J.

    This dissertation explored the impact of a modified cooperative learning technique on the final grade frequencies observed in a large preparatory chemistry course designed for pre-science majors. Although the use of cooperative learning at all educational levels is well researched and validated in the literature, traditional lectures still dominate as the primary methodology of teaching. This study modified cooperative learning techniques by addressing commonly cited reasons for not using the methodology. Preparatory chemistry students were asked to meet in cooperative groups outside of class time to complete homework assignments. A chi-square goodness-of-fit test revealed that the observed final grade frequency distributions differed from those expected. Although the distribution was significantly different, the resource investment required by this particular design challenged the practical significance of the findings. Further, responses from a survey revealed that the students did not use the suggested group functioning methods that are empirically known to lead to more practically significant results.
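
    The chi-square goodness-of-fit test mentioned above compares observed grade counts against counts expected under a reference distribution. A sketch with hypothetical grade counts (the dissertation's actual data are not reproduced here):

    ```python
    def chi_square_gof(observed, expected):
        """Chi-square goodness-of-fit statistic: sum over categories of
        (O - E)^2 / E.  Both lists must total the same number of students."""
        assert abs(sum(observed) - sum(expected)) < 1e-9
        return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

    # Hypothetical A-F grade counts for 100 students vs a historical distribution
    print(chi_square_gof([30, 25, 20, 15, 10], [20, 25, 25, 20, 10]))  # → 7.25
    ```

    With five grade categories the statistic has four degrees of freedom; significance is judged against the chi-square critical value at the chosen alpha level.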

  10. Contact Ratios and Transmission Errors of a Helical Gear Set with Involute-Teeth Pinion and Modified-Circular-Arc-Teeth Gear

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Cheng; Tsay, Chung-Biau

    Contact ratio (CR) and transmission error (TE) are two significant indices for gear tooth strength and dynamics. This work investigated the CR, contact teeth (CT) and TE of a helical gear pair composed of an involute pinion and a modified circular-arc gear. Point contact and a built-in parabolic TE are obtained due to the modification of the gear’s tooth profile. Tooth contact analysis (TCA) is applied to determine the TE as well as the CR of the proposed helical gear set under various assembly conditions. The effects of gear design parameters on the CRs and TEs are also investigated in numerical examples.

  11. A Modified Magnitude System that Produces Well-Behaved Magnitudes, Colors, and Errors Even for Low Signal-to-Noise Ratio Measurements

    NASA Astrophysics Data System (ADS)

    Lupton, Robert H.; Gunn, James E.; Szalay, Alexander S.

    1999-09-01

    We describe a modification of the usual definition of astronomical magnitudes, replacing the usual logarithm with an inverse hyperbolic sine function; we call these modified magnitudes "asinh magnitudes." For objects detected at signal-to-noise ratios of greater than about 5, our modified definition is essentially identical to the traditional one; for fainter objects (including those with a formally negative flux), our definition is well behaved, tending to a definite value with finite errors as the flux goes to zero. This new definition is especially useful when considering the colors of faint objects, as the difference of two "asinh" magnitudes measures the usual flux ratio for bright objects, while avoiding the problems caused by dividing two very uncertain values for faint objects. The Sloan Digital Sky Survey data products will use this scheme to express all magnitudes in their catalogs.
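
    The definition is compact enough to state exactly: with flux f expressed in units of the zero-point flux and a dimensionless softening parameter b of order the flux noise, the asinh magnitude is m = -(2.5/ln 10)[asinh(f/2b) + ln b]. A sketch showing the bright-end agreement with conventional magnitudes and the finite value at zero flux:

    ```python
    import math

    def asinh_magnitude(flux_ratio, b):
        """Asinh magnitude of Lupton, Gunn & Szalay (1999):
        m = -(2.5/ln 10) * [asinh(f/2b) + ln b],
        where flux_ratio = f/f0 is the flux in zero-point units and b is a
        dimensionless softening parameter of order the flux noise."""
        return -(2.5 / math.log(10)) * (math.asinh(flux_ratio / (2.0 * b))
                                        + math.log(b))

    def pogson_magnitude(flux_ratio):
        """Conventional logarithmic magnitude; undefined for flux <= 0."""
        return -2.5 * math.log10(flux_ratio)

    # Bright object: the two definitions agree closely
    print(asinh_magnitude(1e-3, 1e-10), pogson_magnitude(1e-3))
    # Zero flux: the asinh magnitude stays finite
    print(asinh_magnitude(0.0, 1e-10))
    ```

    For f much greater than b the asinh term reduces to ln(f/b) and the definition collapses to the usual -2.5 log10 f; at f = 0 it tends smoothly to a finite magnitude with finite errors, which is what makes faint-object colors well behaved.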

  12. NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency.

    PubMed

    Das, Sudeb; Kundu, Malay Kumar

    2012-10-01

    In this article, a novel multimodal medical image fusion (MIF) method based on non-subsampled contourlet transform (NSCT) and pulse-coupled neural network (PCNN) is presented. The proposed MIF scheme exploits the advantages of both the NSCT and the PCNN to obtain better fusion results. The source medical images are first decomposed by NSCT. The low-frequency subbands (LFSs) are fused using the 'max selection' rule. For fusing the high-frequency subbands (HFSs), a PCNN model is utilized. Modified spatial frequency in NSCT domain is input to motivate the PCNN, and coefficients in NSCT domain with large firing times are selected as coefficients of the fused image. Finally, inverse NSCT (INSCT) is applied to get the fused image. Subjective as well as objective analysis of the results and comparisons with state-of-the-art MIF techniques show the effectiveness of the proposed scheme in fusing multimodal medical images.
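
    The fusion rule above is driven by a spatial-frequency measure. The classical definition is the RMS of horizontal and vertical first differences; the paper's "modified" variant adds further difference terms and is applied to NSCT coefficients, but only the classical form is sketched here:

    ```python
    import math

    def spatial_frequency(img):
        """Classical spatial frequency of an image block (list of rows):
        sqrt(RF^2 + CF^2), where RF and CF are the mean-square horizontal and
        vertical first differences.  The paper's modified variant adds more
        (e.g. diagonal) terms; this is the textbook form only."""
        m, n = len(img), len(img[0])
        rf = sum((img[i][j] - img[i][j - 1]) ** 2
                 for i in range(m) for j in range(1, n))
        cf = sum((img[i][j] - img[i - 1][j]) ** 2
                 for i in range(1, m) for j in range(n))
        return math.sqrt((rf + cf) / (m * n))

    print(spatial_frequency([[0, 1], [1, 0]]))  # 2x2 checkerboard → 1.0
    ```

    A higher spatial frequency marks a busier, more detailed block, which is why it is a natural stimulus for firing the PCNN when choosing between coefficient sources.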

  13. Analysis and design of modified window shapes for S-transform to improve time-frequency localization

    NASA Astrophysics Data System (ADS)

    Ma, Jianping; Jiang, Jin

    2015-06-01

    This paper deals with window design issues for modified S-transform (MST) to improve the performance of time-frequency analysis (TFA). After analyzing the drawbacks of existing window functions, a window design technique is proposed. The technique uses a sigmoid function to control the window width in frequency domain. By proper selection of certain tuning parameters of a sigmoid function, windows with different width profiles can be obtained for multi-component signals. It is also interesting to note that the MST algorithm can be considered as a special case of a generalized method that adds a tunable shaping function to the standard window in frequency domain to meet specific frequency localization needs. The proposed design technique has been validated on a physical vibration test system using signals with different characteristics. The results have demonstrated that the proposed MST algorithm has superior time-frequency localization capabilities over standard ST, as well as other classical TFA methods. Subsequently, the proposed MST algorithm is applied to vibration monitoring of pipes in a water supply process controlled by a diaphragm pump for fault detection purposes.
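
    The core idea of a sigmoid-controlled window width can be sketched generically; the parameter names and exact profile below are assumptions for illustration, not the paper's tuning scheme:

    ```python
    import math

    def window_width(f, w_min, w_max, f0, steepness):
        """Hypothetical sigmoid width profile for an MST Gaussian window: the
        frequency-domain window width eases smoothly from w_min below the
        transition frequency f0 to w_max above it.  Names and profile are
        illustrative assumptions."""
        return w_min + (w_max - w_min) / (1.0 + math.exp(-steepness * (f - f0)))
    ```

    Tuning f0 and the steepness lets different frequency bands of a multi-component signal get different time-frequency trade-offs, which is the flexibility the standard Gaussian-window S-transform lacks.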

  14. Fast and robust population transfer in two-level quantum systems with dephasing noise and/or systematic frequency errors

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Jing; Chen, Xi; Ruschhaupt, A.; Alonso, D.; Guérin, S.; Muga, J. G.

    2013-09-01

    We design, by invariant-based inverse engineering, driving fields that invert the population of a two-level atom in a given time, robustly with respect to dephasing noise and/or systematic frequency shifts. Without imposing constraints, optimal protocols are insensitive to the perturbations but need an infinite energy. For a constrained value of the Rabi frequency, a flat π pulse is the least sensitive protocol to phase noise but not to systematic frequency shifts, for which we describe and optimize a family of protocols.
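
    The sensitivity of a flat π pulse to a systematic frequency error follows from the standard Rabi solution for a constant drive of amplitude Ω and detuning Δ:

    ```python
    import math

    def inversion_probability(rabi, detuning, duration):
        """Excited-state population for a flat (constant-amplitude) drive on a
        two-level system, from the standard Rabi solution:
        P = Omega^2/(Omega^2 + Delta^2) * sin^2( sqrt(Omega^2 + Delta^2) * t / 2 )."""
        w = math.hypot(rabi, detuning)
        return (rabi / w) ** 2 * math.sin(w * duration / 2.0) ** 2

    rabi = 1.0
    t_pi = math.pi / rabi  # flat pi pulse: Omega * t = pi
    print(inversion_probability(rabi, 0.0, t_pi))  # → 1.0 (complete inversion on resonance)
    print(inversion_probability(rabi, 0.2, t_pi))  # a systematic detuning degrades it
    ```

    This quadratic loss of fidelity with detuning is exactly the sensitivity to systematic frequency shifts that the invariant-engineered protocols in the abstract are designed to suppress.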

  15. Quantification of landfill methane using modified Intergovernmental Panel on Climate Change's waste model and error function analysis.

    PubMed

    Govindan, Siva Shangari; Agamuthu, P

    2014-10-01

    Waste management can be regarded as a cross-cutting environmental 'mega-issue'. Sound waste management practices support the provision of basic needs for general health, such as clean air, clean water and a safe supply of food. In addition, climate change mitigation efforts can be achieved through reduction of greenhouse gas emissions from waste management operations, such as landfills. Landfills generate landfill gas, especially methane, as a result of anaerobic degradation of the degradable components of municipal solid waste. Evaluating the mode of generation and collection of landfill gas has posed a challenge over time. Scientifically, landfill gas generation rates are presently estimated using numerical models. In this study the Intergovernmental Panel on Climate Change's Waste Model is used to estimate the methane generated from a Malaysian sanitary landfill. Key parameters of the model, which are the decay rate and degradable organic carbon, are analysed in two different approaches; the bulk waste approach and the waste composition approach. The model is later validated using error function analysis, and the optimum decay rate and degradable organic carbon for both approaches were also obtained. The best fitting values for the bulk waste approach are a decay rate of 0.08 y(-1) and a degradable organic carbon value of 0.12; for the waste composition approach the decay rate was found to be 0.09 y(-1) and the degradable organic carbon value 0.08. From this validation exercise, the estimated error was reduced by 81% and 69% for the bulk waste and waste composition approach, respectively. In conclusion, this type of modelling could constitute a sensible starting point for landfills to introduce careful planning for efficient gas recovery in individual landfills.
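
    A minimal first-order-decay sketch in the spirit of the IPCC waste model, seeded with the abstract's best-fit bulk-waste parameters (k = 0.08 y⁻¹, DOC = 0.12); the remaining factors are IPCC-style placeholder values, not taken from the study:

    ```python
    import math

    def methane_generation(mass_tonnes, years_since_deposit, k=0.08, doc=0.12,
                           doc_f=0.5, mcf=1.0, f=0.5):
        """First-order decay (FOD) sketch: a deposit of waste generates methane
        at a rate Q(t) = k * L0 * M * exp(-k * t), where L0 is the methane
        generation potential per tonne of waste.  k and doc follow the
        abstract's bulk-waste fit; doc_f, mcf and f are placeholder defaults.

        Returns tonnes of CH4 per year generated by one year's deposit."""
        l0 = doc * doc_f * mcf * f * (16.0 / 12.0)  # t CH4 per t waste
        return k * l0 * mass_tonnes * math.exp(-k * years_since_deposit)
    ```

    Summing this expression over all historical deposit years gives the landfill's total annual generation, which is the quantity fitted against observations in the error function analysis.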

  16. Fracture frequency and longevity of fractured resin composite, polyacid-modified resin composite, and resin-modified glass ionomer cement class IV restorations: an up to 14 years of follow-up.

    PubMed

    van Dijken, Jan W V; Pallesen, Ulla

    2010-04-01

    The aim of this study was to evaluate the fracture frequency and longevity of fractured class IV resin composite (RC), polyacid-modified resin composite (compomer; PMRC), and resin-modified glass ionomer cement (RMGIC) restorations in a longitudinal long-term follow-up. Eighty-five class IV RC (43: Pekafil), PMRC (24: Dyract (D), Hytac (H)), and RMGIC (18: Fuji II LC (F), Photac Fil (P)) restorations were placed in ongoing longitudinal follow-ups in 45 patients (mean age 54.5 years). The restorations were evaluated during 14 years by slightly modified USPHS criteria at yearly recalls especially for their fracture behavior. For all restorations, 36.5% were fractured, with a Kaplan-Meier (KM) estimate of 8.8 years (standard error (SE) 0.5, confidence interval (CI) 7.9-9.8). The number of fractures per material was 11 RC (25.6%; KM 9.9 years, CI 8.7-11.0), 13 PMRC (54.2%; D 66.6%; H 50.0%; KM 7.5 years, CI 5.8-9.2), and seven RMGIC (36.5%; F 22.2%, P 71.4%; KM 6.9 years, CI 7.9-9.8). Significant differences were seen between RC and PMRC (p = 0.043). A significant higher fracture rate was observed in teeth 12 + 22 compared to teeth 11 + 21. No significant differences were observed between male and female patients. Restorations in bruxing patients (45) showed 22 fractures (KM 8 years; CI 6.9-9.3) and in non-bruxing patients (39) nine fractures (KM 9.9 years, CI 8.7-11.1; p = 0.017). With regard to the longevity of the replaced failed restorations, for RC, the mean age was 4.5 years; for PMRC, 4.3 years; and for RMGIC, 3.3 years. It can be concluded that fracture was the main reason for failure of class IV restorations. An improved longevity was observed for class IV restorations compared to those presented in earlier studies. RC restorations showed the lowest failure frequency and the highest longevity.

  17. Frequency Dependent Harmonic Powers in a Modified Uni-Traveling Carrier (MUTC) Photodetector

    DTIC Science & Technology

    2017-01-27

    null for the sum and difference frequencies IMD2 powers. The displacement current depends on the changes of electric field in the intrinsic region. When... capacitance in the device as a function of bias. MHz modulation. The displacement current is larger in the intrinsic absorption region, where the electric field...

  18. Impact of Primary Spherical Aberration, Spatial Frequency and Stiles Crawford Apodization on Wavefront determined Refractive Error: A Computational Study

    PubMed Central

    Xu, Renfeng; Bradley, Arthur; Thibos, Larry N.

    2013-01-01

    Purpose We tested the hypothesis that pupil apodization is the basis for central pupil bias of spherical refractions in eyes with spherical aberration. Methods We employed Fourier computational optics in which we vary spherical aberration levels, pupil size, and pupil apodization (Stiles Crawford Effect) within the pupil function, from which point spread functions and optical transfer functions were computed. Through-focus analysis determined the refractive correction that optimized retinal image quality. Results For a large pupil (7 mm), as spherical aberration levels increase, refractions that optimize the visual Strehl ratio mirror refractions that maximize high spatial frequency modulation in the image and both focus a near paraxial region of the pupil. These refractions are not affected by Stiles Crawford Effect apodization. Refractions that optimize low spatial frequency modulation come close to minimizing wavefront RMS, and vary with level of spherical aberration and Stiles Crawford Effect. In the presence of significant levels of spherical aberration (e.g. C40 = 0.4 µm, 7mm pupil), low spatial frequency refractions can induce −0.7D myopic shift compared to high SF refraction, and refractions that maximize image contrast of a 3 cycle per degree square-wave grating can cause −0.75D myopic drift relative to refractions that maximize image sharpness. Discussion Because of small depth of focus associated with high spatial frequency stimuli, the large change in dioptric power across the pupil caused by spherical aberration limits the effective aperture contributing to the image of high spatial frequencies. Thus, when imaging high spatial frequencies, spherical aberration effectively induces an annular aperture defining that portion of the pupil contributing to a well-focused image. As spherical focus is manipulated during the refraction procedure, the dimensions of the annular aperture change. Image quality is maximized when the inner radius of the induced

  19. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single frequency Sun prediction equations at the population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods appears insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
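
    The Mean Absolute Percentage Error used for the comparison is simple to state; a sketch with hypothetical TBW values (not the study's data):

    ```python
    def mean_absolute_percentage_error(predicted, reference):
        """MAPE between predicted TBW values and a reference measurement
        (e.g. dilution-based), in percent."""
        return 100.0 * sum(abs(p - r) / abs(r)
                           for p, r in zip(predicted, reference)) / len(reference)

    # Hypothetical TBW predictions (litres) vs reference values
    print(mean_absolute_percentage_error([40.0, 33.0], [41.0, 30.0]))
    ```

    Unlike correlation or concordance coefficients, MAPE scores each individual prediction, which is why it exposed the small BIS advantage that the population-level statistics did not.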

  20. Modified High Frequency Radial Spin Wave Mode Spectrum in a Chirality-Controlled Nanopillar

    NASA Astrophysics Data System (ADS)

    Kolthammer, J. E.; Rudge, J.; Choi, B. C.; Hong, Y. K.

    2016-09-01

    Circular magnetic spin valve nanopillars in a dual vortex configuration have dynamic characteristics strongly dependent on the interlayer dipole coupling. We report here on the frequency domain properties of such nanopillars obtained by micromagnetic simulations. After the free layer is chirality switched with spin transfer torque, a radial spin wave eigenmode spectrum forms in the free layer with unusually large edge amplitude. The structure of these modes indicates a departure from the magnetostatic processes typically observed experimentally and treated analytically in low aspect ratio isolated disks. Our findings give new details of dynamic chirality control and relaxation in nanopillars and raise potential signatures for experiments.

  1. Classification of radiological errors in chest radiographs, using support vector machine on the spatial frequency features of false-negative and false-positive regions

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.

    2011-03-01

    Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on the spatial frequency features extracted from the local background of selected regions. Background: The majority of unreported pulmonary nodules are visually detected but not recognized, as shown by the prolonged dwell time values at false-negative regions. Similarly, overestimated nodule locations capture substantial amounts of foveal attention. Spatial frequency properties of selected local backgrounds are correlated with human observer responses, either in terms of accuracy in indicating abnormality position or in the precision of visual sampling of the medical images. Methods: Seven radiologists participated in eye tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The most-dwelled-upon locations were identified and subjected to spatial frequency (SF) analysis. The image-based features of the selected ROIs were extracted with the un-decimated Wavelet Packet Transform. An analysis of variance was run to select SF features, and an SVM schema was implemented to classify false-negative and false-positive regions among all ROIs. Results: A relatively high overall accuracy was obtained for each individually developed Wavelet-SVM algorithm, with over 90% average correct ratio for error recognition from all prolonged dwell locations. Conclusion: These preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological errors with an SVM. The work is still in progress and not all analytical procedures have been completed, which might affect the specificity of the algorithm.

  2. Response prediction for modified mechanical systems based on in-situ frequency response functions: Theoretical and numerical studies

    NASA Astrophysics Data System (ADS)

    Wang, Zengwei; Zhu, Ping

    2017-07-01

    In this paper, a general method using in-situ frequency response functions (FRFs) is proposed for predicting operational responses of modified mechanical systems. In this method, responses of modified mechanical systems can be calculated using the delta dynamic stiffness matrix, the subsystem FRF matrix and responses of the original system, even though the operational forces are unknown. The proposed method is derived theoretically in a general form as well as for six specific scenarios. The six scenarios correspond respectively to: (a) modifications made to the mass; (b) changes made to the stiffness of the link between a degree-of-freedom (DOF) and the ground; (c) a fully rigid link between a DOF and the ground; (d) changes made to the stiffness of the link between two DOFs; (e) a null link between two DOFs; and (f) a fully rigid link between two DOFs. It is found that for scenarios (a), (b) and (d) the delta dynamic stiffness matrix is required when predicting responses of the modified mechanical system, but for scenarios (c), (e) and (f) no delta dynamic stiffness matrix is required, and the new system responses can be calculated solely from the subsystem FRF matrix and responses of the original system. The proposed method is illustrated by a numerical example and validated using data generated by finite element simulations. The work in this paper will be beneficial to solving vibration and noise engineering problems.
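
The paper derives the method in general matrix form; purely as an illustrative sketch, the single-DOF scalar analogue of scenario (a), a mass modification, is shown below, where the modified response follows from the original FRF and response via x_mod = x_orig / (1 + ΔD·H) (the scalar form of (I + H·ΔD)⁻¹·x_orig). All system parameters are invented:

```python
def receptance(m, c, k, w):
    """Receptance FRF H(ω) = x/f of a single-DOF mass-damper-spring system."""
    return 1.0 / complex(k - m * w**2, c * w)

def modified_response(H, x_orig, delta_D):
    """Response of the modified system from the original FRF and response:
    x_mod = x_orig / (1 + ΔD·H), without knowing the operational force."""
    return x_orig / (1.0 + delta_D * H)

m, c, k = 2.0, 4.0, 8.0e4   # kg, N·s/m, N/m (hypothetical SDOF system)
w = 150.0                   # excitation frequency, rad/s
f = 10.0                    # N; unknown in practice, used here only to fabricate x_orig
H = receptance(m, c, k, w)
x_orig = H * f              # "measured" operational response

dm = 0.5                    # added mass (scenario (a))
delta_D = -dm * w**2        # delta dynamic stiffness for a mass change
x_pred = modified_response(H, x_orig, delta_D)

# Check against direct computation on the modified system
x_direct = receptance(m + dm, c, k, w) * f
assert abs(x_pred - x_direct) < 1e-12
```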

  3. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors--Air Gap Effect.

    PubMed

    Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-04-18

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content, and its impact on the frequency or time domain sensor response, must be characterized. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive materials. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling.
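
As a sketch of the classical travel-time route to water content mentioned above (not the paper's LRM or onset-method specifics), one converts the two-way travel time along the probe into an apparent relative permittivity and then applies the standard Topp et al. (1980) empirical equation; the probe length and travel time below are hypothetical:

```python
def apparent_permittivity(travel_time_s, probe_length_m):
    """Relative apparent permittivity from the two-way travel time along the probe."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return (c * travel_time_s / (2.0 * probe_length_m)) ** 2

def topp_water_content(eps_r):
    """Topp et al. (1980) empirical equation: volumetric water content from εr."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3)

# Hypothetical 0.15 m probe and a 4.0 ns two-way travel time
eps = apparent_permittivity(4.0e-9, 0.15)
theta = topp_water_content(eps)
print(f"εr = {eps:.1f}, θ = {theta:.3f} m³/m³")
```

An air gap lowers the effective permittivity seen by the probe, which is why even submillimeter gaps bias θ strongly.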

  4. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    PubMed Central

    Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content, and its impact on the frequency or time domain sensor response, must be characterized. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive materials. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling. PMID:27096865

  5. Cognitive training modifies frequency EEG bands and neuropsychological measures in Rett syndrome.

    PubMed

    Fabio, Rosa Angela; Billeci, Lucia; Crifaci, Giulia; Troise, Emilia; Tortorella, Gaetano; Pioggia, Giovanni

    2016-01-01

    Rett syndrome (RS) is a childhood neurodevelopmental disorder characterized by a primary disturbance in neuronal development. Neurological abnormalities in RS are reflected in several behavioral and cognitive impairments such as stereotypies, loss of speech and hand skills, gait apraxia, irregular breathing with hyperventilation while awake, and frequent seizures. Cognitive training can enhance both neuropsychological and neurophysiological parameters. The aim of this study was to investigate whether behaviors and brain activity were modified by training in RS. The modifications were assessed in two phases: (a) after a short-term training (STT) session, i.e., after 30 min of training, and (b) after long-term training (LTT), i.e., after 5 days of training. Thirty-four girls with RS were divided into two groups: a training group (21 girls) who underwent the LTT and a control group (13 girls) that did not. Gaze and quantitative EEG (QEEG) data were recorded during the administration of the tasks, using a gold-standard eye tracker and wearable EEG equipment. Results suggest that the participants in the STT task showed a habituation effect, decreased beta activity and increased right asymmetry. The participants in the LTT task looked faster and longer at the target, and showed increased beta activity and decreased theta activity, while a leftward asymmetry was re-established. The overall result of this study indicates a positive effect of long-term cognitive training on brain and behavioral parameters in subjects with RS.

  6. Low-Frequency Tropical Pacific Sea-Surface Temperature over the Past Millennium: Reconstruction and Error Estimates

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Cobb, K. M.; Mann, M. E.; Rutherford, S.

    2008-12-01

    Tropical Pacific sea-surface temperatures can organize climate variability at near-global scales, and since there is wide disagreement over their projected course under greenhouse forcing, it is of considerable interest to understand their evolution over the past millennium. We use the most recent high-resolution proxy data from ENSO-sensitive regions, together with the RegEM climate field reconstruction technique [Schneider, 2001, Rutherford et al, 2003, Mann et al, 2007], to extend the history of the NINO3 index at decadal scales through 1000 A.D. We present a new algorithm implementing an objective regularization technique that preserves low-frequency variance in RegEM (ITTLS). Synthetic SST and pseudoproxy tests using a realistic ENSO model are used to test the accuracy of estimated low-frequency tropical climate variability with this method. The reconstruction shows important decadal and centennial variability throughout the millennium, in the context of which the twentieth century does not appear anomalous. We analyze the sensitivity of the reconstruction to the inclusion of various key proxy timeseries, target SST datasets, and subjective procedural choices, with a particular focus on representing uncertainties. By some measures, the reconstruction is found skillful back to 1500 A.D., but increasing uncertainties in earlier times may limit our ability to test proposed mechanisms of mediaeval climate variability.

  7. Evaluation of a Modified Italian European Prospective Investigation into Cancer and Nutrition Food Frequency Questionnaire for Individuals with Celiac Disease.

    PubMed

    Mazzeo, Teresa; Roncoroni, Leda; Lombardo, Vincenza; Tomba, Carolina; Elli, Luca; Sieri, Sabina; Grioni, Sara; Bardella, Maria T; Agostoni, Carlo; Doneda, Luisa; Brighenti, Furio; Pellegrini, Nicoletta

    2016-11-01

    To date, it is unclear whether individuals with celiac disease following a gluten-free (GF) diet for several years have adequate intake of all recommended nutrients. Lack of a food frequency questionnaire (FFQ) for individuals with celiac disease could be partly responsible for this still-debated issue. The aim of the study was to evaluate the performance of a modified European Prospective Investigation into Cancer and Nutrition (EPIC) FFQ in estimating nutrient and food intake in a celiac population. In a cross-sectional study, the dietary habits of individuals with celiac disease were reported using a modified Italian EPIC FFQ and were compared with a 7-day weighed food record as a reference method. A total of 200 individuals with histologically confirmed celiac disease were enrolled in the study between October 2012 and August 2014 at the Center for Prevention and Diagnosis of Celiac Disease (Milan, Italy). Nutrient and food category intake were calculated by 7-day weighed food record using an Italian food database integrated with the nutrient composition of 60 GF foods and the modified EPIC FFQ, in which 24 foods were substituted with GF foods comparable in energy and carbohydrate content. An evaluation of the modified FFQ compared with the 7-day weighed food record in assessing the reported intake of nutrients and food groups was conducted using Spearman's correlation coefficients and weighted κ. One hundred individuals completed the study. The Spearman's correlation coefficients between FFQ and 7-day weighed food record ranged from .13 to .73 for nutrients and from .23 to .75 for food groups. A moderate agreement, defined as a weighted κ value of .40 to .60, was obtained for 30% of the analyzed nutrients, and 40% of the nutrients showed values between .30 and .40. The weighted κ exceeded .40 for 60% of the 15 analyzed food groups. The modified EPIC FFQ demonstrated moderate congruence with a weighed food record in ranking individuals by dietary intakes

  8. High-resolution differential mode delay measurement for a multimode optical fiber using a modified optical frequency domain reflectometer.

    PubMed

    Ahn, T-J; Kim, D

    2005-10-03

    A novel differential mode delay (DMD) measurement technique for a multimode optical fiber, based on optical frequency domain reflectometry (OFDR), has been proposed. We have obtained a high-resolution DMD value of 0.054 ps/m for a commercial multimode optical fiber with a length of 50 m by using a modified OFDR that employs a tunable external-cavity laser and a Mach-Zehnder interferometer in place of the usual Michelson interferometer. We have also compared the OFDR measurement results with those obtained using a traditional time-domain measurement method. The DMD resolution with our proposed OFDR technique is more than an order of magnitude better than that obtainable with a conventional time-domain method.

  9. Low-Frequency Tropical Pacific Sea-Surface Temperature over the Past Millennium: Reconstruction and Error Estimates

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Cobb, K.; Mann, M. E.; Rutherford, S. D.; Wittenberg, A. T.

    2009-12-01

    Since surface conditions over the tropical Pacific can organize climate variability at near-global scales, and since there is wide disagreement over their projected course under greenhouse forcing, it is of considerable interest to reconstruct their low-frequency evolution over the past millennium. To this end, we make use of the hybrid RegEM climate reconstruction technique (Mann et al. 2008; Schneider 2001), which aims to reconstruct decadal and longer-scale variations of sea-surface temperature (SST) from an array of climate proxies. We first assemble a database of published and new high-resolution proxy data from ENSO-sensitive regions, screened for significant correlation with a common ENSO metric (the NINO3 index). Proxy observations come primarily from coral, speleothem, marine and lake sediment, and ice core sources, as well as long tree-ring chronologies. The hybrid RegEM methodology is then validated within a pseudoproxy context using two coupled general circulation model simulations of the past millennium's climate: one using the NCAR CSM1.4 model, the other the GFDL CM2.1 model (Ammann et al. 2007; Wittenberg 2009). Validation results are found to be sensitive to the ratio of interannual to lower-frequency variability, with poor reconstruction skill for CM2.1 but good skill for CSM1.4. The latter features prominent changes in NINO3 at decadal-to-centennial timescales, which the network and method detect relatively easily. In contrast, the unforced CM2.1 NINO3 is dominated by interannual variations, and its long-term oscillations are more difficult to reconstruct. These two limit cases bracket the observed NINO3 behavior over the historical period. We then apply the method to the proxy observations and extend the decadal-scale history of tropical Pacific SSTs over the past millennium, analyzing the sensitivity of such reconstruction to the inclusion of various key proxy timeseries and details of the statistical analysis, emphasizing metrics of uncertainty

  10. Frequency Analyses Can Be Improved by a Modified t-test in Sample-based Preclinical Efficacy Studies.

    PubMed

    Halperin, Gideon; Klausner, Ziv

    2013-01-01

    Sample-based preclinical drug efficacy studies compare frequencies (proportions) or incidences of successes within respective samples of test and control groups. The word success in principle refers to a protected (e.g., due to vaccination), recovered, or surviving animal, depending on the particular experiment. We introduce here a modified t-test for two independent groups, aimed at statistical analysis of the difference between frequencies of successes in sample-based preclinical studies. The test is applicable whenever the study is based on repeated replicate experiments, as required by certain procedures such as validation. Such experiments are based on a constant drug dose and performed under identical conditions and protocol. The proposed test combines the computational rules of the t-test for two independent groups and analysis of variance. In the initial steps, incidences are transformed to proportions, and the variance between proportions in samples of the j-th group, s_p(j)^2, is then transformed into a theoretical weighted variance within the i-th repetition (sample) of the j-th group, s_(i,j)^2. The variance of proportions in samples of the size of the whole group, SE_j^2, is then calculated. The t-statistic is computed according to the rules of the t-test for two independent groups. Significance is calculated using (N_1 - 1) + (N_2 - 1) degrees of freedom, where N_j denotes the total number of animals in the j-th group. The proposed model offers an important advantage over incidence or proportion distribution models, such as the chi-square or the normal approximation of the binomial distribution, respectively, because it considers variance between replicate experiments. It moreover offers important flexibility by limiting the requirement for identical sample sizes only to samples within the control or test group.
A difference between groups in sample sizes, number of samples, or both, preventing application of block designs or the standard formats of the t-test, may still
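
The abstract leaves some computational details open; the sketch below is one plausible reading (an assumption, not the authors' exact formulation), estimating each group's SE^2 as the variance of the group mean proportion from the between-sample variance, with (N_1 - 1) + (N_2 - 1) degrees of freedom:

```python
import statistics

def group_stats(sample_successes, sample_size):
    """Per-group summary: each entry of sample_successes is the success count
    in one replicate sample of `sample_size` animals (identical within a group)."""
    props = [s / sample_size for s in sample_successes]
    mean_p = statistics.mean(props)
    var_between = statistics.variance(props)        # s_p(j)^2 between replicate samples
    n_total = sample_size * len(sample_successes)   # N_j, all animals in the group
    se2 = var_between / len(sample_successes)       # variance of the group mean proportion
    return mean_p, se2, n_total

def modified_t(successes_1, n1, successes_2, n2):
    """t statistic and degrees of freedom (N_1 - 1) + (N_2 - 1)."""
    p1, se2_1, N1 = group_stats(successes_1, n1)
    p2, se2_2, N2 = group_stats(successes_2, n2)
    t = (p1 - p2) / (se2_1 + se2_2) ** 0.5
    return t, (N1 - 1) + (N2 - 1)

# Hypothetical replicate experiments: test group (3 samples of 10 animals)
# vs. control group (4 samples of 8 animals)
t, df = modified_t([8, 9, 7], 10, [3, 4, 2, 3], 8)
print(f"t = {t:.2f}, df = {df}")
```

Note how unequal sample sizes between groups are tolerated, as the abstract emphasizes, while sizes within a group are kept identical.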

  11. Photocatalytic characteristic and photodegradation kinetics of toluene using N-doped TiO2 modified by radio frequency plasma.

    PubMed

    Shie, Je-Lueng; Lee, Chiu-Hsuan; Chiou, Chyow-San; Chen, Yi-Hung; Chang, Ching-Yuan

    2014-01-01

    This study investigates the feasibility of plasma surface modification of photocatalysts for the removal of toluene from indoor environments. N-doped TiO2 is prepared by precipitation, calcined using a muffle furnace (MF), and modified by radio frequency (RF) plasma at different temperatures, with light sources from a visible light lamp (VLL), a white light-emitting diode (WLED) and an ultraviolet light-emitting diode (UVLED). The operation parameters and influential factors are addressed for characteristic analysis and photo-decomposition examination. Furthermore, related kinetic models are established and used to simulate the experimental data. The characteristic analysis results show that the RF plasma-calcination method effectively enhanced the Brunauer-Emmett-Teller surface area of the modified photocatalysts. In the elemental analysis, the mass percentage of N for the RF-modified photocatalyst is six times larger than that of the MF-calcined one. The aerodynamic diameters of the RF-modified photocatalyst are all smaller than those of MF. Photocatalytic decomposition of toluene is elucidated according to the Langmuir-Hinshelwood model. Decomposition efficiencies (η) of toluene for RF-calcined methods are all higher than those of commercial TiO2 (P25). Reaction kinetics of photo-decomposition reactions using RF-calcined methods with WLED are proposed. A comparison of the simulation results with experimental data indicates good agreement. All the results provide useful information and design specifications. Thus, this study shows the feasibility and potential use of plasma-modified photocatalysts with LED light sources in photocatalysis.

  12. High frequency electromagnetic properties of interstitial-atom-modified Ce2Fe17NX and its composites

    NASA Astrophysics Data System (ADS)

    Li, L. Z.; Wei, J. Z.; Xia, Y. H.; Wu, R.; Yun, C.; Yang, Y. B.; Yang, W. Y.; Du, H. L.; Han, J. Z.; Liu, S. Q.; Yang, Y. C.; Wang, C. S.; Yang, J. B.

    2014-07-01

    The magnetic and microwave absorption properties of the interstitial-atom-modified intermetallic compound Ce2Fe17NX have been investigated. The Ce2Fe17NX compound shows planar anisotropy with a saturation magnetization of 1088 kA/m at room temperature. The Ce2Fe17NX-paraffin composite with a mass ratio of 1:1 exhibits a permeability of μ′ = 2.7 at low frequency, together with a reflection loss of -26 dB at 6.9 GHz for a thickness of 1.5 mm and -60 dB at 2.2 GHz for a thickness of 4.0 mm. It was found that this composite increases the Snoek limit and exhibits both high working frequency and high permeability due to its high saturation magnetization and the high ratio of the c-axis anisotropy field to the basal-plane anisotropy field. Hence, this composite is a candidate for a high-performance thin-layer microwave absorber.
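
Reflection-loss figures like -26 dB at 6.9 GHz for a 1.5 mm layer are conventionally computed with the single-layer, metal-backed transmission-line model; the sketch below implements that standard model with invented complex μr and εr (the paper's measured material parameters are not reproduced here):

```python
import cmath
import math

def reflection_loss_dB(mu_r, eps_r, f_hz, d_m):
    """Reflection loss of a single-layer, metal-backed absorber
    (standard transmission-line model, normalized input impedance)."""
    c = 299_792_458.0
    arg = 1j * 2 * math.pi * f_hz * d_m / c * cmath.sqrt(mu_r * eps_r)
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(arg)
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

# Hypothetical complex permeability/permittivity for a composite at 6.9 GHz
mu_r  = 2.0 - 1.1j
eps_r = 9.0 - 2.0j
print(f"RL = {reflection_loss_dB(mu_r, eps_r, 6.9e9, 1.5e-3):.1f} dB")
```

Sweeping `d_m` and `f_hz` with measured μr(f) and εr(f) reproduces the thickness-dependent absorption minima described in the abstract.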

  13. Modified carbon fiber/magnetic graphene/epoxy composites with synergistic effect for electromagnetic interference shielding over broad frequency band.

    PubMed

    Wu, Jiaming; Ye, Zhengmao; Ge, Heyi; Chen, Juan; Liu, Wenxiu; Liu, Zhifang

    2017-11-15

    The study of electromagnetic interference (EMI) shielding materials is a long-standing project in civil and military areas. In this work, a novel kind of shielding material was fabricated and its shielding effectiveness (SE) over a broad frequency band was investigated. The epoxy (EP) resin was reinforced with reduced graphene oxide-coated carbon fiber (rGO-CF, GCF) and rGO nanohybrids decorated with Fe3O4 nanoparticles (magnetic graphene, MG). With only 0.5 wt% of GCF and 9 wt% of MG, the GCF/MG3/EP composite exhibited excellent SE (>30 dB over 8.2-26.5 GHz) and a maximum value of 51.1 dB at 26.5 GHz, a 31.7% increase over that of GCF/EP. The 3-aminopropyltriethoxysilane (APTES)-modified MG (NMG) gained better dispersion in the resin and could further increase the absorbing properties of the composites. The synergistic effect between GCF and MG, arising from the 3D interpenetrating structure of the ternary nanocomposite, was favorable for highly efficient shielding over a wide frequency range. These composites show promise in the field of lightweight, high-efficiency EMI shielding. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Cardiac baroreflex gain is frequency dependent: insights from repeated sit-to-stand maneuvers and the modified Oxford method.

    PubMed

    Horsman, Helen M; Peebles, Karen C; Galletly, Duncan C; Tzeng, Yu-Chieh

    2013-07-01

    Cardiac baroreflex gain is usually quantified as the reflex alteration in heart rate during changes in blood pressure, without considering the effect of the rate of change in blood pressure on the estimated gain. This study sought to (i) characterize baroreflex gain as a function of blood pressure oscillation frequency using a repeated sit-to-stand method and (ii) compare baroreflex gain values obtained using the sit-to-stand method against the modified Oxford method. Fifteen healthy individuals underwent the repeated sit-to-stand method, in which blood pressure oscillations were driven at 0.03, 0.05, 0.07, and 0.1 Hz. Sixteen healthy participants underwent the sit-to-stand and modified Oxford methods to examine their agreement. Sit-to-stand baroreflex gain was highest at 0.05 Hz (8.8 ± 3.2 ms·mmHg⁻¹) and lowest at 0.1 Hz (5.8 ± 3.0 ms·mmHg⁻¹). Baroreflex gains at 0.03 Hz (7.7 ± 3.0 ms·mmHg⁻¹) and 0.07 Hz (7.5 ± 3.3 ms·mmHg⁻¹) were not different from the gain at 0.05 Hz. There was moderate correlation between phenylephrine gain and sit-to-stand gain (r ranged from 0.52 to 0.75; all frequencies, p < 0.05), but no correlation between sodium nitroprusside gain and sit-to-stand gain (r ranged from -0.07 to 0.22; all p > 0.05). Bland-Altman analysis of phenylephrine gain and sit-to-stand gain showed poor agreement and a positive proportional bias. These results show that baroreflex gains derived from these two methods cannot be used interchangeably. Furthermore, cardiac baroreflex gain is frequency dependent between 0.03 Hz and 0.1 Hz, which challenges the conventional practice of summarizing baroreflex gain as a single number.

  15. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  16. Reducing epistemic errors in water quality modelling through high-frequency data and stakeholder collaboration: the case of an industrial spill

    NASA Astrophysics Data System (ADS)

    Krueger, Tobias; Inman, Alex; Paling, Nick

    2014-05-01

    Catchment management, as driven by legislation such as the EU WFD or grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state-of-the-art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error, an industrial spill ignored in a water quality model, demonstrate the bias of the resulting model simulations, and show how the error was discovered somewhat incidentally through auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new approach to managing water resources in a more decentralised and collaborative way. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and Faecal Coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices.
In the process of updating the model for the hydrological years 2008-2012 an over-prediction of the annual average P concentration by the model was found at

  17. Errors of Omission in English-Speaking Children's Production of Plurals and the Past Tense: The Effects of Frequency, Phonology, and Competition.

    PubMed

    Matthews, Danielle E; Theakston, Anna L

    2006-11-12

    How do English-speaking children inflect nouns for plurality and verbs for the past tense? We assess theoretical answers to this question by considering errors of omission, which occur when children produce a stem in place of its inflected counterpart (e.g., saying "dress" to refer to 5 dresses). A total of 307 children (aged 3;11-9;9) participated in 3 inflection studies. In Study 1, we show that errors of omission occur until the age of 7 and are more likely with both sibilant regular nouns (e.g., dress) and irregular nouns (e.g., man) than regular nouns (e.g., dog). Sibilant nouns are more likely to be inflected if they are high frequency. In Studies 2 and 3, we show that similar effects apply to the inflection of verbs and that there is an advantage for "regular-like" irregulars whose inflected form, but not stem form, ends in d/t. The results imply that (a) stems and inflected forms compete for production and (b) children generalize both product-oriented and source-oriented schemas when learning about inflectional morphology.

  18. Correction of proton resonance frequency shift MR-thermometry errors caused by heat-induced magnetic susceptibility changes during high intensity focused ultrasound ablations in tissues containing fat.

    PubMed

    Baron, Paul; Deckers, Roel; de Greef, Martijn; Merckel, Laura G; Bakker, Chris J G; Bouwman, Job G; Bleys, Ronald L A W; van den Bosch, Maurice A A J; Bartels, Lambertus W

    2014-12-01

    In this study, we aim to demonstrate the sensitivity of proton resonance frequency shift (PRFS)-based thermometry to heat-induced magnetic susceptibility changes, and to present and evaluate a model-based correction procedure. To demonstrate the expected temperature effect, field disturbances during high intensity focused ultrasound (HIFU) sonications were monitored in breast fat samples with a three-dimensional (3D) gradient echo sequence. To evaluate the correction procedure, the interface of tissue-mimicking ethylene glycol gel and fat was sonicated. During sonication, the temperature was monitored with a 2D dual flip angle multi-echo gradient echo sequence, allowing for PRFS-based relative and referenced temperature measurements in the gel and T1-based temperature measurements in fat. The PRFS-based measurement in the gel was corrected by minimizing the discrepancy between the observed 2D temperature profile and the profile predicted by a 3D thermal model. The HIFU sonications of breast fat resulted in a magnetic field disturbance that completely disappeared after cooling. With the correction method, the 5th-to-95th percentile interval of the PRFS thermometry error in the gel decreased from 3.8°C before correction to 2.0-2.3°C after correction. This study has shown the effects of magnetic susceptibility changes induced by heating of breast fatty tissue samples. The resultant errors can be reduced by the use of a model-based correction procedure. © 2013 Wiley Periodicals, Inc.
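
The underlying PRFS relation, standard in the field and distinct from the paper's susceptibility-correction model, converts a gradient-echo phase difference into a temperature change via ΔT = Δφ / (2π·γ̄·α·B0·TE), with α ≈ -0.01 ppm/°C; the scanner parameters and phase value below are hypothetical:

```python
import math

GAMMA_HZ_PER_T = 42.576e6   # proton gyromagnetic ratio / 2π, Hz/T
ALPHA = -0.01e-6            # PRF thermal coefficient, -0.01 ppm/°C as a fraction

def prfs_delta_T(delta_phi_rad, b0_tesla, te_s):
    """Temperature change from the phase difference of two gradient-echo images."""
    return delta_phi_rad / (2 * math.pi * GAMMA_HZ_PER_T * ALPHA * b0_tesla * te_s)

# Hypothetical: 1.5 T scanner, TE = 20 ms, measured phase change of -0.5 rad
print(f"ΔT = {prfs_delta_T(-0.5, 1.5, 0.02):.1f} °C")
```

Because ΔT is inferred from phase, any heat-induced susceptibility change that shifts the local field adds a spurious phase term, which is exactly the error source this paper corrects.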

  19. Detection of inborn errors of metabolism utilizing GC-MS urinary metabolomics coupled with a modified orthogonal partial least squares discriminant analysis.

    PubMed

    Yang, Qin; Lin, Shan-Shan; Yang, Jiang-Tao; Tang, Li-Juan; Yu, Ru-Qin

    2017-04-01

    GC-MS urinary metabolomic analysis coupled with chemometrics is used to detect inborn errors of metabolism (IEMs), which are genetic disorders causing severe mental and physical debility and even sudden infant death. Orthogonal partial least squares discriminant analysis (OPLS-DA) is an efficient multivariate statistical method for analyzing metabolite profiling data. However, performance degradation is often observed for OPLS-DA due to the increasing size and complexity of metabolomic datasets. In this study, hybrid particle swarm optimization (HPSO) is employed to modify OPLS-DA by simultaneously selecting the optimal variable subset, the associated weights and the appropriate number of orthogonal components, constructing a new algorithm called HPSO-OPLSDA. Investigating two IEMs, methylmalonic acidemia (MMA) and isovaleric acidemia (IVA), the results suggest that HPSO-OPLSDA can significantly outperform OPLS-DA in discriminating between disease samples and healthy controls. Moreover, the main discriminative metabolites are identified by HPSO-OPLSDA to aid the clinical diagnosis of IEMs, including methylmalonic-2, methylcitric-4(1) and 3-OH-propionic-2 for MMA and isovalerylglycine-1 for IVA.

  20. Stabilized soliton self-frequency shift and 0.1-PHz sideband generation in a photonic-crystal fiber with an air-hole-modified core.

    PubMed

    Liu, Bo-Wen; Hu, Ming-Lie; Fang, Xiao-Hui; Li, Yan-Feng; Chai, Lu; Wang, Ching-Yue; Tong, Weijun; Luo, Jie; Voronin, Aleksandr A; Zheltikov, Aleksei M

    2008-09-15

    Fiber dispersion and nonlinearity management strategy based on a modification of a photonic-crystal fiber (PCF) core with an air hole is shown to facilitate optimization of PCF components for a stable soliton frequency shift and subpetahertz sideband generation through four-wave mixing. Spectral recoil of an optical soliton by a red-shifted dispersive wave, generated through a soliton instability induced by high-order fiber dispersion, is shown to stabilize the soliton self-frequency shift in a highly nonlinear PCF with an air-hole-modified core against pump power variations. A fiber with a 2.3-microm-diameter core modified with a 0.9-microm-diameter air hole is used to demonstrate a robust soliton self-frequency shift of unamplified 50-fs Ti:sapphire laser pulses to a central wavelength of about 960 nm, which remains insensitive to variations in the pump pulse energy within the range from 60 to at least 100 pJ. In this regime of frequency shifting, intense high- and low-frequency branches of dispersive wave radiation are simultaneously observed in the spectrum of the PCF output. An air-hole-modified-core PCF with appropriate dispersion and nonlinearity parameters is shown to provide efficient four-wave mixing, giving rise to Stokes and anti-Stokes sidebands whose frequency shift relative to the pump wavelength falls within the subpetahertz range, thus offering an attractive source for nonlinear Raman microspectroscopy.

  1. Role of dispersal timing and frequency in annual grass-invaded Great Basin ecosystems: how modifying seeding strategies increases restoration success

    USDA-ARS?s Scientific Manuscript database

    Seed dispersal dynamics strongly affect plant community assembly in restored annual grass-infested ecosystems. Modifying perennial grass seeding rates and frequency may increase perennial grass establishment, yet these impacts have not yet been quantified. To assess these effects, we established a f...

  2. Empathy and error processing.

    PubMed

    Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A

    2010-05-01

    Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.

  3. Design of a W-band Frequency Tripler for Broadband Operation Based on a Modified Equivalent Circuit Model of GaAs Schottky Varistor Diode

    NASA Astrophysics Data System (ADS)

    Chen, Zhenhua; Xu, Jinping

    2013-01-01

    This paper presents the design and experimental results of a W-band frequency tripler built with commercially available planar Schottky varistor diodes (DBES105a, fabricated by UMS, Inc.). The frequency tripler is tunerless, passive, and compact, offering low conversion loss over a broad band. Considering the actual circuit structure, especially the effect of the ambient channel around the diode at millimeter wavelengths, a modified equivalent circuit model for the Schottky diode is developed. This modification improves the accuracy of the magnitude and phase of S21 of the equivalent circuit model. Input and output embedding circuits are designed and optimized according to the corresponding embedding impedances of the modified circuit model of the diode. The circuit of the frequency tripler is fabricated on RT/Rogers 5880 substrate with a thickness of 0.127 mm. Measured conversion loss of the frequency tripler is 14.5 dB with a variation of ±1 dB across the 75 ~ 103 GHz band and 15.5 ~ 19 dB over the frequency range of 103 ~ 110 GHz when driven with an input power of 18 dBm. A maximum output power of 6.8 dBm is achieved at 94 GHz at room temperature. The minimum harmonics suppression is greater than 12 dBc over the 75 ~ 110 GHz band.

  4. New evidence for morphological errors in deep dyslexia.

    PubMed

    Rastle, Kathleen; Tyler, Lorraine K; Marslen-Wilson, William

    2006-05-01

    Morphological errors in reading aloud (e.g., sexist-->sexy) are a central feature of the symptom-complex known as deep dyslexia, and have historically been viewed as evidence that representations at some level of the reading system are morphologically structured. However, it has been proposed (Funnell, 1987) that morphological errors in deep dyslexia are not morphological in nature but are actually a type of visual error that arises when a target word that cannot be read aloud (by virtue of its low imageability and/or frequency) is modified to form a visually similar word that can be read aloud (by virtue of its higher imageability and/or frequency). In the work reported here, the deep dyslexic patient DE read aloud lists of genuinely suffixed words (e.g., killer), pseudosuffixed words (e.g., corner), and words with non-morphological embeddings (e.g., cornea). Results revealed that the morphological status of a word had a significant influence on the production of stem errors (i.e., errors that include the stem or pseudostem of the target): genuinely suffixed words yielded more stem errors than pseudosuffixed words or words with non-morphological embeddings. This effect of morphological status could not be attributed to the relative levels of target and stem imageability and/or frequency. We argue that this pattern of data indicates that apparent morphological errors in deep dyslexic reading are genuinely morphological, and discuss the implications of these errors for theories of deep dyslexia.

  5. Analyses of Rock Size-Frequency Distributions and Morphometry of Modified Hawaiian Lava Flows: Implications for Future Martian Landing Sites

    NASA Technical Reports Server (NTRS)

    Craddock, Robert A.; Golombek, Matthew; Howard, Alan D.

    2000-01-01

    Both the size-frequency distribution and morphometry of rock populations emplaced by a variety of geologic processes in Hawaii indicate that such information may be useful in planning future landing sites on Mars and interpreting the surface geology.

  6. A Modified Differential Coherent Bit Synchronization Algorithm for BeiDou Weak Signals with Large Frequency Deviation.

    PubMed

    Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi

    2017-07-04

    BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, whose frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherent integration and maximum likelihood detection is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, with the differential delay set to a multiple of the bit period to remove the influence of the NH code. Secondly, maximum likelihood detection is used to improve the detection probability for weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at carrier-to-noise ratios (C/N0) of 20~40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method at a frequency deviation of 50 Hz. The algorithm removes the effect of the BeiDou NH code effectively and weakens the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests were conducted. The proposed algorithm is suitable for BeiDou weak-signal bit synchronization with large frequency deviation.
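
The key idea of the differential coherent step can be illustrated with a toy model (not the paper's implementation: the bit rate, sample counts, and carrier offset below are made-up). Multiplying each sample by the conjugate of the sample one bit period earlier cancels the time-varying carrier phase, so bit edges can be found even under a frequency offset that would defeat a plain coherent sum:

```python
import cmath, math, random
random.seed(1)

# Toy baseband: a 1 kbps bit stream (stand-in for the NH-modulated data),
# N samples per bit, with a large residual carrier offset f_err.
N, fs, f_err = 20, 20_000, 350.0   # samples/bit, sample rate (Hz), offset (Hz)
true_edge = 7                      # "unknown" sample offset of the bit edges
bits = [random.choice((-1, 1)) for _ in range(60)]
x = [bits[(n + N - true_edge) // N] * cmath.exp(2j * math.pi * f_err * n / fs)
     for n in range(len(bits) * N - N)]

# Differential product with a one-bit delay: the carrier phase cancels,
# leaving a constant rotation exp(j*2*pi*f_err*N/fs) per sample pair.
d = [x[n] * x[n - N].conjugate() for n in range(N, len(x))]

def edge_metric(d, k):
    """Sum of per-bit |coherent sums| of the differential sequence,
    assuming bit edges fall at sample offset k."""
    total, n0 = 0.0, k
    while n0 + N <= len(d):
        total += abs(sum(d[n0:n0 + N]))
        n0 += N
    return total

best = max(range(N), key=lambda k: edge_metric(d, k))  # recovered bit edge
```

At the correct offset every one-bit block of the differential sequence has a constant phase, so its coherent sum reaches full magnitude; misaligned blocks straddle sign transitions and partially cancel, which is why the metric peaks at the true edge.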

  7. Do predisposing and family background characteristics modify or confound the relationship between drinking frequency and alcohol-related aggression? A study of late adolescent and young adult drinkers.

    PubMed

    Wells, Samantha; Graham, Kathryn; Speechley, Mark; Koval, John J

    2006-04-01

    The present study examined whether predisposing and family background characteristics confounded (common cause/general deviance theory) or modified (conditional/interactive theory) the association between drinking frequency and alcohol-related aggression. A secondary analysis of the US National Longitudinal Survey of Youth was conducted using a composite sample of drinkers, ages 17 to 21, from the 1994, 1996, and 1998 Young Adult surveys (n=602). No evidence of confounding of the relationship between drinking frequency and alcohol-related aggression was found. In addition, predisposing characteristics did not modify the association between drinking frequency and alcohol-related aggression. However, family background variables (mother's education and any poverty) were important explanatory variables for alcohol-related aggression among males, whereas recent aggression (fights at school or work) was an important predictor for females. Overall, lack of support for the conditional/interactive and common cause theories of the alcohol and aggression relationship suggests that alcohol has an independent explanatory role in alcohol-related aggression. In addition, the gender differences found in the present study highlight the need for more gender-focussed research on predictors of alcohol-related aggression, especially among adolescents and young adults.

  8. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  9. Thermal acclimation and thyroxine treatment modify the electric organ discharge frequency in an electric fish, Apteronotus leptorhynchus.

    PubMed

    Dunlap, K D; Ragazzi, M A

    2015-11-01

    In ectotherms, the rate of many neural processes is determined externally, by the influence of the thermal environment on body temperature, and internally, by hormones secreted from the thyroid gland. Through thermal acclimation, animals can buffer the influence of the thermal environment by adjusting their physiology to stabilize certain processes in the face of environmental temperature change. The electric organ discharge (EOD) used by weak electric fish for electrocommunication and electrolocation is highly temperature sensitive. In some temperate species that naturally experience large seasonal fluctuations in environmental temperature, the thermal sensitivity (Q10) of the EOD shifts after long-term temperature change. We examined thermal acclimation of EOD frequency in a tropical electric fish, Apteronotus leptorhynchus, which naturally experiences much less temperature change. We transferred fish between thermal environments (25.3 and 27.8 °C) and measured EOD frequency and its thermal sensitivity (Q10) over 11 d. After 6 d, fish exhibited thermal acclimation to both warming and cooling, adjusting the thermal dependence of EOD frequency to partially compensate for the small change (2.5 °C) in water temperature. In addition, we evaluated the thyroid influence on EOD frequency by treating fish with thyroxine or the anti-thyroid compound propylthiouracil (PTU) to stimulate or inhibit thyroid activity, respectively. Thyroxine treatment significantly increased EOD frequency, but PTU had no effect. Neither thyroxine nor PTU treatment influenced the thermal sensitivity (Q10) of EOD frequency during acute temperature change. Thus, the EOD of Apteronotus shows significant thermal acclimation and responds to elevated thyroxine.

  10. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE(ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  11. Modified impulse method for the measurement of the frequency response of acoustic filters to weakly nonlinear transient excitations

    PubMed

    Payri; Desantes; Broatch

    2000-02-01

    In this paper, a modified impulse method is proposed which allows the determination of the influence of the excitation characteristics on acoustic filter performance. Issues related to nonlinear propagation, namely wave steepening and wave interactions, have been addressed in an approximate way, validated against one-dimensional unsteady nonlinear flow calculations. The results obtained for expansion chambers and extended duct resonators indicate that the amplitude threshold for the onset of nonlinear phenomena is related to the geometry considered.

  12. Error-related electrocorticographic activity in humans during continuous movements

    NASA Astrophysics Data System (ADS)

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2012-04-01

    Brain-machine interface (BMI) devices make errors in decoding. Detecting these errors online from neuronal activity can improve BMI performance by modifying the decoding algorithm and by correcting the errors made. Here, we study the neuronal correlates of two different types of errors which can both be employed in BMI: (i) the execution error, due to inaccurate decoding of the subjects’ movement intention; (ii) the outcome error, due to not achieving the goal of the movement. We demonstrate that, in electrocorticographic (ECoG) recordings from the surface of the human brain, strong error-related neural responses (ERNRs) for both types of errors can be observed. ERNRs were present in the low and high frequency components of the ECoG signals, with both signal components carrying partially independent information. Moreover, the observed ERNRs can be used to discriminate between error types, with high accuracy (≥83%) obtained already from single electrode signals. We found ERNRs in multiple cortical areas, including motor and somatosensory cortex. As the motor cortex is the primary target area for recording control signals for a BMI, an adaptive motor BMI utilizing these error signals may not require additional electrode implants in other brain areas.

  13. Tunable error-free optical frequency conversion of a 4ps optical short pulse over 25 nm by four-wave mixing in a polarisation-maintaining optical fibre

    NASA Astrophysics Data System (ADS)

    Morioka, T.; Kawanishi, S.; Saruwatari, M.

    1994-05-01

    Error-free, tunable optical frequency conversion of a transform-limited 4.0 ps optical pulse signal is demonstrated at 6.3 Gbit/s using four-wave mixing in a polarization-maintaining optical fibre. The process generates 4.0-4.6 ps pulses over a 25 nm range with time-bandwidth products of 0.31-0.43 and conversion power penalties of less than 1.5 dB.

  14. Structural Area Inspection Frequency Evaluation (SAIFE). Volume 4. Software Documentation and User’s Manual. Book 2 Modified Program

    DTIC Science & Technology

    1978-04-01

    REPORT NO. FAA-RD-78-29, IV Book 2, STRUCTURAL AREA INSPECTION FREQUENCY EVALUATION (SAIFE) Volume IV. Software Documentation and User’s Manual...OCOR, OSDM, OPD - These variables are the number of occurrences of first cracks, corrosion, service damage, and production defects, respectively, for a

  15. Error Patterns in Problem Solving.

    ERIC Educational Resources Information Center

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  16. Frequency of mentally stimulating activities modifies the relationship between cardiovascular reactivity and executive function in old age.

    PubMed

    Lin, Feng; Heffner, Kathi; Mapstone, Mark; Chen, Ding-Geng Din; Porsteisson, Anton

    2014-11-01

    Recent evidence suggests that younger and middle-age adults who show greater cardiovascular reactivity (CVR) to acute mental stress demonstrate better reasoning and memory skills. The purpose of this study was to examine whether older adults would exhibit a similar positive association between CVR and executive function and whether regular engagement in mentally stimulating activities (MSA) would moderate this association. Secondary cross-sectional analysis. Three clinical research centers in the Midwest and on the West Coast and East Coast. A total of 487 older adults participating in an ongoing national survey. Heart rate (HR) and low-frequency (LF) and high-frequency (HF) domains of heart rate variability (HRV) were measured at baseline and in response to standard mental stress tasks (Stroop color word task and mental arithmetic). Executive function was measured separately from the stress tasks by using five neuropsychological tests. MSA was measured by self-reported frequency of six common MSA. Higher HR reactivity was associated with better executive function after controlling for demographic and health characteristics and baseline HR, and the interaction between HR reactivity and MSA was significant for executive function. Higher LF-HRV reactivity was also associated with executive function, but subsequent analyses indicated that frequency of MSA was the strongest predictor of executive function in models that included LF-HRV or HF-HRV. Higher HR reactivity to acute psychological stress is related to better executive function in older adults. For those with lower HR reactivity, engaging frequently in MSA produced compensatory benefits for executive function. Copyright © 2014 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  17. Use of focus groups to understand African-Americans’ dietary practices: Implications for modifying a food frequency questionnaire

    PubMed Central

    Bovell-Benjamin, Adelia C.; Dawkin, Norma; Pace, Ralphenia D.; Shikany, James M.

    2017-01-01

    Objective To generate information about dietary practices, food preferences and food preparation methods from African-Americans in Macon County, Alabama, as a precursor to an intervention designed to modify an existing dietary health questionnaire (DHQ). Method African-American males (n=30) and females (n=31), ages 20 to 75 years, participated in eight focus groups in Macon County, Alabama, between June and July 2007. Results The core topics identified were dietary practices; food preferences; food preparation methods; fast food practices; and seasonal/specialty foods. The younger focus group participants reported consuming mostly fast foods such as hamburgers for lunch. Fruits, vegetables, salads, fish, chicken and sandwiches were the most common lunch foods for the older males and females. Across the groups, rice, cornbread and potatoes were reportedly the most commonly consumed starchy foods at dinner. Frying and baking were the most common cooking methods. Fewer participants reported removing the skin when cooking chicken than leaving it on. Traditional foods including fried green tomatoes and cracklings were selected for addition to the modified DHQ, while foods not commonly consumed were deleted. Conclusions Participants described high-fat traditional food preferences, common frying and addition of salted meats to vegetables, which informed the modification of a DHQ. PMID:19285101

  18. Errors of Omission in English-Speaking Children's Production of Plurals and the Past Tense: The Effects of Frequency, Phonology, and Competition

    ERIC Educational Resources Information Center

    Matthews, Danielle E.; Theakston, Anna L.

    2006-01-01

    How do English-speaking children inflect nouns for plurality and verbs for the past tense? We assess theoretical answers to this question by considering errors of omission, which occur when children produce a stem in place of its inflected counterpart (e.g., saying "dress" to refer to 5 dresses). A total of 307 children (aged 3;11-9;9)…

  19. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements, such as radio frequency interference (RFI), which may induce errors in data transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable, and techniques must be implemented to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
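
The error-detection half of that scheme is easy to show concretely. Below is a bitwise sketch of a 16-bit CRC of the CCITT family (polynomial x^16 + x^12 + x^5 + 1, all-ones preset); the frame contents are made-up illustration data:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16: shift each message bit through the register,
    XORing in the polynomial whenever the top bit falls out set."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
check = crc16_ccitt(frame)

# A single flipped bit in the received frame changes the checksum,
# so the corruption is detected:
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
```

The transmitter appends `check` to the frame; the receiver recomputes the CRC and compares, flagging any mismatch as a detected (though not correctable) error.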

  20. Infliximab therapy increases the frequency of circulating CD16(+) monocytes and modifies macrophage cytokine response to bacterial infection.

    PubMed

    Nazareth, N; Magro, F; Silva, J; Duro, M; Gracio, D; Coelho, R; Appelberg, R; Macedo, G; Sarmento, A

    2014-09-01

    Crohn's disease (CD) has been correlated with altered macrophage response to microorganisms. Considering the efficacy of infliximab treatment on CD remission, we investigated infliximab effects on circulating monocyte subsets and on macrophage cytokine response to bacteria. Human peripheral blood monocyte-derived macrophages were obtained from CD patients, treated or not with infliximab. Macrophages were infected with Escherichia coli, Enterococcus faecalis, Mycobacterium avium subsp. paratuberculosis (MAP) or M. avium subsp avium, and cytokine levels [tumour necrosis factor (TNF) and interleukin (IL)-10] were evaluated at different time-points. To evaluate infliximab-dependent effects on monocyte subsets, we studied CD14 and CD16 expression by peripheral blood monocytes before and after different infliximab administrations. We also investigated TNF secretion by macrophages obtained from CD16(+) and CD16(-) monocytes and the frequency of TNF(+) cells among CD16(+) and CD16(-) monocyte-derived macrophages from CD patients. Infliximab treatment resulted in elevated TNF and IL-10 macrophage response to bacteria. An infliximab-dependent increase in the frequency of circulating CD16(+) monocytes (particularly the CD14(++) CD16(+) subset) was also observed (before infliximab: 4·65 ± 0·58%; after three administrations: 10·68 ± 2·23%). In response to MAP infection, macrophages obtained from CD16(+) monocytes were higher TNF producers and CD16(+) macrophages from infliximab-treated CD patients showed increased frequency of TNF(+) cells. In conclusion, infliximab treatment increased the TNF production of CD macrophages in response to bacteria, which seemed to depend upon enrichment of CD16(+) circulating monocytes, particularly of the CD14(++) CD16(+) subset. Infliximab treatment of CD patients also resulted in increased macrophage IL-10 production in response to bacteria, suggesting an infliximab-induced shift to M2 macrophages.

  1. Wound healing treatment by high frequency ultrasound, microcurrent, and combined therapy modifies the immune response in rats

    PubMed Central

    Korelo, Raciele I. G.; Kryczyk, Marcelo; Garcia, Carolina; Naliwaiko, Katya; Fernandes, Luiz C.

    2016-01-01

    BACKGROUND: Therapeutic high-frequency ultrasound, microcurrent, and a combination of the two have been used as potential interventions in the soft tissue healing process, but little is known about their effect on the immune system. OBJECTIVE: To evaluate the effects of therapeutic high frequency ultrasound, microcurrent, and the combined therapy of the two on the size of the wound area, peritoneal macrophage function, CD4+ and CD8+ T lymphocyte populations, and plasma concentration of interleukins (ILs). METHOD: Sixty-five Wistar rats were randomized into five groups, as follows: uninjured control (C, group 1), lesion and no treatment (L, group 2), lesion treated with ultrasound (LU, group 3), lesion treated with microcurrent (LM, group 4), and lesion treated with combined therapy (LUM, group 5). For groups 3, 4 and 5, treatment was initiated 24 hours after surgery under anesthesia, and each group was allocated into three different subgroups (n=5) to allow for the use of the different therapy resources on days 3, 7, and 14. Photoplanimetry was performed daily. After euthanasia, blood was collected for immune analysis. RESULTS: Ultrasound increased the phagocytic capacity and the production of nitric oxide by macrophages and induced the reduction of CD4+ cells, the CD4+/CD8+ ratio, and the plasma concentration of IL-1β. Microcurrent and combined therapy decreased the production of superoxide anion, nitric oxide, CD4+-positive cells, the CD4+/CD8+ ratio, and IL-1β concentration. CONCLUSIONS: Therapeutic high-frequency ultrasound, microcurrent, and combined therapy changed the activity of the innate and adaptive immune system during the healing process but did not accelerate the closure of the wound. PMID:26786082

  2. Frequency of Mentally Stimulating Activities Modifies the Relationship between Cardiovascular Reactivity and Executive Function in Old Age

    PubMed Central

    Lin, Feng; Heffner, Kathi; Mapstone, Mark; Chen, Ding-Geng (Din); Porsteisson, Anton

    2013-01-01

    Objective Recent evidence suggests that younger- and middle-age adults who show greater cardiovascular reactivity (CVR) to acute mental stress demonstrate better reasoning and memory skills. The purpose of this study was to examine whether older adults would show a similar positive association between CVR and executive function, and whether regular engagement in mentally stimulating activities (MSA) would moderate this association. Design Secondary cross-sectional analysis. Setting Three general clinical research centers located in the West Coast, Midwest, and East Coast. Participants 487 older adults participating in an on-going national survey. Measurements Heart rate (HR) and low (LF) and high frequency (HF) domains of heart rate variability (HRV) were measured at baseline and in response to standard mental stress tasks (Stroop color word task and mental arithmetic). Executive function was measured separately from the stress tasks using five neuropsychological tests. MSA was measured by self-report frequency of six common mentally stimulating activities. Results Higher HR reactivity was associated with better executive function after controlling for demographic and health variables and baseline HR activity and the interaction between HR reactivity and MSA was significant for executive function. Higher LF-HRV reactivity was also associated with executive function, but subsequent analyses indicated that frequency of MSA was the strongest predictor of executive function in models that included LF- or HF-HRV. Conclusions Higher HR reactivity to acute psychological stress is related to better executive function in older adults. For those with lower HR reactivity, engaging frequently in MSA showed significant compensatory benefits for executive function. PMID:23891367

  3. Medication Errors

    MedlinePlus

    ... address broader product safety issues. FDA Drug Safety Communications for Drug Products Associated with Medication Errors FDA Drug Safety Communication: FDA approves brand name change for antidepressant drug ...

  4. Phase tracking for pulsar navigation with Doppler frequency

    NASA Astrophysics Data System (ADS)

    Xinyuan, Zhang; Ping, Shuai; Liangwei, Huang

    2016-12-01

    The Doppler frequency in pulsar navigation is an effect caused by spacecraft and pulsar motion, which degrades pulsar navigation accuracy. To describe this influence, we establish a Doppler frequency measurement model based on pulsar timing. With this model, we describe the relationship between phase estimation performance and observation time when a Doppler frequency is present. To reduce the pulsar navigation error due to the Doppler frequency, we design a phase tracking loop for pulsar navigation in which the pulsar frequency is corrected before phase estimation. As a result, the impact of the Doppler frequency is lessened, and the observation interval can be lengthened to improve phase estimation performance.
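
As a much-simplified, open-loop stand-in for such a correction (illustrative numbers, not from the paper): the apparent pulse frequency, including the Doppler shift, is the least-squares slope of accumulated pulse phase against time, and using it in place of the nominal frequency removes the phase drift before folding:

```python
# Hypothetical, noiseless sketch: recover the Doppler-shifted frequency
# by fitting phase vs. time, then check the corrected phase residuals.
f_nominal = 29.946923    # pulsar spin frequency, Hz (illustrative)
f_doppler = 1.2e-4       # Doppler shift, Hz (illustrative)
t = [k * 10.0 for k in range(100)]                    # observation epochs, s
phase = [(f_nominal + f_doppler) * tk for tk in t]    # accumulated phase, cycles

# Least-squares slope of phase against time = apparent frequency.
n = len(t)
mt, mp = sum(t) / n, sum(phase) / n
f_est = (sum((ti - mt) * (pi - mp) for ti, pi in zip(t, phase))
         / sum((ti - mt) ** 2 for ti in t))

# With the estimated frequency, the predicted phase matches the data,
# so folding over the whole interval stays coherent.
residual = max(abs(pi - f_est * ti) for ti, pi in zip(t, phase))
```

Had the nominal frequency been used instead, the phase error would grow as f_doppler·t (about 0.12 cycles over this 990 s span), smearing the folded profile; this is the effect the paper's tracking loop suppresses continuously rather than in one batch fit.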

  5. Error detection in anatomic pathology.

    PubMed

    Zarbo, Richard J; Meier, Frederick A; Raab, Stephen S

    2005-10-01

    To define the magnitude of error occurring in anatomic pathology, to propose a scheme to classify such errors so their influence on clinical outcomes can be evaluated, and to identify quality assurance procedures able to reduce the frequency of errors. (a) Peer-reviewed literature search via PubMed for studies from single institutions and multi-institutional College of American Pathologists Q-Probes studies of anatomic pathology error detection and prevention practices; (b) structured evaluation of defects in surgical pathology reports uncovered in the Department of Pathology and Laboratory Medicine of the Henry Ford Health System in 2001-2003, using a newly validated error taxonomy scheme; and (c) comparative review of anatomic pathology quality assurance procedures proposed to reduce error. Marked differences in both definitions of error and pathology practice make comparison of error detection and prevention procedures among publications from individual institutions impossible. Q-Probes studies further suggest that observer redundancy reduces diagnostic variation and interpretive error, which ranges from 1.2 to 50 errors per 1000 cases; however, it is unclear which forms of such redundancy are the most efficient in uncovering diagnostic error. The proposed error taxonomy tested has shown a very good interobserver agreement of 91.4% (kappa = 0.8780; 95% confidence limit, 0.8416-0.9144), when applied to amended reports, and suggests a distribution of errors among identification, specimen, interpretation, and reporting variables. Presently, there are no standardized tools for defining error in anatomic pathology, so it cannot be reliably measured nor can its clinical impact be assessed. The authors propose a standardized error classification that would permit measurement of error frequencies, clinical impact of errors, and the effect of error reduction and prevention efforts. In particular, the value of double-reading, case conferences, and consultations (the
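
The interobserver agreement quoted above (91.4%, kappa = 0.8780) uses Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch on a made-up two-rater contingency table (the counts are illustration data, not the study's):

```python
# Rows: rater 1's error category; columns: rater 2's. Diagonal = agreement.
table = [[40, 2, 1],
         [3, 30, 2],
         [1, 2, 19]]
total = sum(sum(row) for row in table)
k = len(table)

# Observed agreement: fraction of cases on the diagonal.
p_obs = sum(table[i][i] for i in range(k)) / total

# Chance agreement: product of the raters' marginal category rates.
row = [sum(r) for r in table]
col = [sum(table[i][j] for i in range(k)) for j in range(k)]
p_exp = sum(row[i] * col[i] for i in range(k)) / total ** 2

kappa = (p_obs - p_exp) / (1 - p_exp)
```

A kappa near 0.88, as reported, indicates agreement well beyond chance, which is why the authors treat the taxonomy as reliable enough to support error-frequency measurement.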

  6. Exposure to an extremely low-frequency electromagnetic field only slightly modifies the proteome of Chromobacterium violaceum ATCC 12472.

    PubMed

    Baraúna, Rafael A; Santos, Agenor V; Graças, Diego A; Santos, Daniel M; Ghilardi, Rubens; Pimenta, Adriano M C; Carepo, Marta S P; Schneider, Maria P C; Silva, Artur

    2015-05-01

    Several studies of the physiological responses of different organisms exposed to extremely low-frequency electromagnetic fields (ELF-EMF) have been described. In this work, we report the minimal effects of in situ exposure to ELF-EMF on the global protein expression of Chromobacterium violaceum using a gel-based proteomic approach. The protein expression profile was only slightly altered, with five differentially expressed proteins detected in the exposed cultures; two of these proteins (DNA-binding stress protein, Dps, and alcohol dehydrogenase) were identified by MS/MS. The enhanced expression of Dps possibly helped to prevent physical damage to DNA. Although small, the changes in protein expression observed here were probably beneficial in helping the bacteria to adapt to the stress generated by the electromagnetic field.

  7. Exposure to an extremely low-frequency electromagnetic field only slightly modifies the proteome of Chromobacterium violaceum ATCC 12472

    PubMed Central

    Baraúna, Rafael A.; Santos, Agenor V.; Graças, Diego A.; Santos, Daniel M.; Ghilardi, Rubens; Pimenta, Adriano M. C.; Carepo, Marta S. P.; Schneider, Maria P.C.; Silva, Artur

    2015-01-01

    Several studies of the physiological responses of different organisms exposed to extremely low-frequency electromagnetic fields (ELF-EMF) have been described. In this work, we report the minimal effects of in situ exposure to ELF-EMF on the global protein expression of Chromobacterium violaceum using a gel-based proteomic approach. The protein expression profile was only slightly altered, with five differentially expressed proteins detected in the exposed cultures; two of these proteins (DNA-binding stress protein, Dps, and alcohol dehydrogenase) were identified by MS/MS. The enhanced expression of Dps possibly helped to prevent physical damage to DNA. Although small, the changes in protein expression observed here were probably beneficial in helping the bacteria to adapt to the stress generated by the electromagnetic field. PMID:26273227

  8. Effect of a whole-body vibration training modifying the training frequency of workouts per week in active adults.

    PubMed

    Martínez-Pardo, Esmeraldo; Romero-Arenas, Salvador; Martínez-Ruiz, Enrique; Rubio-Arias, Jacobo A; Alcaraz, Pedro E

    2014-11-01

    The aim of this study was to evaluate the effects of whole-body vibration training, varying the training frequency (2 or 3 sessions per week), on the development of strength, body composition, and mechanical power. Forty-one (32 men and 9 women) recreationally active subjects (21.4 ± 3.0 years old; 172.6 ± 10.9 cm; 70.9 ± 12.3 kg) took part in the study, divided into 2 experimental groups (G2 = 2 sessions per week, G3 = 3 sessions per week) and a control group (CG). The frequency of vibration (50 Hz), amplitude (4 mm), time of work (60 seconds), and time of rest (60 seconds) were constant for the G2 and G3 groups. Maximum isokinetic strength, body composition, and vertical jump performance were evaluated at the beginning and the end of the training cycle. A statistically significant increase in isokinetic strength was observed in G2 and G3 at angular velocities of 60, 180, and 270°·s⁻¹. Total fat-free mass increased statistically significantly in G2 (0.9 ± 1.0 kg) and G3 (1.5 ± 0.7 kg). In addition, statistically significant differences between G3 and CG (1.04 ± 1.7%) (p = 0.05) were found. There were no statistically significant changes in total fat mass, fat percentage, bone mineral content, or bone mineral density in any of the groups. Both vibration training schedules produced statistically significant improvements in isokinetic strength. The vibration magnitude used in the study provided an adaptation stimulus for muscle hypertrophy. The vibration training used in this study may be valid for athletes seeking to develop both strength and hypertrophy of the lower limbs.

  9. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
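    The rounding-error accumulation described in this abstract is easy to demonstrate; a minimal Python sketch (illustrative only, not taken from the indexed chapter):

```python
import math

# 0.1 has no exact binary representation, so each addition rounds;
# summing ten copies does not give exactly 1.0.
naive = sum(0.1 for _ in range(10))
print(naive == 1.0)          # False
print(abs(naive - 1.0))      # tiny residual, on the order of machine epsilon

# math.fsum tracks exact partial sums and removes the accumulation error.
print(math.fsum(0.1 for _ in range(10)) == 1.0)  # True
```

    Compensated summation (as in `math.fsum`) is one standard way to keep accumulated rounding error bounded without resorting to arbitrary-precision arithmetic.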

  10. Language comprehension errors: A further investigation

    NASA Astrophysics Data System (ADS)

    Clarkson, Philip C.

    1991-06-01

    Comprehension errors made when attempting mathematical word problems have been noted as one of the high frequency categories in error analysis. This error category has been assumed to be language based. The study reported here provides some support for the linkage of comprehension errors to measures of language competency. Further, there is evidence that the frequency of such errors is related to competency in both the mother tongue and the language of instruction for bilingual students.

  11. Photoperiodism as a modifier of effect of extremely low-frequency electromagnetic field on morphological properties of pineal gland.

    PubMed

    Lukac, Tamara; Matavulj, Amela; Matavulj, Milica; Rajković, Vesna; Lazetić, Bogosav

    2006-08-01

    The aim of our study was to determine, using histological and stereological methods, whether photoperiodism has any impact on the effects that chronic (three-month) exposure to LF-EMF (50 Hz) has on the morphological characteristics of the rat pineal gland. The experiment was performed on 48 Mill Hill male rats (24 experimental and 24 control). From birth, 24 rats were exposed for 7 h a day, 5 days a week, for 3 months to LF-EMF (50 Hz, 50-500 µT, 10 V/m). In winter (short days, long nights), pineal gland activity and neuroendocrine sensitivity are increased. The study was performed during both summer and winter, following an identical protocol. After sacrifice of the animals, pineal gland samples were processed for HE staining and then analyzed using stereological methods. The most significant changes in the epiphysis in the first group of animals in wintertime were an altered glandular appearance, hyperemia, and shrunken pinealocytes with pale pink, sparse cytoplasm and irregular, rod-shaped nuclei. In the second group (II), pinealocytes were enlarged, with vacuolated cytoplasm and hyperchromatic, enlarged nuclei. Morphological changes of the pineal gland in rats in the summertime were not as intense as in winter, and the appearance of the gland in group II was comparable to that of the control group. Stereological results showed, in both winter and summer, a decrease in the volume density of pinealocytes, their cytoplasm, and their nuclei in the first group, and an increase in the volume density of pinealocytes, cytoplasm, and nuclei in the second group in winter, while the second group's summertime results were equal to those of the control group. Photoperiodism is a modifier of the effect of LF-EMF on the morphological structure of the pineal gland, because gland recovery is incomplete in winter and reversible in summer.

  12. Design and implementation of a new modified sliding mode controller for grid-connected inverter to controlling the voltage and frequency.

    PubMed

    Ghanbarian, Mohammad Mehdi; Nayeripour, Majid; Rajaei, Amirhossein; Mansouri, Mohammad Mahdi

    2016-03-01

    As the output power of a microgrid with renewable energy sources should be regulated based on grid conditions, robust controllers that share and balance power in order to regulate the voltage and frequency of the microgrid are critical. Therefore, a proper control system is necessary for updating the reference signals and determining the contribution of each inverter to microgrid control. This paper proposes a new adaptive method that remains robust under changing conditions. The controller is based on a modified sliding mode controller that adapts to both linear and nonlinear loads. The performance of the proposed method is validated by simulation results and experimental laboratory results.

  13. Accurate identification of the frequency response functions for the rotor-bearing-foundation system using the modified pseudo mode shape method

    NASA Astrophysics Data System (ADS)

    Chen, Yeong-Shu; Cheng, Ye-Dar; Yang, Tachung; Koai, Kwang-Lu

    2010-03-01

    In this paper, an identification technique for the dynamic analysis of rotor-bearing-foundation systems, called the pseudo mode shape method (PMSM), was improved in order to enhance the accuracy of the identified dynamic characteristic matrices of its foundation models. Two procedures, namely phase modification and numerical optimisation, were introduced into the algorithm of PMSM to effectively improve its accuracy. Generally, it is necessary to model the whole foundation when studying the dynamics of a rotor system with the finite element method; this is either infeasible or impractical when the foundation is too complicated. Instead, the PMSM uses the frequency response function (FRF) data of joint positions between the rotor and the foundation to establish the equivalent mass, damping, and stiffness matrices of the foundation without having to build the physical model. However, the accuracy of the FRF obtained for the system is still unsatisfactory, especially at the higher modes. In order to demonstrate the effectiveness of the presented methods, the FRF of a solid foundation was computed using both the original and the modified PMSM, as well as a finite element (ANSYS) model, for comparison. The results showed that the accuracy of the obtained FRF was improved remarkably with the modified PMSM, based on the results of the ANSYS model. In addition, an induction motor resembling a rotor-bearing-foundation system, with its housing treated as the foundation, was taken as an example to verify the algorithm experimentally. The FRF curves at the bearing supports of the rotor (armature) were obtained through modal testing to estimate the above-mentioned equivalent matrices of the housing. The FRF of the housing, calculated from the equivalent matrices with the modified PMSM, showed satisfactory consistency with that from the modal testing.

  14. New Gear Transmission Error Measurement System Designed

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2001-01-01

    The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.

  15. Axial eye growth and refractive error development can be modified by exposing the peripheral retina to relative myopic or hyperopic defocus.

    PubMed

    Benavente-Pérez, Alexandra; Nour, Ann; Troilo, David

    2014-09-04

    Bifocal contact lenses were used to impose hyperopic and myopic defocus on the peripheral retina of marmosets. Eye growth and refractive state were compared with untreated animals and those treated with single-vision or multizone contact lenses from earlier studies. Thirty juvenile marmosets wore one of three experimental annular bifocal contact lens designs on their right eyes and a plano contact lens on the left eye as a control for 10 weeks from 70 days of age (10 marmosets/group). The experimental designs had plano center zones (1.5 or 3 mm) and +5 diopters [D] or -5 D in the periphery (referred to as +5 D/1.5 mm, +5 D/3 mm and -5 D/3 mm). We measured the central and peripheral mean spherical refractive error (MSE), vitreous chamber depth (VC), pupil diameter (PD), calculated eye growth, and myopia progression rates prior to and during treatment. The results were compared with age-matched untreated (N=25), single-vision positive (N=19), negative (N=16), and +5/-5 D multizone lens-reared marmosets (N=10). At the end of treatment, animals in the -5 D/3 mm group had larger (P<0.01) and more myopic eyes (P<0.05) than animals in the +5 D/1.5 mm group. There was a dose-dependent relationship between the peripheral treatment zone area and the treatment-induced changes in eye growth and refractive state. Pretreatment ocular growth rates and baseline peripheral refraction accounted for 40% of the induced refraction and axial growth rate changes. Eye growth and refractive state can be manipulated by altering peripheral retinal defocus. Imposing peripheral hyperopic defocus produces axial myopia, whereas peripheral myopic defocus produces axial hyperopia. The effects are smaller than using single-vision contact lenses that impose full-field defocus, but support the use of bifocal or multifocal contact lenses as an effective treatment for myopia control.

  16. Analysis of ionospheric refraction error corrections for GRARR systems

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.

    1971-01-01

    A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.

  17. Diagnostic errors in pediatric radiology.

    PubMed

    Taylor, George A; Voss, Stephan D; Melvin, Patrice R; Graham, Dionne A

    2011-03-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement.

  18. Exploration of MR-guided head and neck hyperthermia by phantom testing of a modified prototype applicator for use with proton resonance frequency shift thermometry.

    PubMed

    Numan, Wouter C M; Hofstetter, Lorne W; Kotek, Gyula; Bakker, Jurriaan F; Fiveland, Eric W; Houston, Gavin C; Kudielka, Guido; Yeo, Desmond T B; Paulides, Margarethus M

    2014-05-01

    Magnetic resonance thermometry (MRT) offers non-invasive temperature imaging and can greatly contribute to the effectiveness of head and neck hyperthermia. We therefore wish to redesign the HYPERcollar head and neck hyperthermia applicator for simultaneous radio frequency (RF) heating and magnetic resonance thermometry. In this work we tested the feasibility of this goal through an exploratory experiment, in which we used a minimally modified applicator prototype to heat a neck model phantom and used an MR scanner to measure its temperature distribution. We identified several distorting factors of our current applicator design and experimental methods to be addressed during development of a fully MR compatible applicator. To allow MR imaging of the electromagnetically shielded inside of the applicator, only the lower half of the HYPERcollar prototype was used. Two of its antennas radiated a microwave signal (150 W, 434 MHz) for 11 min into the phantom, creating a high gradient temperature profile (ΔTmax = 5.35 °C). Thermal distributions were measured sequentially, using drift-corrected proton resonance frequency shift-based MRT. Measurement accuracy was assessed using optical probe thermometry and found to be about 0.4 °C (0.1-0.7 °C). Thermal distribution size and shape were verified by thermal simulations and found to have a good correlation (r² = 0.76).

  19. Potential-dependent structures investigated at the perchloric acid solution/iodine modified Au(111) interface by electrochemical frequency-modulation atomic force microscopy.

    PubMed

    Utsunomiya, Toru; Tatsumi, Shoko; Yokota, Yasuyuki; Fukui, Ken-ichi

    2015-05-21

    Electrochemical frequency-modulation atomic force microscopy (EC-FM-AFM) was adopted to analyze the electrified interface between an iodine modified Au(111) and a perchloric acid solution. Atomic resolution imaging of the electrode was strongly dependent on the electrode potential within the electrochemical window: each iodine atom was imaged in the cathodic range of the electrode potential, but not in the more anodic range where the tip is retracted by approximately 0.1 nm compared to the cathodic case for the same imaging parameters. The frequency shift versus tip-to-sample distance curves obtained in the electric double layer region on the iodine adlayer indicated that the water structuring became weaker at the anodic potential, where the atomic resolution images could not be obtained, and immediately recovered at the original cathodic potential. The reversible hydration structures were consistent with the reversible topographic images and the cyclic voltammetry results. These results indicate that perchlorate anions concentrated at the anodic potential affect the interface hydration without any irreversible changes to the interface under these conditions.

  20. High frequency electromagnetic properties of interstitial-atom-modified Ce₂Fe₁₇Nₓ and its composites

    SciTech Connect

    Li, L. Z.; Wei, J. Z.; Xia, Y. H.; Wu, R.; Yun, C.; Yang, Y. B.; Yang, W. Y.; Du, H. L.; Han, J. Z.; Liu, S. Q.; Yang, Y. C.; Wang, C. S.; Yang, J. B. E-mail: jbyang@pku.edu.cn

    2014-07-14

    The magnetic and microwave absorption properties of the interstitial-atom-modified intermetallic compound Ce₂Fe₁₇Nₓ have been investigated. The Ce₂Fe₁₇Nₓ compound shows a planar anisotropy with saturation magnetization of 1088 kA/m at room temperature. The Ce₂Fe₁₇Nₓ-paraffin composite with a mass ratio of 1:1 exhibits a permeability of μ′ = 2.7 at low frequency, together with a reflection loss of −26 dB at 6.9 GHz with a thickness of 1.5 mm and −60 dB at 2.2 GHz with a thickness of 4.0 mm. It was found that this composite increases the Snoek limit and exhibits both high working frequency and permeability due to its high saturation magnetization and high ratio of the c-axis anisotropy field to the basal plane anisotropy field. Hence, it is possible that this composite can be used as a high-performance thin-layer microwave absorber.
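    Reflection-loss figures like those quoted above are conventionally obtained from the single-layer, metal-backed transmission-line model. A minimal sketch, assuming hypothetical complex permeability and permittivity (the record does not give the measured values for the composite):

```python
import cmath
import math

def reflection_loss_db(mu_r, eps_r, f_hz, d_m):
    """Reflection loss (dB) of a metal-backed absorber layer of thickness d_m."""
    c = 2.998e8  # speed of light, m/s
    # Normalized input impedance of the layer (transmission-line model):
    # Z_in = sqrt(mu/eps) * tanh(j * 2*pi*f*d * sqrt(mu*eps) / c)
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * f_hz * d_m * cmath.sqrt(mu_r * eps_r) / c)
    # RL = 20 log10 |(Z_in - 1) / (Z_in + 1)|; more negative means better absorption.
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

# Hypothetical lossy material evaluated at 6.9 GHz for a 1.5 mm layer:
print(reflection_loss_db(2.7 - 0.5j, 10 - 1j, 6.9e9, 1.5e-3))
```

    Sweeping frequency and thickness with such a function is how matching thicknesses (like the 1.5 mm and 4.0 mm values above) are typically located.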

  1. Analyzing the properties of acceptor mode in two-dimensional plasma photonic crystals based on a modified finite-difference frequency-domain method

    SciTech Connect

    Zhang, Hai-Feng; Ding, Guo-Wen; Lin, Yi-Bing; Chen, Yu-Qing

    2015-05-15

    In this paper, the properties of the acceptor mode in two-dimensional plasma photonic crystals (2D PPCs), composed of homogeneous and isotropic dielectric cylinders inserted into a nonmagnetized plasma background with square lattices under transverse-magnetic waves, are theoretically investigated by a modified finite-difference frequency-domain (FDFD) method with a supercell technique, in which the symmetry of each supercell is broken by removing a central rod. A new FDFD method is developed to calculate the band structures of such PPCs. The novel FDFD method adopts a general function to describe the distribution of the dielectric in the present PPCs, which easily transforms the complicated nonlinear eigenvalue equation into a simple linear one. The convergence and effectiveness of the proposed FDFD method are analyzed using a numerical example. The simulated results demonstrate that the proposed FDFD method achieves sufficient accuracy compared to the plane wave expansion method, and that good convergence is obtained if the number of meshed grids is large enough. As a comparison, two different configurations of photonic crystals (PCs) with similar defects are theoretically investigated. Compared to conventional dielectric-air PCs, not only does the acceptor mode have a higher frequency, but an additional photonic bandgap (PBG) can be found in the low-frequency region. The calculated results also show that the PBGs of the proposed PPCs are enlarged as the point defect is introduced. The influences of the parameters of the present PPCs on the properties of the acceptor mode are also discussed in detail. Numerical simulations reveal that the acceptor mode in the present PPCs can be easily tuned by changing those parameters. These results hold promise for designing tunable signal-processing or time-delay devices based on the present PPCs.

  2. Error estimation for ORION baseline vector determination

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1980-01-01

    Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.
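    The covariance analysis described in this abstract follows the standard error-budget pattern: with a matrix J of partial derivatives of the baseline components with respect to each error source and a (diagonal) source covariance S, the baseline error covariance is J S Jᵀ, and each source's contribution can be read off separately. A hedged sketch with hypothetical numbers (not ORION values):

```python
# J S J^T propagation with pure-Python matrix helpers.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

J = [[1.0, 0.5],   # d(baseline_x)/d(source_1), d(baseline_x)/d(source_2)
     [0.2, 1.3]]   # d(baseline_y)/d(source_1), d(baseline_y)/d(source_2)
S = [[0.04, 0.0],  # variance of error source 1 (hypothetical units^2)
     [0.0, 0.01]]  # variance of error source 2

# Total baseline error covariance.
cov = mat_mul(mat_mul(J, S), transpose(J))

# Contribution of each source j: sigma_j^2 * outer(J[:, j], J[:, j]).
# These sum to `cov`, which is what lets an error budget attribute
# baseline error to individual sources.
contrib = [[[S[j][j] * J[r][j] * J[c][j] for c in range(2)] for r in range(2)]
           for j in range(2)]
```

    Ranking the diagonal of each `contrib[j]` identifies which source dominates, which is how design parameters such as antenna size or system temperature would be traded off.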

  3. Correcting numerical integration errors caused by small aliasing errors

    SciTech Connect

    Smallwood, D.O.

    1997-11-01

    Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
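    A related effect — the rapid growth of numerical-integration error for frequency components near the Nyquist frequency — can be seen in a short sketch (illustrative parameters, not from the report): integrate a sampled sinusoidal acceleration with the trapezoidal rule and compare with the exact velocity.

```python
import math

fs = 100.0        # sample rate, Hz (Nyquist frequency is 50 Hz)
T = 1.0 / fs
n = 200           # 2 s of data

def trap_velocity(f):
    """Cumulative trapezoidal integration of a(t) = sin(2*pi*f*t)."""
    a = [math.sin(2 * math.pi * f * i * T) for i in range(n)]
    v, out = 0.0, [0.0]
    for i in range(1, n):
        v += T * (a[i - 1] + a[i]) / 2.0
        out.append(v)
    return out

def exact_velocity(f):
    """Exact integral of sin(2*pi*f*t) with v(0) = 0."""
    return [(1.0 - math.cos(2 * math.pi * f * i * T)) / (2 * math.pi * f)
            for i in range(n)]

def rel_error(f):
    num, ex = trap_velocity(f), exact_velocity(f)
    return (max(abs(x - y) for x, y in zip(num, ex))
            / max(abs(y) for y in ex))

print(rel_error(1.0))    # negligible at 1 Hz
print(rel_error(49.0))   # large for a component just below Nyquist
```

    The low-frequency component integrates almost exactly, while the near-Nyquist component is badly mishandled, which is why waveforms whose content spans a wide frequency range are the problematic case discussed above.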

  4. Reduction of Surface Errors over a Wide Range of Spatial Frequencies Using a Combination of Electrolytic In-Process Dressing Grinding and Magnetorheological Finishing

    NASA Astrophysics Data System (ADS)

    Kunimura, Shinsuke; Ohmori, Hitoshi

    We present a rapid process for producing flat and smooth surfaces. In this technical note, a fabrication result for a carbon mirror is shown. Electrolytic in-process dressing (ELID) grinding with a metal-bonded abrasive wheel, then a metal-resin-bonded abrasive wheel, followed by a conductive-rubber-bonded abrasive wheel, and finally magnetorheological finishing (MRF) were performed as the first, second, third, and final steps, respectively, of this process. Flatness over the whole surface was improved by the first and second steps. After the third step, peak-to-valley (PV) and root-mean-square (rms) values in a 0.72 × 0.54 mm² area on the surface were improved. These values were further improved after the final step, and a PV value of 10 nm and an rms value of 1 nm were obtained. Form errors and small surface irregularities such as surface waviness and micro-roughness were efficiently reduced by ELID grinding with the above three kinds of abrasive wheels because of the high removal rate of ELID grinding, and residual small irregularities were reduced by short-duration MRF. This process makes it possible to produce flat and smooth surfaces in several hours.

  5. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  6. Performance analysis of ARQ error controls under Markovian block error pattern

    NASA Astrophysics Data System (ADS)

    Cho, Young Jong; Un, Chong Kwan

    1994-02-01

    In this paper, we investigate the effect of forward/backward channel memory (statistical dependence in the occurrence of transmission errors) on ARQ error controls. To take into account the effect of backward channel errors in the performance analysis, we consider modified ARQ schemes with an effective retransmission strategy that prevents the deadlock incurred by errors on acknowledgments. In the study, we consider two modified go-back-N schemes, one with timer control and one with buffer control.
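    The interaction of go-back-N with a channel that has memory can be illustrated with a small simulation. This sketch is not the analysis from the paper: it models only the forward channel as a two-state Markov (Gilbert-Elliott) process, and all parameters are hypothetical.

```python
import random

def simulate(n_frames=2000, window=4, p_gb=0.05, p_bg=0.3,
             loss_good=0.01, loss_bad=0.5, seed=1):
    """Go-back-N ARQ over a two-state Markov forward channel.

    Returns throughput efficiency = frames delivered / frames transmitted.
    """
    rng = random.Random(seed)
    state = "good"
    base = 0            # next in-order frame expected by the receiver
    transmissions = 0
    while base < n_frames:
        # Send up to `window` frames starting at the current base.
        for _seq in range(base, min(base + window, n_frames)):
            transmissions += 1
            # Advance the channel state once per frame (channel memory).
            if state == "good":
                if rng.random() < p_gb:
                    state = "bad"
            elif rng.random() < p_bg:
                state = "good"
            loss = loss_bad if state == "bad" else loss_good
            if rng.random() < loss:
                # Frame in error: receiver discards it and all later frames;
                # the sender will go back and retransmit from `base`.
                break
            base += 1       # cumulative ACK advances past this frame
    return n_frames / transmissions

print(simulate())
```

    Because errors cluster in the bad state, a single burst typically forces several go-back retransmissions in a row, which is why channel memory, not just the average error rate, drives ARQ performance.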

  7. Phase Error Correction for Approximated Observation-Based Compressed Sensing Radar Imaging

    PubMed Central

    Li, Bo; Liu, Falin; Zhou, Chongbin; Lv, Yuanhao; Hu, Jingqiu

    2017-01-01

    Defocus of the reconstructed image of synthetic aperture radar (SAR) occurs in the presence of the phase error. In this work, a phase error correction method is proposed for compressed sensing (CS) radar imaging based on approximated observation. The proposed method has better image focusing ability with much less memory cost, compared to the conventional approaches, due to the inherent low memory requirement of the approximated observation operator. The one-dimensional (1D) phase error correction for approximated observation-based CS-SAR imaging is first carried out and it can be conveniently applied to the cases of random-frequency waveform and linear frequency modulated (LFM) waveform without any a priori knowledge. The approximated observation operators are obtained by calculating the inverse of Omega-K and chirp scaling algorithms for random-frequency and LFM waveforms, respectively. Furthermore, the 1D phase error model is modified by incorporating a priori knowledge and then a weighted 1D phase error model is proposed, which is capable of correcting two-dimensional (2D) phase error in some cases, where the estimation can be simplified to a 1D problem. Simulation and experimental results validate the effectiveness of the proposed method in the presence of 1D phase error or weighted 1D phase error. PMID:28304353

  8. Phase Error Correction for Approximated Observation-Based Compressed Sensing Radar Imaging.

    PubMed

    Li, Bo; Liu, Falin; Zhou, Chongbin; Lv, Yuanhao; Hu, Jingqiu

    2017-03-17

    Defocus of the reconstructed image of synthetic aperture radar (SAR) occurs in the presence of the phase error. In this work, a phase error correction method is proposed for compressed sensing (CS) radar imaging based on approximated observation. The proposed method has better image focusing ability with much less memory cost, compared to the conventional approaches, due to the inherent low memory requirement of the approximated observation operator. The one-dimensional (1D) phase error correction for approximated observation-based CS-SAR imaging is first carried out and it can be conveniently applied to the cases of random-frequency waveform and linear frequency modulated (LFM) waveform without any a priori knowledge. The approximated observation operators are obtained by calculating the inverse of Omega-K and chirp scaling algorithms for random-frequency and LFM waveforms, respectively. Furthermore, the 1D phase error model is modified by incorporating a priori knowledge and then a weighted 1D phase error model is proposed, which is capable of correcting two-dimensional (2D) phase error in some cases, where the estimation can be simplified to a 1D problem. Simulation and experimental results validate the effectiveness of the proposed method in the presence of 1D phase error or weighted 1D phase error.

  9. Medical errors: overcoming the challenges.

    PubMed

    Kalra, Jawahar

    2004-12-01

    The issue of medical errors has received substantial attention in recent years. The Institute of Medicine (IOM) report released in 1999 has several implications for health care systems in all disciplines of medicine. Notwithstanding the plethora of available information on the subject, little, by way of substantive action, is done toward medical error reduction. A principal reason for this may be the stigma associated with medical errors. An educational program with a practical, informed, and longitudinal approach offers realistic solutions toward this end. Effective reporting systems need to be developed as a medium of learning from the errors and modifying behaviors appropriately. The presence of a strong leadership supported by organizational commitment is essential in driving these changes. A national, provincial or territorial quality care council dedicated solely for the purpose of enhancing patient safety and medical error reduction may be formed to oversee these efforts. The bioethical and emotional components associated with medical errors also deserve attention and focus.

  10. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. 
The frequency of errors was adjusted for the estimated proportion of workers present at work each hour of the day.

  11. Combination of modified mixing technique and low frequency ultrasound to control the elution profile of vancomycin-loaded acrylic bone cement

    PubMed Central

    Wendling, A.; Mar, D.; Wischmeier, N.; Anderson, D.

    2016-01-01

    provides a reasonable means for increasing both short- and long-term antibiotic elution without affecting mechanical strength. Cite this article: Dr. T. McIff. Combination of modified mixing technique and low frequency ultrasound to control the elution profile of vancomycin-loaded acrylic bone cement. Bone Joint Res 2016;5:26–32. DOI: 10.1302/2046-3758.52.2000412 PMID:26843512

  12. Using a modified time-reverse imaging technique to locate low-frequency earthquakes on the San Andreas Fault near Cholame, California

    USGS Publications Warehouse

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2015-01-01

We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes within 5 km uncertainty without relying on high-amplitude P-wave arrivals and that produces similar hypocentral locations to methods that locate events by stacking hundreds of LFEs without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families. We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source–station distance and signal-to-noise ratio.

  13. Multi-Frequency Synthesis

    NASA Astrophysics Data System (ADS)

    Conway, J. E.; Sault, R. J.

    Introduction; Image Fidelity; Multi-Frequency Synthesis; Spectral Effects; The Spectral Expansion; Spectral Dirty Beams; First Order Spectral Errors; Second Order Spectral Errors; The MFS Deconvolution Problem; Nature of The Problem; Map and Stack; Direct Assault; Data Weighting Methods; Double Deconvolution; The Sault Algorithm; Multi-Frequency Self-Calibration; Practical MFS; Conclusions

  14. Phase Errors and the Capture Effect

    SciTech Connect

    Blair, J., and Machorro, E.

    2011-11-01

This slide-show presents analysis of spectrograms and the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero: this is called the capture effect because the largest signal captures the phase and frequency determination.
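
The bound and the zero average can be checked numerically: adding a noise phasor of amplitude B < A to a signal phasor of amplitude A perturbs the phase by at most arcsin(B/A) < 90°, and the perturbation averages to zero over random noise phases. A short simulation (amplitudes invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

A = 1.0   # signal amplitude
B = 0.6   # interfering noise amplitude, smaller than the signal

# Random noise phases over many cycles.
phi = rng.uniform(0, 2 * np.pi, 100_000)

# Resultant phasor: signal (at phase 0) plus noise.
resultant = A + B * np.exp(1j * phi)
phase_error = np.angle(resultant)   # radians, in (-pi, pi]

max_err = np.degrees(np.abs(phase_error).max())
mean_err = np.degrees(phase_error.mean())

# With |noise| < |signal| the phase error is bounded by
# arcsin(B/A) ~ 36.9 deg here, and its average is ~0: the capture effect.
print(max_err)    # < 90 (in fact < 36.9)
print(mean_err)   # ~ 0
```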

  15. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Radar error statistics of C-band and S-band that are recommended for use with the groundtracking programs to process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and due to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.
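
The two-part error model the abstract describes — a slowly-varying-to-constant bias plus rapidly varying noise that may or may not be correlated from sample to sample — can be sketched with an AR(1) noise process. All numerical values here are invented for illustration; they are not the report's C-band or S-band statistics.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 10_000
bias = 0.5        # bias term (subscript B): slowly varying to constant
rho = 0.8         # sample-to-sample correlation of the high-frequency noise
sigma_q = 1.0     # high-frequency noise standard deviation (subscript q)

# AR(1) noise: correlated from sample to sample when rho != 0,
# scaled so its stationary standard deviation equals sigma_q.
noise = np.zeros(n)
for k in range(1, n):
    noise[k] = rho * noise[k - 1] + np.sqrt(1 - rho**2) * sigma_q * rng.normal()

error = bias + noise

# Averaging recovers the bias; the spread recovers sigma_q.
print(error.mean())   # ~0.5
print(error.std())    # ~1.0
```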

  16. Error studies for SNS Linac. Part 1: Transverse errors

    SciTech Connect

    Crandall, K.R.

    1998-12-31

The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  17. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
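
The kind of discrepancy the report warns about is easy to reproduce: interpolating a transducer factor linearly in frequency versus linearly in log-frequency gives noticeably different corrections between calibration points. The calibration values below are hypothetical, chosen only to show the size of the effect.

```python
import numpy as np

# Transducer factor calibrated at sparse frequencies, in dB (hypothetical).
cal_freq_hz = np.array([1e6, 1e7, 1e8])
cal_factor_db = np.array([20.0, 35.0, 50.0])

target = 3e6   # measurement frequency between calibration points

# Linear interpolation over frequency:
lin = np.interp(target, cal_freq_hz, cal_factor_db)

# Linear interpolation over log10(frequency), as some analyzers use:
log = np.interp(np.log10(target), np.log10(cal_freq_hz), cal_factor_db)

# The two methods disagree by several dB, which propagates directly
# into the reported field amplitude.
print(lin)   # ~23.3 dB
print(log)   # ~27.2 dB
```

Misunderstanding which scheme the analyzer applies when entering the factors is exactly the error source the report examines.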

  18. Prescription errors in cancer chemotherapy: Omissions supersede potentially harmful errors

    PubMed Central

    Mathaiyan, Jayanthi; Jain, Tanvi; Dubashi, Biswajit; Reddy, K Satyanarayana; Batmanabane, Gitanjali

    2015-01-01

    Objective: To estimate the frequency and type of prescription errors in patients receiving cancer chemotherapy. Settings and Design: We conducted a cross-sectional study at the day care unit of the Regional Cancer Centre (RCC) of a tertiary care hospital in South India. Materials and Methods: All prescriptions written during July to September 2013 for patients attending the out-patient department of the RCC to be treated at the day care center were included in this study. The prescriptions were analyzed for omission of standard information, usage of brand names, abbreviations and legibility. The errors were further classified into potentially harmful ones and not harmful based on the likelihood of resulting in harm to the patient. Descriptive analysis was performed to estimate the frequency of prescription errors and expressed as total number of errors and percentage. Results: A total of 4253 prescribing errors were found in 1500 prescriptions (283.5%), of which 47.1% were due to omissions like name, age and diagnosis and 22.5% were due to usage of brand names. Abbreviations of pre-medications and anticancer drugs accounted for 29.2% of the errors. Potentially harmful errors that were likely to result in serious consequences to the patient were estimated to be 11.7%. Conclusions: Most of the errors intercepted in our study are due to a high patient load and inattention of the prescribers to omissions in prescription. Redesigning prescription forms and sensitizing prescribers to the importance of writing prescriptions without errors may help in reducing errors to a large extent. PMID:25969654

  19. Biomechanical evaluation of laser-etched Ti implant surfaces vs. chemically modified SLA Ti implant surfaces: Removal torque and resonance frequency analysis in rabbit tibias.

    PubMed

    Lee, Jung-Tae; Cho, Sung-Am

    2016-08-01

To compare osseointegration and implant stability of two types of laser-etched (LE) Ti implants with a chemically-modified, sandblasted, large-grit and acid-etched (SLA) Ti implant (SLActive(®), Straumann, Basel, Switzerland), by evaluating removal torque and resonance frequency between the implant surface and rabbit tibia bones. We used conventional LE Ti implants (conventional LE implant, CSM implant, Daegu, Korea) and LE Ti implants that had been chemically activated with 0.9% NaCl solution (LE active implant) for comparison with SLActive(®) implants. Two types of 3.3×8 mm laser-etched Ti implants, conventional LE implants and LE active implants, were prepared. LE implants and SLActive(®) implants were installed on the left and right tibias of 10 adult rabbits weighing approximately 3.0 kg. LE active implants and SLActive(®) implants were installed on the left and right tibias of 11 adult rabbits. After installation, we measured insertion torque (ITQ) and resonance frequency (ISQ). Three weeks (LE active) or 4 weeks (conventional LE) after installation, we measured removal torque (RTQ) and ISQ. In the conventional LE experiment, the mean ITQ was 16.99±6.35 Ncm for conventional LE implants and 16.11±7.36 Ncm for SLActive(®) implants (p=0.778>0.05). After 4 weeks, the mean RTQ was 39.49±17.3 Ncm for LE and 42.27±20.5 Ncm for SLActive(®) (p=0.747>0.05). Right after insertion of the implants, the mean ISQ was 74.8±4.98 for conventional LE and 70.1±9.15 for SLActive(®) implants (p=0.169>0.05). After 4 weeks, the mean ISQ was 64.40±6.95 for LE and 67.70±9.83 for SLActive(®) (p=0.397>0.05). In the LE active experiment, the mean ITQ was 16.24±7.49 Ncm for LE active implants and 14.33±5.06 Ncm for SLActive(®) implants (p=0.491>0.05). After 3 weeks, the mean RTQ was 39.25±16.41 Ncm for LE active and 41.56±10.41 Ncm for SLActive(®) implants (p=0.698>0.05). Right after insertion of the implants, the mean ISQ was 58.64±10.51 for LE active implants and 53.82

  20. Error analysis of tissue resistivity measurement.

    PubMed

    Tsai, Jang-Zern; Will, James A; Hubbard-Van Stelle, Scott; Cao, Hong; Tungjitkusolmun, Supan; Choy, Young Bin; Haemmerich, Dieter; Vorperian, Vicken R; Webster, John G

    2002-05-01

We identified the error sources in a system for measuring tissue resistivity at eight frequencies from 1 Hz to 1 MHz using the four-terminal method. We expressed the measured resistivity with an analytical formula containing all error terms. We conducted practical error measurements with in-vivo and bench-top experiments. We averaged errors at all frequencies for all measurements. The quantization error of the 8-bit digital oscilloscope with voltage averaging, the nonideality of the circuit, the in-vivo motion artifact, and electrical interference combined to yield a standard deviation of error of +/- 1.19%. The dimension error in measuring the syringe tube for measuring the reference saline resistivity added +/- 1.32% error. The estimation of the working probe constant by interpolating a set of probe constants measured in reference saline solutions added +/- 0.48% error. The difference in the current magnitudes used during the probe calibration and that during the tissue resistivity measurement caused +/- 0.14% error. Variation of the electrode spacing, alignment, and electrode surface property due to the insertion of electrodes into the tissue caused +/- 0.61% error. We combined the above errors to yield an overall standard deviation error of the measured tissue resistivity of +/- 1.96%.
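
Assuming the listed contributions are independent, the overall figure is consistent with a root-sum-square combination of the individual terms (the combination rule is an assumption; the abstract only says the errors were "combined"). A quick check:

```python
import math

# Individual standard-deviation error contributions, in percent,
# as listed in the abstract.
errors_pct = [
    1.19,  # oscilloscope quantization, circuit nonideality, motion, interference
    1.32,  # syringe-tube dimension (reference saline resistivity)
    0.48,  # probe-constant interpolation
    0.14,  # calibration vs. measurement current magnitude
    0.61,  # electrode spacing, alignment, surface property
]

# Independent error sources combine in root-sum-square fashion.
overall = math.sqrt(sum(e**2 for e in errors_pct))
print(round(overall, 2))   # 1.94 — matches the reported +/- 1.96% to rounding
```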

  1. Concurrent Acoustic Activation of the Medial Olivocochlear System Modifies the After-Effects of Intense Low-Frequency Sound on the Human Inner Ear.

    PubMed

    Kugler, Kathrin; Wiegrebe, Lutz; Gürkov, Robert; Krause, Eike; Drexl, Markus

    2015-12-01

Human hearing is rather insensitive for very low frequencies (i.e. below 100 Hz). Despite this insensitivity, low-frequency sound can cause oscillating changes of cochlear gain in inner ear regions processing even much higher frequencies. These alterations outlast the duration of the low-frequency stimulation by several minutes, for which the term 'bounce phenomenon' has been coined. Previously, we have shown that the bounce can be traced by monitoring frequency and level changes of spontaneous otoacoustic emissions (SOAEs) over time. It has been suggested elsewhere that large receptor potentials elicited by low-frequency stimulation produce a net Ca(2+) influx and associated gain decrease in outer hair cells. The bounce presumably reflects an underdamped, homeostatic readjustment of increased Ca(2+) concentrations and related gain changes after low-frequency sound offset. Here, we test this hypothesis by activating the medial olivocochlear efferent system during presentation of the bounce-evoking low-frequency (LF) sound. The efferent system is known to modulate outer hair cell Ca(2+) concentrations and receptor potentials, and therefore, it should modulate the characteristics of the bounce phenomenon. We show that simultaneous presentation of contralateral broadband noise (100 Hz-8 kHz, 65 and 70 dB SPL, 90 s, activating the efferent system) and ipsilateral low-frequency sound (30 Hz, 120 dB SPL, 90 s, inducing the bounce) affects the characteristics of bouncing SOAEs recorded after low-frequency sound offset. Specifically, the decay time constant of the SOAE level changes is shorter, and the transient SOAE suppression is less pronounced. Moreover, the number of new, transient SOAEs as they are seen during the bounce, are reduced. Taken together, activation of the medial olivocochlear system during induction of the bounce phenomenon with low-frequency sound results in changed characteristics of the bounce phenomenon. Thus, our data provide experimental support

  2. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

Windowing of design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) uses a region of interest by setting a requirement on response level and checks it against global RS predictions over the design space. This approach, however, is vulnerable, since RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  3. Reducing errors in emergency surgery.

    PubMed

    Watters, David A K; Truskett, Philip G

    2013-06-01

    Errors are to be expected in health care. Adverse events occur in around 10% of surgical patients and may be even more common in emergency surgery. There is little formal teaching on surgical error in surgical education and training programmes despite their frequency. This paper reviews surgical error and provides a classification system, to facilitate learning. The approach and language used to enable teaching about surgical error was developed through a review of key literature and consensus by the founding faculty of the Management of Surgical Emergencies course, currently delivered by General Surgeons Australia. Errors may be classified as being the result of commission, omission or inition. An error of inition is a failure of effort or will and is a failure of professionalism. The risk of error can be minimized by good situational awareness, matching perception to reality, and, during treatment, reassessing the patient, team and plan. It is important to recognize and acknowledge an error when it occurs and then to respond appropriately. The response will involve rectifying the error where possible but also disclosing, reporting and reviewing at a system level all the root causes. This should be done without shaming or blaming. However, the individual surgeon still needs to reflect on their own contribution and performance. A classification of surgical error has been developed that promotes understanding of how the error was generated, and utilizes a language that encourages reflection, reporting and response by surgeons and their teams. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.

  4. Error analysis of corner cutting algorithms

    NASA Astrophysics Data System (ADS)

    Mainar, E.; Peña, J. M.

    1999-10-01

    Corner cutting algorithms are used in different fields and, in particular, play a relevant role in Computer Aided Geometric Design. Evaluation algorithms such as the de Casteljau algorithm for polynomials and the de Boor-Cox algorithm for B-splines are examples of corner cutting algorithms. Here backward and forward error analysis of corner cutting algorithms are performed. The running error is also analyzed and as a consequence the general algorithm is modified to include the computation of an error bound.

  5. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  6. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  7. Unforced errors and error reduction in tennis.

    PubMed

    Brody, H

    2006-05-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors.

  8. Errors in general practice: development of an error classification and pilot study of a method for detecting errors

    PubMed Central

    Rubin, G; George, A; Chinn, D; Richardson, C

    2003-01-01

    Objective: To describe a classification of errors and to assess the feasibility and acceptability of a method for recording staff reported errors in general practice. Design: An iterative process in a pilot practice was used to develop a classification of errors. This was incorporated in an anonymous self-report form which was then used to collect information on errors during June 2002. The acceptability of the reporting process was assessed using a self-completion questionnaire. Setting: UK general practice. Participants: Ten general practices in the North East of England. Main outcome measures: Classification of errors, frequency of errors, error rates per 1000 appointments, acceptability of the process to participants. Results: 101 events were used to create an initial error classification. This contained six categories: prescriptions, communication, appointments, equipment, clinical care, and "other" errors. Subsequently, 940 errors were recorded in a single 2 week period from 10 practices, providing additional information. 42% (397/940) were related to prescriptions, although only 6% (22/397) of these were medication errors. Communication errors accounted for 30% (282/940) of errors and clinical errors 3% (24/940). The overall error rate was 75.6/1000 appointments (95% CI 71 to 80). The method of error reporting was found to be acceptable by 68% (36/53) of respondents with only 8% (4/53) finding the process threatening. Conclusion: We have developed a classification of errors and described a practical and acceptable method for reporting them that can be used as part of the process of risk management. Errors are common and, although all have the potential to lead to an adverse event, most are administrative. PMID:14645760

  9. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.

  10. Identification errors in pathology and laboratory medicine.

    PubMed

    Valenstein, Paul N; Sirota, Ronald L

    2004-12-01

    Identification errors involve misidentification of a patient or a specimen. Either has the potential to cause patients harm. Identification errors can occur during any part of the test cycle; however, most occur in the preanalytic phase. Patient identification errors in transfusion medicine occur in 0.05% of specimens; for general laboratory specimens the rate is much higher, around 1%. Anatomic pathology, which involves multiple specimen transfers and hand-offs, may have the highest identification error rate. Certain unavoidable cognitive failures lead to identification errors. Technology, ranging from bar-coded specimen labels to radio frequency identification tags, can be incorporated into protective systems that have the potential to detect and correct human error and reduce the frequency with which patients and specimens are misidentified.

  11. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  12. Feedback Error Learning with Insufficient Excitation

    NASA Astrophysics Data System (ADS)

    Alali, Basel; Hirata, Kentaro; Sugimoto, Kenji

This letter studies the tracking error in Multi-input Multi-output Feedback Error Learning (MIMO-FEL) systems with insufficient excitation. It is shown that the error converges to zero exponentially even if the reference signal does not satisfy the persistent excitation (PE) condition. Furthermore, by making full use of this fast convergence, we estimate the plant parameters during operation based on the frequency response. Simulation results show the effectiveness of the proposed method compared to a conventional approach.

  13. A Review of Errors in the Journal Abstract

    ERIC Educational Resources Information Center

    Lee, Eunpyo; Kim, Eun-Kyung

    2013-01-01

(percentage) of abstracts that involved errors, the most erroneous part of the abstract, and the types and frequency of errors. The purpose was also expanded to compare the results with those of the previous…

  14. Error analysis of quartz crystal resonator applications

    SciTech Connect

    Lucklum, R.; Behling, C.; Hauptmann, P.; Cernosek, R.W.; Martin, S.J.

    1996-12-31

    Quartz crystal resonators in chemical sensing applications are usually configured as the frequency determining element of an electrical oscillator. By contrast, the shear modulus determination of a polymer coating needs a complete impedance analysis. The first part of this contribution reports the error made if common approximations are used to relate the frequency shift to the sorbed mass. In the second part the authors discuss different error sources in the procedure to determine shear parameters.

  15. Computation of the modified magnetostriction coefficient b' corresponding to different depth ranges in ferromagnetic specimens by using a frequency dependent model for magnetic Barkhausen emissions

    NASA Astrophysics Data System (ADS)

    Kypris, Orfeas; Nlebedim, Ikenna; Jiles, David

    2013-03-01

We have recently shown that a linear relationship exists between the reciprocal peak voltage envelope amplitude 1/Vpeak of the magnetic Barkhausen signal and elastic stress σ. By applying a frequency-dependent model to determine the depth of origin of the Barkhausen emissions in a uniformly stressed steel specimen, this relationship was found to be valid for different depth ranges. The linear relationship depends on a coefficient of proportionality b'. This was found to decrease with depth, indicating that the higher part of the frequency spectrum is less sensitive to changes in stress. In this study, the model equations have been applied at various depth ranges. It was found that the variation of b' with depth can be utilized in an inversion procedure to assess the stress state in ferromagnetic specimens to give stress-depth profiles. This study is useful for non-destructive characterization of stress with depth.

  16. Errors associated with outpatient computerized prescribing systems

    PubMed Central

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428

  17. An observational study of drug administration errors in a Malaysian hospital (study of drug administration errors).

    PubMed

    Chua, S S; Tea, M H; Rahman, M H A

    2009-04-01

    Drug administration errors are the second most frequent type of medication error, after prescribing errors; however, the latter are often intercepted, so administration errors are more likely to reach the patient. This study was therefore conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observation of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for error were observed and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate was reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that risk management protocols can be developed and implemented.
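
    The error rate and confidence interval reported above follow the standard normal-approximation (Wald) formula for a binomial proportion; a minimal sketch reproducing the figures (the slight difference at the upper bound from the reported 13.3 likely reflects rounding or a different interval method):

```python
import math

def proportion_ci(errors, opportunities, z=1.96):
    """Observed error rate with a normal-approximation (Wald) 95% CI."""
    p = errors / opportunities
    se = math.sqrt(p * (1 - p) / opportunities)
    return p, p - z * se, p + z * se

# 127 administrations with errors out of 1118 observed opportunities
rate, lo, hi = proportion_ci(127, 1118)
print(f"{rate:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # 11.4% (95% CI 9.5%-13.2%)
```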

  18. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  19. Characteristics and costs of surgical scheduling errors.

    PubMed

    Wu, Rebecca L; Aufses, Arthur H

    2012-10-01

    Errors that increase the risk of wrong-side/-site procedures not only occur on the day of surgery but are often introduced much earlier, during the scheduling process. The frequency of these booking errors and their effects are unclear. All surgical scheduling errors reported in the institution's medical event reporting system from January 1, 2011, to July 31, 2011, were analyzed. Focus groups with operating room nurses were held to discuss delays caused by scheduling errors. Of 17,606 surgeries, there were 151 (0.86%) booking errors. The most common errors were wrong side (55, 36%), incomplete (38, 25%), and wrong approach (25, 17%). Focus group participants said incomplete and wrong-approach bookings resulted in the longest delays, averaging 20 minutes and costing at least $320. Although infrequent, scheduling errors disrupt operating room team dynamics, causing delays and bearing substantial costs. Further research is necessary to develop tools for more accurate scheduling. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Topology of modified helical gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Zhang, J.; Handschuh, R. F.; Coy, J. J.

    1989-01-01

    The topology of several types of modified surfaces of helical gears is proposed. The modified surfaces allow absorption of a linear or almost linear function of transmission errors, which are caused by gear misalignment, and provide an improved contact of gear tooth surfaces. Principles and corresponding programs for computer-aided simulation of meshing and contact of gears have been developed. The results of this investigation are illustrated with numerical examples.

  2. Medical errors recovered by critical care nurses.

    PubMed

    Dykes, Patricia C; Rothschild, Jeffrey M; Hurley, Ann C

    2010-05-01

    The frequency and types of medical errors are well documented, but less is known about potential errors that are intercepted by nurses. We studied the type, frequency, and potential harm of recovered medical errors reported by critical care registered nurses (CCRNs) during the previous year. Nurses are known to protect patients from harm, and several studies on medical errors found that more medical errors would have reached the patient had nurses not caught the potential errors earlier. The Recovered Medical Error Inventory, a 25-item empirically derived and internally consistent (alpha = .90) list of medical errors, was posted on the Internet. Participants were recruited via e-mail and healthcare-related listservs using a nonprobability snowball sampling technique. Investigators e-mailed contacts working in hospitals or who managed healthcare-related listservs and asked the contacts to pass the link on to others with contacts in acute care settings. During 1 year, 345 CCRNs reported that they recovered 18,578 medical errors, of which they rated 4,183 as potentially lethal. Surveillance, clinical judgment, and interventions by CCRNs to identify, interrupt, and correct medical errors protected seriously ill patients from harm.

  3. Long-term spinal cord stimulation modifies canine intrinsic cardiac neuronal properties and ganglionic transmission during high-frequency repetitive activation.

    PubMed

    Smith, Frank M; Vermeulen, Michel; Cardinal, René

    2016-07-01

    Long-term spinal cord stimulation (SCS) applied to cranial thoracic spinal cord segments exerts antiarrhythmic and cardioprotective actions in the canine heart in situ. We hypothesized that remodeling of intrinsic cardiac neuronal and synaptic properties occurs in canines subjected to long-term SCS, specifically that synaptic efficacy may be preferentially facilitated at high presynaptic nerve stimulation frequencies. Animals subjected to continuous SCS for 5-8 weeks (long-term SCS: n = 17) or for 1 h (acute SCS: n = 4) were compared with corresponding control animals (long-term: n = 15, acute: n = 4). At termination, animals were anesthetized, the heart was excised, and neurones from the right atrial ganglionated plexus were identified and studied in vitro using standard intracellular microelectrode techniques. The main findings were as follows: (1) a significant reduction in whole cell membrane input resistance and an acceleration of AHP decay were identified among phasic neurones from long-term SCS animals compared with controls; (2) synaptic transmission was significantly more resistant to rundown in long-term SCS during high-frequency (10-40 Hz) presynaptic nerve stimulation while recording from either phasic or accommodating postsynaptic neurones, and this was associated with significantly greater posttrain excitatory postsynaptic potential (EPSP) numbers in long-term SCS than in controls; and (3) synaptic efficacy was significantly decreased by atropine in both groups. Such changes did not occur in acute SCS. In conclusion, modification of intrinsic cardiac neuronal properties and facilitation of synaptic transmission at high stimulation frequency in long-term SCS could improve physiologically modulated vagal inputs to the heart. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.

  4. Diagnostic errors in emergency departments.

    PubMed

    Tudela, Pere; Carreres, Anna; Ballester, Mònica

    2017-08-22

    Diagnostic errors have to be recognised as a possible adverse event inherent to clinical activity and incorporated as another quality indicator. Different sources of information report their frequency, although it may still be underestimated. Contrary to what one might expect, in most cases they do not occur in infrequent diseases. The causes can be complex and multifactorial, involving individual cognitive aspects as well as the health system. These errors can have an important clinical and socioeconomic impact. It is necessary to learn from diagnostic errors in order to develop an accurate and reliable system with a high standard of quality. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.

  5. [Assessment of the usefulness of software supervising continuous infusion rates of drugs administered with pumps in the ICU, and estimation of the frequency of administration-rate errors].

    PubMed

    Cayot-Constantin, S; Constantin, J-M; Perez, J-P; Chevallier, P; Clapson, P; Bazin, J-E

    2010-03-01

    To assess the usefulness and feasibility of software supervising the continuous infusion rates of drugs administered with pumps in the ICU. Follow-up of practices and survey in three intensive care units, using the Guardrails™ software (AsenaGH, Alaris), which checks programmed pump rates against preset limits. First, the number of infusion-rate adjustments reaching the maximal superior limit (considered infusion-rate errors stopped by the software) was evaluated and quantified. Second, staff acceptance of such a system was assessed through a blinded questionnaire and by quantifying the number of pump programs set up with the software. The number of administrations started with the study pumps in the three units (11 beds) during the study period was 63,069, and 42,694 of them (67.7%) used the software. The number of potential errors of continuous infusion rates was 11, corresponding to an infusion-rate error rate of 26/100,000. KCl and insulin were involved in two and five cases, respectively. Eighty percent of the nurses estimated that infusion-rate errors were rare or exceptional but potentially harmful. Indeed, they considered that software supervising the continuous infusion rates of pumps could improve safety. The risk of infusion-rate errors of drugs administered continuously with a pump in the ICU is rare but potentially harmful. Software controlling the continuous infusion rates could be useful. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.

  6. Medication errors: prescribing faults and prescription errors

    PubMed Central

    Velo, Giampaolo P; Minuz, Pietro

    2009-01-01

    Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically. PMID:19594530

  7. Meal Frequencies Modify the Effect of Common Genetic Variants on Body Mass Index in Adolescents of the Northern Finland Birth Cohort 1986

    PubMed Central

    Jääskeläinen, Anne; Schwab, Ursula; Kolehmainen, Marjukka; Kaakinen, Marika; Savolainen, Markku J.; Froguel, Philippe; Cauchi, Stéphane; Järvelin, Marjo-Riitta; Laitinen, Jaana

    2013-01-01

    Recent studies suggest that meal frequencies influence the risk of obesity in children and adolescents. It has also been shown that multiple genetic loci predispose to obesity already in youth. However, it is unknown whether meal frequencies could modulate the association between single nucleotide polymorphisms (SNPs) and the risk of obesity. We examined the effect of two weekday meal patterns, 5 meals including breakfast (regular) versus ≤4 meals with or without breakfast (meal skipping), on the genetic susceptibility to increased body mass index (BMI) in Finnish adolescents. Eight variants representing 8 early-life obesity-susceptibility loci, including FTO and MC4R, were genotyped in 2215 boys and 2449 girls aged 16 years from the population-based Northern Finland Birth Cohort 1986. A genetic risk score (GRS) was calculated for each individual by summing the number of BMI-increasing alleles across the 8 loci. Weight and height were measured and dietary data were collected using self-administered questionnaires. Among meal skippers, the difference in BMI between high-GRS and low-GRS (≥8 vs. <8 BMI-increasing alleles) groups was 0.90 (95% CI 0.63, 1.17) kg/m2, whereas in regular eaters, this difference was 0.32 (95% CI 0.06, 0.57) kg/m2 (P for interaction = 0.003). The effect of each MC4R rs17782313 risk allele on BMI in meal skippers (0.47 [95% CI 0.22, 0.73] kg/m2) was nearly three-fold that in regular eaters (0.18 [95% CI -0.06, 0.41] kg/m2) (P for interaction = 0.016). Further, the per-allele effect of FTO rs1421085 was 0.24 (95% CI 0.05, 0.42) kg/m2 in regular eaters and 0.46 (95% CI 0.27, 0.66) kg/m2 in meal skippers, but the interaction between FTO genotype and meal frequencies on BMI was significant only in boys (P for interaction = 0.015). In summary, the regular five-meal pattern attenuated the increasing effect of common SNPs on BMI in adolescents. Considering the epidemic of obesity in youth, the promotion of regular eating may have
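
    The unweighted genetic risk score used above (summing BMI-increasing alleles across the 8 loci, then dichotomizing at 8 alleles) can be sketched as follows; the locus names and genotype counts are hypothetical illustrations, not study data:

```python
def genetic_risk_score(allele_counts):
    """Unweighted GRS: sum of BMI-increasing alleles (0, 1, or 2 per locus)."""
    return sum(allele_counts)

def risk_group(grs, cutoff=8):
    """Dichotomize as in the study: <8 alleles is low-GRS, >=8 is high-GRS."""
    return "high" if grs >= cutoff else "low"

# Hypothetical subject: allele counts at 8 obesity-susceptibility loci
subject = {"FTO": 2, "MC4R": 1, "locus3": 0, "locus4": 1,
           "locus5": 2, "locus6": 1, "locus7": 0, "locus8": 1}
grs = genetic_risk_score(subject.values())
print(grs, risk_group(grs))  # prints: 8 high
```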

  8. Prevalence of teen driver errors leading to serious motor vehicle crashes.

    PubMed

    Curry, Allison E; Hafetz, Jessica; Kallan, Michael J; Winston, Flaura K; Durbin, Dennis R

    2011-07-01

    Motor vehicle crashes are the leading cause of adolescent deaths. Programs and policies should target the most common and modifiable reasons for crashes. We estimated the frequency of critical reasons for crashes involving teen drivers, and examined in more depth specific teen driver errors. The National Highway Traffic Safety Administration's (NHTSA) National Motor Vehicle Crash Causation Survey collected data at the scene of a nationally representative sample of 5470 serious crashes between 7/05 and 12/07. NHTSA researchers assigned a single driver, vehicle, or environmental factor as the critical reason for the event immediately leading to each crash. We analyzed crashes involving 15-18 year old drivers. 822 teen drivers were involved in 795 serious crashes, representing 335,667 teens in 325,291 crashes. Driver error was by far the most common reason for crashes (95.6%), as opposed to vehicle or environmental factors. Among crashes with a driver error, a teen made the error 79.3% of the time (75.8% of all teen-involved crashes). Recognition errors (e.g., inadequate surveillance, distraction) accounted for 46.3% of all teen errors, followed by decision errors (e.g., following too closely, too fast for conditions) (40.1%) and performance errors (e.g., loss of control) (8.0%). Inadequate surveillance, driving too fast for conditions, and distracted driving together accounted for almost half of all crashes. Aggressive driving behavior, drowsy driving, and physical impairments were less commonly cited as critical reasons. Males and females had similar proportions of broadly classified errors, although females were specifically more likely to make inadequate surveillance errors. Our findings support prioritization of interventions targeting driver distraction and surveillance and hazard awareness training. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Quantum error correction for continuously detected errors

    NASA Astrophysics Data System (ADS)

    Ahn, Charlene; Wiseman, H. M.; Milburn, G. J.

    2003-05-01

    We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].

  10. [The influence of low-frequency pulsed electric and magnetic signals or their combination on the normal and modified fibroblasts (an experimental study)].

    PubMed

    Ulitko, M V; Medvedeva, S Yu; Malakhov, V V

    2016-01-01

    The results of clinical studies give evidence of the beneficial preventive and therapeutic effects of the «Tiline-EM» physiotherapeutic device designed for the combined specific treatment of the skin regions onto which both discomfort and pain sensations are directly projected, reflectively active sites and zones, as well as trigger zones with the use of low-frequency pulsed electric current and magnetic field. The efficient application of the device requires the understanding of the general mechanisms underlying such action on the living systems including those operating at the cellular and subcellular levels. The objective of the present study was the investigation of the specific and complex effects produced by the low-frequency pulses of electric current and magnetic field generated in the physiotherapeutic device «Tiline-EM» on the viability, proliferative activity, and morphofunctional characteristics of normal skin fibroblasts and the transformed fibroblast line K-22. It has been demonstrated that the biological effects of the electric and magnetic signals vary depending on the type of the cell culture and the mode of impact. The transformed fibroblasts proved to be more sensitive to the specific and complex effects of electric and magnetic pulses than the normal skin fibroblasts. The combined action of the electric and magnetic signals was shown to have the greatest influence on both varieties of fibroblasts. It manifests itself in the form of enhanced viability, elevated proliferative and synthetic activity in the cultures of transformed fibroblasts and as the acceleration of cell differentiation in the cultures of normal fibroblasts. The effect of stimulation of dermal fibroblast differentiation in response to the combined treatment by the electric and magnetic signals is of interest from the standpoint of the physiotherapeutic use of the «Tiline-EM» device for the purpose of obtaining fibroblasts cultures to be employed in regenerative therapy and

  11. A Frequency and Error Analysis of the Use of Determiners, the Relationships between Noun Phrases, and the Structure of Discourse in English Essays by Native English Writers and Native Chinese, Taiwanese, and Korean Learners of English as a Second Language

    ERIC Educational Resources Information Center

    Gressang, Jane E.

    2010-01-01

    Second language (L2) learners notoriously have trouble using articles in their target languages (e.g., "a", "an", "the" in English). However, researchers disagree about the patterns and causes of these errors. Past studies have found that L2 English learners: (1) Predominantly omit articles (White 2003, Robertson 2000), (2) Overuse "the" (Huebner…

  12. Effect of Media Modified To Mimic Cystic Fibrosis Sputum on the Susceptibility of Aspergillus fumigatus, and the Frequency of Resistance at One Center

    PubMed Central

    Moss, Richard B.; Hernandez, Cathy; Clemons, Karl V.; Martinez, Marife

    2016-01-01

    Studies of cystic fibrosis (CF) patient exacerbations attributed to Pseudomonas aeruginosa infection have indicated a lack of correlation of outcome with in vitro susceptibility results. One explanation is that the media used for testing do not mimic the airway milieu, resulting in incorrect conclusions. Therefore, media have been devised to mimic CF sputum. Aspergillus fumigatus is the leading fungal pathogen in CF, and susceptibility testing is also used to decide therapeutic choices. We assessed whether media designed to mimic CF sputa would give different fungal susceptibility results than those of classical methods, assaying voriconazole, the most utilized anti-Aspergillus drug in this setting, and 30 CF Aspergillus isolates. The frequency of marked resistance (defined as an MIC of >4 μg/ml) in our CF unit by classical methods is 7%. Studies performed with classical methods and with digested sputum medium, synthetic sputum medium, and artificial sputum medium revealed prominent differences in Aspergillus susceptibility results, as well as growth rate, with each medium. Clinical correlative studies are required to determine which results are most useful in predicting outcome. Comparison of MICs with non-CF isolates also indicated the CF isolates were generally more resistant. PMID:26810647

  13. Intermediate frequency magnetic field at 23 kHz does not modify gene expression in human fetus-derived astroglia cells.

    PubMed

    Sakurai, Tomonori; Narita, Eijiro; Shinohara, Naoki; Miyakoshi, Junji

    2012-12-01

    The increased use of induction heating (IH) cooktops in Japan and Europe has raised public concern about potential health effects of the magnetic fields generated by IH cooktops. In this study, we evaluated the effects of intermediate frequency (IF) magnetic fields generated by IH cooktops on gene expression profiles. Human fetus-derived astroglia cells were exposed to magnetic fields at 23 kHz and 100 µT(rms) for 2, 4, and 6 h and gene expression profiles in cells were assessed using cDNA microarray. There were no detectable effects of the IF magnetic fields at 23 kHz on the gene expression profile, whereas the heat treatment at 43 °C for 2 h, as a positive control, affected gene expression, including inducing heat shock proteins. Principal component analysis and hierarchical analysis showed that the gene profiles of the IF-exposed groups were similar to those of the sham-exposed group and different from those of the heat-treated group. These results demonstrate that exposure of human fetus-derived astroglia cells to an IF magnetic field at 23 kHz and 100 µT(rms) for up to 6 h did not induce detectable changes in the gene expression profile.

  14. Field error lottery

    SciTech Connect

    Elliott, C. J.; McVey, B.; Quimby, D. C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
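
    The seed dependence noted above, where random errors vary with the random number seed and multiple-seed runs display the stochastic spread, can be illustrated with a toy Monte Carlo sketch. This is a generic stand-in, not the FELEX physics model; the function name and parameters are illustrative:

```python
import numpy as np

def rms_field_error(n_magnets=100, error_level=0.01, seed=0):
    """Toy model: draw random per-magnet field errors and return their RMS.

    Illustrates only the seed-dependence of a stochastic error metric."""
    rng = np.random.default_rng(seed)
    errors = rng.normal(0.0, error_level, n_magnets)
    return float(np.sqrt(np.mean(errors ** 2)))

# Re-running with several seeds shows the spread attributable to stochasticity.
results = [rms_field_error(seed=s) for s in range(5)]
print(min(results), max(results))
```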

  15. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.
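
    The link between step-size and discretization error described above can be illustrated with a deliberately simple stand-in, forward Euler applied to an attenuation equation dN/dx = -sigma*N. This is not the HZETRN transport formalism; it only shows that the error against the exact solution shrinks as the step-size is reduced:

```python
import math

def euler_attenuation(sigma=0.5, depth=10.0, step=1.0):
    """Forward-Euler march of dN/dx = -sigma*N from N(0) = 1 to x = depth."""
    n, x = 1.0, 0.0
    while x < depth - 1e-12:
        n += step * (-sigma * n)  # one discrete transport step
        x += step
    return n

exact = math.exp(-0.5 * 10.0)
err_coarse = abs(euler_attenuation(step=1.0) - exact)
err_fine = abs(euler_attenuation(step=0.25) - exact)
print(err_coarse, err_fine)  # the finer step gives the smaller error
```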

  16. Reduced discretization error in HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm2 exposed to both solar particle event and galactic cosmic ray environments.

  17. Exposure of tumor-bearing mice to extremely high-frequency electromagnetic radiation modifies the composition of fatty acids in thymocytes and tumor tissue.

    PubMed

    Gapeyev, Andrew B; Kulagina, Tatiana P; Aripovsky, Alexander V

    2013-08-01

    To test the participation of fatty acids (FA) in the antitumor effects of extremely high-frequency electromagnetic radiation (EHF EMR), the changes in FA composition in the thymus, liver, blood plasma, muscle tissue, and tumor tissue of mice with Ehrlich solid carcinoma exposed to EHF EMR were studied. Normal and tumor-bearing mice were exposed to EHF EMR with effective parameters (42.2 GHz, 0.1 mW/cm2, 20 min daily during five consecutive days beginning the first day after the inoculation of tumor cells). The fatty acid composition of various organs and tissues of mice was determined using gas chromatography. It was shown that exposure of normal mice to EHF EMR, or tumor growth itself, significantly increased the content of monounsaturated FA (MUFA) and decreased the content of polyunsaturated FA (PUFA) in all tissues examined. Exposure of tumor-bearing mice to EHF EMR led to the recovery of the FA composition of thymocytes to the state typical of normal animals. In other tissues of tumor-bearing mice, exposure to EHF EMR did not induce considerable changes beyond the disturbances caused by EHF EMR exposure or tumor growth separately. In tumor tissue, which is characterized by an elevated level of MUFA, exposure to EHF EMR significantly decreased the summary content of MUFA and increased the summary content of PUFA. The recovery of the FA composition in thymocytes and the modification of the FA composition in the tumor under the influence of EHF EMR on tumor-bearing animals may have crucial importance for elucidating the mechanisms of the antitumor effects of the electromagnetic radiation.

  18. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  19. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  20. Accounting for Interlanguage Errors

    ERIC Educational Resources Information Center

    Benetti, Jean N.

    1978-01-01

    A study was conducted to test various explanations of the error of unmarked noun plurals made by first generation Italian immigrants. The error appeared to be "fossilized" or not eradicated over a period of time. (SW)

  1. New Modified Band Limited Impedance (BLIMP) Inversion Method Using Envelope Attribute

    NASA Astrophysics Data System (ADS)

    Maulana, Z. L.; Saputro, O. D.; Latief, F. D. E.

    2016-01-01

    The earth attenuates high frequencies from the seismic wavelet, while low-frequency seismic data cannot be acquired with low-quality geophones. The low frequencies (0-10 Hz) that are absent from seismic data are important for obtaining a good result in acoustic impedance (AI) inversion. AI is important for determining reservoir quality, since it can be converted to reservoir properties such as porosity, permeability, and water saturation. The low frequencies can be supplied from impedance logs (AI logs), from velocity analysis, or from a combination of both. In this study, we propose that the low frequencies can be obtained from the envelope seismic attribute. The proposed method is essentially a modified BLIMP (Band Limited Impedance) inversion in which the AI log used by BLIMP is substituted with the envelope attribute. In the low-frequency domain (0-10 Hz), the envelope attribute produces high amplitude, and this low-frequency content is used to replace the low frequencies from the AI logs in BLIMP. The linear trend in this method is still acquired from the AI logs. The method is applied to synthetic seismograms created from the impedance log of well ‘X’. The mean squared error of the modified BLIMP inversion is 2-4% for each trace (the variation in error is caused by different normalization constants), lower than the 8% error of the conventional BLIMP inversion. The new method is also applied to the Marmousi2 dataset and shows promising results: the modified BLIMP inversion of Marmousi2 using one AI log is better than that produced by the conventional method.
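
    The low-frequency substitution described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the band-limited relative impedance comes from running integration of the trace, the 0-10 Hz component comes from a low-passed Hilbert envelope, and a linear trend stands in for the AI-log trend. All function names, the filter order, and the scaling constant are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def modified_blimp(trace, dt, trend, fc=10.0, alpha=1.0):
    """Sketch of envelope-based BLIMP: the low frequencies normally taken
    from the AI log are replaced by the low-passed envelope attribute."""
    # Band-limited relative impedance from running integration of the trace
    rel = np.cumsum(trace) * dt
    # Envelope attribute: carries high amplitude in the 0-10 Hz band
    env = np.abs(hilbert(trace))
    b, a = butter(4, fc * 2.0 * dt)        # normalized cutoff fc / (fs/2)
    low = filtfilt(b, a, env)              # zero-phase low-pass of the envelope
    # Combine: linear trend (from AI log) + low-frequency part + band-limited part
    return trend + alpha * low + rel

# Toy usage: 500-sample synthetic trace at 2 ms sampling
dt = 0.002
t = np.arange(500) * dt
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-t)
ai = modified_blimp(trace, dt, trend=2000.0 + 500.0 * t)
```

    In a real workflow the low-passed envelope would first be scaled (the `alpha` above) so its amplitude matches impedance units before summation.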

  2. Incorporation of a Redfern Integrated Optics ORION Laser Module with an IPG Photonics Erbium Fiber Laser to Create a Frequency Conversion Photon Doppler Velocimeter for US Army Research Laboratory Measurements: Hardware, Data Analysis, and Error Quantification

    DTIC Science & Technology

    2017-04-01

    to create the optical beat frequencies measured by the heterodyne system and the associated drift of their center frequencies. This drift was...nonoscillating speaker cone. To assess the effects associated with the thermal drift of the 2 lasers, and their deviation from an idealized...demonstrates the methodology toward how the US Army Research Laboratory’s scientists could use FCPDV to eliminate the directional ambiguity associated with

  3. Motion error compensation of multi-legged walking robots

    NASA Astrophysics Data System (ADS)

    Wang, Liangwen; Chen, Xuedong; Wang, Xinjie; Tang, Weigang; Sun, Yi; Pan, Chunmei

    2012-07-01

    Because of errors in the structure and kinematic parameters of multi-legged walking robots, the motion trajectory of a robot diverges from the ideal motion requirements during movement. Since existing error compensation is usually applied to control compensation of manipulator arms, error compensation for multi-legged robots has seldom been explored. In order to reduce the kinematic error of such robots, a feedforward-based motion error compensation method for multi-legged mobile robots is proposed to improve the motion precision of a mobile robot. The locus error of the robot body is measured while the robot moves along a given track. The error of the driven joint variables is obtained from an error calculation model in terms of the locus error of the robot body. This error value is used to compensate the driven joint variables and modify the control model of the robot, so that the robot is driven by the modified control model. A model of the relation between the robot's locus errors and its kinematic variable errors is set up to achieve the kinematic error compensation. On the basis of the inverse kinematics of a multi-legged walking robot, the relation between the error of the motion trajectory and the driven joint variables is discussed, and an equation set is obtained that expresses the relation among the error of the driven joint variables, the structure parameters, and the error of the robot's locus. Taking MiniQuad as an example, motion error compensation is studied as the robot moves along a straight-line path. The actual locus errors of the robot body are measured before and after compensation in the test. According to the test, the variations of the actual coordinate value of the robot centroid in the x-direction and z-direction are reduced by more than a factor of two. The kinematic errors of the robot body are reduced effectively by the use of the feedforward-based motion error compensation method.
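
    The feedforward idea can be sketched with a linearized model. This is an illustrative sketch, not the paper's MiniQuad error model: the measured body locus error is mapped to driven-joint corrections through the pseudoinverse of an assumed error Jacobian and subtracted from the commanded joint variables.

```python
import numpy as np

def feedforward_compensation(q_cmd, locus_error, J):
    """Linearized sketch of feedforward motion error compensation: the
    measured body locus error is mapped to driven-joint errors via the
    pseudoinverse of a Jacobian J, then subtracted from the commanded
    joint variables. J and all numbers here are illustrative."""
    dq = np.linalg.pinv(J) @ locus_error   # joint-space error estimate
    return q_cmd - dq                       # compensated joint command

# Toy usage: 3 driven joints, 2-D body locus error
J = np.array([[0.10, 0.05, 0.02],
              [0.00, 0.08, 0.04]])          # assumed locus/joint sensitivity
q_cmd = np.array([0.5, -0.2, 0.1])          # nominal joint command (rad)
err = np.array([0.004, -0.002])             # measured locus error (m)
q_comp = feedforward_compensation(q_cmd, err, J)
```

    To first order, driving the joints with `q_comp` cancels the measured locus error, since `J @ (q_cmd - q_comp)` reproduces it.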

  4. Drug Errors in Anaesthesiology

    PubMed Central

    Jain, Rajnish Kumar; Katiyar, Sarika

    2009-01-01

    Summary Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden on health care systems, apart from the losses to patients. Common causes of these errors and their prevention are discussed. PMID:20640103

  5. Modifying effects of low-intensity extremely high-frequency electromagnetic radiation on content and composition of fatty acids in thymus of mice exposed to X-rays.

    PubMed

    Gapeyev, Andrew B; Aripovsky, Alexander V; Kulagina, Tatiana P

    2015-03-01

    The effects of extremely high-frequency electromagnetic radiation (EHF EMR) on thymus weight and on the fatty acid (FA) content and composition of the thymus in X-irradiated mice were studied to test the involvement of FA in possible protective effects of EHF EMR against ionizing radiation. Mice were exposed to low-intensity pulse-modulated EHF EMR (42.2 GHz, 0.1 mW/cm(2), 20 min exposure, 1 Hz modulation) and/or X-rays at a dose of 4 Gy with different sequences of the treatments. At 4-5 hours and 10, 30, and 40 days after the last exposure, the thymuses were weighed; the total FA content and FA composition of the thymuses were determined on days 1, 10, and 30 using gas chromatography. It was shown that after X-irradiation of mice the total FA content per mg of thymic tissue was significantly increased at 4-5 h and decreased at 10 and 30 days after the treatment. On days 30 and 40 after X-irradiation, the thymus weight remained significantly reduced. The first and tenth days after X-ray injury, independently of the presence and sequence of EHF EMR exposure, were characterized by an increased content of polyunsaturated FA (PUFA) and a decreased content of monounsaturated FA (MUFA) with unchanged content of saturated FA (SFA). Exposure of mice to EHF EMR before or after X-irradiation prevented changes in the total FA content in thymic tissue, returned the total content of PUFA and MUFA to the control level, decreased the total content of SFA on the 30th day after the treatments, and promoted the restoration of the thymus weight of X-irradiated mice by the 40th day of observation. The changes in the content and composition of PUFA in the early period after the treatments, as well as during the restoration of the thymus weight under the combined action of EHF EMR and X-rays, indicate an active participation of FA in the acceleration of post-radiation recovery of the thymus by EHF EMR exposure.

  6. An error management system in a veterinary clinical laboratory.

    PubMed

    Hooijberg, Emma; Leidinger, Ernst; Freeman, Kathleen P

    2012-05-01

    Error recording and management is an integral part of a clinical laboratory quality management system. Analysis and review of recorded errors lead to corrective and preventive actions through modification of existing processes and, ultimately, to quality improvement. Laboratory errors can be divided into preanalytical, analytical, and postanalytical errors depending on where in the laboratory cycle the errors occur. The purpose of the current report is to introduce an error management system in use in a veterinary diagnostic laboratory as well as to examine the amount and types of error recorded during the 8-year period from 2003 to 2010. Annual error reports generated during this period by the error recording system were reviewed, and annual error rates were calculated. In addition, errors were divided into preanalytical, analytical, postanalytical, and "other" categories, and their frequency was examined. Data were further compared with those available from human diagnostic laboratories. Finally, sigma metrics were calculated for the various error categories. Annual error rates per total number of samples ranged from 1.3% in 2003 to 0.7% in 2010. Preanalytical errors ranged from 52% to 77%, analytical from 4% to 14%, postanalytical from 9% to 21%, and other errors from 6% to 19% of total errors. Sigma metrics ranged from 4.1 to 4.7. All data were comparable to those reported in human clinical laboratories. The incremental annual reduction of error shows that use of an error management system led to quality improvement.
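
    Sigma metrics like those quoted above are conventionally computed from the error rate expressed as defects per million, with the customary 1.5-sigma short-term shift; the abstract does not state its exact formula, so the following sketch assumes that convention.

```python
from scipy.stats import norm

def sigma_metric(errors, total):
    """Short-term sigma from an error rate, assuming the common
    convention sigma = z(1 - DPM/1e6) + 1.5. The source does not
    state which formula it used."""
    dpm = 1e6 * errors / total               # defects per million
    return norm.ppf(1.0 - dpm / 1e6) + 1.5   # inverse-normal + 1.5 shift

# A 0.7% error rate (the 2010 figure) corresponds to roughly 4 sigma
s = sigma_metric(7, 1000)
```

    Under this convention, the reported drop from a 1.3% to a 0.7% annual error rate is what moves the laboratory into the reported 4.1-4.7 sigma range.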

  7. Systemic factors of errors in the case identification process of the national routine health information system: a case study of Modified Field Health Services Information System in the Philippines.

    PubMed

    Murai, Shinsuke; Lagrada, Leizel P; Gaite, Julita T; Uehara, Naruo

    2011-10-14

    The quality of data in national health information systems has been questionable in most developing countries. However, the mechanisms of errors in the case identification process are not fully understood. This study aimed to investigate the mechanisms of errors in the case identification process in the existing routine health information system (RHIS) in the Philippines by measuring the risk of committing errors for health program indicators used in the Field Health Services Information System (FHSIS 1996), and characterizing those indicators accordingly. A structured questionnaire on the definitions of 12 selected indicators in the FHSIS was administered to 132 health workers in 14 selected municipalities in the province of Palawan. A proportion of correct answers (difficulty index) and a disparity of two proportions of correct answers between higher and lower scored groups (discrimination index) were calculated, and the patterns of wrong answers for each of the 12 items were abstracted from 113 valid responses. None of 12 items reached a difficulty index of 1.00. The average difficulty index of 12 items was 0.266 and the discrimination index that showed a significant difference was 0.216 and above. Compared with these two cut-offs, six items showed non-discrimination against lower difficulty indices of 0.035 (4/113) to 0.195 (22/113), two items showed a positive discrimination against lower difficulty indices of 0.142 (16/113) and 0.248 (28/113), and four items showed a positive discrimination against higher difficulty indices of 0.469 (53/113) to 0.673 (76/113). The results suggest three characteristics of definitions of indicators such as those that are (1) unsupported by the current conditions in the health system, i.e., (a) data are required from a facility that cannot directly generate the data and, (b) definitions of indicators are not consistent with its corresponding program; (2) incomplete or ambiguous, which allow several interpretations; and (3
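
    The difficulty and discrimination indices used in the study follow classical item analysis: difficulty is the proportion of correct answers, and discrimination is the gap in that proportion between higher- and lower-scoring groups. A sketch follows; the upper/lower-half split below is a common convention, as the study's exact grouping is not specified here.

```python
import numpy as np

def item_indices(responses):
    """responses: (n_respondents, n_items) binary matrix, 1 = correct.
    Returns per-item difficulty (proportion correct) and discrimination
    (upper-half minus lower-half proportion correct, split on total score)."""
    responses = np.asarray(responses, dtype=float)
    difficulty = responses.mean(axis=0)              # proportion correct per item
    order = np.argsort(responses.sum(axis=1))        # rank respondents by score
    half = len(order) // 2
    lower, upper = order[:half], order[-half:]
    discrimination = (responses[upper].mean(axis=0)
                      - responses[lower].mean(axis=0))
    return difficulty, discrimination

# Toy usage: 6 respondents answering 2 items
R = [[1, 0],
     [1, 0],
     [1, 1],
     [0, 1],
     [1, 1],
     [1, 1]]
diff, disc = item_indices(R)   # diff[0] = 5/6; disc[1] = 2/3
```

    An item every health worker answers correctly would have difficulty 1.00, which none of the 12 FHSIS items reached.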

  8. Systemic factors of errors in the case identification process of the national routine health information system: A case study of Modified Field Health Services Information System in the Philippines

    PubMed Central

    2011-01-01

    Background The quality of data in national health information systems has been questionable in most developing countries. However, the mechanisms of errors in the case identification process are not fully understood. This study aimed to investigate the mechanisms of errors in the case identification process in the existing routine health information system (RHIS) in the Philippines by measuring the risk of committing errors for health program indicators used in the Field Health Services Information System (FHSIS 1996), and characterizing those indicators accordingly. Methods A structured questionnaire on the definitions of 12 selected indicators in the FHSIS was administered to 132 health workers in 14 selected municipalities in the province of Palawan. A proportion of correct answers (difficulty index) and a disparity of two proportions of correct answers between higher and lower scored groups (discrimination index) were calculated, and the patterns of wrong answers for each of the 12 items were abstracted from 113 valid responses. Results None of 12 items reached a difficulty index of 1.00. The average difficulty index of 12 items was 0.266 and the discrimination index that showed a significant difference was 0.216 and above. Compared with these two cut-offs, six items showed non-discrimination against lower difficulty indices of 0.035 (4/113) to 0.195 (22/113), two items showed a positive discrimination against lower difficulty indices of 0.142 (16/113) and 0.248 (28/113), and four items showed a positive discrimination against higher difficulty indices of 0.469 (53/113) to 0.673 (76/113). Conclusions The results suggest three characteristics of definitions of indicators such as those that are (1) unsupported by the current conditions in the health system, i.e., (a) data are required from a facility that cannot directly generate the data and, (b) definitions of indicators are not consistent with its corresponding program; (2) incomplete or ambiguous, which allow

  9. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  10. Medication errors: definitions and classification

    PubMed Central

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  11. Medication errors: definitions and classification.

    PubMed

    Aronson, Jeffrey K

    2009-06-01

    1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.

  12. Performance Errors in Weight Training and Their Correction.

    ERIC Educational Resources Information Center

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…

  14. Derivational Morphophonology: Exploring Errors in Third Graders' Productions

    ERIC Educational Resources Information Center

    Jarmulowicz, Linda; Hay, Sarah E.

    2009-01-01

    Purpose: This study describes a post hoc analysis of segmental, stress, and syllabification errors in third graders' productions of derived English words with the stress-changing suffixes "-ity" and "-ic." We investigated whether (a) derived word frequency influences error patterns, (b) stress and syllabification errors always co-occur, and (c)…

  15. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  16. Error analysis of compensation cutting technique for wavefront error of KH2PO4 crystal.

    PubMed

    Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Zhu, Dengchao; Song, Bing

    2013-09-20

    Because the wavefront error of KH(2)PO(4) (KDP) crystal is difficult to control in the face fly-cutting process owing to surface shape deformation during vacuum suction, an error compensation technique based on a spiral turning method is put forward. An in situ measurement device is used to measure the deformed surface shape after vacuum suction, and the initial surface figure error, which is obtained off-line, is added to the in situ surface shape to obtain the final surface figure to be compensated. A three-axis servo technique is then used to cut the final surface shape. In traditional cutting processes, in addition to common error sources such as error in the straightness of guideways, spindle rotation error, and error caused by variations in the ambient environment, three other errors, the in situ measurement error, the position deviation error, and the servo-following error, are the main sources affecting compensation accuracy. This paper discusses the effect of these three errors on compensation accuracy and provides strategies to improve the final surface quality. Experimental verification was carried out on one piece of KDP crystal with a size of Φ270 mm×11 mm. After one compensation process, the peak-to-valley value of the transmitted wavefront error dropped from 1.9λ (λ=632.8 nm) to approximately 1/3λ, and the mid-spatial-frequency error does not become worse when the frequency of the cutting tool trajectory is controlled by use of a low-pass filter.

  17. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  18. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  19. Diagnostic errors in interactive telepathology.

    PubMed

    Stauch, G; Schweppe, K W; Kayser, K

    2000-01-01

    Telepathology (TP), a service providing pathology at a distance, is now widely used and is integrated into the daily workflow of numerous pathologists. In Germany, 15 departments of pathology currently use the telepathology technique for frozen section service; however, a commonly recognised quality standard for diagnostic accuracy is still missing. As a first step, the working group Aurich uses a TP system for frozen section service in order to analyse the frequency and sources of errors in TP frozen section diagnoses, evaluating the quality of frozen section slides, the important components of image quality, and their influence on diagnostic accuracy. The authors point to the necessity of an optimal training program for all participants in this service in order to reduce the risk of diagnostic errors. In addition, there is a need for optimal cooperation of all partners involved in the TP service.

  20. CORRELATED ERRORS IN EARTH POINTING MISSIONS

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve; Patt, Frederick S.

    2005-01-01

    Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. Some error effects will not be obvious from attitude sensor

  1. Sources of situation awareness errors in aviation.

    PubMed

    Jones, D G; Endsley, M R

    1996-06-01

    Situation Awareness (SA) is a crucial factor in effective decision-making, especially in the dynamic flight environment. Consequently, an understanding of the types of SA errors that occur in this environment is beneficial. This study uses reports from the Aviation Safety Reporting System (ASRS) database (accessed by the term "situational awareness") to investigate the types of SA errors that occur in aviation. The errors were classified into one of three major categories: Level 1 (failure to correctly perceive the information), Level 2 (failure to comprehend the situation), or Level 3 (failure to project the situation into the future). Of the errors identified, 76.3% were Level 1 SA errors, 20.3% were Level 2, and 3.4% were Level 3. Level 1 SA errors occurred when relevant data were not available, when data were hard to discriminate or detect, when a failure to monitor or observe data occurred, when presented information was misperceived, or when memory loss occurred. Level 2 SA errors involved a lack of or an incomplete mental model, the use of an incorrect mental model, over-reliance on default values, and miscellaneous other factors. Level 3 errors involved either an overprojection of current trends or miscellaneous other factors. These results give an indication of the types and frequency of SA errors that occur in aviation, with failure to monitor or observe available information forming the largest single category. Many other causal factors are also indicated, however, including vigilance, automation problems, and poor mental models.

  2. Everyday Memory Errors in Older Adults

    PubMed Central

    Ossher, Lynn; Flegal, Kristin E.; Lustig, Cindy

    2012-01-01

    Despite concern about cognitive decline in old age, few studies document the types and frequency of memory errors older adults make in everyday life. In the present study, 105 healthy older adults completed the Everyday Memory Questionnaire (EMQ; Sunderland, Harris, & Baddeley, 1983), indicating what memory errors they had experienced in the last 24 hours, the Memory Self-Efficacy Questionnaire (MSEQ; West, Thorn, & Bagwell, 2003), and other neuropsychological and cognitive tasks. EMQ and MSEQ scores were unrelated and made separate contributions to variance on the Mini Mental State Exam (MMSE; Folstein, Folstein, & McHugh, 1975), suggesting separate constructs. Tip-of-the-tongue errors were the most commonly reported, and the EMQ Faces/Places and New Things subscales were most strongly related to MMSE. These findings may help training programs target memory errors commonly experienced by older adults, and suggest which types of memory errors could indicate cognitive declines of clinical concern. PMID:22694275

  3. GCF HSD error control

    NASA Technical Reports Server (NTRS)

    Hung, C. K.

    1978-01-01

    A selective repeat automatic repeat request (ARQ) system was implemented under software control in the Ground Communications Facility error detection and correction (EDC) assembly at JPL and the comm monitor and formatter (CMF) assembly at the DSSs. The CMF and EDC significantly improved real time data quality and significantly reduced the post-pass time required for replay of blocks originally received in error. Since the remote mission operation centers (RMOCs) do not provide compatible error correction equipment, error correction will not be used on the RMOC-JPL high speed data (HSD) circuits. The real time error correction capability will correct error burst or outage of two loop-times or less for each DSS-JPL HSD circuit.

  4. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  5. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware-based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware-based processor to heat to a degree that increases the likelihood that hardware errors will manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
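
The patent's core loop, run a deterministic heat-generating workload and compare outputs across runs, can be sketched as follows; the workload and run count are placeholders for illustration, not the patented algorithm:

```python
def stress_workload(n=20000):
    """Deterministic, compute-heavy kernel (a 64-bit LCG folded into an
    accumulator). On sound hardware the result is always identical; a
    heat-induced bit flip anywhere in the run would change it."""
    acc = 0
    x = 0x9E3779B97F4A7C15
    for _ in range(n):
        x = (x * 6364136223846793005 + 1442695040888963407) & (2**64 - 1)
        acc ^= x
    return acc

def detect_hardware_error(runs=5):
    """Run the workload repeatedly (heating the processor) and flag a
    hardware error if any run's output diverges from the first."""
    reference = stress_workload()
    for _ in range(runs - 1):
        if stress_workload() != reference:
            return True      # divergent output: hardware error detected
    return False             # all runs agreed
```

Because only the final outputs are compared, an error occurring anywhere during a run is caught, matching the patent's point that mid-run errors may escape conventional spot checks.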

  6. Frequency-Offset Cartesian Feedback Based on Polyphase Difference Amplifiers

    PubMed Central

    Zanchi, Marta G.; Pauly, John M.; Scott, Greig C.

    2010-01-01

    A modified Cartesian feedback method called “frequency-offset Cartesian feedback” and based on polyphase difference amplifiers is described that significantly reduces the problems associated with quadrature errors and DC-offsets in classic Cartesian feedback power amplifier control systems. In this method, the reference input and feedback signals are down-converted and compared at a low intermediate frequency (IF) instead of at DC. The polyphase difference amplifiers create a complex control bandwidth centered at this low IF, which is typically offset from DC by 200–1500 kHz. Consequently, the loop gain peak does not overlap DC where voltage offsets, drift, and local oscillator leakage create errors. Moreover, quadrature mismatch errors are significantly attenuated in the control bandwidth. Since the polyphase amplifiers selectively amplify the complex signals characterized by a +90° phase relationship representing positive frequency signals, the control system operates somewhat like single sideband (SSB) modulation. However, the approach still allows the same modulation bandwidth control as classic Cartesian feedback. In this paper, the behavior of the polyphase difference amplifier is described through both the results of simulations, based on a theoretical analysis of their architecture, and experiments. We then describe our first printed circuit board prototype of a frequency-offset Cartesian feedback transmitter and its performance in open and closed loop configuration. This approach should be especially useful in magnetic resonance imaging transmit array systems. PMID:20814450
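
The positive-frequency selectivity that makes the scheme behave somewhat like SSB can be illustrated with a one-pole complex filter, a hedged stand-in for the polyphase difference amplifier (the pole radius and IF below are arbitrary choices, not the paper's design values):

```python
import cmath
import math

def complex_onepole_gain(f_hz, f_if, r, fs):
    """Magnitude response, at frequency f_hz, of the one-pole complex
    (polyphase) filter H(z) = (1 - r) / (1 - r*exp(j*2*pi*f_if/fs)*z^-1).
    Unlike a real filter, its passband sits only at +f_if; the image
    at -f_if is attenuated, mimicking positive-frequency selectivity."""
    w = 2 * math.pi * f_hz / fs
    pole = r * cmath.exp(2j * math.pi * f_if / fs)
    return abs((1 - r) / (1 - pole * cmath.exp(-1j * w)))
```

With r = 0.98, a 500 kHz IF, and a 10 MHz sampling rate, the gain at +500 kHz is exactly unity while the image at -500 kHz is attenuated roughly 30-fold, mirroring how the loop gain peak is kept away from DC where offsets and LO leakage live.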

  7. Medication errors recovered by emergency department pharmacists.

    PubMed

    Rothschild, Jeffrey M; Churchill, William; Erickson, Abbie; Munz, Kristin; Schuur, Jeremiah D; Salzberg, Claudia A; Lewinski, Daniel; Shane, Rita; Aazami, Roshanak; Patka, John; Jaggers, Rondell; Steffenhagen, Aaron; Rough, Steve; Bates, David W

    2010-06-01

    We assess the impact of emergency department (ED) pharmacists on reducing potentially harmful medication errors. We conducted this observational study in 4 academic EDs. Trained pharmacy residents observed a convenience sample of ED pharmacists' activities. The primary outcome was medication errors recovered by pharmacists, including errors intercepted before reaching the patient (near miss or potential adverse drug event), caught after reaching the patient but before causing harm (mitigated adverse drug event), or caught after some harm but before further or worsening harm (ameliorated adverse drug event). Pairs of physician and pharmacist reviewers confirmed recovered medication errors and assessed their potential for harm. Observers were unblinded and clinical outcomes were not evaluated. We conducted 226 observation sessions spanning 787 hours and observed pharmacists reviewing 17,320 medications ordered or administered to 6,471 patients. We identified 504 recovered medication errors, or 7.8 per 100 patients and 2.9 per 100 medications. Most of the recovered medication errors were intercepted potential adverse drug events (90.3%), with fewer mitigated adverse drug events (3.9%) and ameliorated adverse drug events (0.2%). The potential severities of the recovered errors were most often serious (47.8%) or significant (36.2%). The most common medication classes associated with recovered medication errors were antimicrobial agents (32.1%), central nervous system agents (16.2%), and anticoagulant and thrombolytic agents (14.1%). The most common error types were dosing errors, drug omission, and wrong frequency errors. ED pharmacists can identify and prevent potentially harmful medication errors. Controlled trials are necessary to determine the net costs and benefits of ED pharmacist staffing on safety, quality, and costs, especially important considerations for smaller EDs and pharmacy departments. Copyright (c) 2009 American College of Emergency Physicians

  8. Modified cyanobacteria

    DOEpatents

    Vermaas, Willem F J.

    2014-06-17

    Disclosed is a modified photoautotrophic bacterium comprising genes of interest that are modified in terms of their expression and/or coding region sequence, wherein modification of the genes of interest increases production of a desired product in the bacterium relative to the amount of the desired product production in a photoautotrophic bacterium that is not modified with respect to the genes of interest.

  9. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
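
The roll-up an error budget performs, combining allocated error sources into a net prediction, is commonly a root-sum-square over independent terms; the term names and values below are invented for illustration only:

```python
import math

def rss_budget(allocations):
    """Combine independent error allocations by root-sum-square, the
    usual way an error budget rolls terms up to a net prediction."""
    return math.sqrt(sum(a * a for a in allocations.values()))

# Hypothetical allocations (nm RMS), purely illustrative.
terms = {"thermal_drift": 30.0, "spindle_error": 40.0, "metrology": 20.0}
net = rss_budget(terms)   # compare against the flowed-down tolerance
```

If the net exceeds the flowed-down tolerance, allocations are tightened or redistributed down the hierarchy, which is the iteration the abstract describes.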

  11. Post-Error Adjustments

    PubMed Central

    Danielmeier, Claudia; Ullsperger, Markus

    2011-01-01

    When our brain detects an error, this process changes how we react on ensuing trials. People show post-error adaptations, potentially to improve their performance in the near future. At least three types of behavioral post-error adjustments have been observed. These are post-error slowing (PES), post-error reduction of interference, and post-error improvement in accuracy (PIA). Apart from these behavioral changes, post-error adaptations have also been observed on a neuronal level with functional magnetic resonance imaging and electroencephalography. Neuronal post-error adaptations comprise activity increase in task-relevant brain areas, activity decrease in distracter-encoding brain areas, activity modulations in the motor system, and mid-frontal theta power increases. Here, we review the current literature with respect to these post-error adjustments, discuss under which circumstances these adjustments can be observed, and whether the different types of adjustments are linked to each other. We also evaluate different approaches for explaining the functional role of PES. In addition, we report reanalyzed and follow-up data from a flanker task and a moving dots interference task showing (1) that PES and PIA are not necessarily correlated, (2) that PES depends on the response–stimulus interval, and (3) that PES is reliable on a within-subject level over periods as long as several months. PMID:21954390
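
The simplest of the behavioral measures above, post-error slowing, is just the mean reaction time on trials following an error minus the mean on trials following a correct response (one common variant of the measure; robust trial-pairing variants differ). A sketch with invented data:

```python
def post_error_slowing(rts, correct):
    """PES = mean RT on trials that follow an error, minus mean RT on
    trials that follow a correct response (simple traditional variant).
    rts: reaction times per trial; correct: per-trial accuracy flags."""
    post_err = [rts[i + 1] for i in range(len(rts) - 1) if not correct[i]]
    post_cor = [rts[i + 1] for i in range(len(rts) - 1) if correct[i]]
    return sum(post_err) / len(post_err) - sum(post_cor) / len(post_cor)
```

A positive value indicates slowing after errors; the review's point is that this need not co-occur with post-error improvement in accuracy.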

  12. A Review of the Literature on Computational Errors With Whole Numbers. Mathematics Education Diagnostic and Instructional Centre (MEDIC).

    ERIC Educational Resources Information Center

    Burrows, J. K.

    Research on error patterns associated with whole number computation is reviewed. Details of the results of some of the individual studies cited are given in the appendices. In Appendix A, 33 addition errors, 27 subtraction errors, 41 multiplication errors, and 41 division errors are identified, and the frequency of these errors made by 352…

  13. Cigarette- and Snus-Modified Association Between Unprotected Exposure to Noise from Hunting Rifle Caliber Weapons and High Frequency Hearing Loss. A Cross-Sectional Study Among Swedish Hunters

    PubMed Central

    Honeth, Louise; Ström, Peter; Ploner, Alexander; Bagger-Sjöbäck, Dan; Rosenhall, Ulf; Nyrén, Olof

    2016-01-01

    Aim: To investigate in this cross-sectional study among Swedish hunters if tobacco use modifies the previously observed association, expressed as prevalence ratio (PR), between unprotected exposure to impulse noise from hunting rifle caliber (HRC) weapons and high-frequency hearing impairment (HFHI). Settings and Design: A nationwide cross-sectional epidemiologic study was conducted among Swedish sport hunters in 2012. Materials and Methods: The study was Internet-based and consisted of a questionnaire and an Internet-based audiometry test. Results: In all, 202 hunters completed a questionnaire regarding the hearing test. Associations were modeled using Poisson regression. Current, daily use of tobacco was reported by 61 hunters (19 used cigarettes, 47 moist snuff, and 5 both). Tobacco users tended to be younger, fire more shots with HRC weapons, and report more hunting days. Their adjusted PR (1–6 unprotected HRC shots versus 0) was 3.2 (1.4–6.7), P = 0.01. Among the nonusers of tobacco, the corresponding PR was 1.3 (0.9–1.8), P = 0.18. P value for the interaction was 0.01. The importance of ear protection could not be quantified among hunters with HRC weapons because our data suggested that the HFHI outcome had led to changes in the use of such protection. Among hunters using weapons with less sound energy, however, no or sporadic use of hearing protection was linked to a 60% higher prevalence of HFHI, relative to habitual use. Conclusion: Tobacco use modifies the association between exposure to unprotected impulse noise from HRC weapons and the probability of having HFHI among susceptible hunters. The mechanisms remain to be clarified, but because the effect modification was apparent also among the users of smokeless tobacco, combustion products may not be critical for this effect. PMID:27991471

  14. Cigarette- and snus-modified association between unprotected exposure to noise from hunting rifle caliber weapons and high frequency hearing loss. A cross-sectional study among swedish hunters.

    PubMed

    Honeth, Louise; Ström, Peter; Ploner, Alexander; Bagger-Sjöbäck, Dan; Rosenhall, Ulf; Nyrén, Olof

    2016-01-01

    To investigate in this cross-sectional study among Swedish hunters if tobacco use modifies the previously observed association, expressed as prevalence ratio (PR), between unprotected exposure to impulse noise from hunting rifle caliber (HRC) weapons and high-frequency hearing impairment (HFHI). A nationwide cross-sectional epidemiologic study was conducted among Swedish sport hunters in 2012. The study was Internet-based and consisted of a questionnaire and an Internet-based audiometry test. In all, 202 hunters completed a questionnaire regarding the hearing test. Associations were modeled using Poisson regression. Current, daily use of tobacco was reported by 61 hunters (19 used cigarettes, 47 moist snuff, and 5 both). Tobacco users tended to be younger, fire more shots with HRC weapons, and report more hunting days. Their adjusted PR (1-6 unprotected HRC shots versus 0) was 3.2 (1.4-6.7), P < 0.01. Among the nonusers of tobacco, the corresponding PR was 1.3 (0.9-1.8), P = 0.18. P value for the interaction was 0.01. The importance of ear protection could not be quantified among hunters with HRC weapons because our data suggested that the HFHI outcome had led to changes in the use of such protection. Among hunters using weapons with less sound energy, however, no or sporadic use of hearing protection was linked to a 60% higher prevalence of HFHI, relative to habitual use. Tobacco use modifies the association between exposure to unprotected impulse noise from HRC weapons and the probability of having HFHI among susceptible hunters. The mechanisms remain to be clarified, but because the effect modification was apparent also among the users of smokeless tobacco, combustion products may not be critical for this effect.
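
The study modeled associations with Poisson regression; as a simpler hedged sketch of the headline quantity, a crude prevalence ratio with a 95% Wald confidence interval can be computed directly from 2x2 counts (the counts below are invented for illustration, not the study's data):

```python
import math

def prevalence_ratio(a, n1, b, n0):
    """Prevalence ratio of an outcome in exposed (a cases of n1) versus
    unexposed (b cases of n0), with a 95% Wald CI on the log scale."""
    pr = (a / n1) / (b / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)   # SE of log(PR)
    lo = pr * math.exp(-1.96 * se)
    hi = pr * math.exp(1.96 * se)
    return pr, lo, hi
```

Regression-based PRs, as in the study, additionally adjust for covariates such as age and shot counts, which a crude 2x2 ratio cannot.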

  15. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensors control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique achieves coarse frequency estimation (locating the peak of the FFT amplitude) more efficiently than conventional searching methods. Thus, the proposed estimation algorithm requires fewer hardware and software resources and can achieve even higher efficiency as the experimental data grow. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
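
The paper's modified zero-crossing technique is not specified in the abstract; the plain zero-crossing estimator sketched below shows why counting sign changes yields a cheap coarse estimate that can seed a fine search near the FFT peak:

```python
import numpy as np

def zero_crossing_freq(x, fs):
    """Coarse frequency estimate from the zero-crossing rate of a
    sampled real sinusoid: each full period contains two sign changes,
    so f ~ crossings / (2 * duration). O(N), no transform needed."""
    signs = np.signbit(x)
    crossings = np.count_nonzero(signs[1:] != signs[:-1])
    duration = (len(x) - 1) / fs
    return crossings / (2.0 * duration)
```

The estimate is quantized to roughly 1/(2T) for an observation of length T, so on its own it is only coarse; a fine step (interpolation around the FFT peak, for instance) is what drives the RMS error down to the sub-0.1 Hz level the paper reports.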

  16. Frequency domain measurement systems

    NASA Technical Reports Server (NTRS)

    Eischer, M. C.

    1978-01-01

    Stable frequency sources and signal processing blocks were characterized by their noise spectra, both discrete and random, in the frequency domain. Conventional measures are outlined, and systems for performing the measurements are described. Broad coverage is given of system configurations that were found useful; their functioning and areas of application are discussed briefly. Particular attention is given to potential error sources in the measurement procedures, system configurations, double-balanced-mixer phase detectors, and the application of measuring instruments.

  17. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
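
The effective free distance analysis is the paper's contribution; the machinery it builds on, a convolutional encoder and the free distance as the minimum Hamming weight over nonzero terminated codewords, can be sketched with the standard (7,5) octal code (a brute-force sketch, not the paper's transfer-function bound):

```python
from itertools import product

def conv_encode(bits, gens=(0b111, 0b101)):
    """Rate-1/2 feedforward convolutional encoder, here the standard
    (7,5) code with constraint length 3. Two zero tail bits flush the
    shift register so the output is a terminated codeword."""
    state = 0
    out = []
    for u in list(bits) + [0, 0]:
        state = ((state << 1) | u) & 0b111          # newest bit shifted in
        for g in gens:
            out.append(bin(state & g).count("1") & 1)  # parity tap
    return out

def free_distance(max_len=8):
    """Brute-force the free distance: minimum weight over terminated
    codewords produced by all nonzero inputs up to max_len bits."""
    best = None
    for n in range(1, max_len + 1):
        for bits in product([0, 1], repeat=n):
            if not any(bits):
                continue
            w = sum(conv_encode(bits))
            best = w if best is None else min(best, w)
    return best
```

The search confirms the well-known free distance of 5 for this code; the effective free distances studied in the paper are the analogous per-input-bit-position quantities whose disparity yields unequal error protection.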

  18. Twenty Questions about Student Errors.

    ERIC Educational Resources Information Center

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    1986-01-01

    Discusses the value of studying errors made by students in the process of learning science. Addresses 20 research questions dealing with student learning errors. Attempts to characterize errors made by students and clarify some terms used in error research. (TW)

  19. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  20. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  1. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  2. Teacher-Induced Errors.

    ERIC Educational Resources Information Center

    Richmond, Kent C.

    Students of English as a second language (ESL) often come to the classroom with little or no experience in writing in any language and with inaccurate assumptions about writing. Rather than correct these assumptions, teachers often seem to unwittingly reinforce them, actually inducing errors into their students' work. Teacher-induced errors occur…

  3. Learning from Errors

    ERIC Educational Resources Information Center

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  4. Burnout and medical errors among American surgeons.

    PubMed

    Shanafelt, Tait D; Balch, Charles M; Bechamps, Gerald; Russell, Tom; Dyrbye, Lotte; Satele, Daniel; Collicott, Paul; Novotny, Paul J; Sloan, Jeff; Freischlag, Julie

    2010-06-01

    To evaluate the relationship between burnout and perceived major medical errors among American surgeons. Despite efforts to improve patient safety, medical errors by physicians remain a common cause of morbidity and mortality. Members of the American College of Surgeons were sent an anonymous, cross-sectional survey in June 2008. The survey included self-assessment of major medical errors, a validated depression screening tool, and standardized assessments of burnout and quality of life (QOL). Of 7905 participating surgeons, 700 (8.9%) reported concern they had made a major medical error in the last 3 months. Over 70% of surgeons attributed the error to individual rather than system level factors. Reporting an error during the last 3 months had a large, statistically significant adverse relationship with mental QOL, all 3 domains of burnout (emotional exhaustion, depersonalization, and personal accomplishment) and symptoms of depression. Each one point increase in depersonalization (scale range, 0-33) was associated with an 11% increase in the likelihood of reporting an error while each one point increase in emotional exhaustion (scale range, 0-54) was associated with a 5% increase. Burnout and depression remained independent predictors of reporting a recent major medical error on multivariate analysis that controlled for other personal and professional factors. The frequency of overnight call, practice setting, method of compensation, and number of hours worked were not associated with errors on multivariate analysis. Major medical errors reported by surgeons are strongly related to a surgeon's degree of burnout and their mental QOL. Studies are needed to determine how to reduce surgeon distress and how to support surgeons when medical errors occur.

  5. Truncation and Accumulated Errors in Wave Propagation

    NASA Astrophysics Data System (ADS)

    Chiang, Yi-Ling F.

    1988-12-01

    The approximation of the truncation and accumulated errors in the numerical solution of a linear initial-valued partial differential equation problem can be established by using a semidiscretized scheme. This error approximation is observed as a lower bound to the errors of a finite difference scheme. By introducing a modified von Neumann solution, this error approximation is applicable to problems with variable coefficients. To seek an in-depth understanding of this newly established error approximation, numerical experiments were performed to solve the hyperbolic equation ∂U/∂t = -C1(x)C2(t) ∂U/∂x, with both continuous and discontinuous initial conditions. We studied three cases: (1) C1(x) = C0 and C2(t) = 1; (2) C1(x) = C0 and C2(t) = t; and (3) C1(x) = 1 + (x/a)^2 and C2(t) = C0. Our results show that the errors are problem dependent and are functions of the propagating wave speed. This suggests a need to derive problem-oriented schemes rather than the equation-oriented schemes as is commonly done. Furthermore, in a wave-propagation problem, measurement of the error by the maximum norm is not particularly informative when the wave speed is incorrect.
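
As a generic illustration of truncation error for the constant-coefficient advection case, not the paper's semidiscretized bound, a first-order upwind scheme shows the error shrinking as the mesh is refined:

```python
import numpy as np

def upwind_error(nx, c=1.0, t_final=0.5):
    """First-order upwind scheme for u_t = -c u_x on [0, 1) with a
    periodic smooth initial condition, CFL number 0.5. Returns the
    max-norm error against the exact translated solution."""
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = 1.0 / nx
    dt = 0.5 * dx / c                       # CFL = c*dt/dx = 0.5
    u = np.sin(2 * np.pi * x)
    steps = int(round(t_final / dt))
    for _ in range(steps):
        u = u - c * dt / dx * (u - np.roll(u, 1))   # backward difference
    exact = np.sin(2 * np.pi * (x - c * steps * dt))
    return float(np.max(np.abs(u - exact)))
```

Halving dx roughly halves the error, consistent with first-order accuracy; the scheme's numerical diffusion and phase lag also illustrate why, as the abstract notes, a maximum-norm error can be uninformative when the computed wave speed is wrong.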

  6. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

    One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.
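
The mechanism by which a sea-surface reflection perturbs a phase measurement can be sketched with a two-ray model: the sum of a unit direct ray and a reflected ray of relative amplitude rho shifts the measured phase by arg(1 + rho*exp(j*delta)). This is a textbook sketch, not the paper's diffuse-scattering model:

```python
import cmath
import math

def multipath_phase_error(rho, delta_phase):
    """Phase error (radians) when a reflected ray of relative amplitude
    rho and relative phase delta_phase adds to the unit direct signal:
    err = arg(1 + rho * exp(j * delta_phase))."""
    return cmath.phase(1 + rho * cmath.exp(1j * delta_phase))
```

For small rho the worst case occurs near delta_phase = ±90°, where the error approaches atan(rho): about 5.7° for a -20 dB reflection.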

  7. Uncorrected refractive errors

    PubMed Central

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  8. System contributions to error.

    PubMed

    Adams, J G; Bohan, J S

    2000-11-01

    An unacceptably high rate of medical error occurs in the emergency department (ED). Professional accountability requires that EDs be managed to systematically eliminate error. This requires advocacy and leadership at every level of the specialty and at each institution in order to be effective and sustainable. At the same time, the significant operational challenges that face the ED, such as excessive patient care requirements, should be recognized if error reduction efforts are to remain credible. Proper staffing levels, for example, are an important prerequisite if medical error is to be minimized. Even at times of low volume, however, medical error is probably common. Engineering human factors and operational procedures, promoting team coordination, and standardizing care processes can decrease error and are strongly promoted. Such efforts should be coupled to systematic analysis of errors that occur. Reliable reporting is likely only if the system is based within the specialty to help ensure proper analysis and decrease threat. Ultimate success will require dedicated effort, continued advocacy, and promotion of research.

  9. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  10. Refractive errors and schizophrenia.

    PubMed

    Caspi, Asaf; Vishne, Tali; Reichenberg, Abraham; Weiser, Mark; Dishon, Ayelet; Lubin, Gadi; Shmushkevitz, Motti; Mandel, Yossi; Noy, Shlomo; Davidson, Michael

    2009-02-01

    Refractive errors (myopia, hyperopia and amblyopia), like schizophrenia, have a strong genetic cause, and dopamine has been proposed as a potential mediator in their pathophysiology. The present study explored the association between refractive errors in adolescence and schizophrenia, and the potential familiality of this association. The Israeli Draft Board carries out a mandatory standardized visual acuity assessment. 678,674 males consecutively assessed by the Draft Board and found to be psychiatrically healthy at age 17 were followed for psychiatric hospitalization with schizophrenia using the Israeli National Psychiatric Hospitalization Case Registry. Sib-ships were also identified within the cohort. There was a negative association between refractive errors and later hospitalization for schizophrenia. Future male schizophrenia patients were about half as likely to have refractive errors as never-hospitalized individuals, controlling for intelligence, years of education and socioeconomic status [adjusted Hazard Ratio=.55; 95% confidence interval .35-.85]. The non-schizophrenic male siblings of schizophrenia patients also had a lower prevalence of refractive errors compared to never-hospitalized individuals. Presence of refractive errors in adolescence is related to lower risk for schizophrenia. The familiality of this association suggests that refractive errors may be associated with the genetic liability to schizophrenia.

  11. Oral Reading Errors of Average and Superior Reading Ability Children.

    ERIC Educational Resources Information Center

    Geoffrion, Leo David

    Oral reading samples were gathered from a group of twenty normal boys from the fourth through sixth grades. All reading errors were coded and classified using a modified version of the taxonomies of Goodman and Burke. Through cluster analysis two distinct error patterns were found. One group consisted of students whose performance was limited…

  12. A Framework for Identifying and Classifying Undergraduate Student Proof Errors

    ERIC Educational Resources Information Center

    Strickland, S.; Rand, B.

    2016-01-01

    This paper describes a framework for identifying, classifying, and coding student proofs, modified from existing proof-grading rubrics. The framework includes 20 common errors, as well as categories for interpreting the severity of the error. The coding scheme is intended for use in a classroom context, for providing effective student feedback. In…

  14. An extended Kalman filter based automatic frequency control loop

    NASA Technical Reports Server (NTRS)

    Hinedi, S.

    1988-01-01

    An Automatic Frequency Control (AFC) loop based on an Extended Kalman Filter (EKF) is introduced and analyzed in detail. The scheme involves an EKF which operates on a modified set of data in order to track the frequency of the incoming signal. The algorithm can also be viewed as a modification of the well-known cross-product AFC loop. A low carrier-to-noise ratio (CNR), high-dynamic environment is used to test the algorithm, and the probability of loss-of-lock is assessed via computer simulations. The scheme is best suited for scenarios in which the frequency error variance can be compromised to achieve a very low operating CNR threshold. This technique can easily be incorporated in the Advanced Receiver (ARX), requiring minimal software modifications.
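
    As a rough illustration of the underlying idea (not the paper's modified-data algorithm), a textbook EKF that tracks the phase and frequency of a noisy complex tone can be sketched as follows; the noise levels, gains and initial conditions are illustrative assumptions:

```python
import numpy as np

def ekf_frequency_tracker(z, omega0=0.0, dt=1.0, q=1e-6, r=0.04):
    """Generic EKF tracking a noisy complex tone.

    State x = [phase, angular frequency]; measurement h(x) = [cos(phase), sin(phase)].
    This is a textbook sketch, not the specific modified-data algorithm of the paper.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-frequency dynamics
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])              # process noise (frequency random walk)
    R = r * np.eye(2)                                # measurement noise covariance
    x = np.array([0.0, omega0])
    P = np.diag([1.0, 0.01])                         # initial state uncertainty
    estimates = []
    for zk in z:
        x = F @ x                                    # predict
        P = F @ P @ F.T + Q
        h = np.array([np.cos(x[0]), np.sin(x[0])])   # predicted measurement
        H = np.array([[-np.sin(x[0]), 0.0],
                      [ np.cos(x[0]), 0.0]])         # measurement Jacobian
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        x = x + K @ (np.array([zk.real, zk.imag]) - h)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[1])
    return np.array(estimates)

# Track a 0.3 rad/sample tone starting from a 0.25 rad/sample guess
rng = np.random.default_rng(0)
n = np.arange(1000)
z = np.exp(1j * 0.3 * n) + 0.2 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
est = ekf_frequency_tracker(z, omega0=0.25)
```

    The frequency estimate pulls in toward the true value as the filter accumulates phase measurements; the process-noise level q sets how quickly the loop can follow a dynamic (changing) frequency.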

  15. A time domain frequency-selective multivariate Granger causality approach.

    PubMed

    Leistritz, Lutz; Witte, Herbert

    2016-08-01

    The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
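
    The prediction-error comparison underlying Granger's principle can be sketched in a few lines. The following computes the plain (broadband) Granger Causality Index for a toy bivariate system; the paper's frequency-selective variant would first cancel specific signal components by decomposition, a step omitted here:

```python
import numpy as np

def gc_index(x, y, order=2):
    """Time-domain Granger causality index x -> y: log ratio of the prediction
    error variance of a model using y's own past (reduced) to one using y's
    and x's past combined (full). Plain GCI sketch, without the paper's
    signal-decomposition step."""
    n = len(y)
    Y = y[order:]
    own = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    other = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    res_r = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]        # reduced model
    full = np.hstack([own, other])
    res_f = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]      # full model
    return float(np.log(res_r.var() / res_f.var()))

# Toy system in which x drives y with one sample of lag
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
```

    For this system gc_index(x, y) is large while gc_index(y, x) is near zero, recovering the simulated direction of influence.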

  16. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error: a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross-checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect customer service or profitability.

  17. Error Detection Processes during Observational Learning

    ERIC Educational Resources Information Center

    Badets, Arnaud; Blandin, Yannick; Wright, David L.; Shea, Charles H.

    2006-01-01

    The purpose of this experiment was to determine whether a faded knowledge of results (KR) frequency during observation of a model's performance enhanced error detection capabilities. During the observation phase, participants observed a model performing a timing task and received KR about the model's performance on each trial or on one of two…

  19. Verb-Form Errors in EAP Writing

    ERIC Educational Resources Information Center

    Wee, Roselind; Sim, Jacqueline; Jusoff, Kamaruzaman

    2010-01-01

    This study was conducted to identify and describe the written verb-form errors found in the EAP writing of 39 second year learners pursuing a three-year Diploma Programme from a public university in Malaysia. Data for this study, which were collected from a written 350-word discursive essay, were analyzed to determine the types and frequency of…

  20. Errors in reprojection methods in computerized tomography.

    PubMed

    Trussell, H J; Orun-Ozturk, H; Civanlar, M R

    1987-01-01

    Iterative tomographic reconstruction methods have been developed which can enforce various physical constraints on the reconstructed image. An integral part of most of these methods is the reprojection of the reconstructed image. These estimated projections are compared to the original projection data and modified according to criteria based on a priori constraints. In this paper, the errors generated by such reprojection schemes are investigated. Bounds for these errors are derived under simple signal energy assumptions and using probabilistic assumptions on the distribution of discontinuities. These bounds can be used in the enforcement of constraints, in the determination of convergence of the iterative methods, and in the detection of artifacts.
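
    The reproject-compare-modify cycle described above is the core of iterative schemes such as SIRT; a minimal sketch with a non-negativity constraint (the system matrix, step size and iteration count below are illustrative, not from the paper):

```python
import numpy as np

def sirt(A, b, iters=3000):
    """Iterative reconstruction sketch: reproject the current image (A @ x),
    compare the estimated projections with the measured data b, back-project
    the residual, and enforce an a priori non-negativity constraint each pass."""
    lam = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from largest singular value
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        residual = b - A @ x                   # reprojection error
        x = x + lam * A.T @ residual           # back-projected correction
        x = np.clip(x, 0.0, None)              # a priori constraint: non-negativity
    return x

# Tiny noiseless example: recover a non-negative image from its projections
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 4))                # toy system matrix
x_true = np.array([1.0, 0.5, 0.0, 2.0])
x_rec = sirt(A, A @ x_true)
```

    In a noiseless, overdetermined setting the reprojection residual shrinks toward zero; the bounds derived in the paper concern how such residuals behave for realistic reprojection schemes.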

  1. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates a general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two) are available. A new perspective is that the bias error distribution can be found as the solution of an intrinsic functional equation in a domain. Based on this theory, scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as its bias error distribution can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. The methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  2. EMS -- Error Message Service

    NASA Astrophysics Data System (ADS)

    Rees, P. C. T.; Chipperfield, A. J.; Draper, P. W.

    This document describes the Error Message Service, EMS, and its use in system software. The purpose of EMS is to provide facilities for constructing and storing error messages for future delivery to the user -- usually via the Starlink Error Reporting System, ERR (see SUN/104). EMS can be regarded as a simplified version of ERR without the binding to any software environment (e.g., for message output or access to the parameter and data systems). The routines in this library conform to the error reporting conventions described in SUN/104. A knowledge of these conventions, and of the ADAM system (see SG/4), is assumed in what follows. This document is intended for Starlink systems programmers and can safely be ignored by applications programmers and users.

  3. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
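
    For Gaussian errors, the mathematically optimal combination of independent position fixes mentioned above is inverse-covariance weighting; a small 2-D sketch with illustrative numbers:

```python
import numpy as np

def fuse_fixes(estimates, covariances):
    """Fuse independent position fixes by inverse-covariance (information)
    weighting: x = (sum W_i)^-1 sum W_i x_i with W_i = P_i^-1. This is the
    standard optimal linear combination for Gaussian errors, given here as a
    sketch of the kind of multi-fix fusion the paper describes."""
    info = np.zeros((2, 2))                 # accumulated information matrix
    vec = np.zeros(2)                       # accumulated information vector
    for x, P in zip(estimates, covariances):
        W = np.linalg.inv(P)
        info += W
        vec += W @ x
    P_fused = np.linalg.inv(info)           # fused covariance shrinks with each fix
    return P_fused @ vec, P_fused

# Two equally uncertain fixes average, and the fused covariance halves
x_hat, P_hat = fuse_fixes(
    [np.array([0.0, 0.0]), np.array([2.0, 0.0])],
    [np.eye(2), np.eye(2)],
)
```

    A tighter fix (smaller covariance) automatically dominates the fused estimate, which is why this weighting uses all available data optimally.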

  4. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  5. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  6. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-04-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.

  7. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  8. Waveform frequency notching

    DOEpatents

    Doerry, Armin W.; Andrews, John

    2017-05-09

    The various technologies presented herein relate to incorporating one or more notches into a radar spectrum, whereby the notches relate to one or more frequencies for which no radar transmission is to occur. An instantaneous frequency is monitored and if the frequency is determined to be of a restricted frequency, then a radar signal can be modified. Modification can include replacing the signal with a signal having a different instantaneous amplitude, a different instantaneous phase, etc. The modification can occur in a WFS prior to a DAC, as well as prior to a sin ROM component and/or a cos ROM component. Further, the notch can be dithered to enable formation of a deep notch. The notch can also undergo signal transitioning to enable formation of a deep notch. The restricted frequencies can be stored in a LUT against which an instantaneous frequency can be compared.
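
    The amplitude-modification path can be sketched as follows: a lookup table of restricted bands is compared against the instantaneous frequency of a linear FM chirp, and matching samples are blanked (the dithering and phase-modification variants the patent describes are not shown; all waveform parameters are illustrative):

```python
import numpy as np

def notched_chirp(n, f0, f1, fs, restricted):
    """Linear FM chirp whose amplitude is zeroed wherever the instantaneous
    frequency falls inside a restricted band (a LUT of (low, high) pairs, Hz).
    Amplitude modification only; the patent also covers phase modification and
    notch dithering to deepen the notch."""
    t = np.arange(n) / fs
    k = (f1 - f0) / t[-1]                       # chirp rate, Hz per second
    inst_f = f0 + k * t                         # instantaneous frequency
    phase = 2 * np.pi * (f0 * t + 0.5 * k * t * t)
    amp = np.ones(n)
    for lo, hi in restricted:                   # blank restricted frequencies
        amp[(inst_f >= lo) & (inst_f <= hi)] = 0.0
    return amp * np.cos(phase), inst_f

# 0-500 Hz chirp at 2 kHz sampling, with a notch over 100-150 Hz
sig, inst_f = notched_chirp(2000, 0.0, 500.0, 2000.0, [(100.0, 150.0)])
```

    Every sample whose instantaneous frequency lands in the restricted band is suppressed, so no transmission occurs at those frequencies.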

  9. Reducing medical errors through better documentation.

    PubMed

    Edwards, Marie; Moczygemba, Jackie

    2004-01-01

    Preventable medical errors occur with alarming frequency in US hospitals. Questions to address include what constitutes a medical error, which errors occur most often, and what solutions health information technologies can offer through better documentation. Preventable injuries caused by mismanagement of treatment happen in all areas of care. Some result from human fallibility and some from system failures; most errors stem from a combination of the two. Examples of combination errors include wrong-site surgeries, scrambled laboratory results, medication mishaps, misidentification of patients, and equipment failures. Unavailable patient information and illegible handwriting lead to diagnosing and ordering errors. Recent technology offers viable solutions to many of these medical errors. Computer-based medical records, integration with the pharmacy, decision support software, Computerized Physician Order Entry systems, and bar coding all offer ways to avoid tragic treatment outcomes. Persuading and training hospital staff to use the technology poses a problem, as does budgeting for the new equipment; however, the technology would prove its worth in time. The Institute of Medicine and coalition groups such as the Leapfrog Group have recognized a problem that permeates the health care industry, manifests in many ways, and requires the many solutions that information technology offers.

  10. Gear Transmission Error Measurement System Made Operational

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2002-01-01

    A system that directly measures the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 micrometer (about one thousandth the thickness of a human hair). The measured transmission error can be displayed as a "map" showing how the transmission error varies with gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration. The Design Unit at the University of Newcastle in England designed the new system specifically for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.

  11. Spatial sampling errors for a satellite-borne scanning radiometer

    NASA Technical Reports Server (NTRS)

    Manalo, Natividad D.; Smith, G. L.

    1991-01-01

    The Clouds and Earth's Radiant Energy System (CERES) scanning radiometer is planned as the Earth radiation budget instrument for the Earth Observing System, to be flown in the late 1990s. To minimize the spatial sampling errors of the measurements, it is necessary to select design parameters for the instrument such that the resulting point spread function minimizes those errors. These errors are described as aliasing and blurring errors. Aliasing errors are due to the presence in the measurements of spatial frequencies beyond the Nyquist frequency, and blurring errors are due to attenuation of frequencies below the Nyquist frequency. The design parameters include pixel shape and dimensions, sampling rate, scan period, and time constants of the measurements. For a satellite-borne scanning radiometer, the pixel footprint grows quickly at large nadir angles. The aliasing errors thus decrease with increasing scan angle, but the blurring errors grow quickly. The best design minimizes the sum of these two errors over a range of scan angles. Results of a parameter study are presented, showing the effects of data rates, pixel dimensions, spacecraft altitude, and distance from the spacecraft track.
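
    The aliasing/blurring trade-off can be illustrated numerically. For a rectangular pixel the MTF is a sinc; blurring error is taken here as scene power attenuated below the Nyquist frequency and aliasing error as scene power still passed above it (the scene spectrum and dimensions are toy assumptions, not CERES values):

```python
import numpy as np

def sampling_error_split(pixel_width, sample_spacing, scene_psd, n=4096):
    """Split spatial sampling error into blurring (scene power attenuated below
    the Nyquist frequency) and aliasing (scene power the pixel MTF passes above
    it). Rectangular pixel assumed, so the MTF is a sinc; the scene power
    spectral density is supplied by the caller. Illustrative sketch only."""
    f_nyq = 0.5 / sample_spacing                   # Nyquist spatial frequency
    f = np.linspace(1e-6, 4.0 * f_nyq, n)
    df = f[1] - f[0]
    mtf = np.abs(np.sinc(pixel_width * f))         # np.sinc(x) = sin(pi x)/(pi x)
    s = scene_psd(f)
    below = f <= f_nyq
    blur = np.sum(s[below] * (1.0 - mtf[below]) ** 2) * df
    alias = np.sum(s[~below] * mtf[~below] ** 2) * df
    return blur, alias

psd = lambda f: 1.0 / (1.0 + f ** 2)               # toy red scene spectrum
b1, a1 = sampling_error_split(1.0, 1.0, psd)       # small pixel
b2, a2 = sampling_error_split(2.0, 1.0, psd)       # larger pixel
```

    Enlarging the pixel increases the blurring error and decreases the aliasing error, which is exactly the trade-off the design study balances over scan angle.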

  12. Surface errors in the course of machining precision optics

    NASA Astrophysics Data System (ADS)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps of increasing accuracy. Spherical surfaces are finished in a last step with large tools to smooth the surface. The required surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface, and so-called mid-spatial-frequency errors (MSFE) can accumulate in such zonal processes. This work addresses the formation of surface errors from grinding to polishing by analysing the surfaces at each machining step with non-contact interferometric methods. Errors on the surface can be classified as described in DIN 4760, whereby 2nd- to 3rd-order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed so that only defined spatial frequencies appear in the surface plot. It can be observed that some frequencies may already be formed in early machining steps such as grinding and main polishing. It is also known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technologies, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools whose lateral size is below the wavelength of the error structure. The considerations presented may be used to develop proposals for handling surface errors.

  13. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  14. Clock error, jitter, phase error, and differential time of arrival in satellite communications

    NASA Astrophysics Data System (ADS)

    Sorace, Ron

    The maintenance of synchronization in satellite communication systems is critical in contemporary systems, since many signal processing and detection algorithms depend on ascertaining time references. Unfortunately, proper synchronism becomes more difficult to maintain at higher frequencies. Factors such as clock error or jitter, noise, and phase error at a coherent receiver may corrupt a transmitted signal and degrade synchronism at the terminations of a communication link. Further, in some systems an estimate of propagation delay is necessary, but this delay may vary stochastically with the range of the link. This paper presents a model of the components of synchronization error including a simple description of clock error and examination of recursive estimation of the propagation delay time for messages between elements in a satellite communication system. Attention is devoted to jitter, the sources of which are considered to be phase error in coherent reception and jitter in the clock itself.

  15. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  16. Error monitoring in musicians

    PubMed Central

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  17. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  18. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
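
    The paper's modified ML prefilter is derived for its specific water-pipe channel model; a generic GCC time-delay estimator with a regularised PHAT-style weighting (the regularisation factor eps playing a role loosely analogous to the paper's) can be sketched as:

```python
import numpy as np

def gcc_delay(x, y, eps=1e-3):
    """Estimate the delay of y relative to x (in samples) by generalised
    cross-correlation. The phase of the cross-spectral density carries the
    delay; the weighting here is a regularised PHAT-style prefilter, a generic
    stand-in for the paper's modified ML prefilter."""
    n = len(x) + len(y)                          # zero-pad to avoid circular wrap
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    csd = np.conj(X) * Y                         # cross-spectral density estimate
    w = 1.0 / (np.abs(csd) + eps)                # regularised magnitude whitening
    cc = np.fft.irfft(csd * w, n)
    lags = np.concatenate((cc[-(len(x) - 1):], cc[:len(y)]))
    return int(np.argmax(lags)) - (len(x) - 1)   # lag of the correlation peak

# Two "sensor" signals where y lags x by 5 samples
rng = np.random.default_rng(3)
x = rng.standard_normal(256)
y = np.concatenate((np.zeros(5), x))[:256]
```

    The whitening sharpens the correlation peak, which is why prefilter design (and its regularisation in the presence of PSD/CSD estimation errors) matters so much for leak location accuracy.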

  19. Filter induced errors in laser anemometer measurements using counter processors

    NASA Technical Reports Server (NTRS)

    Oberle, L. G.; Seasholtz, R. G.

    1985-01-01

    Simulations of laser Doppler anemometer (LDA) systems have focused primarily on noise studies or biasing errors. Another possible source of error is the choice of filter type and filter cutoff frequency. Before it is applied to the counter portion of the signal processor, a Doppler burst is filtered to remove the pedestal and to reduce noise in the frequency bands outside the region in which the signal occurs. Filtering, however, introduces errors into the measurement of the input signal's frequency, leading to inaccurate results. Errors caused by signal filtering in an LDA counter-processor data acquisition system are evaluated, and filters that reduce these errors are chosen for a specific application.

  20. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
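
    In the standard (Rodgers-style) notation that the paper critiques, the smoothing error covariance on the retrieval grid is written as

```latex
\mathbf{S}_{\mathrm{s}} \;=\; (\mathbf{A}-\mathbf{I})\,\mathbf{S}_{\mathrm{a}}\,(\mathbf{A}-\mathbf{I})^{\mathrm{T}},
```

    where \(\mathbf{A}\) is the averaging kernel matrix and \(\mathbf{S}_{\mathrm{a}}\) the a priori covariance of the state on the chosen grid. The paper's argument, restated in these terms, is that \(\mathbf{S}_{\mathrm{a}}\), and hence \(\mathbf{S}_{\mathrm{s}}\), characterises deviations from the state sampled on that finite grid, which is itself a smoothed representation of the true continuous atmosphere.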

  1. Pediatric antidepressant medication errors in a national error reporting database.

    PubMed

    Rinke, Michael L; Bundy, David G; Shore, Andrew D; Colantuoni, Elizabeth; Morlock, Laura L; Miller, Marlene R

    2010-01-01

    To describe inpatient and outpatient pediatric antidepressant medication errors. We analyzed all error reports from the United States Pharmacopeia MEDMARX database, from 2003 to 2006, involving antidepressant medications and patients younger than 18 years. Of the 451 error reports identified, 95% reached the patient, 6.4% reached the patient and necessitated increased monitoring and/or treatment, and 77% involved medications being used off label. Thirty-three percent of errors cited administering as the macrolevel cause of the error, 30% cited dispensing, 28% cited transcribing, and 7.9% cited prescribing. The most commonly cited medications were sertraline (20%), bupropion (19%), fluoxetine (15%), and trazodone (11%). We found no statistically significant association between medication and reported patient harm; harmful errors involved significantly more administering errors (59% vs 32%, p = .023), errors occurring in inpatient care (93% vs 68%, p = .012) and extra doses of medication (31% vs 10%, p = .025) compared with nonharmful errors. Outpatient errors involved significantly more dispensing errors (p < .001) and more errors due to inaccurate or omitted transcription (p < .001), compared with inpatient errors. Family notification of medication errors was reported in only 12% of errors. Pediatric antidepressant errors often reach patients, frequently involve off-label use of medications, and occur with varying severity and type depending on location and type of medication prescribed. Education and research should be directed toward prompt medication error disclosure and targeted error reduction strategies for specific medication types and settings.

  2. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  3. Frequency retrace of quartz oscillators

    NASA Astrophysics Data System (ADS)

    Euler, F.; Yannoni, N. F.

    Frequency retrace measurements are reported on oven controlled quartz oscillators utilizing AT and SC cut plated and BVA resonators. Prior to full aging, the retrace error is added to the aging effect. With well-aged resonators, after one or several on-off cycles, the frequency settles at a new level characteristic of intermittent operation. Severe frequency shifts have sometimes been found after the first restart following prolonged continuous operation. SC cut resonators appear to show distinctly smaller retrace errors than AT cut.

  4. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. The authors extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
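
    The lookup-table idea can be sketched in a few lines of NumPy. This is an illustrative reimplementation under the paper's byte-data assumption, not the authors' code, and the function names are assumptions.

```python
import numpy as np

def approximation_error(original, approx, err_fn):
    """Sum an error function over all (original, approximating) voxel
    pairs of a byte-valued volume using a table of unique value pairs.

    A 256x256 joint histogram counts how often each value pair occurs,
    so err_fn is evaluated at most 65,536 times regardless of volume
    size; re-evaluating the error after a transfer-function change then
    costs only a pass over the table, not over every voxel."""
    pairs = original.astype(np.int64) * 256 + approx.astype(np.int64)
    counts = np.bincount(pairs.ravel(), minlength=256 * 256)
    counts = counts.reshape(256, 256)
    i, j = np.nonzero(counts)                  # only unique pairs that occur
    return float(np.sum(counts[i, j] * err_fn(i, j)))
```

    Here err_fn would encode the transfer-function-dependent error metric, e.g. the difference between the colors/opacities that the current transfer function assigns to values i and j.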

  5. Dialogues on prediction errors.

    PubMed

    Niv, Yael; Schoenbaum, Geoffrey

    2008-07-01

    The recognition that computational ideas from reinforcement learning are relevant to the study of neural circuits has taken the cognitive neuroscience community by storm. A central tenet of these models is that discrepancies between actual and expected outcomes can be used for learning. Neural correlates of such prediction-error signals have been observed now in midbrain dopaminergic neurons, striatum, amygdala and even prefrontal cortex, and models incorporating prediction errors have been invoked to explain complex phenomena such as the transition from goal-directed to habitual behavior. Yet, like any revolution, the fast-paced progress has left an uneven understanding in its wake. Here, we provide answers to ten simple questions about prediction errors, with the aim of exposing both the strengths and the limitations of this active area of neuroscience research.

  6. Error Processing Techniques for the Modified Read Facsimile Code.

    DTIC Science & Technology

    1981-09-01

    Marshall L. Cain, Senior Electronics Engineer and Assistant Manager, Office of NCS Technology (Technology and Standards), National Communications System. The office, headed by Assistant Manager Marshall L. Cain, is responsible for the management of Federal telecommunications standards. [The remainder of the scanned abstract consists of illegible OCR fragments.]

  7. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software that is logically error-free and that, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  8. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-

  9. Optical linear algebra processors - Noise and error-source modeling

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  10. Parental Reports of Children's Scale Errors in Everyday Life

    ERIC Educational Resources Information Center

    Rosengren, Karl S.; Gutierrez, Isabel T.; Anderson, Kathy N.; Schein, Stevie S.

    2009-01-01

    Scale errors refer to behaviors where young children attempt to perform an action on an object that is too small to effectively accommodate the behavior. The goal of this study was to examine the frequency and characteristics of scale errors in everyday life. To do so, the researchers collected parental reports of children's (age range = 13-21…

  11. A Review of Errors in the Journal Abstract

    ERIC Educational Resources Information Center

    Lee, Eunpyo; Kim, Eun-Kyung

    2013-01-01

    This study examines 29 journal abstracts that had completed review for publication in the year 2012. It was done to investigate the number (percentage) of abstracts that involved errors, the most erroneous part of the abstract, and the types and frequency of errors. Also the purpose expanded to compare the results with those of the previous…

  12. Parental Reports of Children's Scale Errors in Everyday Life

    ERIC Educational Resources Information Center

    Rosengren, Karl S.; Gutierrez, Isabel T.; Anderson, Kathy N.; Schein, Stevie S.

    2009-01-01

    Scale errors refer to behaviors where young children attempt to perform an action on an object that is too small to effectively accommodate the behavior. The goal of this study was to examine the frequency and characteristics of scale errors in everyday life. To do so, the researchers collected parental reports of children's (age range = 13-21…

  13. Speech Errors in Progressive Non-Fluent Aphasia

    ERIC Educational Resources Information Center

    Ash, Sharon; McMillan, Corey; Gunawardena, Delani; Avants, Brian; Morgan, Brianna; Khan, Alea; Moore, Peachie; Gee, James; Grossman, Murray

    2010-01-01

    The nature and frequency of speech production errors in neurodegenerative disease have not previously been precisely quantified. In the present study, 16 patients with a progressive form of non-fluent aphasia (PNFA) were asked to tell a story from a wordless children's picture book. Errors in production were classified as either phonemic,…

  14. Optical linear algebra processors - Noise and error-source modeling

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  15. Optical linear algebra processors: noise and error-source modeling.

    PubMed

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  16. Speech Errors in Progressive Non-Fluent Aphasia

    ERIC Educational Resources Information Center

    Ash, Sharon; McMillan, Corey; Gunawardena, Delani; Avants, Brian; Morgan, Brianna; Khan, Alea; Moore, Peachie; Gee, James; Grossman, Murray

    2010-01-01

    The nature and frequency of speech production errors in neurodegenerative disease have not previously been precisely quantified. In the present study, 16 patients with a progressive form of non-fluent aphasia (PNFA) were asked to tell a story from a wordless children's picture book. Errors in production were classified as either phonemic,…

  17. Elimination of error factors, affecting EM and seismic inversions

    NASA Astrophysics Data System (ADS)

    Magomedov, M.; Zuev, M. A.; Korneev, V. A.; Goloshubin, G.; Zuev, J.; Brovman, Y.

    2013-12-01

    EM or seismic data inversions are affected by many factors which may conceal the responses from target objects. We address here the contributions from the following effects: 1) Pre-survey spectral sensitivity factor. Preliminary information about a target layer can be used for a pre-survey estimation of the required frequency domain and signal level. A universal approach allows making such estimations in real time, helping the survey crew to optimize the acquisition process. 2) Preliminary identification of velocities and their dispersions for all the seismic waves arising in a stratified medium has become a fast working tool, based on an exact analytical solution. 3) Vertical gradient effects. For most layers the log data scatter, requiring an averaging pattern. A linear gradient within each representative layer is a reasonable compromise between the required inversion accuracy and forward-modeling complexity. 4) The effect of the seismic source's radial component becomes comparable with the vertical part for explosive sources. If this effect is not taken into account, a serious modeling error takes place. This problem has an algorithmic solution. 5) Seismic modeling is often based on different representations of a source, formulated either for a force or for a potential. The wave amplitudes depend on the formulation, making an inversion result sensitive to it. 6) Asymmetrical seismic waves (modified Rayleigh) in symmetrical geometry around a liquid fracture come from the S-wave and merge with the modified Krauklis wave at high frequencies. A detailed analysis of this feature allows a spectral range optimization for the proper wave's extraction. 7) An ultrasonic experiment was conducted to show the appearance of different waves for a super-thin water-saturated fracture between two Plexiglas plates, confirmed by comparison with theoretical computations. 8) A 'sandwich effect' was detected by comparison with an averaged layer's effect. This opens an opportunity of the shale gas direct

  18. Parallel systems of error processing in the brain.

    PubMed

    Yordanova, Juliana; Falkenstein, Michael; Hohnsbein, Joachim; Kolev, Vasil

    2004-06-01

    Major neurophysiological principles of performance monitoring are not precisely known. It is currently debated in cognitive neuroscience whether an error-detection neural system is involved in behavioral control and adaptation. Such a system should generate error-specific signals, but their existence is questioned by observations that correct and incorrect reactions may elicit similar neuroelectric potentials. A new approach based on a time-frequency decomposition of event-related brain potentials was applied to extract covert sub-components from the classical error-related negativity (Ne) and correct-response-related negativity (Nc) in humans. A unique error-specific sub-component from the delta (1.5-3.5 Hz) frequency band was revealed only for Ne, which was associated with error detection at the level of overall performance monitoring. A sub-component from the theta frequency band (4-8 Hz) was associated with motor response execution, but this sub-component also differentiated error from correct reactions indicating error detection at the level of movement monitoring. It is demonstrated that error-specific signals do exist in the brain. More importantly, error detection may occur in multiple functional systems operating in parallel at different levels of behavioral control.
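
    The band-separation step can be illustrated crudely with an FFT band mask. This is a rough stand-in for the paper's time-frequency decomposition, not the authors' method; the sampling rate and variable names are assumptions.

```python
import numpy as np

def band_component(waveform, fs, lo, hi):
    """Return the sub-component of an averaged event-related potential
    within the band [lo, hi] Hz by zeroing all other FFT bins.
    (A hard spectral mask only approximates a proper time-frequency
    decomposition; it is used here to illustrate band separation.)"""
    spec = np.fft.rfft(waveform)
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, len(waveform))

# Hypothetical usage on an Ne waveform sampled at 250 Hz:
# delta = band_component(ne, 250.0, 1.5, 3.5)   # error-specific band
# theta = band_component(ne, 250.0, 4.0, 8.0)   # movement-monitoring band
```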

  19. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.

  20. Human error, not communication and systems, underlies surgical complications.

    PubMed

    Fabri, Peter J; Zayas-Castro, José L

    2008-10-01

    This study prospectively assesses the underlying errors contributing to surgical complications over a 12-month period in a complex academic department of surgery using a validated scoring template. Studies in "high reliability organizations" suggest that systems failures are responsible for errors. Reports from the aviation industry target communication failures in the cockpit. No prior studies have developed a validated classification system and determined the types of errors responsible for surgical complications. A classification system of medical error during operation was created and validated, and data were collected on the frequency, type, and severity of medical errors in 9,830 surgical procedures. Statistical analyses of concordance, validity, and reliability were performed. Reported major complications occurred in 332 patients (3.4%), with error in 78.3%: errors in surgical technique (63.5%), judgment errors (29.6%), inattention to detail (29.3%), and incomplete understanding (22.7%). Error contributed more than 50% to the complication in 75%. A total of 13.6% of cases had error but no injury, 34.4% prolongation of hospitalization, 25.1% temporary disability, 8.4% permanent disability, and 16.0% death. In 20%, the error was a "mistake" (doing the wrong thing), and in 58% a "slip" (doing the right thing incorrectly). System errors (2%) and communication errors (2%) were infrequently identified. After surgical technique, most surgical error was caused by human factors: judgment, inattention to detail, and incomplete understanding, and not by organizational/system errors or breaks in communication. Training efforts to minimize error and enhance patient safety must address human factor causes of error.

  1. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  2. Satellite Photometric Error Determination

    DTIC Science & Technology

    2015-10-18

    the errors associated with optical photometry used in non-resolved object characterization for the Space Situational Awareness (SSA) community. We begin with an overview of the standard astronomical techniques used to measure the brightness of spatially unresolved objects (point source photometry) in deep space. After discussing the standard astronomical techniques, we present the application of astronomical photometry for the purposes of

  3. Frequency coupling in dual frequency capacitively coupled radio-frequency plasmas

    SciTech Connect

    Gans, T.; Schulze, J.; O'Connell, D.; Czarnetzki, U.; Faulkner, R.; Ellingboe, A. R.; Turner, M. M.

    2006-12-25

    An industrial, confined, dual frequency, capacitively coupled, radio-frequency plasma etch reactor (Exelan®, Lam Research) has been modified for spatially resolved optical measurements. Space and phase resolved optical emission spectroscopy yields insight into the dynamics of the discharge. A strong coupling of the two frequencies is observed in the emission profiles. Consequently, the ionization dynamics, probed through excitation, is determined by both frequencies. The control of plasma density by the high frequency is, therefore, also influenced by the low frequency. Hence, separate control of plasma density and ion energy is rather complex.

  4. (Errors in statistical tests)3.

    PubMed

    Phillips, Carl V; MacLehose, Richard F; Kaufman, Jay S

    2008-07-14

    departure from uniformity, not just its test statistics. We found variation in digit frequencies in the additional data and describe the distinctive pattern of these results. Furthermore, we found that the combined data diverge unambiguously from a uniform distribution. The explanation for this divergence seems unlikely to be that suggested by the previous authors: errors in calculations and transcription.

  5. Report of the Subpanel on Error Characterization and Error Budgets

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state of knowledge of both user positioning requirements and error models of current and proposed satellite systems is reviewed. In particular the error analysis models for LANDSAT D are described. Recommendations are given concerning the geometric error model for the thematic mapper; interactive user involvement in system error budgeting and modeling and verification on real data sets; and the identification of a strawman mission for modeling key error sources.

  6. Analyzing communication errors in an air medical transport service.

    PubMed

    Dalto, Joseph D; Weir, Charlene; Thomas, Frank

    2013-01-01

    Poor communication can result in adverse events. Presently, no standards exist for classifying and analyzing air medical communication errors. This study sought to determine the frequency and types of communication errors reported within an air medical quality and safety assurance reporting system. Of 825 quality assurance reports submitted in 2009, 278 were randomly selected and analyzed for communication errors. Each communication error was classified and mapped to Clark's communication level hierarchy (ie, levels 1-4). Descriptive statistics were performed, and comparisons were evaluated using chi-square analysis. Sixty-four communication errors were identified in 58 reports (21% of 278). Of the 64 identified communication errors, only 18 (28%) were classified by the staff to be communication errors. Communication errors occurred most often at level 1 (n = 42/64, 66%), followed by level 4 (n = 21/64, 33%). Level 2 and 3 communication failures were rare (<1%). Communication errors were found in a fifth of quality and safety assurance reports. The reporting staff identified less than a third of these errors. Nearly all communication errors (99%) occurred at either the lowest level of communication (level 1, 66%) or the highest level (level 4, 33%). An air medical communication ontology is necessary to improve the recognition and analysis of communication errors. Copyright © 2013 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  7. Errors and mistakes in breast ultrasound diagnostics

    PubMed Central

    Jakubowski, Wiesław; Migda, Bartosz

    2012-01-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in any imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. This article presents the most frequent errors in ultrasound, including those caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement, the time gain curve, or the range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors are listed. Methods for minimizing the number of errors are discussed, including those related to appropriate examination technique, taking into account data from the case history, and the use of the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. Examples of errors resulting from the technical conditions of the method are presented, as well as those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions. PMID:26675358

  8. Errors and mistakes in breast ultrasound diagnostics.

    PubMed

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in any imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. This article presents the most frequent errors in ultrasound, including those caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement, the time gain curve, or the range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors are listed. Methods for minimizing the number of errors are discussed, including those related to appropriate examination technique, taking into account data from the case history, and the use of the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. Examples of errors resulting from the technical conditions of the method are presented, as well as those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  9. Scientific Impacts of Wind Direction Errors

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert

    2004-01-01

    An assessment was made of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak-wind (less than 7 m/s) conditions; such weak winds cover most of the tropical, sub-tropical, and coastal oceans. Introducing these errors into the semi-daily winds causes, on average, 5% changes in the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms for redistributing heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's role in modifying Earth's climate. Simulation with an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to conditions during an El Niño episode. Similar wind-direction errors cause significant changes in sea-surface temperature and sea-level patterns in a coastal model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere; when directional information below 7 m/s was withheld, approximately 40% of the improvement was lost.
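
    The Ekman transport mentioned above follows from a simple balance: the volume transport per unit width is directed 90 degrees to the right of the wind stress in the Northern Hemisphere, with magnitude τ/(ρf). The sketch below (all numbers illustrative, not from the paper) shows how a wind-direction error perturbs the meridional component of that transport:

    ```python
    import math

    RHO = 1025.0        # seawater density, kg/m^3
    OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s

    def coriolis(lat_deg):
        """Coriolis parameter f = 2*Omega*sin(latitude)."""
        return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

    def ekman_transport(tau_x, tau_y, lat_deg):
        """Ekman volume transport per unit width (m^2/s), rotated 90 deg
        to the right of the surface wind stress (Northern Hemisphere)."""
        f = coriolis(lat_deg)
        return tau_y / (RHO * f), -tau_x / (RHO * f)

    # weak-wind stress of 0.02 N/m^2 blowing due east at 15 N
    u0, v0 = ekman_transport(0.02, 0.0, 15.0)

    # the same stress with a 10-degree wind-direction error
    err = math.radians(10.0)
    u1, v1 = ekman_transport(0.02 * math.cos(err), 0.02 * math.sin(err), 15.0)

    # fractional change in the meridional (poleward/equatorward) transport
    change = abs(v1 - v0) / abs(v0)
    ```

    For a pure rotation of the stress vector the meridional transport scales as cos of the direction error, so a 10-degree error changes it by about 1.5%; the 5% figure in the abstract reflects the full distribution of errors and winds.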

  10. Transmission errors and bearing contact of spur, helical and spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Zhang, J.; Lee, H.-T.; Handschuh, R. F.

    1988-01-01

    An investigation of the transmission errors and bearing contact of spur, helical, and spiral bevel gears was performed. Modified tooth surfaces are proposed for these gears in order to absorb linear transmission errors caused by gear misalignment and to localize the bearing contact. Numerical examples for spur, helical, and spiral bevel gears illustrate the behavior of the modified gear surfaces under misalignment and assembly errors. The numerical results indicate that the modified surfaces perform with a low level of transmission error in a nonideal operating environment.

  12. Modified SEAGULL

    NASA Technical Reports Server (NTRS)

    Salas, M. D.; Kuehn, M. S.

    1994-01-01

    Original version of program incorporated into program SRGULL (LEW-15093) for use on National Aero-Space Plane project, its duty being to model forebody, inlet, and nozzle portions of vehicle. However, real-gas chemistry effects in hypersonic flow fields limited accuracy of that version, because it assumed perfect-gas properties. As a result, SEAGULL modified according to real-gas equilibrium-chemistry methodology. This program analyzes two-dimensional, hypersonic flows of real gases. Modified version of SEAGULL maintains as much of original program as possible, and retains ability to execute original perfect-gas version.

  13. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  14. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  15. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  16. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
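
    The idea can be illustrated with a minimal interval class (a sketch, not the INTLAB toolbox used in the article): every arithmetic operation returns an interval guaranteed to contain the exact result, so measurement uncertainty propagates through a formula automatically.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Interval:
        """Closed interval [lo, hi]; each operation returns an interval
        guaranteed to enclose the exact result."""
        lo: float
        hi: float

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            p = (self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi)
            return Interval(min(p), max(p))

        def width(self):
            return self.hi - self.lo

    # measured values carried as intervals (nominal +/- uncertainty)
    R = Interval(99.5, 100.5)    # resistance, ohms
    I = Interval(0.199, 0.201)   # current, amps

    P = I * I * R   # power P = I^2 * R, with guaranteed enclosing bounds
    ```

    Note that the naive product `I * I` treats the two factors as independent, so the bound is slightly wider than necessary; handling this "dependency problem" is one of the refinements a full interval library provides.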

  19. Torsional Vibration of Machines with Gear Errors

    NASA Astrophysics Data System (ADS)

    Lees, A. W.; Friswell, M. I.; Litak, G.

    2011-07-01

    Vibration and noise induced by errors and faults in gear meshes are key concerns for the performance of many rotating machines and the prediction of developing faults. Of particular concern are displacement errors in the gear mesh and for rigid gears these may be modelled to give a linear set of differential equations with forced excitation. Other faults, such as backlash or friction, may also arise and give non-linear models with rich dynamics. This paper considers the particular case of gear errors modelled as a Fourier series based on the tooth meshing frequency, leading immediately to non-linear equations of motion, even without the presence of other non-linear phenomena. By considering the perturbed response this system may be modelled as a parametrically excited system. This paper motivates the analysis, derives the equations of motion for the case of a single gear mesh, and provides example response simulations of a boiler feed pump including phase portraits and power spectra.
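
    As a rough companion to the model described above, the following sketch simulates the simplest linear version of a single gear mesh: a torsional oscillator forced by a transmission error written as a truncated Fourier series at the tooth-meshing frequency. The paper's full model additionally treats the error as a parametric excitation; all parameter values here are illustrative.

    ```python
    import math

    # single-mesh torsional model (linearized, rigid gears):
    #   J*theta'' + c*theta' + k*theta = k*e(t)
    # where e(t) is the static transmission error at the mesh frequency
    J, c, k = 1.0, 0.5, 400.0
    f_mesh = 10.0   # tooth-meshing frequency, Hz

    def mesh_error(t, harmonics=((1, 1e-3), (2, 3e-4))):
        """Transmission error as a Fourier series: sum of a_n*cos(2*pi*n*f_mesh*t)."""
        return sum(a * math.cos(2.0 * math.pi * n * f_mesh * t) for n, a in harmonics)

    def simulate(t_end=2.0, dt=1e-4):
        """Integrate the forced oscillator with semi-implicit Euler."""
        theta, omega, t = 0.0, 0.0, 0.0
        history = []
        while t < t_end:
            acc = (k * mesh_error(t) - c * omega - k * theta) / J
            omega += acc * dt
            theta += omega * dt
            history.append(theta)
            t += dt
        return history

    response = simulate()
    peak = max(abs(x) for x in response)   # bounded response for the linear model
    ```

    The linear model's response stays bounded; it is the parametric (time-varying stiffness) terms discussed in the paper that open the door to instability and richer dynamics.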

  20. Defining error in anatomic pathology.

    PubMed

    Sirota, Ronald L

    2006-05-01

    Although much has been said and written about medical error, and about error in pathology, since the publication of the Institute of Medicine's report on medical error in 1999, precise definitions of what constitutes error in anatomic pathology do not exist for the specialty. Without better definitions, it is impossible to judge errors in pathology accurately, and the lack of standardized definitions has implications for patient care and for the legal judgment of malpractice. This review, based on the existing literature, discusses the goals of anatomic pathology and the problems inherent in applying those goals to the judgment of error, proffers definitions of major and minor errors in pathology, and characterizes error in anatomic pathology in relation to the classic laboratory test cycle.

  1. Manson's triple error.

    PubMed

    F, Delaporte

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error.

  2. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
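
    For context, the bit-replacement baseline that the patent improves upon can be sketched in a few lines (hypothetical helper names; the patented modular method additionally permutes the processing order and roughly doubles the embeddable payload):

    ```python
    def embed_lsb(host, bits_per_sample, payload_bits):
        """Replace the low-order bits of each host sample with payload bits."""
        mask = (1 << bits_per_sample) - 1
        it = iter(payload_bits)
        out = []
        for sample in host:
            chunk = 0
            for _ in range(bits_per_sample):
                chunk = (chunk << 1) | next(it, 0)   # pad with 0s when payload ends
            out.append((sample & ~mask) | chunk)
        return out

    def extract_lsb(stego, bits_per_sample):
        """Read the low-order bits back out, most significant bit first."""
        return [(sample >> i) & 1
                for sample in stego
                for i in reversed(range(bits_per_sample))]

    host = [200, 100, 50]            # e.g. 8-bit image samples
    payload = [1, 0, 1, 1, 0, 1]
    stego = embed_lsb(host, 2, payload)   # -> [202, 103, 49]
    recovered = extract_lsb(stego, 2)
    ```

    Each sample changes by at most 3 here (the two low-order bits); the patent's modular scheme is designed to keep that perturbation smaller for the same payload.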

  3. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultra-reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  4. Hyponatremia: management errors.

    PubMed

    Seo, Jang Won; Park, Tae Jin

    2006-11-01

    Rapid correction of hyponatremia is frequently associated with increased morbidity and mortality. Therefore, it is important to estimate the proper volume and type of infusate required to increase the serum sodium concentration predictably. The major common management errors during the treatment of hyponatremia are inadequate investigation, treatment with fluid restriction for diuretic-induced hyponatremia and treatment with fluid restriction plus intravenous isotonic saline simultaneously. We present two cases of management errors. One is about the problem of rapid correction of hyponatremia in a patient with sepsis and acute renal failure during continuous renal replacement therapy in the intensive care unit. The other is the case of hypothyroidism in which hyponatremia was aggravated by intravenous infusion of dextrose water and isotonic saline infusion was erroneously used to increase serum sodium concentration.

  5. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    This report presents the results of research aimed at understanding the causes of human error in complex systems such as aircraft, nuclear power plants, and chemical processing plants. The research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. The results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  6. Pulse Shaping Entangling Gates and Error Suppression

    NASA Astrophysics Data System (ADS)

    Hucul, D.; Hayes, D.; Clark, S. M.; Debnath, S.; Quraishi, Q.; Monroe, C.

    2011-05-01

    Control of spin dependent forces is important for generating entanglement and realizing quantum simulations in trapped ion systems. Here we propose and implement a composite pulse sequence based on the Molmer-Sorenson gate to decrease gate infidelity due to frequency and timing errors. The composite pulse sequence uses an optical frequency comb to drive Raman transitions simultaneously detuned from trapped ion transverse motional red and blue sideband frequencies. The spin dependent force displaces the ions in phase space, and the resulting spin-dependent geometric phase depends on the detuning. Voltage noise on the rf electrodes changes the detuning between the trapped ions' motional frequency and the laser, decreasing the fidelity of the gate. The composite pulse sequence consists of successive pulse trains from counter-propagating frequency combs with phase control of the microwave beatnote of the lasers to passively suppress detuning errors. We present theory and experimental data with one and two ions where a gate is performed with a composite pulse sequence. This work was supported by the U.S. ARO, IARPA, the DARPA OLE program, the MURI program; the NSF PIF Program; the NSF Physics Frontier Center at JQI; the European Commission AQUTE program; and the IC postdoc program administered by the NGA.

  7. Reducing Spreadsheet Errors

    DTIC Science & Technology

    2009-09-01

    …Basic for Applications (VBA) to improve spreadsheets. Programming and coding portions of a spreadsheet in VBA (especially iteration) can reduce… effort as well as errors. Users unfamiliar with VBA may begin learning by "recording macros" in Excel. Microsoft's online tutorials… (www.office.microsoft.com/en-us/excel) provide overviews of this and other VBA capabilities. 5) Thorough documentation of spreadsheet development and application is…

  8. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  9. Surface temperature measurement errors

    SciTech Connect

    Keltner, N.R.; Beck, J.V.

    1983-05-01

    Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.

  10. Feed-forward frequency offset estimation for 32-QAM optical coherent detection.

    PubMed

    Xiao, Fei; Lu, Jianing; Fu, Songnian; Xie, Chenhui; Tang, Ming; Tian, Jinwen; Liu, Deming

    2017-04-17

    Due to the non-rectangular distribution of its constellation points, traditional fast-Fourier-transform-based frequency offset estimation (FFT-FOE) is no longer suitable for 32-QAM signals. Here, we report a modified FFT-FOE technique that selects and digitally amplifies the inner QPSK ring of 32-QAM after adaptive equalization, which we call QPSK-selection-assisted FFT-FOE. Simulation results show that no FOE error occurs with an FFT size of only 512 symbols when the signal-to-noise ratio (SNR) is above 17.5 dB using the proposed technique, whereas the error probability of the traditional FFT-FOE scheme for 32-QAM remains intolerably high. Finally, the proposed FOE scheme works well for a 10 Gbaud dual-polarization (DP)-32-QAM signal at the 20% forward error correction (FEC) threshold of BER = 2×10^-2, under back-to-back (B2B) transmission.
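
    The FFT-FOE principle the paper builds on can be sketched as follows: raising a QPSK signal to the 4th power strips the modulation, leaving a tone at four times the frequency offset, whose spectral peak is then located. A plain O(n^2) DFT stands in for the FFT for clarity, and the paper's QPSK-ring selection step is omitted:

    ```python
    import cmath, math

    def fft_foe(symbols, baud_rate):
        """Classic 4th-power FOE: find the spectral peak of symbols**4 and
        divide the peak frequency by 4 to recover the carrier offset."""
        x = [s ** 4 for s in symbols]
        n = len(x)
        best_k, best_mag = 0, -1.0
        for k in range(n):
            acc = sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
            if abs(acc) > best_mag:
                best_mag, best_k = abs(acc), k
        if best_k > n // 2:          # map bin index to [-n/2, n/2)
            best_k -= n
        return best_k / n * baud_rate / 4.0

    # QPSK symbols with a known offset of 1% of the baud rate (illustrative)
    baud, f_off = 1.0, 0.01
    qpsk = [cmath.exp(1j * (math.pi / 4 + (math.pi / 2) * (i % 4))) for i in range(256)]
    rx = [s * cmath.exp(2j * math.pi * f_off * i) for i, s in enumerate(qpsk)]
    est = fft_foe(rx, baud)   # accurate to within one frequency bin
    ```

    The estimate is quantized to the FFT bin spacing divided by four, which is why the FFT size (512 symbols in the paper) matters for accuracy.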

  11. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous approximations of varying accuracy for different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids, including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with a particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross-validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by generating not a single functional but a probability distribution of functionals, represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.
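
    The ensemble-based error bar works as in this sketch. The numbers are stand-ins: a real BEEF ensemble perturbs the functional's expansion coefficients and re-evaluates the property with each member, rather than drawing Gaussian noise around a best-fit value.

    ```python
    import math, random

    random.seed(0)

    # hypothetical ensemble of 2000 functionals, each giving a slightly
    # different prediction of a heat of formation (eV)
    best_fit = -2.10
    ensemble = [best_fit + random.gauss(0.0, 0.15) for _ in range(2000)]

    mean = sum(ensemble) / len(ensemble)
    spread = math.sqrt(sum((e - mean) ** 2 for e in ensemble) / (len(ensemble) - 1))
    # 'spread' is the ensemble standard deviation, reported as the error bar
    ```

    The key design choice is that the ensemble is calibrated during fitting so that its spread on held-out data tracks the actual prediction error, turning a single DFT number into a number with an uncertainty.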

  12. Influence of modulation frequency in rubidium cell frequency standards

    NASA Technical Reports Server (NTRS)

    Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.

    1983-01-01

    The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.
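
    The slope discussed above can be illustrated for the simplest case, square-wave frequency modulation of a Lorentzian resonance: the demodulated error signal is the difference of the line profile at the two modulation extremes, and its slope at line center sets the sensitivity. This is a sketch with illustrative numbers, not the paper's full optical-pumping model:

    ```python
    def lorentzian(delta, fwhm):
        """Normalized Lorentzian line profile versus detuning from line center."""
        return 1.0 / (1.0 + (2.0 * delta / fwhm) ** 2)

    def error_signal(delta, dev, fwhm):
        """Demodulated error signal for square-wave frequency modulation:
        the difference of the line profile at the two modulation extremes."""
        return lorentzian(delta + dev, fwhm) - lorentzian(delta - dev, fwhm)

    fwhm = 1.0          # full-width at half-maximum (arbitrary units)
    dev = 0.35 * fwhm   # frequency deviation of the modulation

    # numerical slope of the error signal at line center: the discriminant
    # sensitivity that modulation amplitude and saturation are tuned to maximize
    h = 1e-6
    slope = (error_signal(h, dev, fwhm) - error_signal(-h, dev, fwhm)) / (2.0 * h)
    ```

    The error signal vanishes exactly at line center and crosses zero with a finite slope; sweeping `dev` shows the trade-off the paper quantifies between modulation amplitude and discriminant slope.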

  13. Error analysis of optical correlators

    NASA Technical Reports Server (NTRS)

    Ma, Paul W.; Reid, Max B.; Downie, John D.

    1992-01-01

    With the growing interest in using binary phase-only filters (BPOFs) in optical correlators implemented on magnetooptic spatial light modulators, an understanding of the effects of errors in system alignment and optical components is critical to obtaining optimal system performance. We present simulations of optical correlator performance degradation in the presence of eight errors. We break these eight errors into three groups: 1) alignment errors, 2) errors due to a combination of component imperfections and alignment errors, and 3) errors which result solely from non-ideal components. Under the first group, we simulate errors in the distance from the object to the first principal plane of the transform lens, the distance from the second principal plane of the transform lens to the filter plane, and rotational misalignment of the input mask with the filter mask. Next we consider errors which result from a combination of alignment and component imperfections. These include errors in the transform lens, the phase compensation lens, and the inverse Fourier transform lens. Lastly we have the component errors resulting from the choice of spatial light modulator. These include contrast error and phase errors caused by the non-uniform flatness of the masks. The effects of each individual error are discussed, and the result of combining all eight errors under assumptions of reasonable tolerances and system parameters is also presented. Conclusions are drawn as to which tolerances are most critical for optimal system performance.

  14. Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2014-09-30

    …Majda, based on earlier theoretical work. 1. Dynamic Stochastic Superresolution of sparsely observed turbulent systems, M. Branicki (postdoc)…of numerical models. Here, we introduce and study a suite of general Dynamic Stochastic Superresolution (DSS) algorithms and show that, by…resolving subgridscale turbulence through Dynamic Stochastic Superresolution utilizing aliased grids is a potential breakthrough for practical online…

  15. Mid-Range Spatial Frequency Errors in Optical Components.

    DTIC Science & Technology

    1983-01-01

    …pattern. Malacara (1978, pp. 356-359) describes the diffraction intensity distribution on either side of the focal plane and presents a diagram of the…Leoble and Co., Ltd., Aug. 1963. Kintner, Eric C., and Richard M. Sillitto. "A New Analytic Method for Computing the Optical Transfer Function." Optica…2, 1976. Malacara, Daniel (ed). Optical Shop Testing. New York: John Wiley and Sons, 1978. Reticon Corporation. Reticon G Series Data Sheet. Sunnyvale, CA: Reticon, 1976.

  16. Structural Damage Detection Using Frequency Domain Error Localization.

    DTIC Science & Technology

    1994-12-01

    …APPENDIX D. FE MODEL / COMPUTER CODES. The following is a brief description of the MATLAB routines employed in this thesis…R.R., Structural Dynamics, An Introduction to Computer Methods, pp. 383-387, John Wiley and Sons, Inc., 1981. 8. Guyan, R.J., "Reduction of Stiffness…official policy or position of the Department of Defense or the U.S. Government.

  17. Data Properties Categorization to Improve Scientific Sensor Data Error Detection

    NASA Astrophysics Data System (ADS)

    Gallegos, I.; Gates, A.; Tweedie, C. E.

    2009-12-01

    Recent advancements in scientific sensor data acquisition technologies have increased the amount of data collected in near-real time. Although the need for error detection in such data sets is widely acknowledged, few organizations to date have automated random and systematic error detection. This poster presents the results of a broad survey of the literature on scientific sensor data collected through networks and environmental observatories, with the aim of identifying research priorities for the development of automated error detection mechanisms. The key finding of this survey is that there appears to be no overarching consensus about error detection criteria in the environmental sciences, and that this likely limits the development and implementation of automated error detection in this domain. The literature survey focused on identifying scientific projects from institutions that have incorporated error detection into their systems, the type of data analyzed, and the type of sensor error detection properties as defined by each project. The projects have mechanisms that perform error detection both at field sites and in data centers. The survey was intended to capture a representative sample of projects with published error detection criteria. Several scientific projects have included error detection, mostly as part of the system's source code; however, error detection properties that are embedded or hard-coded in the source code are difficult to refine, and a software developer must modify the source code every time a new error detection property or a modification to an existing property is needed. An alternative to hard-coded error detection properties is an error-detection mechanism, independent of the system used to collect the sensor data, that automatically detects errors in the supported type of data. Such a mechanism would allow scientists to specify and reuse error detection properties and uses the specified properties

  18. Cognitive control of conscious error awareness: error awareness and error positivity (Pe) amplitude in moderate-to-severe traumatic brain injury (TBI)

    PubMed Central

    Logan, Dustin M.; Hill, Kyle R.; Larson, Michael J.

    2015-01-01

    Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212

  20. Position error propagation in the simplex strapdown navigation system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system are documented. Improving the long-term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed errors containing the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require an update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator that provided plots of error time functions in response to various source error values.
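
    The bounded-versus-unbounded distinction drawn above has a simple closed form in the idealized single-channel case: an accelerometer bias b produces a bounded Schuler oscillation, while a constant gyro drift feeds a tilt error that grows with time. This is a textbook sketch with illustrative bias values, not the report's error propagator:

    ```python
    import math

    G = 9.81            # gravity, m/s^2
    R_EARTH = 6.371e6   # Earth radius, m
    OMEGA_S = math.sqrt(G / R_EARTH)   # Schuler frequency, rad/s (period ~84.4 min)

    def pos_error_accel_bias(b, t):
        """Position error from a constant accelerometer bias b (m/s^2):
        a bounded Schuler oscillation, dx = (b/w^2)*(1 - cos(w*t))."""
        return (b / OMEGA_S**2) * (1.0 - math.cos(OMEGA_S * t))

    def pos_error_gyro_bias(eps, t):
        """Position error from a constant gyro drift eps (rad/s): the tilt
        grows as eps*t, so dx = (g*eps/w^2)*(t - sin(w*t)/w) is unbounded."""
        return (G * eps / OMEGA_S**2) * (t - math.sin(OMEGA_S * t) / OMEGA_S)

    period = 2.0 * math.pi / OMEGA_S              # one Schuler period, ~5063 s
    e_accel = pos_error_accel_bias(1e-4, period)  # returns to ~zero each period
    e_gyro = pos_error_gyro_bias(1e-7, 10.0 * period)  # keeps growing
    ```

    Both expressions solve dx'' + w^2*dx = forcing with zero initial conditions; the gyro-bias term carries the secular t factor, which is why the report finds gyro bias (with azimuth error coupled to velocity) to be the only unbounded contribution.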

  1. Drug dispensing errors in a ward stock system.

    PubMed

    Andersen, Stig Ejdrup

    2010-02-01

    The aim of this study was to determine the frequency of drug dispensing errors in a traditional ward stock system operated by nurses and to investigate the effect of potential contributing factors. This was a descriptive study conducted in a teaching hospital from January 2005 to June 2007. In five wards, samples of dispensed solid drugs were collected prospectively and compared with the prescriptions. Data were evaluated using multivariable logistic regression. Overall, 2173 samples were collected, 95.5% of which were correctly dispensed (95% CI 94.5-96.2). In total, 124 errors were identified in 6715 opportunities for error, an error rate of 1.85 errors per 100 opportunities (95% CI 1.54-2.20). Omission of a dose was the predominant type of error, while vitamins and minerals, drugs for acid-related diseases and antipsychotic drugs were the drugs most frequently affected. Multivariable analysis showed that surgical and psychiatric settings were more susceptible to involvement in dispensing errors and that polypharmacy was a risk factor. In this ward stock system, dispensing errors are relatively common; they depend on specialty and are associated with polypharmacy. These results indicate that strategies to reduce dispensing errors should address polypharmacy and focus on high-risk units. This should, however, be substantiated by a future trial.
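
    The error rate and confidence interval quoted above can be reproduced (to rounding) from the raw counts. The abstract does not state which interval method the authors used; the Wilson score interval below is an assumption that happens to match closely:

    ```python
    import math

    def wilson_ci(successes, n, z=1.96):
        """Wilson score interval for a binomial proportion (95% for z=1.96)."""
        p = successes / n
        denom = 1.0 + z * z / n
        center = (p + z * z / (2.0 * n)) / denom
        half = z * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n)) / denom
        return center - half, center + half

    errors, opportunities = 124, 6715        # figures from the abstract
    rate = 100.0 * errors / opportunities    # ~1.85 errors per 100 opportunities
    lo, hi = (100.0 * b for b in wilson_ci(errors, opportunities))
    ```

    The computed interval is roughly 1.55 to 2.20 per 100 opportunities, consistent with the published 1.54-2.20 (the small discrepancy at the lower bound suggests an exact binomial method).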

  2. Errorless and errorful learning modulated by transcranial direct current stimulation

    PubMed Central

    2011-01-01

    Background Errorless learning is advantageous over trial and error learning (errorful learning) as errors are avoided during learning resulting in increased memory performance. Errorful learning challenges the executive control system of memory processes as the erroneous items compete with the correct items during retrieval. The left dorsolateral prefrontal cortex (DLPFC) is a core region involved in this executive control system. Transcranial direct current stimulation (tDCS) can modify the excitability of underlying brain functioning. Results In a single blinded tDCS study one group of young healthy participants received anodal and another group cathodal tDCS of the left DLPFC each compared to sham stimulation. Participants had to learn words in an errorless and an errorful manner using a word stem completion paradigm. The results showed that errorless compared to errorful learning had a profound effect on the memory performance in terms of quality. Anodal stimulation of the left DLPFC did not modulate the memory performance following errorless or errorful learning. By contrast, cathodal stimulation hampered memory performance after errorful learning compared to sham, whereas there was no modulation after errorless learning. Conclusions In conclusion, the study further supports the advantages of errorless learning over errorful learning. Moreover, cathodal stimulation of the left DLPFC hampered memory performance following the conflict-inducing errorful learning as compared to no modulation after errorless learning, emphasizing the importance of the left DLPFC in executive control of memory. PMID:21781298

  3. Outpatient Prescribing Errors and the Impact of Computerized Prescribing

    PubMed Central

    Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W

    2005-01-01

    Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752

  4. Modifiability Tactics

    DTIC Science & Technology

    2007-09-01

    about purchasing paper copies of SEI reports, please visit the publications portion of our Web site (http://www.sei.cmu.edu/publications/pubweb.html...architects need to understand how architectural tactics and patterns relate and how to use them effectively. In this report, we explore the relation ...architecture transformations that support the achievement of modifiability [Bass 2003]. In this report, we relate coupling and cohesion to tactics

  5. Monthly streamflow prediction using modified EMD-based support vector machine

    NASA Astrophysics Data System (ADS)

    Huang, Shengzhi; Chang, Jianxia; Huang, Qiang; Chen, Yutong

    2014-04-01

    Accurate monthly streamflow prediction is of great significance for the operation, planning and dispatching of hydropower stations. The main goal of this study is therefore to investigate the accuracy of a modified EMD-SVM model for monthly streamflow forecasting in the Wei River Basin; the model improves on the conventional EMD-SVM model by removing the high-frequency component (IMF1). The EMD-SVM model is obtained by combining empirical mode decomposition and support vector machine. To acquire the optimal c and g values of the SVM, the grid search method was employed. Three standard quantitative statistical performance evaluation measures, root mean squared error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE), were employed to evaluate the performances of the ANN, SVM, EMD-SVM and M-EMDSVM models. The comparison of results reveals that the M-EMDSVM approach provides a superior alternative to the ANN, SVM and EMD-SVM models for forecasting monthly streamflow at Huaxian hydrological station, where its pass rate of prediction reaches 82.6%. To further illustrate the stability and representativeness of the modified EMD-SVM model, the Lintong and Xianyang stations were used to verify the model. The results show that the modified EMD-SVM model has good stability, great representativeness and high prediction precision.
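
    The three performance measures named in the abstract are standard and easy to state precisely. A minimal sketch of their definitions (the flow values below are illustrative only, not Wei River data):

```python
def rmse(obs, pred):
    """Root mean squared error."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mape(obs, pred):
    """Mean absolute percentage error (observed values must be nonzero)."""
    return 100 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

obs  = [120.0, 80.0, 50.0]   # hypothetical monthly flows
pred = [110.0, 85.0, 55.0]
```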

  6. Frequency Combs

    NASA Astrophysics Data System (ADS)

    Hänsch, Theodor W.; Picqué, Nathalie

    Much of modern research in the field of atomic, molecular, and optical science relies on lasers, which were invented some 50 years ago and perfected in five decades of intense research and development. Today, lasers and photonic technologies impact most fields of science and they have become indispensable in our daily lives. Laser frequency combs were conceived a decade ago as tools for the precision spectroscopy of atomic hydrogen. Through the development of optical frequency comb techniques, a setup of size 1 × 1 m², good for precision measurements of any frequency, and even commercially available, has replaced the elaborate previous frequency-chain schemes for optical frequency measurements, which only worked for selected frequencies. A true revolution in optical frequency measurements has occurred, paving the way for the creation of all-optical clocks with a precision that might approach 10⁻¹⁸. A decade later, frequency combs are now common equipment in all frequency metrology-oriented laboratories. They are also becoming enabling tools for an increasing number of applications, from the calibration of astronomical spectrographs to molecular spectroscopy. This chapter first describes the principle of an optical frequency comb synthesizer. Some of the key technologies to generate such a frequency comb are then presented. Finally, a non-exhaustive overview of the growing applications is given.

  7. Detecting Errors in Programs

    DTIC Science & Technology

    1979-02-01

    DETECTING ERRORS IN PROGRAMS* Lloyd D. Fosdick...from a finite set of tests [35,36]. Recently Howden [37] presented a result showing that for a particular class of Lindenmayer grammars it was possible...Diego, CA. 37. Howden, W.E.: Lindenmayer grammars and symbolic testing. Information Processing Letters 7,1 (Jan. 1978), 36-39. 38. Fitzsimmons, Ann

  8. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  9. Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System

    NASA Technical Reports Server (NTRS)

    Pfenninger, W. Matthew; Papen, George C.

    1992-01-01

    Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of temperature and wind measurements can be calculated. These error bounds determine the confidence in the calculated temperatures and wind velocities.

  10. Reducing medication errors in critical care: a multimodal approach

    PubMed Central

    Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad

    2014-01-01

    The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478

  11. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  12. Consequences of leaf calibration errors on IMRT delivery

    NASA Astrophysics Data System (ADS)

    Sastre-Padro, M.; Welleweerd, J.; Malinen, E.; Eilertsen, K.; Olsen, D. R.; van der Heide, U. A.

    2007-02-01

    IMRT treatments using multi-leaf collimators may involve a large number of segments in order to spare the organs at risk. When a large proportion of these segments are small, leaf positioning errors may become relevant and have therapeutic consequences. The performance of four head and neck IMRT treatments under eight different cases of leaf positioning errors has been studied. Systematic leaf pair offset errors in the range of ±2.0 mm were introduced, thus modifying the segment sizes of the original IMRT plans. Thirty-six films were irradiated with the original and modified segments. The dose difference and the gamma index (with 2%/2 mm criteria) were used for evaluating the discrepancies between the irradiated films. The median dose differences were linearly related to the simulated leaf pair errors. In the worst case, a 2.0 mm error generated a median dose difference of 1.5%. Following the gamma analysis, two out of the 32 modified plans were not acceptable. In conclusion, small systematic leaf bank positioning errors have a measurable impact on the delivered dose and may have consequences for the therapeutic outcome of IMRT.
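
    The gamma index used to compare the films combines a dose-difference criterion with a distance-to-agreement criterion. A minimal one-dimensional sketch with the 2%/2 mm criteria (global normalization assumed; the profiles and grid spacing below are hypothetical, not the study's data):

```python
import math

def gamma_1d(ref, meas, spacing_mm, dose_crit=0.02, dist_crit_mm=2.0):
    """Global 1-D gamma index: for each reference point, minimize the
    combined dose-difference / distance-to-agreement metric over all
    measured points. A point passes when gamma <= 1."""
    d_max = max(ref)  # global normalization dose
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dd = (dm - dr) / (dose_crit * d_max)      # dose term, in units of 2%
            dx = (j - i) * spacing_mm / dist_crit_mm  # distance term, in units of 2 mm
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

# identical profiles pass everywhere (gamma = 0 at every point)
g = gamma_1d([1.0, 2.0, 2.0, 1.0], [1.0, 2.0, 2.0, 1.0], spacing_mm=1.0)
pass_rate = sum(x <= 1.0 for x in g) / len(g)
```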

  13. Inborn Errors in Immunity

    PubMed Central

    Lionakis, M.S.; Hajishengallis, G.

    2015-01-01

    In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229

  14. Errors in CT colonography.

    PubMed

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  15. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
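
    One of the mitigation strategies described, masking bases that are well predicted as errors by their low quality scores, can be sketched simply (the Phred threshold of 20 is an assumed illustration, not the paper's parameter):

```python
def mask_low_quality(seq, quals, min_q=20):
    """Replace bases whose Phred quality score falls below min_q with 'N'."""
    return "".join(b if q >= min_q else "N" for b, q in zip(seq, quals))

masked = mask_low_quality("ACGTACGT", [30, 30, 5, 30, 12, 30, 30, 30])
# -> "ACNTNCGT": the two low-quality bases are masked
```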

  16. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
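
    The propagation-of-error argument can be sketched directly: the variance of a sum of balance terms is the sum of the component variances plus twice the covariances. The numbers below are illustrative only, not Skylab values:

```python
import math

def balance_sd(variances, covariances=()):
    """SD of a sum of terms: Var(sum) = sum of variances + 2 * sum of covariances."""
    total = sum(variances) + 2 * sum(covariances)
    return math.sqrt(total)

# hypothetical daily SDs: intake, urine, evaporative loss, body-mass change
sds = [10.0, 8.0, 15.0, 40.0]
total_sd = balance_sd([s**2 for s in sds])
mass_share = sds[3]**2 / total_sd**2  # the body-mass term dominates the budget
```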

  17. Error Analysis in Mathematics Education.

    ERIC Educational Resources Information Center

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  18. Prospective issues for error detection.

    PubMed

    Blavier, Adélaïde; Rouy, Emmanuelle; Nyssen, Anne-Sophie; de Keyser, Véronique

    2005-06-10

    From the literature on error detection, the authors select several concepts relating error detection mechanisms and prospective memory features. They emphasize the central role of intention in the classification of the errors into slips/lapses/mistakes, in the error handling process and in the usual distinction between action-based and outcome-based detection. Intention is again a core concept in their investigation of prospective memory theory, where they point out the contribution of intention retrievals, intention persistence and output monitoring in the individual's possibilities for detecting their errors. The involvement of the frontal lobes in prospective memory and in error detection is also analysed. From the chronology of a prospective memory task, the authors finally suggest a model for error detection also accounting for neural mechanisms highlighted by studies on error-related brain activity.

  19. Interaction and representational integration: Evidence from speech errors

    PubMed Central

    Goldrick, Matthew; Baker, H. Ross; Murphy, Amanda; Baese-Berk, Melissa

    2011-01-01

    We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that in experimentally induced speech errors low lexical frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing. PMID:21669409

  20. Estimating the Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    1995-01-01

    The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
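
    The third-difference formulation of MVAR can be written directly in terms of phase samples x_i taken at interval tau0, with averaging factor m (so tau = m*tau0). A minimal unweighted-estimator sketch, without the edf machinery discussed in the abstract:

```python
def mod_avar(x, m, tau0=1.0):
    """Modified Allan variance from phase data x (seconds) at sampling
    interval tau0, using the averaged third-difference formulation:
    Mod sigma_y^2(m*tau0) = sum_j [ sum_{i=j}^{j+m-1} (x[i+2m] - 2x[i+m] + x[i]) ]^2
                            / (2 * m^4 * tau0^2 * (N - 3m + 1))."""
    n = len(x)
    terms = n - 3 * m + 1
    if terms < 1:
        raise ValueError("not enough phase samples for this averaging factor")
    acc = 0.0
    for j in range(terms):
        s = sum(x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(j, j + m))
        acc += s * s
    return acc / (2.0 * m**4 * tau0**2 * terms)

# a perfectly linear phase ramp (constant frequency offset) has zero MVAR
phase = [2.0 * k for k in range(32)]
```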

  1. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  2. [The error, source of learning].

    PubMed

    Joyeux, Stéphanie; Bohic, Valérie

    2016-05-01

    The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk.

  3. From Monroe to Moreau: an analysis of face naming errors.

    PubMed

    Brédart, S; Valentine, T

    1992-12-01

    Functional models of face recognition and speech production have developed separately. However, naming a familiar face is, of course, an act of speech production. In this paper we propose a revision of Bruce and Young's (1986) model of face processing, which incorporates two features of Levelt's (1989) model of speech production. In particular, the proposed model includes two stages of lexical access for names and monitoring of face naming based on a "perceptual loop". Two predictions were derived from the perceptual loop hypothesis of speech monitoring: (1) naming errors in which a (correct) rare surname is erroneously replaced by a common surname should occur more frequently than the reverse substitution (the error asymmetry effect); (2) naming errors in which a common surname is articulated are more likely to be repaired than errors which result in articulation of a rare surname (the error-repairing effect). Both predictions were supported by an analysis of face naming errors in a laboratory face naming task. In a further experiment we considered the possibility that the effects of surname frequency observed in face naming errors could be explained by the frequency sensitivity of lexical access in speech production. However, no effect of the frequency of the surname of the faces used in the previous experiment was found on face naming latencies. Therefore, it is concluded that the perceptual loop hypothesis provides the more parsimonious account of the entire pattern of the results.

  4. Stronger error disturbance relations for incompatible quantum measurements

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Chiranjib; Shukla, Namrata; Pati, Arun Kumar

    2016-03-01

    We formulate a new error-disturbance relation, which is free from explicit dependence upon variances in observables. This error-disturbance relation shows improvement over the one provided by the Branciard inequality and the Ozawa inequality for some initial states and for a particular class of joint measurements under consideration. We also prove a modified form of Ozawa's error-disturbance relation. The latter relation provides a tighter bound compared to the Ozawa and the Branciard inequalities for a small number of states.

  5. Feature Referenced Error Correction Apparatus.

    DTIC Science & Technology

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  6. Measurement Error. For Good Measure....

    ERIC Educational Resources Information Center

    Johnson, Stephen; Dulaney, Chuck; Banks, Karen

    No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…

  7. The Rules of Spelling Errors.

    ERIC Educational Resources Information Center

    Yannakoudakis, E. J.; Fawthrop, D.

    1983-01-01

    Results of analysis of 1,377 spelling error forms including three categories of spelling errors (consonantal, vowel, and sequential) demonstrate that the majority of spelling errors are highly predictable when a set of predefined rules based on phonological and sequential considerations is followed algorithmically. Eleven references and equivalent…

  8. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  9. Personalised performance feedback reduces narcotic prescription errors in a NICU

    PubMed Central

    Sullivan, Kevin M; Suh, Sanghee; Monk, Heather; Chuo, John

    2013-01-01

    Objective Neonates are at high risk for significant morbidity and mortality from medication prescribing errors. Despite general awareness of these risks, mistakes continue to happen. Alerts in computerised physician order entry intended to help prescribers avoid errors have not been effective enough. This improvement project delivered feedback of prescribing errors to prescribers in the neonatal intensive care unit (NICU), and measured the impact on medication error frequency. Methods A front-line multidisciplinary team doing multiple Plan Do Study Act cycles developed a system to communicate prescribing errors directly to providers every 2 weeks in the NICU. The primary outcome measure was number of days between medication prescribing errors with particular focus on antibiotic and narcotic errors. Results A T-control chart showed that the number of days between narcotic prescribing errors rose from 3.94 to 22.63 days after the intervention, an 83% improvement. No effect in the number of days between antibiotic prescribing errors during the same period was found. Conclusions An effective system to communicate mistakes can reduce some types of prescribing errors. PMID:23038410

  10. The role of teamworking in error reduction during vascular procedures.

    PubMed

    Soane, Emma; Bicknell, Colin; Mason, Sarah; Godard, Kathleen; Cheshire, Nick

    2014-07-01

    To examine the associations between teamworking processes and error rates during vascular surgical procedures and then make informed recommendations for future studies and practices in this area. This is a single-center observational pilot study. Twelve procedures were observed over a 3-week period by a trained observer. Errors were categorized using a standardized error capture tool. Leadership and teamworking processes were categorized based on the Malakis et al. (2010) framework. Data are expressed as frequencies, means, standard deviations and percentages. Error rates (per hour) were likely to be reduced when there were effective prebriefing measures to ensure that members were aware of their roles and responsibilities (4.50 vs. 5.39 errors/hr), communications were kept to a practical and effective minimum (4.64 vs. 5.56 errors/hr), when the progress of surgery was communicated throughout (3.14 vs. 8.33 errors/hr), and when team roles changed during the procedure (3.17 vs. 5.97 errors/hr). Reduction of error rates is a critical goal for surgical teams. The present study of teamworking processes in this environment shows that there is variation that should be further examined. More effective teamworking could prevent or mitigate a range of errors. The development of vascular surgical team members should incorporate principles of teamworking and appropriate communication.

  11. Financial errors in dementia: testing a neuroeconomic conceptual framework.

    PubMed

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L; Rosen, Howard J

    2014-08-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer's disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p < 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p < 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention.

  12. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis, and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  13. Financial errors in dementia: Testing a neuroeconomic conceptual framework

    PubMed Central

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.

    2013-01-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p < 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p < 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884

  14. Biasing errors and corrections

    NASA Technical Reports Server (NTRS)

    Meyers, James F.

    1991-01-01

    The dependence of laser velocimeter measurement rate on flow velocity is discussed. Investigations are described showing that any such dependence is purely statistical and is nonstationary both spatially and temporally. The main conclusions drawn are that the times between successive particle arrivals should be routinely measured and that the velocity data rate correlation coefficient should be calculated to determine whether a dependency exists. If none is found, the data ensemble can be accepted as an independent sample of the flow. If a dependency is found, the data should be modified to obtain an independent sample. Universal correcting procedures should never be applied because their underlying assumptions are not valid.

  15. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  17. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
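
The variance ratio described above, response error variance over measurement error variance, is the same quantity that parameterizes classical Deming regression. As a hedged illustration only (the record does not give the details of the proposed modified least squares method, and the function name here is hypothetical), a straight-line fit that uses a known variance ratio might look like:

```python
import numpy as np

def deming_fit(x, y, var_ratio):
    # Deming regression: estimate slope and intercept when both the factor x
    # and the response y carry noise. var_ratio is the ratio of response
    # error variance to measurement (factor) error variance.
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    d = var_ratio
    slope = (syy - d * sxx + np.sqrt((syy - d * sxx) ** 2 + 4 * d * sxy ** 2)) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept
```

Unlike ordinary least squares, which attributes all scatter to the response, this estimator splits the discrepancy between the two axes according to the variance ratio.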

  18. On typographical errors.

    PubMed

    Hamilton, J W

    1993-09-01

    In his overall assessment of parapraxes in 1901, Freud included typographical mistakes but did not elaborate on or study this subject nor did he have anything to say about it in his later writings. This paper lists textual errors from a variety of current literary sources and explores the dynamic importance of their execution and the failure to make necessary corrections during the editorial process. While there has been a deemphasis of the role of unconscious determinants in the genesis of all slips as a result of recent findings in cognitive psychology, the examples offered suggest that, with respect to motivation, lapses in compulsivity contribute to their original commission while thematic compliance and voyeuristic issues are important in their not being discovered prior to publication.

  19. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  20. Experimental repetitive quantum error correction.

    PubMed

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer

    2011-05-27

    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.

  1. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
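
The three steps above can be sketched in code. This is a simplified illustration under stated assumptions, not the authors' actual model: the volumetric error is reduced to a one-axis linear model (offset plus scale error), and the function names are hypothetical.

```python
import numpy as np

def fit_error_model(positions, measured_errors):
    # Step (1): assume the positioning error varies linearly with position,
    #           e(x) = c0 + c1 * x  (offset plus scale error).
    # Step (3): optimize the model coefficients to the measured data by
    #           linear least squares.
    A = np.column_stack([np.ones_like(positions), positions])
    coeffs, *_ = np.linalg.lstsq(A, measured_errors, rcond=None)
    return coeffs

def compensate(position, coeffs):
    # Use the fitted error map to compensate a commanded position for the
    # systematic error predicted at that point.
    c0, c1 = coeffs
    return position - (c0 + c1 * position)
```

Step (2), acquiring length measurements throughout the work volume, would supply the `positions` and `measured_errors` arrays; a real map would include many more terms (angular, straightness, and squareness errors) per axis.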

  2. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
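
The recovery mechanism described can be sketched in software. This is a behavioral illustration only, not the patented circuitry: parity-based detection and list-backed register files are simplifying assumptions, and the function names are hypothetical.

```python
def parity(value):
    # Even-parity bit of a 32-bit word.
    return bin(value & 0xFFFFFFFF).count("1") % 2

def read_with_recovery(addr, primary, primary_parity, mirror):
    # Read an entry from the primary register file. If its stored parity bit
    # no longer matches (a soft error flipped a bit), restore the entry from
    # the mirrored register file and return the recovered value.
    value = primary[addr]
    if parity(value) != primary_parity[addr]:
        primary[addr] = mirror[addr]
        primary_parity[addr] = parity(mirror[addr])
        value = primary[addr]
    return value
```

In the patented system the replacement is done by inserting an error recovery instruction into the arithmetic pipeline rather than by a software branch, but the data flow, detect on read, then copy from the mirror, is the same.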

  3. Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.

    ERIC Educational Resources Information Center

    Monagle, E. Brette

    The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…

  4. Phonologic Error Distributions in the Iowa-Nebraska Articulation Norms Project: Word-Initial Consonant Clusters.

    ERIC Educational Resources Information Center

    Smit, Ann Bosma

    1993-01-01

    The errors on word-initial consonant clusters made by children (ages 2-9) in the Iowa-Nebraska Articulation Norms Project were tabulated by age range and frequency. Error data showed support for previous research in the acquisition of clusters. Cluster errors are discussed in terms of theories of phonologic development. (Author/JDD)

  5. A Learner Corpus-Based Study on Verb Errors of Turkish EFL Learners

    ERIC Educational Resources Information Center

    Can, Cem

    As learner corpora have presently become readily accessible, it is practicable to examine interlanguage errors and carry out error analysis (EA) on learner-generated texts. The data available in a learner corpus enable researchers to investigate authentic learner errors and their respective frequencies in terms of types and tokens as well as…

  6. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    ERIC Educational Resources Information Center

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  7. Articulation Error Migration: A Comparison of Single Word and Connected Speech Samples.

    ERIC Educational Resources Information Center

    Healy, Timothy J.; Madison, Charles L.

    1987-01-01

    The study compared frequency and type of articulation error, including error migration, between single word production and connected speech samples when vocabulary was held constant with 20 articulation disordered children (ages 5-12 years). There were significantly more errors in connected speech samples than in single word utterances. (Author/DB)

  8. A comprehensive analysis of translational missense errors in the yeast Saccharomyces cerevisiae.

    PubMed

    Kramer, Emily B; Vallabhaneni, Haritha; Mayer, Lauren M; Farabaugh, Philip J

    2010-09-01

    The process of protein synthesis must be sufficiently rapid and sufficiently accurate to support continued cellular growth. Failure in speed or accuracy can have dire consequences, including disease in humans. Most estimates of the accuracy come from studies of bacterial systems, principally Escherichia coli, and have involved incomplete analysis of possible errors. We recently used a highly quantitative system to measure the frequency of all types of misreading errors by a single tRNA in E. coli. That study found a wide variation in error frequencies among codons; a major factor causing that variation is competition between the correct (cognate) and incorrect (near-cognate) aminoacyl-tRNAs for the mutant codon. Here we extend that analysis to measure the frequency of missense errors by two tRNAs in a eukaryote, the yeast Saccharomyces cerevisiae. The data show that in yeast errors vary by codon from a low of 4 x 10(-5) to a high of 6.9 x 10(-4) per codon and that error frequency is in general about threefold lower than in E. coli, which may suggest that yeast has additional mechanisms that reduce missense errors. Error rate again is strongly influenced by tRNA competition. Surprisingly, missense errors involving wobble position mispairing were much less frequent in S. cerevisiae than in E. coli. Furthermore, the error-inducing aminoglycoside antibiotic, paromomycin, which stimulates errors on all error-prone codons in E. coli, has a more codon-specific effect in yeast.

  9. Method and apparatus for reducing quantization error in laser gyro test data through high speed filtering

    SciTech Connect

    Mark, J.G.; Brown, A.K.; Matthews, A.

    1987-01-06

    A method is described for processing ring laser gyroscope test data comprising the steps of: (a) accumulating the data over a preselected sample period; and (b) filtering the data at a predetermined frequency so that non-time dependent errors are reduced by a substantially greater amount than are time dependent errors; then (c) analyzing the random walk error of the filtered data.

  10. Prediction of discretization error using the error transport equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
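
The central idea, fit a smooth curve through the discrete numerical solution and use it to evaluate the residual of the governing equation as the error source term, can be sketched on a toy 1D problem. This is an assumption-laden miniature: the study treats the two-dimensional Navier-Stokes equations and blends local curves with a weighted spline, whereas here a single interpolating spline and the model equation du/dx = f(x) stand in for both.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def error_source_term(x, u_numerical, f):
    # Fit a smooth, continuously differentiable curve through the discrete
    # numerical solution, then evaluate the residual of the governing
    # equation du/dx = f(x). That residual is the source term that would
    # drive an error transport equation solved on the same grid.
    spline = UnivariateSpline(x, u_numerical, k=4, s=0)  # interpolating quartic spline
    return spline.derivative()(x) - f(x)
```

If the numerical solution were exact, the residual would vanish up to the spline's own approximation error; on a real computed solution, the residual exposes where the discretization injects error.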

  11. Towards error-free interaction.

    PubMed

    Tsoneva, Tsvetomira; Bieger, Jordi; Garcia-Molina, Gary

    2010-01-01

    Human-machine interaction (HMI) relies on pattern recognition algorithms that are not perfect. To improve the performance and usability of these systems we can utilize the neural mechanisms in the human brain dealing with error awareness. This study aims at designing a practical error detection algorithm using electroencephalogram signals that can be integrated in an HMI system. Thus, real-time operation, customization, and operation convenience are important. We address these requirements in an experimental framework simulating machine errors. Our results confirm the presence of brain potentials related to processing of machine errors. These are used to implement an error detection algorithm emphasizing the differences in error processing on a per subject basis. The proposed algorithm uses the individual best bipolar combination of electrode sites and requires short calibration. The single-trial error detection performance on six subjects, characterized by the area under the ROC curve ranges from 0.75 to 0.98.

  12. Frequency spectrum analyzer with phase-lock

    DOEpatents

    Boland, Thomas J.

    1984-01-01

    A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal comprises a voltage controlled oscillator (VCO) which is driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal, and drives the VCO locking it in phase momentarily with the input signal. The input signal and the output of the VCO are fed into a correlator which transfers the input signal to a frequency domain, while providing an accurate absolute amplitude measurement of each frequency component of the input signal.

  13. Error analysis for relay type satellite-aided search and rescue systems

    NASA Technical Reports Server (NTRS)

    Marini, J. W.

    1979-01-01

    An analysis is made of the errors in the determination of the position of an emergency transmitter in a satellite-aided search and rescue system. The satellite is assumed to be at a height of 820 km in a near-circular near polar orbit. Short data spans of four minutes or less are used. The error sources considered are measurement noise, transmitter frequency drift, ionospheric effects, and error in the assumed height of the transmitter. The errors are calculated for several different transmitter positions, data rates, and data spans. The only transmitter frequency used was 406 MHz, but the results can be scaled to different frequencies.

  14. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1979-01-01

    The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.
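
That conclusion can be checked numerically. The sketch below assumes unit-amplitude sinusoids and first-order low-pass channels with hypothetical corner frequencies; the function names are illustrative, not from the report.

```python
import numpy as np

def first_order(f, fc):
    # Frequency response of a first-order low-pass channel with corner fc.
    return 1.0 / (1.0 + 1j * f / fc)

def power_error(f, fc_v, fc_i, phi=0.0):
    # Relative error in average power indicated by a multiplier whose
    # voltage and current inputs pass through first-order channels, for a
    # sinusoid at frequency f with current lagging voltage by phi.
    hv = first_order(f, fc_v)
    hi = first_order(f, fc_i)
    true_p = 0.5 * np.cos(phi)
    meas_p = 0.5 * abs(hv) * abs(hi) * np.cos(phi + np.angle(hv) - np.angle(hi))
    return (meas_p - true_p) / true_p
```

With identical channels (`fc_v == fc_i`) the phase shifts cancel in the multiplier and only a small amplitude term remains; mismatched channels leave a residual phase error that grows with the power-factor angle, which is the behavior the abstract describes.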

  15. Sub-nanometer periodic nonlinearity error in absolute distance interferometers.

    PubMed

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, so that frequency and/or polarization mixing no longer occurs. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated, and therefore, their optical paths are not overlapped. Thus the main cause of the periodic nonlinearity error, i.e., the frequency and/or polarization mixing and leakage of beam, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  16. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  17. Improved Error Thresholds for Measurement-Free Error Correction.

    PubMed

    Crow, Daniel; Joynt, Robert; Saffman, M

    2016-09-23

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  18. Error analysis in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

    1998-06-01

    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework, which classifies errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which oftentimes are brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  19. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
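
The binarization step described above can be sketched as follows. The onshore sector bounds are hypothetical (the actual mapping depends on coastline orientation), and the function name is illustrative.

```python
import numpy as np

def binarize_wind(direction_deg, onshore_min=0.0, onshore_max=180.0):
    # Map gridded wind direction (degrees) to the binary field used by CEM:
    # 1 for an onshore wind, 0 for an offshore wind. The onshore sector
    # [onshore_min, onshore_max) is an assumed placeholder for a coastline
    # facing a particular direction.
    d = np.mod(direction_deg, 360.0)
    return ((d >= onshore_min) & (d < onshore_max)).astype(int)
```

Applying the same mapping to forecast and observational wind fields yields the D(i,j;n) and d(i,j;n) inputs on which CEM's boundary-identification and correlation steps operate.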

  20. Comparison of analytical error and sampling error for contaminated soil.

    PubMed

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error with that of the total sampling error, these seven errors were applied, estimated and discussed for a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.
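
The quotient comparison above can be sketched numerically, under the common simplification that independent error components add in variance; the theory of sampling distinguishes seven errors, and the component values and function names here are hypothetical.

```python
import numpy as np

def total_sampling_error(component_sds):
    # Independent error components combine in quadrature: the total standard
    # deviation is the root of the summed component variances.
    return np.sqrt(np.sum(np.square(component_sds)))

def sampling_to_analytical_quotient(component_sds, analytical_sd):
    # Ratio of the total sampling error to the analytical error; values
    # above ~20 mean the sampling steps, not the laboratory analysis,
    # dominate the overall uncertainty.
    return total_sampling_error(component_sds) / analytical_sd
```

For example, components with standard deviations 3 and 4 (in any common unit) give a total sampling error of 5; against an analytical error of 0.2 the quotient is 25, well above the threshold at which improving the analysis no longer helps.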