Sample records for system error rates

  1. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  2. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VAXcluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VAXcluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
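
    The k-out-of-n comparisons above lend themselves to a quick back-of-envelope check. The sketch below is not the thesis's reward model; it only computes the probability that at least k of 7 independent machines are up, with a made-up single-machine availability, to illustrate why relaxing k raises effective system reliability so sharply.

    ```python
    # Illustrative k-out-of-n availability (independence and the availability
    # figure are assumptions, not the thesis's measured semi-Markov model).
    from math import comb

    def k_out_of_n_availability(k: int, n: int, a: float) -> float:
        """P(at least k of n machines up), each up independently w.p. a."""
        return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

    A = 0.95  # hypothetical single-machine availability
    for k in range(7, 2, -1):
        print(f"{k}-out-of-7: {k_out_of_n_availability(k, 7, A):.6f}")
    ```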

  3. Decrease in medical command errors with use of a "standing orders" protocol system.

    PubMed

    Holliman, C J; Wuerz, R C; Meador, S A

    1994-05-01

    The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992;21(4):347-350). A secondary aim of the study was to determine whether the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. A total of 2,001 ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system.
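
    For readers who want to replay the significance test, a minimal sketch follows. The 24/1,928 figures come from the abstract; the previous study's denominator is not given above, so the value used here is a placeholder assumption, and the resulting p-value will shift with the true counts.

    ```python
    # Hypothetical re-check of the reported comparison: 24 errors in 1,928
    # command runs (1.2%) vs. the earlier study's 2.6% error rate.
    import numpy as np
    from scipy.stats import chi2_contingency

    errors_new, runs_new = 24, 1928
    rate_old, runs_old = 0.026, 1461         # denominator is an assumption
    errors_old = round(rate_old * runs_old)  # ~38 errors

    table = np.array([[errors_new, runs_new - errors_new],
                      [errors_old, runs_old - errors_old]])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
    ```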

  4. Executive Council lists and general practitioner files

    PubMed Central

    Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.

    1974-01-01

    An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both lists; 79.2% of all records exhibited non-congruencies, and particular information fields had error rates ranging from 0.8% (assignation of sex) to 68.6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose-designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588

  5. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System.

    PubMed

    Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang

    2018-05-04

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into the SINS. In practice, however, a stable modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate affects the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and instability of the angular rate, on the navigation accuracy of the RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors, making it possible to maintain high-precision autonomous navigation with the MIMU when no external aid is available. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.
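
    As a side note, the benefit of rotation modulation itself is easy to demonstrate numerically. The toy integration below (with assumed bias and rotation-rate values, unrelated to the paper's RSSINS setup) shows a constant gyro bias accumulating linearly without rotation but averaging out to nearly zero when modulated.

    ```python
    # Why rotation modulation helps: a constant gyro bias integrates into a
    # linearly growing attitude error, but rotating the sensor at rate w turns
    # the bias seen in navigation axes into a sinusoid that nearly cancels.
    # All parameter values below are assumptions for illustration.
    import numpy as np

    dt, T = 0.01, 100.0                  # step [s], duration [s]
    t = np.arange(0.0, T, dt)
    bias = np.deg2rad(0.01)              # hypothetical 0.01 deg/s gyro bias
    w = 2 * np.pi * 0.5                  # 0.5 Hz modulation rate (assumed)

    err_static = np.cumsum(np.full_like(t, bias)) * dt    # no rotation
    err_rotated = np.cumsum(bias * np.cos(w * t)) * dt    # modulated bias

    print(f"static error after {T:.0f} s:  {np.rad2deg(err_static[-1]):.4f} deg")
    print(f"rotated error after {T:.0f} s: {np.rad2deg(err_rotated[-1]):.6f} deg")
    ```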

  6. Total energy based flight control system

    NASA Technical Reports Server (NTRS)

    Lambregts, Antonius A. (Inventor)

    1985-01-01

    An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error, and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode, the outer-loop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle, and flight management systems.
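
    A minimal sketch of the total-energy bookkeeping this patent builds on, using the standard TECS quantities (specific energy rate gamma + Vdot/g and energy distribution rate gamma - Vdot/g); the gains, commanded values, and variable names below are assumptions for illustration, not the patent's control laws.

    ```python
    # Total-energy quantities, normalized by weight and speed (standard TECS
    # formulation): thrust tracks the energy-rate error, the elevator tracks
    # the distribution error. Gains and commands are hypothetical.
    G = 9.81  # gravitational acceleration [m/s^2]

    def energy_rates(gamma: float, vdot: float) -> tuple[float, float]:
        """Specific energy rate and energy distribution rate (dimensionless)."""
        return gamma + vdot / G, gamma - vdot / G

    # commanded vs. actual flight path angle [rad] and acceleration [m/s^2]
    edot_cmd, ddot_cmd = energy_rates(gamma=0.05, vdot=0.3)
    edot, ddot = energy_rates(gamma=0.03, vdot=0.5)

    KT, KE = 2.0, 1.5  # hypothetical loop gains
    thrust_cmd = KT * (edot_cmd - edot)      # energy-rate error -> thrust
    elevator_cmd = KE * (ddot_cmd - ddot)    # distribution error -> elevator
    print(thrust_cmd, elevator_cmd)
    ```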

  7. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    PubMed Central

    Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang

    2018-01-01

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into the SINS. In practice, however, a stable modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate affects the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and instability of the angular rate, on the navigation accuracy of the RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors, making it possible to maintain high-precision autonomous navigation with the MIMU when no external aid is available. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions. PMID:29734707

  8. 45 CFR 286.205 - How will we determine if a Tribe fails to meet the minimum work participation rate(s)?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    … records, financial records, and automated data systems; (ii) The data are free from computational errors and are internally …

  9. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, comprising a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 × 10^-15 with a 10-second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 × 10^-15 with a 10-second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve performance in both operating modes.

  10. Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems

    NASA Astrophysics Data System (ADS)

    El-Ghandour, Osama M.; Saha, Debabrata

    1991-05-01

    A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. The symbol error rate is found to be approximately twice that of a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve the error rate, differential detection through maximum-likelihood decoding based on multiple or N-symbol observations is considered. For large N and high SNR this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
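
    The quaternary DPSK baseline the abstract compares against is straightforward to simulate. The Monte Carlo sketch below estimates the symbol error rate of conventional differentially detected QPSK in AWGN; Q2PSK itself is not simulated here, and the Es/N0 and sample-size choices are arbitrary.

    ```python
    # Monte Carlo symbol error rate of conventional quaternary DPSK in AWGN
    # (the baseline system, not Q2PSK; parameters are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)
    n, EsN0_dB = 200_000, 12.0
    EsN0 = 10 ** (EsN0_dB / 10)

    data = rng.integers(0, 4, n)                 # information symbols
    phases = np.cumsum(data) * (np.pi / 2)       # differential encoding
    tx = np.exp(1j * phases)
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * EsN0)
    rx = tx + noise

    # differential detection: phase difference between successive symbols
    dphi = np.angle(rx[1:] * np.conj(rx[:-1]))
    decided = np.round(dphi / (np.pi / 2)).astype(int) % 4
    ser = np.mean(decided != data[1:])
    print(f"DQPSK SER at Es/N0 = {EsN0_dB} dB: {ser:.2e}")
    ```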

  11. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. To evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error; Type 2: gantry angle-dependent MLC error; Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics agreed within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion once the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient patient-specific quality assurance (QA) in clinical routine.
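
    For orientation, the gamma passing rate referred to throughout is the fraction of points whose gamma index (Low et al.) is at most 1. A 1-D toy version follows; clinical tools evaluate 3-D dose grids, and the profiles and criteria below are made-up illustrations.

    ```python
    # Minimal 1-D gamma-index sketch (dose criterion as fraction of max dose,
    # distance-to-agreement in mm). Doses and criteria are invented.
    import numpy as np

    def gamma_pass_rate(x, d_ref, d_eval, dd=0.03, dta=2.0):
        """Global gamma passing rate for 1-D profiles on a common grid x."""
        dd_abs = dd * d_ref.max()
        # for every reference point (rows), search all evaluation points (cols)
        dist = (x[:, None] - x[None, :]) / dta              # spatial term
        dose = (d_eval[None, :] - d_ref[:, None]) / dd_abs  # dose term
        gamma = np.sqrt(dist**2 + dose**2).min(axis=1)
        return np.mean(gamma <= 1.0)

    x = np.linspace(0, 100, 501)                     # positions [mm]
    d_ref = np.exp(-((x - 50) / 15) ** 2)            # reference profile
    d_eval = np.exp(-((x - 50.8) / 15) ** 2) * 1.01  # slightly shifted/scaled
    print(f"gamma passing rate: {gamma_pass_rate(x, d_ref, d_eval):.1%}")
    ```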

  12. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least-squares and spherical harmonics function system error correction methods. Its accuracy mainly depends on the structure of the neural network. Analysis and simulation show that both the BP and RBF neural network system error correction methods achieve high correction accuracy; for small training sets, the RBF network method is preferable to the BP network method when training speed and network scale are taken into account.

  13. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  14. Syndromic surveillance for health information system failures: a feasibility study.

    PubMed

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-05-01

    To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
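
    As an illustration of the general idea (not the paper's time-series models), the sketch below puts a Shewhart-style lower control limit on daily record counts and flags a day with a simulated large record loss; all counts are synthetic.

    ```python
    # Control-chart flavor of the surveillance index "total records created":
    # flag days falling below the lower 3-sigma limit as possible data loss.
    # Baseline volume and the simulated loss are invented numbers.
    import numpy as np

    rng = np.random.default_rng(1)
    baseline = rng.poisson(5000, size=60)          # 60 normal days (assumed)
    mu, sigma = baseline.mean(), baseline.std()
    lcl = mu - 3 * sigma                           # lower control limit

    new_days = np.array([4980, 5020, 3400, 4995])  # day 3 simulates a big loss
    for day, count in enumerate(new_days, start=1):
        flag = "ALERT" if count < lcl else "ok"
        print(f"day {day}: {count} records ({flag})")
    ```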

  15. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

    For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluations. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
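
    The error-rate definition in parentheses translates directly into code; the daily tallies below are hypothetical.

    ```python
    # Human error rate as defined above: errors / opportunities for error.
    def human_error_rate(errors: int, opportunities: int) -> float:
        return errors / opportunities

    trays, items_per_tray, errors = 350, 6, 63   # hypothetical daily counts
    rate = human_error_rate(errors, trays * items_per_tray)
    print(f"daily error rate: {rate:.1%} (acceptable threshold: 3%)")
    ```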

  16. SEU System Analysis: Not Just the Sum of All Parts

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth

    2014-01-01

    Single event upset (SEU) analysis of complex systems is challenging. Currently, system SEU analysis is performed by component-level partitioning, and then either the most dominant SEU cross-sections are used in system error rate calculations, or the partition cross-sections are summed to obtain a system error rate. In many cases, system error rates are overestimated because these methods generally overlook system-level derating factors. The problem with overestimating is that it can cause overdesign and consequently negatively affect cost, schedule, functionality, and validation/verification. The scope of this presentation is to discuss the risks involved with the current scheme of SEU analysis for complex systems and to provide alternative methods for improvement.
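
    The overestimation argument can be made concrete with a toy calculation: summing raw per-partition upset rates versus weighting each partition by a system-level derating factor. All cross-sections, the flux, and the derating factors below are invented for illustration.

    ```python
    # Back-of-envelope comparison of the two bookkeeping choices described
    # above. Every number here is hypothetical.
    sigma = {"cpu": 1e-8, "mem": 5e-8, "io": 2e-9}  # cm^2/device (invented)
    flux = 10.0                                     # particles/cm^2/day (assumed)
    derate = {"cpu": 0.3, "mem": 0.05, "io": 0.5}   # fraction of upsets that matter

    summed = sum(sigma.values()) * flux             # sum-of-parts estimate
    derated = sum(sigma[k] * derate[k] * flux for k in sigma)
    print(f"sum of parts: {summed:.2e} errors/day")
    print(f"with system-level derating: {derated:.2e} errors/day")
    ```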

  17. Angular rate optimal design for the rotary strapdown inertial navigation system.

    PubMed

    Yu, Fei; Sun, Qian

    2014-04-22

    Due to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships, and its core technology, the rotating scheme, has been studied by numerous researchers. As one of the key design parameters, the rotating angular rate strongly influences the effectiveness of error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis shows that the velocity error of the RSINS depends not only on the sensor error but also on the rotating angular rate; to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed. Simulation and experimental results verified the validity and superiority of this optimal design method.

  18. Syndromic surveillance for health information system failures: a feasibility study

    PubMed Central

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-01-01

    Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193

  19. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics, such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts: approximately 40 percent of all failures occurred in bursts and involved multiple machines, indicating that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs. 0.74 for disk errors). The expected reward rate (a reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.

  20. Dispensing error rate after implementation of an automated pharmacy carousel system.

    PubMed

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  1. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes, are considered and their error probabilities evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
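
    The building block behind such analyses is the probability that a bounded-distance decoder sees more errors than it can correct. A short sketch under the memoryless-BSC assumption follows; the (255, 223), t = 16 parameters are illustrative, not taken from the paper.

    ```python
    # P(more than t errors in a length-n block) on a memoryless BSC with
    # crossover probability eps. Code parameters below are illustrative only.
    from math import comb

    def block_error_prob(n: int, t: int, eps: float) -> float:
        """Probability that a block of n symbols contains more than t errors."""
        return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
                   for i in range(t + 1, n + 1))

    # e.g. a (255, 223)-style code correcting t = 16 symbol errors
    for eps in (0.1, 0.01):
        print(f"eps={eps}: P(block error) ~ {block_error_prob(255, 16, eps):.3e}")
    ```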

  2. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    PubMed

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate, and perception of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, the construction status and perception of four systems (electronic pharmacopoeia, electronic drug dosage calculation system, computer-based patient safety reporting, and bar-code system), and the medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation, and MANCOVA were used for data analysis. Electronic pharmacopoeias were constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with a more positive error management climate. Supportive strategies for improving the perception of system use should accompany system construction, so that a positive error management climate can be promoted more easily.

  3. Online Error Reporting for Managing Quality Control Within Radiology.

    PubMed

    Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L

    2016-06-01

    Information technology systems within health care, such as the picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists: instead of direct communication of quality discrepancies found for an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from use of online error reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month study period, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25%. Within the exam protocol category, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44%]), while imaging protocol errors were the highest error subtype for the computed tomography modality (35 errors [18%]). Positioning and incorrect accession had the highest errors in the exam technique and exam validation error categories, respectively, for nearly all of the modalities. An error rate of less than 1% could signify a system with very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error reporting system could also affect the reporting rate.

  4. Final report on the development of the geographic position locator (GPL). Volume 12. Data reduction A3FIX: subroutine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niven, W.A.

    The long-term position accuracy of an inertial navigation system depends primarily on the ability of the gyroscopes to maintain a near-perfect reference orientation. Small imperfections in the gyroscopes cause them to drift slowly away from their initial orientation, thereby producing errors in the system's calculations of position. A3FIX is a computer program subroutine developed to estimate inertial navigation system gyro drift rates with the navigator stopped or moving slowly. It processes data on the navigation system's position error to arrive at estimates of the north-south and vertical gyro drift rates. It also computes changes in the east-west gyro drift rate if the navigator is stopped and if data on the system's azimuth error changes are also available. The report describes the subroutine and its capabilities, and gives examples of gyro drift rate estimates that were computed during the testing of a high-quality inertial system under the PASSPORT program at the Lawrence Livermore Laboratory. The appendices provide mathematical derivations of the estimation equations used in the subroutine, a discussion of the estimation errors, and a program listing and flow diagram. The appendices also contain a derivation of closed-form solutions to the navigation equations to clarify the effects that motion and time-varying drift rates induce in the phase-plane relationships between the Schuler-filtered errors in latitude and azimuth and between the Schuler-filtered errors in latitude and longitude.

  5. Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System

    PubMed Central

    Yu, Fei; Sun, Qian

    2014-01-01

    Due to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships, and its core technology, the rotating scheme, has been studied by numerous researchers. As one of the key design parameters, the rotating angular rate strongly influences the effectiveness of error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis shows that the velocity error of the RSINS depends not only on the sensor error but also on the rotating angular rate; to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed. Simulation and experimental results verified the validity and superiority of this optimal design method. PMID:24759115

  6. The effectiveness of the error reporting promoting program on the nursing error incidence rate in Korean operating rooms.

    PubMed

    Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung

    2007-03-01

    The purpose of this study was to develop and evaluate an error reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group, non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze differences in the pre- and post-intervention incidence rates of nursing errors between the two groups. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% to 15.7%. By domain, the incidence rate decreased significantly in three domains ("compliance with aseptic technique", "management of documents", "environmental management") in the experimental group, while it decreased in the control group, which used the ordinary error-reporting method. An error-reporting system makes it possible to share errors and to learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, risk management efforts should be applied across the whole health care system together with this program.

  7. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.

  8. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    Efforts have been ongoing for a decade to reach the GRACE baseline accuracy predicted earlier by design simulations. The GRACE error budget is dominated by noise from the sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors, so their analysis provides insight into the individual contributions to the error budget. Here we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance; range-rate residuals are computed from each of these datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlation between range frequency noise and range-rate residuals is seen.

  9. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
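
    Two of the calculations described above are easy to reproduce under standard binomial assumptions: an exact Clopper-Pearson interval for the CWER, and the minimum error-free run length needed to certify a CWER requirement at a given confidence. The k, n, and requirement values below are examples, not the paper's cases.

    ```python
    # Exact (Clopper-Pearson) CWER confidence interval, plus the smallest
    # error-free simulation length certifying CWER <= p_req at confidence conf.
    from math import ceil, log
    from scipy.stats import beta

    def clopper_pearson(k: int, n: int, conf: float = 0.95):
        """Exact two-sided interval for k errors observed in n trials."""
        lo = 0.0 if k == 0 else beta.ppf((1 - conf) / 2, k, n - k + 1)
        hi = 1.0 if k == n else beta.ppf(1 - (1 - conf) / 2, k + 1, n - k)
        return lo, hi

    def error_free_trials(p_req: float, conf: float = 0.95) -> int:
        """Smallest n with zero observed errors proving CWER <= p_req."""
        return ceil(log(1 - conf) / log(1 - p_req))

    print(clopper_pearson(k=3, n=10**6))   # e.g. 3 codeword errors observed
    print(error_free_trials(1e-5))         # ~3e5 error-free codewords needed
    ```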

  10. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1988-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.

  11. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1987-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.

  12. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  13. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate a system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC, and the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability, and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. According to the fault tree analysis for the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
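
    For a flavor of the reliability bookkeeping involved, the sketch below combines per-block failure rates in series (any block failure fails the system) and derives MTTF and a steady-state unavailability; the block names, rates, and repair time are invented, not the paper's measured values.

    ```python
    # Series-system reliability arithmetic: failure rates add; MTTF = 1/lambda.
    # All numbers below are hypothetical placeholders.
    components = {"PS_core": 2e-6, "OCM": 5e-7, "PL_config": 1e-6}  # failures/h

    lam = sum(components.values())        # series-system failure rate
    mttf = 1 / lam                        # mean time to failure [h]
    mttr = 8.0                            # assumed mean time to repair [h]
    unavailability = lam * mttr / (1 + lam * mttr)  # steady-state approximation

    print(f"failure rate: {lam:.2e}/h, MTTF: {mttf:,.0f} h, "
          f"unavailability: {unavailability:.2e}")
    ```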

  14. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system

    PubMed Central

    Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.

    2015-01-01

    Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3,291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7,451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12,567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2,043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702

  15. Prescription errors before and after introduction of electronic medication alert system in a pediatric emergency department.

    PubMed

    Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M

    2015-06-01

    Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, among the dosing alerts overridden by prescribers, 88 (11.3%) resulted in medication errors, and 684 (88.6%) were false-positive alerts. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.

  16. Effect of atmospheric turbulence on the bit error probability of a space to ground near infrared laser communications link using binary pulse position modulation and an avalanche photodiode detector

    NASA Technical Reports Server (NTRS)

    Safren, H. G.

    1987-01-01

    The effect of atmospheric turbulence on the bit error rate of a space-to-ground near-infrared laser communications link is investigated for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one- or two-tenths of a dB.

  17. DNA Barcoding through Quaternary LDPC Codes

    PubMed Central

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10^-9 at the expense of a rate of read losses just in the order of 10^-6. PMID:26492348

  18. DNA Barcoding through Quaternary LDPC Codes.

    PubMed

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10^-9 at the expense of a rate of read losses just in the order of 10^-6.

  19. Making electronic prescribing alerts more effective: scenario-based experimental study in junior doctors

    PubMed Central

    Shah, Priya; Wyatt, Jeremy C; Makubate, Boikanyo; Cross, Frank W

    2011-01-01

    Objective Expert authorities recommend clinical decision support systems to reduce prescribing error rates, yet large numbers of insignificant on-screen alerts presented in modal dialog boxes persistently interrupt clinicians, limiting the effectiveness of these systems. This study compared the impact of modal and non-modal electronic (e-) prescribing alerts on prescribing error rates, to help inform the design of clinical decision support systems. Design A randomized study of 24 junior doctors each performing 30 simulated prescribing tasks in random order with a prototype e-prescribing system. Using a within-participant design, doctors were randomized to be shown one of three types of e-prescribing alert (modal, non-modal, no alert) during each prescribing task. Measurements The main outcome measure was prescribing error rate. Structured interviews were performed to elicit participants' preferences for the prescribing alerts and their views on clinical decision support systems. Results Participants exposed to modal alerts were 11.6 times less likely to make a prescribing error than those not shown an alert (OR 11.56, 95% CI 6.00 to 22.26). Those shown a non-modal alert were 3.2 times less likely to make a prescribing error (OR 3.18, 95% CI 1.91 to 5.30) than those not shown an alert. The error rate with non-modal alerts was 3.6 times higher than with modal alerts (95% CI 1.88 to 7.04). Conclusions Both kinds of e-prescribing alerts significantly reduced prescribing error rates, but modal alerts were over three times more effective than non-modal alerts. This study provides new evidence about the relative effects of modal and non-modal alerts on prescribing outcomes. PMID:21836158

  20. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  1. Impact of an antiretroviral stewardship strategy on medication error rates.

    PubMed

    Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E

    2018-05-02

    The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate (p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error (p < 0.001) and those with 2 or more errors (p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention (p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  2. Cryptographic robustness of a quantum cryptography system using phase-time coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-01-15

    A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.

  3. Classification of echolocation clicks from odontocetes in the Southern California Bight.

    PubMed

    Roch, Marie A; Klinck, Holger; Baumann-Pickering, Simone; Mellinger, David K; Qui, Simon; Soldevilla, Melissa S; Hildebrand, John A

    2011-01-01

    This study presents a system for classifying echolocation clicks of six species of odontocetes in the Southern California Bight: Visually confirmed bottlenose dolphins, short- and long-beaked common dolphins, Pacific white-sided dolphins, Risso's dolphins, and presumed Cuvier's beaked whales. Echolocation clicks are represented by cepstral feature vectors that are classified by Gaussian mixture models. A randomized cross-validation experiment is designed to provide conditions similar to those found in a field-deployed system. To prevent matched conditions from inappropriately lowering the error rate, echolocation clicks associated with a single sighting are never split across the training and test data. Sightings are randomly permuted before assignment to folds in the experiment. This allows different combinations of the training and test data to be used while keeping data from each sighting entirely in the training or test set. The system achieves a mean error rate of 22% across 100 randomized three-fold cross-validation experiments. Four of the six species had mean error rates lower than the overall mean, with the presumed Cuvier's beaked whale clicks showing the best performance (<2% error rate). Long-beaked common and bottlenose dolphins proved the most difficult to classify, with mean error rates of 53% and 68%, respectively.
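
    The split-by-sighting design described above maps naturally onto grouped cross-validation. A minimal sketch, assuming scikit-learn, one Gaussian mixture model per species, and random placeholder features in place of real cepstral vectors:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))             # placeholder cepstral features
y = rng.integers(0, 6, size=600)           # six species labels
sightings = rng.integers(0, 30, size=600)  # sighting IDs used as groups

errors = []
for train, test in GroupKFold(n_splits=3).split(X, y, groups=sightings):
    # Fit one GMM per species on training clicks; classify by max likelihood.
    labels = np.unique(y[train])
    models = [GaussianMixture(n_components=4, random_state=0)
              .fit(X[train][y[train] == s]) for s in labels]
    scores = np.column_stack([m.score_samples(X[test]) for m in models])
    pred = labels[scores.argmax(axis=1)]
    errors.append(np.mean(pred != y[test]))
print(f"mean error rate: {np.mean(errors):.2%}")
```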

  4. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    PubMed Central

    Liu, Shi Qiang; Zhu, Rong

    2016-01-01

    Error compensation for micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using a neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors could be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
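
    A minimal sketch of the compensation idea under stated assumptions: calibration pairs of raw, cross-coupled MIMU outputs and reference values (e.g., from a precision rate table) are available, and an off-the-shelf multilayer perceptron stands in for the paper's network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
raw = rng.normal(size=(5000, 6))                       # raw 6-axis outputs
coupling = np.eye(6) + 0.05 * rng.normal(size=(6, 6))  # synthetic cross-coupling
reference = raw @ np.linalg.inv(coupling)              # "true" rates/accels

# The network learns the inverse of the coupled, nonlinear sensor model.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(raw, reference)

residual = net.predict(raw) - reference
print("RMS residual per axis:", np.sqrt((residual**2).mean(axis=0)).round(4))
```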

  5. An Investigation into Soft Error Detection Efficiency at Operating System Level

    PubMed Central

    Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance. PMID:24574894

  6. An investigation into soft error detection efficiency at operating system level.

    PubMed

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.

  7. A long-term follow-up evaluation of electronic health record prescribing safety

    PubMed Central

    Abramson, Erika L; Malhotra, Sameer; Osorio, S Nena; Edwards, Alison; Cheriff, Adam; Cole, Curtis; Kaushal, Rainu

    2013-01-01

    Objective To be eligible for incentives through the Electronic Health Record (EHR) Incentive Program, many providers using older or locally developed EHRs will be transitioning to new, commercial EHRs. We previously evaluated prescribing errors made by providers in the first year following transition from a locally developed EHR with minimal prescribing clinical decision support (CDS) to a commercial EHR with robust CDS. Following system refinements, we conducted this study to assess the rates and types of errors 2 years after transition and determine the evolution of errors. Materials and methods We conducted a mixed methods cross-sectional case study of 16 physicians at an academic-affiliated ambulatory clinic from April to June 2010. We utilized standardized prescription and chart review to identify errors. Fourteen providers also participated in interviews. Results We analyzed 1905 prescriptions. The overall prescribing error rate was 3.8 per 100 prescriptions (95% CI 2.8 to 5.1). Error rates were significantly lower 2 years after transition (p<0.001 compared to pre-implementation, 12 weeks and 1 year after transition). Rates of near misses remained unchanged. Providers positively appreciated most system refinements, particularly reduced alert firing. Discussion Our study suggests that over time and with system refinements, use of a commercial EHR with advanced CDS can lead to low prescribing error rates, although more serious errors may require targeted interventions to eliminate them. Reducing alert firing frequency appears particularly important. Our results provide support for federal efforts promoting meaningful use of EHRs. Conclusions Ongoing error monitoring can allow CDS to be optimally tailored and help achieve maximal safety benefits. Clinical Trials Registration ClinicalTrials.gov, Identifier: NCT00603070. PMID:23578816

  8. Approaching Error-Free Customer Satisfaction through Process Change and Feedback Systems

    ERIC Educational Resources Information Center

    Berglund, Kristin M.; Ludwig, Timothy D.

    2009-01-01

    Employee-based errors result in quality defects that can often impact customer satisfaction. This study examined the effects of a process change and feedback system intervention on error rates of 3 teams of retail furniture distribution warehouse workers. Archival records of error codes were analyzed and aggregated as the measure of quality. The…

  9. Throughput of Coded Optical CDMA Systems with AND Detectors

    NASA Astrophysics Data System (ADS)

    Memon, Kehkashan A.; Umrani, Fahim A.; Umrani, A. W.; Umrani, Naveed A.

    2012-09-01

    Conventional detection techniques used in optical code-division multiple access (OCDMA) systems are not optimal and result in poor bit error rate performance. This paper analyzes the coded performance of optical CDMA systems with AND detectors for enhanced throughput efficiencies and improved error rate performance. The results show that the use of AND detectors significantly improves the performance of an optical channel.

  10. Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.

    PubMed

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements from these sensors are severely contaminated by errors caused by instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most research targets the displacement errors, utilizing either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environments.

  11. Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation

    PubMed Central

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements from these sensors are severely contaminated by errors caused by instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most research targets the displacement errors, utilizing either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environments. PMID:22247672
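
    One plausible reading of the quasi-static field (QSF) detection step, stated as an assumption rather than the authors' exact algorithm: flag windows where the magnetometer magnitude is locally stable, and let only those samples anchor attitude and gyro-bias updates in the EKF.

```python
import numpy as np

def qsf_windows(mag_xyz, win=50, tol=0.05):
    """Boolean mask of samples lying in quasi-static windows.
    mag_xyz: (N, 3) magnetometer samples; tol: allowed relative std-dev."""
    norm = np.linalg.norm(mag_xyz, axis=1)
    mask = np.zeros(norm.size, dtype=bool)
    for i in range(0, norm.size - win, win):
        seg = norm[i:i + win]
        if seg.std() < tol * seg.mean():   # field magnitude locally stable
            mask[i:i + win] = True
    return mask

rng = np.random.default_rng(2)
mag = rng.normal([20.0, 0.0, 45.0], 0.3, size=(1000, 3))  # mostly static field
print(f"{qsf_windows(mag).mean():.0%} of samples flagged quasi-static")
```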

  12. Decision support system for determining the contact lens for refractive errors patients with classification ID3

    NASA Astrophysics Data System (ADS)

    Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.

    2017-01-01

    Refractive errors are abnormalities of the refraction of light such that images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses for eyesight to return to normal. The appropriate glasses or contact lenses differ from person to person, influenced by patient age, tear production, vision prescription, and astigmatism. Because the eye is a vitally important organ of the human body, accuracy in determining which glasses or contact lenses to use is required. This research aims to develop a decision support system that can produce the right contact lens recommendation for refractive error patients with 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include sample code, patient age, astigmatism, tear production rate, and vision prescription, together with the classes that determine the outcome of the decision tree. The eye specialist's evaluation of the training data gave an accuracy rate of 96.7% and an error rate of 3.3%; testing with a confusion matrix gave an accuracy rate of 96.1% and an error rate of 3.1%; for the testing data, the accuracy rate was 100% with an error rate of 0%.
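
    The heart of ID3 is the entropy and information-gain computation named above. A small sketch on toy lens-style records (our own illustration, not the study's patient data):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, label):
    """Entropy reduction from splitting `rows` on attribute `attr`."""
    base = entropy([r[label] for r in rows])
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[label] for r in rows if r[attr] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

rows = [
    {"age": "young", "tears": "reduced", "lens": "none"},
    {"age": "young", "tears": "normal", "lens": "soft"},
    {"age": "presbyopic", "tears": "reduced", "lens": "none"},
    {"age": "presbyopic", "tears": "normal", "lens": "hard"},
]
for attr in ("age", "tears"):   # ID3 splits on the highest-gain attribute
    print(attr, round(information_gain(rows, attr, "lens"), 3))
```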

  13. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    PubMed

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.

  14. Human operator response to error-likely situations in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1988-01-01

    The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' response to error-likely situations was examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.

  15. SEU induced errors observed in microprocessor systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asenek, V.; Underwood, C.; Oldfield, M.

    In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.

  16. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method for constructing parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
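
    The extrapolation step can be written down directly: if Monte Carlo yields P_fail(w), the probability that a random weight-w error is uncorrectable, then for i.i.d. depolarizing probability p on n qubits the block error rate is a binomial mixture over error weights. A sketch under that reading, with hypothetical failure rates:

```python
from math import comb

def block_error_rate(p, n, p_fail):
    """p_fail maps error weight w -> estimated failure probability.
    Weights beyond those sampled are neglected here."""
    return sum(comb(n, w) * p**w * (1 - p)**(n - w) * f
               for w, f in p_fail.items())

# Hypothetical measured failure rates for a length-64 code (illustrative only).
p_fail = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.02, 4: 0.15, 5: 0.45, 6: 0.80}
for p in (1e-3, 3e-3, 1e-2):
    print(f"p = {p:g}: block error rate ~ {block_error_rate(p, 64, p_fail):.2e}")
```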

  17. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  18. Electronic Inventory Systems and Barcode Technology: Impact on Pharmacy Technical Accuracy and Error Liability

    PubMed Central

    Oldland, Alan R.; May, Sondra K.; Barber, Gerard R.; Stolpman, Nancy M.

    2015-01-01

    Purpose: To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. Methods: During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Results: Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Conclusions: Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training. PMID:25684799

  19. Electronic inventory systems and barcode technology: impact on pharmacy technical accuracy and error liability.

    PubMed

    Oldland, Alan R; Golightly, Larry K; May, Sondra K; Barber, Gerard R; Stolpman, Nancy M

    2015-01-01

    To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training.
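
    The final-phase comparison in the two records above can be closely reproduced with a two-proportion z-test on the reported counts (the paper does not state which test it used, so this is an assumption):

```python
from statsmodels.stats.proportion import proportions_ztest

# ADC + barcode scanning: 27 errors in 19,708 doses, versus the final phase
# with relabeling and training: 13 errors in 26,200 doses.
stat, pval = proportions_ztest(count=[27, 13], nobs=[19708, 26200])
print(f"z = {stat:.2f}, p = {pval:.4f}")  # p lands close to the reported .002
```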

  20. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  1. Automatic learning rate adjustment for self-supervising autonomous robot control

    NASA Technical Reports Server (NTRS)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

    Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate until the system stabilizes at a minimum error and learning rate. This abolishes the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure such as a broken joint, or an environmental change such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and the fault tolerance of the system.
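
    A minimal sketch of the error-coupled learning rate, with illustrative constants not taken from the original work: the rate tracks a low-pass-filtered positioning error, so learning "shuts off" as the arm converges and reactivates if the error grows again.

```python
def adaptive_lr(errors, alpha=0.1, gain=0.5, lr_max=1.0):
    """Return the learning-rate sequence for a stream of positioning errors."""
    smoothed, history = 0.0, []
    for e in errors:
        smoothed = (1 - alpha) * smoothed + alpha * abs(e)  # filtered error
        history.append(min(lr_max, gain * smoothed))        # coupled rate
    return history

# Error shrinks during learning, then jumps when a "joint breaks" at step 60.
errors = [1.0 * 0.95**t for t in range(60)] + [0.8 * 0.95**t for t in range(40)]
lrs = adaptive_lr(errors)
print(f"start {lrs[0]:.3f}, before fault {lrs[59]:.3f}, after fault {lrs[65]:.3f}")
```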

  2. In-Flight Pitot-Static Calibration

    NASA Technical Reports Server (NTRS)

    Foster, John V. (Inventor); Cunningham, Kevin (Inventor)

    2016-01-01

    A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.

  3. A comparison of medication administration errors from original medication packaging and multi-compartment compliance aids in care homes: A prospective observational study.

    PubMed

    Gilmartin-Thomas, Julia Fiona-Maree; Smith, Felicity; Wolfe, Rory; Jani, Yogini

    2017-07-01

    No published study has been specifically designed to compare medication administration errors between original medication packaging and multi-compartment compliance aids in care homes, using direct observation. Compare the effect of original medication packaging and multi-compartment compliance aids on medication administration accuracy. Prospective observational. Ten Greater London care homes. Nurses and carers administering medications. Between October 2014 and June 2015, a pharmacist researcher directly observed solid, orally administered medications in tablet or capsule form at ten purposively sampled care homes (five only used original medication packaging and five used both multi-compartment compliance aids and original medication packaging). The medication administration error rate was calculated as the number of observed doses administered (or omitted) in error according to medication administration records, compared to the opportunities for error (total number of observed doses plus omitted doses). Over 108.4 h, 41 different staff (35 nurses, 6 carers) were observed to administer medications to 823 residents during 90 medication administration rounds. A total of 2452 medication doses were observed (1385 from original medication packaging, 1067 from multi-compartment compliance aids). One hundred and seventy-eight medication administration errors were identified from 2493 opportunities for error (7.1% overall medication administration error rate). A greater medication administration error rate was seen for original medication packaging than multi-compartment compliance aids (9.3% and 3.1% respectively, risk ratio (RR)=3.9, 95% confidence interval (CI) 2.4 to 6.1, p<0.001). Similar differences existed when comparing medication administration error rates between original medication packaging (from original medication packaging-only care homes) and multi-compartment compliance aids (RR=2.3, 95% CI 1.1 to 4.9, p=0.03), and between original medication packaging and multi-compartment compliance aids within care homes that used a combination of both medication administration systems (RR=4.3, 95% CI 2.7 to 6.8, p<0.001). A significant difference in error rate was not observed between use of a single or combination medication administration system (p=0.44). The significant difference in, and high overall, medication administration error rate between original medication packaging and multi-compartment compliance aids supports the use of the latter in care homes, as well as local investigation of tablet and capsule impact on medication administration errors and staff training to prevent errors occurring. As a significant difference in error rate was not observed between use of a single or combination medication administration system, common practice of using both multi-compartment compliance aids (for most medications) and original packaging (for medications with stability issues) is supported. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbee, D; McCarthy, A; Galavis, P

    Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check for the errors most frequently observed clinically in each category. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policy and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The PlanCheck script is currently capable of checking for the following numbers of potential failure modes per category: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in reduction in error rates and improved efficiency. Future work includes: initiating full FMEA for planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.

  5. WE-H-BRC-09: Simulated Errors in Mock Radiotherapy Plans to Quantify the Effectiveness of the Physics Plan Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopan, O; Kalet, A; Smith, W

    2016-06-15

    Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into "mock" treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in the record and verify system (38% [18–61%]) and incorrect isocenter localization in the planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.
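
    The Wilson score interval named in the methods is a standard construction; a small implementation follows, with hypothetical counts, since the abstract does not give the raw number of reviews behind each proportion.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for k successes out of n trials."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(k=39, n=60)   # e.g., 65% detection over 60 error-reviews
print(f"65% detected, 95% CI [{lo:.0%}, {hi:.0%}]")
```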

  6. Symbol Error Rate of Underlay Cognitive Relay Systems over Rayleigh Fading Channel

    NASA Astrophysics Data System (ADS)

    Ho van, Khuong; Bao, Vo Nguyen Quoc

    Underlay cognitive systems allow secondary users (SUs) to access the licensed band allocated to primary users (PUs) for better spectrum utilization, with a power constraint imposed on SUs such that their operation does not harm the normal communication of PUs. This constraint, which limits the coverage range of SUs, can be offset by relaying techniques that take advantage of shorter range communication for lower path loss. Symbol error rate (SER) analysis of underlay cognitive relay systems over fading channels has not previously been reported in the literature; this paper fills that gap. The derived SER expressions are validated by simulations and show that underlay cognitive relay systems suffer a high error floor for any modulation level.
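
    As a baseline sanity check, not the paper's closed-form underlay-relay analysis, the symbol error rate of BPSK over a Rayleigh fading channel is easy to simulate and compare against its well-known closed form:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
for snr_db in (5, 10, 15):
    g = 10**(snr_db / 10)                                    # average SNR
    h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    s = rng.choice([-1.0, 1.0], size=n)                      # BPSK symbols
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * g)
    y = h * s + noise
    s_hat = np.sign((np.conj(h) * y).real)                   # coherent detection
    theory = 0.5 * (1 - np.sqrt(g / (1 + g)))                # textbook SER
    print(f"{snr_db} dB: simulated {np.mean(s_hat != s):.4f}, theory {theory:.4f}")
```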

  7. Comparison of medication safety effectiveness among nine critical access hospitals.

    PubMed

    Cochran, Gary L; Haynatzki, Gleb

    2013-12-15

    The rates of medication errors across three different medication dispensing and administration systems frequently used in critical access hospitals (CAHs) were analyzed. Nine CAHs agreed to participate in this prospective study and were assigned to one of three groups based on similarities in their medication-use processes: (1) less than 10 hours per week of onsite pharmacy support and no bedside barcode system, (2) onsite pharmacy support for 40 hours per week and no bedside barcode system, and (3) onsite pharmacy support for 40 or more hours per week with a bedside barcode system. Errors were characterized by severity, phase of origination, type, and cause. Characteristics of the medication being administered and a number of best practices were collected for each medication pass. Logistic regression was used to identify significant predictors of errors. A total of 3103 medication passes were observed. More medication errors originated in hospitals that had onsite pharmacy support for less than 10 hours per week and no bedside barcode system than in other types of hospitals. A bedside barcode system had the greatest impact on lowering the odds of an error reaching the patient. Wrong dose and omission were common error types. Human factors and communication were the two most frequently identified causes of error for all three systems. Medication error rates were lower in CAHs with 40 or more hours per week of onsite pharmacy support with or without a bedside barcode system compared with hospitals with less than 10 hours per week of pharmacy support and no bedside barcode system.

  8. Detecting imipenem resistance in Acinetobacter baumannii by automated systems (BD Phoenix, Microscan WalkAway, Vitek 2); high error rates with Microscan WalkAway

    PubMed Central

    2009-01-01

    Background Increasing reports of carbapenem resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracies of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparing them to validated test methods. Methods A selection of 112 clinical isolates of A. baumannii collected between January 2003 and May 2006 was tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan correctly identified all A. baumannii strains, while Vitek 2 failed to identify one strain and Phoenix failed to identify two strains and misidentified two others. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly higher, by 0.3%, than the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing, with unacceptable error rates: 28 very major (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest that clinical laboratories using the MicroScan system routinely should consider using a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298

  9. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
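
    For reference, the white-uniform quantizer model invoked above has a standard closed form; this is the textbook result, which may differ in detail from the paper's exact definitions:

```latex
% Uniform quantizer with step size \Delta, error modeled as white and uniform:
\sigma_q^2 = \frac{\Delta^2}{12}, \qquad
\mathrm{SNR}_{\mathrm{eff}} = 10 \log_{10} \frac{\sigma_s^2}{\sigma_q^2}
  \approx 6.02\,N + 1.76~\mathrm{dB},
% where the last form holds for a full-scale sinusoid into an N-bit quantizer.
```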

  10. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection

    PubMed Central

    Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-01-01

    Background The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474

  11. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.

    PubMed

    Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-08-18

    The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.
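
    The trend analysis described in both records maps onto a plain logistic regression of per-item error outcomes on days of application use, with exp(coefficient) giving the odds ratio per day. A synthetic sketch, calibrated to the figures the abstracts report:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data shaped like the study: 7817 error opportunities over 45 days,
# error odds starting near 2.3% and declining by a factor of 0.969 per day.
rng = np.random.default_rng(4)
days = rng.integers(0, 46, size=7817)
logit_p = np.log(0.023 / 0.977) + np.log(0.969) * days
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = sm.Logit(y, sm.add_constant(days.astype(float))).fit(disp=False)
print(f"OR per day of use: {np.exp(fit.params[1]):.3f}")   # ~0.969
```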

  12. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    PubMed

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  13. Smart photodetector arrays for error control in page-oriented optical memory

    NASA Astrophysics Data System (ADS)

    Schaffer, Maureen Elizabeth

    1998-12-01

    Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can instead be received by two-dimensional arrays of "smart" photodetector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data modulation and error correction coding for the purpose of error control in the POM system. These techniques are adapted, where possible, for 2D data and evaluated as to their suitability for a SPA implementation in terms of BER, code rate, decoder time and pixel complexity. Our analysis shows that differential data modulation combined with relatively simple block codes known as array codes provides a powerful means to achieve the desired data transfer rates while reducing error rates to industry requirements. Finally, we demonstrate the first smart photodetector array designed to perform parallel error correction on an entire page of data and satisfy the sustained data rates of page-oriented optical memories. Our implementation integrates a monolithic PN photodiode array and differential input receiver for optoelectronic signal conversion with a cluster error correction code using 0.35-µm CMOS. This approach provides high sensitivity, low electrical power dissipation, and fast parallel correction of 2 × 2-bit cluster errors in an 8 × 8-bit code block to achieve corrected output data rates scalable to 102 Gbps in the current technology, increasing to 1.88 Tbps in 0.1-µm CMOS.

  14. Impact of Internally Developed Electronic Prescription on Prescribing Errors at Discharge from the Emergency Department

    PubMed Central

    Hitti, Eveline; Tamim, Hani; Bakhti, Rinad; Zebian, Dina; Mufarrij, Afif

    2017-01-01

    Introduction Medication errors are common, with studies reporting at least one error per patient encounter. At hospital discharge, medication errors vary from 15%–38%. However, studies assessing the effect of an internally developed electronic (E)-prescription system at discharge from an emergency department (ED) are comparatively minimal. Additionally, commercially available electronic solutions are cost-prohibitive in many resource-limited settings. We assessed the impact of introducing an internally developed, low-cost E-prescription system, with a list of commonly prescribed medications, on prescription error rates at discharge from the ED, compared to handwritten prescriptions. Methods We conducted a pre- and post-intervention study comparing error rates in a randomly selected sample of discharge prescriptions (handwritten versus electronic) five months pre and four months post the introduction of the E-prescription. The internally developed E-prescription system included a list of 166 commonly prescribed medications with the generic name, strength, dose, frequency and duration. We included a total of 2,883 prescriptions in this study: 1,475 in the pre-intervention phase were handwritten (HW) and 1,408 in the post-intervention phase were electronic. We calculated rates of 14 different errors and compared them between the pre- and post-intervention period. Results Overall, E-prescriptions included fewer prescription errors as compared to HW-prescriptions. Specifically, E-prescriptions reduced missing dose (11.3% to 4.3%, p < 0.0001), missing frequency (3.5% to 2.2%, p = 0.04), missing strength errors (32.4% to 10.2%, p < 0.0001) and legibility errors (0.7% to 0.2%, p = 0.005). E-prescriptions, however, were associated with a significant increase in duplication errors, specifically with home medication (1.7% to 3%, p = 0.02). Conclusion A basic, internally developed E-prescription system, featuring commonly used medications, effectively reduced medication errors in a low-resource setting where the costs of sophisticated commercial electronic solutions are prohibitive. PMID:28874948

  15. Impact of Internally Developed Electronic Prescription on Prescribing Errors at Discharge from the Emergency Department.

    PubMed

    Hitti, Eveline; Tamim, Hani; Bakhti, Rinad; Zebian, Dina; Mufarrij, Afif

    2017-08-01

    Medication errors are common, with studies reporting at least one error per patient encounter. At hospital discharge, medication errors vary from 15%-38%. However, studies assessing the effect of an internally developed electronic (E)-prescription system at discharge from an emergency department (ED) are comparatively minimal. Additionally, commercially available electronic solutions are cost-prohibitive in many resource-limited settings. We assessed the impact of introducing an internally developed, low-cost E-prescription system, with a list of commonly prescribed medications, on prescription error rates at discharge from the ED, compared to handwritten prescriptions. We conducted a pre- and post-intervention study comparing error rates in a randomly selected sample of discharge prescriptions (handwritten versus electronic) five months pre and four months post the introduction of the E-prescription. The internally developed, E-prescription system included a list of 166 commonly prescribed medications with the generic name, strength, dose, frequency and duration. We included a total of 2,883 prescriptions in this study: 1,475 in the pre-intervention phase were handwritten (HW) and 1,408 in the post-intervention phase were electronic. We calculated rates of 14 different errors and compared them between the pre- and post-intervention period. Overall, E-prescriptions included fewer prescription errors as compared to HW-prescriptions. Specifically, E-prescriptions reduced missing dose (11.3% to 4.3%, p <0.0001), missing frequency (3.5% to 2.2%, p=0.04), missing strength errors (32.4% to 10.2%, p <0.0001) and legibility (0.7% to 0.2%, p=0.005). E-prescriptions, however, were associated with a significant increase in duplication errors, specifically with home medication (1.7% to 3%, p=0.02). A basic, internally developed E-prescription system, featuring commonly used medications, effectively reduced medication errors in a low-resource setting where the costs of sophisticated commercial electronic solutions are prohibitive.

  16. A national physician survey of diagnostic error in paediatrics.

    PubMed

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54% (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0%) respondents. Consultants reported significantly fewer diagnostic errors than trainees (p = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15%. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  17. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
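    The paper states only that the adaptation rate combines the error, squared error and integral error; the toy sketch below is one plausible reading of such a direct tuning law, with an invented first-order culture response and invented gains, not the authors' model.

```python
# One plausible reading of a direct tuning law combining error, squared error
# and integral error; the plant and all gains are invented for illustration.
def simulate(mu_set=0.10, g1=2.0, g2=5.0, g3=0.2, dt=0.05, T=300.0):
    mu, I, u = 0.0, 0.0, 0.0   # observed growth rate, error integral, feed action
    for _ in range(int(T / dt)):
        e = mu_set - mu
        I += e * dt
        u += (g1 * e + g2 * e * abs(e) + g3 * I) * dt   # adaptation law
        mu += dt * (-0.5 * mu + 0.5 * u)   # toy first-order culture response
    return mu

print(f"final specific growth rate = {simulate():.3f} (set point 0.100)")
```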

  18. Effect of bar-code technology on the safety of medication administration.

    PubMed

    Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K

    2010-05-06

    Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3,082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate), a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.)

  19. The Use of Categorized Time-Trend Reporting of Radiation Oncology Incidents: A Proactive Analytical Approach to Improving Quality and Safety Over Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Anthony, E-mail: anthony.arnold@sesiahs.health.nsw.gov.a; Delaney, Geoff P.; Cassapi, Lynette

    Purpose: Radiotherapy is a common treatment for cancer patients. Although the incidence of error is low, errors can be severe or affect significant numbers of patients. In addition, errors will often not manifest until long periods after treatment. This study describes the development of an incident reporting tool that allows categorical analysis and time-trend reporting, covering the first 3 years of use. Methods and Materials: A radiotherapy-specific incident analysis system was established. Staff members were encouraged to report actual errors and near-miss events detected at the prescription, simulation, planning, or treatment phases of radiotherapy delivery. Trend reporting was reviewed monthly. Results: Reports were analyzed for the first 3 years of operation (May 2004-2007). A total of 688 reports were received during the study period. The actual error rate was 0.2% per treatment episode. During the study period, the actual error rates reduced significantly from 1% per year to 0.3% per year (p < 0.001), as did the total event report rates (p < 0.0001). There were 3.5 times as many near misses reported compared with actual errors. Conclusions: This system has allowed real-time analysis of events within a radiation oncology department and has contributed to a reduced error rate through a focus on learning and prevention from the near-miss reports. Plans are underway to develop this reporting tool for Australia and New Zealand.

  20. Computer-assisted bar-coding system significantly reduces clinical laboratory specimen identification errors in a pediatric oncology hospital.

    PubMed

    Hayden, Randall T; Patterson, Donna J; Jay, Dennis W; Cross, Carl; Dotson, Pamela; Possel, Robert E; Srivastava, Deo Kumar; Mirro, Joseph; Shenep, Jerry L

    2008-02-01

    To assess the ability of a bar code-based electronic positive patient and specimen identification (EPPID) system to reduce identification errors in a pediatric hospital's clinical laboratory. An EPPID system was implemented at a pediatric oncology hospital to reduce errors in patient and laboratory specimen identification. The EPPID system included bar-code identifiers and handheld personal digital assistants supporting real-time order verification. System efficacy was measured in 3 consecutive 12-month time frames, corresponding to periods before, during, and immediately after full EPPID implementation. A significant reduction in the median percentage of mislabeled specimens was observed in the 3-year study period. A decline from 0.03% to 0.005% (P < .001) was observed in the 12 months after full system implementation. On the basis of the pre-intervention detected error rate, it was estimated that EPPID prevented at least 62 mislabeling events during its first year of operation. EPPID decreased the rate of misidentification of clinical laboratory samples. The diminution of errors observed in this study provides support for the development of national guidelines for the use of bar coding for laboratory specimens, paralleling recent recommendations for medication administration.

  1. [Design and accuracy analysis of upper slicing system of MSCT].

    PubMed

    Jiang, Rongjian

    2013-05-01

    The upper slicing system is the main component of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis to improve the accuracy of imaging. The errors in slice thickness and ray center caused by the bearings, screw, and control system were analyzed and tested. The measured accumulated error is less than 1 µm, and the measured absolute error is less than 10 µm. Improving the accuracy of the upper slicing system contributes to appropriate treatment methods and a higher treatment success rate.

  2. Error Detection in Mechanized Classification Systems

    ERIC Educational Resources Information Center

    Hoyle, W. G.

    1976-01-01

    When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…

  3. Outpatient Prescribing Errors and the Impact of Computerized Prescribing

    PubMed Central

    Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W

    2005-01-01

    Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752

  4. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    NASA Astrophysics Data System (ADS)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to compute the bit error rate expression for multiuser chaos-based DS-CDMA systems is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, and a simple RAKE receiver structure is adopted. Based on the bit energy distribution, this approach gives accurate results with a low computational load compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, as well as chaos synchronization, is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
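    A hedged, single-user AWGN simplification of the central idea: because a chaotic spreading sequence has non-constant chip energy, the bit energy Eb varies from bit to bit, and the BER is the average of Q(sqrt(2*Eb/N0)) over the empirical bit-energy distribution. The multipath channel, multiuser interference and RAKE combining treated in the paper are omitted here.

```python
# Single-user AWGN simplification: average Q(sqrt(2*Eb/N0)) over the empirical
# bit-energy distribution produced by a chaotic (Chebyshev) spreading sequence.
import numpy as np
from scipy.stats import norm

def chebyshev_chips(n, x0=0.7):          # order-2 Chebyshev map on [-1, 1]
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = 1.0 - 2.0 * x[i - 1] ** 2
    return x

beta = 32                                # spreading factor (chips per bit)
chips = chebyshev_chips(beta * 10_000)
Eb = (chips.reshape(-1, beta) ** 2).sum(axis=1)   # energy varies bit to bit
N0 = Eb.mean() / 10 ** (6 / 10)          # fixes the *average* Eb/N0 at 6 dB
ber = norm.sf(np.sqrt(2 * Eb / N0)).mean()        # norm.sf is the Q function
print(f"predicted BER = {ber:.2e}")
```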

  5. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery

    PubMed Central

    Arba-Mosquera, Samuel; Aslanides, Ioannis M.

    2012-01-01

    Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model, which directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature, as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay, has been developed. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates basically duplicate pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than observed decentrations in the clinical settings. There is no single parameter that ‘alone’ minimizes the positioning error. It is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.
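    A much-reduced, back-of-envelope version of the model (our own assumptions, not the authors' comprehensive one): if the eye moves at an equivalent speed v during the total loop delay, a pulse lands off-target by roughly v times that delay, taken here as half the tracker sampling interval plus tracker latency plus scanner positioning time.

```python
# Back-of-envelope positioning error: eye speed times total loop delay
# (half a sampling interval + tracker latency + scanner positioning time).
# The 100 mm/s equivalent eye speed is an assumed value.
def pulse_error_mm(v_mm_s, acq_rate_hz, latency_ms, scanner_ms):
    lag_s = 0.5 / acq_rate_hz + (latency_ms + scanner_ms) / 1e3
    return v_mm_s * lag_s

for rate_hz in (60, 100, 250, 1000):
    print(f"{rate_hz:4d} Hz tracker -> {pulse_error_mm(100, rate_hz, 15, 9):.2f} mm")
```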

  6. Analysis of ionospheric refraction error corrections for GRARR systems

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.

    1971-01-01

    A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.

  7. Bayesian assessment of overtriage and undertriage at a level I trauma centre.

    PubMed

    DiDomenico, Paul B; Pietzsch, Jan B; Paté-Cornell, M Elisabeth

    2008-07-13

    We analysed the trauma triage system at a specific level I trauma centre to assess rates of over- and undertriage and to support recommendations for system improvements. The triage process is designed to estimate the severity of patient injury and allocate resources accordingly, with potential errors of overestimation (overtriage) consuming excess resources and underestimation (undertriage) potentially leading to medical errors. We first modelled the overall trauma system using risk analysis methods to understand interdependencies among the actions of the participants. We interviewed six experienced trauma surgeons to obtain their expert opinion of the over- and undertriage rates occurring in the trauma centre. We then assessed actual over- and undertriage rates in a random sample of 86 trauma cases collected over a six-week period at the same centre. We employed Bayesian analysis to quantitatively combine the data with the prior probabilities derived from expert opinion in order to obtain posterior distributions. The results were estimates of overtriage and undertriage in 16.1% and 4.9% of patients, respectively. This Bayesian approach, which provides a quantitative assessment of the error rates using both case data and expert opinion, provides a rational means of obtaining a best estimate of the system's performance. The overall approach that we describe in this paper can be employed more widely to analyse complex health care delivery systems, with the objective of reduced errors, patient risk and excess costs.
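    The Beta-Binomial update below sketches the paper's approach of combining an expert prior with observed cases; the prior parameters and the observed count are invented for illustration (chosen so the posterior mean lands near the reported 16.1% overtriage estimate), not the elicited values.

```python
# Beta prior (expert opinion) + binomial data -> Beta posterior for the
# overtriage rate. Prior and counts are invented; the posterior mean happens
# to land near the reported 16.1%.
from scipy.stats import beta

a0, b0 = 4, 16          # prior: mean 0.20, modest confidence (hypothetical)
k, n = 13, 86           # hypothetical: 13 overtriaged among the 86 cases
post = beta(a0 + k, b0 + n - k)
lo, hi = post.ppf([0.025, 0.975])
print(f"posterior mean = {post.mean():.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")
```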

  8. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-09

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, inpatient errors are more severe than outpatient errors.

  9. Social deviance activates the brain's error-monitoring system.

    PubMed

    Kim, Bo-Rin; Liss, Alison; Rao, Monica; Singer, Zachary; Compton, Rebecca J

    2012-03-01

    Social psychologists have long noted the tendency for human behavior to conform to social group norms. This study examined whether feedback indicating that participants had deviated from group norms would elicit a neural signal previously shown to be elicited by errors and monetary losses. While electroencephalograms were recorded, participants (N = 30) rated the attractiveness of 120 faces and received feedback giving the purported average rating made by a group of peers. The feedback was manipulated so that group ratings either were the same as a participant's rating or deviated by 1, 2, or 3 points. Feedback indicating deviance from the group norm elicited a feedback-related negativity, a brainwave signal known to be elicited by objective performance errors and losses. The results imply that the brain treats deviance from social norms as an error.

  10. Performance evaluation of FSO system using wavelength and time diversity over malaga turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Balaji, K. A.; Prabu, K.

    2018-03-01

    There is an immense demand for high bandwidth and high data rate systems, which is fulfilled by wireless optical communication or free space optics (FSO). Hence, FSO has gained a pivotal role in research, with the added advantage of being cost-effective and offering huge licence-free bandwidth. Unfortunately, the optical signal in free space suffers from irradiance and phase fluctuations due to atmospheric turbulence and pointing errors, which deteriorate the signal and degrade the performance of the communication system over longer distances. In this paper, we consider a polarization shift keying (POLSK) system applied with wavelength and time diversity techniques over the Malaga (M) distribution to mitigate turbulence-induced fading. We derive closed-form mathematical expressions for estimating the system's outage probability and average bit error rate (BER). Ultimately, from the results we can infer that wavelength and time diversity schemes enhance the performance of these systems.

  11. Data quality in a DRG-based information system.

    PubMed

    Colin, C; Ecochard, R; Delahaye, F; Landrivon, G; Messy, P; Morgon, E; Matillon, Y

    1994-09-01

    The aim of this study initiated in May 1990 was to evaluate the quality of the medical data collected from the main hospital of the "Hospices Civils de Lyon", Edouard Herriot Hospital. We studied a random sample of 593 discharge abstracts from 12 wards of the hospital. Quality control was performed by checking multi-hospitalized patients' personal data, checking that each discharge abstract was exhaustive, examining the quality of abstracting, studying diagnoses and medical procedures coding, and checking data entry. Assessment of personal data showed a 4.4% error rate. It was mainly accounted for by spelling mistakes in surnames and first names, and mistakes in dates of birth. The quality of a discharge abstract was estimated according to the two purposes of the medical information system: description of hospital morbidity per patient and Diagnosis Related Group's case mix. Error rates in discharge abstracts were expressed in two ways: an overall rate for errors of concordance between Discharge Abstracts and Medical Records, and a specific rate for errors modifying classification in Diagnosis Related Groups (DRG). For abstracting medical information, these error rates were 11.5% (SE +/- 2.2) and 7.5% (SE +/- 1.9), respectively. For coding diagnoses and procedures, they were 11.4% (SE +/- 1.5) and 1.3% (SE +/- 0.5), respectively. For data entry on the computerized database, the error rates were 2% (SE +/- 0.5) and 0.2% (SE +/- 0.05), respectively. Quality control must be performed regularly because it demonstrates the degree of participation from health care teams and the coherence of the database. (ABSTRACT TRUNCATED AT 250 WORDS)

  12. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  13. Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Fisher, Brad L.; Wolff, David B.

    2007-01-01

    This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to the TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured from the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7-minute or longer time scales, the errors are dramatically reduced. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rate and radar reflectivity data are mostly insensitive to the event definition.
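    A minimal sketch of the cubic-spline idea (not the operational 2A-56 code): fit a spline to cumulative rainfall at the tip times (one tip = 0.254 mm for a standard gauge) and differentiate it to obtain one-minute rain rates. The tip times below are invented.

```python
# Spline the cumulative tips, differentiate for rain rate; tip times invented.
import numpy as np
from scipy.interpolate import CubicSpline

tip_mm = 0.254                                        # rainfall per tip
tips_min = np.array([0.0, 2.1, 3.0, 3.6, 4.1, 4.9, 6.5, 10.2])  # tip times
cum_mm = tip_mm * np.arange(1, len(tips_min) + 1)     # cumulative rainfall

spline = CubicSpline(tips_min, cum_mm)
minutes = np.arange(0, 11)
rate_mm_hr = spline(minutes, 1) * 60.0                # d(cum)/dt in mm/hr
print(np.round(rate_mm_hr, 2))
```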

  14. Effect of Vertical Rate Error on Recovery from Loss of Well Clear Between UAS and Non-Cooperative Intruders

    NASA Technical Reports Server (NTRS)

    Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2016-01-01

    When an Unmanned Aircraft System (UAS) encounters an intruder and is unable to maintain required temporal and spatial separation between the two vehicles, it is referred to as a loss of well-clear. In this state, the UAS must make its best attempt to regain separation while maximizing the minimum separation between itself and the intruder. When encountering a non-cooperative intruder (an aircraft operating under visual flight rules without ADS-B or an active transponder), the UAS must rely on its radar system to provide the intruder's location, velocity, and heading information. As many UAS have limited climb and descent performance, vertical position and/or vertical rate errors make it difficult to determine whether an intruder will pass above or below them. To account for that, there is a proposal by RTCA Special Committee 228 to prohibit guidance systems from providing vertical guidance to regain well-clear to UAS in an encounter with a non-cooperative intruder unless their radar system has vertical position errors below 175 feet (95%) and vertical velocity errors below 200 fpm (95%). Two sets of fast-time parametric studies were conducted, each with 54,000 pairwise encounters between a UAS and a non-cooperative intruder, to determine the suitability of offering vertical guidance to regain well clear to a UAS in the presence of radar sensor noise. The UAS was not allowed to maneuver until it received well-clear recovery guidance. The maximum severity of the loss of well-clear was logged and used as the primary indicator of the separation achieved by the UAS. One set of 54,000 encounters allowed the UAS to maneuver either vertically or horizontally, while the second permitted horizontal maneuvers only. Comparing the two data sets allowed researchers to see the effect of allowing vertical guidance to a UAS for a particular encounter and vertical rate error. Study results show there is a small reduction in the average severity of a loss of well-clear when vertical maneuvers are suppressed, for all vertical error rate thresholds examined. However, results also show that in roughly 35% of the encounters where a vertical maneuver was selected, forcing the UAS to do a horizontal maneuver instead increased the severity of the loss of well-clear for that encounter. Finally, results showed a small reduction in the number of severe losses of well-clear when the high-performance UAS (2,000 fpm climb and descent rate) was allowed to maneuver vertically and the vertical rate error was below 500 fpm. Overall, the results show that using a single vertical rate threshold is not advisable, and that limiting a UAS to horizontal maneuvers when vertical rate errors are above 175 fpm can make a UAS less safe about a third of the time. It is suggested that the hard limit be removed, and system manufacturers instructed to account for their own UAS performance, as well as vertical rate error and encounter geometry, when determining whether or not to provide vertical guidance to regain well-clear.

  15. Comparison of medication safety systems in critical access hospitals: Combined analysis of two studies.

    PubMed

    Cochran, Gary L; Barrett, Ryan S; Horn, Susan D

    2016-08-01

    The role of pharmacist transcription, onsite pharmacist dispensing, use of automated dispensing cabinets (ADCs), nurse-nurse double checks, or barcode-assisted medication administration (BCMA) in reducing medication error rates in critical access hospitals (CAHs) was evaluated. Investigators used the practice-based evidence methodology to identify predictors of medication errors in 12 Nebraska CAHs. Detailed information about each medication administered was recorded through direct observation. Errors were identified by comparing the observed medication administered with the physician's order. Chi-square analysis and Fisher's exact test were used to measure differences between groups of medication-dispensing procedures. Nurses observed 6497 medications being administered to 1374 patients. The overall error rate was 1.2%. The transcription error rates for orders transcribed by an onsite pharmacist were slightly lower than for orders transcribed by a telepharmacy service (0.10% and 0.33%, respectively). Fewer dispensing errors occurred when medications were dispensed by an onsite pharmacist versus any other method of medication acquisition (0.10% versus 0.44%, p = 0.0085). The rates of dispensing errors for medications that were retrieved from a single-cell ADC (0.19%), a multicell ADC (0.45%), or a drug closet or general supply (0.77%) did not differ significantly. BCMA was associated with a higher proportion of dispensing and administration errors intercepted before reaching the patient (66.7%) compared with either manual double checks (10%) or no BCMA or double check (30.4%) of the medication before administration (p = 0.0167). Onsite pharmacist dispensing and BCMA were associated with fewer medication errors and are important components of a medication safety strategy in CAHs.

  16. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing-biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.
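    The thesis's asymmetric-code idea can be caricatured in a few lines: a length-n phase-flip repetition code majority-votes away the common Z errors while leaving the rare X errors uncorrected, so the code length trades Z suppression against X exposure. The rates and decoder below are illustrative, not from the thesis.

```python
# Length-n phase-flip repetition code under biased noise: majority voting
# suppresses the common Z errors; any X error on any qubit is logical.
# All rates are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def logical_error_rate(n, p_z, p_x, trials=200_000):
    z_flips = rng.random((trials, n)) < p_z
    z_fail = z_flips.sum(axis=1) > n // 2            # majority vote fails
    x_fail = rng.random(trials) < 1 - (1 - p_x) ** n # any X error is fatal
    return np.mean(z_fail | x_fail)

for n in (1, 3, 5, 7):
    print(f"n = {n}: logical error rate = {logical_error_rate(n, 0.05, 0.001):.4f}")
```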

  17. The effect of jitter on the performance of space coherent optical communication system with Costas loop

    NASA Astrophysics Data System (ADS)

    Li, Xin; Hong, Yifeng; Wang, Jinfang; Liu, Yang; Sun, Xun; Li, Mi

    2018-01-01

    The numerous communication techniques and optical devices that have been successfully applied in space optical communication systems indicate the good portability of such systems. With this good portability, the typical coherent demodulation technique of the Costas loop can be easily adopted in a space optical communication system. As one of the components of pointing error, jitter plays an important role in the communication quality of such a system. Here, we obtain the probability density functions (PDF) for different degrees of jitter and explain their essential effect on the bit error rate (BER) of a space optical communication system. Also, under the effect of jitter, we study the bit error rate of a space coherent optical communication system using a Costas loop for different system parameters: transmission power, divergence angle, receiving diameter, avalanche photodiode (APD) gain, and the phase deviation caused by the Costas loop. Through a numerical simulation of this kind of communication system, we demonstrate the relationship between the BER and these system parameters, and some corresponding methods of system optimization are presented to enhance the communication quality.

  18. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field-programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (error/particle/cm^2), while the MTTF is approximately 110.7 h. PMID:27583533
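    The arithmetic connecting such measurements to an in-service failure rate is simple: a cross-section (errors per particle per cm^2) from the beam run times an assumed particle flux gives an error rate, whose inverse is the MTTF. Only the 10^-3 cross-section figure below is from the paper; the flux is invented so the numbers come out near the reported 110.7 h.

```python
# Cross-section from the beam run times an assumed flux gives the error rate;
# its inverse is the MTTF. The flux below is invented for illustration.
sigma = 1e-3        # SFER from the paper, error / (particle/cm^2)
flux = 2.5e-3       # assumed particle flux, particles / (cm^2 * s)
rate = sigma * flux # functional errors per second
mttf_h = 1.0 / rate / 3600.0
print(f"MTTF = {mttf_h:.1f} h")   # ~111 h, near the reported 110.7 h
```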

  19. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field-programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (error/particle/cm^2), while the MTTF is approximately 110.7 h.

  20. Evaluation of a Web-based Error Reporting Surveillance System in a Large Iranian Hospital.

    PubMed

    Askarian, Mehrdad; Ghoreishi, Mahboobeh; Akbari Haghighinejad, Hourvash; Palenik, Charles John; Ghodsi, Maryam

    2017-08-01

    Proper reporting of medical errors helps healthcare providers learn from adverse incidents and improve patient safety. A well-designed and functioning confidential reporting system is an essential component of this process. There are many error reporting methods; however, web-based systems are often preferred because they can provide comprehensive and more easily analyzed information. This study addresses the use of a web-based error reporting system. This interventional study involved the application of an in-house designed "voluntary web-based medical error reporting system." The system has been used since July 2014 in Nemazee Hospital, Shiraz University of Medical Sciences. The rate and severity of errors reported during the year prior and the year after system launch were compared. The slope of the error report trend line was steep during the first 12 months (B = 105.727, P = 0.00). However, it slowed following launch of the web-based reporting system and was no longer statistically significant (B = 15.27, P = 0.81) by the end of the second year. Most recorded errors were no-harm laboratory types and were due to inattention. Usually, they were reported by nurses and other permanent employees. Most reported errors occurred during morning shifts. Using a standardized web-based error reporting system can be beneficial. This study reports on the performance of an in-house designed reporting system, which appeared to properly detect and analyze medical errors. The system also generated follow-up reports in a timely and accurate manner. Detection of near-miss errors could play a significant role in identifying areas of system defects.

  1. WE-A-17A-03: Catheter Digitization in High-Dose-Rate Brachytherapy with the Assistance of An Electromagnetic (EM) Tracking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, AL; Bhagwat, MS; Buzurovic, I

    Purpose: To investigate the use of a system using EM tracking, post-processing and error-detection algorithms for measuring brachytherapy catheter locations and for detecting errors and resolving uncertainties in treatment-planning catheter digitization. Methods: An EM tracker was used to localize 13 catheters in a clinical surface applicator (A) and 15 catheters inserted into a phantom (B). Two pairs of catheters in (B) crossed paths at a distance <2 mm, producing an undistinguishable catheter artifact in that location. EM data were post-processed for noise reduction and reformatted to provide the dwell location configuration. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT). EM dwell digitization error was characterized in terms of the average and maximum distance between corresponding EM and CT dwells per catheter. The error detection rate (detected errors / all errors) was calculated for 3 types of errors: swap of two catheter numbers; incorrect catheter number identification superior to the closest position between two catheters (mix); and catheter-tip shift. Results: The averages ± 1 standard deviation of the average and maximum registration error per catheter were 1.9±0.7 mm and 3.0±1.1 mm for (A) and 1.6±0.6 mm and 2.7±0.8 mm for (B). The error detection rate was 100% (A and B) for swap errors, mix errors, and shifts >4.5 mm (A) and >5.5 mm (B); errors were detected for shifts on average >2.0 mm (A) and >2.4 mm (B). Both mix errors associated with undistinguishable catheter artifacts were detected and at least one of the involved catheters was identified. Conclusion: We demonstrated the use of an EM tracking system for localization of brachytherapy catheters, detection of digitization errors and resolution of undistinguishable catheter artifacts. Automatic digitization may be possible with a registration between the imaging and the EM frame of reference. Research funded by the Kaye Family Award 2012.
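    The two checks at the heart of the workflow, per-catheter registration error and catheter-number swap detection, can be sketched as below; the data layout (one (dwells x 3) position array per catheter) and the nearest-track assignment rule are our assumptions, not the authors' algorithm.

```python
# Assumed layout: one (n_dwells x 3) mm-coordinate array per catheter for EM
# and for CT. Mean/max distances grade registration; a nearest-track
# assignment exposes catheter-number swaps.
import numpy as np

def dwell_errors(em, ct):
    d = np.linalg.norm(em - ct, axis=1)
    return d.mean(), d.max()

def detect_swaps(em_tracks, ct_tracks):
    cost = np.array([[dwell_errors(em, ct)[0] for ct in ct_tracks]
                     for em in em_tracks])
    best = cost.argmin(axis=1)          # closest CT track per EM track
    return [(i, j) for i, j in enumerate(best) if i != j]

rng = np.random.default_rng(2)
ct = [rng.uniform(0, 100, (10, 3)) for _ in range(4)]
em = [c + rng.normal(0, 1.0, c.shape) for c in ct]   # ~1 mm EM noise
em[0], em[1] = em[1], em[0]             # simulate a catheter-number swap
print(detect_swaps(em, ct))             # -> [(0, 1), (1, 0)]
```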

  2. Modulation/demodulation techniques for satellite communications. Part 1: Background

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1981-01-01

    Basic characteristics of digital data transmission systems are described, including the physical communication links, the notion of bandwidth, FCC regulations, and performance measures such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed, along with the important coding parameter, channel cutoff rate.

  3. Performance Analysis of Amplify-and-Forward Relaying FSO/SC-QAM Systems over Weak Turbulence Channels and Pointing Error Impairments

    NASA Astrophysics Data System (ADS)

    Trung, Ha Duyen

    2017-12-01

    In this paper, the end-to-end performance of a free-space optical (FSO) communication system combined with Amplify-and-Forward (AF)-assisted, or fixed-gain, relaying technology using subcarrier quadrature amplitude modulation (SC-QAM) over weak atmospheric turbulence channels modeled by the log-normal distribution with pointing error impairments is studied. More specifically, unlike previous studies on AF relaying FSO communication systems without pointing error effects, the pointing error effect is studied by taking into account the influence of beamwidth, aperture size and jitter variance. In addition, a combination of these models is used to analyze the combined effect of atmospheric turbulence and pointing errors on AF relaying FSO/SC-QAM systems. Finally, an analytical expression is derived to evaluate the average symbol error rate (ASER) performance of such systems. The numerical results show the impact of pointing errors on the performance of AF relaying FSO/SC-QAM systems and how proper values of aperture size and beamwidth can be used to improve the performance of such systems. Some analytical results are confirmed by Monte-Carlo simulations.
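    A Monte-Carlo sketch of the same quantity (not the paper's closed-form ASER): two log-normal turbulence hops, a multiplicative fixed-gain relay approximation, the common Rayleigh-jitter pointing-loss model, and the standard 4(1-1/sqrt(M))Q(sqrt(3*snr/(M-1))) M-QAM symbol-error approximation. Every parameter value is assumed.

```python
# Monte-Carlo ASER: two log-normal hops, multiplicative relay approximation,
# Rayleigh-jitter pointing loss, M-QAM SER approximation. Parameters assumed.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
M, snr0_db, sigma_x, trials = 16, 25, 0.1, 200_000
snr0 = 10 ** (snr0_db / 10)

# hop gains: lognormal with unit mean, log-amplitude std 2*sigma_x
h1 = np.exp(rng.normal(-2 * sigma_x**2, 2 * sigma_x, trials))
h2 = np.exp(rng.normal(-2 * sigma_x**2, 2 * sigma_x, trials))
# pointing loss: radial jitter r ~ Rayleigh(sigma_s), hp = A0*exp(-2r^2/w^2)
sigma_s, w, A0 = 0.3, 2.5, 0.8
r = sigma_s * np.sqrt(2 * rng.standard_exponential(trials))
hp = A0 * np.exp(-2 * r**2 / w**2)

snr = snr0 * (h1 * h2 * hp) ** 2
ser = 4 * (1 - 1 / np.sqrt(M)) * norm.sf(np.sqrt(3 * snr / (M - 1)))
print(f"ASER = {ser.mean():.2e}")
```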

  4. Near field communications technology and the potential to reduce medication errors through multidisciplinary application

    PubMed Central

    Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J.; Tabirca, Sabin; O’Driscoll, Aoife; Corrigan, Mark

    2016-01-01

    Background Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. Methods An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. Results A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with a paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users (rated on a seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). Conclusions An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment. PMID:28293602

  5. Near field communications technology and the potential to reduce medication errors through multidisciplinary application.

    PubMed

    O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark

    2016-01-01

    Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with a paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users (rated on a seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment.

  6. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
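    A minimal iterative-learning-control sketch in the spirit described above: after each cycle the stored command is corrected with both the tracking error and its rate, u_{k+1}(t) = u_k(t) + Kp*e_k(t) + Kd*de_k/dt. The second-order servo model and the gains are illustrative assumptions, not the paper's PUMA 560 model.

```python
# Off-line learning update u_{k+1} = u_k + Kp*e_k + Kd*de_k/dt applied to a
# toy second-order servo; model and gains are illustrative assumptions.
import numpy as np

dt = 0.01
t = np.arange(0.0, 2.0, dt)
ref = np.sin(np.pi * t)                      # desired trajectory

def track(u, wn=8.0, zeta=0.7):              # y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u
    y = v = 0.0
    out = np.empty_like(u)
    for i, ui in enumerate(u):
        a = wn**2 * (ui - y) - 2.0 * zeta * wn * v
        v += a * dt
        y += v * dt
        out[i] = y
    return out

u = ref.copy()                               # first-cycle command = reference
for cycle in range(8):
    e = ref - track(u)
    u = u + 0.5 * e + 0.05 * np.gradient(e, dt)   # include the error rate
    print(f"cycle {cycle}: max |error| = {np.abs(e).max():.4f}")
```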

  7. CO2 laser ranging systems study

    NASA Technical Reports Server (NTRS)

    Filippi, C. A.

    1975-01-01

    The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.

  8. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
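    The error-growth diagnostics described here can be illustrated on a smaller system; the sketch below estimates the largest Liapunov exponent of the 3-variable Lorenz-63 model (used as a stand-in for the 28-variable two-layer model) from the average exponential divergence of a repeatedly renormalized small perturbation.

```python
# Benettin-style estimate of the largest Liapunov exponent from error growth,
# demonstrated on Lorenz-63 as a stand-in for the 28-variable model.
import numpy as np

def lorenz(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([s * (x[1] - x[0]),
                     x[0] * (r - x[2]) - x[1],
                     x[0] * x[1] - b * x[2]])

def step(x, dt=0.01):                        # classical 4th-order Runge-Kutta
    k1 = lorenz(x)
    k2 = lorenz(x + 0.5 * dt * k1)
    k3 = lorenz(x + 0.5 * dt * k2)
    k4 = lorenz(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1.0, 1.0, 1.0])
for _ in range(1000):                        # settle onto the attractor
    x = step(x)
d0, total = 1e-8, 0.0
y = x + np.array([d0, 0.0, 0.0])             # small initial error
steps = 20_000
for _ in range(steps):
    x, y = step(x), step(y)
    d = np.linalg.norm(y - x)
    total += np.log(d / d0)
    y = x + (y - x) * (d0 / d)               # renormalize the error vector
print(f"largest Liapunov exponent = {total / (steps * 0.01):.2f}")  # ~0.9
```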

  9. Does the sensorimotor system minimize prediction error or select the most likely prediction during object lifting?

    PubMed Central

    McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.

    2016-01-01

    The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
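    The dissociation being tested fits in a few lines: for a skewed weight distribution, the minimal-squared-error prediction is the distribution's mean while the maximum-a-posteriori prediction is its mode, and the two differ. The gamma distribution below is an illustrative stand-in for the experiment's skewed weight schedule.

```python
# For a skewed weight distribution the MSE-optimal prediction (mean) and the
# MAP prediction (mode) disagree; the gamma density is an illustrative stand-in.
import numpy as np
from scipy.stats import gamma

dist = gamma(a=2.0, scale=100.0)          # skewed "object weight" in grams
w = np.linspace(1.0, 1500.0, 100_000)
mse_pred = dist.mean()                    # minimizes expected squared error
map_pred = w[np.argmax(dist.pdf(w))]      # single most likely weight
print(f"MSE-optimal: {mse_pred:.0f} g, MAP: {map_pred:.0f} g")
```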

  10. Cost comparison of unit dose and traditional drug distribution in a long-term-care facility.

    PubMed

    Lepinski, P W; Thielke, T S; Collins, D M; Hanson, A

    1986-11-01

    Unit dose and traditional drug distribution systems were compared in a 352-bed long-term-care facility by analyzing nursing time, medication-error rate, medication costs, and waste. Time spent by nurses in preparing, administering, charting, and other tasks associated with medications was measured with a stop-watch on four different nursing units during six-week periods before and after the nursing home began using unit dose drug distribution. Medication-error rate before and after implementation of the unit dose system was determined by patient profile audits and medication inventories. Medication costs consisted of patient billing costs (acquisition cost plus fee) and cost of medications destroyed. The unit dose system required a projected 1507.2 hours less nursing time per year. Mean medication-error rates were 8.53% and 0.97% for the traditional and unit dose systems, respectively. Potential annual savings because of decreased medication waste with the unit dose system were $2238.72. The net increase in cost for the unit dose system was estimated at $615.05 per year, or approximately $1.75 per patient. The unit dose system appears safer and more time-efficient than the traditional system, although its costs are higher.

  11. A Comprehensive Quality Assurance Program for Personnel and Procedures in Radiation Oncology: Value of Voluntary Error Reporting and Checklists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalapurakal, John A., E-mail: j-kalapurakal@northwestern.edu; Zafirovski, Aleksandar; Smith, Jeffery

    Purpose: This report describes the value of a voluntary error reporting system and the impact of a series of quality assurance (QA) measures including checklists and timeouts on reported error rates in patients receiving radiation therapy. Methods and Materials: A voluntary error reporting system was instituted with the goal of recording errors, analyzing their clinical impact, and guiding the implementation of targeted QA measures. In response to errors committed in relation to treatment of the wrong patient, wrong treatment site, and wrong dose, a novel initiative involving the use of checklists and timeouts for all staff was implemented. The impactmore » of these and other QA initiatives was analyzed. Results: From 2001 to 2011, a total of 256 errors in 139 patients after 284,810 external radiation treatments (0.09% per treatment) were recorded in our voluntary error database. The incidence of errors related to patient/tumor site, treatment planning/data transfer, and patient setup/treatment delivery was 9%, 40.2%, and 50.8%, respectively. The compliance rate for the checklists and timeouts initiative was 97% (P<.001). These and other QA measures resulted in a significant reduction in many categories of errors. The introduction of checklists and timeouts has been successful in eliminating errors related to wrong patient, wrong site, and wrong dose. Conclusions: A comprehensive QA program that regularly monitors staff compliance together with a robust voluntary error reporting system can reduce or eliminate errors that could result in serious patient injury. We recommend the adoption of these relatively simple QA initiatives including the use of checklists and timeouts for all staff to improve the safety of patients undergoing radiation therapy in the modern era.« less

  12. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  13. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
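
    The rewriting idea described in these two records can be sketched in a few lines. The snippet below is a hypothetical illustration of the principle, not the PRESAGE compiler pass itself: addresses are generated incrementally from the previous address, so a bit-flip anywhere in the chain propagates to the final address, where a single loop-exit detector can catch it.

    ```python
    # Hypothetical sketch of error-propagating ("relative") address generation.
    # A bit-flip in addr corrupts every later address and the final check,
    # instead of silently corrupting exactly one array access.
    def traverse_with_detector(base: int, stride: int, n: int) -> list:
        addr = base
        visited = []
        for _ in range(n):
            visited.append(addr)       # index the array with addr
            addr += stride             # incremental update carries errors forward
        if addr != base + n * stride:  # loop-exit detector
            raise RuntimeError("soft error detected in address generation")
        return visited
    ```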

  14. Adaptive data rate control TDMA systems as a rain attenuation compensation technique

    NASA Technical Reports Server (NTRS)

    Sato, Masaki; Wakana, Hiromitsu; Takahashi, Takashi; Takeuchi, Makoto; Yamamoto, Minoru

    1993-01-01

    Rainfall attenuation has a severe effect on signal strength and impairs communication links for future mobile and personal satellite communications using Ka-band and millimeter wave frequencies. As rain attenuation compensation techniques, several methods such as uplink power control, site diversity, and adaptive control of data rate or forward error correction have been proposed. In this paper, we propose a TDMA system that can compensate for rain attenuation by adaptive control of transmission rates. To evaluate the performance of this TDMA terminal, we carried out three types of experiments: experiments using the Japanese CS-3 satellite with Ka-band transponders, in-house IF loop-back experiments, and computer simulations. Experimental results show that this TDMA system has advantages over conventional constant-rate TDMA systems as a resource-sharing technique, in both bit error rate and the total TDMA burst length required for transmitting given information.
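
    The adaptive-rate idea can be illustrated with a small sketch: halving the transmission rate buys roughly 3 dB of link margin, so the terminal can pick the highest rate whose margin still survives the current fade. The rate ladder, Eb/N0 figures, and threshold below are invented placeholders, not the CS-3 experiment's parameters.

    ```python
    # Hypothetical adaptive rate selection under a measured rain fade.
    RATES_KBPS = [64, 128, 256, 512]                     # assumed rate ladder
    CLEAR_SKY_EBN0_DB = {64: 16.0, 128: 13.0, 256: 10.0, 512: 7.0}
    REQUIRED_EBN0_DB = 4.5                               # e.g., coded QPSK target

    def select_rate(rain_attenuation_db: float) -> int:
        # Try the fastest rate first; each halving of rate gains ~3 dB margin.
        for rate in sorted(RATES_KBPS, reverse=True):
            if CLEAR_SKY_EBN0_DB[rate] - rain_attenuation_db >= REQUIRED_EBN0_DB:
                return rate
        return min(RATES_KBPS)                           # most robust fallback

    print(select_rate(6.0))  # -> 128 kbps under a 6 dB fade
    ```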

  15. Evaluation of a Teleform-based data collection system: a multi-center obesity research case study.

    PubMed

    Jenkins, Todd M; Wilson Boyce, Tawny; Akers, Rachel; Andringa, Jennifer; Liu, Yanhong; Miller, Rosemary; Powers, Carolyn; Ralph Buncher, C

    2014-06-01

    Utilizing electronic data capture (EDC) systems in data collection and management allows automated validation programs to preemptively identify and correct data errors. For our multi-center, prospective study we chose to use TeleForm, a paper-based data capture software package that uses recognition technology to create case report forms (CRFs) with similar functionality to EDC, including custom scripts to identify entry errors. We quantified the accuracy of the optimized system through a data audit of CRFs and the study database, examining selected critical variables for all subjects in the study, as well as an audit of all variables for 25 randomly selected subjects. Overall we found 6.7 errors per 10,000 fields, with similar estimates for critical (6.9/10,000) and non-critical (6.5/10,000) variables; these values fall below the acceptable quality threshold of 50 errors per 10,000 established by the Society for Clinical Data Management. However, error rates were found to vary widely by type of data field, with the highest rate observed for open text fields.
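
    The audit arithmetic in this record is easy to reproduce. A minimal sketch follows, with an exact (Clopper-Pearson) binomial confidence interval attached; the error and field counts are assumed stand-ins, since the abstract reports rates rather than raw denominators.

    ```python
    # Errors per 10,000 fields with a Clopper-Pearson 95% interval.
    from scipy.stats import beta

    def rate_per_10k(errors: int, fields: int, alpha: float = 0.05):
        rate = 1e4 * errors / fields
        lo = 1e4 * beta.ppf(alpha / 2, errors, fields - errors + 1) if errors else 0.0
        hi = 1e4 * beta.ppf(1 - alpha / 2, errors + 1, fields - errors)
        return rate, lo, hi

    # Illustrative counts chosen to reproduce the reported overall rate.
    print(rate_per_10k(67, 100_000))  # ~6.7/10,000, far below the 50/10,000 threshold
    ```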

  16. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which, in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in ability from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors, and on average 8.1% of recorded detections in the experiment were false positives. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls, although the confidence interval broadly overlapped 0 (95% CI: -46% to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3% to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work demonstrating that false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to report only detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor of among-observer variation in observation error rates.

  17. Research on target information optics communications transmission characteristic and performance in multi-screens testing system

    NASA Astrophysics Data System (ADS)

    Li, Hanshan

    2016-04-01

    To enhance the stability and reliability of multi-screens testing systems, this paper studies the long-distance transmission-link properties and performance of multi-screens target optical information. It sets up a discrete multi-tone modulation transmission model based on the geometric model of a laser multi-screens testing system and the principles of visible-light communication; analyzes the electro-optic and photoelectric conversion functions of the sender and receiver in the target optical information communication system; investigates the transmission performance and transfer function of the generalized visible-light communication channel; establishes a spatial light-intensity distribution model and distribution function for the optical transmission link; and derives an SNR model for the communication system. Calculation and experimental analysis show that, for a given channel modulation depth, the transmission error rate increases with the transmission rate; when an appropriate transmission rate is selected, the bit error rate can reach 0.01.

  18. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
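
    As a concrete companion to the CRC recommendation above, here is a minimal sketch assuming the CRC-16-CCITT variant (polynomial x^16 + x^12 + x^5 + 1, all-ones preset) commonly associated with CCSDS-style 16-bit error detection; the frame contents are placeholders.

    ```python
    # Bitwise CRC-16-CCITT: the sender appends crc16_ccitt(frame) to the frame,
    # and the receiver flags an error if recomputation disagrees.
    def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                # Shift left; XOR in the polynomial when the top bit falls out.
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    frame = b"telemetry payload"  # hypothetical data unit
    print(hex(crc16_ccitt(frame)))
    ```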

  19. An investigation of errors and data processing techniques for an RF multilateration system. [position and velocity measurements of vertical takeoff aircraft during landing

    NASA Technical Reports Server (NTRS)

    Britt, C. L., Jr.

    1975-01-01

    The development of an RF Multilateration system to provide accurate position and velocity measurements during the approach and landing phase of Vertical Takeoff Aircraft operation is discussed. The system uses an angle-modulated ranging signal to provide both range and range rate measurements between an aircraft transponder and multiple ground stations. Range and range rate measurements are converted to coordinate measurements and the coordinate and coordinate rate information is transmitted by an integral data link to the aircraft. Data processing techniques are analyzed to show advantages and disadvantages. Error analyses are provided to permit a comparison of the various techniques.

  20. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
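
    The Fisher's exact comparison reported above can be sketched directly. The per-technique fraction counts below are illustrative stand-ins chosen only so the totals and rates roughly match the abstract (155 errors among 241,546 fractions; 0.03% vs. 0.07%); the paper does not publish this exact split.

    ```python
    # 2x2 table: (errors, error-free fractions) per technique.
    from scipy.stats import fisher_exact

    imrt = (19, 63_000)    # assumed split: 19/63,019 ~ 0.03%
    conv = (136, 178_391)  # assumed split: 136/178,527 ~ 0.07%

    odds_ratio, p_value = fisher_exact([list(imrt), list(conv)])
    print(f"OR = {odds_ratio:.2f}, p = {p_value:.3g}")  # OR < 1: fewer IMRT errors
    ```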

  1. Comparison of Agar Dilution, Disk Diffusion, MicroScan, and Vitek Antimicrobial Susceptibility Testing Methods to Broth Microdilution for Detection of Fluoroquinolone-Resistant Isolates of the Family Enterobacteriaceae

    PubMed Central

    Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.

    1999-01-01

    Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
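
    The very major/major/minor taxonomy used in this record follows the standard convention for susceptibility-test comparisons; the sketch below encodes those definitions (assuming "R", "I", "S" category codes) for clarity.

    ```python
    # Standard error taxonomy for susceptibility testing versus a reference:
    # very major = reference resistant, test susceptible (false susceptibility);
    # major = reference susceptible, test resistant (false resistance);
    # minor = any disagreement involving an intermediate ("I") result.
    def classify_error(reference: str, test: str) -> str:
        if reference == test:
            return "agreement"
        if reference == "R" and test == "S":
            return "very major"
        if reference == "S" and test == "R":
            return "major"
        return "minor"

    print(classify_error("R", "S"))  # -> "very major"
    ```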

  2. The assessment of cognitive errors using an observer-rated method.

    PubMed

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  3. Can an online clinical data management service help in improving data collection and data quality in a developing country setting?

    PubMed

    Wildeman, Maarten A; Zandbergen, Jeroen; Vincent, Andrew; Herdini, Camelia; Middeldorp, Jaap M; Fles, Renske; Dalesio, Otilia; van der Donk, Emile; Tan, I Bing

    2011-08-08

    Data collection by electronic medical record (EMR) systems has been proven to be helpful in data collection for scientific research and in improving healthcare. For a multi-centre trial in Indonesia and the Netherlands, a web-based system was selected to enable all participating centres to easily access data. This study assesses whether the introduction of a clinical trial data management service (CTDMS) composed of electronic case report forms (eCRF) can result in effective data collection and treatment monitoring. Data items entered were checked for inconsistencies automatically when submitted online. The data were divided into primary and secondary data items. We analysed both the total number of errors and the change in error rate, for both primary and secondary items, over the first five months of the trial. In the first five months, 51 patients were entered. The primary data error rate was 1.6%, whilst that for secondary data was 2.7%, against acceptable error rates for analysis of 1% and 2.5% respectively. The presented analysis shows that, five months after the introduction of the CTDMS, the primary and secondary data error rates reflect acceptable levels of data quality. Furthermore, these error rates were decreasing over time. The digital nature of the CTDMS, as well as the online availability of its data, gives fast and easy insight into adherence to treatment protocols. As such, the CTDMS can serve as a tool to train and educate medical doctors and can improve treatment protocols.

  4. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upset (MBU) are also discussed.
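
    For intuition, the textbook TMR reliability estimate (which the paper's model refines for FPGA-specific effects such as configuration upsets and MBUs) says a majority voter only outputs a wrong value when at least two of the three replicas are upset:

    ```python
    # Classic TMR model: system error requires >= 2 of 3 replica failures.
    def tmr_error_rate(p: float) -> float:
        """System error probability for per-replica upset probability p."""
        return 3 * p**2 * (1 - p) + p**3

    print(tmr_error_rate(1e-4))  # ~3e-8: TMR strongly suppresses uncorrelated upsets
    ```

    The quadratic dependence on p is why correlated failures (e.g., an MBU hitting two replicas at once) dominate the residual error rate in practice.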

  5. Evaluation of genomic high-throughput sequencing data generated on Illumina HiSeq and Genome Analyzer systems

    PubMed Central

    2011-01-01

    Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data, we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484

  6. Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.

    PubMed

    Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M

    2006-10-01

    Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.

  7. Servo control booster system for minimizing following error

    DOEpatents

    Wise, William L.

    1985-01-01

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.

  8. Reduction in specimen labeling errors after implementation of a positive patient identification system in phlebotomy.

    PubMed

    Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F

    2010-06-01

    Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.

  9. Computer calculated dose in paediatric prescribing.

    PubMed

    Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C

    2005-01-01

    Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. The error rate in the children's emergency department was 15.7%, for outpatients was 21.5% and for discharge medication was 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6% compared with the traditional prescription error rate of 28.2%. Logistic regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were seniority and paediatric training of the person prescribing and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.
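
    As a quick sanity check on the effect size above, the crude (unadjusted) relative risk follows directly from the two reported error rates; it lands close to the adjusted estimate of 0.436, with the small gap attributable to the adjustment covariates (prescriber seniority, training, and drug type).

    ```python
    # Crude relative risk from the reported error rates; group sizes are not
    # needed for a point estimate when the rates are given directly.
    computer_calculated = 0.126   # 12.6% error rate
    traditional = 0.282           # 28.2% error rate
    print(round(computer_calculated / traditional, 3))  # 0.447 vs. adjusted 0.436
    ```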

  10. Effect of Electronic Editing on Error Rate of Newspaper.

    ERIC Educational Resources Information Center

    Randall, Starr D.

    1979-01-01

    A study of a North Carolina newspaper indicates that newspapers using fully integrated electronic editing systems have fewer errors in spelling, punctuation, sentence construction, hyphenation, and typography than newspapers not using electronic editing. (GT)

  11. Making Residents Part of the Safety Culture: Improving Error Reporting and Reducing Harms.

    PubMed

    Fox, Michael D; Bump, Gregory M; Butler, Gabriella A; Chen, Ling-Wan; Buchert, Andrew R

    2017-01-30

    Reporting medical errors is a focus of the patient safety movement. As frontline physicians, residents are optimally positioned to recognize errors and flaws in systems of care. Previous work highlights the difficulty of engaging residents in identification and/or reduction of medical errors and in integrating these trainees into their institutions' cultures of safety. The authors describe the implementation of a longitudinal, discipline-based, multifaceted curriculum to enhance the reporting of errors by pediatric residents at Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center. The key elements of this curriculum included providing the necessary education to identify medical errors with an emphasis on systems-based causes, modeling of error reporting by faculty, and integrating error reporting and discussion into the residents' daily activities. The authors tracked monthly error reporting rates by residents and other health care professionals, in addition to serious harm event rates at the institution. The interventions resulted in significant increases in error reports filed by residents, from 3.6 to 37.8 per month over 4 years (P < 0.0001). This increase in resident error reporting correlated with a decline in serious harm events, from 15.0 to 8.1 per month over 4 years (P = 0.01). Integrating patient safety into the everyday resident responsibilities encourages frequent reporting and discussion of medical errors and leads to improvements in patient care. Multiple simultaneous interventions are essential to making residents part of the safety culture of their training hospitals.

  12. Significant and Sustained Reduction in Chemotherapy Errors Through Improvement Science.

    PubMed

    Weiss, Brian D; Scott, Melissa; Demmel, Kathleen; Kotagal, Uma R; Perentesis, John P; Walsh, Kathleen E

    2017-04-01

    A majority of children with cancer are now cured with highly complex chemotherapy regimens incorporating multiple drugs and demanding monitoring schedules. The risk for error is high, and errors can occur at any stage in the process, from order generation to pharmacy formulation to bedside drug administration. Our objective was to describe a program to eliminate errors in chemotherapy use among children. To increase reporting of chemotherapy errors, we supplemented the hospital reporting system with a new chemotherapy near-miss reporting system. Following the Model for Improvement, we then implemented several interventions, including a daily chemotherapy huddle, improvements to the preparation and delivery of intravenous therapy, headphones for clinicians ordering chemotherapy, and standards for chemotherapy administration throughout the hospital. Twenty-two months into the project, we saw a centerline shift in our U chart of chemotherapy errors that reached the patient, from a baseline rate of 3.8 to 1.9 per 1,000 doses. This shift has been sustained for more than 4 years. In Poisson regression analyses, we found an initial increase in error rates, followed by a significant decline in errors after 16 months of improvement work (P < .001). Our improvement efforts, guided by the Model for Improvement, were associated with significant reductions in chemotherapy errors that reached the patient. Key drivers for our success included error vigilance through a huddle, standardization, and minimization of interruptions during ordering.

  13. Common errors of drug administration in infants: causes and avoidance.

    PubMed

    Anderson, B J; Ellis, J F

    1999-01-01

    Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.

  14. Modeling validation and control analysis for controlled temperature and humidity of air conditioning system.

    PubMed

    Lee, Jing-Nang; Lin, Tsung-Min; Chen, Chien-Chih

    2014-01-01

    This study constructs an energy-based model of the thermal system of a controlled-temperature-and-humidity air conditioning system, and introduces the influence of the mass flow rate, heater and humidifier into the proposed control criteria to achieve controlled temperature and humidity. The reliability of the proposed thermal system model is then established by both MATLAB dynamic simulation and literature validation. Finally, a PID control strategy is applied to control the air mass flow rate, humidifying capacity, and heating capacity. The simulation results show that the temperature and humidity stabilize at 541 s, the temperature disturbance is only 0.14 °C, the steady-state error of the humidity ratio is only 0.006 kg(w)/kg(da), and the error rate is only 7.5%. The results show that the proposed system effectively controls the temperature and humidity of an air conditioning system.

  15. Modeling Validation and Control Analysis for Controlled Temperature and Humidity of Air Conditioning System

    PubMed Central

    Lee, Jing-Nang; Lin, Tsung-Min

    2014-01-01

    This study constructs an energy-based model of the thermal system of a controlled-temperature-and-humidity air conditioning system, and introduces the influence of the mass flow rate, heater and humidifier into the proposed control criteria to achieve controlled temperature and humidity. The reliability of the proposed thermal system model is then established by both MATLAB dynamic simulation and literature validation. Finally, a PID control strategy is applied to control the air mass flow rate, humidifying capacity, and heating capacity. The simulation results show that the temperature and humidity stabilize at 541 s, the temperature disturbance is only 0.14°C, the steady-state error of the humidity ratio is only 0.006 kgw/kgda, and the error rate is only 7.5%. The results show that the proposed system effectively controls the temperature and humidity of an air conditioning system. PMID:25250390
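
    Both records describe PID loops on the flow, humidification, and heating actuators. A generic discrete-time PID update of the kind implied is sketched below; the gains, timestep, and signal names are placeholders, not the paper's tuned values.

    ```python
    # Generic discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
    class PID:
        def __init__(self, kp: float, ki: float, kd: float, dt: float):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint: float, measurement: float) -> float:
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Hypothetical temperature loop: drive heater output toward a 24 C setpoint.
    heater_loop = PID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)
    print(heater_loop.update(setpoint=24.0, measurement=22.5))
    ```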

  16. A measurement-based performability model for a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
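
    The reward construction described above can be made concrete with a toy computation: weight each state's reward (its service rate discounted by its error behavior) by the state's steady-state occupancy. The states and numbers below are invented for illustration, not measurements from the paper.

    ```python
    # Expected steady-state reward rate = sum over states of occupancy * reward.
    states = {
        "normal":   {"occupancy": 0.95, "reward": 1.00},  # full service rate
        "degraded": {"occupancy": 0.04, "reward": 0.60},  # errors slow service
        "recovery": {"occupancy": 0.01, "reward": 0.10},  # little useful work
    }
    expected_reward_rate = sum(s["occupancy"] * s["reward"] for s in states.values())
    print(expected_reward_rate)  # 0.975 for these placeholder values
    ```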

  17. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  18. SU-G-BRB-11: On the Sensitivity of An EPID-Based 3D Dose Verification System to Detect Delivery Errors in VMAT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, P; Olaciregui-Ruiz, I; Mijnheer, B

    2016-06-15

    Purpose: To investigate the sensitivity of an EPID-based 3D dose verification system to detect delivery errors in VMAT treatments. Methods: For this study 41 EPID-reconstructed 3D in vivo dose distributions of 15 different VMAT plans (H&N, lung, prostate and rectum) were selected. To simulate the effect of delivery errors, their TPS plans were modified by: 1) scaling of the monitor units by ±3% and ±6% and 2) systematic shifting of leaf bank positions by ±1mm, ±2mm and ±5mm. The 3D in vivo dose distributions were then compared to the unmodified and modified treatment plans. To determine the detectability of the various delivery errors, we made use of a receiver operating characteristic (ROC) methodology. True positive and false positive rates were calculated as a function of the γ-parameters γmean, γ1% (near-maximum γ) and the PTV dose parameter ΔD50 (i.e. D50(EPID)-D50(TPS)). The ROC curve is constructed by plotting the true positive rate vs. the false positive rate. The area under the ROC curve (AUC) then serves as a measure of the performance of the EPID dosimetry system in detecting a particular error; an ideal system has AUC=1. Results: The AUC ranges for the machine output errors and systematic leaf position errors were [0.64 – 0.93] and [0.48 – 0.92] respectively using γmean, [0.57 – 0.79] and [0.46 – 0.85] using γ1%, and [0.61 – 0.77] and [0.48 – 0.62] using ΔD50. Conclusion: For the verification of VMAT deliveries, the parameter γmean is the best discriminator for the detection of systematic leaf position errors and monitor unit scaling errors. Compared to γmean and γ1%, the parameter ΔD50 performs worse as a discriminator in all cases.
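
    The AUC figure of merit used here has a convenient closed form: it equals the probability that a randomly chosen error case scores higher than a randomly chosen error-free case (the Mann-Whitney identity). A sketch with synthetic γmean scores, purely for illustration:

    ```python
    # AUC via the Mann-Whitney identity, with ties counted as 1/2.
    import numpy as np

    def roc_auc(pos: np.ndarray, neg: np.ndarray) -> float:
        diff = pos[:, None] - neg[None, :]
        return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

    rng = np.random.default_rng(1)
    gamma_mean_error = rng.normal(0.55, 0.12, 41)  # synthetic modified-plan scores
    gamma_mean_ok = rng.normal(0.40, 0.12, 41)     # synthetic unmodified-plan scores
    print(f"AUC = {roc_auc(gamma_mean_error, gamma_mean_ok):.2f}")  # roughly 0.8 here
    ```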

  19. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough: to the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events, or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  20. Augmented burst-error correction for UNICON laser memory. [digital memory

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1974-01-01

    A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.

  1. Simulation study of communication link for Pioneer Saturn/Uranus atmospheric entry probe. [signal acquisition by candidate modem for radio link

    NASA Technical Reports Server (NTRS)

    Hinrichs, C. A.

    1974-01-01

    A digital simulation is presented for a candidate modem in a modeled atmospheric scintillation environment with Doppler, Doppler rate, and signal attenuation typical of the radio link conditions for an outer planets atmospheric entry probe. The results indicate that the signal acquisition characteristics and the channel error rate are acceptable for the system requirements of the radio link. The simulation also outputs data for calculating other error statistics and a quantized symbol stream from which error correction decoding can be analyzed.

  2. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    PubMed

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  3. High speed and adaptable error correction for megabit/s rate quantum key distribution

    PubMed Central

    Dixon, A. R.; Sato, H.

    2014-01-01

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90–94% of the ideal secure key rate over all fibre distances from 0–80 km. PMID:25450416
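
    The "90-94% of the ideal secure key rate" figure in these two records can be read through the usual BB84-style key-fraction estimate, in which error-correction leakage is f·h(Q) per sifted bit and f = 1 is the Shannon-ideal reconciliation the authors benchmark against. The QBER and efficiency values below are assumed for illustration, not taken from the paper.

    ```python
    # Secure key fraction r = 1 - f*h(Q) - h(Q) under the standard asymptotic
    # BB84 estimate, with reconciliation efficiency f >= 1.
    import math

    def h(q: float) -> float:
        """Binary entropy in bits."""
        return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

    def key_fraction(qber: float, f: float) -> float:
        return 1 - f * h(qber) - h(qber)

    q = 0.03  # assumed quantum bit error rate
    print(key_fraction(q, 1.15) / key_fraction(q, 1.00))  # ~0.95 of the ideal rate
    ```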

  4. Errors in laboratory medicine: practical lessons to improve patient safety.

    PubMed

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.

  5. Design of an all-attitude flight control system to execute commanded bank angles and angles of attack

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Eggleston, D. M.

    1976-01-01

    A flight control system for use in air-to-air combat simulation was designed. The inputs to the flight control system are commanded bank angle and angle of attack; the outputs are commands to the control surface actuators such that the commanded values are achieved in near-minimum time while sideslip is controlled to remain small. For the longitudinal direction, a conventional linear control system with gains scheduled as a function of dynamic pressure is employed. For the lateral direction, a novel control system is employed, consisting of a linear portion for small bank angle errors and a bang-bang control system for large errors and error rates.
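
    The hybrid lateral law can be sketched as a switching controller; the thresholds and gains below are invented for illustration and are not the paper's design values.

    ```python
    # Hypothetical lateral control law: linear feedback near the setpoint,
    # bang-bang (full deflection) for large errors and error rates.
    MAX_DEFLECTION = 1.0   # normalized aileron command
    SWITCH_ERR = 0.35      # rad, assumed switching boundary

    def lateral_command(bank_err: float, bank_err_rate: float) -> float:
        if abs(bank_err) < SWITCH_ERR:
            u = -2.0 * bank_err - 0.8 * bank_err_rate   # linear region (PD-like)
            return max(-MAX_DEFLECTION, min(MAX_DEFLECTION, u))
        # Bang-bang region: command full deflection opposing the error.
        return -MAX_DEFLECTION if bank_err > 0 else MAX_DEFLECTION

    print(lateral_command(0.1, 0.05), lateral_command(0.6, 0.0))
    ```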

  6. The calculation of average error probability in a digital fibre optical communication system

    NASA Astrophysics Data System (ADS)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system in the presence of message-dependent, inhomogeneous, non-stationary shot noise, additive Gaussian noise, and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared: the Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
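
    For readers comparing the techniques named above, the two key relations have standard forms, written here in generic notation for a decision statistic Y and threshold a (the paper's own definitions may differ):

    ```latex
    % Chernoff bound on the tail probability of the decision statistic:
    \[
      P(Y \ge a) \;\le\; \min_{s > 0} \, e^{-sa}\, \mathbb{E}\!\left[e^{sY}\right]
      \qquad \text{(Chernoff bound)}
    \]
    % Exact tail probability from the characteristic function (Gil-Pelaez):
    \[
      P(Y \ge a) \;=\; \frac{1}{2} + \frac{1}{\pi} \int_0^{\infty}
      \frac{\operatorname{Im}\!\left[e^{-j\omega a}\,\Phi_Y(\omega)\right]}{\omega}\,
      \mathrm{d}\omega
      \qquad \text{(characteristic-function inversion)}
    \]
    ```

    Because the Chernoff bound is an upper bound on the error probability, it is pessimistic, which is consistent with the abstract's observation that the exact characteristic-function technique predicts a higher receiver sensitivity.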

  7. Dual processing and diagnostic errors.

    PubMed

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance in idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  8. Use of Positive Blood Cultures for Direct Identification and Susceptibility Testing with the Vitek 2 System

    PubMed Central

    de Cueto, Marina; Ceballos, Esther; Martinez-Martinez, Luis; Perea, Evelio J.; Pascual, Alvaro

    2004-01-01

    In order to further decrease the time lapse between initial inoculation of blood culture media and the reporting of results of identification and antimicrobial susceptibility tests for microorganisms causing bacteremia, we performed a prospective study in which specially processed fluid from positive Bactec 9240 (Becton Dickinson, Cockeysville, Md.) blood culture bottles containing aerobic media was directly inoculated into Vitek 2 system cards (bioMérieux, France). Organism identification and susceptibility results were compared with those obtained from cards inoculated with a standardized bacterial suspension obtained following subculture to agar; 100 consecutive positive monomicrobic blood cultures, consisting of 50 gram-negative rods and 50 gram-positive cocci, were included in the study. For gram-negative organisms, 31 of the 50 (62%) showed complete agreement with the standard method for species identification, while none of the 50 gram-positive cocci were correctly identified by the direct method. For gram-negative rods, there was 50% categorical agreement between the direct and standard methods for all drugs tested. The very major error rate was 2.4%, and the major error rate was 0.6%. The overall error rate for gram-negatives was 6.6%. Complete agreement in clinical categories of all antimicrobial agents evaluated was obtained for 19 of 50 (38%) gram-positive cocci evaluated; the overall error rate was 8.4%, with 2.8% minor errors, 2.4% major errors, and 3.2% very major errors. These findings suggest that Vitek 2 cards inoculated directly from positive Bactec 9240 bottles do not provide acceptable bacterial identification or susceptibility testing results in comparison with corresponding cards tested by a standard method. PMID:15297523

  9. Effects of fog on the bit-error rate of a free-space laser communication system.

    PubMed

    Strickland, B R; Lavan, M J; Woodbridge, E; Chan, V

    1999-01-20

    Free-space laser communication (lasercom) systems are subject to performance degradation when heavy fog or smoke obscures the line of sight. The bit-error rate (BER) of a high-bandwidth (570 Mbits/s) lasercom system was correlated with the atmospheric transmission over a folded path of 2.4 km. BERs of 10^-7 were observed when the atmospheric transmission was as low as 0.25%, whereas BERs of less than 10^-10 were observed when the transmission was above 2.5%. System performance was approximately 10 dB less than calculated, with the discrepancy attributed to scintillation, multiple scattering, and absorption. Peak power of the 810-nm communications laser was 186 mW, and the beam divergence was purposely degraded to 830 μrad. These results were achieved without the use of error correction schemes or active tracking. An optimized system with narrower beam divergence and active tracking could be expected to yield significantly better performance.

  10. Commissioning and quality assurance for VMAT delivery systems: An efficient time-resolved system using real-time EPID imaging.

    PubMed

    Zwan, Benjamin J; Barnes, Michael P; Hindmarsh, Jonathan; Lim, Seng B; Lovelock, Dale M; Fuangrod, Todsaporn; O'Connor, Daryl J; Keall, Paul J; Greer, Peter B

    2017-08-01

    An ideal commissioning and quality assurance (QA) program for Volumetric Modulated Arc Therapy (VMAT) delivery systems should assess the performance of each individual dynamic component as a function of gantry angle. Procedures within such a program should also be time-efficient, independent of the delivery system and be sensitive to all types of errors. The purpose of this work is to develop a system for automated time-resolved commissioning and QA of VMAT control systems which meets these criteria. The procedures developed within this work rely solely on images obtained using an electronic portal imaging device (EPID), without the presence of a phantom. During the delivery of specially designed VMAT test plans, EPID frames were acquired at 9.5 Hz, using a frame grabber. The set of test plans was developed to individually assess the performance of the dose delivery and multileaf collimator (MLC) control systems under varying levels of delivery complexity. An in-house software tool was developed to automatically extract features from the EPID images and evaluate the following characteristics as a function of gantry angle: dose delivery accuracy, dose rate constancy, beam profile constancy, gantry speed constancy, dynamic MLC positioning accuracy, MLC speed and acceleration constancy, and synchronization between gantry angle, MLC positioning and dose rate. Machine log files were also acquired during each delivery and subsequently compared to information extracted from EPID image frames. The largest difference between measured and planned dose at any gantry angle was 0.8%, which correlated with rapid changes in dose rate and gantry speed. For all other test plans, the dose delivered was within 0.25% of the planned dose for all gantry angles. Profile constancy was not found to vary with gantry angle for tests where gantry speed and dose rate were constant; however, for tests with varying dose rate and gantry speed, segments with lower dose rate and higher gantry speed exhibited less profile stability. MLC positional accuracy was not observed to be dependent on the degree of interdigitation. MLC speed was measured for each individual leaf, and slower leaf speeds were shown to be compensated for by lower dose rates. The test procedures were found to be sensitive to 1 mm systematic MLC errors, 1 mm random MLC errors, 0.4 mm MLC gap errors, and synchronization errors of 1° between the MLC, dose rate and gantry angle control systems. In general, parameters measured by both EPID and log files agreed with the plan; however, a greater average departure from the plan was evidenced by the EPID measurements. QA test plans and analysis methods have been developed to assess the performance of each dynamic component of VMAT deliveries individually and as a function of gantry angle. This methodology relies solely on time-resolved EPID imaging without the presence of a phantom and has been shown to be sensitive to a range of delivery errors. The procedures developed in this work are both comprehensive and time-efficient and can be used for streamlined commissioning and QA of VMAT delivery systems.

  11. Effect of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths.

    PubMed

    Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S

    2009-11-01

    Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the type of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. We analyzed data mined from 44 pathology residents using SlideTutor, a Medical Intelligent Tutoring System in Dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study, representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. The frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.

  12. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. To evaluate the frequency and nature of non-clinical transcription errors using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72 %) reports contained ≥1 error, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant' and 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' was the most common error sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence, compared with 0.030 for plain film. Longer reports had a higher error rate, with reports of >25 sentences containing an average of 1.23 errors per report, compared with 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences of errors with the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.

  13. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    ERIC Educational Resources Information Center

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided by the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206, 96 and 35 errors per 1000 words, respectively. The following section…

  14. Error analysis for relay type satellite-aided search and rescue systems

    NASA Technical Reports Server (NTRS)

    Marini, J. W.

    1977-01-01

    An analysis was made of the errors in the determination of the position of an emergency transmitter in a satellite-aided search and rescue system. The satellite was assumed to be at a height of 820 km in a near-circular, near-polar orbit. Short data spans of four minutes or less were used. The error sources considered were measurement noise, transmitter frequency drift, ionospheric effects, and error in the assumed height of the transmitter. The errors were calculated for several different transmitter positions, data rates and data spans. The only transmitter frequency used was 406 MHz, but the results can be scaled to different frequencies. In a typical case, in which four Doppler measurements were taken over a span of two minutes, the position error was about 1.2 km.

  15. Errors in radiation oncology: A study in pathways and dosimetric impact

    PubMed Central

    Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff

    2005-01-01

    As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute to propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. Although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source-to-surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy and incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI: incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and the volume mistreated. These errors were short-lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to incorrect collimator angle or wedge orientation. For parallel-opposed 60° wedge fields, this error could be as high as 80% to a point off-axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16%, depending on depth; electron energy, ~0.5 cm of depth coverage per MeV (mega-electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off-axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity-modulated radiation therapy (IMRT) errors according to the same criteria. PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
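
    The ~2%/cm figure quoted above for SSD errors is what the inverse-square law predicts; as a rough check, assuming a 100 cm reference SSD and ignoring changes in depth dose:

      \[
        \frac{D(\mathrm{SSD}+\Delta)}{D(\mathrm{SSD})}
          \approx \left(\frac{\mathrm{SSD}}{\mathrm{SSD}+\Delta}\right)^{2}
          = \left(\frac{100}{101}\right)^{2} \approx 0.980
          \qquad (\Delta = 1\ \mathrm{cm}),
      \]

    i.e. roughly a 2% dose change per centimetre of distance error, consistent with the value reported above.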

  16. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-11-15

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is used in the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.
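
    For context (these expressions are standard results, not stated in the abstract): for binary phase-modulated coherent states |±α⟩, ideal homodyne detection of the signal quadrature gives a Gaussian error rate, while the quantum-optimal (Helstrom) bound lies below it:

      \[
        P_e^{\mathrm{hom}} = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{2}\,\alpha\right),
        \qquad
        P_e^{\mathrm{Helstrom}} = \tfrac{1}{2}\left(1 - \sqrt{1 - e^{-4\alpha^{2}}}\right).
      \]

    The gap between these two rates is what ancilla-assisted, feedforward-controlled schemes of the kind studied here aim to close.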

  17. Development and Assessment of a Medication Safety Measurement Program in a Long-Term Care Pharmacy.

    PubMed

    Hertig, John B; Hultgren, Kyle E; Parks, Scott; Rondinelli, Rick

    2016-02-01

    Medication errors continue to be a major issue in the health care system, including in long-term care facilities. While many hospitals and health systems have developed methods to identify, track, and prevent these errors, long-term care facilities historically have not invested in these error-prevention strategies. The objective of this study was two-fold: 1) to develop a set of medication-safety process measures for dispensing in a long-term care pharmacy, and 2) to analyze the data from those measures to determine the relative safety of the process. The study was conducted at In Touch Pharmaceuticals in Valparaiso, Indiana. To assess the safety of the medication-use system, each step was documented using a comprehensive flowchart (process flow map) tool. Once completed and validated, the flowchart was used to complete a "failure modes and effects analysis" (FMEA) identifying ways a process may fail. Operational gaps found during FMEA were used to identify points of measurement. The research identified a set of eight measures as potential areas of failure; data were then collected on each one of these. More than 133,000 medication doses (opportunities for errors) were included in the study during the research time frame (April 1, 2014, through June 4, 2014). Overall, there was an approximate order-entry error rate of 15.26%, with intravenous errors at 0.37%. A total of 21 errors migrated through the entire medication-use system. These 21 errors in 133,000 opportunities resulted in a final check error rate of 0.015%. A comprehensive medication-safety measurement program was designed and assessed. This study demonstrated the ability to detect medication errors in a long-term care pharmacy setting, thereby making process improvements measurable. Future, larger, multi-site studies should be completed to test this measurement program.

  18. 4.5-Gb/s RGB-LED based WDM visible light communication system employing CAP modulation and RLS based adaptive equalization.

    PubMed

    Wang, Yiguang; Huang, Xingxing; Tao, Li; Shi, Jianyang; Chi, Nan

    2015-05-18

    Inter-symbol interference (ISI) is one of the key problems that seriously limit the transmission data rate in high-speed visible light communication (VLC) systems. To eliminate ISI and further improve system performance, a series of equalization schemes has been widely investigated. As an adaptive algorithm commonly used in wireless communication, recursive least squares (RLS) is also suitable for visible light communication due to its quick convergence and good performance. In this paper, for the first time we experimentally demonstrate a high-speed RGB-LED based WDM VLC system employing carrier-less amplitude and phase (CAP) modulation and RLS-based adaptive equalization. An aggregate data rate of 4.5 Gb/s is successfully achieved over 1.5 m of indoor free-space transmission with the bit error rate (BER) below the 7% forward error correction (FEC) limit of 3.8×10⁻³. To the best of our knowledge, this is the highest data rate ever achieved in RGB-LED based VLC systems.
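
    A minimal sketch of an RLS adaptive equalizer of the kind applied here to CAP-modulated VLC signals; it is real-valued for brevity (CAP is two-dimensional, so in practice the taps are applied per I/Q dimension), and the tap count, forgetting factor and training symbols are illustrative assumptions, not the authors' settings.

      import numpy as np

      def rls_equalize(x, d, n_taps=15, lam=0.999, delta=0.01):
          """x: received samples; d: known training symbols (same length as x)."""
          w = np.zeros(n_taps)               # equalizer taps
          P = np.eye(n_taps) / delta         # inverse-correlation matrix estimate
          y = np.zeros(len(d))
          for n in range(n_taps, len(d)):
              u = x[n - n_taps:n][::-1]      # regressor, most recent sample first
              k = P @ u / (lam + u @ P @ u)  # RLS gain vector
              y[n] = w @ u                   # a priori equalizer output
              e = d[n] - y[n]                # a priori error
              w = w + k * e                  # tap update
              P = (P - np.outer(k, u @ P)) / lam
          return w, y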

  19. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.

    1986-01-01

    High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
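
    A sketch of the erasure-insertion idea in Scheme 2, under stated assumptions: the threshold, the erasure marker value, and the per-symbol reliability array are hypothetical, and the second function only expresses the standard decodability condition for an (n, k) RS code.

      import numpy as np

      THRESH = 0.2  # hypothetical reliability threshold

      def insert_erasures(symbols, reliability):
          """Erase RS symbols whose inner-decoder reliability falls below THRESH."""
          mask = reliability < THRESH
          return np.where(mask, -1, symbols), mask  # -1 marks an erased symbol

      def rs_decodable(n, k, n_errors, n_erasures):
          # An (n, k) RS code (d_min = n - k + 1) decodes successfully whenever
          # 2 * errors + erasures <= n - k.
          return 2 * n_errors + n_erasures <= n - k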

  20. A biometric identification system based on eigenpalm and eigenfinger features.

    PubMed

    Ribaric, Slobodan; Fratric, Ivan

    2005-11-01

    This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
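
    A minimal sketch of the eigenfeature idea described above: project normalized palm (or finger) subimages onto the top principal components (the K-L transform) and fuse matching scores at the score level. The component count, fusion weights and function names are illustrative assumptions, not the authors' exact pipeline.

      import numpy as np

      def fit_eigenspace(train_imgs, n_components=20):
          """train_imgs: (M, H, W) array of normalized palm or finger subimages."""
          X = train_imgs.reshape(len(train_imgs), -1).astype(float)
          mean = X.mean(axis=0)
          # Principal axes of the training set via SVD of the centred data
          # (equivalent to the K-L transform).
          _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
          return mean, Vt[:n_components]

      def project(img, mean, basis):
          return basis @ (img.ravel().astype(float) - mean)

      def fused_score(palm_dist, finger_dists, w_palm=0.5):
          # Matching-score-level fusion: weighted sum of per-modality distances.
          return w_palm * palm_dist + (1 - w_palm) * np.mean(finger_dists)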

  1. Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas

    NASA Astrophysics Data System (ADS)

    Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.

    2017-12-01

    Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS can be built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and best learning rate obtained. The accuracy of the measurements resulting from the three distance formulas is calculated using the mean absolute percentage error. In the training phase, with several parameters such as sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with the normalized Manhattan distance, compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
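
    For illustration, the three distances in a minimal form. Normalization conventions vary across ECoS implementations, so the exact formulas below are assumptions (inputs taken to lie in [0, 1]); in SECoS the node activation is then typically 1 - d(w, x).

      import numpy as np

      def norm_hamming(w, x):
          # One common convention: mean absolute difference of [0, 1] inputs.
          return np.sum(np.abs(w - x)) / len(w)

      def norm_manhattan(w, x):
          return np.sum(np.abs(w - x)) / np.sum(np.abs(w) + np.abs(x))

      def norm_euclidean(w, x):
          return np.sqrt(np.sum((w - x) ** 2) / len(w))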

  2. Evaluation of Application Accuracy and Performance of a Hydraulically Operated Variable-Rate Aerial Application System

    USDA-ARS?s Scientific Manuscript database

    An aerial variable-rate application system consisting of a DGPS-based guidance system, automatic flow controller, and hydraulically controlled pump/valve was evaluated for response time to rapidly changing flow requirements and accuracy of application. Spray deposition position error was evaluated ...

  3. Measurement-based analysis of error latency. [in computer operating system

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.

  4. Multimodal system designed to reduce errors in recording and administration of drugs in anaesthesia: prospective randomised clinical evaluation.

    PubMed

    Merry, Alan F; Webster, Craig S; Hannam, Jacqueline; Mitchell, Simon J; Henderson, Robert; Reid, Papaarangi; Edwards, Kylie-Ellen; Jardim, Anisoara; Pak, Nick; Cooper, Jeremy; Hopley, Lara; Frampton, Chris; Short, Timothy G

    2011-09-22

    To clinically evaluate a new patented multimodal system (SAFERSleep) designed to reduce errors in the recording and administration of drugs in anaesthesia. Prospective randomised open label clinical trial. Five designated operating theatres in a major tertiary referral hospital. Eighty-nine consenting anaesthetists managing 1075 cases in which there were 10,764 drug administrations. Use of the new system (which includes customised drug trays and purpose designed drug trolley drawers to promote a well organised anaesthetic workspace and aseptic technique; pre-filled syringes for commonly used anaesthetic drugs; large legible colour coded drug labels; a barcode reader linked to a computer, speakers, and touch screen to provide automatic auditory and visual verification of selected drugs immediately before each administration; automatic compilation of an anaesthetic record; an on-screen and audible warning if an antibiotic has not been administered within 15 minutes of the start of anaesthesia; and certain procedural rules, notably scanning the label before each drug administration) versus conventional practice in drug administration with a manually compiled anaesthetic record. Primary: composite of errors in the recording and administration of intravenous drugs detected by direct observation and by detailed reconciliation of the contents of used drug vials against recorded administrations; and lapses in responding to an intermittent visual stimulus (vigilance latency task). Secondary: outcomes in patients; analyses of anaesthetists' tasks and assessments of workload; evaluation of the legibility of anaesthetic records; evaluation of compliance with the procedural rules of the new system; and questionnaire based ratings of the respective systems by participants. The overall mean rate of drug errors per 100 administrations was 9.1 (95% confidence interval 6.9 to 11.4) with the new system (one in 11 administrations) and 11.6 (9.3 to 13.9) with conventional methods (one in nine administrations) (P = 0.045 for difference). Most were recording errors, and, though fewer drug administration errors occurred with the new system, the comparison with conventional methods did not reach significance. Rates of errors in drug administration were lower when anaesthetists consistently applied two key principles of the new system (scanning the drug barcode before administering each drug and keeping the voice prompt active) than when they did not: mean 6.0 (3.1 to 8.8) errors per 100 administrations v 9.7 (8.4 to 11.1), respectively (P = 0.004). Lapses in the vigilance latency task occurred in 12% (58/471) of cases with the new system and 9% (40/473) with conventional methods (P = 0.052). The records generated by the new system were more legible, and anaesthetists preferred the new system, particularly in relation to long, complex, and emergency cases. There were no differences between new and conventional systems in respect of outcomes in patients or anaesthetists' workload. The new system was associated with a reduction in errors in the recording and administration of drugs in anaesthesia, attributable mainly to a reduction in recording errors. Automatic compilation of the anaesthetic record increased legibility but also increased lapses in a vigilance latency task and decreased time spent watching monitors. Trial registration: Australian New Zealand Clinical Trials Registry No. 12608000068369.

  5. Servo control booster system for minimizing following error

    DOEpatents

    Wise, W.L.

    1979-07-26

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis: when the command-to-response error is greater than or equal to ΔS_R, this loop produces precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second position feedback control loop at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.

  6. Securing quantum key distribution systems using fewer states

    NASA Astrophysics Data System (ADS)

    Islam, Nurul T.; Lim, Charles Ci Wen; Cahall, Clinton; Kim, Jungsang; Gauthier, Daniel J.

    2018-04-01

    Quantum key distribution (QKD) allows two remote users to establish a secret key in the presence of an eavesdropper. The users share quantum states prepared in two mutually unbiased bases: one to generate the key while the other monitors the presence of the eavesdropper. Here, we show that a general d-dimensional QKD system can be secured by transmitting only a subset of the monitoring states. In particular, we find that there is no loss in the secure key rate when dropping one of the monitoring states. Furthermore, it is possible to use only a single monitoring state if the quantum bit error rates are low enough. We apply our formalism to an experimental d = 4 time-phase QKD system, where only one monitoring state is transmitted, and obtain a secret key rate of 17.4 ± 2.8 Mbits/s at a 4 dB channel loss and with a quantum bit error rate of 0.045 ± 0.001 and 0.037 ± 0.001 in the time and phase bases, respectively, which is 58.4% of the secret key rate that can be achieved with the full setup. This ratio can be increased, potentially up to 100%, if the error rates in the time and phase bases are reduced. Our results demonstrate that it is possible to substantially simplify the design of high-dimensional QKD systems, including those that use the spatial or temporal degrees of freedom of the photon, and still outperform qubit-based (d = 2) protocols.
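
    For orientation (this is the standard asymptotic qubit result, not the paper's d-dimensional formula): in the familiar d = 2 case the secret key fraction takes the form

      \[
        r \;\geq\; 1 - h(e_b) - h(e_p),
        \qquad h(x) = -x\log_2 x - (1-x)\log_2(1-x),
      \]

    with e_b and e_p the bit and phase error rates; the analysis above generalizes this tradeoff between key generation and eavesdropper monitoring to higher dimensions and fewer monitoring states.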

  7. Evaluation of causes and frequency of medication errors during information technology downtime.

    PubMed

    Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F

    2009-06-15

    The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey, for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime of 57% over a 12-month period. Lost interfaces and interface malfunctions were reported for centralized and decentralized ADSs, with an average downtime response of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed to reduce the frequency and length of downtime in order to minimize medication errors during such downtime.

  8. Application of human reliability analysis to nursing errors in hospitals.

    PubMed

    Inoue, Kayoko; Koizumi, Akio

    2004-12-01

    Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the-art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it determines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. Such mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports of each module practice by the annual number of the corresponding module practice. The weight of a given element was calculated by the summation of incident report error rates for an element of interest. This model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports with a total of 63,294,144 module practices conducted were analyzed. Quality assurance (QA) of our model was introduced by checking the records of quantities of practices and reproducibility of analysis of medical incident reports. For both items, QA guaranteed legitimacy of our model. Error rates for all module practices were approximately of the order of 10⁻⁴ in all hospitals. Three major organizational factors were found to underlie medical errors: "violation of rules" with a weight of 826 × 10⁻⁴, "failure of labor management" with a weight of 661 × 10⁻⁴, and "defects in the standardization of nursing practices" with a weight of 495 × 10⁻⁴.
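
    The two derived parameters are simple to compute exactly as described above; a minimal sketch (variable names are illustrative):

      def module_error_rate(annual_reports, annual_practices):
          # Error rate of a module practice: annual incident reports for that
          # practice divided by the annual count of the practice itself.
          return annual_reports / annual_practices

      def element_weight(rates_of_reports_involving_element):
          # Weight of an element: sum of the error rates over all incident
          # reports that involve that element.
          return sum(rates_of_reports_involving_element)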

  9. ASCERTAINMENT OF ON-ROAD SAFETY ERRORS BASED ON VIDEO REVIEW

    PubMed Central

    Dawson, Jeffrey D.; Uc, Ergun Y.; Anderson, Steven W.; Dastrup, Elizabeth; Johnson, Amy M.; Rizzo, Matthew

    2011-01-01

    Summary: Using an instrumented vehicle, we have studied several aspects of the on-road performance of healthy and diseased elderly drivers. One goal of such studies is to ascertain the type and frequency of driving safety errors. Because the judgment of such errors is somewhat subjective, we applied a taxonomy system of 15 general safety error categories and 76 specific safety error types. We also employed and trained professional driving instructors to review the video data of the on-road drives. In this report, we illustrate our rating system on a group of 111 drivers, ages 65 to 89. These drivers made errors in 13 of the 15 error categories, comprising 42 of the 76 error types. A mean (SD) of 35.8 (12.8) safety errors per drive was noted, with 2.1 (1.7) of them judged as serious. Our methodology may be useful in applications such as intervention studies and in longitudinal studies of changes in driving abilities in patients with declining cognitive ability. PMID:24273753

  10. Assessment of Automatically Exported Clinical Data from a Hospital Information System for Clinical Research in Multiple Myeloma.

    PubMed

    Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin

    2016-01-01

    An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms for improving clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system have not been reported extensively, in particular in comparison with manual transcription. In this work, an assessment of the quality of an automatic export process, focused on laboratory data from a HIS, is presented. Quality of the laboratory data was assessed in two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transference. The automatic transference was implemented as an Extract, Transform and Load (ETL) process. Then, a comparison was carried out between the manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields with an unclear definition were removed from the analysis (p < 10⁻³). In the case of the automatic process, the general error rate was 1.9% to 12.1%, where the lowest error rate is obtained when excluding information that was missing in the HIS but transcribed to the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research if data in the HIS, as well as physical documentation not included in the HIS, are identified beforehand and follow a standardized data collection protocol.
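
    A minimal sketch of the Extract, Transform and Load path for laboratory data, HIS rows to EDC records. The field names, the unit-conversion table and the record layout are hypothetical; a real interface would follow the site's HIS schema and collection protocol.

      def extract(his_rows):
          # Pull only laboratory rows from the HIS export.
          return [r for r in his_rows if r.get("category") == "laboratory"]

      UNIT_MAP = {"g/dl": ("g/L", 10.0)}  # unit -> (target unit, scale factor)

      def transform(row):
          value, unit = row["value"], row["unit"].lower()
          if unit in UNIT_MAP:  # normalize units before loading
              target, scale = UNIT_MAP[unit]
              value, unit = value * scale, target
          return {"patient_id": row["pid"], "test": row["test"],
                  "value": value, "unit": unit, "taken_at": row["timestamp"]}

      def load(edc, records):
          # edc: dict mapping patient_id -> list of loaded records.
          for rec in records:
              edc.setdefault(rec["patient_id"], []).append(rec)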

  11. Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory

    NASA Astrophysics Data System (ADS)

    Yan, Daqin; Wang, Fuzhong; Wang, Shuo

    2017-12-01

    Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in harsh channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, aimed at reducing the bit error rate of 2DPSK signals received by coherent demodulation. According to the theory of SR, a nonlinear receiver model is established, which is used to receive 2DPSK signals at low signal-to-noise ratios (SNRs) between -15 dB and 5 dB, and is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared to the conventional model; the reduction reaches 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
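
    A sketch of the classic bistable stochastic-resonance front end used in this line of work, dx/dt = a·x - b·x³ + r(t), integrated with Euler steps; the parameters a, b and the step size are illustrative assumptions, not the paper's values.

      import numpy as np

      def sr_filter(r, a=1.0, b=1.0, dt=1e-3):
          """r: received signal-plus-noise samples, one value per time step."""
          x = np.zeros(len(r))
          for i in range(1, len(r)):
              drift = a * x[i - 1] - b * x[i - 1] ** 3   # bistable potential force
              x[i] = x[i - 1] + dt * (drift + r[i - 1])  # Euler integration step
          return x  # demodulation and bit decisions follow in the receiver chain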

  12. Corrections of clinical chemistry test results in a laboratory information system.

    PubMed

    Wang, Sihe; Ho, Virginia

    2004-08-01

    The recently released reports by the Institute of Medicine, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. To characterize the errors that have required corrections to be made to pediatric clinical chemistry results in the laboratory information system, Misys. To provide initial data on the errors detected in pediatric clinical chemistry laboratories in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing when a correction was made in the laboratory information system, Misys. Errors were detected either by the clinicians (the results did not fit the patients' clinical conditions) or by the laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident that was discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to the corrections. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. The errors related to noninterfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the noninterfaced tests and panels accounted for 17% of the total test volume in our laboratory. This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on the corrections of results in the laboratory information system. Directly interfacing the instruments to the laboratory information system had favorable effects on reducing laboratory errors.

  13. A data-driven modeling approach to stochastic computation for low-energy biomedical devices.

    PubMed

    Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen

    2011-01-01

    Low-power devices that can detect clinically relevant correlations in physiologically complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10⁻² for logic) that cause computational bit error rates as high as 50%.

  14. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.

  15. Developing an eLearning tool formalizing in YAWL the guidelines used in a transfusion medicine service.

    PubMed

    Russo, Paola; Piazza, Miriam; Leonardi, Giorgio; Roncoroni, Layla; Russo, Carlo; Spadaro, Salvatore; Quaglini, Silvana

    2012-01-01

    Blood transfusion is a complex activity subject to a high risk of potentially fatal errors. The development and application of computer-based systems could help reduce the error rate, playing a fundamental role in improving the quality of care. This poster presents an eLearning tool under development that formalizes the guidelines of the transfusion process. This system, implemented in YAWL (Yet Another Workflow Language), will be used to train personnel in order to improve the efficiency of care and to reduce errors.

  16. Output regulation control for switched stochastic delay systems with dissipative property under error-dependent switching

    NASA Astrophysics Data System (ADS)

    Li, L. L.; Jin, C. L.; Ge, X.

    2018-01-01

    In this paper, the output regulation problem with dissipative property for a class of switched stochastic delay systems is investigated, based on an error-dependent switching law. Under the assumption that no single subsystem is solvable for the problem, a sufficient condition is derived by constructing multiple Lyapunov-Krasovskii functionals with respect to multiple supply rates, via the design of error feedback regulators. The condition is also established when the dissipative property reduces to passivity. Finally, two numerical examples are given to demonstrate the feasibility and efficiency of the present method.

  17. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffers. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and a low error rate.

  18. [A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].

    PubMed

    Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki

    2016-03-01

    A quality assurance (QA) system that simultaneously quantifies the position and duration of a ¹⁹²Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions: to verify and to quantify dwell position and time using a web camera. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the movie using in-house software based on a template-matching technique. This QA system allowed verification of the absolute position in real time and quantification of dwell position and time simultaneously. Verification of the system showed that the mean step size error was 0.31±0.1 mm and the mean dwell time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points for three step sizes, and dwell time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system provides quick verification and quantification of dwell position and time with high accuracy at various dwell positions, without depending on the step size.
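
    A sketch of the template-matching step used to localize the source in a camera frame. The in-house software is not public, so OpenCV is used here as a stand-in; the function name and inputs are illustrative assumptions.

      import cv2

      def locate_source(frame, template):
          """frame, template: grayscale images; returns best match and its score."""
          result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
          _, max_val, _, max_loc = cv2.minMaxLoc(result)
          return max_loc, max_val  # top-left match position, correlation score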

  19. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of the average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering zero-boresight misalignment errors at the receiver. The analysis presented here assumes a unified expression for the PDF of the channel coefficient which incorporates M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using a Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.
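
    Schematically, the PDF-based approach averages the conditional SER over the fading distribution; writing the standard square-M-QAM conditional SER in terms of the Gaussian Q-function (these are textbook expressions, not the paper's final series),

      \[
        \overline{P}_s = \int_0^{\infty} P_s(\gamma)\, f_\gamma(\gamma)\, d\gamma,
        \qquad
        P_s(\gamma) = 4\left(1-\tfrac{1}{\sqrt{M}}\right) Q\!\left(\sqrt{\tfrac{3\gamma}{M-1}}\right)
                    - 4\left(1-\tfrac{1}{\sqrt{M}}\right)^{2} Q^{2}\!\left(\sqrt{\tfrac{3\gamma}{M-1}}\right),
      \]

    and it is the rewriting of Q in its Meijer G-function equivalent that allows the average to be evaluated as a converging power series.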

  20. On Performance of Linear Multiuser Detectors for Wireless Multimedia Applications

    NASA Astrophysics Data System (ADS)

    Agarwal, Rekha; Reddy, B. V. R.; Bindu, E.; Nayak, Pinki

    In this paper, the performance of different multirate schemes in a DS-CDMA system is evaluated. Multirate linear multiuser detectors with multiple processing gains are analyzed for synchronous Code Division Multiple Access (CDMA) systems. Variable data rates are achieved by varying the processing gain. Our conclusion is that the bit error rate for multirate and single-rate systems can be made the same, with a tradeoff against the number of users in linear multiuser detectors.

  1. Study of Systems Using Inertia Wheels for Precise Attitude Control of a Satellite

    NASA Technical Reports Server (NTRS)

    White, John S.; Hansen, Q. Marion

    1961-01-01

    Systems using inertia wheels are evaluated in this report to determine their suitability for precise attitude control of a satellite and to select superior system configurations. Various possible inertia wheel system configurations are first discussed in a general manner. Three of these systems which appear more promising than the others are analyzed in detail, using the Orbiting Astronomical Observatory as an example. The three systems differ from each other only by the method of damping, which is provided by either a rate gyro, an error-rate network, or a tachometer in series with a high-pass filter. An analytical investigation which consists of a generalized linear analysis, a nonlinear analysis using the switching-time method, and an analog computer study shows that all three systems are theoretically capable of producing adequate response and also of maintaining the required pointing accuracy for the Orbiting Astronomical Observatory of plus or minus 0.1 second of arc. Practical considerations and an experimental investigation show, however, that the system which uses an error-rate network to provide damping is superior to the other two systems. The system which uses a rate gyro is shown to be inferior because the threshold level causes a significant amount of limit-cycle operation, and the system which uses a tachometer with a filter is shown to be inferior because a device with the required dynamic range of operation does not appear to be available. The experimental laboratory apparatus used to investigate the dynamic performance of the systems is described, and experimental results are included to show that under laboratory conditions with relatively large extraneous disturbances, a dynamic tracking error of less than plus or minus 0.5 second of arc was obtained.

  2. An Evaluation of Departmental Radiation Oncology Incident Reports: Anticipating a National Reporting System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terezakis, Stephanie A., E-mail: stereza1@jhmi.edu; Harris, Kendra M.; Ford, Eric

    Purpose: Systems to ensure patient safety are of critical importance. The electronic incident reporting systems (IRS) of 2 large academic radiation oncology departments were evaluated for events that may be suitable for submission to a national reporting system (NRS). Methods and Materials: All events recorded in the combined IRS were evaluated from 2007 through 2010. Incidents were graded for potential severity using the validated French Nuclear Safety Authority (ASN) 5-point scale. These incidents were categorized into 7 groups: (1) human error, (2) software error, (3) hardware error, (4) error in communication between 2 humans, (5) error at the human-software interface, (6) error at the software-hardware interface, and (7) error at the human-hardware interface. Results: Between the 2 systems, 4407 incidents were reported. Of these events, 1507 (34%) were considered to have the potential for clinical consequences. Of these 1507 events, 149 (10%) were rated as having a potential severity of ≥2. Of these 149 events, the committee determined that 79 (53%) would be submittable to a NRS, of which the majority were related to human error or to the human-software interface. Conclusions: A significant number of incidents were identified in this analysis. The majority of events in this study were related to human error and to the human-software interface, further supporting the need for a NRS to facilitate field-wide learning and system improvement.

  3. Non-health care facility anticonvulsant medication errors in the United States.

    PubMed

    DeDonato, Emily A; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A

    2018-06-01

    This study provides an epidemiological description of non-health care facility medication errors involving anticonvulsant drugs. A retrospective analysis of National Poison Data System data was conducted on non-health care facility medication errors involving anticonvulsant drugs reported to US Poison Control Centers from 2000 through 2012. During the study period, 108,446 non-health care facility medication errors involving anticonvulsant pharmaceuticals were reported to US Poison Control Centers, averaging 8342 exposures annually. The annual frequency and rate of errors increased significantly over the study period, by 96.6 and 76.7%, respectively. The rate of exposures resulting in health care facility use increased by 83.3% and the rate of exposures resulting in serious medical outcomes increased by 62.3%. In 2012, newer anticonvulsants, including felbamate, gabapentin, lamotrigine, levetiracetam, other anticonvulsants (excluding barbiturates), other types of gamma aminobutyric acid, oxcarbazepine, topiramate, and zonisamide, accounted for 67.1% of all exposures. The rate of non-health care facility anticonvulsant medication errors reported to Poison Control Centers increased during 2000-2012, resulting in more frequent health care facility use and serious medical outcomes. Newer anticonvulsants, although often considered safer and more easily tolerated, were responsible for much of this trend and should still be administered with caution.

  4. The Witness-Voting System

    NASA Astrophysics Data System (ADS)

    Gerck, Ed

    We present a new, comprehensive framework to qualitatively improve election outcome trustworthiness, where voting is modeled as an information transfer process. Although voting is deterministic (all ballots are counted), information is treated stochastically using Information Theory. Error considerations, including faults, attacks, and threats by adversaries, are explicitly included. The influence of errors may be corrected to achieve an election outcome error as close to zero as desired (error-free), with a provably optimal design that is applicable to any type of voting, with or without ballots. Sixteen voting system requirements, including functional, performance, environmental and non-functional considerations, are derived and rated, meeting or exceeding current public-election requirements. The voter and the vote are unlinkable (secret ballot) although each is identifiable. The Witness-Voting System (Gerck, 2001) is extended as a conforming implementation of the provably optimal design that is error-free, transparent, simple, scalable, robust, receipt-free, universally-verifiable, 100% voter-verified, and end-to-end audited.

  5. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding individual heartbeats in photoplethysmogram recordings. The local fit of exponentially decaying cosines with frequencies within the physiological range is used to detect the presence of a heartbeat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
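
    A minimal sketch of the localized model fit described above: within a short window, the photoplethysmogram is modeled as an exponentially decaying cosine whose frequency is constrained to the physiological heart-rate range. The starting values, the 0.5-3.5 Hz bounds and the function names are assumptions for illustration, not the authors' exact settings.

      import numpy as np
      from scipy.optimize import curve_fit

      def model(t, amp, decay, f, phase, offset):
          return amp * np.exp(-decay * t) * np.cos(2 * np.pi * f * t + phase) + offset

      def window_heart_rate(t, ppg):
          p0 = [np.std(ppg), 1.0, 1.2, 0.0, np.mean(ppg)]  # 1.2 Hz ~ 72 BPM start
          bounds = ([0.0, 0.0, 0.5, -np.pi, -np.inf],      # 0.5-3.5 Hz = 30-210 BPM
                    [np.inf, np.inf, 3.5, np.pi, np.inf])
          params, _ = curve_fit(model, t, ppg, p0=p0, bounds=bounds)
          return params[2] * 60.0  # fitted frequency converted to beats per minute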

  6. Concatenated coding for low date rate space communications.

    NASA Technical Reports Server (NTRS)

    Chen, C. H.

    1972-01-01

    In deep space communications with distant planets, the data rate as well as the operating SNR may be very low. To maintain the error rate at a very low level as well, it is necessary to use a sophisticated coding system (a longer code) without excessive decoding complexity. Concatenated coding has been shown to meet such requirements, in that the error rate decreases exponentially with the overall length of the code while the decoder complexity increases only algebraically. Three methods of concatenating an inner code with an outer code are considered. A performance comparison of the three concatenated codes is made.

  7. Learning without Borders: A Review of the Implementation of Medical Error Reporting in Médecins Sans Frontières

    PubMed Central

    Shanks, Leslie; Bil, Karla; Fernhout, Jena

    2015-01-01

    Objective: To analyse the results from the first 3 years of implementation of a medical error reporting system in Médecins Sans Frontières-Operational Centre Amsterdam (MSF) programs. Methodology: A medical error reporting policy was developed with input from frontline workers and introduced to the organisation in June 2010. The definition of medical error used was "the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim." All confirmed error reports were entered into a database without the use of personal identifiers. Results: 179 errors were reported from 38 projects in 18 countries over the period of June 2010 to May 2013. The rate of reporting was 31, 42, and 106 incidents/year for reporting years 1, 2 and 3, respectively. The majority of errors were categorized as dispensing errors (62 cases or 34.6%), errors or delays in diagnosis (24 cases or 13.4%) and inappropriate treatment (19 cases or 10.6%). The impact of the error was categorized as no harm (58, 32.4%), harm (70, 39.1%), death (42, 23.5%) and unknown in 9 (5.0%) reports. Disclosure to the patient took place in 34 cases (19.0%), did not take place in 46 (25.7%), was not applicable for 5 (2.8%) cases and not reported for 94 (52.5%). Remedial actions introduced at headquarters level included guideline revisions and changes to medical supply procedures. At field level, improvements included increased training and supervision, adjustments in staffing levels, and adaptations to the organization of the pharmacy. Conclusion: It was feasible to implement a voluntary reporting system for medical errors despite the complex contexts in which MSF intervenes. The reporting policy led to system changes that improved patient safety and accountability to patients. Challenges remain in achieving widespread acceptance of the policy as evidenced by the low reporting and disclosure rates. PMID:26381622

  8. Optimum Code Rates for Noncoherent MFSK with Errors and Erasures Decoding over Rayleigh Fading Channels

    NASA Technical Reports Server (NTRS)

    Matache, Adina; Ritcey, James A.

    1997-01-01

    In this paper, we analyze the performance of a communication system employing M-ary frequency shift keying (FSK) modulation with errors-and-erasures decoding, using the Viterbi ratio threshold technique for erasure insertion, in Rayleigh fading and AWGN channels.

  9. Analysis of 23 364 patient-generated, physician-reviewed malpractice claims from a non-tort, blame-free, national patient insurance system: lessons learned from Sweden.

    PubMed

    Pukk-Härenstam, K; Ask, J; Brommels, M; Thor, J; Penaloza, R V; Gaffney, F A

    2009-02-01

    In Sweden, patient malpractice claims are handled administratively and compensated if an independent physician review confirms patient injury resulting from medical error. Full access to all malpractice claims and hospital discharge data for the country provided a unique opportunity to assess the validity of patient claims as indicators of medical error and patient injury. To determine: (1) the percentage of patient malpractice claims validated by independent physician review, (2) actual malpractice claims rates (claims frequency / clinical volume) and (3) differences between Swedish and other national malpractice claims rates. DESIGN, SETTING AND MATERIAL: Swedish national malpractice claims and hospital discharge data were combined, and malpractice claims rates were determined by county, hospital, hospital department, surgical procedure, patient age and sex and compared with published studies on medical error and malpractice. From 1997 to 2004, there were 23 364 inpatient malpractice claims filed by Swedish patients treated at hospitals reporting 11 514 798 discharges. The overall claims rate, 0.20%, was stable over the period of study and was similar to that found in other tort and administrative compensation systems. Over this 8-year period, 49.5% (range 47.0-52.6%) of filed claims were judged valid and eligible for compensation. Claims rates varied significantly across hospitals; surgical specialties accounted for 46% of discharges, but 88% of claims. There were also large differences in claims rates for procedures. Patient-generated malpractice claims, as collected in the Swedish malpractice insurance system and adjusted for clinical volumes, have a high validity, as assessed by standardised physician review, and provide unique new information on malpractice risks, preventable medical errors and patient injuries. Systematic collection and analysis of patient-generated quality of care complaints should be encouraged, regardless of the malpractice compensation system in use.

  10. Neural Flight Control System

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2003-01-01

    The Neural Flight Control System (NFCS) was developed to address the need for control systems that can be produced and tested at lower cost, easily adapted to prototype vehicles and for flight systems that can accommodate damaged control surfaces or changes to aircraft stability and control characteristics resulting from failures or accidents. NFCS utilizes a neural-network-based flight control algorithm which automatically compensates for a broad spectrum of unanticipated damage or failures of an aircraft in flight. Pilot stick and rudder pedal inputs are fed into a reference model which produces pitch, roll and yaw rate commands. The reference model frequencies and gains can be set to provide handling quality characteristics suitable for the aircraft of interest. The rate commands are used in conjunction with estimates of the aircraft's stability and control (S&C) derivatives by a simplified Dynamic Inverse controller to produce virtual elevator, aileron and rudder commands. These virtual surface deflection commands are optimally distributed across the aircraft's available control surfaces using linear programming theory. Sensor data is compared with the reference model rate commands to produce an error signal. A Proportional/Integral (PI) error controller "winds up" on the error signal and adds an augmented command to the reference model output with the effect of zeroing the error signal. In order to provide more consistent handling qualities for the pilot, neural networks learn the behavior of the error controller and add in the augmented command before the integrator winds up. In the case of damage sufficient to affect the handling qualities of the aircraft, an Adaptive Critic is utilized to reduce the reference model frequencies and gains to stay within a flyable envelope of the aircraft.
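
    The loop described above is simple to prototype in miniature. The sketch below models a single (pitch) axis only, with an illustrative first-order reference model, a toy plant, and made-up gains; the neural-network augmentation, control allocation, and Adaptive Critic stages are omitted, so this is a schematic of the reference-model/PI-error-controller interaction rather than the flight implementation.

        import numpy as np

        def simulate_pitch_axis(stick, dt=0.01, tau=0.5, kp=2.0, ki=1.0, steps=500):
            """Single-axis sketch: a first-order reference model turns stick
            input into a pitch-rate command; a PI error controller augments
            that command to drive the sensed-rate error toward zero.  All
            dynamics and gains here are illustrative, not flight values."""
            q_cmd = 0.0    # reference-model pitch-rate command
            q_meas = 0.0   # sensed pitch rate (toy plant below)
            integ = 0.0    # PI integrator state
            history = []
            for _ in range(steps):
                # Reference model: first-order lag from stick to rate command
                q_cmd += dt * (stick - q_cmd) / tau
                # Error signal: reference-model command vs. sensor data
                err = q_cmd - q_meas
                # PI error controller "winds up" on the error signal
                integ += ki * err * dt
                augmented_cmd = q_cmd + kp * err + integ
                # Toy plant: pitch rate lags the virtual elevator command
                q_meas += dt * (augmented_cmd - q_meas) / 0.8
                history.append((q_cmd, q_meas))
            return np.array(history)

        print(simulate_pitch_axis(stick=1.0)[-1])  # command vs. response at the end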

  11. English speech sound development in preschool-aged children from bilingual English-Spanish environments.

    PubMed

    Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D

    2008-07-01

    English speech acquisition by typically developing 3- to 4-year-old children with monolingual English backgrounds was compared to English speech acquisition by typically developing 3- to 4-year-old children with bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time for phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences of error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points for all 3 groups was similar, suggesting that all will reach an adult-like system in English with exposure and practice.

  12. The NEEDS Data Base Management and Archival Mass Memory System

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.; Bryant, S. B.; Thomas, D. T.; Wagnon, F. W.

    1980-01-01

    A Data Base Management System and an Archival Mass Memory System are being developed that will have a 10¹²-bit on-line and a 10¹³-bit off-line storage capacity. The integrated system will accept packetized data from the data staging area at 50 Mbps, create a comprehensive directory, provide for file management, record the data, perform error detection and correction, accept user requests, retrieve the requested data files and provide the data to multiple users at a combined rate of 50 Mbps. Stored and replicated data files will have a bit error rate of less than 10⁻⁹ even after ten years of storage. The integrated system will be demonstrated to prove the technology late in 1981.

  13. Antidepressant and antipsychotic medication errors reported to United States poison control centers.

    PubMed

    Kamboj, Alisha; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A

    2018-05-08

    To investigate unintentional therapeutic medication errors associated with antidepressant and antipsychotic medications in the United States and expand current knowledge on the types of errors commonly associated with these medications. A retrospective analysis of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications was conducted using data from the National Poison Data System. From 2000 to 2012, poison control centers received 207 670 calls reporting unintentional therapeutic errors associated with antidepressant or antipsychotic medications that occurred outside of a health care facility, averaging 15 975 errors annually. The rate of antidepressant-related errors increased by 50.6% from 2000 to 2004, decreased by 6.5% from 2004 to 2006, and then increased by 13.0% from 2006 to 2012. The rate of errors related to antipsychotic medications increased by 99.7% from 2000 to 2004 and then increased by 8.8% from 2004 to 2012. Overall, 70.1% of reported errors occurred among adults, and 59.3% were among females. The medications most frequently associated with errors were selective serotonin reuptake inhibitors (30.3%), atypical antipsychotics (24.1%), and other types of antidepressants (21.5%). Most medication errors took place when an individual inadvertently took or was given a medication twice (41.0%), inadvertently took someone else's medication (15.6%), or took the wrong medication (15.6%). This study provides a comprehensive overview of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications. The frequency and rate of these errors increased significantly from 2000 to 2012. Given that use of these medications is increasing in the US, this study provides important information about the epidemiology of the associated medication errors. Copyright © 2018 John Wiley & Sons, Ltd.

  14. Performance analysis of 1-km free-space optical communication system over real atmospheric turbulence channels

    NASA Astrophysics Data System (ADS)

    Liu, Dachang; Wang, Zixiong; Liu, Jianguo; Tan, Jun; Yu, Lijuan; Mei, Haiping; Zhou, Yusong; Zhu, Ninghua

    2017-10-01

    The performance of a free-space optical communication system is highly affected by the atmospheric turbulence in terms of scintillation. An optical communication system based on intensity-modulation direct-detection was built with 1-km transmission distance to evaluate the bit error rate (BER) performance over real atmospheric turbulence. 2.5-, 5-, and 10-Gbps data rate transmissions were carried out, where error-free transmission was achieved in over 37% of the 2.5-Gbps transmissions and over 43% of the 5-Gbps transmissions. In the remaining transmissions, the BER deteriorated as the refractive-index structure constant increased, the two measured quantities following almost the same trend.

  15. Comparison of Errors Using Two Length-Based Tape Systems for Prehospital Care in Children.

    PubMed

    Rappaport, Lara D; Brou, Lina; Givens, Tim; Mandt, Maria; Balakas, Ashley; Roswell, Kelley; Kotas, Jason; Adelgais, Kathleen M

    2016-01-01

    The use of a length/weight-based tape (LBT) for equipment size and drug dosing for pediatric patients is recommended in a joint statement by multiple national organizations. A new system, known as Handtevy™, allows for rapid determination of critical drug doses without performing calculations. To compare two LBT systems for dosing errors and time to medication administration in simulated prehospital scenarios. This was a prospective randomized trial comparing the Broselow Pediatric Emergency Tape™ (Broselow) and Handtevy LBT™ (Handtevy). Paramedics performed 2 pediatric simulations: cardiac arrest with epinephrine administration and hypoglycemia mandating dextrose. Each scenario was repeated utilizing both systems with a 1-year-old and 5-year-old size manikin. Facilitators recorded identified errors and time points of critical actions including time to medication. We enrolled 80 paramedics, performing 320 simulations. For dextrose, there were significantly more errors with Broselow (63.8%) compared to Handtevy (13.8%), and time to administration was longer with the Broselow system (220 seconds vs. 173 seconds). For epinephrine, the LBTs were similar in overall error rate (Broselow 21.3% vs. Handtevy 16.3%) and time to administration (89 vs. 91 seconds). Cognitive errors were more frequent when using the Broselow compared to Handtevy, particularly with dextrose administration. The frequency of procedural errors was similar between the two LBT systems. In simulated prehospital scenarios, use of the Handtevy LBT system resulted in fewer errors for dextrose administration compared to the Broselow LBT, with similar time to administration and accuracy of epinephrine administration.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisk, William J.; Sullivan, Douglas

    This pilot-scale study evaluated the counting accuracy of two people-counting systems that could be used in demand-controlled ventilation systems to provide control signals for modulating outdoor air ventilation rates. The evaluations included controlled challenges of the people-counting systems using pre-planned movements of occupants through doorways and evaluations of counting accuracies when naive occupants (i.e., occupants unaware of the counting systems) passed through the entrance doors of the building or room. The two people-counting systems had high counting accuracies, with errors typically less than 10 percent, for typical non-demanding counting events. However, counting errors were high in some highly challenging situations, such as multiple people passing simultaneously through a door. Counting errors, for at least one system, can be very high if people stand in the field of view of the sensor. Both counting systems have limitations and would need to be used only at appropriate sites and where the demanding situations that led to counting errors were rare.

  17. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  18. New hybrid reverse differential pulse position width modulation scheme for wireless optical communication

    NASA Astrophysics Data System (ADS)

    Liao, Renbo; Liu, Hongzhan; Qiao, Yaojun

    2014-05-01

    In order to improve the power efficiency and reduce the packet error rate of reverse differential pulse position modulation (RDPPM) for wireless optical communication (WOC), a hybrid reverse differential pulse position width modulation (RDPPWM) scheme is proposed, based on RDPPM and reverse pulse width modulation. Subsequently, the symbol structure of RDPPWM is briefly analyzed, and its performance is compared with that of other modulation schemes in terms of average transmitted power, bandwidth requirement, and packet error rate over ideal additive white Gaussian noise (AWGN) channels. Based on the given model, the simulation results show that the proposed modulation scheme has the advantages of improving the power efficiency and reducing the bandwidth requirement. Moreover, in terms of error probability performance, RDPPWM can achieve a much lower packet error rate than that of RDPPM. For example, at the same received signal power of -28 dBm, the packet error rate of RDPPWM can decrease to 2.6×10⁻¹², while that of RDPPM is 2.2×10. Furthermore, RDPPWM does not need symbol synchronization at the receiving end. These considerations make RDPPWM a favorable candidate to select as the modulation scheme in the WOC systems.

  19. Performance analysis of dual-hop optical wireless communication systems over k-distribution turbulence channel with pointing error

    NASA Astrophysics Data System (ADS)

    Mishra, Neha; Sriram Kumar, D.; Jha, Pranav Kumar

    2017-06-01

    In this paper, we investigate the performance of dual-hop free space optical (FSO) communication systems under the effect of strong atmospheric turbulence together with misalignment effects (pointing error). We consider a relay-assisted link using the decode-and-forward (DF) relaying protocol between source and destination, with the assumption that channel state information is available at both the transmitting and receiving terminals. The atmospheric turbulence channels are modeled by the k-distribution with pointing error impairment. Exact closed-form expressions are derived for the outage probability and bit error rate and illustrated through numerical plots. BER results are further compared for the different modulation schemes.

  20. Automated drug dispensing system reduces medication errors in an intensive care setting.

    PubMed

    Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick

    2010-12-01

    We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study unit compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean score for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.

  1. Renal Drug Dosing

    PubMed Central

    Vogel, Erin A.; Billups, Sarah J.; Herner, Sheryl J.

    2016-01-01

    Objective: The purpose of this study was to compare the effectiveness of an outpatient renal dose adjustment alert via a computerized provider order entry (CPOE) clinical decision support system (CDSS) versus a CDSS with alerts made to dispensing pharmacists. Methods: This was a retrospective analysis of patients with renal impairment and 30 medications that are contraindicated or require dose-adjustment in such patients. The primary outcome was the rate of renal dosing errors for study medications that were dispensed between August and December 2013, when a pharmacist-based CDSS was in place, versus August through December 2014, when a prescriber-based CDSS was in place. A dosing error was defined as a prescription for one of the study medications dispensed to a patient where the medication was contraindicated or improperly dosed based on the patient’s renal function. The denominator was all prescriptions for the study medications dispensed during each respective study period. Results: During the pharmacist- and prescriber-based CDSS study periods, 49,054 and 50,678 prescriptions, respectively, were dispensed for one of the included medications. Of these, 878 (1.8%) and 758 (1.5%) prescriptions were dispensed to patients with renal impairment in the respective study periods. Patients in each group were similar with respect to age, sex, and renal function stage. Overall, the five-month error rate was 0.38%. Error rates were similar between the two groups: 0.36% and 0.40% in the pharmacist- and prescriber-based CDSS, respectively (p=0.523). The medication with the highest error rate was dofetilide (0.51% overall) while the medications with the lowest error rate were dabigatran, fondaparinux, and spironolactone (0.00% overall). Conclusions: Prescriber- and pharmacist-based CDSS provided comparable, low rates of potential medication errors. Future studies should be undertaken to examine patient benefits of the prescriber-based CDSS. PMID:27466041
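
    As an aside, a rate comparison of this form (0.36% vs. 0.40% of dispensed prescriptions, p=0.523) can be reproduced in outline with a two-proportion test. The error counts below are back-calculated approximations from the reported rates and denominators, so the resulting p-value will only roughly track the published figure.

        from statsmodels.stats.proportion import proportions_ztest

        # Approximate error counts inferred from the reported rates (illustrative):
        errors = [177, 203]          # ~0.36% of 49,054 and ~0.40% of 50,678
        dispensed = [49054, 50678]   # all study-medication prescriptions per period

        stat, p = proportions_ztest(errors, dispensed)
        print(f"{errors[0]/dispensed[0]:.2%} vs {errors[1]/dispensed[1]:.2%}, p = {p:.3f}")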

  2. Automated alignment system for optical wireless communication systems using image recognition.

    PubMed

    Brandl, Paul; Weiss, Alexander; Zimmermann, Horst

    2014-07-01

    In this Letter, we describe the realization of a tracked line-of-sight optical wireless communication system for indoor data distribution. We built a laser-based transmitter with adaptive focus and ray steering by a microelectromechanical systems mirror. To execute the alignment procedure, we used a CMOS image sensor at the transmitter side and developed an algorithm for image recognition to localize the receiver's position. The receiver is based on a self-developed optoelectronic integrated chip with low requirements on the receiver optics to make the system economically attractive. With this system, we were able to set up the communication link automatically without any back channel and to perform error-free (bit error rate <10⁻⁹) data transmission over a distance of 3.5 m with a data rate of 3 Gbit/s.

  3. A dual-phantom system for validation of velocity measurements in stenosis models under steady flow.

    PubMed

    Blake, James R; Easson, William J; Hoskins, Peter R

    2009-09-01

    A dual-phantom system is developed for validation of velocity measurements in stenosis models. Pairs of phantoms with identical geometry and flow conditions are manufactured, one for ultrasound and one for particle image velocimetry (PIV). The PIV model is made from silicone rubber, and a new PIV fluid is made that matches the refractive index of 1.41 of silicone. Dynamic scaling was performed to correct for the increased viscosity of the PIV fluid compared with that of the ultrasound blood mimic. The degree of stenosis in the model pairs agreed to within 1%. The velocities in the laminar flow region up to the peak velocity location agreed to within 15%, and the difference could be explained by errors in ultrasound velocity estimation. At low flow rates and in mild stenoses, good agreement was observed in the distal flow fields, excepting the maximum velocities. At high flow rates, there was considerable difference in velocities in the poststenosis flow field (maximum centreline differences of 30%), which would seem to represent real differences in hydrodynamic behavior between the two models. Sources of error included: variation of viscosity because of temperature (random error, which could account for differences of up to 7%); ultrasound velocity estimation errors (systematic errors); and geometry effects in each model, particularly because of imperfect connectors and corners (systematic errors, potentially affecting the inlet length and flow stability). The current system is best placed to investigate measurement errors in the laminar flow region rather than the poststenosis turbulent flow region.

  4. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
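
    The correction evaluated here amounts to replacing the usual all-or-nothing tip likelihoods with error-aware ones. A minimal sketch, assuming a symmetric misread model with per-base error rate e (the paper's exact parameterization may differ):

        import numpy as np

        BASES = "ACGT"

        def tip_likelihood(observed_base, error_rate):
            """Conditional likelihood vector over true bases at a tip site.
            Without correction this is 1 for the observed base and 0 elsewhere;
            with a per-base sequencing error rate e, each alternative true base
            retains probability e/3 of having been misread as the observed one."""
            e = error_rate
            return np.array([1.0 - e if b == observed_base else e / 3.0
                             for b in BASES])

        print(tip_likelihood("A", 0.0))    # [1. 0. 0. 0.] -- uncorrected
        print(tip_likelihood("A", 0.01))   # [0.99 0.00333 0.00333 0.00333]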

  5. Void fraction and velocity measurement of simulated bubble in a rotating disc using high frame rate neutron radiography.

    PubMed

    Saito, Y; Mishima, K; Matsubayashi, M

    2004-10-01

    To evaluate measurement error of local void fraction and velocity field in a gas-molten metal two-phase flow by high-frame-rate neutron radiography, experiments using a rotating stainless-steel disc, which has several holes of various diameters and depths simulating gas bubbles, were performed. Measured instantaneous void fraction and velocity field of the simulated bubbles were compared with the calculated values based on the rotating speed, the diameter and the depth of the holes as parameters and the measurement error was evaluated. The rotating speed was varied from 0 to 350 rpm (tangential velocity of the simulated bubbles from 0 to 1.5 m/s). The effect of shutter speed of the imaging system on the measurement error was also investigated. It was revealed from the Lagrangian time-averaged void fraction profile that the measurement error of the instantaneous void fraction depends mainly on the light-decay characteristics of the fluorescent converter. The measurement error of the instantaneous local void fraction of simulated bubbles is estimated to be 20%. In the present imaging system, the light-decay characteristics of the fluorescent converter affect the measurement remarkably, and so should be taken into account in estimating the measurement error of the local void fraction profile.

  6. Markov chain algorithms: a template for building future robust low-power systems

    PubMed Central

    Deka, Biplab; Birklykke, Alex A.; Duwe, Henry; Mansinghka, Vikash K.; Kumar, Rakesh

    2014-01-01

    Although computational systems are looking towards post-CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with an inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs), as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems. PMID:24842030
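
    The error-tolerance claim is easy to see in miniature. The following schematic casts sorting as a Markov chain and injects transition errors at a configurable rate; it illustrates the general idea rather than the paper's exact formulation.

        import random

        def mc_sort(a, steps=20000, error_rate=0.05):
            """Each step inspects a random adjacent pair and swaps it when out
            of order; a transition error (probability error_rate) performs the
            opposite action.  The chain still drifts strongly toward the sorted
            state, illustrating the inherent error tolerance of MC algorithms."""
            a = list(a)
            for _ in range(steps):
                i = random.randrange(len(a) - 1)
                should_swap = a[i] > a[i + 1]
                if random.random() < error_rate:   # faulty transition
                    should_swap = not should_swap
                if should_swap:
                    a[i], a[i + 1] = a[i + 1], a[i]
            return a

        def inversions(a):
            return sum(x > y for k, x in enumerate(a) for y in a[k + 1:])

        data = random.sample(range(200), 100)
        print("inversions before:", inversions(data))
        print("inversions after :", inversions(mc_sort(data)))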

  7. Influence of measurement error on Maxwell's demon

    NASA Astrophysics Data System (ADS)

    Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.

    2017-06-01

    In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease in the system. The erasure still gives the same increase in entropy, and the total process is irreversible. Another consequence of measurement error is that incorrect feedback is applied, which further increases the entropy production if the protocol is not adapted to the expected error rate. We consider the effect of measurement error on a realistic single-electron box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error rate ε.
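
    One standard way to quantify the cost of measurement error in such engines (a Sagawa-Ueda-style bound, not necessarily the exact expressions of this paper) is through the mutual information of a binary measurement with error probability ε:

        I(X;M) = ln 2 + ε ln ε + (1 − ε) ln(1 − ε)   (in nats),

    which falls from ln 2 at ε = 0 to zero at ε = 1/2. The average work extractable per cycle is then bounded by W ≤ k_B T I(X;M), so measurement error directly shrinks the engine's thermodynamic budget before any feedback or erasure inefficiency is accounted for.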

  8. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    The error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.

  9. Eliminating ambiguity in digital signals

    NASA Technical Reports Server (NTRS)

    Weber, W. J., III

    1979-01-01

    The multiamplitude minimum shift keying (MAMSK) transmission system and its method of differential encoding overcome the problem of ambiguity associated with advanced digital-transmission techniques with little or no penalty in transmission rate, error rate, or system complexity. The principle of the method is that, if signal points are properly encoded and decoded, bits are detected correctly regardless of phase ambiguities.
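
    The differential-encoding idea is compact enough to demonstrate directly. A binary-phase sketch follows (the actual MAMSK scheme operates on a multiamplitude signal set; this only illustrates why a constant phase ambiguity cancels):

        def diff_encode(bits):
            """Carry information in the *change* between successive symbols, so
            a constant phase ambiguity at the receiver cancels in decoding."""
            phase, out = 0, []
            for b in bits:
                phase ^= b          # flip phase on a 1, hold on a 0
                out.append(phase)
            return out

        def diff_decode(symbols):
            return [symbols[0]] + [a ^ b for a, b in zip(symbols, symbols[1:])]

        bits = [1, 0, 1, 1, 0, 0, 1]
        rx = [s ^ 1 for s in diff_encode(bits)]   # 180-degree ambiguity flips all
        print(diff_decode(rx)[1:] == bits[1:])    # True: data survives the flip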

  10. Error analysis for reducing noisy wide-gap concentric cylinder rheometric data for nonlinear fluids - Theory and applications

    NASA Technical Reports Server (NTRS)

    Borgia, Andrea; Spera, Frank J.

    1990-01-01

    This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-squares regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with accuracy equal to that of the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates its rheology reasonably well. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.

  11. Accentuate the Negative: Grammatical Errors during Narrative Production as a Clinical Marker of Central Nervous System Abnormality in School-Aged Children with Fetal Alcohol Spectrum Disorders

    ERIC Educational Resources Information Center

    Thorne, John C.

    2017-01-01

    Purpose: The purpose of this study was to examine (a) whether increased grammatical error rates during a standardized narrative task are a more clinically useful marker of central nervous system abnormality in Fetal Alcohol Spectrum Disorders (FASD) than common measures of productivity or grammatical complexity and (b) whether combining the rate…

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellefson, S; Department of Human Oncology, University of Wisconsin, Madison, WI; Culberson, W

    Purpose: Discrepancies in absolute dose values have been detected between the ViewRay treatment planning system and ArcCHECK readings when performing delivery quality assurance on the ViewRay system with the ArcCHECK-MR diode array (SunNuclear Corporation). In this work, we investigate whether these discrepancies are due to errors in the ViewRay planning and/or delivery system or due to errors in the ArcCHECK's readings. Methods: Gamma analysis was performed on 19 ViewRay patient plans using the ArcCHECK. Frequency analysis on the dose differences was performed. To investigate whether discrepancies were due to measurement or delivery error, 10 diodes in low-gradient dose regions were chosen to compare with ion chamber measurements in a PMMA phantom with the same size and shape as the ArcCHECK, provided by SunNuclear. The diodes chosen all had significant discrepancies in absolute dose values compared to the ViewRay TPS. Absolute doses to PMMA were compared between the ViewRay TPS calculations, ArcCHECK measurements, and measurements in the PMMA phantom. Results: Three of the 19 patient plans had 3%/3mm gamma passing rates less than 95%, and ten of the 19 plans had 2%/2mm passing rates less than 95%. Frequency analysis implied a non-random error process. Out of the 10 diode locations measured, ion chamber measurements were all within 2.2% error relative to the TPS and had a mean error of 1.2%. ArcCHECK measurements ranged from 4.5% to over 15% error relative to the TPS and had a mean error of 8.0%. Conclusion: The ArcCHECK performs well for quality assurance on the ViewRay under most circumstances. However, under certain conditions the absolute dose readings are significantly higher compared to the planned doses. As the ion chamber measurements consistently agree with the TPS, it can be concluded that the discrepancies are due to ArcCHECK measurement error and not TPS or delivery system error. This work was funded by the Bhudatt Paliwal Professorship and the University of Wisconsin Medical Radiation Research Center.
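
    For reference, the 3%/3mm and 2%/2mm figures cited above come from gamma analysis, which folds a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail index per measurement point. A simplified 1-D sketch with global dose normalization (clinical tools interpolate and evaluate in 3-D):

        import numpy as np

        def gamma_pass_rate(ref_pos, ref_dose, meas_pos, meas_dose,
                            dose_tol_frac=0.03, dist_tol_mm=3.0):
            """Fraction of measured points with gamma <= 1: for each point,
            take the minimum combined dose/distance metric over all
            reference points (dose tolerance relative to the global max)."""
            dose_tol = dose_tol_frac * ref_dose.max()
            gammas = []
            for x, d in zip(meas_pos, meas_dose):
                dd = (ref_dose - d) / dose_tol
                dr = (ref_pos - x) / dist_tol_mm
                gammas.append(np.sqrt(dd**2 + dr**2).min())
            return (np.array(gammas) <= 1.0).mean()

        # Toy profiles: measurement shifted 1 mm and scaled 2% relative to plan
        pos = np.linspace(-50, 50, 201)
        ref = np.exp(-(pos / 20.0) ** 2)
        meas = 1.02 * np.exp(-((pos - 1.0) / 20.0) ** 2)
        print(f"3%/3mm pass rate: {gamma_pass_rate(pos, ref, pos, meas):.1%}")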

  13. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
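
    The benefit of passing erasure flags from the inner decoder to the outer RS decoder follows from the standard errors-and-erasures bound: an (n, k) RS code with minimum distance d_min = n − k + 1 decodes correctly whenever

        2e + s ≤ n − k,

    where e is the number of symbol errors and s the number of erasures, so each reliably flagged erasure consumes only half the correction power of an unflagged error.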

  14. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    NASA Technical Reports Server (NTRS)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  15. A new rate-dependent model for high-frequency tracking performance enhancement of piezoactuator system

    NASA Astrophysics Data System (ADS)

    Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han

    2017-05-01

    Feedforward-feedback control is widely used in motion control of piezoactuator systems. Due to the phase lag caused by incomplete dynamics compensation, the performance of the composite controller is greatly limited at high frequency. This paper proposes a new rate-dependent model to improve the high-frequency tracking performance by reducing the dynamics compensation error. The rate-dependent model is designed as a function of the input and the input variation rate to describe the input-output relationship of the residual system dynamics, which mainly manifests as phase lag over a wide frequency band. The direct inversion of the proposed rate-dependent model is then used to compensate for the residual system dynamics. Using the proposed rate-dependent model as the feedforward term, the open-loop performance can be improved significantly at medium-to-high frequencies. Combined with the feedback controller, the composite controller can then provide enhanced closed-loop performance from low frequency to high frequency. At a frequency of 1 Hz, the proposed controller presents the same performance as previous methods. However, at a frequency of 900 Hz, the tracking error is reduced to 30.7% of that of the decoupled approach.

  16. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  17. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffers. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and a low error rate. 17 figs.

  18. A method of predicting flow rates required to achieve anti-icing performance with a porous leading edge ice protection system

    NASA Technical Reports Server (NTRS)

    Kohlman, D. L.; Albright, A. E.

    1983-01-01

    An analytical method was developed for predicting the minimum flow rates required to provide anti-ice protection with a porous leading edge fluid ice protection system. The predicted flow rates agree, with an average error of less than 10 percent, with six experimentally determined flow rates from tests in the NASA Icing Research Tunnel on a general aviation wing section.

  19. Discrepancies in medication entries between anesthetic and pharmacy records using electronic databases.

    PubMed

    Vigoda, Michael M; Gencorelli, Frank J; Lubarsky, David A

    2007-10-01

    Accurate recording of disposition of controlled substances is required by regulatory agencies. Linking anesthesia information management systems (AIMS) with medication dispensing systems may facilitate automated reconciliation of medication discrepancies. In this retrospective investigation at a large academic hospital, we reviewed 11,603 cases (spanning an 8-month period) comparing records of medications (i.e., narcotics, benzodiazepines, ketamine, and thiopental) recorded as removed from our automated medication dispensing system with medications recorded as administered in our AIMS. In 15% of cases, we found discrepancies between dispensed and administered medications. Discrepancies occurred in both the AIMS (8% of cases) and the medication dispensing system (10% of cases). Although there were many different types of user errors, nearly 75% of them resulted from either an error in the amount of drug waste documented in the medication dispensing system (35%) or an error in documenting the medication in the AIMS (40%). A significant percentage of cases contained data entry errors in both the automated dispensing system and the AIMS. This error rate limits the current practicality of automating the necessary reconciliation. An electronic interface between an AIMS and a medication dispensing system could alert users of medication entry errors prior to finalizing a case, thus reducing the time (and cost) of reconciling discrepancies.

  20. Using nurses and office staff to report prescribing errors in primary care.

    PubMed

    Kennedy, Amanda G; Littenberg, Benjamin; Senders, John W

    2008-08-01

    To implement a prescribing-error reporting system in primary care offices and analyze the reports. Descriptive analysis of a voluntary prescribing-error reporting system in seven primary care offices in Vermont, USA, involving 103 prescribers, managers, nurses and office staff. Nurses and office staff were asked to report all communications with community pharmacists regarding prescription problems. All reports were classified by severity category, setting, error mode, prescription domain and error-producing conditions. All practices submitted reports, although reporting decreased by 3.6 reports per month (95% CI, -2.7 to -4.4, P<0.001, by linear regression analysis). Two hundred and sixteen reports were submitted. Nearly 90% (142/165) of errors were severity Category B (errors that did not reach the patient) according to the National Coordinating Council for Medication Error Reporting and Prevention Index for Categorizing Medication Errors. Nineteen errors reached the patient without causing harm (Category C), and 4 errors caused temporary harm requiring intervention (Category E). Errors involving strength were found in 30% of reports, including 23 prescriptions written for strengths not commercially available. Antidepressants, narcotics and antihypertensives were the most frequent drug classes reported. Participants completed an exit survey with a response rate of 84.5% (87/103). Nearly 90% (77/87) of respondents were willing to continue reporting after the study ended; however, none of the participants currently submit reports. Nurses and office staff are a valuable resource for reporting prescribing errors. However, without ongoing reminders, the reporting system is not sustainable.

  1. Innovations in Medication Preparation Safety and Wastage Reduction: Use of a Workflow Management System in a Pediatric Hospital.

    PubMed

    Davis, Stephen Jerome; Hurtado, Josephine; Nguyen, Rosemary; Huynh, Tran; Lindon, Ivan; Hudnall, Cedric; Bork, Sara

    2017-01-01

    Background: USP <797> regulatory requirements have mandated that pharmacies improve aseptic techniques and cleanliness of the medication preparation areas. In addition, the Institute for Safe Medication Practices (ISMP) recommends that technology and automation be used as much as possible for preparing and verifying compounded sterile products. Objective: To determine the benefits associated with the implementation of the workflow management system, such as reducing medication preparation and delivery errors, reducing quantity and frequency of medication errors, avoiding costs, and enhancing the organization's decision to move toward positive patient identification (PPID). Methods: At Texas Children's Hospital, data were collected and analyzed from January 2014 through August 2014 in the pharmacy areas in which the workflow management system would be implemented. Data were excluded for September 2014 during the workflow management system oral liquid implementation phase. Data were collected and analyzed from October 2014 through June 2015 to determine whether the implementation of the workflow management system reduced the quantity and frequency of reported medication errors. Data collected and analyzed during the study period included the quantity of doses prepared, number of incorrect medication scans, number of doses discontinued from the workflow management system queue, and the number of doses rejected. Data were collected and analyzed to identify patterns of incorrect medication scans, to determine reasons for rejected medication doses, and to determine the reduction in wasted medications. Results: During the 17-month study period, the pharmacy department dispensed 1,506,220 oral liquid and injectable medication doses. From October 2014 through June 2015, the pharmacy department dispensed 826,220 medication doses that were prepared and checked via the workflow management system. Of those 826,220 medication doses, there were 16 reported incorrect volume errors. The error rate after the implementation of the workflow management system averaged 8.4%, which was a 1.6% reduction. After the implementation of the workflow management system, the average number of reported oral liquid medication and injectable medication errors decreased to 0.4 and 0.2 times per week, respectively. Conclusion: The organization was able to achieve its purpose and goal of improving the provision of quality pharmacy care through optimal medication use and safety by reducing medication preparation errors. Error rates decreased and the workflow processes were streamlined, which has led to seamless operations within the pharmacy department. There has been significant cost avoidance and waste reduction and enhanced interdepartmental satisfaction due to the reduction of reported medication errors.

  2. Data driven CAN node reliability assessment for manufacturing system

    NASA Astrophysics Data System (ADS)

    Zhang, Leiming; Yuan, Yong; Lei, Yong

    2017-01-01

    The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to inaccessibility of the node information and the complexity of the node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data-driven node bus-off time assessment method for CAN networks is proposed that directly uses network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, a generalized zero-inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, the stochastic model is constructed to predict the MTTB of the node. Accelerated case studies with different error injection rates are conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer-controlled error injection system. Experimental results show that the MTTB of nodes predicted by the proposed method agrees well with observations in the case studies. The proposed data-driven node time-to-bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
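
    For orientation, the zero-inflated Poisson family underlying the GZIP model mixes a point mass at zero (error-free observation windows) with an ordinary Poisson count of error events; the generalized variant used in the paper builds on this basic form:

        P(X = 0) = π + (1 − π) e^(−λ)
        P(X = k) = (1 − π) e^(−λ) λ^k / k!,   k ≥ 1,

    where π is the probability of an excess zero and λ is the Poisson rate of error events.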

  3. Performance analysis of decode-and-forward dual-hop optical spatial modulation with diversity combiner over atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Odeyemi, Kehinde O.; Owolawi, Pius A.; Srivastava, Viranjay M.

    2017-11-01

    Dual-hop transmission is a technique of growing interest that can be used to mitigate atmospheric turbulence along free space optical (FSO) communication links. This paper analyzes the performance of decode-and-forward (DF) dual-hop FSO systems in conjunction with spatial modulation and diversity combiners over a Gamma-Gamma atmospheric turbulence channel using heterodyne detection. Maximum Ratio Combining (MRC), Equal Gain Combining (EGC) and Selection Combining (SC) are considered at the relay and destination as mitigation tools to improve the system error performance. A power series expansion of the modified Bessel function is used to derive closed-form end-to-end Average Pairwise Error Probability (APEP) expressions for each of the combiners under study, and a tight upper bound on the Average Bit Error Rate (ABER) per hop is given. The overall end-to-end ABER for the dual-hop FSO system is then evaluated. The numerical results show that dual-hop transmission systems outperform direct-link systems. Moreover, the impact of having the same or different combiners at the relay and destination is also presented. The results confirm that combining dual-hop transmission with spatial modulation and diversity combining significantly improves the system's error rate, with the MRC combiner offering optimal performance with respect to variation in atmospheric turbulence, change in the link's average received SNR, and link range.

  4. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  5. Autosophy: an alternative vision for satellite communication, compression, and archiving

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus; Holtz, Eric; Kalienky, Diana

    2006-08-01

    Satellite communication and archiving systems are now designed according to an outdated Shannon information theory where all data is transmitted in meaningless bit streams. Video bit rates, for example, are determined by screen size, color resolution, and scanning rates. The video "content" is irrelevant so that totally random images require the same bit rates as blank images. An alternative system design, based on the newer Autosophy information theory, is now evolving, which transmits data "content" or "meaning" in a universally compatible 64-bit format. This would allow mixing all multimedia transmissions in the Internet's packet stream. The new system design uses self-assembling data structures, which grow like data crystals or data trees in electronic memories, for both communication and archiving. The advantages for satellite communication and archiving may include: very high lossless image and video compression, unbreakable encryption, resistance to transmission errors, universally compatible data formats, self-organizing error-proof mass memories, immunity to the Internet's Quality of Service problems, and error-proof secure communication protocols. Legacy data transmission formats can be converted by simple software patches or integrated chipsets to be forwarded through any media - satellites, radio, Internet, cable - without needing to be reformatted. This may result in orders of magnitude improvements for all communication and archiving systems.

  6. Particle Swarm Optimization approach to defect detection in armour ceramics.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2017-03-01

    In this research, various extracted features were used in the development of an automated ultrasonic-sensor-based inspection system that enables defect classification in each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant, redundant features commonly introduced to a dataset reduces the classifier's performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of a classification system. In the context of a multi-criteria optimization problem (i.e., to minimize the classification error rate and reduce the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. Moreover, Particle Swarm Optimization (PSO) has seen little exploration in the classification of high-frequency ultrasonic signals. Hence, a binary-coded Particle Swarm Optimization (BPSO) technique is investigated for the implementation of feature subset selection and to optimize the classification error rate. In the proposed method, the population data are used as input to an Artificial Neural Network (ANN) based classification system to obtain the error rate, as the ANN serves as the evaluator of the PSO fitness function. Copyright © 2016. Published by Elsevier B.V.
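
    A compact sketch of BPSO-driven feature-subset selection follows. To keep it self-contained, synthetic data stands in for the ultrasonic features and logistic regression for the paper's ANN evaluator; all parameters are illustrative.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X, y = make_classification(n_samples=300, n_features=20,
                                   n_informative=5, random_state=1)

        def fitness(mask):
            """Cross-validated classification error on the selected subset."""
            if not mask.any():
                return 1.0
            acc = cross_val_score(LogisticRegression(max_iter=500),
                                  X[:, mask.astype(bool)], y, cv=3).mean()
            return 1.0 - acc

        n_particles, n_feat, iters = 12, X.shape[1], 30
        pos = rng.integers(0, 2, (n_particles, n_feat))
        vel = rng.normal(0.0, 1.0, (n_particles, n_feat))
        pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_fit.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, n_feat))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            # Sigmoid transfer: velocity sets the probability of each bit being 1
            pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(int)
            fit = np.array([fitness(p) for p in pos])
            better = fit < pbest_fit
            pbest[better], pbest_fit[better] = pos[better], fit[better]
            gbest = pbest[pbest_fit.argmin()].copy()

        print("selected features:", np.flatnonzero(gbest), "error:", pbest_fit.min())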

  7. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failure can be significantly reduced, by a factor increasing with the code distance.
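
    The estimation step can be prototyped with any off-the-shelf Gaussian process regressor. A minimal sketch on synthetic drifting error-rate data (the inputs here are hypothetical stand-ins for windowed error correction statistics; the paper's actual features and kernel may differ):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Synthetic stand-in: per-window error-rate estimates drifting in time
        t = np.linspace(0, 10, 40)[:, None]
        rates = (0.01 + 0.004 * np.sin(0.6 * t.ravel())
                 + np.random.default_rng(0).normal(0, 5e-4, t.shape[0]))

        # RBF kernel tracks the slow drift; WhiteKernel absorbs sampling noise
        gp = GaussianProcessRegressor(RBF(length_scale=2.0) + WhiteKernel(1e-6))
        gp.fit(t, rates)

        mean, std = gp.predict(np.array([[10.5], [11.0]]), return_std=True)
        print("predicted error rates:", mean, "+/-", std)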

  8. Attitude and vibration control of a large flexible space-based antenna

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1982-01-01

    Control systems synthesis is considered for controlling the rigid body attitude and elastic motion of a large deployable space-based antenna. Two methods for control systems synthesis are considered. The first method utilizes the stability and robustness properties of the controller consisting of torque actuators and collocated attitude and rate sensors. The second method is based on the linear-quadratic-Gaussian control theory. A combination of the two methods, which results in a two level hierarchical control system, is also briefly discussed. The performance of the controllers is analyzed by computing the variances of pointing errors, feed misalignment errors and surface contour errors in the presence of sensor and actuator noise.

  9. Disturbance torque rejection properties of the NASA/JPL 70-meter antenna axis servos

    NASA Technical Reports Server (NTRS)

    Hill, R. E.

    1989-01-01

    Analytic methods for evaluating pointing errors caused by external disturbance torques are developed and applied to determine the effects of representative values of wind and friction torque. The expressions relating pointing errors to disturbance torques are shown to be strongly dependent upon the state estimator parameters, as well as upon the state feedback gain and the flow versus pressure characteristics of the hydraulic system. Under certain conditions, when control is derived from an uncorrected estimate of integral position error, the desired type 2 servo properties are not realized and finite steady-state position errors result. Methods for reducing these errors to negligible proportions through the proper selection of control gain and estimator correction parameters are demonstrated. The steady-state error produced by a disturbance torque is found to be directly proportional to the hydraulic internal leakage. This property can be exploited to provide a convenient method of determining system leakage from field measurements of estimator error, axis rate, and hydraulic differential pressure.

  10. Effect of Pointing Error on the BER Performance of an Optical CDMA FSO Link with SIK Receiver

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2017-12-01

    An analytical approach is presented for an optical code division multiple access (OCDMA) system over a free space optical (FSO) channel, considering the effect of pointing error between the transmitter and the receiver. The analysis is carried out with an optical sequence inverse keying (SIK) correlator receiver with intensity modulation and direct detection (IM/DD) to find the bit error rate (BER) with pointing error. The results are evaluated numerically in terms of signal-to-noise plus multi-access interference (MAI) ratio, BER, and power penalty due to pointing error. It is observed that the OCDMA FSO system is strongly affected by pointing error, with significant power penalty at BERs of 10^-6 and 10^-9. For example, the penalty at a BER of 10^-9 is found to be 9 dB at a normalized pointing error of 1.4 for 16 users with a processing gain of 256, and is reduced to 6.9 dB when the processing gain is increased to 1,024.
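
    For intuition on the scale of such penalties, a generic Q-function BER model, Q(sqrt(SNR)), gives the SNR required at the two BER targets quoted above; a pointing-induced loss must then be made up with the same number of extra dB. This is a hedged stand-in that ignores MAI and the SIK correlator, so it is not the paper's analysis.

      import math

      def q(x):
          # Gaussian tail function Q(x).
          return 0.5 * math.erfc(x / math.sqrt(2.0))

      def required_snr_db(target_ber):
          # Smallest SNR (0.01 dB grid) at which the generic BER model
          # Q(sqrt(SNR)) meets the target.
          for s in range(0, 3001):
              snr_db = s / 100.0
              if q(math.sqrt(10.0 ** (snr_db / 10.0))) <= target_ber:
                  return snr_db
          raise ValueError("target BER not reachable on this grid")

      for target in (1e-6, 1e-9):
          print(target, required_snr_db(target), "dB")
      # A pointing-induced loss of, say, 2 dB must be made up by the same
      # extra 2 dB of transmit power in this memoryless model.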

  11. Accurate acceleration of kinetic Monte Carlo simulations through the modification of rate constants.

    PubMed

    Chatterjee, Abhijit; Voter, Arthur F

    2010-05-21

    We present a novel computational algorithm called the accelerated superbasin kinetic Monte Carlo (AS-KMC) method that enables a more efficient study of rare-event dynamics than the standard KMC method while maintaining control over the error. In AS-KMC, the rate constants for processes that are observed many times are lowered during the course of a simulation. As a result, rare processes are observed more frequently than in KMC and the time progresses faster. We first derive error estimates for AS-KMC when the rate constants are modified. These error estimates are next employed to develop a procedure for lowering process rates with control over the maximum error. Finally, numerical calculations are performed to demonstrate that the AS-KMC method captures the correct dynamics, while providing significant CPU savings over KMC in most cases. We show that the AS-KMC method can be employed with any KMC model, even when no time scale separation is present (although in such cases no computational speed-up is observed), without requiring the knowledge of various time scales present in the system.
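
    A bare-bones sketch of the rate-lowering idea might look like the following; it is not the authors' full algorithm, which derives the damping factor and observation threshold from explicit error bounds, and all parameter values here are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)

      def as_kmc(rates, n_steps=20000, n_obs=100, factor=0.5):
          # Toy accelerated-superbasin KMC: whenever a process has fired
          # n_obs times since its last reset, its rate constant is scaled
          # down by `factor`, so rare processes surface sooner. The paper's
          # error control (choosing factor/n_obs from derived bounds) is
          # omitted here.
          rates = np.array(rates, dtype=float)
          counts = np.zeros(len(rates))
          t, history = 0.0, []
          for _ in range(n_steps):
              total = rates.sum()
              t += rng.exponential(1.0 / total)            # KMC time step
              i = rng.choice(len(rates), p=rates / total)  # pick a process
              history.append(i)
              counts[i] += 1
              if counts[i] >= n_obs:     # frequently observed: damp it
                  rates[i] *= factor
                  counts[i] = 0
          return t, history

      t, h = as_kmc([1e6, 1e6, 1.0])     # two fast processes, one rare
      print(h.count(2), "rare events observed")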

  12. Usefulness of biological fingerprint in magnetic resonance imaging for patient verification.

    PubMed

    Ueda, Yasuyuki; Morishita, Junji; Kudomi, Shohei; Ueda, Katsuhiko

    2016-09-01

    The purpose of our study is to investigate the feasibility of automated patient verification using multi-planar reconstruction (MPR) images generated from three-dimensional magnetic resonance (MR) imaging of the brain. Several anatomy-related MPR images generated from the three-dimensional fast scout scan of each MR examination were used as biological fingerprint images in this study. The database for this study consisted of 730 temporal pairs of MR examinations of the brain. We calculated the correlation value between current and prior biological fingerprint images of the same patient, and also for all combinations of two images from different patients, to evaluate the effectiveness of our method for patient verification. The best performance of our system was as follows: a half-total error rate of 1.59% with a false acceptance rate of 0.023% and a false rejection rate of 3.15%, an equal error rate of 1.37%, and a rank-one identification rate of 98.6%. Our method makes it possible to verify the identity of the patient using only existing medical images, without additional equipment. Our method will also contribute to the management of patient misidentification errors caused by human error.
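
    The verification step reduces to scoring image pairs and thresholding; a minimal sketch with normalized cross-correlation as the matching score and an empirical equal-error-rate search is shown below. The score distributions are synthetic stand-ins, not the study's data.

      import numpy as np

      def ncc(a, b):
          # Normalized cross-correlation between two images, used as the
          # matching score between current and prior fingerprint images.
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          return float((a * b).mean())

      def equal_error_rate(genuine, impostor):
          # Sweep thresholds; the EER is where false-acceptance and
          # false-rejection rates cross.
          best_gap, eer = 1.0, None
          for thr in np.sort(np.concatenate([genuine, impostor])):
              far = float(np.mean(impostor >= thr))  # accepted impostors
              frr = float(np.mean(genuine < thr))    # rejected genuine pairs
              if abs(far - frr) < best_gap:
                  best_gap, eer = abs(far - frr), (far + frr) / 2.0
          return eer

      rng = np.random.default_rng(3)
      genuine = rng.normal(0.9, 0.05, 730)    # synthetic same-patient scores
      impostor = rng.normal(0.3, 0.15, 5000)  # synthetic cross-patient scores
      print(equal_error_rate(genuine, impostor))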

  13. System selects framing rate for spectrograph camera

    NASA Technical Reports Server (NTRS)

    1965-01-01

    A circuit reflects zero-order light from the incoming radiation of a spectrograph monitor to a photomultiplier, providing an error signal that controls the rate at which film is advanced and driven through the camera.

  14. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    NASA Astrophysics Data System (ADS)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in detecting and estimating rainfall rate. The residuals are decomposed into Hit, Miss and FA estimation biases, while systematic and random error components are also analyzed seasonally and categorically. A new interpretation of estimation accuracy, termed "reliability of PERSIANN estimations", is introduced, and the behavior of existing categorical/statistical measures and error components is analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following:
    - The analyzed contingency table indexes indicate better detection precision during spring and fall.
    - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of systematic error.
    - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase.
    - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls.
    It is also important to note that PERSIANN's error characteristics vary by season with the conditions and rainfall patterns of that season, which shows the need for a seasonally differentiated approach to the calibration of this product. Overall, we believe that the error component analyses performed in this study can substantially help further local studies on post-calibration and bias reduction of PERSIANN estimations.
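
    The hit/miss/false-alarm bias decomposition used above can be reproduced with a few lines of array arithmetic; the 0.1 mm threshold and sample values below are illustrative assumptions.

      import numpy as np

      def decompose_bias(obs, est, thresh=0.1):
          # Split total bias (est - obs) into hit, miss and false-alarm
          # components, in the spirit of the decomposition used above.
          rain_o, rain_e = obs >= thresh, est >= thresh
          d = est - obs
          hit = d[rain_o & rain_e].sum()     # both detect rain
          miss = d[rain_o & ~rain_e].sum()   # satellite misses observed rain
          fa = d[~rain_o & rain_e].sum()     # satellite false alarms
          # A below-threshold remainder completes the identity
          # hit + miss + fa + remainder == d.sum().
          return hit, miss, fa

      obs = np.array([0.0, 2.0, 5.0, 0.0, 1.0])   # gauge rainfall (mm/day)
      est = np.array([0.5, 1.0, 6.0, 0.0, 0.0])   # satellite estimate
      print(decompose_bias(obs, est), "total bias:", (est - obs).sum())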

  15. Feedback on prescribing errors to junior doctors: exploring views, problems and preferred methods.

    PubMed

    Bertels, Jeroen; Almoudaris, Alex M; Cortoos, Pieter-Jan; Jacklin, Ann; Franklin, Bryony Dean

    2013-06-01

    Prescribing errors are common in hospital inpatients. However, the literature suggests that doctors are often unaware of their errors because they are not always informed of them. It has been suggested that providing more feedback to prescribers may reduce subsequent error rates. Only a few studies have investigated the views of prescribers on receiving such feedback, or the views of hospital pharmacists as potential feedback providers. Our aim was to explore the views of junior doctors and hospital pharmacists regarding feedback on individual doctors' prescribing errors. Objectives were to determine how feedback was currently provided and any associated problems, to explore views on other approaches to feedback, and to make recommendations for designing suitable feedback systems. The setting was a large London NHS hospital trust. To explore views on current and possible feedback mechanisms, self-administered questionnaires were given to all junior doctors and pharmacists, combining 5-point Likert-scale statements and open-ended questions. Outcome measures were agreement scores for statements regarding perceived prescribing error rates, opinions on feedback, barriers to feedback, and preferences for future practice. Response rates were 49% (37/75) for junior doctors and 57% (57/100) for pharmacists. In general, doctors did not feel threatened by feedback on their prescribing errors. They felt that the feedback currently provided was constructive but often irregular and insufficient. Most pharmacists provided feedback in various ways; however, some did not or were inconsistent. They were willing to provide more feedback, but did not feel it was always effective or feasible owing to barriers such as communication problems and time constraints. Both professional groups preferred individual feedback with additional regular generic feedback on common or serious errors. Feedback on prescribing errors was valued and acceptable to both professional groups. From the results, several suggested methods of providing feedback on prescribing errors emerged. Addressing barriers such as the identification of individual prescribers would facilitate feedback in practice. Research investigating whether or not feedback reduces the subsequent error rate is now needed.

  16. Ka-Band Phased Array System Characterization

    NASA Technical Reports Server (NTRS)

    Acosta, R.; Johnson, S.; Sands, O.; Lambert, K.

    2001-01-01

    Phased Array Antennas (PAAs) using patch radiating elements are projected to transmit data at rates several orders of magnitude higher than currently offered by reflector-based systems. However, there are a number of potential sources of degradation in the Bit Error Rate (BER) performance of the communications link that are unique to PAA-based links: short spacing of radiating elements can induce mutual coupling, long spacing can induce grating lobes, modulo-2-pi phase errors can add to Inter-Symbol Interference (ISI), and the phase shifters and power divider network introduce losses into the system. This paper describes efforts underway to test and evaluate the effects of the performance-degrading features of phased-array antennas when used in a high-data-rate modulation link. The tests and evaluations described here uncover the interaction between the electrical characteristics of a PAA and the BER performance of a communication link.

  17. Investigation of the Usability of Computerized Critical Care Information Systems in Germany.

    PubMed

    von Dincklage, Falk; Suchodolski, Klaudiusz; Lichtner, Gregor; Friesdorf, Wolfgang; Podtschaske, Beatrice; Ragaller, Maximilian

    2017-01-01

    The term "usability" describes how effectively, efficiently, and with what level of user satisfaction an information system can be used to accomplish specific goals. Computerized critical care information systems (CCISs) with high usability increase quality of care and staff satisfaction, while reducing medication errors. Conversely, systems lacking usability can interrupt clinical workflow, facilitate errors, and increase charting time. The aim of this study was to investigate and compare usability across CCIS currently used in Germany. In this study, German intensive care unit (ICU) nurses and physicians completed a specialized, previously validated, web-based questionnaire. The questionnaire assessed CCIS usability based on three rating models: an overall rating of the systems, a model rating technical usability, and a model rating task-specific usability. We analyzed results from 535 survey participants and compared eight different CCIS commonly used in Germany. Our results showed that usability strongly differs across the compared systems. The system ICUData had the best overall rating and technical usability, followed by the platforms ICM and MetaVision. The same three systems performed best in the rating of task-specific usability without significant differences between each other. Across all systems, overall ratings were more dependent on ease-of-use aspects than on aspects of utility/functionality, and the general scope of the functions offered was rated better than how well the functions are realized. Our results suggest that manufacturers should shift some of their effort away from the development of new features and focus more on improving the ease-of-use and quality of existing features.

  18. A study of GPS measurement errors due to noise and multipath interference for CGADS

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.

    1996-01-01

    This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms - an auto-regressive/least-squares (AR-LS) method and a combined adaptive notch filter/least-squares (ANF-ALS) method - are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on performance.

  19. An approach enabling adaptive FEC for OFDM in fiber-VLLC system

    NASA Astrophysics Data System (ADS)

    Wei, Yiran; He, Jing; Deng, Rui; Shi, Jin; Chen, Shenghai; Chen, Lin

    2017-12-01

    In this paper, we propose an orthogonal circulant matrix transform (OCT)-based adaptive frame-level forward error correction (FEC) scheme for a fiber-visible laser light communication (VLLC) system and demonstrate it experimentally with Reed-Solomon (RS) codes. In this method, no extra bits are spent on adaptation messages beyond the training sequence (TS), which is simultaneously used for synchronization and channel estimation. RS coding can therefore be adapted frame by frame via feedback of the last received codeword error rate (CER), estimated from the TSs of the previous few OFDM frames. In addition, the experimental results show that, over 20 km of standard single-mode fiber (SSMF) and 8 m of visible light transmission, the overhead of RS codewords is up to 14.12% lower than that of conventional adaptive subcarrier-RS-coded 16-QAM OFDM at a bit error rate (BER) of 10^-5.
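
    The frame-level adaptation loop amounts to mapping the fed-back CER estimate to a code strength for the next frame. A hypothetical rate table makes the mechanism concrete; the thresholds and k values below are assumptions, not the paper's.

      # Hypothetical strength table: lower k in RS(255, k) means a stronger,
      # lower-rate code. Thresholds and k values are illustrative only.
      RS_TABLE = [
          (1e-4, 239),   # clean channel: high-rate code
          (1e-3, 223),
          (1e-2, 191),
          (1.0, 127),    # poor channel: strongest code
      ]

      def pick_rs_k(estimated_cer):
          # Frame-level adaptation: choose the RS message length k for the
          # next OFDM frame from the codeword error rate estimated on the
          # training sequences of the previous frames.
          for max_cer, k in RS_TABLE:
              if estimated_cer <= max_cer:
                  return k
          return RS_TABLE[-1][1]

      print(pick_rs_k(3e-3))   # CER feedback of 3e-3 -> RS(255, 191)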

  20. Error and attack tolerance of complex networks

    NASA Astrophysics Data System (ADS)

    Albert, Réka; Jeong, Hawoong; Barabási, Albert-László

    2000-07-01

    Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
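
    The contrast between random failure and targeted attack is easy to reproduce numerically; a sketch using networkx on a Barabási-Albert graph is given below, where the graph size and removal fraction are arbitrary choices.

      import random
      import networkx as nx

      def giant_fraction(g):
          # Largest connected component relative to the remaining nodes.
          return (max(len(c) for c in nx.connected_components(g))
                  / g.number_of_nodes())

      def remove_and_measure(g, order, fraction=0.05):
          g = g.copy()
          g.remove_nodes_from(order[:int(fraction * g.number_of_nodes())])
          return giant_fraction(g)

      random.seed(4)
      g = nx.barabasi_albert_graph(2000, 2)   # scale-free test network
      failures = random.sample(list(g.nodes), g.number_of_nodes())
      attack = sorted(g.nodes, key=g.degree, reverse=True)  # hubs first

      print("random failures:", remove_and_measure(g, failures))
      print("targeted attack:", remove_and_measure(g, attack))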

  1. Standardized error severity score (ESS) ratings to quantify risk associated with child restraint system (CRS) and booster seat misuse.

    PubMed

    Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley

    2017-11-17

    Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency-for example, in car seat clinics or during prototype user testing-to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation performance data revealed several areas of misuse of the CRS/booster seat associated with high potential injury risk. Collectively, findings indicate that standardized ESS ratings are useful for estimating injury risk potential associated with real-world CRS and booster seat installation errors.
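
    As a toy illustration of combining severity with frequency, the composite below multiplies an expert ESS rating by the observed misuse rate, the common FMEA-style composition; the misuse modes and numbers are hypothetical, and the study's exact composition rule may differ.

      def risk_priority(ess, counts, n_installs):
          # FMEA-style composite: expert injury severity (ESS) weighted by
          # the observed frequency of each misuse mode. All values in this
          # example are hypothetical.
          return {mode: ess[mode] * counts[mode] / n_installs for mode in ess}

      ess = {"loose harness": 9, "loose installation": 8, "wrong belt path": 7}
      counts = {"loose harness": 12, "loose installation": 18, "wrong belt path": 4}
      rpn = risk_priority(ess, counts, n_installs=26)
      print(sorted(rpn.items(), key=lambda kv: -kv[1]))   # highest risk first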

  2. Evaluating the Effective Factors for Reporting Medical Errors among Midwives Working at Teaching Hospitals Affiliated to Isfahan University of Medical Sciences.

    PubMed

    Khorasani, Fahimeh; Beigi, Marjan

    2017-01-01

    Recently, the hospital evaluation and accreditation system has placed special emphasis on reporting malpractice and sharing errors and the lessons learnt from them, but owing to the lack of a systematic approach to solving problems within the same system, this issue has remained unattended. This study was conducted to determine the effective factors for reporting medical errors among midwives. The project was a descriptive cross-sectional observational study. Data gathering tools were a standard checklist and two researcher-made questionnaires. Sampling was conducted among all midwives working at teaching hospitals affiliated to Isfahan University of Medical Sciences through a census (convenience) method and lasted 3 months. Data were analyzed using descriptive and inferential statistics in SPSS 16. Results showed that 79.1% of the staff reported errors, and the highest rate of errors occurred in the processing of patients' tests. In this study, the mean score of midwives' knowledge about errors was 79.1 and the mean score of their attitude toward reporting errors was 70.4. There was a direct relation between midwives' knowledge and attitude scores and the reporting of errors. Based on these results on the appropriate knowledge and attitude of midwifery staff regarding errors and action toward reporting them, it is recommended that the system for handling errors and hospital risks be strengthened.

  3. TRAC based sensing for autonomous rendezvous

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.; Monford, Leo

    1991-01-01

    The Targeting Reflective Alignment Concept (TRAC) sensor is to be used in an effort to support an Autonomous Rendezvous and Docking (AR&D) flight experiment. The TRAC sensor uses a fixed-focus, fixed-iris CCD camera and a target that is a combination of active and passive components. The system experiment is anticipated to fly in 1994 using two Commercial Experiment Transporters (COMETs). The requirements for the sensor are: bearing error less than or equal to 0.075 deg; bearing error rate less than 0.3 deg/sec; attitude error less than 0.5 deg; and attitude rate error less than 2.0 deg/sec. The range requirement depends on the range and the range rate of the vehicle. The active component of the target is several 'kilo-bright' LEDs that can emit 2500 millicandela with 40 milliwatts of input power. Flashing the lights in a known pattern eliminates background illumination. The system should be able to support rendezvous from 300 meters all the way to capture. A question that arose during the presentation: what are the lifetime of the LEDs and their sensitivity to radiation? The LEDs should be manufactured to military specifications, coated with silicon dioxide, and all other space-qualified precautions should be taken. The LEDs will not be on all the time, so they should easily last the two-year mission.

  4. A high speed sequential decoder

    NASA Technical Reports Server (NTRS)

    Lum, H., Jr.

    1972-01-01

    The performance and theory of operation for the High Speed Hard Decision Sequential Decoder are delineated. The decoder is a forward error correction system which is capable of accepting data from binary-phase-shift-keyed and quadriphase-shift-keyed modems at input data rates up to 30 megabits per second. Test results show that the decoder is capable of maintaining a composite error rate of 0.00001 at an input E_b/N_0 of 5.6 dB. This performance has been obtained with minimum circuit complexity.

  5. Integrated Model for Performance Analysis of All-Optical Multihop Packet Switches

    NASA Astrophysics Data System (ADS)

    Jeong, Han-You; Seo, Seung-Woo

    2000-09-01

    The overall performance of an all-optical packet switching system is usually determined by two criteria, i.e., switching latency and packet loss rate. In some real-time applications, however, in which packets arriving later than a timeout period are discarded as loss, the packet loss rate becomes the most dominant criterion for system performance. Here we focus on evaluating the performance of all-optical packet switches in terms of the packet loss rate, which normally arises from the insufficient hardware or the degradation of an optical signal. Considering both aspects, we propose what we believe is a new analysis model for the packet loss rate that reflects the complicated interactions between physical impairments and system-level parameters. On the basis of the estimation model for signal quality degradation in a multihop path we construct an equivalent analysis model of a switching network for evaluating an average bit error rate. With the model constructed we then propose an integrated model for estimating the packet loss rate in three architectural examples of multihop packet switches, each of which is based on a different switching concept. We also derive the bounds on the packet loss rate induced by bit errors. Finally, it is verified through simulation studies that our analysis model accurately predicts system performance.

  6. Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.

    PubMed

    Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M

    2018-01-01

    Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by the amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate (p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.

  7. Review of medication errors that are new or likely to occur more frequently with electronic medication management systems.

    PubMed

    Van de Vreede, Melita; McGrath, Anne; de Clifford, Jan

    2018-05-14

    Objective. The aim of the present study was to identify and quantify medication errors reportedly related to electronic medication management systems (eMMS) and those considered likely to occur more frequently with eMMS. This included developing a new classification system relevant to eMMS errors. Methods. Eight Victorian hospitals with eMMS participated in a retrospective audit of reported medication incidents from their incident reporting databases between May and July 2014. Site-appointed project officers submitted deidentified incidents they deemed new or likely to occur more frequently due to eMMS, together with the Incident Severity Rating (ISR). The authors reviewed and classified incidents. Results. There were 5826 medication-related incidents reported. In total, 93 (47 prescribing errors, 46 administration errors) were identified as new or potentially related to eMMS. Only one ISR2 (moderate) and no ISR1 (severe or death) errors were reported, so harm to patients in this 3-month period was minimal. The most commonly reported error types were 'human factors' and 'unfamiliarity or training' (70%) and 'cross-encounter or hybrid system errors' (22%). Conclusions. Although the results suggest that the errors reported were of low severity, organisations must remain vigilant to the risk of new errors and avoid the assumption that eMMS is the panacea to all medication error issues. What is known about the topic? eMMS have been shown to reduce some types of medication errors, but it has been reported that some new medication errors have been identified and some are likely to occur more frequently with eMMS. There are few published Australian studies that have reported on medication error types that are likely to occur more frequently with eMMS in more than one organisation and that include administration and prescribing errors. What does this paper add? This paper includes a new simple classification system for eMMS that is useful and outlines the most commonly reported incident types and can inform organisations and vendors on possible eMMS improvements. The paper suggests a new classification system for eMMS medication errors. What are the implications for practitioners? The results of the present study will highlight to organisations the need for ongoing review of system design, refinement of workflow issues, staff education and training and reporting and monitoring of errors.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
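
    A toy rendition of the mechanism helps fix ideas: only a small head of each frame is enciphered (a keystream XOR stands in for a real cipher) and only that head gets FEC, here Hamming(7,4); the signaling transformation applied to the remainder is not modeled, and all sizes are assumptions.

      import numpy as np

      G = np.array([[1, 0, 0, 0, 1, 1, 0],   # Hamming(7,4) generator in one
                    [0, 1, 0, 0, 1, 0, 1],   # common convention; corrects a
                    [0, 0, 1, 0, 0, 1, 1],   # single bit error per codeword
                    [0, 0, 0, 1, 1, 1, 1]])

      def hamming_encode(bits4):
          return (bits4 @ G) % 2

      def transmit_frame(frame_bits, keystream, protected_len=16):
          # Only the first protected_len bits are "encrypted" (a keystream
          # XOR stands in for a real cipher) and only they get FEC; the
          # signaling transformation applied to the remainder in the paper
          # is not modeled here.
          head = frame_bits[:protected_len] ^ keystream[:protected_len]
          coded = np.concatenate([hamming_encode(head[i:i + 4])
                                  for i in range(0, protected_len, 4)])
          return np.concatenate([coded, frame_bits[protected_len:]])

      rng = np.random.default_rng(5)
      frame = rng.integers(0, 2, 128)
      key = rng.integers(0, 2, 16)
      print(transmit_frame(frame, key).shape)   # (140,): 16*7/4 + 112 bits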

  9. Error detection capability of a novel transmission detector: a validation study for online VMAT monitoring.

    PubMed

    Pasler, Marlies; Michel, Kilian; Marrazzo, Livia; Obenland, Michael; Pallotta, Stefania; Björnsgard, Mari; Lutterbach, Johannes

    2017-09-01

    The purpose of this study was to characterize a new single large-area ionization chamber, the integral quality monitor system (iRT, Germany), for online and real-time beam monitoring. Signal stability, monitor unit (MU) linearity and dose rate dependence were investigated for static and arc deliveries and compared to independent ionization chamber measurements. The dose verification capability of the transmission detector system was evaluated by comparing calculated and measured detector signals for 15 volumetric modulated arc therapy plans. The error detection sensitivity was tested by introducing MLC position and linac output errors. Deviations in dose distributions between the original and error-induced plans were compared in terms of detector signal deviation, dose-volume histogram (DVH) metrics and 2D γ-evaluation (2%/2 mm and 3%/3 mm). The detector signal is linearly dependent on linac output and shows negligible (<0.4%) dose rate dependence up to 460 MU min^-1. Signal stability is within 1% for cumulative detector output; substantial variations were observed for the segment-by-segment signal. Calculated versus measured cumulative signal deviations ranged from -0.16% to 2.25%. DVH, mean 2D γ-value and detector signal evaluations showed increasing deviations with regard to the respective reference with growing MLC and dose output errors; good correlation between DVH metrics and detector signal deviation was found (e.g. PTV D_mean: R^2 = 0.97). Positional MLC errors of 1 mm and errors in linac output of 2% were identified with the transmission detector system. The extensive tests performed in this investigation show that the new transmission detector provides a stable and sensitive cumulative signal output and is suitable for beam monitoring during patient treatment.

  10. Error detection capability of a novel transmission detector: a validation study for online VMAT monitoring

    NASA Astrophysics Data System (ADS)

    Pasler, Marlies; Michel, Kilian; Marrazzo, Livia; Obenland, Michael; Pallotta, Stefania; Björnsgard, Mari; Lutterbach, Johannes

    2017-09-01

    The purpose of this study was to characterize a new single large-area ionization chamber, the integral quality monitor system (iRT, Germany), for online and real-time beam monitoring. Signal stability, monitor unit (MU) linearity and dose rate dependence were investigated for static and arc deliveries and compared to independent ionization chamber measurements. The dose verification capability of the transmission detector system was evaluated by comparing calculated and measured detector signals for 15 volumetric modulated arc therapy plans. The error detection sensitivity was tested by introducing MLC position and linac output errors. Deviations in dose distributions between the original and error-induced plans were compared in terms of detector signal deviation, dose-volume histogram (DVH) metrics and 2D γ-evaluation (2%/2 mm and 3%/3 mm). The detector signal is linearly dependent on linac output and shows negligible (<0.4%) dose rate dependence up to 460 MU min^-1. Signal stability is within 1% for cumulative detector output; substantial variations were observed for the segment-by-segment signal. Calculated versus measured cumulative signal deviations ranged from -0.16% to 2.25%. DVH, mean 2D γ-value and detector signal evaluations showed increasing deviations with regard to the respective reference with growing MLC and dose output errors; good correlation between DVH metrics and detector signal deviation was found (e.g. PTV D_mean: R^2 = 0.97). Positional MLC errors of 1 mm and errors in linac output of 2% were identified with the transmission detector system. The extensive tests performed in this investigation show that the new transmission detector provides a stable and sensitive cumulative signal output and is suitable for beam monitoring during patient treatment.

  11. Error-Free Text Typing Performance of an Inductive Intra-Oral Tongue Computer Interface for Severely Disabled Individuals.

    PubMed

    Andreasen Struijk, Lotte N S; Bentsen, Bo; Gaihede, Michael; Lontis, Eugen R

    2017-11-01

    For severely paralyzed individuals, alternative computer interfaces are becoming increasingly essential for everyday life as social and vocational activities are facilitated by information technology and as the environment becomes more automatic and remotely controllable. Tongue computer interfaces have proven to be desirable to users, partly due to their high degree of aesthetic acceptability, but so far the mature systems have shown a relatively low error-free text typing efficiency. This paper evaluated the intra-oral inductive tongue computer interface (ITCI) in its intended use: error-free text typing in a generally available text editing system, Word. Individuals with tetraplegia and able-bodied individuals used the ITCI for typing using a MATLAB interface and for Word typing for 4 to 5 experimental days, and the results showed an average error-free text typing rate in Word of 11.6 correct characters/min across all participants and of 15.5 correct characters/min for participants familiar with tongue piercings. Improvements in typing rates between the sessions suggest that typing rates can be improved further through long-term use of the ITCI.

  12. Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Bryant, Larry

    2014-01-01

    Command File Errors (CFEs) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the mission system composed of the hardware, software and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system and to develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.

  13. General Solution for Theoretical Packet Data Loss Rate

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin; Schlesinger, Adam

    2006-01-01

    Communications systems which transfer blocks ("frames") of data must use a marker ("frame synchronization pattern") for identifying where a block begins. A technique ("frame synchronization strategy") is used to locate the start of each frame and maintain synchronization as additional blocks are processed. A device which strips out the frame synchronization pattern [FSP] and provides an "end of frame" pulse is called a frame synchronizer. As clock and data errors are introduced into the system, the start-of-block marker becomes displaced and/or corrupted. The capability of the frame synchronizer to stay locked to the pattern under these conditions is a figure of merit for the frame synchronization strategy. It is important to select a strategy which will stay locked nearly all the time at bit error rates where the data is usable. ("Bit error rate" [BER] is the fraction of binary bits which are inverted by passage through a communication system.) The fraction of frames that are discarded because the frame synchronizer is not locked is called "Percent Data Loss" or "Packet Data Loss rate" [PDL]. A general approach for accurately predicting PDL given BER was developed in Theoretical Percent Data Loss Calculation and Measurement Accuracy, T. P. Kelly, LESC-30554, December 1992. Kelly gave a solution in terms of matrix equations, and only addressed "level" channel encoding. This paper goes on to give a closed-form polynomial solution for the most common class of frame synchronizer strategies, and will also address "mark" and "space" (differential) channel encoding, and burst error environments. The paper is divided into four sections and follows a logically ordered presentation, with results developed before they are evaluated. However, most readers will derive the greatest benefit from this paper by treating the results as reference material. The result developed for differential encoding can be extended to other applications (like block codes) where the probability is needed that a block contains only a certain number of errors.
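
    The building block behind such PDL calculations is the probability that a block of bits survives with at most a given number of errors on a memoryless channel; a short sketch follows, with the pattern length and BER chosen arbitrarily.

      from math import comb

      def p_block_ok(n_bits, ber, max_errors=0):
          # Probability that a block of n_bits contains at most max_errors
          # inverted bits on a memoryless channel; with max_errors=0 this
          # is the familiar (1 - BER)**n_bits.
          return sum(comb(n_bits, k) * ber ** k * (1.0 - ber) ** (n_bits - k)
                     for k in range(max_errors + 1))

      # A 32-bit sync pattern at BER 1e-3: miss probability if the pattern
      # must match exactly, versus tolerating one mismatched bit.
      print(1.0 - p_block_ok(32, 1e-3))
      print(1.0 - p_block_ok(32, 1e-3, max_errors=1))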

  14. Information technology and medication safety: what is the benefit?

    PubMed Central

    Kaushal, R; Bates, D

    2002-01-01

    Medication errors occur frequently and have significant clinical and financial consequences. Several types of information technologies can be used to decrease rates of medication errors. Computerized physician order entry with decision support significantly reduces serious inpatient medication error rates in adults. Other available information technologies that may prove effective for inpatients include computerized medication administration records, robots, automated pharmacy systems, bar coding, "smart" intravenous devices, and computerized discharge prescriptions and instructions. In outpatients, computerization of prescribing and patient oriented approaches such as personalized web pages and delivery of web based information may be important. Public and private mandates for information technology interventions are growing, but further development, application, evaluation, and dissemination are required. PMID:12486992

  15. An observational study of drug administration errors in a Malaysian hospital (study of drug administration errors).

    PubMed

    Chua, S S; Tea, M H; Rahman, M H A

    2009-04-01

    Drug administration errors were the second most frequent type of medication error, after prescribing errors, but the latter were often intercepted; hence, administration errors were more likely to reach the patients. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observation of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for error were observed, and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate fell to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that risk management protocols can be developed and implemented.
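
    The reported interval can be checked with the usual normal approximation for a proportion; the sketch below reproduces 11.4% (9.5-13.3) from 127 errors in 1118 opportunities up to rounding.

      from math import sqrt

      def wald_ci(errors, opportunities, z=1.96):
          # Normal-approximation 95% confidence interval for an observed
          # error proportion, reported in percent.
          p = errors / opportunities
          half = z * sqrt(p * (1.0 - p) / opportunities)
          return 100.0 * p, 100.0 * (p - half), 100.0 * (p + half)

      print(wald_ci(127, 1118))   # ~ (11.4, 9.5, 13.2), matching the abstract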

  16. Technical Note: Error metrics for estimating the accuracy of needle/instrument placement during transperineal magnetic resonance/ultrasound-guided prostate interventions.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Villarini, Barbara; Rodell, Rachael; Martin, Paul; Han, Lianghao; Donaldson, Ian; Ahmed, Hashim U; Moore, Caroline M; Emberton, Mark; Barratt, Dean C

    2018-04-01

    Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. A set of nine measures is presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using an MR/US-guided transperineal approach. Using the SmartTarget fusion system, the MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), with an overall system instrument targeting error of 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. The application of a comprehensive, unbiased validation assessment for MR/US-guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems. © 2018 American Association of Physicists in Medicine.

  17. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  18. System and method for correcting attitude estimation

    NASA Technical Reports Server (NTRS)

    Josselson, Robert H. (Inventor)

    2010-01-01

    A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.
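
    In discrete time the structure described above reduces to an integrator path plus a feed-forward gain on the same rate signal, combined by a summer; the sketch below uses a fixed illustrative gain and no feedback error signal, per the fixed-gain claim.

      import numpy as np

      def estimate_attitude(rates, dt, k_ff=0.02):
          # Integrator path plus a feed-forward compensator (a fixed gain
          # here, per the fixed-gain claim), combined by a summer. No
          # feedback error signal is used. The gain value is illustrative.
          non_compensated = np.cumsum(rates) * dt    # integrated rates
          compensated = k_ff * rates                 # feed-forward path
          return non_compensated + compensated       # summer output

      rates = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))  # toy rate profile
      print(estimate_attitude(rates, dt=0.01)[:3])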

  19. The "Measuring Outcomes of Clinical Connectivity" (MOCC) trial: investigating data entry errors in the Electronic Primary Care Research Network (ePCRN).

    PubMed

    Fontaine, Patricia; Mendenhall, Tai J; Peterson, Kevin; Speedie, Stuart M

    2007-01-01

    The electronic Primary Care Research Network (ePCRN) enrolled PBRN researchers in a feasibility trial to test the functionality of the network's electronic architecture and investigate error rates associated with two data entry strategies used in clinical trials. PBRN physicians and research assistants who registered with the ePCRN were eligible to participate. After online consent and randomization, participants viewed simulated patient records, presented as either abstracted data (short form) or progress notes (long form). Participants transcribed 50 data elements onto electronic case report forms (CRFs) without integrated field restrictions. Data errors were analyzed. Ten geographically dispersed PBRNs enrolled 100 members and completed the study in less than 7 weeks. The estimated overall error rate if field restrictions had been applied was 2.3%. Participants entering data from the short form had a higher rate of correctly entered data fields (94.5% vs 90.8%, P = .004) and significantly more error-free records (P = .003). Feasibility outcomes integral to completion of an Internet-based, multisite study were successfully achieved. Further development of programmable electronic safeguards is indicated. The error analysis conducted in this study will aid design of specific field restrictions for electronic CRFs, an important component of clinical trial management systems.

  20. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed. These should be reported, separately from the match probability, when requested by the court or when there are internal or external indications for error. It should also be made clear that there are various other issues to consider, like DNA transfer. Forensic statistical models, in particular Bayesian networks, may be useful to take the various uncertainties into account and demonstrate their effects on the evidential value of the forensic DNA results. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches

    NASA Technical Reports Server (NTRS)

    Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between satellite averages based on occasional observations by satellite systems and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.

  2. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length rate error without a priori knowledge of their membership functions or familiarity with the behavior of the Tethered Satellite System.
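
    The flavor of leader clustering at the heart of AFLC can be sketched in a few lines; the vigilance threshold and data below are illustrative, and the fuzzy membership updates of the full algorithm are omitted.

      import numpy as np

      def leader_cluster(samples, vigilance=0.5):
          # Bare-bones leader clustering in the spirit of AFLC: the first
          # sample seeds a cluster; each new sample joins the nearest
          # cluster if within `vigilance`, otherwise it seeds a new one.
          centers, counts = [], []
          for x in samples:
              if centers:
                  dists = [np.linalg.norm(x - c) for c in centers]
                  j = int(np.argmin(dists))
                  if dists[j] <= vigilance:
                      counts[j] += 1
                      centers[j] += (x - centers[j]) / counts[j]  # running mean
                      continue
              centers.append(np.array(x, dtype=float))
              counts.append(1)
          return centers

      rng = np.random.default_rng(6)
      pts = np.vstack([rng.normal(m, 0.1, (20, 2)) for m in (0.0, 1.0, 2.0)])
      print(len(leader_cluster(pts)), "clusters found")   # expect 3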

  3. Quantification and characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wood, Christopher J.; Gambetta, Jay M.

    2018-03-01

    We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.

  4. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Work on partial unit memory codes continued; it was shown that, for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied, using the channel cut-off rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel were considered using a heuristic selection rule based on a water-filling argument. The use of error-correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.

  5. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    NASA Astrophysics Data System (ADS)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to direction commands in a supervised mode. The images in the data sets are collected under a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
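
    A minimal sketch of the augmentation step described above, assuming normalized grayscale images in [0, 1]; the noise levels are illustrative defaults, not the paper's values.

        # Gaussian-noise and salt-and-pepper augmentation for training images.
        import numpy as np

        def gaussian_noise(img, sigma=0.05, seed=None):
            rng = np.random.default_rng(seed)
            noisy = img + rng.normal(0.0, sigma, img.shape)
            return np.clip(noisy, 0.0, 1.0)

        def salt_and_pepper(img, amount=0.02, seed=None):
            rng = np.random.default_rng(seed)
            noisy = img.copy()
            mask = rng.random(img.shape)
            noisy[mask < amount / 2] = 0.0            # pepper
            noisy[mask > 1 - amount / 2] = 1.0        # salt
            return noisy

        # Toy usage: one image becomes three training samples
        img = np.random.default_rng(0).random((64, 64))
        batch = [img, gaussian_noise(img, seed=1), salt_and_pepper(img, seed=2)]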

  6. Development of a wideband pulse quaternary modulation system. [for an operational 400 Mbps baseband laser communication system]

    NASA Technical Reports Server (NTRS)

    Federhofer, J. A.

    1974-01-01

    Laboratory data verifying the pulse quaternary modulation (PQM) theoretical predictions is presented. The first laboratory PQM laser communication system was successfully fabricated, integrated, tested and demonstrated. System bit error rate tests were performed and, in general, indicated approximately a 2 dB degradation from the theoretically predicted results. These tests indicated that no gross errors were made in the initial theoretical analysis of PQM. The relative ease with which the entire PQM laboratory system was integrated and tested indicates that PQM is a viable candidate modulation scheme for an operational 400 Mbps baseband laser communication system.

  7. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis]

    NASA Technical Reports Server (NTRS)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  8. Error Analysis of Magnetohydrodynamic Angular Rate Sensor Combining with Coriolis Effect at Low Frequency.

    PubMed

    Ji, Yue; Xu, Mengjie; Li, Xingfei; Wu, Tengfei; Tuo, Weixiao; Wu, Jun; Dong, Jiuzhi

    2018-06-13

    The magnetohydrodynamic (MHD) angular rate sensor (ARS), with its low noise level over an ultra-wide bandwidth, has been developed for lasing and imaging applications, especially line-of-sight (LOS) systems. A modified MHD ARS combined with the Coriolis effect was studied in this paper to expand the sensor's bandwidth at low frequency (<1 Hz), which is essential for precision LOS pointing and wide-bandwidth LOS jitter suppression. The model and the simulation method were constructed, and a comprehensive solving method based on the magnetic and electric interaction methods was proposed. The numerical results on the Coriolis effect and the frequency response of the modified MHD ARS were detailed. In addition, since the experimental results of the designed sensor were consistent with the simulation results, an analysis of the model errors is discussed. Our study provides an error analysis method for an MHD ARS combined with the Coriolis effect and offers a framework for future studies to minimize the error.

  9. Air Ground Data Link VHF Airline Communications and Reporting System (ACARS) Preliminary Test Report

    DOT National Transportation Integrated Search

    1995-02-01

    An effort was conducted to determine actual ground-to-air and air-to-ground performance of the Airline Communications and Reporting System (ACARS), Very High Frequency (VHF) Data Link System. Parameters of system throughput, error rates, and a...

  10. Ultrasound transducer function: annual testing is not sufficient.

    PubMed

    Mårtensson, Mattias; Olsson, Mats; Brodin, Lars-Åke

    2010-10-01

    The objective was to follow up the study 'High incidence of defective ultrasound transducers in use in routine clinical practice' and to evaluate whether annual testing is sufficient to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level. A total of 299 transducers were tested in 13 clinics at five hospitals in the Stockholm area. Approximately 7000-15,000 ultrasound examinations are carried out at these clinics every year. The transducers tested in the study had been tested and classified as fully operational 1 year earlier and had since been in normal use in routine clinical practice. The transducers were tested with the Sonora FirstCall Test System. There were 81 (27.1%) defective transducers found, giving a 95% confidence interval ranging from 22.1 to 32.1%. The most common transducer errors were 'delamination' of the ultrasound lens and 'break in the cable', which together constituted 82.7% of all transducer errors found. The highest error rate was found at the radiological clinics, with a mean error rate of 36.0%. There was a significant difference in error rate between two observed ways the clinics handled the transducers. There was no significant difference in the error rates of the transducer brands or the transducer models. Annual testing is not sufficient to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level, and it is strongly advisable to create a user routine that minimizes the handling of the transducers.

  11. A Comparative Study of Alternative Controls and Displays for Use by the Severely Physically Handicapped

    NASA Technical Reports Server (NTRS)

    Williams, D.; Simpson, C.; Barker, M.

    1984-01-01

    A modification of a row/column scanning system was investigated in order to increase the speed and accuracy with which communication aids can be accessed with one or two switches. A selection algorithm was developed and programmed in BASIC to automatically select individuals with the characteristic difficulty in controlling time-dependent control and display systems. Four systems were compared: (1) row/column directed scan (2 switches); (2) row/column auto scan (1 switch); (3) row auto scan (1 switch); and (4) column auto scan (1 switch). For this sample population, there were no significant differences among systems in scan time to select the correct target. The row/column auto scan system resulted in significantly more errors than any of the other three systems. Thus, the most widely prescribed system for severely physically disabled individuals turns out, for this group, to have a higher error rate and no faster communication rate than three other systems that have been considered inappropriate for this group.

  12. Residents' numeric inputting error in computerized physician order entry prescription.

    PubMed

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    A computerized physician order entry (CPOE) system with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human computer interaction (HCI), produce different error rates and types, but have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were also measured in sober (non-urgent) prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than in previous research. Among the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, posing a substantial risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. Inputting with the numeric keypad was recommended because it produced lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and of decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Evaluation of a UMLS Auditing Process of Semantic Type Assignments

    PubMed Central

    Gu, Huanying; Hripcsak, George; Chen, Yan; Morrey, C. Paul; Elhanan, Gai; Cimino, James J.; Geller, James; Perl, Yehoshua

    2007-01-01

    The UMLS is a terminological system that integrates many source terminologies. Each concept in the UMLS is assigned one or more semantic types from the Semantic Network, an upper level ontology for biomedicine. Due to the complexity of the UMLS, errors exist in the semantic type assignments. Finding assignment errors may unearth modeling errors. Even with sophisticated tools, discovering assignment errors requires manual review. In this paper we describe the evaluation of an auditing project of UMLS semantic type assignments. We studied the performance of the auditors who reviewed potential errors. We found that four auditors, interacting according to a multi-step protocol, identified a high rate of errors (one or more errors in 81% of concepts studied) and that results were sufficiently reliable (0.67 to 0.70) for the two most common types of errors. However, reliability was low for each individual auditor, suggesting that review of potential errors is resource-intensive. PMID:18693845

  14. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    PubMed

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  15. Quality in the Basic Grant Delivery System: Volume 2, Corrective Actions.

    ERIC Educational Resources Information Center

    Advanced Technology, Inc., McLean, VA.

    Alternative management procedures are recommended that may lower the rate and magnitude of errors in the award of the Basic Educational Opportunity Grants (BEOGs), or Pell Grants. The recommendations are part of the BEOG quality control project and are based on a review of current (1980-1981) levels, distribution, and significance of error in the…

  16. Factors that influence the generation of autobiographical memory conjunction errors

    PubMed Central

    Devitt, Aleea L.; Monk-Fromont, Edwin; Schacter, Daniel L.; Addis, Donna Rose

    2015-01-01

    The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory may be incorrectly incorporated into another, forming autobiographical memory conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of autobiographical memory conjunction errors. PMID:25611492

  17. Experimental investigation of observation error in anuran call surveys

    USGS Publications Warehouse

    McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.

    2010-01-01

    Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.

  18. Factors that influence the generation of autobiographical memory conjunction errors.

    PubMed

    Devitt, Aleea L; Monk-Fromont, Edwin; Schacter, Daniel L; Addis, Donna Rose

    2016-01-01

    The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory (AM) may be incorrectly incorporated into another, forming AM conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of AM conjunction errors.

  19. Homeostatic Regulation of Memory Systems and Adaptive Decisions

    PubMed Central

    Mizumori, Sheri JY; Jo, Yong Sang

    2013-01-01

    While it is clear that many brain areas process mnemonic information, understanding how their interactions result in continuously adaptive behaviors has been a challenge. A homeostatic-regulated prediction model of memory is presented that considers the existence of a single memory system that is based on a multilevel coordinated and integrated network (from cells to neural systems) that determines the extent to which events and outcomes occur as predicted. The "multiple memory systems of the brain" have in common output that signals errors in the prediction of events and/or their outcomes, although these signals differ in terms of what the error signal represents (e.g., hippocampus: context prediction errors vs. midbrain/striatum: reward prediction errors). The prefrontal cortex likely plays a pivotal role in the coordination of prediction analysis within and across prediction brain areas, by virtue of its widespread control and influence and its intrinsic working memory mechanisms. Thus, the prefrontal cortex supports the flexible processing needed to generate adaptive behaviors and predict future outcomes. It is proposed that prefrontal cortex continually and automatically produces adaptive responses according to homeostatic regulatory principles: prefrontal cortex may serve as a controller that is intrinsically driven to maintain in prediction areas an experience-dependent firing rate set point that ensures adaptive temporally and spatially resolved neural responses to future prediction errors. This same drive by prefrontal cortex may also restore set point firing rates after deviations (i.e. prediction errors) are detected. In this way, prefrontal cortex contributes to reducing uncertainty in prediction systems. An emergent outcome of this homeostatic view may be the flexible and adaptive control that prefrontal cortex is known to implement (i.e. working memory) in the most challenging of situations. Compromise to any of the prediction circuits should result in rigid and suboptimal decision making and memory as seen in addiction and neurological disease. © 2013 The Authors. Hippocampus published by Wiley Periodicals, Inc. PMID:23929788

  20. Homeostatic regulation of memory systems and adaptive decisions.

    PubMed

    Mizumori, Sheri J Y; Jo, Yong Sang

    2013-11-01

    While it is clear that many brain areas process mnemonic information, understanding how their interactions result in continuously adaptive behaviors has been a challenge. A homeostatic-regulated prediction model of memory is presented that considers the existence of a single memory system that is based on a multilevel coordinated and integrated network (from cells to neural systems) that determines the extent to which events and outcomes occur as predicted. The "multiple memory systems of the brain" have in common output that signals errors in the prediction of events and/or their outcomes, although these signals differ in terms of what the error signal represents (e.g., hippocampus: context prediction errors vs. midbrain/striatum: reward prediction errors). The prefrontal cortex likely plays a pivotal role in the coordination of prediction analysis within and across prediction brain areas, by virtue of its widespread control and influence and its intrinsic working memory mechanisms. Thus, the prefrontal cortex supports the flexible processing needed to generate adaptive behaviors and predict future outcomes. It is proposed that prefrontal cortex continually and automatically produces adaptive responses according to homeostatic regulatory principles: prefrontal cortex may serve as a controller that is intrinsically driven to maintain in prediction areas an experience-dependent firing rate set point that ensures adaptive temporally and spatially resolved neural responses to future prediction errors. This same drive by prefrontal cortex may also restore set point firing rates after deviations (i.e. prediction errors) are detected. In this way, prefrontal cortex contributes to reducing uncertainty in prediction systems. An emergent outcome of this homeostatic view may be the flexible and adaptive control that prefrontal cortex is known to implement (i.e. working memory) in the most challenging of situations. Compromise to any of the prediction circuits should result in rigid and suboptimal decision making and memory as seen in addiction and neurological disease. Copyright © 2013 Wiley Periodicals, Inc.

  1. A real-time heat strain risk classifier using heart rate and skin temperature.

    PubMed

    Buller, Mark J; Latzka, William A; Yokota, Miyo; Tharion, William J; Moran, Daniel S

    2008-12-01

    Heat injury is a real concern to workers engaged in physically demanding tasks in high heat strain environments. Several real-time physiological monitoring systems exist that can provide indices of heat strain, e.g. the physiological strain index (PSI), and provide alerts to medical personnel. However, these systems depend on core temperature measurement using expensive, ingestible thermometer pills. Seeking a better solution, we suggest the use of a model which can identify the probability that individuals are 'at risk' from heat injury using non-invasive measures. The intent is for the system to identify individuals who need closer monitoring or who should apply heat strain mitigation strategies. We generated a model that can identify 'at risk' (PSI ≥ 7.5) workers from measures of heart rate and chest skin temperature. The model was built using data from six previously published exercise studies in which some subjects wore chemical protective equipment. The model has an overall classification error rate of 10% with one false negative error (2.7%), and outperforms an earlier model and a least squares regression model with classification errors of 21% and 14%, respectively. Additionally, the model allows the classification criteria to be adjusted based on the task and acceptable level of risk. We conclude that the model could be a valuable part of a multi-faceted heat strain management system.
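
    As a hedged illustration of such a classifier (the paper's actual model is not reproduced here), the sketch below fits a logistic regression to synthetic heart-rate and skin-temperature data and applies an adjustable decision threshold, mirroring the idea that the classification criteria can be tuned to the acceptable level of risk.

        # Binary "at risk" classifier from heart rate and chest skin temperature,
        # trained on synthetic data purely to show the shape of such a model.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 500
        hr = rng.normal(120, 25, n)                   # heart rate, bpm
        tsk = rng.normal(36.0, 1.0, n)                # chest skin temperature, C
        # Synthetic label: "at risk" more likely at high HR and skin temperature
        logit = 0.08 * (hr - 130) + 1.5 * (tsk - 36.5)
        y = rng.random(n) < 1 / (1 + np.exp(-logit))

        model = LogisticRegression().fit(np.column_stack([hr, tsk]), y)
        proba = model.predict_proba([[150, 37.5]])[0, 1]
        at_risk = proba > 0.3        # threshold adjustable to the acceptable risk
        print(f"P(at risk) = {proba:.2f}, flag = {at_risk}")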

  2. In Vivo Characterization of a Wireless Telemetry Module for a Capsule Endoscopy System Utilizing a Conformal Antenna.

    PubMed

    Faerber, Julia; Cummins, Gerard; Pavuluri, Sumanth Kumar; Record, Paul; Rodriguez, Adrian R Ayastuy; Lay, Holly S; McPhillips, Rachael; Cox, Benjamin F; Connor, Ciaran; Gregson, Rachael; Clutton, Richard Eddie; Khan, Sadeque Reza; Cochran, Sandy; Desmulliez, Marc P Y

    2018-02-01

    This paper describes the design, fabrication, packaging, and performance characterization of a conformal helix antenna created on the outside of a capsule endoscope designed to operate at a carrier frequency of 433 MHz within human tissue. Wireless data transfer was established between the integrated capsule system and an external receiver. The telemetry system was tested within a tissue phantom and in vivo porcine models. Two different types of transmission modes were tested. The first mode, replicating normal operating conditions, used data packets at a steady power level of 0 dBm, while the capsule was being withdrawn at a steady rate from the small intestine. The second mode, replicating the worst-case clinical scenario of capsule retention within the small bowel, sent data with stepwise increasing power levels of -10, 0, 6, and 10 dBm, with the capsule fixed in position. The temperature of the tissue surrounding the external antenna was monitored at all times using thermistors embedded within the capsule shell to observe potential safety issues. The recorded data showed, for both modes of operation, low transmission errors, with a 10^-3 packet error rate and a 10^-5 bit error rate, and no temperature increase of the tissue according to IEEE standards.

  3. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics arising 1) from evolution of the official algorithms used to process the data, and 2) from differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  4. MBM fuel feeding system design and evaluation for FBG pilot plant.

    PubMed

    Campbell, William A; Fonstad, Terry; Pugsley, Todd; Gerspacher, Regan

    2012-06-01

    A biomass fuel feeding system has been designed, constructed and evaluated for a fluidized bed gasifier (FBG) pilot plant at the University of Saskatchewan (Saskatoon, SK, Canada). The system was designed for meat and bone meal (MBM) to be injected into the gasifier at a mass flow-rate range of 1-5 g/s. The designed system consists of two stages of screw conveyors: a metering stage, which controls the flow-rate of fuel, followed by a rotary airlock and an injection conveyor stage, which delivers that fuel at a consistent rate to the FBG. The rotary airlock, which was placed between these conveyors, proved unable to maintain a pressure seal, so the entire conveying system was sealed and pressurized. A pneumatic injection nozzle was also fabricated, tested and fitted to the end of the injection conveyor for direct injection and dispersal into the fluidized bed. The 150 mm metering screw conveyor was shown to effectively control the mass output rate of the system across a fuel output range of 1-25 g/s, while the addition of the 50 mm injection screw conveyor reduced the irregularity (error) of the system output rate from 47% to 15%. Although material plugging was found to be an issue in the inlet hopper to the injection conveyor, the addition of air sparging ports and a system to pulse air into those ports successfully eliminated this issue. The addition of the pneumatic injection nozzle reduced the output irregularity further, to 13%, with 50 slpm as the minimum air supply needed to drive this injector. After commissioning of this final system to the FBG reactor, however, the injection nozzle was found to plug with char and was subsequently removed from the system. Final operation of the reactor continues satisfactorily with the two screw conveyors operating at matching pressure with the fluidized bed, with the output rate of the system estimated from system characteristic equations and confirmed by static weight measurements made before and after testing. The error rate of this method is approximately 10%, which is slightly better than the estimated error rate of 15% for the conveyor system. The reliability of this measurement prediction method relies upon the relative consistency of the physical properties of MBM with respect to its bulk density and feeding characteristics. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.

  6. Nuclear Reaction Models Responsible for Simulation of Neutron-induced Soft Errors in Microelectronics

    NASA Astrophysics Data System (ADS)

    Watanabe, Y.; Abe, S.

    2014-06-01

    Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as the critical charge decreases. It is also found that the high-energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.

  7. Nurses' attitudes and perceived barriers to the reporting of medication administration errors.

    PubMed

    Yung, Hai-Peng; Yu, Shu; Chu, Chi; Hou, I-Ching; Tang, Fu-In

    2016-07-01

    (1) To explore the attitudes and perceived barriers to reporting medication administration errors and (2) to understand the characteristics of, and nurses' feelings about, error reports. Under-reporting of medication administration errors is a global concern related to the safety of patient care. Understanding nurses' attitudes and perceived barriers to error reporting is the first step towards increasing the reporting rate. A cross-sectional, descriptive survey with a self-administered questionnaire was completed by the nurses of a medical centre hospital in Taiwan. A total of 306 nurses participated in the study. Nurses' attitudes towards medication administration error reporting were generally positive. The major perceived barrier was fear of the consequences of reporting. The results demonstrated that 88.9% of medication administration errors were reported orally, whereas 19.0% were reported through the hospital internet system. Self-recrimination was a common feeling among nurses after committing a medication administration error. Even if hospital management encourages errors to be reported without recrimination, nurses' attitudes towards medication administration error reporting are not very positive, and fear is the most prominent barrier contributing to under-reporting. Nursing managers should establish anonymous reporting systems and counselling classes to create a secure atmosphere, reduce nurses' fear, and provide incentives to encourage reporting. © 2016 John Wiley & Sons Ltd.

  8. Errors in preparation and administration of parenteral drugs in neonatology: evaluation and corrective actions.

    PubMed

    Hasni, Nesrine; Ben Hamida, Emira; Ben Jeddou, Khouloud; Ben Hamida, Sarra; Ayadi, Imene; Ouahchi, Zeineb; Marrakchi, Zahra

    2016-12-01

    The iatrogenic risk of medication is largely unevaluated in neonatology. Objective: assessment of errors that occurred during the preparation and administration of injectable medicines in a neonatal unit, in order to implement corrective actions to reduce the occurrence of these errors. A prospective, observational study was performed in a neonatal unit over a period of one month. The practices of preparing and administering injectable medications were identified through a standardized data collection form. These practices were compared with the summary of product characteristics (RCP) of each product and with the literature. One hundred preparations of 13 different drugs were observed. 85 errors were detected during the preparation and administration steps. These errors were divided into preparation errors in 59% of cases, such as changing the dilution protocol (32%) and the use of the wrong solvent (11%), and administration errors in 41% of cases, such as errors in the timing of administration (18%) or omission of administration (9%). This study showed a high rate of errors during the preparation and administration of injectable drugs. In order to optimize the care of newborns and reduce the risk of medication errors, corrective actions were implemented through the establishment of a quality assurance system, which consisted of the development of procedures for the preparation of injectable drugs, the introduction of a labeling system, and staff training.

  9. Flight test results of the strapdown ring laser gyro tetrad inertial navigation system

    NASA Technical Reports Server (NTRS)

    Carestia, R. A.; Hruby, R. J.; Bjorkman, W. S.

    1983-01-01

    A helicopter flight test program undertaken to evaluate the performance of Tetrad (a strapdown, laser gyro, inertial navigation system) is described. The results of 34 flights show a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n. mi., with a standard deviation of 1.48 n. mi.; and a modeled mean position error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. No laser gyro or accelerometer failures were detected during the flight tests. Off-line parity residual studies used simulated failures with the prerecorded flight test and laboratory test data. The airborne Tetrad system's failure-detection logic, exercised during the tests, successfully demonstrated the detection of simulated "hard" failures and the system's ability to continue navigating successfully by removing the simulated faulted sensor from the computations. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the 4 years of the test program, and no sensor failures were detected during the evaluation of free inertial navigation performance.

  10. Performance of MIMO-OFDM using convolution codes with QAM modulation

    NASA Astrophysics Data System (ADS)

    Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa

    2014-04-01

    The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one option is a convolutional code. This paper presents the performance of OFDM using the Space-Time Block Code (STBC) diversity technique and QAM modulation with code rate 1/2. The evaluation is done by analyzing the Bit Error Rate (BER) versus the energy-per-bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in the OFDM system. To achieve a BER of 10^-3, an SNR of 10 dB is required in the SISO-OFDM scheme. The 2×2 MIMO-OFDM scheme requires 10 dB to achieve a BER of 10^-3. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding the convolutional code to the 4×4 MIMO-OFDM scheme improves performance to 0 dB for the same BER. This demonstrates a power saving of 3 dB over the 4×4 MIMO-OFDM system without coding, a saving of 7 dB over 2×2 MIMO-OFDM, and a significant power saving over the SISO-OFDM system.
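
    For readers unfamiliar with how BER-versus-Eb/No curves like these are obtained, the sketch below runs a bare-bones Monte Carlo simulation for uncoded QPSK over AWGN; it does not reproduce the paper's MIMO-OFDM chain, convolutional coding, or Rayleigh fading, and the bit count is an illustrative choice.

        # Monte Carlo BER estimation for Gray-mapped QPSK over AWGN.
        import numpy as np

        def qpsk_ber(ebno_db, n_bits=200_000, rng=None):
            rng = rng or np.random.default_rng(0)
            bits = rng.integers(0, 2, n_bits)
            # One bit per quadrature, unit average symbol energy
            sym = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
            ebno = 10 ** (ebno_db / 10)
            sigma = np.sqrt(1 / (4 * ebno))   # per-dimension noise std (2 bits/symbol)
            rx = sym + sigma * (rng.standard_normal(sym.shape)
                                + 1j * rng.standard_normal(sym.shape))
            bits_hat = np.empty_like(bits)
            bits_hat[0::2] = (rx.real > 0).astype(int)
            bits_hat[1::2] = (rx.imag > 0).astype(int)
            return np.mean(bits_hat != bits)

        for db in (0, 4, 8, 10):
            print(db, "dB ->", qpsk_ber(db))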

  11. Effects of amplitude distortions and IF equalization on satellite communication system bit-error rate performance

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.

    1990-01-01

    Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error-rate (BER) degradation resulting from several types of amplitude response distortions was measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory-simulated satellite channel. The results of these experiments are presented.

  12. Differentially coherent quadrature-quadrature phase shift keying (Q2PSK)

    NASA Astrophysics Data System (ADS)

    Saha, Debabrata; El-Ghandour, Osama

    The quadrature-quadrature phase-shift-keying (Q2PSK) signaling scheme uses the vertices of a hypercube of dimension four. A generalized Q2PSK signaling format for differentially coherent detection at the receiver is considered. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. The symbol error rate is found to be approximately twice the symbol error rate of a quaternary DPSK system operating at the same Eb/No. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK.

  13. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

  14. Frequency and Severity of Parenteral Nutrition Medication Errors at a Large Children's Hospital After Implementation of Electronic Ordering and Compounding.

    PubMed

    MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery

    2016-04-01

    The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication that has the potential to cause harm. Three organizations--the American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), the American Society of Health-System Pharmacists, and the National Advisory Group--have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance with the guidelines and the risk of errors. The purpose of this article is to report total compliance with ordering, transcription, compounding, and administration guidelines, and the associated error rate, at a large pediatric institution. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft- and hard-stop recommendations while simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with the A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with published literature on error and harm rates and cost reductions to determine whether our process showed lower error rates than national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors per 84,503 PN prescriptions, or 0.27%, compared with national data reporting that 74 of 4730 prescriptions (1.6%) over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process: prescribing, transcription, preparation, and administration. There were no transcription errors, and most (95%) errors occurred during administration. We conclude that the meaningful cost reduction and the lower error rate (2.7/1000 PN, versus 15.6/1000 PN reported in the literature) can be ascribed to the development and implementation of practices that conform to national PN guidelines and recommendations. Electronic ordering and compounding programs eliminated all transcription and related opportunities for errors. © 2015 American Society for Parenteral and Enteral Nutrition.

  15. Optical communications and a comparison of optical technologies for a high data rate return link from Mars

    NASA Technical Reports Server (NTRS)

    Spence, Rodney L.

    1993-01-01

    The important principles of direct- and heterodyne-detection optical free-space communications are reviewed. Signal-to-noise-ratio (SNR) and bit-error-rate (BER) expressions are derived for both the direct-detection and heterodyne-detection optical receivers. For the heterodyne system, performance degradation resulting from received-signal and local-oscillator-beam misalignment and laser phase noise is analyzed. Determination of interfering background power from local and extended background sources is discussed. The BER performance of direct- and heterodyne-detection optical links in the presence of Rayleigh-distributed random pointing and tracking errors is described. Finally, several optical systems employing Nd:YAG, GaAs, and CO2 laser sources are evaluated and compared to assess their feasibility in providing high-data-rate (10- to 1000-Mbps) Mars-to-Earth communications. It is shown that the root mean square (rms) pointing and tracking accuracy is a critical factor in defining the system transmitting laser-power requirements and telescope size and that, for a given rms error, there is an optimum telescope aperture size that minimizes the required power. The results of the analysis conducted indicate that, barring the achievement of extremely small rms pointing and tracking errors (less than 0.2 microrad), the two most promising types of optical systems are those that use an Nd:YAG laser (lambda = 1.064 microns) with high-order pulse position modulation (PPM) and direct detection, and those that use a CO2 laser (lambda = 10.6 microns) with phase-shift-keying homodyne modulation and coherent detection. For example, for a PPM order of M = 64 and an rms pointing accuracy of 0.4 microrad, an Nd:YAG system can be used to implement a 100-Mbps Mars link with a 40-cm transmitting telescope, a 20-W laser, and a 10-m receiving photon bucket. Under the same conditions, a CO2 system would require 3-m transmitting and receiving telescopes and a 32-W laser to implement such a link. Other types of optical systems, such as semiconductor laser systems, are impractical in the presence of large rms pointing errors because of the high power requirements of the 100-Mbps Mars link, even when optimal-size telescopes are used.

  16. Droplet-counting Microtitration System for Precise On-site Analysis.

    PubMed

    Kawakubo, Susumu; Omori, Taichi; Suzuki, Yasutada; Ueta, Ikuo

    2018-01-01

    A new microtitration system based on counting titrant droplets has been developed for precise on-site analysis. The dropping rate was controlled by inserting a capillary tube as a flow resistance in a laboratory-made micropipette. The titration error was 3% in a simulated titration with 20 droplets. Pre-addition of titrant was proposed to achieve precise titration within an error of 0.5%. The analytical performance was evaluated for chelate titration, redox titration, and acid-base titration.
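
    A back-of-the-envelope sketch of why pre-addition helps: the counting resolution of roughly half a droplet (an assumed value, not from the paper) is spread over a larger total titrant volume, shrinking the relative error; the droplet counts below are illustrative.

        # Relative titration error from droplet-counting resolution.
        def relative_error(pre_added_drops, counted_drops, resolution_drops=0.5):
            total = pre_added_drops + counted_drops
            return resolution_drops / total

        print(relative_error(0, 20))    # ~2.5% with 20 counted droplets alone
        print(relative_error(180, 20))  # ~0.25% when 90% of titrant is pre-added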

  17. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable, fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also a specialty.
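
    A bare-bones sketch of Monte Carlo reliability estimation with Weibull (non-constant rate) failure times, in the spirit of the approach described above but without MC-HARP's variance-reduction techniques or fault/error-handling models; the series-system structure and the parameters are illustrative assumptions.

        # Monte Carlo estimate of mission reliability for a series system
        # whose components have Weibull-distributed failure times.
        import numpy as np

        def series_system_reliability(t_mission, shape, scale, n_components=3,
                                      n_trials=100_000, seed=0):
            rng = np.random.default_rng(seed)
            # Weibull failure times for every component in every trial
            times = scale * rng.weibull(shape, size=(n_trials, n_components))
            system_fail = times.min(axis=1)           # series: first failure kills it
            return np.mean(system_fail > t_mission)   # fraction surviving the mission

        print(series_system_reliability(t_mission=100.0, shape=1.5, scale=1000.0))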

  18. Development of a Work Control System for Propulsion Testing at NASA Stennis

    NASA Technical Reports Server (NTRS)

    Messer, Elizabeth A.

    2005-01-01

    This paper will explain the requirements and steps taken to develop the current Propulsion Test Directorate electronic work control system for Test Operations. The PTD Work Control System includes work authorization and technical instruction documents, such as test preparation sheets, discrepancy reports, test requests, and pre-test briefing reports, as well as other tools supporting test operations. The environment that existed in the E-Complex test areas in the late 1990s was one of enormous growth, which brought people of diverse backgrounds together for the sole purpose of testing propulsion hardware. The problem that faced us was that these newly formed teams did not have a consistent and clearly understood method for writing, performing, or verifying work. A paper system was developed that allowed the teams to use the same forms, but this still presented problems in the large number of errors occurring, such as lost paperwork and inconsistent implementation. In a sampling of errors in August 1999, the paper work control system encountered 250 errors out of 230 documents released and completed, for an error rate of 111%.

  19. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
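
    A toy sketch of voxel-wise likelihood-ratio evidence under simple Gaussian assumptions (known variance, mean 0 under H0 and delta under H1); the effect size delta and the evidence threshold k are illustrative choices (k = 32 is a conventional strong-evidence benchmark in the likelihood paradigm), and the paper's framework is considerably richer.

        # Voxel-wise likelihood ratios for N(delta,1) vs N(0,1).
        import numpy as np

        def likelihood_ratios(voxel_data, delta=0.5, k=32.0):
            """voxel_data: (n_voxels, n_timepoints) array. Returns LRs and flags."""
            n = voxel_data.shape[1]
            xbar = voxel_data.mean(axis=1)
            # log LR for the sample mean of n points under the two hypotheses
            log_lr = n * (xbar * delta - delta ** 2 / 2)
            lr = np.exp(log_lr)
            return lr, lr >= k                    # strong evidence for H1 when LR >= k

        rng = np.random.default_rng(0)
        data = rng.standard_normal((1000, 50))
        data[:20] += 0.5                          # 20 truly active voxels
        lr, flagged = likelihood_ratios(data)
        print(flagged[:20].sum(), flagged[20:].sum())  # hits vs false flags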

  20. SITE project. Phase 1: Continuous data bit-error-rate testing

    NASA Technical Reports Server (NTRS)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-01-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  1. Randomised trial comparing the recording ability of a novel, electronic emergency documentation system with the AHA paper cardiac arrest record.

    PubMed

    Grigg, Eliot; Palmer, Andrew; Grigg, Jeffrey; Oppenheimer, Peter; Wu, Tim; Roesler, Axel; Nair, Bala; Ross, Brian

    2014-10-01

    To evaluate the ability of an electronic system created at the University of Washington to accurately document prerecorded VF and pulseless electrical activity (PEA) cardiac arrest scenarios compared with the American Heart Association paper cardiac arrest record. 16 anaesthesiology residents were randomly assigned to view one of two prerecorded, simulated VF and PEA scenarios and asked to document the event with either the paper or the electronic system. Each subject then repeated the process with the other video and documentation method. Five types of documentation errors were defined: (1) omission, (2) specification, (3) timing, (4) commission and (5) noise. The mean difference in errors between the paper and electronic methods was analysed using a single factor repeated measures ANOVA model. Compared with paper records, the electronic system omitted 6.3 fewer events (95% CI -10.1 to -2.5, p=0.003), which represents a 28% reduction in omission errors. Users recorded 2.9 fewer noise items (95% CI -5.3 to -0.6, p=0.003) when compared with paper, representing a 36% decrease in redundant or irrelevant information. The rates of timing (Δ=-3.2, 95% CI -9.3 to 3.0, p=0.286) and commission (Δ=-4.4, 95% CI -9.4 to 0.5, p=0.075) errors were similar between the electronic system and paper, while the rate of specification errors was about a third lower for the electronic system when compared with the paper record (Δ=-3.2, 95% CI -6.3 to -0.2, p=0.037). Compared with paper documentation, documentation with the electronic system captured 24% more critical information during a simulated medical emergency without loss in data quality. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  2. Using Gaussian mixture models to detect and classify dolphin whistles and pulses.

    PubMed

    Peso Parada, Pablo; Cardenal-López, Antonio

    2014-06-01

    In recent years, a number of automatic detection systems for free-ranging cetaceans have been proposed that aim to detect not just surfaced, but also submerged, individuals. These systems are typically based on pattern-recognition techniques applied to underwater acoustic recordings. Using a Gaussian mixture model, a classification system was developed that detects sounds in recordings and classifies them as one of four types: background noise, whistles, pulses, and combined whistles and pulses. The classifier was tested using a database of underwater recordings made off the Spanish coast during 2011. Using cepstral-coefficient-based parameterization, a sound detection rate of 87.5% was achieved with a 23.6% classification error rate. To improve these results, two parameters computed using the multiple signal classification algorithm and an unpredictability measure were included in the classifier. These parameters, which helped to classify the segments containing whistles, increased the detection rate to 90.3% and reduced the classification error rate to 18.1%. Finally, the potential of the multiple signal classification algorithm and unpredictability measure for estimating whistle contours and classifying cetacean species was also explored, with promising results.
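
    A hedged sketch of the classification stage: one Gaussian mixture per class fitted to feature vectors, with each segment assigned to the class whose mixture yields the highest average log-likelihood. The cepstral feature extraction and the MUSIC/unpredictability parameters are not shown, and the class names, component count, and toy features below are illustrative stand-ins.

        # One GMM per class; classify a segment by maximum average log-likelihood.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        CLASSES = ["noise", "whistle", "pulse", "whistle+pulse"]

        def train_gmms(features_by_class, n_components=4):
            return {c: GaussianMixture(n_components, covariance_type="diag",
                                       random_state=0).fit(X)
                    for c, X in features_by_class.items()}

        def classify(models, segment_features):
            # Average log-likelihood of the segment's frames under each class model
            scores = {c: m.score(segment_features) for c, m in models.items()}
            return max(scores, key=scores.get)

        # Toy usage with random "cepstral" features per class
        rng = np.random.default_rng(0)
        train = {c: rng.normal(i, 1.0, (200, 12)) for i, c in enumerate(CLASSES)}
        models = train_gmms(train)
        print(classify(models, rng.normal(1, 1.0, (30, 12))))  # expect "whistle"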

  3. Achieving High Reliability in Histology: An Improvement Series to Reduce Errors.

    PubMed

    Heher, Yael K; Chen, Yigu; Pyatibrat, Sergey; Yoon, Edward; Goldsmith, Jeffrey D; Sands, Kenneth E

    2016-11-01

    Despite sweeping medical advances in other fields, histology processes have by and large remained constant over the past 175 years. Patient label identification errors are a known liability in the laboratory and can be devastating, resulting in incorrect diagnoses and inappropriate treatment. The objective of this study was to identify vulnerable steps in the histology workflow and reduce the frequency of labeling errors (LEs). Over the 36-month study period, a numerical step key (SK) was developed to capture LEs. The two most prevalent root causes were targeted for Lean workflow redesign: manual slide printing and microtome cutting. The numbers and rates of LEs before and after the interventions were compared to evaluate their effectiveness. Following the adoption of a barcode-enabled laboratory information system, the error rate decreased from a baseline of 1.03% (794 errors in 76,958 cases) to 0.28% (107 errors in 37,880 cases). After the implementation of an innovative ice tool box, allowing single-piece workflow for histology microtome cutting, the rate came down to 0.22% (119 errors in 54,342 cases). The study pointed out the importance of tracking and understanding LEs by using a simple numerical SK and quantified the effectiveness of two customized Lean interventions. Overall, a 78.64% reduction in LEs and a 35.28% reduction in time spent on rework have been observed since the study began.
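
    As a quick plausibility check outside the paper's own methods, a two-proportion z-test on the reported baseline and post-LIS counts confirms the drop is far outside chance:

    ```python
    from math import sqrt
    from scipy.stats import norm

    e1, n1 = 794, 76_958   # baseline: errors / cases
    e2, n2 = 107, 37_880   # after barcode-enabled LIS

    p1, p2 = e1 / n1, e2 / n2
    p_pool = (e1 + e2) / (n1 + n2)
    z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    p_value = 2 * norm.sf(abs(z))
    print(f"{p1:.4%} -> {p2:.4%}, z = {z:.1f}, p = {p_value:.2g}")
    ```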

  4. SU-F-T-251: The Quality Assurance for the Heavy Patient Load Department in the Developing Country: The Primary Experience of An Entire Workflow QA Process Management in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, J; Wang, J; Peng, J

    Purpose: To implement an entire-workflow quality assurance (QA) process in the radiotherapy department and to reduce the error rates of radiotherapy based on entire-workflow management in a developing country. Methods: The entire-workflow QA process runs from patient registration to the end of the last treatment, covering all steps of the radiotherapy process. The chart-check error rate is used to evaluate the process. Two to three qualified senior medical physicists checked the documents before the first treatment fraction of every patient. Random checks of the treatment history during treatment were also performed. Treatment data for a total of around 6,000 patients before and after implementing the entire-workflow QA process were compared, from May 2014 to December 2015. Results: A systematic checklist was established. It mainly includes the patient's registration, treatment plan QA, information export to the OIS (Oncology Information System), documentation of treatment QA, and QA of the treatment history. The error rate derived from the chart check decreased from 1.7% to 0.9% after the entire-workflow QA process was introduced. All errors found before the first treatment fraction were corrected as soon as the oncologist re-confirmed them, and reinforced staff training followed to prevent those errors. Conclusion: The entire-workflow QA process improved the safety and quality of radiotherapy in our department, and we consider that our QA experience is applicable to heavily loaded radiotherapy departments in developing countries.

  5. UAS Well Clear Recovery Against Non-Cooperative Intruders Using Vertical Maneuvers

    NASA Technical Reports Server (NTRS)

    Cone, Andrew C.; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2017-01-01

    This paper documents a study that drove the development of a mathematical expression in the detect-and-avoid (DAA) minimum operational performance standards (MOPS) for unmanned aircraft systems (UAS). This equation describes the conditions under which vertical maneuver guidance should be provided during recovery of DAA well clear separation with a non-cooperative VFR aircraft. Although the original hypothesis was that vertical maneuvers for DAA well clear recovery should only be offered when sensor vertical rate errors are small, this paper suggests that UAS climb and descent performance should be considered, in addition to sensor errors for vertical position and vertical rate, when determining whether to offer vertical guidance. A fast-time simulation study involving 108,000 encounters between a UAS and a non-cooperative visual-flight-rules aircraft was conducted. Results are presented showing that, when vertical maneuver guidance for DAA well clear recovery was suppressed, the minimum vertical separation increased by roughly 50 feet (or horizontal separation by 500 to 800 feet). However, the percentage of encounters that had a risk of collision when performing vertical well clear recovery maneuvers was reduced as UAS vertical rate performance increased and sensor vertical rate errors decreased. A class of encounter is identified for which vertical-rate error had a large effect on the efficacy of horizontal maneuvers due to the difficulty of making the correct left/right turn decision: a crossing conflict with the intruder changing altitude. Overall, these results support logic that would allow vertical maneuvers when UAS vertical performance is sufficient to avoid the intruder, based on the intruder's estimated vertical position and vertical rate, as well as the vertical rate error of the UAS' sensor.

  6. Force Analysis and Energy Operation of Chaotic System of Permanent-Magnet Synchronous Motor

    NASA Astrophysics Data System (ADS)

    Qi, Guoyuan; Hu, Jianbing

    2017-12-01

    The disadvantage of a nondimensionalized model of a permanent-magnet synchronous motor (PMSM) is identified. The original PMSM model is transformed into a Kolmogorov system to aid dynamic force analysis. The vector field of the PMSM is analogous to a force field including four types of torque: inertial, internal, dissipative, and generalized external. From a feedback viewpoint, the error torque between the external torque and the dissipative torque is identified. The pitchfork bifurcation of the PMSM is analyzed. Four forms of energy are identified for the system: kinetic, potential, dissipative, and supplied. Physical interpretations of the decomposition of force and of the energy exchange are given. The Casimir energy is the stored energy, and its rate of change is the error power between the dissipative energy and the energy supplied to the motor. Error torque and error power influence the different types of dynamic modes. The Hamiltonian energy and Casimir energy are compared to find the role of each in producing the dynamic modes. A supremum bound for the chaotic attractor is proposed using the error power and a Lagrange multiplier.

  7. When Rating Systems Do Not Rate: Evaluating ERA's Performance

    ERIC Educational Resources Information Center

    Henman, Paul; Brown, Scott D.; Dennis, Simon

    2017-01-01

    In 2015, the Australian Government's Excellence in Research for Australia (ERA) assessment of research quality declined to rate 1.5 per cent of submissions from universities. The public debate focused on practices of gaming or "coding errors" within university submissions as the reason for this outcome. The issue was about the…

  8. A nudging data assimilation algorithm for the identification of groundwater pumping

    NASA Astrophysics Data System (ADS)

    Cheng, Wei-Chen; Kendall, Donald R.; Putti, Mario; Yeh, William W.-G.

    2009-08-01

    This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measured data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies the unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.
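
    The nudging idea, treating the unknown pumping as an extra sink that is relaxed so as to shrink the head-forecast error, can be sketched on a single-cell water balance. The gains, units and twin-experiment setup below are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    S, dt = 0.1, 1.0        # storage coefficient and time step (assumed units)
    G_h, G_q = 0.5, 0.05    # nudging gains for head and pumping (tuning choices)

    def estimate_pumping(h_obs, recharge):
        h, q = h_obs[0], 0.0                  # initial head from data; pumping unknown
        for k in range(1, len(h_obs)):
            h = h + dt * (recharge - q) / S   # forecast: dh/dt = (R - q) / S
            err = h_obs[k] - h                # head-forecast error
            h += G_h * err                    # nudge head toward the observation
            q -= G_q * S * err / dt           # nudge pumping to reduce the error
        return q

    # Synthetic twin experiment: constant true pumping of 2.0 (arbitrary units).
    n, q_true, recharge = 200, 2.0, 0.5
    h_obs = 10.0 + (dt * (recharge - q_true) / S) * np.arange(n)
    print(f"estimated pumping: {estimate_pumping(h_obs, recharge):.2f} (true {q_true})")
    ```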

  9. A nudging data assimilation algorithm for the identification of groundwater pumping

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Kendall, D. R.; Putti, M.; Yeh, W. W.

    2008-12-01

    This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measurement data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study, we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.

  10. Assessing the accuracy and feasibility of a refractive error screening program conducted by school teachers in pre-primary and primary schools in Thailand.

    PubMed

    Teerawattananon, Kanlaya; Myint, Chaw-Yin; Wongkittirux, Kwanjai; Teerawattananon, Yot; Chinkulkitnivat, Bunyong; Orprayoon, Surapong; Kusakul, Suwat; Tengtrisorn, Supaporn; Jenchitr, Watanee

    2014-01-01

    As part of the development of a system for the screening of refractive error in Thai children, this study describes the accuracy and feasibility of establishing a program conducted by teachers. To assess the accuracy and feasibility of screening by teachers, a cross-sectional descriptive and analytical study was conducted in 17 schools in four provinces representing the four geographic regions of Thailand. Two-stage cluster sampling was employed to compare the detection rate of refractive error among eligible students between trained teachers and health professionals. Serial focus group discussions were held with teachers and parents in order to understand their attitudes toward refractive error screening at schools and the potential success factors and barriers. The detection rate of refractive error screening by teachers among pre-primary school children is relatively low for mild visual impairment (21%) but higher for moderate visual impairment (44%). The detection rate for primary school children is high for both levels of visual impairment (52% for mild and 74% for moderate). The focus group discussions reveal that both teachers and parents would benefit from further education regarding refractive errors and that the vast majority of teachers are willing to conduct a school-based screening program. Refractive error screening by health professionals in pre-primary and primary school children is not currently implemented in Thailand due to resource limitations. However, the evidence suggests that a school-based refractive error screening program conducted by teachers is reasonable and feasible, because the detection and treatment of refractive error in very young generations is important and the program can be implemented and run at relatively low cost.

  11. Analysis of space telescope data collection system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    An analysis of the expected performance of the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed. A mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.

  12. Synthesis and analysis of precise spaceborne laser ranging systems, volume 1. [link analysis

    NASA Technical Reports Server (NTRS)

    Paddon, E. A.

    1977-01-01

    Measurement accuracy goals of 2 cm rms range estimation error and 0.003 cm/sec rms range rate estimation error, with no more than 1 cm (range) static bias error, are requirements for laser measurement systems to be used in planned space-based earth physics investigations. Constraints and parameters were defined for links between a high-altitude transmit/receive satellite (HATRS) and one of three targets: a passive low-altitude target satellite (LATS), an active low-altitude target, and a ground-based target, as well as for operations with a primary transmit/receive terminal intended to be carried as a shuttle payload in conjunction with the Spacelab program.

  13. Spaceflight Ka-Band High-Rate Radiation-Hard Modulator

    NASA Technical Reports Server (NTRS)

    Jaso, Jeffery M.

    2011-01-01

    A document discusses the creation of a Ka-band modulator developed specifically for the NASA/GSFC Solar Dynamics Observatory (SDO). This flight design consists of a high-bandwidth, Quadriphase Shift Keying (QPSK) vector modulator with radiation-hardened, high-rate driver circuitry that receives I and Q channel data. The radiation-hard design enables SDO's Ka-band communications downlink system to transmit 130 Mbps (300 Msps after data encoding) of science instrument data to the ground system continuously throughout the mission's minimum life of five years. The low error vector magnitude (EVM) of the modulator lowers the implementation loss of the transmitter in which it is used, thereby increasing the overall communication system link margin. The modulator is a component within the SDO transmitter, and meets the following specifications over a 0 to 40 °C operational temperature range: QPSK/OQPSK modulation, 300-Msps symbol rate, 26.5-GHz center frequency, error vector magnitude less than or equal to 10 percent rms, and compliance with the NTIA (National Telecommunications and Information Administration) spectral mask.

  14. Nursing home levels of care: reimbursement of resident specific costs.

    PubMed

    Willemain, T R

    1980-01-01

    The companion paper on nursing home levels of care (Bishop, Plough and Willemain, 1980) recommended a "split-rate" approach to nursing home reimbursement that would distinguish between fixed and variable costs. This paper examines three alternative treatments of the variable cost component of the rate: a two-level system similar to the distinction between skilled and intermediate care facilities, an individualized ("patient-centered") system, and a system that assigns a single facility-specific rate that depends on the facility's case-mix ("case-mix reimbursement"). The aim is to better understand the theoretical strengths and weaknesses of these three approaches. The comparison of reimbursement alternatives is framed in terms of minimizing reimbursement error, meaning overpayment and underpayment. We develop a conceptual model of reimbursement error that stresses that the features of the reimbursement scheme are only some of the factors contributing to over- and underpayment. The conceptual model is translated into a computer program for quantitative comparison of the alternatives.

  15. High-speed phosphor-LED wireless communication system utilizing no blue filter

    NASA Astrophysics Data System (ADS)

    Yeh, C. H.; Chow, C. W.; Chen, H. Y.; Chen, J.; Liu, Y. L.; Wu, Y. F.

    2014-09-01

    In this paper, we propose and investigate an adaptive 84.44 to 190 Mb/s phosphor-LED visible light communication (VLC) system at a practical transmission distance. We utilize orthogonal-frequency-division-multiplexing quadrature-amplitude-modulation (OFDM-QAM) with a power/bit-loading algorithm in the proposed VLC system. In the experiment, an optimal analog pre-equalization design is applied at the LED-Tx side, and no blue filter is used at the Rx side, extending the modulation bandwidth from 1 MHz to 30 MHz. The corresponding free-space transmission lengths are between 75 cm and 2 m under the various data rates of the proposed VLC system, and measured bit error rates (BERs) below 3.8×10⁻³ [the forward error correction (FEC) limit] are obtained at the different transmission lengths and data rates. Finally, we believe the proposed scheme could be an alternative VLC implementation at practical distances, supporting around 100 Mb/s using a commercially available LED and PD (without optical blue filtering) and of compact size.

  16. On the capacity of MIMO-OFDM based diversity and spatial multiplexing in Radio-over-Fiber system

    NASA Astrophysics Data System (ADS)

    El Yahyaoui, Moussa; El Moussati, Ali; El Zein, Ghaïs

    2017-11-01

    This paper proposes a realistic, end-to-end simulation to predict the behavior of a Radio over Fiber (RoF) system before its realization. We consider a 2 × 2 Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) RoF system at 60 GHz. The system is based on Spatial Diversity (SD), which increases reliability (decreases the probability of error), and Spatial Multiplexing (SMX), which increases the data rate, but not necessarily reliability. The 60 GHz MIMO channel model employed in this work is based on extensive measured data and statistical analysis and is named the Triple-S and Valenzuela (TSV) model. To the authors' best knowledge, this is the first time that this type of TSV channel model has been employed for a 60 GHz MIMO-RoF system. We evaluated and compared the performance of this system according to the diversity technique, modulation schemes, and channel coding rate for a Line-Of-Sight (LOS) desktop environment. Coded SMX is proposed as an intermediate system to improve the Signal to Noise Ratio (SNR) and the data rate. The resulting 2 × 2 MIMO-OFDM SMX system achieves a data rate up to 70 Gb/s with 64QAM at the Forward Error Correction (FEC) limit of 10⁻³ over 25-km fiber transmission followed by 3-m wireless transmission using 7 GHz of bandwidth in the millimeter wave band.

  17. A statistical study of radio-source structure effects on astrometric very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.

    1989-01-01

    Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported, in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 x 10(exp -15) sec/sec (or approx. 0.002 mm/sec) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.

  18. Channel Modeling

    NASA Astrophysics Data System (ADS)

    Schmitz, Arne; Schinnenburg, Marc; Gross, James; Aguiar, Ana

    For any communication system the Signal-to-Interference-plus-Noise-Ratio (SINR) of the link is a fundamental metric. Recall (cf. Chapter 9) that the SINR is defined as the ratio between the received power of the signal of interest and the sum of all "disturbing" power sources (i.e. interference and noise). From information theory it is known that a higher SINR increases the maximum possible error-free transmission rate (referred to as the Shannon capacity [417]) of any communication system, and vice versa. Likewise, the higher the SINR, the lower the bit error rate will be in practical systems. While one aspect of the SINR is the sum of all disturbing power sources, another issue is the received power. This depends on the transmitted power, the antennas used, possibly on signal processing techniques and ultimately on the channel gain between transmitter and receiver.
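
    Both SINR relations cited here can be made concrete in a few lines. The sketch below treats the SINR as Eb/N0 for the BPSK bit-error-rate curve, which is a simplification for illustration only.

    ```python
    import math

    def shannon_capacity(bandwidth_hz, sinr_linear):
        """Maximum error-free rate in bit/s: C = B * log2(1 + SINR)."""
        return bandwidth_hz * math.log2(1 + sinr_linear)

    def bpsk_ber(ebn0_linear):
        """BPSK over AWGN: Pb = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
        return 0.5 * math.erfc(math.sqrt(ebn0_linear))

    for db in (0, 5, 10, 15, 20):
        lin = 10 ** (db / 10)
        print(f"{db:>3} dB: {shannon_capacity(1.0, lin):5.2f} bit/s/Hz, "
              f"BPSK BER = {bpsk_ber(lin):.2e}")
    ```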

  19. Neural Network and Letter Recognition.

    NASA Astrophysics Data System (ADS)

    Lee, Hue Yeon

    Neural net architectures and learning algorithms that recognize 36 handwritten alphanumeric characters are studied. Thin-line input patterns written on a 32 x 32 binary array are used. The system comprises two major components, viz. a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlation between the detected local features is considered. By correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some of the deformation tolerance and some of the rotational tolerance of the system. The C-layer then compresses data through the 'Gabor' transform. Pattern-dependent choice of the centers and wavelengths of the 'Gabor' filters is the source of the shift and scale tolerance of the system. Three different learning schemes were investigated in the recognition unit, namely: error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing and the perceptron as the recognition unit of the ambiguity resolver. One hundred different persons' handwriting sets were collected. Some of these are used as training sets and the remainder as test sets. The correct recognition rate of the system increases with the number of training sets and eventually saturates at a certain value. Similar recognition rates are obtained for the three learning algorithms above. The minimum error rate, 4.9%, is achieved for alphanumeric sets when 50 sets are trained. With the ambiguity resolver, it is reduced to 2.5%. When only numeral sets are trained and tested, a 2.0% error rate is achieved. When only alphabet sets are considered, the error rate is reduced to 1.1%.
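
    Of the three learning schemes, the simple perceptron is compact enough to sketch. The fragment below applies the classic mistake-driven update rule to toy binary patterns rather than the thesis's 32 x 32 character data.

    ```python
    import numpy as np

    def train_perceptron(X, y, epochs=100, lr=1.0):
        """X: (n, d) inputs in {0,1}; y: (n,) labels in {-1,+1}."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            mistakes = 0
            for x, target in zip(X, y):
                pred = 1 if x @ w + b > 0 else -1
                if pred != target:            # update only on mistakes
                    w += lr * target * x
                    b += lr * target
                    mistakes += 1
            if mistakes == 0:                 # converged (linearly separable)
                break
        return w, b

    # Toy "letters": four 2x2 patterns flattened to 4 bits.
    X = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
    y = np.array([1, -1, 1, -1])
    w, b = train_perceptron(X, y)
    print([1 if x @ w + b > 0 else -1 for x in X])  # expect [1, -1, 1, -1]
    ```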

  20. The advanced receiver 2: Telemetry test results in CTA 21

    NASA Technical Reports Server (NTRS)

    Hinedi, S.; Bevan, R.; Marina, M.

    1991-01-01

    Telemetry tests with the Advanced Receiver II (ARX II) in Compatibility Test Area 21 are described. The ARX II was operated in parallel with a Block III receiver/baseband processor assembly combination (BLK-III/BPA) and a Block III receiver/subcarrier demodulation assembly/symbol synchronization assembly combination (BLK-III/SDA/SSA). The telemetry simulator assembly provided the test signal for all three configurations, and the symbol signal-to-noise ratio as well as the symbol error rates were measured and compared. Furthermore, bit error rates were also measured by the system performance test computer for all three systems. Results indicate that the ARX-II telemetry performance is comparable and sometimes superior to that of the BLK-III/BPA and BLK-III/SDA/SSA combinations.

  1. Arduino-based noise robust online heart-rate detection.

    PubMed

    Das, Sangita; Pal, Saurabh; Mitra, Madhuchhanda

    2017-04-01

    This paper introduces a noise-robust system for real-time detection of heart rate from electrocardiogram (ECG) data. An online data acquisition system is developed to collect ECG signals from human subjects. Heart rate is detected using a window-based autocorrelation peak localisation technique. A low-cost Arduino UNO board is used to implement the complete automated process. The performance of the system is compared with a PC-based heart rate detection technique. Accuracy of the system is validated using simulated noisy ECG data with various levels of signal-to-noise ratio (SNR). The mean percentage error of the detected heart rate is found to be 0.72% for the noisy database with five different noise levels.
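
    A minimal version of window-based autocorrelation heart-rate detection might look like the following. The synthetic waveform, sampling rate and plausibility bounds are illustrative choices, not the paper's parameters.

    ```python
    import numpy as np

    def heart_rate_bpm(ecg, fs, min_bpm=40, max_bpm=200):
        """Estimate heart rate in one window via the autocorrelation peak."""
        x = ecg - np.mean(ecg)
        acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
        lo = int(fs * 60 / max_bpm)   # smallest plausible beat-to-beat lag
        hi = int(fs * 60 / min_bpm)   # largest plausible lag
        lag = lo + int(np.argmax(acf[lo:hi]))
        return 60.0 * fs / lag

    # Synthetic quasi-periodic "ECG" at 72 bpm, 250 Hz sampling, with noise.
    fs, bpm = 250, 72
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(1)
    ecg = np.sin(2 * np.pi * (bpm / 60) * t) ** 15 + 0.1 * rng.standard_normal(t.size)
    print(f"estimated: {heart_rate_bpm(ecg, fs):.1f} bpm (true {bpm})")
    ```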

  2. Validating Emergency Department Vital Signs Using a Data Quality Engine for Data Warehouse

    PubMed Central

    Genes, N; Chandra, D; Ellis, S; Baumlin, K

    2013-01-01

    Background: Vital signs in our emergency department information system were entered into free-text fields for heart rate, respiratory rate, blood pressure, temperature and oxygen saturation. Objective: We sought to convert these text entries into a more useful form, for research and QA purposes, upon entry into a data warehouse. Methods: We derived a series of rules and assigned quality scores to the transformed values, conforming to physiologic parameters for vital signs across the age range and spectrum of illness seen in the emergency department. Results: Validating these entries revealed that 98% of free-text data had perfect quality scores, conforming to established vital sign parameters. Average vital signs varied as expected by age. Degradations in quality scores were most commonly attributed to logging temperature in Fahrenheit instead of Celsius; vital signs with this error could still be transformed for use. Errors occurred more frequently during periods of high triage, though error rates did not correlate with triage volume. Conclusions: In developing a method for importing free-text vital sign data from our emergency department information system, we now have a data warehouse with a broad array of quality-checked vital signs, permitting analysis and correlation with demographics and outcomes. PMID:24403981

  3. Validating emergency department vital signs using a data quality engine for data warehouse.

    PubMed

    Genes, N; Chandra, D; Ellis, S; Baumlin, K

    2013-01-01

    Vital signs in our emergency department information system were entered into free-text fields for heart rate, respiratory rate, blood pressure, temperature and oxygen saturation. We sought to convert these text entries into a more useful form, for research and QA purposes, upon entry into a data warehouse. We derived a series of rules and assigned quality scores to the transformed values, conforming to physiologic parameters for vital signs across the age range and spectrum of illness seen in the emergency department. Validating these entries revealed that 98% of free-text data had perfect quality scores, conforming to established vital sign parameters. Average vital signs varied as expected by age. Degradations in quality scores were most commonly attributed to logging temperature in Fahrenheit instead of Celsius; vital signs with this error could still be transformed for use. Errors occurred more frequently during periods of high triage, though error rates did not correlate with triage volume. In developing a method for importing free-text vital sign data from our emergency department information system, we now have a data warehouse with a broad array of quality-checked vital signs, permitting analysis and correlation with demographics and outcomes.
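
    The rule-and-score idea described in both records above, validating a free-text vital sign against physiologic bounds and rescuing the common Fahrenheit-for-Celsius slip, might be sketched as follows; the bounds and score values are invented for illustration, not the paper's actual rules.

    ```python
    TEMP_C_RANGE = (30.0, 43.0)   # plausible core temperature in Celsius

    def validate_temperature(raw: str):
        """Return (value_celsius, quality_score), or (None, 0) if unusable."""
        try:
            value = float(raw.strip())
        except ValueError:
            return None, 0                        # non-numeric free text
        if TEMP_C_RANGE[0] <= value <= TEMP_C_RANGE[1]:
            return value, 100                     # perfect quality
        fahrenheit_as_c = (value - 32) * 5 / 9    # assume F logged instead of C
        if TEMP_C_RANGE[0] <= fahrenheit_as_c <= TEMP_C_RANGE[1]:
            return round(fahrenheit_as_c, 1), 80  # usable, degraded score
        return None, 0                            # physiologically impossible

    for entry in ["37.2", "98.6", "abc", "452"]:
        print(entry, "->", validate_temperature(entry))
    ```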

  4. Emergency Multiengine Aircraft System for Lateral Control Using Differential Thrust Control of Wing Engines

    NASA Technical Reports Server (NTRS)

    Burken, John J. (Inventor); Burcham, Frank W., Jr. (Inventor); Bull, John (Inventor)

    2000-01-01

    An emergency flight control system for lateral control of multiengine aircraft using only differential engine thrust modulation is disclosed. The aircraft has at least two engines laterally displaced to the left and right of its axis. A heading angle command psi(sub c) is to be tracked. By continually sensing the heading angle psi of the aircraft and computing a heading error signal psi(sub e) as a function of the difference between the heading angle command psi(sub c) and the sensed heading angle psi, a track control signal is developed, with compensation as a function of sensed bank angle phi, bank angle rate phi-dot (or roll rate p), yaw rate, and true velocity, to produce an aircraft thrust control signal ATC(sub psi(L,R)). The thrust control signal is applied differentially to the left and right engines, with equal amplitude and opposite sign, such that a negative sign is applied on the side of the aircraft toward which a turn is required to reduce the error signal, until the heading feedback reduces the error to zero.
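
    The shape of such a control law, heading error with bank-angle and yaw-rate damping mapped to equal-and-opposite thrust increments, can be sketched as below. The gains and sign conventions are illustrative, not values from the patent.

    ```python
    def differential_thrust(psi_cmd, psi, phi, yaw_rate,
                            k_psi=1.0, k_phi=0.5, k_r=0.8):
        """Return (left_delta, right_delta) thrust increments about trim.
        A positive heading error (commanded right of current) raises left
        thrust and lowers right thrust, yawing the aircraft to the right.
        Gains are illustrative tuning parameters, not the patent's values."""
        psi_err = psi_cmd - psi
        # Damping on bank angle and yaw rate opposes overshoot in the turn.
        atc = k_psi * psi_err - k_phi * phi - k_r * yaw_rate
        return +atc, -atc   # equal amplitude, opposite sign

    # Commanded heading 10 deg right of current: left engine up, right down.
    print(differential_thrust(psi_cmd=10.0, psi=0.0, phi=0.0, yaw_rate=0.0))
    ```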

  5. Evaluation of voice codecs for the Australian mobile satellite system

    NASA Technical Reports Server (NTRS)

    Bundrock, Tony; Wilkinson, Mal

    1990-01-01

    The evaluation procedure used to choose a low bit rate voice coding algorithm for the Australian land mobile satellite system is described. The procedure is designed to assess both the inherent quality of a codec under 'normal' conditions and its robustness under 'severe' conditions. For the assessment, the normal conditions were chosen to be a random bit error rate with added background acoustic noise, and the severe condition is designed to represent the burst error conditions that arise when the mobile satellite channel suffers signal fading due to roadside vegetation. The assessment is divided into two phases. First, a reduced set of conditions is used to determine a short list of candidate codecs for more extensive testing in the second phase. The first-phase conditions cover quality and robustness, and codecs are ranked with a 60:40 weighting on the two. Second, the short-listed codecs are assessed over a range of input voice levels, BERs, background noise conditions, and burst error distributions. Assessment is by subjective rating on a five-level opinion scale, and all results are then used to derive a weighted Mean Opinion Score using appropriate weights for each of the test conditions.

  6. Simple Sample Preparation Method for Direct Microbial Identification and Susceptibility Testing From Positive Blood Cultures.

    PubMed

    Pan, Hong-Wei; Li, Wei; Li, Rong-Guo; Li, Yong; Zhang, Yi; Sun, En-Hua

    2018-01-01

    Rapid identification and determination of the antibiotic susceptibility profiles of the infectious agents in patients with bloodstream infections are critical steps in choosing an effective targeted antibiotic for treatment. However, there has been minimal effort focused on developing combined methods for the simultaneous direct identification and antibiotic susceptibility determination of bacteria in positive blood cultures. In this study, we constructed a lysis-centrifugation-wash procedure to prepare a bacterial pellet from positive blood cultures, which can be used directly for identification by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) and antibiotic susceptibility testing by the Vitek 2 system. The method was evaluated using a total of 129 clinical bacteria-positive blood cultures. The whole sample preparation process could be completed in <15 min. The rate of correct direct MALDI-TOF MS identification was 96.49% for gram-negative bacteria and 97.22% for gram-positive bacteria. Vitek 2 antimicrobial susceptibility testing of gram-negative bacteria showed an antimicrobial category agreement rate of 96.89%, with minor error, major error, and very major error rates of 2.63%, 0.24%, and 0.24%, respectively. Category agreement of antimicrobials against gram-positive bacteria was 92.81%, with minor error, major error, and very major error rates of 4.51%, 1.22%, and 1.46%, respectively. These results indicate that our direct antibiotic susceptibility analysis method works well compared to the conventional culture-dependent laboratory method. Overall, this fast, easy, and accurate method can facilitate the direct identification and antibiotic susceptibility testing of bacteria in positive blood cultures.

  7. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  8. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  9. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  10. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  11. Global minimum profile error (GMPE) - a least-squares-based approach for extracting macroscopic rate coefficients for complex gas-phase chemical reactions.

    PubMed

    Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K

    2018-01-03

    Master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) theory has been shown to be a powerful framework for modeling the kinetic and dynamic behaviors of a complex gas-phase chemical system on a complicated multiple-species, multiple-channel potential energy surface (PES) over a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications, including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such a complicated system. The capability and limitations of the new approach were discussed in several well-defined test cases.
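
    The least-squares extraction idea can be illustrated on the simplest possible case, fitting a first-order decay profile to recover a single rate coefficient; the real GMPE operates on full ME-derived multi-species, multi-channel profiles.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order_decay(t, k, c0):
        """Reactant profile for A -> products with phenomenological rate k."""
        return c0 * np.exp(-k * t)

    t = np.linspace(0.0, 5e-3, 200)                 # seconds
    k_true = 1.2e3                                  # s^-1
    profile = first_order_decay(t, k_true, 1.0)
    profile += np.random.default_rng(1).normal(0.0, 0.005, t.size)  # noise

    (k_fit, c0_fit), _ = curve_fit(first_order_decay, t, profile, p0=[1e3, 1.0])
    print(f"fitted k = {k_fit:.3g} s^-1 (true {k_true:.3g} s^-1)")
    ```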

  12. Entangled quantum key distribution over two free-space optical links.

    PubMed

    Erven, C; Couteau, C; Laflamme, R; Weihs, G

    2008-10-13

    We report on the first real-time implementation of a quantum key distribution (QKD) system using entangled photon pairs that are sent over two free-space optical telescope links. The entangled photon pairs are produced with a type-II spontaneous parametric down-conversion source placed in a central, potentially untrusted, location. The two free-space links cover distances of 435 m and 1,325 m respectively, producing a total separation of 1,575 m. The system relies on passive polarization analysis units, GPS timing receivers for synchronization, and custom-written software to perform the complete QKD protocol, including error correction and privacy amplification. Over 6.5 hours during the night, we observed an average raw key generation rate of 565 bits/s, an average quantum bit error rate (QBER) of 4.92%, and an average secure key generation rate of 85 bits/s.
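
    As a back-of-envelope check (ignoring finite-size effects and error-correction inefficiency, which the real system must handle), the asymptotic BB84-type secure fraction r = 1 - 2*h2(Q) applied to the reported QBER still leaves a positive key rate; the observed 85 bits/s is lower than this idealised figure, as expected.

    ```python
    from math import log2

    def h2(p):
        """Binary entropy function."""
        return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

    qber = 0.0492            # average QBER reported in the abstract
    raw_rate = 565           # bits/s of raw key
    secure_fraction = max(0.0, 1 - 2 * h2(qber))
    print(f"secure fraction ~ {secure_fraction:.2f} -> "
          f"~{raw_rate * secure_fraction:.0f} bits/s (reported: 85 bits/s)")
    ```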

  13. Chaos-on-a-chip secures data transmission in optical fiber links.

    PubMed

    Argyris, Apostolos; Grivas, Evangellos; Hamacher, Michael; Bogris, Adonis; Syvridis, Dimitris

    2010-03-01

    Security in information exchange plays a central role in the deployment of modern communication systems. Besides algorithms, chaos is exploited as a real-time, high-speed data encryption technique that enhances security at the hardware level of optical networks. In this work, compact, fully controllable and stably operating monolithic photonic integrated circuits (PICs) that generate broadband chaotic optical signals are incorporated in chaos-encoded optical transmission systems. Data sequences with rates up to 2.5 Gb/s and small amplitudes are completely encrypted within these chaotic carriers. Only authorized counterparts, supplied with identical chaos-generating PICs that are able to synchronize and reproduce the same carriers, can benefit from data exchange with bit rates up to 2.5 Gb/s and error rates below 10(-12). Eavesdroppers with access to the communication link have a 0.5 probability of correctly detecting each bit by direct signal detection, while eavesdroppers supplied with even slightly unmatched hardware receivers are restricted to data extraction error rates well above 10(-3).

  14. An educational and audit tool to reduce prescribing error in intensive care.

    PubMed

    Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D

    2008-10-01

    To reduce prescribing errors in an intensive care unit, prescriber education was provided in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle: once pretraining, once post-training, and a final audit after 6 weeks. The audit information was fed back to prescribers, giving their correct prescribing rates, rates for individual error types and total error rates, together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19% (one missing data point); post-training 23%, 6%, 11%; final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one, cycle two: range of prescriptions written 1-61, median 18; error rate 0-100%, median 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.

  15. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    PubMed

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analysing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the preanalytical, analytical and postanalytical phases was analysed. Improvement strategies were discussed in the monthly intradepartmental meetings, and control of the units with high error rates was provided. Fifty-six (52.4%) of the 107 recorded errors were at the pre-analytical phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, a decrease of 79.77%. The Six Sigma trial in our pathology laboratory achieved a reduction of the error rates mainly in the pre-analytical and analytical phases.
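
    The errors-per-million and sigma-level arithmetic underlying such studies is easy to reproduce. The sketch below plugs in the abstract's 6.8 and 1.3 per-million rates and uses the conventional 1.5-sigma shift.

    ```python
    from scipy.stats import norm

    def sigma_level(dpmo):
        """Short-term sigma level with the conventional 1.5-sigma shift."""
        return norm.isf(dpmo / 1_000_000) + 1.5

    for label, rate in [("first half", 6.8), ("second half", 1.3)]:
        print(f"{label}: {rate} DPMO -> sigma level {sigma_level(rate):.2f}")
    ```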

  16. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by them. We also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
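
    One plausible concrete form of such a model (a sketch under assumed predictors and synthetic data, not JPL's actual model) is a Poisson regression of error counts on workload variables, with the number of files radiated as an exposure offset.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 60
    files_radiated = rng.integers(5, 50, n).astype(float)  # hypothetical predictor
    workload = rng.uniform(0, 1, n)                        # subjective estimates
    novelty = rng.uniform(0, 1, n)

    # Synthetic "truth": error intensity grows with workload and novelty.
    lam = 0.02 * files_radiated * np.exp(1.0 * workload + 0.5 * novelty)
    errors = rng.poisson(lam)

    X = sm.add_constant(np.column_stack([workload, novelty]))
    res = sm.GLM(errors, X, family=sm.families.Poisson(),
                 offset=np.log(files_radiated)).fit()
    print(res.params)   # intercept plus the workload and novelty coefficients
    ```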

  17. Medication errors room: a simulation to assess the medical, nursing and pharmacy staffs' ability to identify errors related to the medication-use system.

    PubMed

    Daupin, Johanne; Atkinson, Suzanne; Bédard, Pascal; Pelchat, Véronique; Lebel, Denis; Bussières, Jean-François

    2016-12-01

    The medication-use system in hospitals is very complex. To improve health professionals' awareness of the risks of errors related to the medication-use system, a simulation of medication errors was created. The main objective was to assess the medical, nursing and pharmacy staffs' ability to identify errors related to the medication-use system using a simulation. The secondary objective was to assess their level of satisfaction. This descriptive cross-sectional study was conducted in a 500-bed mother-and-child university hospital. A multidisciplinary group set up 30 situations and replicated a patient room and a care unit pharmacy. All hospital staff, including nurses, physicians, pharmacists and pharmacy technicians, were invited. Participants had to detect whether a situation contained an error and fill out a response grid. They also answered a satisfaction survey. The simulation was held over 100 hours. A total of 230 professionals visited the simulation, 207 handed in a response grid and 136 answered the satisfaction survey. The participants' overall rate of correct answers was 67.5% ± 13.3% (4073/6036). Among the least detected errors were situations involving a Y-site infusion incompatibility, an oral syringe preparation and the patient's identification. Participants mainly considered the simulation effective in identifying incorrect practices (132/136, 97.8%) and relevant to their practice (129/136, 95.6%). Most of them (114/136; 84.4%) intended to change their practices in view of their exposure to the simulation. We implemented a realistic medication-use system errors simulation in a mother-and-child hospital, with a wide audience. This simulation was an effective, relevant and innovative tool for raising health care professionals' awareness of critical processes.

  18. Experimental results of 5-Gbps free-space coherent optical communications with adaptive optics

    NASA Astrophysics Data System (ADS)

    Chen, Mo; Liu, Chao; Rui, Daoman; Xian, Hao

    2018-07-01

    In a free-space optical communication system with fiber optical components, the received signal beam must be coupled into a single-mode fiber (SMF) before being amplified and detected. Analysis of the impact of tracking errors and wavefront distortion on SMF coupling shows that, under relatively strong turbulence, compensating tracking errors alone is not enough; the turbulence-induced wavefront aberration must also be corrected. Based on our previous study and design of an SMF coupling system with a 137-element continuous-surface deformable-mirror AO unit, we perform an experiment on a 5-Gbps free-space coherent optical communication (FSCOC) system, in which the eye pattern and bit error rate (BER) are displayed. The comparative results show that the influence of the atmosphere is fatal in FSCOC systems: the BER of coherent communication is under 10⁻⁶ with AO compensation, dropping significantly compared with the BER without AO correction.

  19. Knowledge of healthcare professionals about medication errors in hospitals

    PubMed Central

    Abdel-Latif, Mohamed M. M.

    2016-01-01

    Context: Medication errors are the most common type of medical error in hospitals and a leading cause of morbidity and mortality among patients. Aims: The aim of the present study was to assess the knowledge of healthcare professionals about medication errors in hospitals. Settings and Design: A self-administered questionnaire was distributed to randomly selected healthcare professionals in eight hospitals in Madinah, Saudi Arabia. Subjects and Methods: An 18-item survey was designed, comprising questions on demographic data, knowledge of medication errors, availability of reporting systems in hospitals, attitudes toward error reporting, and causes of medication errors. Statistical Analysis Used: Data were analyzed with Statistical Package for the Social Sciences software Version 17. Results: A total of 323 healthcare professionals completed the questionnaire (a 64.6% response rate): 138 (42.72%) physicians, 34 (10.53%) pharmacists, and 151 (46.75%) nurses. A majority of the participants had good knowledge of the medication error concept and its dangers to patients. Only 68.7% of them were aware of reporting systems in hospitals. Healthcare professionals revealed that there was no clear mechanism available for reporting errors in most hospitals. Prescribing (46.5%) and administration (29%) errors were the main causes of errors. The most frequently encountered medication errors involved antihypertensives, antidiabetics, antibiotics, digoxin, and insulin. Conclusions: This study revealed differences in awareness among healthcare professionals toward medication errors in hospitals. The poor knowledge about medication errors emphasizes the urgent necessity to adopt appropriate measures to raise awareness about medication errors in Saudi hospitals. PMID:27330261

  20. A compact presentation of DSN array telemetry performance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1982-01-01

    The telemetry performance of an arrayed receiver system, including radio losses, is often given by a family of curves giving bit error rate vs bit SNR, with tracking loop SNR at one receiver held constant along each curve. This study shows how to process this information into a more compact, useful format in which the minimal total signal power and optimal carrier suppression, for a given fixed bit error rate, are plotted vs data rate. Examples for baseband-only combining are given. When appropriate dimensionless variables are used for plotting, receiver arrays with different numbers of antennas and different threshold tracking loop bandwidths look much alike, and a universal curve for optimal carrier suppression emerges.

  1. Nuclear Reaction Models Responsible for Simulation of Neutron-induced Soft Errors in Microelectronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, Y., E-mail: watanabe@aees.kyushu-u.ac.jp; Abe, S.

    Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. The nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high-energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.

  2. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

    The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; identification enables resequencing across differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit, which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes 1/3 less time than equivalent end-to-end error correction under similar conditions.

  3. Errors of car wheels rotation rate measurement using roller follower on test benches

    NASA Astrophysics Data System (ADS)

    Potapov, A. S.; Svirbutovich, O. A.; Krivtsov, S. N.

    2018-03-01

    The article deals with wheel rotation rate measurement errors on roller test benches, which depend on the vehicle speed. Monitoring of vehicle performance under operating conditions is performed on roller test benches, which are not flawless: they have drawbacks affecting the accuracy of vehicle performance monitoring. An increase in the base velocity of the vehicle requires an increase in the accuracy of wheel rotation rate monitoring, which determines the accuracy of mode identification for a wheel of the tested vehicle. Ensuring measurement accuracy for the rotation velocity of the rollers is not an issue; the problem arises when measuring the rotation velocity of a car wheel. The higher the rotation velocity of the wheel, the lower the accuracy of measurement. At present, wheel rotation frequency monitoring on roller test benches is carried out by follow-up systems whose sensors are rollers following wheel rotation. The rollers of the system are not kinematically linked to the supporting rollers of the test bench; the roller follower is forced against the wheels of the tested vehicle by means of a spring-lever mechanism. Experience with the test bench equipment has shown that measurement accuracy is satisfactory at the low speeds of vehicles diagnosed on roller test benches. With rising diagnostic speed, rotation velocity measurement errors occur in both braking and pulling modes because the follower roller slips on the tire tread. The paper shows oscillograms of changes in wheel rotation velocity and of the rotation velocity measurement system's signals when testing a vehicle on roller test benches at specified speeds.

  4. Error Recovery in the Time-Triggered Paradigm with FTT-CAN.

    PubMed

    Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís

    2018-01-11

    Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.

  5. Error Recovery in the Time-Triggered Paradigm with FTT-CAN

    PubMed Central

    Pedreiras, Paulo; Almeida, Luís

    2018-01-01

    Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots. PMID:29324723
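
    The server-sizing step in the two records above, meeting a reliability goal against Poisson fault arrivals, can be sketched as choosing the smallest retransmission budget whose overflow probability stays below the target. The fault rate, window length and target below are illustrative, not values from the paper.

    ```python
    from scipy.stats import poisson

    def retransmission_budget(fault_rate_hz, window_s, max_failure_prob):
        """Smallest k with P(more than k faults per window) below the target."""
        lam = fault_rate_hz * window_s    # expected faults per window
        k = 0
        # P(N > k) = poisson.sf(k, lam); grow k until residual risk is acceptable.
        while poisson.sf(k, lam) > max_failure_prob:
            k += 1
        return k

    # E.g. 0.1 faults/s, 10 ms windows, one in 1e9 windows allowed to overflow.
    k = retransmission_budget(0.1, 0.010, 1e-9)
    print(f"budget: {k} retransmission slot(s) per window")
    ```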

  6. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
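
    To make the burst and gap statistics concrete: if the read channel's byte-level output is represented as a sequence of error flags, the run-length histograms the project describes can be accumulated as in the sketch below. This is our own illustration; the flag encoding (1 = byte in error, 0 = good byte) is an assumption.

    ```python
    from itertools import groupby

    def burst_and_gap_stats(error_flags):
        """Histogram of error-burst lengths and good-data-gap lengths from a
        byte-level error-flag sequence (1 = byte in error, 0 = good byte)."""
        bursts, gaps = {}, {}
        for flag, run in groupby(error_flags):
            length = sum(1 for _ in run)
            target = bursts if flag else gaps
            target[length] = target.get(length, 0) + 1
        return bursts, gaps

    # Toy sequence: two single-byte bursts and one two-byte burst
    flags = [0, 0, 1, 0, 0, 0, 1, 1, 0, 1]
    print(burst_and_gap_stats(flags))
    ```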

  7. Fault detection and isolation in motion monitoring system.

    PubMed

    Kim, Duk-Jin; Suk, Myoung Hoon; Prabhakaran, B

    2012-01-01

    Pervasive computing has become a very active research field. A watch that traces human movement can record motion boundaries and, through a person's localized visiting areas, support the study of social-life patterns. Pervasive computing also helps patient monitoring: a daily monitoring system enables longitudinal studies of conditions such as Alzheimer's disease, Parkinson's disease, or obesity. Due to the nature of the monitoring sensors (on-body wireless sensors), however, signal noise or faulty-sensor errors can be present at any time. Many research works have addressed these problems, but only with a large number of deployed sensors. In this paper, we present faulty-sensor detection and isolation using only two on-body sensors. We investigate three different types of sensor errors: the SHORT error, the CONSTANT error, and the NOISY SENSOR error (see section V for details). Our experimental results show that the success rate of isolating faulty signals averages over 91.5% for fault type 1, over 92% for fault type 2, and over 99% for fault type 3, with a fault prior of 30% sensor errors.
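
    Since the abstract names the three fault types but defers their definitions to section V, the sketch below shows one plausible rule-based classifier for them. The thresholds and windowed-variance logic are our assumptions for illustration, not the paper's detectors.

    ```python
    import numpy as np

    def classify_sensor_fault(window, spike_thresh=5.0, var_floor=1e-6,
                              noise_factor=4.0, baseline_var=1.0):
        """Rule-of-thumb classifier for the three fault types named in the
        abstract (all thresholds are illustrative, not from the paper)."""
        diffs = np.abs(np.diff(window))
        if diffs.max() > spike_thresh and np.median(diffs) < spike_thresh / 10:
            return "SHORT"          # isolated jump in an otherwise smooth signal
        if window.var() < var_floor:
            return "CONSTANT"       # stuck-at value: (near-)zero variance
        if window.var() > noise_factor * baseline_var:
            return "NOISY SENSOR"   # variance well above a calibrated baseline
        return "OK"

    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, 200)
    stuck = np.full(200, 2.5)
    print(classify_sensor_fault(normal), classify_sensor_fault(stuck))
    ```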

  8. Portable and Error-Free DNA-Based Data Storage.

    PubMed

    Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica

    2017-07-10

    DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.

  9. Analyzing Effect of System Inertia on Grid Frequency Forecasting Using Two Stage Neuro-Fuzzy System

    NASA Astrophysics Data System (ADS)

    Chourey, Divyansh R.; Gupta, Himanshu; Kumar, Amit; Kumar, Jitesh; Kumar, Anand; Mishra, Anup

    2018-04-01

    Frequency forecasting is an important aspect of power system operation. The system frequency varies with the load-generation imbalance, and the variation depends on various parameters, including system inertia. System inertia determines the rate of fall of frequency after a disturbance in the grid. However, system inertia is usually not considered when forecasting power system frequency during planning and operation, which leads to significant forecasting errors. In this paper, the effect of inertia on frequency forecasting is analysed for a particular grid system, and a parameter equivalent to system inertia is introduced. This parameter is used to forecast the frequency of a typical power grid for any instant of time. The system gives appreciable results with reduced error.
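
    The physical link between inertia and the post-disturbance rate of fall of frequency is conventionally captured by the swing equation; a simplified textbook form (our addition, not taken from the paper) is:

    ```latex
    % Simplified swing equation: initial rate of change of frequency (RoCoF)
    % after a sudden load-generation imbalance \Delta P:
    \left.\frac{\mathrm{d}f}{\mathrm{d}t}\right|_{t=0^{+}}
      = -\,\frac{\Delta P \, f_{0}}{2 H S_{B}}
    % f_0: nominal frequency, H: inertia constant (s),
    % S_B: rated apparent power of the system.
    ```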

  10. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    PubMed

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the numbers of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
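
    The hypothesis test reported above compares error proportions between two groups of concepts. The abstract does not name the statistical test used, so the sketch below uses Fisher's exact test on purely illustrative counts, merely to show the shape of such a comparison.

    ```python
    from scipy.stats import fisher_exact

    # Hypothetical counts: errors found / concepts audited in each group.
    complex_errors, complex_total = 30, 100
    simple_errors, simple_total = 12, 100

    table = [[complex_errors, complex_total - complex_errors],
             [simple_errors, simple_total - simple_errors]]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.4f}")
    ```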

  11. Development of an air flow thermal balance calorimeter

    NASA Technical Reports Server (NTRS)

    Sherfey, J. M.

    1972-01-01

    An air flow calorimeter, based on the idea of balancing an unknown rate of heat evolution with a known rate of heat evolution, was developed. Under restricted conditions, the prototype system is capable of measuring thermal wattages from 10 milliwatts to 1 watt, with an error no greater than 1 percent. Data were obtained which reveal system weaknesses and point to modifications which would effect significant improvements.

  12. Historical shoreline mapping (II): Application of the Digital Shoreline Mapping and Analysis Systems (DSMS/DSAS) to shoreline change mapping in Puerto Rico

    USGS Publications Warehouse

    Thieler, E. Robert; Danforth, William W.

    1994-01-01

    A new, state-of-the-art method for mapping historical shorelines from maps and aerial photographs, the Digital Shoreline Mapping System (DSMS), has been developed. The DSMS is a freely available, public domain software package that meets the cartographic and photogrammetric requirements of precise coastal mapping, and provides a means to quantify and analyze different sources of error in the mapping process. The DSMS is also capable of resolving imperfections in aerial photography that commonly are assumed to be nonexistent. The DSMS utilizes commonly available computer hardware and software, and permits the entire shoreline mapping process to be executed rapidly by a single person in a small lab. The DSMS generates output shoreline position data that are compatible with a variety of Geographic Information Systems (GIS). A second suite of programs, the Digital Shoreline Analysis System (DSAS), has been developed to calculate shoreline rates-of-change from a series of shoreline data residing in a GIS. Four rate-of-change statistics are calculated simultaneously (end-point rate, average of rates, linear regression and jackknife) at a user-specified interval along the shoreline using a measurement baseline approach. An example of DSMS and DSAS application using historical maps and air photos of Punta Uvero, Puerto Rico provides a basis for assessing the errors associated with the source materials as well as the accuracy of computed shoreline positions and erosion rates. The maps and photos used here represent a common situation in shoreline mapping: marginal-quality source materials. The maps and photos are near the usable upper limit of scale and accuracy, yet the shoreline positions are still accurate to within ±9.25 m when all sources of error are considered. This level of accuracy yields a resolution of ±0.51 m/yr for shoreline rates-of-change in this example, and is sufficient to identify the short-term trend (36 years) of shoreline change in the study area.
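
    Of the four rate-of-change statistics DSAS computes, the two simplest can be sketched directly; the average-of-rates and jackknife statistics are omitted, and the transect data below are hypothetical.

    ```python
    import numpy as np

    def shoreline_change_rates(years, positions_m):
        """End-point rate and linear-regression rate (m/yr) from dated
        shoreline positions measured along one transect from a baseline."""
        years = np.asarray(years, dtype=float)
        positions_m = np.asarray(positions_m, dtype=float)
        end_point_rate = (positions_m[-1] - positions_m[0]) / (years[-1] - years[0])
        slope, _intercept = np.polyfit(years, positions_m, 1)  # regression rate
        return end_point_rate, slope

    # Hypothetical transect: positions relative to a measurement baseline
    print(shoreline_change_rates([1936, 1951, 1964, 1972],
                                 [120.0, 112.5, 106.0, 101.5]))
    ```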

  13. Da Vinci robot error and failure rates: single institution experience on a single three-arm robot unit of more than 700 consecutive robot-assisted laparoscopic radical prostatectomies.

    PubMed

    Zorn, Kevin C; Gofrit, Ofer N; Orvieto, Marcelo A; Mikhail, Albert A; Galocy, R Matthew; Shalhav, Arieh L; Zagaja, Gregory P

    2007-11-01

    Previous reports have suggested that a 2% to 5% device failure rate (FR) be quoted when counseling patients about robot-assisted laparoscopic radical prostatectomy (RLRP). We sought to evaluate our FR on the da Vinci system. Since February 2003, more than 800 RLRPs have been performed at our institution using a single three-armed robotic unit. A prospective database was analyzed to determine the device FR and whether it resulted in case abortion or open conversion. Intuitive Surgical Systems provided data concerning the system's performance, including its fault rate. Error messages were classified as recoverable and non-recoverable faults. Between February 2003 and November 2006, 725 RLRP cases were available for evaluation. There were no intraoperative device failures that resulted in a case conversion. Technical errors resulting in surgeon handicap occurred in 3 cases (0.4%). Four patients (0.5%) had their procedures aborted secondary to system failure at initial set-up prior to patient entrance to the operating room. Data analysis retrieved from the da Vinci console reported on a total of 807 procedures since 2003. Only 4 cases (0.4%) were reported from the Intuitive Surgical database to result in either an aborted or a converted case, which compares favorably with our results. Since the last computer system upgrade (September 2005), the mean recoverable and non-recoverable fault rates per procedure were 0.21 and 0.05, respectively. For all the advanced features the da Vinci system offers, it is surprisingly reliable. Throughout our RLRP experience, device failure resulted in case conversion, procedure abortion, and surgeon handicap in 0%, 0.5%, and 0.4% of procedures, respectively. As such, a lowered device FR of 0.5% should be used when counseling patients undergoing RLRP. To avoid futile general anesthesia, a policy should be enforced to ensure that the da Vinci system is completely set up before the patient enters the operating room.

  14. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel

    NASA Astrophysics Data System (ADS)

    Fonseca, Gabriel P.; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R.; Lutgens, Ludy; Vanneste, Ben G. L.; Voncken, Robert; Van Limbergen, Evert J.; Reniers, Brigitte; Verhaegen, Frank

    2017-07-01

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector, so 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear with Sk; therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, it provides considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  15. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel.

    PubMed

    Fonseca, Gabriel P; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R; Lutgens, Ludy; Vanneste, Ben G L; Voncken, Robert; Van Limbergen, Evert J; Reniers, Brigitte; Verhaegen, Frank

    2017-07-07

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector, so 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear with Sk; therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, it provides considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  16. Medical Error Avoidance in Intraoperative Neurophysiological Monitoring: The Communication Imperative.

    PubMed

    Skinner, Stan; Holdefer, Robert; McAuliffe, John J; Sala, Francesco

    2017-11-01

    Error avoidance in medicine follows similar rules that apply within the design and operation of other complex systems. The error-reduction concepts that best fit the conduct of testing during intraoperative neuromonitoring are forgiving design (reversibility of signal loss to avoid/prevent injury) and system redundancy (reduction of false reports by the multiplication of the error rate of tests independently assessing the same structure). However, error reduction in intraoperative neuromonitoring is complicated by the dichotomous roles (and biases) of the neurophysiologist (test recording and interpretation) and surgeon (intervention). This "interventional cascade" can be given as follows: test → interpretation → communication → intervention → outcome. Observational and controlled trials within operating rooms demonstrate that optimized communication, collaboration, and situational awareness result in fewer errors. Well-functioning operating room collaboration depends on familiarity and trust among colleagues. Checklists represent one method to initially enhance communication and avoid obvious errors. All intraoperative neuromonitoring supervisors should strive to use sufficient means to secure situational awareness and trusted communication/collaboration. Face-to-face audiovisual teleconnections may help repair deficiencies when a particular practice model disallows personal operating room availability. All supervising intraoperative neurophysiologists need to reject an insular or deferential or distant mindset.

  17. Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.

    PubMed

    Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D

    2017-06-01

    The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.

  18. Design of fiber optic based respiratory sensor for newborn incubator application

    NASA Astrophysics Data System (ADS)

    Dhia, Arika; Devara, Kresna; Abuzairi, Tomy; Poespawati, N. R.; Purnamaningsih, Retno W.

    2018-02-01

    This paper reports the design of a fiber optic respiratory sensor for newborn incubator application. The sensor works by detecting differences in light intensity losses caused by thorax movement during respiration. The sensor output is fed to supporting electronic circuits and processed by an Arduino Uno microcontroller, so that the real-time respiratory rate (breaths per minute) can be presented on an LCD. Experimental results using the thorax expansion of a newborn simulator show that the system is able to measure respiratory rates from 10 up to 130 breaths per minute with 0.595% error and 0.2% hysteresis error.

  19. Gravimetric system using high-speed double switching valves for low liquid flow rates

    NASA Astrophysics Data System (ADS)

    Cheong, Kar-Hooi; Doihara, Ryouji; Shimada, Takashi; Terao, Yoshiya

    2018-07-01

    This paper presents a gravimetric system developed to perform the static weighing with flying-start-and-stop (SW-FSS) calibration method at low liquid flow rates using a pair of identical high-speed switching valves as a flow diverter. Features of the gravimetric system comprise three main components: a pair of switching valves that divert the working liquid between two symmetrical flow paths; a weighing vessel equipped with an overflow inner vessel and enclosed in a weighing chamber; and a liquid discharge mechanism comprising a discharge tube and a discharge pump, used with a multi-purpose bin. These are described with an explanation of the design considerations behind each feature. The overflow inner vessel is designed with a notch in its wall and is positioned so that it does not come into contact with the liquid surface of the accumulated liquid in the weighing vessel or the side wall of the weighing vessel to obtain a good repeatability of the interactive effects between the feeding tube and the submerging working liquid, thus ensuring a correct mass reading of the liquid collection. A performance test showed that, in terms of contribution to the overall uncertainty of the standard flow rate, the pair of switching valves is capable of performing SW-FSS satisfactorily with small relative timing errors within %. However, the mass loss due to evaporation is considered a major source of error of the gravimetric system, showing a maximum error of 0.011% under the most evaporative condition tested for the longest liquid collection time of the gravimetric system.

  20. Detection of pointing errors with CMOS-based camera in intersatellite optical communications

    NASA Astrophysics Data System (ADS)

    Yu, Si-yuan; Ma, Jing; Tan, Li-ying

    2005-01-01

    For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. The acquisition and tracking problem is critical because of the narrow transmit beam. In some systems a single array detector performs both spatial acquisition and tracking functions to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can employ the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a specified portion of the array, and the maximum allowed frame rate increases as the size of the area of interest decreases under certain conditions. A commercially available CMOS camera with 105 fps @ 640×480 is employed in our PAT simulation system, in which only a subset of the pixels is actually used. Beam angles varying within the field of view can be detected after passing through a Cassegrain telescope and an optical focusing system. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via an IEEE 1394 bus, and pointing errors can be computed with the centroid equation. Tests showed that: (1) 500 fps @ 100×100 is available in acquisition when the field of view is 1 mrad; and (2) 3k fps @ 10×10 is available in tracking when the field of view is 0.1 mrad.
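
    The centroid computation mentioned above is simple enough to sketch. The example below is our illustration, with a hypothetical pixel pitch and focal length: it converts an intensity-weighted spot centroid within the area of interest into a small-angle pointing error.

    ```python
    import numpy as np

    def centroid_pointing_error(roi, pixel_pitch_um, focal_length_mm):
        """Intensity-weighted centroid of the spot within the area of interest,
        converted to an angular pointing error (illustrative optics values)."""
        roi = np.asarray(roi, dtype=float)
        total = roi.sum()
        ys, xs = np.indices(roi.shape)
        cx = (xs * roi).sum() / total          # centroid in pixel coordinates
        cy = (ys * roi).sum() / total
        # Offset from the array centre, converted to metres then to radians
        dx = (cx - (roi.shape[1] - 1) / 2) * pixel_pitch_um * 1e-6
        dy = (cy - (roi.shape[0] - 1) / 2) * pixel_pitch_um * 1e-6
        f = focal_length_mm * 1e-3
        return dx / f, dy / f                  # small-angle approximation

    spot = np.zeros((10, 10)); spot[6, 7] = 200.0   # toy 8-bit-style spot
    print(centroid_pointing_error(spot, pixel_pitch_um=10.0, focal_length_mm=500.0))
    ```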

  1. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA), considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression of the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
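
    The CF method referenced above recovers the error probability from the characteristic function of the decision variable by an inversion formula; a common Gil-Pelaez-type form (our notation, not reproduced from the paper) is:

    ```latex
    % With D the decision variable at the receiver (an error occurs when
    % D < 0) and \Phi_D(\omega) its characteristic function,
    P_e = \Pr\{D < 0\}
        = \frac{1}{2} - \frac{1}{\pi}\int_{0}^{\infty}
          \frac{\operatorname{Im}\{\Phi_D(\omega)\}}{\omega}\,\mathrm{d}\omega
    % \Phi_D factors into desired-signal, MAI, and noise terms; the flat
    % Nakagami fading enters through the received-amplitude statistics.
    ```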

  2. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  3. FPGA implementation of advanced FEC schemes for intelligent aggregation networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    In state-of-the-art fiber-optics communication systems, a fixed forward error correction (FEC) scheme and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code rate range, by FPGA-based emulation, making it a viable solution for next-generation high-speed intelligent aggregation networks.

  4. Evaluation of real-time data obtained from gravimetric preparation of antineoplastic agents shows medication errors with possible critical therapeutic impact: Results of a large-scale, multicentre, multinational, retrospective study.

    PubMed

    Terkola, R; Czejka, M; Bérubé, J

    2017-08-01

    Medication errors are a significant cause of morbidity and mortality especially with antineoplastic drugs, owing to their narrow therapeutic index. Gravimetric workflow software systems have the potential to reduce volumetric errors during intravenous antineoplastic drug preparation which may occur when verification is reliant on visual inspection. Our aim was to detect medication errors with possible critical therapeutic impact as determined by the rate of prevented medication errors in chemotherapy compounding after implementation of gravimetric measurement. A large-scale, retrospective analysis of data was carried out, related to medication errors identified during preparation of antineoplastic drugs in 10 pharmacy services ("centres") in five European countries following the introduction of an intravenous workflow software gravimetric system. Errors were defined as errors in dose volumes outside tolerance levels, identified during weighing stages of preparation of chemotherapy solutions which would not otherwise have been detected by conventional visual inspection. The gravimetric system detected that 7.89% of the 759,060 doses of antineoplastic drugs prepared at participating centres between July 2011 and October 2015 had error levels outside the accepted tolerance range set by individual centres, and prevented these doses from reaching patients. The proportion of antineoplastic preparations with deviations >10% ranged from 0.49% to 5.04% across sites, with a mean of 2.25%. The proportion of preparations with deviations >20% ranged from 0.21% to 1.27% across sites, with a mean of 0.71%. There was considerable variation in error levels for different antineoplastic agents. Introduction of a gravimetric preparation system for antineoplastic agents detected and prevented dosing errors which would not have been recognized with traditional methods and could have resulted in toxicity or suboptimal therapeutic outcomes for patients undergoing anticancer treatment. © 2017 The Authors. Journal of Clinical Pharmacy and Therapeutics Published by John Wiley & Sons Ltd.
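
    The deviation thresholds quoted above (>10%, >20%) are relative dose errors detected at the weighing step. A minimal sketch of such a check follows; the conversion via drug concentration and solution density, and all parameter names, are our assumptions rather than the vendor's workflow.

    ```python
    def dose_deviation_pct(expected_mg, measured_g, concentration_mg_per_ml,
                           density_g_per_ml):
        """Relative dose error (%) from a gravimetric check of an IV
        preparation; parameter names are illustrative, not a vendor API."""
        volume_ml = measured_g / density_g_per_ml
        measured_mg = volume_ml * concentration_mg_per_ml
        return 100.0 * (measured_mg - expected_mg) / expected_mg

    dev = dose_deviation_pct(expected_mg=100.0, measured_g=9.6,
                             concentration_mg_per_ml=10.0, density_g_per_ml=1.0)
    print(f"{dev:+.1f}% deviation",
          "-> outside 10% tolerance" if abs(dev) > 10 else "-> within tolerance")
    ```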

  5. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available, then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
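
    For reference, the exponential decay rate under symmetric error weighting is the quantum Chernoff distance; the standard expression (a well-known result, restated here rather than quoted from the paper) is:

    ```latex
    % Quantum Chernoff bound: with n copies available, the optimal symmetric
    % error probability decays as P_e(n) \sim \exp(-n\,\xi_C(\rho,\sigma)), where
    \xi_C(\rho,\sigma) = -\log \min_{0 \le s \le 1}
                         \operatorname{Tr}\left(\rho^{s}\sigma^{1-s}\right)
    % The Hoeffding distances and the relative entropy give the corresponding
    % rates when the two error types are weighted asymmetrically.
    ```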

  6. Technology research for strapdown inertial experiment and digital flight control and guidance

    NASA Technical Reports Server (NTRS)

    Carestia, R. A.; Cottrell, D. E.

    1985-01-01

    A helicopter flight-test program to evaluate the performance of Honeywell's Tetrad, a strapdown, laser gyro, inertial navigation system, is discussed. The results of 34 flights showed a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n.mi., with a standard deviation of 1.48 n.mi.; and a modeled mean-position-error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the test program, and no sensor failures were detected during the evaluation. Criteria suitable for investigating cockpit systems in rotorcraft were developed. These criteria led to the development of two basic simulators. The first was a standard simulator which could be used to obtain baseline information for studying pilot workload and interactions. The second was an advanced simulator which integrated the RODAAS developed by Honeywell. The second area also included surveying the aerospace industry to determine the level of use and impact of microcomputers and related components on avionics systems.

  7. System care improves trauma outcome: patient care errors dominate reduced preventable death rate.

    PubMed

    Thoburn, E; Norris, P; Flores, R; Goode, S; Rodriguez, E; Adams, V; Campbell, S; Albrink, M; Rosemurgy, A

    1993-01-01

    A review of 452 trauma deaths in Hillsborough County, Florida, in 1984 documented that 23% of non-CNS trauma deaths were preventable and occurred because of inadequate resuscitation or delay in proper surgical care. In late 1988 Hillsborough County organized a County Trauma Agency (HCTA) to coordinate trauma care among prehospital providers and state-designated trauma centers. The purpose of this study was to review county trauma deaths after the inception of the HCTA to determine the frequency of preventable deaths. 504 trauma deaths occurring between October 1989 and April 1991 were reviewed. Through committee review, 10 deaths were deemed preventable; 2 occurred outside the trauma system. Of the 10 deaths, 5 preventable deaths occurred late in severely injured patients. The preventable death rate has decreased to 7.0% with system care. The causes of preventable deaths have changed from delayed or inadequate intervention to postoperative care errors.

  8. Low-density parity-check codes for volume holographic memory systems.

    PubMed

    Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali

    2003-02-10

    We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.

  9. A ROF transport system using phase & polarization modulation based on OFDM technique

    NASA Astrophysics Data System (ADS)

    Mallick, Khaleda; Patra, Ardhendu Sekhar

    2018-05-01

    A radio-over-fiber (ROF) transport system using phase and polarization modulators based on the orthogonal frequency division multiplexing (OFDM) technique has been proposed and demonstrated, transmitting 2.5 Gbps at 7.5 GHz over 40 km of single mode fiber (SMF). The transmission performance is confirmed by a proper bit error rate and a clear eye diagram. Our proposed system becomes a prominent alternative, as it offers a communication link with greater bandwidth and higher data rates.

  10. Detection of Methicillin-Resistant Coagulase-Negative Staphylococci by the Vitek 2 System

    PubMed Central

    Johnson, Kristen N.; Andreacchio, Kathleen

    2014-01-01

    The accurate performance of the Vitek 2 GP67 card for detecting methicillin-resistant coagulase-negative staphylococci (CoNS) is not known. We prospectively determined the ability of the Vitek 2 GP67 card to accurately detect methicillin-resistant CoNS, with mecA PCR results used as the gold standard for a 4-month period in 2012. Included in the study were 240 consecutively collected nonduplicate CoNS isolates. Cefoxitin susceptibility by disk diffusion testing was determined for all isolates. We found that the three tested systems, Vitek 2 oxacillin and cefoxitin testing and cefoxitin disk susceptibility testing, lacked specificity and, in some cases, sensitivity for detecting methicillin resistance. The Vitek 2 oxacillin and cefoxitin tests had very major error rates of 4% and 8%, respectively, and major error rates of 38% and 26%, respectively. Disk cefoxitin testing gave the best performance, with very major and major error rates of 2% and 24%, respectively. The test performances were species dependent, with the greatest errors found for Staphylococcus saprophyticus. While the 2014 CLSI guidelines recommend reporting isolates that test resistant by the oxacillin MIC or cefoxitin disk test as oxacillin resistant, following such guidelines produces erroneous results, depending on the test method and bacterial species tested. Vitek 2 cefoxitin testing is not an adequate substitute for cefoxitin disk testing. For critical-source isolates, mecA PCR, rather than Vitek 2 or cefoxitin disk testing, is required for optimal antimicrobial therapy. PMID:24951799
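
    The very major and major error rates reported above (and in the blood-culture studies later in this list) follow standard antimicrobial susceptibility testing conventions. The sketch below makes the definitions explicit on toy data; the denominator conventions are our assumption, as they vary between studies.

    ```python
    def ast_error_rates(results):
        """Very major error (VME): resistant by the gold standard but reported
        susceptible. Major error (ME): susceptible by the gold standard but
        reported resistant. Here VME is computed over resistant isolates and
        ME over susceptible ones (conventions vary between studies)."""
        resistant = [r for r in results if r["gold"] == "R"]
        susceptible = [r for r in results if r["gold"] == "S"]
        vme = sum(r["test"] == "S" for r in resistant) / len(resistant)
        me = sum(r["test"] == "R" for r in susceptible) / len(susceptible)
        return 100 * vme, 100 * me

    # Toy data: gold standard = mecA PCR, test = phenotypic method
    toy = [{"gold": "R", "test": "S"}, {"gold": "R", "test": "R"},
           {"gold": "S", "test": "R"}, {"gold": "S", "test": "S"}]
    print(ast_error_rates(toy))   # -> (50.0, 50.0)
    ```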

  11. Detecting wrong notes in advance: neuronal correlates of error monitoring in pianists.

    PubMed

    Ruiz, María Herrojo; Jabusch, Hans-Christian; Altenmüller, Eckart

    2009-11-01

    Music performance is an extremely rapid process with low incidence of errors even at the fast rates of production required. This is possible only due to the fast functioning of the self-monitoring system. Surprisingly, no specific data about error monitoring have been published in the music domain. Consequently, the present study investigated the electrophysiological correlates of executive control mechanisms, in particular error detection, during piano performance. Our target was to extend the previous research efforts on understanding of the human action-monitoring system by selecting a highly skilled multimodal task. Pianists had to retrieve memorized music pieces at a fast tempo in the presence or absence of auditory feedback. Our main interest was to study the interplay between auditory and sensorimotor information in the processes triggered by an erroneous action, considering only wrong pitches as errors. We found that around 70 ms prior to errors a negative component is elicited in the event-related potentials and is generated by the anterior cingulate cortex. Interestingly, this component was independent of the auditory feedback. However, the auditory information did modulate the processing of the errors after their execution, as reflected in a larger error positivity (Pe). Our data are interpreted within the context of feedforward models and the auditory-motor coupling.

  12. Same-Day Identification and Antimicrobial Susceptibility Testing of Bacteria in Positive Blood Culture Broths Using Short-Term Incubation on Solid Medium with the MicroFlex LT, Vitek-MS, and Vitek2 Systems

    PubMed Central

    Ha, Jihye; Han, Geum Hee; Kim, Myungsook; Lee, Kyungwon

    2018-01-01

    Background Early and appropriate antibiotic treatment improves the clinical outcome of patients with septicemia; therefore, reducing the turn-around time for identification (ID) and antimicrobial susceptibility test (AST) results is essential. We established a method for rapid ID and AST using short-term incubation of positive blood culture broth samples on solid media, and evaluated its performance relative to that of the conventional method using two rapid ID systems and a rapid AST method. Methods A total of 254 mono-microbial samples were included. Positive blood culture samples were incubated on blood agar plates for six hours and identified by the MicroFlex LT (Bruker Daltonics) and Vitek-MS (bioMérieux) systems, followed by AST using the Vitek2 System (bioMérieux). Results The correct species-level ID rates were 82.3% (209/254) and 78.3% (199/254) for the MicroFlex LT and Vitek-MS platforms, respectively. For the 1,174 microorganism/antimicrobial agent combinations tested, the rapid AST method showed total concordance of 97.8% (1,148/1,174) with the conventional method, with a very major error rate of 0.5%, major error rate of 0.7%, and minor error rate of 1.0%. Conclusions Routine implementation of this short-term incubation method could provide ID results on the day of blood culture-positivity detection and one day earlier than the conventional AST method. This simple method will be very useful for rapid ID and AST of bacteria from positive blood culture bottles in routine clinical practice. PMID:29401558

  13. Same-Day Identification and Antimicrobial Susceptibility Testing of Bacteria in Positive Blood Culture Broths Using Short-Term Incubation on Solid Medium with the MicroFlex LT, Vitek-MS, and Vitek2 Systems.

    PubMed

    Ha, Jihye; Hong, Sung Kuk; Han, Geum Hee; Kim, Myungsook; Yong, Dongeun; Lee, Kyungwon

    2018-05-01

    Early and appropriate antibiotic treatment improves the clinical outcome of patients with septicemia; therefore, reducing the turn-around time for identification (ID) and antimicrobial susceptibility test (AST) results is essential. We established a method for rapid ID and AST using short-term incubation of positive blood culture broth samples on solid media, and evaluated its performance relative to that of the conventional method using two rapid ID systems and a rapid AST method. A total of 254 mono-microbial samples were included. Positive blood culture samples were incubated on blood agar plates for six hours and identified by the MicroFlex LT (Bruker Daltonics) and Vitek-MS (bioMérieux) systems, followed by AST using the Vitek2 System (bioMérieux). The correct species-level ID rates were 82.3% (209/254) and 78.3% (199/254) for the MicroFlex LT and Vitek-MS platforms, respectively. For the 1,174 microorganism/antimicrobial agent combinations tested, the rapid AST method showed total concordance of 97.8% (1,148/1,174) with the conventional method, with a very major error rate of 0.5%, major error rate of 0.7%, and minor error rate of 1.0%. Routine implementation of this short-term incubation method could provide ID results on the day of blood culture-positivity detection and one day earlier than the conventional AST method. This simple method will be very useful for rapid ID and AST of bacteria from positive blood culture bottles in routine clinical practice. © The Korean Society for Laboratory Medicine

  14. An 802.11n wireless local area network transmission scheme for wireless telemedicine applications.

    PubMed

    Lin, C F; Hung, S I; Chiang, I H

    2010-10-01

    In this paper, an 802.11n transmission scheme is proposed for wireless telemedicine applications. IEEE 802.11n standards, a power assignment strategy, space-time block coding (STBC), and an object composition Petri net (OCPN) model are adopted. With the proposed wireless system, G.729 audio bit streams, Joint Photographic Experts Group 2000 (JPEG 2000) clinical images, and Moving Picture Experts Group 4 (MPEG-4) video bit streams simultaneously achieve transmission bit error rates (BERs) of 10^-7, 10^-4, and 10^-3, respectively. The proposed system meets the requirements prescribed for wireless telemedicine applications. An essential feature of this proposed transmission scheme is that clinical information that requires a high quality of service (QoS) is transmitted at high power with significant error protection. For maximizing resource utilization and minimizing the total transmission power, STBC and adaptive modulation techniques are used in the proposed 802.11n wireless telemedicine system. Further, low power, direct mapping (DM), a low-error protection scheme, and high-level modulation are adopted for messages that can tolerate a high BER. With the proposed transmission scheme, the required reliability of communication can be achieved. Our simulation results have shown that the proposed 802.11n transmission scheme can be used for developing effective wireless telemedicine systems.

  15. Combinatorial FSK modulation for power-efficient high-rate communications

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Budinger, James M.; Vanderaar, Mark J.

    1991-01-01

    Deep-space and satellite communications systems must be capable of conveying high-rate data accurately with low transmitter power, often through dispersive channels. A class of noncoherent Combinatorial Frequency Shift Keying (CFSK) modulation schemes is investigated which address these needs. The bit error rate performance of this class of modulation formats is analyzed and compared to the more traditional modulation types. Candidate modulator, demodulator, and digital signal processing (DSP) hardware structures are examined in detail. System-level issues are also discussed.

  16. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the "blocks" of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.

  17. Trust and the Compliance-Reliance Paradigm: The Effects of Risk, Error Bias, and Reliability on Trust and Dependence.

    PubMed

    Chancey, Eric T; Bliss, James P; Yamani, Yusuke; Handley, Holly A H

    2017-05-01

    This study provides a theoretical link between trust and the compliance-reliance paradigm. We propose that for trust mediation to occur, the operator must be presented with a salient choice, and there must be an element of risk for dependence. Research suggests that false alarms and misses affect dependence via two independent processes, hypothesized as trust in signals and trust in nonsignals. These two trust types manifest in categorically different behaviors: compliance and reliance. Eighty-eight participants completed a primary flight task and a secondary signaling system task. Participants evaluated their trust according to the informational bases of trust: performance, process, and purpose. Participants were in a high- or low-risk group. Signaling systems varied by reliability (90%, 60%) within subjects and error bias (false alarm prone, miss prone) between subjects. False-alarm rate affected compliance but not reliance. Miss rate affected reliance but not compliance. Mediation analyses indicated that trust mediated the relationship between false-alarm rate and compliance. Bayesian mediation analyses favored evidence indicating trust did not mediate miss rate and reliance. Conditional indirect effects indicated that factors of trust mediated the relationship between false-alarm rate and compliance (i.e., purpose) and reliance (i.e., process) but only in the high-risk group. The compliance-reliance paradigm is not the reflection of two types of trust. This research could be used to update training and design recommendations that are based upon the assumption that trust causes operator responses regardless of error bias.

  18. The impact of using an intravenous workflow management system (IVWMS) on cost and patient safety.

    PubMed

    Lin, Alex C; Deng, Yihong; Thaibah, Hilal; Hingl, John; Penm, Jonathan; Ivey, Marianne F; Thomas, Mark

    2018-07-01

    The aim of this study was to determine the financial costs associated with wasted and missing doses before and after the implementation of an intravenous workflow management system (IVWMS) and to quantify the number and the rate of detected intravenous (IV) preparation errors. A retrospective analysis of the sample hospital information system database was conducted using three months of data before and after the implementation of an IVWMS (DoseEdge®), which uses barcode scanning and photographic technologies to track and verify each step of the preparation process. The financial impact associated with wasted and missing IV doses was determined by combining drug acquisition, labor, accessory, and disposal costs. The intercepted error reports and pharmacist-detected error reports were drawn from the IVWMS to quantify the number of errors by defined error categories. The total numbers of IV doses prepared before and after the implementation of the IVWMS were 110,963 and 101,765 doses, respectively. The adoption of the IVWMS significantly reduced the numbers of wasted and missing IV doses by 14,176 and 2,268 doses, respectively (p < 0.001). The overall cost savings of using the system was $144,019 over 3 months. The total number of errors detected was 1,160 (1.14%) after using the IVWMS. The implementation of the IVWMS facilitated workflow changes that led to a positive impact on cost and patient safety. The implementation of the IVWMS increased patient safety by enforcing standard operating procedures and bar code verifications. Published by Elsevier B.V.

  19. Reduction in Hospital-Wide Clinical Laboratory Specimen Identification Errors following Process Interventions: A 10-Year Retrospective Observational Study

    PubMed Central

    Ning, Hsiao-Chen; Lin, Chia-Ni; Chiu, Daniel Tsun-Yee; Chang, Yung-Ta; Wen, Chiao-Ni; Peng, Shu-Yu; Chu, Tsung-Lan; Yu, Hsin-Ming; Wu, Tsu-Lan

    2016-01-01

    Background Accurate patient identification and specimen labeling at the time of collection are crucial steps in the prevention of medical errors, thereby improving patient safety. Methods All patient specimen identification errors that occurred in the outpatient department (OPD), emergency department (ED), and inpatient department (IPD) of a 3,800-bed academic medical center in Taiwan were documented and analyzed retrospectively from 2005 to 2014. To reduce such errors, the following series of strategies were implemented: a restrictive specimen acceptance policy for the ED and IPD in 2006; a computer-assisted barcode positive patient identification system for the ED and IPD in 2007 and 2010; and automated sample labeling combined with electronic identification systems introduced to the OPD in 2009. Results Of the 2,000,345 specimens collected in 2005, 1,023 (0.0511%) were identified as having patient identification errors, compared with 58 errors (0.0015%) among 3,761,238 specimens collected in 2014, after serial interventions; this represents a 97% relative reduction. The total numbers (rates) of institutional identification errors contributed by the ED, IPD, and OPD over the 10-year period were 423 (0.1058%), 556 (0.0587%), and 44 (0.0067%) errors before the interventions, and 3 (0.0007%), 52 (0.0045%), and 3 (0.0001%) after the interventions, representing relative reductions of 99%, 92%, and 98%, respectively. Conclusions Accurate patient identification is a challenge of patient safety in different health settings. The data collected in our study indicate that a restrictive specimen acceptance policy, computer-generated positive identification systems, and interdisciplinary cooperation can significantly reduce patient identification errors. PMID:27494020

  20. Short-Block Protograph-Based LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  1. Robust keyword retrieval method for OCRed text

    NASA Astrophysics Data System (ADS)

    Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu

    2011-01-01

    Document management systems have become important because of the growing popularity of electronic filing of documents and scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book reader. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, allowing the insertion of noise characters and the dropping of characters during keyword retrieval provides robustness against character segmentation errors, and allowing each keyword character to be substituted by its OCR recognition candidates (or any other character) provides robustness against character recognition errors. The recall rate of the proposed method was 15% higher than that of the conventional method. However, the precision rate was 64% lower.
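
    A bare-bones version of error-tolerant keyword matching can be sketched with a plain edit distance. Note this is a simplification of the method above, which additionally weights substitutions by the OCR recognition candidates; the uniform costs here are our assumption.

    ```python
    def edit_distance(a, b):
        """Plain Levenshtein distance with uniform costs."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,        # deletion (dropped character)
                               cur[j - 1] + 1,     # insertion (noise character)
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def fuzzy_find(keyword, ocr_text, max_errors=1):
        """Start offsets where the keyword matches within max_errors.
        Fixed-length windows catch substitutions; widening the window by
        max_errors would also catch boundary insertions and deletions."""
        k = len(keyword)
        return [i for i in range(len(ocr_text) - k + 1)
                if edit_distance(keyword, ocr_text[i:i + k]) <= max_errors]

    print(fuzzy_find("error", "measured errcr rates in OCR output"))  # -> [9]
    ```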

  2. The influence of the structure and culture of medical group practices on prescription drug errors.

    PubMed

    Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer

    2005-08-01

    This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships of the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and how the interaction of practice structural and cultural attributes influence error rates would add important insights into our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.

  3. Exploring the Relationship of Task Performance and Physical and Cognitive Fatigue During a Daylong Light Precision Task.

    PubMed

    Yung, Marcus; Manji, Rahim; Wells, Richard P

    2017-11-01

    Our aim was to explore the relationship between fatigue and operation system performance during a simulated light precision task over an 8-hr period, using a battery of physical (central and peripheral) and cognitive measures. Fatigue may play an important role in the relationship between poor ergonomics and deficits in quality and productivity. However, well-controlled laboratory studies in this area have several limitations, including the lack of work relevance of fatigue exposures and the lack of both physical and cognitive measures. There remains a need to understand the relationship between physical and cognitive fatigue and task performance at exposure levels relevant to realistic production or light precision work. Errors and fatigue measures were tracked over the course of a micropipetting task. Fatigue responses from 10 measures and errors in pipetting technique, precision, and targeting were submitted to principal component analysis to descriptively analyze features and patterns. Fatigue responses and error rates contributed to three principal components (PCs), accounting for 50.9% of total variance. Fatigue responses grouped within the three PCs reflected central and peripheral upper extremity fatigue, postural sway, and changes in oculomotor behavior. In an 8-hr light precision task, error rates shared similar patterns with both physical and cognitive fatigue responses and/or increases in arousal level. The findings provide insight into the relationship between fatigue and operation system performance (e.g., errors). This study contributes to a body of literature documenting task errors and fatigue, reflecting physical (both central and peripheral) and cognitive processes.
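
    For readers unfamiliar with the analysis step, the sketch below runs the same kind of descriptive PCA on synthetic stand-ins for the study's 10 fatigue measures plus error rates; all dimensions and names are illustrative, not the study's data.

    ```python
    # Descriptive PCA over standardized fatigue/error measures (synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(48, 13))    # e.g., 48 time points x 13 variables

    X_std = StandardScaler().fit_transform(X)    # PCA on the correlation matrix
    pca = PCA(n_components=3).fit(X_std)
    print("variance explained:", pca.explained_variance_ratio_.sum())

    # Loadings indicate which fatigue measures and error types group together
    # within each principal component.
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    print(loadings.shape)    # (13 variables, 3 components)
    ```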

  4. Impact of Stewardship Interventions on Antiretroviral Medication Errors in an Urban Medical Center: A 3-Year, Multiphase Study.

    PubMed

    Zucker, Jason; Mittal, Jaimie; Jen, Shin-Pung; Cheng, Lucy; Cennimo, David

    2016-03-01

    There is a high prevalence of HIV infection in Newark, New Jersey, with University Hospital admitting approximately 600 HIV-infected patients per year. Medication errors involving antiretroviral therapy (ART) could significantly affect treatment outcomes. The goal of this study was to evaluate the effectiveness of various stewardship interventions in reducing the prevalence of prescribing errors involving ART. This was a retrospective review of all inpatients receiving ART for HIV treatment during three distinct 6-month intervals over a 3-year period. During the first year, the baseline prevalence of medication errors was determined. During the second year, physician and pharmacist education was provided, and a computerized order entry system with drug information resources and prescribing recommendations was implemented. Prospective audit of ART orders with feedback was conducted in the third year. Analyses and comparisons were made across the three phases of this study. Of the 334 patients with HIV admitted in the first year, 45% had at least one antiretroviral medication error and 38% had uncorrected errors at the time of discharge. After education and computerized order entry, significant reductions in medication error rates were observed compared to baseline rates; 36% of 315 admissions had at least one error and 31% had uncorrected errors at discharge. While the prevalence of antiretroviral errors in year 3 was similar to that of year 2 (37% of 276 admissions), there was a significant decrease in the prevalence of uncorrected errors at discharge (12%) with the use of prospective review and intervention. Interventions, such as education and guideline development, can aid in reducing ART medication errors, but a committed stewardship program is necessary to elicit the greatest impact. © 2016 Pharmacotherapy Publications, Inc.

  5. Reducing errors in aircraft atmospheric inversion estimates of point-source emissions: the Aliso Canyon natural gas leak as a natural tracer experiment

    NASA Astrophysics Data System (ADS)

    Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.

    2018-04-01

    Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m/s, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and aircraft atmospheric GHG observations in top-down urban emission monitoring systems.
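
    The covariance-scaling idea generalizes beyond this study. Below is a hedged sketch of a linear Bayesian flux inversion in which the diagonal model-data mismatch covariance R is inflated with per-observation wind speed errors, down-weighting measurements taken under poorly modeled transport; the function and the parameters base_sigma and k are illustrative assumptions, not the paper's configuration.

    ```python
    # Linear Bayesian inversion with wind-error-scaled model-data mismatch.
    import numpy as np

    def invert(H, y, s_prior, Q, wind_err, base_sigma=10.0, k=5.0):
        """H: (n_obs, n_flux) transport sensitivities; y: observations;
        s_prior: prior fluxes; Q: prior flux covariance; wind_err: per-
        observation wind speed error used to inflate observation variance."""
        R = np.diag((base_sigma + k * wind_err) ** 2)
        # Posterior mean: s_prior + Q H^T (H Q H^T + R)^{-1} (y - H s_prior)
        gain = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)
        return s_prior + gain @ (y - H @ s_prior)
    ```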

  6. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  7. On the performance evaluation of LQAM-MPPM techniques over exponentiated Weibull fading free-space optical channels

    NASA Astrophysics Data System (ADS)

    Khallaf, Haitham S.; Elfiqi, Abdulaziz E.; Shalaby, Hossam M. H.; Sampei, Seiichi; Obayya, Salah S. A.

    2018-06-01

    We investigate the performance of hybrid L-ary quadrature-amplitude modulation-multi-pulse pulse-position modulation (LQAM-MPPM) techniques over an exponentiated Weibull (EW) fading free-space optical (FSO) channel, considering both weather and pointing-error effects. Upper-bound and approximate-tight upper-bound expressions for the bit-error rate (BER) of LQAM-MPPM techniques over EW FSO channels are obtained, taking into account the effects of fog, beam divergence, and pointing errors. Block diagrams for both the transmitter and receiver of the LQAM-MPPM/FSO system are introduced and illustrated. The BER expressions are evaluated numerically, and the results reveal that the LQAM-MPPM technique outperforms ordinary LQAM and MPPM schemes under different fading levels and weather conditions. Furthermore, the effect of the modulation index is investigated, and the results show that a modulation index greater than 0.4 is required to optimize the system performance. Finally, pointing errors introduce a large power penalty on the LQAM-MPPM system performance. Specifically, at a BER of 10^-9, pointing errors introduce power penalties of about 45 and 28 dB for receiver aperture sizes of DR = 50 and 200 mm, respectively.
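
    As a reproducible illustration of BER evaluation over EW fading (not the paper's analytical bounds), the sketch below draws irradiance samples by inverse-CDF sampling of the EW distribution and Monte Carlo-averages a conditional BER; plain intensity-modulated OOK stands in for LQAM-MPPM, and all parameter values are illustrative.

    ```python
    # Monte Carlo average BER over exponentiated Weibull (EW) fading.
    import numpy as np
    from scipy.special import erfc

    def sample_ew(n, alpha, beta, eta, rng):
        # Inverse-CDF sampling of F(h) = (1 - exp(-(h/eta)**beta))**alpha
        u = rng.random(n)
        return eta * (-np.log1p(-u ** (1.0 / alpha))) ** (1.0 / beta)

    def qfunc(x):                      # Gaussian tail probability
        return 0.5 * erfc(x / np.sqrt(2.0))

    rng = np.random.default_rng(1)
    h = sample_ew(1_000_000, alpha=2.0, beta=1.5, eta=1.0, rng=rng)

    for snr_db in (10, 20, 30):
        snr = 10 ** (snr_db / 10.0)
        # Conditional BER of OOK given irradiance h, averaged over fading:
        print(f"{snr_db} dB -> average BER ~ {qfunc(h * np.sqrt(snr)).mean():.2e}")
    ```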

  8. Automated Identification of Abnormal Adult EEGs

    PubMed Central

    López, S.; Suarez, G.; Jungreis, D.; Obeid, I.; Picone, J.

    2016-01-01

    The interpretation of electroencephalograms (EEGs) is a process that is still dependent on the subjective analysis of the examiners. Though interrater agreement on critical events such as seizures is high, it is much lower on subtler events (e.g., when there are benign variants). The process used by an expert to interpret an EEG is quite subjective and hard to replicate by machine. The performance of machine learning technology is far from human performance. We have been developing an interpretation system, AutoEEG, with a goal of exceeding human performance on this task. In this work, we are focusing on one of the early decisions made in this process – whether an EEG is normal or abnormal. We explore two baseline classification algorithms: k-Nearest Neighbor (kNN) and Random Forest Ensemble Learning (RF). A subset of the TUH EEG Corpus was used to evaluate performance. Principal Components Analysis (PCA) was used to reduce the dimensionality of the data. kNN achieved a 41.8% detection error rate while RF achieved an error rate of 31.7%. These error rates are significantly lower than those obtained by random guessing based on priors (49.5%). The majority of the errors were related to misclassification of normal EEGs. PMID:27195311
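
    A minimal reconstruction of the baseline pipeline (PCA for dimensionality reduction, then kNN and RF) is sketched below with synthetic stand-in features, since the TUH EEG Corpus and its feature extraction are not reproduced here; dimensions and hyperparameters are illustrative.

    ```python
    # PCA + kNN/RF baseline for normal/abnormal classification (synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 200))               # stand-in feature vectors
    y = (X[:, :5].sum(axis=1) > 0).astype(int)    # stand-in normal/abnormal labels

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    pca = PCA(n_components=20).fit(Xtr)           # dimensionality reduction
    Ztr, Zte = pca.transform(Xtr), pca.transform(Xte)

    for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                      ("RF", RandomForestClassifier(random_state=0))]:
        err = 1.0 - clf.fit(Ztr, ytr).score(Zte, yte)
        print(f"{name} detection error rate: {err:.3f}")
    ```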

  9. DSN telemetry system performance using a maximum likelihood convolutional decoder

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Kemp, R. P.

    1977-01-01

    Results are described of telemetry system performance testing using DSN equipment and a Maximum Likelihood Convolutional Decoder (MCD) for code rates 1/2 and 1/3, constraint length 7, and special test software. The test results confirm the superiority of the rate 1/3 code over the rate 1/2 code. The overall system performance losses determined at the output of the Symbol Synchronizer Assembly are less than 0.5 dB for both code rates. Comparison of the performance is also made with existing mathematical models. Error statistics of the decoded data are examined. The MCD operational threshold is found to be about 1.96 dB.

  10. Sculling Compensation Algorithm for SINS Based on Two-Time Scale Perturbation Model of Inertial Measurements

    PubMed Central

    Wang, Lingling; Fu, Li

    2018-01-01

    In order to decrease the velocity sculling error under vibration environments, a new sculling error compensation algorithm for strapdown inertial navigation systems (SINS) using angular rate and specific force measurements as inputs is proposed in this paper. First, the sculling error formula in the incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived, and a gravitational search optimization method is used to determine the parameters in the two-time scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, unlike conventional algorithms, which are typically simulated under pure sculling motion. A series of test results demonstrates that the new sculling compensation algorithm can achieve balanced real/pseudo sculling correction performance during the velocity update, with the advantage of a lower computation load compared with conventional algorithms. PMID:29346323

  11. Frequency of dosage prescribing medication errors associated with manual prescriptions for very preterm infants.

    PubMed

    Horri, J; Cransac, A; Quantin, C; Abrahamowicz, M; Ferdynus, C; Sgro, C; Robillard, P-Y; Iacobelli, S; Gouyon, J-B

    2014-12-01

    The risk of dosage Prescription Medication Error (PME) among manually written prescriptions within a 'mixed' prescribing system (computerized physician order entry (CPOE) + manual prescriptions) has not been previously assessed in neonatology. This study aimed to evaluate the rate of dosage PME related to manual prescriptions in the high-risk population of very preterm infants (GA < 33 weeks) in a mixed prescription system. The study was based on a retrospective review of a random sample of manual daily prescriptions in two neonatal intensive care units (NICU) A and B, located in different French University hospitals (Dijon and La Reunion island). Daily prescription was defined as the set of all drugs manually prescribed on a single day for one patient. Dosage error was defined as a deviation of at least ±10% from the weight-appropriate recommended dose. The analyses were based on the assessment of 676 manually prescribed drugs from NICU A (58 different drugs from 93 newborns and 240 daily prescriptions) and 354 manually prescribed drugs from NICU B (73 different drugs from 131 newborns and 241 daily prescriptions). The dosage error rate per 100 manually prescribed drugs was similar in both NICUs: 3.8% (95% CI: 2.5-5.6%) in NICU A and 3.1% (95% CI: 1.6-5.5%) in NICU B (P = 0.54). Among the 37 identified dosage errors, over-dosing was almost as frequent as under-dosing (17 and 20 errors, respectively). Potentially severe dosage errors occurred in a total of seven drug prescriptions. None of the dosage PMEs was recorded in the corresponding medical files, and information on clinical outcome was not sufficient to identify clinical conditions related to dosage PME. Overall, 46.8% of manually prescribed drugs were off-label or unlicensed, with no significant differences between prescriptions with or without dosage error. The risk of a dosage PME increased significantly if the drug was included in the CPOE system but was manually prescribed (OR = 3.3; 95% CI: 1.6-7.0, P < 0.001). The presence of dosage PME in manual prescriptions written within mixed prescription systems suggests that manual prescriptions should be totally avoided in neonatal units. © 2014 John Wiley & Sons Ltd.
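
    The dosage-error rule above is easy to state in code; this is a minimal sketch with hypothetical function and field names, not the study's actual review software.

    ```python
    # Dosage error: deviation of at least +/-10% from the weight-appropriate dose.
    def is_dosage_error(prescribed_dose, weight_kg, recommended_per_kg,
                        tolerance=0.10):
        expected = weight_kg * recommended_per_kg
        return abs(prescribed_dose - expected) / expected >= tolerance

    # A 1.2 kg preterm infant, drug recommended at 10 mg/kg:
    print(is_dosage_error(13.5, 1.2, 10.0))  # True: +12.5% over-dosing
    ```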

  12. The Relationship Between Technical Errors and Decision Making Skills in the Junior Resident

    PubMed Central

    Nathwani, J. N.; Fiers, R.M.; Ray, R.D.; Witt, A.K.; Law, K. E.; DiMarco, S.M.; Pugh, C.M.

    2017-01-01

    Objective The purpose of this study is to co-evaluate resident technical errors and decision-making capabilities during placement of a subclavian central venous catheter (CVC). We hypothesize that there will be significant correlations between scenario-based decision-making skills and technical proficiency in central line insertion. We also predict residents will have problems in anticipating common difficulties and generating solutions associated with line placement. Design Participants were asked to insert a subclavian central line on a simulator. After completion, residents were presented with a real-life patient photograph depicting CVC placement and asked to anticipate difficulties and generate solutions. Error rates were analyzed using chi-square tests and a 5% expected error rate. Correlations were sought by comparing technical errors and scenario-based decision making. Setting This study was carried out at seven tertiary care centers. Participants Study participants (N=46) consisted largely of first-year research residents who could be followed longitudinally. Second-year research and clinical residents were not excluded. Results Six checklist errors were committed more often than anticipated. Residents committed an average of 1.9 errors, significantly more than the expected maximum of one error per person (t(44)=3.82, p<.001). The most common error was performance of the procedure steps in the wrong order (28.5%, p<.001). Some of the residents (24%) had no errors, 30% committed one error, and 46% committed more than one error. The number of technical errors committed correlated negatively with the total number of commonly identified difficulties and generated solutions (r(33)=-.429, p=.021 and r(33)=-.383, p=.044, respectively). Conclusions Almost half of the surgical residents committed multiple errors while performing subclavian CVC placement. The correlation between technical errors and decision-making skills suggests a critical need to train residents in both technique and error management. ACGME Competencies Medical Knowledge, Practice Based Learning and Improvement, Systems Based Practice PMID:27671618

  13. Using Healthcare Failure Mode and Effect Analysis to reduce medication errors in the process of drug prescription, validation and dispensing in hospitalised patients.

    PubMed

    Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa

    2013-01-01

    To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Healthcare Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rates after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and a unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process, times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling the trolleys and drawing up a protocol for pharmacy checking of drugs before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, and the implementation of several of these actions led to a reduction in errors in the process of drug prescription, validation and dispensing.
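
    For reference, a relative risk reduction (RRR) with a normal-approximation confidence interval on the log relative risk can be computed as sketched below; the event counts are illustrative stand-ins chosen to land near the 22% RRR reported for prescription errors, not the study's raw data.

    ```python
    # RRR = 1 - RR with a normal-approximation 95% CI on log(RR).
    import math

    def rrr_ci(err_before, n_before, err_after, n_after, z=1.96):
        p1, p2 = err_before / n_before, err_after / n_after
        rr = p2 / p1
        se = math.sqrt((1 - p1) / err_before + (1 - p2) / err_after)
        lo, hi = math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)
        return 1 - rr, 1 - hi, 1 - lo    # the CI bounds flip under 1 - RR

    rrr, lo, hi = rrr_ci(200, 2000, 156, 2000)   # illustrative counts
    print(f"RRR = {rrr:.1%} (95% CI {lo:.1%} to {hi:.1%})")
    ```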

  14. 40-Gb/s PDM-QPSK signal transmission over 160-m wireless distance at W-band.

    PubMed

    Xiao, Jiangnan; Yu, Jianjun; Li, Xinying; Xu, Yuming; Zhang, Ziran; Chen, Long

    2015-03-15

    We experimentally demonstrate a W-band optical-wireless transmission system over 160-m wireless distance with a bit rate up to 40 Gb/s. The optical-wireless transmission system adopts optical polarization-division-multiplexing (PDM), multiple-input multiple-output (MIMO) reception and antenna polarization diversity. Using this system, we experimentally demonstrate the 2×2 MIMO wireless delivery of 20- and 40-Gb/s PDM quadrature-phase-shift-keying (PDM-QPSK) signals over 640- and 160-m wireless links, respectively. The bit-error ratios (BERs) of these transmission systems are both less than the forward-error-correction (FEC) threshold of 3.8×10^-3.

  15. Estimating long-run equilibrium real exchange rates: short-lived shocks with long-lived impacts on Pakistan.

    PubMed

    Zardad, Asma; Mohsin, Asma; Zaman, Khalid

    2013-12-01

    The purpose of this study is to investigate the factors that affect real exchange rate volatility for Pakistan through co-integration and error-correction models over a 30-year period, i.e., between 1980 and 2010. The study employed autoregressive conditional heteroskedasticity (ARCH), generalized autoregressive conditional heteroskedasticity (GARCH) and Vector Error Correction (VECM) models to estimate the changes in the volatility of the real exchange rate series, while an error correction model was used to determine the short-run dynamics of the system. The study is limited to a few variables, i.e., the productivity differential (real GDP per capita relative to the main trading partner), terms of trade, trade openness and government expenditures, in order to keep the data robust. The results indicate that the real effective exchange rate (REER) has been volatile around its equilibrium level, while the speed of adjustment is relatively slow. VECM results confirm long-run convergence of the real exchange rate towards its equilibrium level. Results from the ARCH and GARCH estimation show that the volatility of real shocks persists, so that shocks die out rather slowly, and lasting misalignment seems to have occurred.
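
    For orientation, the sketch below runs the two estimation steps named above on synthetic series, assuming the third-party `arch` package and `statsmodels` are installed; the series, lag orders, and cointegration rank are illustrative, not the study's specification.

    ```python
    # GARCH(1,1) volatility estimation plus a VECM on synthetic series.
    import numpy as np
    import pandas as pd
    from arch import arch_model
    from statsmodels.tsa.vector_ar.vecm import VECM

    rng = np.random.default_rng(0)
    reer = 100 + np.cumsum(rng.normal(0, 1, 360))    # synthetic monthly REER level
    returns = 100 * np.diff(np.log(reer))

    # Volatility persistence shows up as alpha + beta estimates close to one.
    garch = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    print(garch.params)

    # VECM: the loading on the error-correction term is the adjustment speed
    # of the REER toward its long-run equilibrium.
    fund = reer[1:] * 0.9 + rng.normal(0, 5, 359)    # synthetic fundamentals
    vecm = VECM(pd.DataFrame({"reer": reer[1:], "fund": fund}),
                k_ar_diff=1, coint_rank=1).fit()
    print(vecm.alpha)    # adjustment coefficients
    ```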

  16. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    PubMed Central

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033
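
    Conceptually, the prediction step is ordinary regression plus out-of-sample validation on the second chain's data. A toy version with synthetic numbers (not the study's data) is sketched below.

    ```python
    # Regress real-world confusion error rates on laboratory test error rates.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    lab = rng.uniform(0.05, 0.40, size=(60, 3))   # memory/visual/auditory errors
    real_a = 0.5 * lab[:, 0] + 0.2 * lab[:, 1] + rng.normal(0, 0.03, 60)
    real_b = 0.5 * lab[:, 0] + 0.2 * lab[:, 1] + rng.normal(0, 0.04, 60)

    model = LinearRegression().fit(lab, real_a)   # fit on chain A
    print("chain A R^2:", model.score(lab, real_a))
    print("chain B R^2:", r2_score(real_b, model.predict(lab)))  # cross-check
    ```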

  17. Multicarrier airborne ultrasound transmission with piezoelectric transducers.

    PubMed

    Ens, Alexander; Reindl, Leonhard M

    2015-05-01

    In decentralized localization systems, the received signal has to be assigned to the sender. Long-range airborne ultrasound communication therefore enables transmission of a sender identifier within the ultrasound signal to the receiver. Further, in areas with high electromagnetic noise, or in areas that must be kept free of electromagnetic emissions, ultrasound communication is an alternative. Using code division multiple access (CDMA) to transmit data is ineffective in rooms due to high echo amplitudes. Further, piezoelectric transducers generate a narrow-band ultrasound signal, which limits the data rate. This work shows the use of multiple carrier frequencies in orthogonal frequency division multiplexing (OFDM) with differential quadrature phase shift keying modulation on narrow-band piezoelectric devices to achieve a packet length of 2.1 ms. Moreover, the adapted channel coding increases the effective data rate by correcting transmission errors. As a result, a 2-carrier ultrasound transmission system on an embedded platform achieves a data rate of approximately 5.7 kBaud. Within the presented work, a transmission range of up to 18 m with a packet error rate (PER) of 13% at 10-V supply voltage is reported. In addition, the transmission works up to 22 m with a PER of 85%. Moreover, this paper shows the accuracy of the frame synchronization over distance. The system achieves a standard deviation of 14 μs for ranges up to 10 m.
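
    To illustrate the modulation idea only (not the authors' implementation), the sketch below differentially encodes QPSK phase steps on each subcarrier across successive OFDM symbols and recovers them noncoherently on a clean channel; the carrier count, FFT size, and bin placement are illustrative.

    ```python
    # Differential QPSK across OFDM symbols on two subcarriers (no channel).
    import numpy as np

    n_sub, n_sym, nfft = 2, 8, 64
    rng = np.random.default_rng(0)
    data = rng.integers(0, 4, size=(n_sym, n_sub))   # 2 bits -> phase step

    # Differential encoding in time: each subcarrier's phase advances by a
    # multiple of pi/2 relative to the previous OFDM symbol.
    symbols = np.exp(1j * np.cumsum(data * (np.pi / 2), axis=0))

    grid = np.zeros((n_sym, nfft), complex)          # subcarriers into IFFT bins
    grid[:, 1:1 + n_sub] = symbols
    tx = np.fft.ifft(grid, axis=1).ravel()           # baseband time signal

    # Noncoherent demodulation: phase differences between consecutive symbols.
    rx = np.fft.fft(tx.reshape(n_sym, nfft), axis=1)[:, 1:1 + n_sub]
    diff = np.angle(rx[1:] * rx[:-1].conj())
    decoded = np.round(diff / (np.pi / 2)) % 4
    print(np.array_equal(decoded, data[1:]))         # True on a clean channel
    ```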

  18. Caffeine enhances real-world language processing: evidence from a proofreading task.

    PubMed

    Brunyé, Tad T; Mahoney, Caroline R; Rapp, David N; Ditman, Tali; Taylor, Holly A

    2012-03-01

    Caffeine has become the most prevalently consumed psychostimulant in the world, but its influences on daily real-world functioning are relatively unknown. The present work investigated the effects of caffeine (0 mg, 100 mg, 200 mg, 400 mg) on a commonplace language task that required readers to identify and correct 4 error types in extended discourse: simple local errors (misspelling 1- to 2-syllable words), complex local errors (misspelling 3- to 5-syllable words), simple global errors (incorrect homophones), and complex global errors (incorrect subject-verb agreement and verb tense). In 2 placebo-controlled, double-blind studies using repeated-measures designs, we found higher detection and repair rates for complex global errors, asymptoting at 200 mg in low consumers (Experiment 1) and peaking at 400 mg in high consumers (Experiment 2). In both cases, covariate analyses demonstrated that arousal state mediated the relationship between caffeine consumption and the detection and repair of complex global errors. Detection and repair rates for the other 3 error types were not affected by caffeine consumption. Taken together, we demonstrate that caffeine has differential effects on error detection and repair as a function of dose and error type, and this relationship is closely tied to caffeine's effects on subjective arousal state. These results support the notion that central nervous system stimulants may enhance global processing of language-based materials and suggest that such effects may originate in caffeine-related right hemisphere brain processes. Implications for understanding the relationships between caffeine consumption and real-world cognitive functioning are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  19. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    PubMed

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives of healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in patient mortality and to challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important, because these errors, besides being costly, threaten patient safety. The aim was to evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. Data were collected through the revised Goldstone (2001) questionnaire, and SPSS 11.5 software was used for data analysis. Descriptive and inferential statistics were applied: means, standard deviations, and relative frequency distributions were calculated and arranged into tables and charts, and the chi-square test was used for inferential analysis. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years, and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average number of medical errors was related to employees with three to four years of work experience, while the lowest was related to those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift and the lowest during the night shift. Three main causes of medical errors were identified: illegible physician prescription orders, similarity of names between different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoint of nurses and midwives were illegible physician orders, drug name similarity with other drugs, nurse fatigue, and damaged labels or packaging of drugs, respectively. Head nurse feedback, peer feedback, and fear of punishment or job loss were considered reasons for under-reporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.

  20. tPA Prescription and Administration Errors within a Regional Stroke System

    PubMed Central

    Chung, Lee S; Tkach, Aleksander; Lingenfelter, Erin M; Dehoney, Sarah; Rollo, Jeannie; de Havenon, Adam; DeWitt, Lucy Dana; Grantz, Matthew Ryan; Wang, Haimei; Wold, Jana J; Hannon, Peter M; Weathered, Natalie R; Majersik, Jennifer J

    2015-01-01

    Background IV tPA utilization in acute ischemic stroke (AIS) requires weight-based dosing and a standardized infusion rate. In our regional network, we have tried to minimize tPA dosing errors. We describe the frequency and types of tPA administration errors made in our comprehensive stroke center (CSC) and at community hospitals (CHs) prior to transfer. Methods Using our stroke quality database, we extracted clinical and pharmacy information on all patients who received IV tPA from 2010–11 at the CSC or CH prior to transfer. All records were analyzed for the presence of inclusion/exclusion criteria deviations or tPA errors in prescription, reconstitution, dispensing, or administration, and analyzed for association with outcomes. Results We identified 131 AIS cases treated with IV tPA: 51% female; mean age 68; 32% treated at CSC, 68% at CH (including 26% by telestroke) from 22 CHs. tPA prescription and administration errors were present in 64% of all patients (41% CSC, 75% CH, p<0.001), the most common being incorrect dosage for body weight (19% CSC, 55% CH, p<0.001). Of the 27 overdoses, there were 3 deaths due to systemic hemorrhage or ICH. Nonetheless, outcomes (parenchymal hematoma, mortality, mRS) did not differ between CSC and CH patients nor between those with and without errors. Conclusion Despite focus on minimization of tPA administration errors in AIS patients, such errors were very common in our regional stroke system. Although an association between tPA errors and stroke outcomes was not demonstrated, quality assurance mechanisms are still necessary to reduce potentially dangerous, avoidable errors. PMID:26698642

  1. Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.

    PubMed

    Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E

    2011-01-01

    Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis. Increasing the error rate during ribosomal decoding induced competence, while decreasing the error rate repressed competence. This pattern of regulation was promoted by the HtrA protease, which selectively repressed competence when translational fidelity was high but not when accuracy was low. Our findings demonstrate that this organism is able to monitor the accuracy of information used for protein biosynthesis and suggest that errors trigger a response addressing both the immediate challenge of misfolded proteins and, through genetic exchange, upstream coding errors that may underlie protein folding defects. This pathway may represent an evolutionary strategy for maintaining the coding integrity of the genome.

  2. Variable-rate optical communication through the turbulent atmosphere. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Levitt, B. K.

    1971-01-01

    It was demonstrated that the data transmitter can extract real-time channel-state information by processing the field received when a pilot tone is sent from the data receiver to the data transmitter. Based on these channel measurements, optimal variable-rate techniques were derived and significant improvements in system performance were obtained, particularly at low bit error rates.

  3. MS-READ: Quantitative measurement of amino acid incorporation.

    PubMed

    Mohler, Kyle; Aerni, Hans-Rudolf; Gassaway, Brandon; Ling, Jiqiang; Ibba, Michael; Rinehart, Jesse

    2017-11-01

    Ribosomal protein synthesis results in the genetically programmed incorporation of amino acids into a growing polypeptide chain. Faithful amino acid incorporation that accurately reflects the genetic code is critical to the structure and function of proteins as well as overall proteome integrity. Errors in protein synthesis are generally detrimental to cellular processes, yet emerging evidence suggests that proteome diversity generated through mistranslation may be beneficial under certain conditions. Cumulative translational error rates have been determined at the organismal level; however, codon-specific error rates and the spectrum of misincorporation errors from system to system remain largely unexplored. In particular, until recently technical challenges have limited the ability to detect and quantify comparatively rare amino acid misincorporation events, which occur orders of magnitude less frequently than canonical amino acid incorporation events. We now describe a technique for the quantitative analysis of amino acid incorporation that provides the sensitivity necessary to detect mistranslation events during translation of a single codon at frequencies as low as 1 in 10,000 for all 20 proteinogenic amino acids, as well as non-proteinogenic and modified amino acids. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments", Guest Editors: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. 50 Mbps free space direct detection laser diode optical communication system with Q = 4 PPM signaling

    NASA Technical Reports Server (NTRS)

    Sun, Xiaoli; Davidson, Frederic; Field, Christopher

    1990-01-01

    A 50 Mbps direct detection optical communication system for use in an intersatellite link was constructed with an AlGaAs laser diode transmitter and a silicon avalanche photodiode photodetector. The system used a Q = 4 PPM format. The receiver consisted of a maximum likelihood PPM detector and a timing recovery subsystem. The PPM slot clock was recovered at the receiver by using a transition detector followed by a PLL. The PPM word clock was recovered by using a second PLL whose input was derived from the presence of back-to-back PPM pulses contained in the received random PPM pulse sequences. The system achieved a bit error rate of 10^-6 at less than 50 detected signal photons per information bit. The receiver was capable of acquiring and maintaining slot and word synchronization for received signal levels greater than 20 photons per information bit, at which the receiver bit error rate was about 10^-2.
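
    To make the maximum-likelihood PPM decision concrete: under a Poisson photon-counting model, the ML rule for equal-energy PPM reduces to picking the slot with the largest photon count. The toy Monte Carlo below uses illustrative signal/background levels, not the tested hardware's parameters.

    ```python
    # Toy ML detector for Q = 4 PPM with Poisson photon counts.
    import numpy as np

    rng = np.random.default_rng(0)
    Q, n_words = 4, 100_000
    ks, kb = 5.0, 1.0            # mean signal / background photons per slot

    words = rng.integers(0, Q, n_words)
    counts = rng.poisson(kb, size=(n_words, Q))
    counts[np.arange(n_words), words] += rng.poisson(ks, n_words)

    decided = counts.argmax(axis=1)   # ML decision: brightest slot (ties -> first)
    print("word error rate:", np.mean(decided != words))
    ```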

  5. Optical mass memory investigation

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The MASTER 1 optical mass storage system advanced working model (AWM) was designed to demonstrate recording and playback of imagery data and to enable quantitative data to be derived on the statistical distribution of raw errors experienced through the system. The AWM consists of two subsystems, the recorder and storage and retrieval. The recorder subsystem utilizes key technologies such as an acoustic travelling wave lens to achieve recording of digital data on fiche at a rate of 30 Mbits/sec, whereas the storage and retrieval reproducer subsystem utilizes a less complex optical system that employs an acousto-optical beam deflector to achieve data readout at a 5 Mbits/sec rate. The system has a built-in capability for detecting and collecting error statistics. The recorder and storage and retrieval subsystems operate independently of one another and are each constructed in modular form, with each module performing independent functions. The operation of each module and its interface to other modules is controlled by one controller for both subsystems.

  6. Error reporting from the da Vinci surgical system in robotic surgery: A Canadian multispecialty experience at a single academic centre

    PubMed Central

    Rajih, Emad; Tholomier, Côme; Cormier, Beatrice; Samouëlian, Vanessa; Warkus, Thomas; Liberman, Moishe; Widmer, Hugues; Lattouf, Jean-Baptiste; Alenizi, Abdullah M.; Meskawi, Malek; Valdivieso, Roger; Hueber, Pierre-Alain; Karakewicz, Pierre I.; El-Hakim, Assaad; Zorn, Kevin C.

    2017-01-01

    Introduction The goal of the study is to evaluate and report on the third-generation da Vinci surgical (Si) system malfunctions. Methods A total of 1228 robotic surgeries were performed between January 2012 and December 2015 at our academic centre. All cases were performed using a single, dual-console, four-arm da Vinci Si robot system. The three specialties included urology, gynecology, and thoracic surgery. Studied outcomes included the robotic surgical error types, immediate consequences, and operative side effects. The error rate trend with time was also examined. Results Overall robotic malfunctions were documented in the da Vinci Si system's event log in 4.97% (61/1228) of the cases. The most common error was related to pressure sensors in the robotic arms indicating out-of-limit output. This recoverable fault was noted in 2.04% (25/1228) of cases. Other errors included unrecoverable electronic communication-related errors in 1.06% (13/1228) of cases, failed encoder errors in 0.57% (7/1228), illuminator-related errors in 0.33% (4/1228), faulty switches in 0.24% (3/1228), battery-related failures in 0.24% (3/1228), and software/hardware errors in 0.08% (1/1228) of cases. Surgical delay was reported in only one patient. No conversion to either open or laparoscopic surgery occurred secondary to robotic malfunctions. In 2015, the incidence of robotic error rose to 1.71% (21/1228) from 0.81% (10/1228) in 2014. Conclusions Robotic malfunction is not infrequent in the current era of robotic surgery in various surgical subspecialties, but rarely consequential. Their rare occurrence does not seem to affect patient safety or surgical outcome. PMID:28503234

  7. Significantly improved precision of cell migration analysis in time-lapse video microscopy through use of a fully automated tracking system

    PubMed Central

    2010-01-01

    Background Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and manual determination of precise cell positions. Results We have quantitatively analyzed these error sources, demonstrating that manual cell tracking of pancreatic cancer cells led to miscalculation of migration rates by up to 410%. In order to provide objective measurements of cell migration rates, we have employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high-throughput screening of video sequences of unstained living cells. Conclusion We demonstrate that our automatic multi-target tracking system identifies cell objects, follows individual cells and computes migration rates with high precision, clearly outperforming manual procedures. PMID:20377897

  8. High Data Rates for AubieSat-2 A & B, Two CubeSats Performing High Energy Science in the Upper Atmosphere

    NASA Technical Reports Server (NTRS)

    Sims, William H.

    2015-01-01

    This paper will discuss a proposed CubeSat-sized (3-unit/6-unit) telemetry system concept being developed at Marshall Space Flight Center (MSFC) in cooperation with Auburn University. The telemetry system incorporates efficient, high-bandwidth communications through a flight-ready, low-cost, protoflight software-defined radio (SDR) payload for use on CubeSats. The current telemetry system footprint is slightly larger than required to fit within a 0.75-unit CubeSat volume. Extensible and modular communications for CubeSat technologies will provide high data rates for science experiments performed by two CubeSats flying in formation in Low Earth Orbit. The project is a collaboration between the University of Alabama in Huntsville and Auburn University to study high-energy phenomena in the upper atmosphere. Higher bandwidth capacity will enable high-volume, low-error-rate data transfer to and from the CubeSats, while also providing additional bandwidth and error-correction margin to accommodate more complex encryption algorithms and higher user volume.

  9. Photodiode-based cutting interruption sensor for near-infrared lasers.

    PubMed

    Adelmann, B; Schleier, M; Neumeier, B; Hellmann, R

    2016-03-01

    We report on a photodiode-based sensor system to detect cutting interruptions during laser cutting with a fiber laser. An InGaAs diode records the thermal radiation from the process zone through a ring mirror and optical filter arrangement mounted between a collimation unit and a cutting head. The photodiode current is digitized at a sample rate of 20 kHz and filtered with a Chebyshev Type I filter. From the signal measured during piercing, a threshold value is calculated. When the diode signal exceeds this threshold during cutting, a cutting interruption is indicated. This method is applied to sensor signals from cutting mild steel, stainless steel, and aluminum, over different material thicknesses, and also to laser flame cutting, showing the possibility of detecting cutting interruptions in a broad variety of applications. In a series of 83 incomplete cuts, every cutting interruption is successfully detected (alpha error of 0%), while no cutting interruption is reported in 266 complete cuts (beta error of 0%). This remarkably high detection rate and low error rate, the ability to work with different materials and thicknesses, and the easy mounting of the sensor unit on existing cutting machines highlight the enormous potential of this sensor system for industrial applications.
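
    A minimal sketch of the described detection chain, assuming scipy is available: a Chebyshev Type I low-pass filter on a synthetic 20 kHz photodiode trace, a threshold derived from the piercing phase, and a threshold test during the cut. The filter order, ripple, cutoff, and the 1.8x factor are illustrative choices, not the paper's calibrated values.

    ```python
    # Low-pass filtering plus piercing-derived threshold test (synthetic signal).
    import numpy as np
    from scipy.signal import cheby1, lfilter

    fs = 20_000                                    # 20 kHz sampling rate
    b, a = cheby1(N=4, rp=1.0, Wn=500, btype="low", fs=fs)

    rng = np.random.default_rng(0)
    piercing = 1.0 + 0.1 * rng.normal(size=fs)     # first second: piercing
    cutting = 1.2 + 0.1 * rng.normal(size=fs)      # then one second of cutting
    cutting[fs // 2:] += 2.0                       # interruption: radiation jump
    trace = lfilter(b, a, np.concatenate([piercing, cutting]))

    threshold = 1.8 * trace[:fs].mean()            # derived from piercing phase
    print("interruption detected:", bool(np.any(trace[fs:] > threshold)))
    ```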

  10. Average BER and outage probability of the ground-to-train OWC link in turbulence with rain

    NASA Astrophysics Data System (ADS)

    Zhang, Yixin; Yang, Yanqiu; Hu, Beibei; Yu, Lin; Hu, Zheng-Da

    2017-09-01

    The bit-error rate (BER) and outage probability of an optical wireless communication (OWC) link for ground-to-train communication along a curved track in turbulence with rain are evaluated. Considering the re-modulation effects of rain fluctuation on an optical signal already modulated by turbulence, we set up models of the average BER and outage probability in the presence of pointing errors, based on the double inverse Gaussian (IG) statistical distribution model. The numerical results indicate that, for the same covered track length, a larger curvature radius increases the outage probability and average BER. The performance of the OWC link in turbulence with rain is limited mainly by the rain rate and by pointing errors induced by beam wander and train vibration. The effect of the rain rate on link performance is more severe than that of atmospheric turbulence, but turbulence-induced fluctuation affects laser beam propagation more strongly than the skewness of the rain distribution. Besides, turbulence-induced beam wander has a more significant impact on the system in heavier rain. We can choose the size of the transmitting and receiving apertures and improve the shockproof performance of the track to optimize the communication performance of the system.

  11. Performance analysis for mixed FSO/RF Nakagami-m and Exponentiated Weibull dual-hop airborne systems

    NASA Astrophysics Data System (ADS)

    Jing, Zhao; Shang-hong, Zhao; Wei-hu, Zhao; Ke-fan, Chen

    2017-06-01

    In this paper, the performance of mixed free-space optical (FSO)/radio frequency (RF) systems is presented based on decode-and-forward relaying. The Exponentiated Weibull fading channel with pointing-error effects is adopted for the atmospheric fluctuation of the FSO channel, and the RF link undergoes Nakagami-m fading. We derive the analytical expression for the cumulative distribution function (CDF) of the equivalent signal-to-noise ratio (SNR). Novel mathematical expressions for the outage probability and average bit-error rate (BER) are developed based on the Meijer G-function. The analytical results show an accurate match to the Monte-Carlo simulation results. The outage and BER performance of the mixed system with decode-and-forward relaying are investigated considering atmospheric turbulence and pointing-error conditions. The effect of aperture averaging is evaluated in all atmospheric turbulence conditions as well.

  12. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  13. Study of a Satellite Attitude Control System Using Integrating Gyros as Torque Sources

    NASA Technical Reports Server (NTRS)

    White, John S.; Hansen, Q. Marion

    1961-01-01

    This report considers the use of single-degree-of-freedom integrating gyros as torque sources for precise control of satellite attitude. Some general design criteria are derived and applied to the specific example of the Orbiting Astronomical Observatory. The results of the analytical design are compared with the results of an analog computer study and also with experimental results from a low-friction platform. The steady-state and transient behavior of the system, as determined by the analysis, by the analog study, and by the experimental platform agreed quite well. The results of this study show that systems using integrating gyros for precise satellite attitude control can be designed to have a reasonably rapid and well-damped transient response, as well as very small steady-state errors. Furthermore, it is shown that the gyros act as rate sensors, as well as torque sources, so that no rate stabilization networks are required, and when no error sensor is available, the vehicle is still rate stabilized. Hence, it is shown that a major advantage of a gyro control system is that when the target is occulted, an alternate reference is not required.

  14. Determination of the precision error of the pulmonary artery thermodilution catheter using an in vitro continuous flow test rig.

    PubMed

    Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M

    2011-01-01

    Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision, and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig was assembled through which water circulated at different constant rates, with ports to insert catheters into a flow chamber. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. The coefficient of variation (CV) was calculated for each set and averaged. Between-catheter-systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). The CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3%, and ± 13.0% for triplicate readings. The precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single and ± 13.0% for triplicate readings, which is similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
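
    The quoted figures follow from combining the two components in quadrature, with averaging over n repeated readings shrinking only the random part by sqrt(n); the short check below reproduces the ± 15.3% and ± 13.0% values.

    ```python
    # Combine random and systematic precision error components in quadrature.
    import math

    def precision_error(random_pct, systematic_pct, n_readings=1):
        random_eff = random_pct / math.sqrt(n_readings)   # averaging helps here
        return math.hypot(random_eff, systematic_pct)     # systematic part stays

    print(precision_error(10.0, 11.6))     # ~15.3% for a single reading
    print(precision_error(10.0, 11.6, 3))  # ~13.0% for triplicate readings
    ```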

  16. Dissociable effects of surprising rewards on learning and memory.

    PubMed

    Rouhani, Nina; Norman, Kenneth A; Niv, Yael

    2018-03-19

    Reward-prediction errors track the extent to which rewards deviate from expectations, and they aid in learning. How do such errors in prediction interact with memory for the rewarding episode? Existing findings point to both cooperative and competitive interactions between learning and memory mechanisms. Here, we investigated whether learning about rewards in a high-risk context, with frequent, large prediction errors, would give rise to higher-fidelity memory traces for rewarding events than learning in a low-risk context. Experiment 1 showed that recognition was better for items associated with larger absolute prediction errors during reward learning. Larger prediction errors also led to higher rates of learning about rewards. Interestingly, we did not find a relationship between learning rate for reward and recognition-memory accuracy for items, suggesting that these two effects of prediction errors were caused by separate underlying mechanisms. In Experiment 2, we replicated these results with a longer task that posed stronger memory demands and allowed for more learning. We also showed improved source and sequence memory for items within the high-risk context. In Experiment 3, we controlled for the difficulty of reward learning in the risk environments, again replicating the previous results. Moreover, this control revealed that the high-risk context enhanced item-recognition memory beyond the effect of prediction errors. In summary, our results show that prediction errors boost both episodic item memory and incremental reward learning, but the two effects are likely mediated by distinct underlying systems. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. A Feasibility Study for Measuring Accurate Chest Compression Depth and Rate on Soft Surfaces Using Two Accelerometers and Spectral Analysis

    PubMed Central

    Gutiérrez, J. J.; Russell, James K.

    2016-01-01

    Background. Cardiopulmonary resuscitation (CPR) feedback devices are being increasingly used. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate based on two accelerometers in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second at the manikin's back. Chest displacement and mattress displacement were calculated every 2 seconds from the spectral analysis of the corresponding accelerations and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm on the foam and 1.7 mm on the sprung mattress (p < 0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without backboard for foam and sprung mattresses, respectively (p < 0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated for mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces. PMID:27999808
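
    One way to recover displacement from acceleration in the frequency domain, as the abstract describes, is to scale each Fourier component by -1/(2*pi*f)^2 inside a physiological band (double integration). The sketch below illustrates that generic idea, not necessarily the authors' exact algorithm; the sampling rate, band limits and synthetic signals are assumptions.

    ```python
    import numpy as np

    def displacement_from_acceleration(acc, fs, f_lo=0.5, f_hi=20.0):
        """Double integration in the frequency domain: each Fourier component
        A(f) of acceleration maps to displacement -A(f) / (2*pi*f)^2.
        Restricting to a band avoids dividing by near-zero frequencies."""
        n = len(acc)
        spec = np.fft.rfft(acc)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        disp_spec = np.zeros_like(spec)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        disp_spec[band] = -spec[band] / (2 * np.pi * freqs[band]) ** 2
        return np.fft.irfft(disp_spec, n)

    # Hypothetical 2 s window at 250 Hz: subtracting the back (mattress)
    # motion from the chest motion leaves the sternal-spinal compression.
    fs = 250.0
    t = np.arange(0, 2.0, 1.0 / fs)
    chest_acc = np.sin(2 * np.pi * 2.0 * t)      # stand-ins for measured data
    back_acc = 0.3 * np.sin(2 * np.pi * 2.0 * t)
    compression = displacement_from_acceleration(chest_acc - back_acc, fs)
    print(compression.max())   # metres, for acceleration in m/s^2
    ```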

  18. Enhanced autocompensating quantum cryptography system.

    PubMed

    Bethune, Donald S; Navarro, Martha; Risk, William P

    2002-03-20

    We have improved the hardware and software of our autocompensating system for quantum key distribution by replacing bulk optical components at the end stations with fiber-optic equivalents and by implementing software that synchronizes end-station activities, communicates basis choices, corrects errors, and performs privacy amplification over a local area network. The all-fiber-optic arrangement provides stable, efficient, and high-contrast routing of the photons. The low bit error rate leads to high error-correction efficiency and minimizes data sacrifice during privacy amplification. Characterization measurements made on a number of commercial avalanche photodiodes are presented that highlight the need for improved devices tailored specifically for quantum information applications. A scheme for frequency shifting the photons returning from Alice's station, to allow them to be distinguished from backscattered noise photons, is also described.

  19. 20-Gbps optical LiFi transport system.

    PubMed

    Ying, Cheng-Ling; Lu, Hai-Han; Li, Chung-Yi; Cheng, Chun-Jen; Peng, Peng-Chun; Ho, Wen-Jeng

    2015-07-15

    A 20-Gbps optical light-based WiFi (LiFi) transport system employing a vertical-cavity surface-emitting laser (VCSEL) and an external light injection technique with a 16-quadrature amplitude modulation (QAM)-orthogonal frequency-division multiplexing (OFDM) modulating signal is proposed. Good bit error rate (BER) performance and a clear constellation map are achieved in the proposed optical LiFi transport system. An optical LiFi transport system delivering a 16-QAM-OFDM signal over a 6-m free-space link at a data rate of 20 Gbps is successfully demonstrated. Such a 20-Gbps optical LiFi transport system provides the advantage of a free-space communication link for high data rates, which can accelerate visible laser light communication (VLLC) deployment.

  20. Adaptive selective relaying in cooperative free-space optical systems over atmospheric turbulence and misalignment fading channels.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2014-06-30

    In this paper, a novel adaptive cooperative protocol with multiple relays using detect-and-forward (DF) over atmospheric turbulence channels with pointing errors is proposed. The adaptive DF cooperative protocol analyzed here is based on the selection of the optical path, source-destination or different source-relay links, with the greater value of fading gain or irradiance, maintaining a high diversity order. Closed-form asymptotic bit error-rate (BER) expressions are obtained for a cooperative free-space optical (FSO) communication system with Nr relays, when the irradiance of the transmitted optical beam is susceptible either to a wide range of turbulence conditions, following a gamma-gamma distribution of parameters α and β, or to pointing errors, following a misalignment fading model in which the effects of beam width, detector size and jitter variance are considered. The obtained results corroborate a greater robustness for different link distances and pointing errors compared with similar cooperative schemes or equivalent multiple-input multiple-output (MIMO) systems. Simulation results further confirm the accuracy and usefulness of the derived expressions.
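
    For readers unfamiliar with the gamma-gamma model named above: irradiance is commonly taken as the product of two independent unit-mean gamma variates (small- and large-scale turbulence). A minimal Monte Carlo sketch of on-off-keying BER under that fading follows; the α, β and SNR values are illustrative and the half-amplitude threshold assumes perfect channel knowledge, so this is not the paper's relaying scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def gamma_gamma_irradiance(alpha, beta, n):
        """Unit-mean gamma-gamma fading samples: product of two independent
        unit-mean gamma variates (small- and large-scale turbulence)."""
        x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
        y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
        return x * y

    def ook_ber(snr_db, alpha, beta, n=200_000):
        """Monte Carlo BER of on-off keying through gamma-gamma turbulence,
        thresholding midway between the two known signal levels."""
        snr = 10 ** (snr_db / 10.0)
        irr = gamma_gamma_irradiance(alpha, beta, n)
        bits = rng.integers(0, 2, n)
        noise = rng.normal(0.0, 1.0 / np.sqrt(snr), n)
        received = bits * irr + noise
        decided = received > irr / 2.0
        return np.mean(decided != bits)

    # Illustrative moderate-turbulence parameters (not taken from the paper):
    for snr_db in (10, 15, 20):
        print(snr_db, ook_ber(snr_db, alpha=4.0, beta=2.0))
    ```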

  1. Using a Direct Instruction Flashcard System with Two Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Glover, Patti; McLaughlin, Thomas; Derby, K. Mark; Gower, Jan

    2010-01-01

    Introduction: The use of Direct Instruction (DI) flashcards has been suggested as an effective classroom intervention procedure. The present case report examined the use of DI flashcards with two adolescents with learning disabilities. Objectives: The purpose of this research was to increase the correct rate and decrease the error rate for…

  2. Depicting mass flow rate of R134a/LPG refrigerant through straight and helical coiled adiabatic capillary tubes of vapor compression refrigeration system using artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Gill, Jatinder; Singh, Jagdev

    2018-07-01

    In this work, an experimental investigation is carried out with an R134a and LPG refrigerant mixture to depict the mass flow rate through straight and helical coil adiabatic capillary tubes in a vapor compression refrigeration system. Various experiments were conducted under steady-state conditions by changing the capillary tube length, inner diameter, coil diameter and degree of subcooling. The results showed that the mass flow rate through the helical coil capillary tube was lower than through the straight capillary tube by about 5-16%. Dimensionless correlation and artificial neural network (ANN) models were developed to predict the mass flow rate. The dimensionless correlation and ANN model predictions agreed well with the experimental results, yielding an absolute fraction of variance of 0.961 and 0.988, a root mean square error of 0.489 and 0.275, and a mean absolute percentage error of 4.75% and 2.31%, respectively. The results suggest that the ANN model gives better statistical predictions than the dimensionless correlation model.
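
    The three fit statistics quoted above have standard definitions; a small sketch under those common definitions (the sample data are hypothetical, not the paper's measurements):

    ```python
    import numpy as np

    def regression_metrics(y_true, y_pred):
        """Fit statistics under common definitions: an R2-style absolute
        fraction of variance, the root mean square error, and the mean
        absolute percentage error (%)."""
        y_true = np.asarray(y_true, float)
        y_pred = np.asarray(y_pred, float)
        sse = np.sum((y_true - y_pred) ** 2)
        r2 = 1.0 - sse / np.sum((y_true - np.mean(y_true)) ** 2)
        rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
        mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
        return r2, rmse, mape

    # Hypothetical measured vs predicted mass flow rates (g/s):
    measured = [2.1, 3.4, 4.0, 5.2, 6.3]
    predicted = [2.0, 3.6, 3.9, 5.4, 6.1]
    print(regression_metrics(measured, predicted))
    ```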

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsuta, Y; Tohoku University Graduate School of Medicine, Sendai, Miyagi; Kadoya, N

    Purpose: In this study, we developed a system to calculate, in real time and without an additional treatment planning system (TPS) calculation, a three-dimensional (3D) dose that reflects the dosimetric error caused by leaf miscalibration for head and neck and prostate volumetric modulated arc therapy (VMAT). Methods: An original system for calculating the dosimetric error caused by leaf miscalibration, based on Clarkson dose calculation, was developed in MATLAB (MathWorks, Natick, MA). Our program first calculates point doses at the isocenter, using Clarkson dose calculation, for the baseline VMAT plan and for a modified plan generated by inducing MLC errors that enlarge the aperture size by 1.0 mm. Second, the error-induced 3D dose is generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head and neck and prostate plans, between our method and the TPS-calculated error-induced 3D dose, the 3D gamma passing rates (0.5%/2 mm, global) were 97.6±0.6% and 98.0±0.4%. The percentage dose changes in the dose-volume histogram parameter of mean dose on the target volume were 0.1±0.5% and 0.4±0.3%, and in generalized equivalent uniform dose on the target volume were −0.2±0.5% and 0.2±0.3%. Conclusion: The erroneous 3D dose calculated by our method is useful for checking the dosimetric error caused by leaf miscalibration before pretreatment patient QA dosimetry checks.

  4. Improving TCP Network Performance by Detecting and Reacting to Packet Reordering

    NASA Technical Reports Server (NTRS)

    Kruse, Hans; Ostermann, Shawn; Allman, Mark

    2003-01-01

    There are many factors governing the performance of TCP-based applications traversing satellite channels. The end-to-end performance of TCP is known to be degraded by the reordering, delay, noise and asymmetry inherent in geosynchronous systems. This result has been largely based on experiments that evaluate the performance of TCP in single-flow tests. While single-flow tests are useful for deriving information on the theoretical behavior of TCP and allow for easy diagnosis of problems, they do not represent a broad range of realistic situations and therefore cannot be used to authoritatively comment on performance issues. The experiments discussed in this report test TCP's performance in a more dynamic environment, with competing traffic flows from hundreds of TCP connections running simultaneously across the satellite channel. Another aspect we investigate is TCP's reaction to bit errors on satellite channels. TCP interprets loss as a sign of network congestion. This causes TCP to reduce its transmission rate, leading to reduced performance when loss is due to corruption. We allowed the bit error rate on our satellite channel to vary widely and tested the performance of TCP as a function of these bit error rates. Our results show that the average performance of TCP on satellite channels is good even under conditions of loss as high as bit error rates of 10^-5.

  5. Models for H₃ receptor antagonist activity of sulfonylurea derivatives.

    PubMed

    Khatri, Naveen; Madan, A K

    2014-03-01

    The histamine H₃ receptor has been perceived as a promising target for the treatment of various central and peripheral nervous system diseases. In the present study, a wide variety of 60 2D and 3D molecular descriptors (MDs) were successfully utilized to develop models for predicting the antagonist activity of sulfonylurea derivatives at histamine H₃ receptors. Models were developed through decision tree (DT), random forest (RF) and moving average analysis (MAA). Dragon software version 6.0.28 was employed to calculate the values of the diverse MDs of each analogue in the data set. The DT classified and correctly predicted the input data with an impressive non-error rate of 94% in the training set and 82.5% during cross-validation. The RF correctly classified the analogues into active and inactive with a non-error rate of 79.3%. The MAA-based models predicted the antagonist histamine H₃ receptor activity with a non-error rate of up to 90%. Active ranges of the proposed MAA-based models not only exhibited high potency but also showed improved safety, as indicated by relatively high values of the selectivity index. The statistical significance of the models was assessed through sensitivity, specificity, non-error rate, Matthew's correlation coefficient and intercorrelation analysis. The proposed models offer vast potential for providing lead structures for the development of potent but safe H₃ receptor antagonist sulfonylurea derivatives. Copyright © 2013 Elsevier Inc. All rights reserved.
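
    The quality measures named above are all derived from a 2×2 confusion matrix. A minimal sketch with hypothetical counts (the abstract does not report the raw confusion matrix):

    ```python
    import math

    def classification_stats(tp, tn, fp, fn):
        """Sensitivity, specificity, non-error rate (accuracy), and
        Matthews correlation coefficient from a 2x2 confusion matrix."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        non_error_rate = (tp + tn) / (tp + tn + fp + fn)
        mcc = (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return sensitivity, specificity, non_error_rate, mcc

    # Hypothetical counts for a 60-analogue training set:
    print(classification_stats(tp=30, tn=26, fp=2, fn=2))
    ```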

  6. Radiological reporting that combines continuous speech recognition with error correction by transcriptionists.

    PubMed

    Ichikawa, Tamaki; Kitanosono, Takashi; Koizumi, Jun; Ogushi, Yoichi; Tanaka, Osamu; Endo, Jun; Hashimoto, Takeshi; Kawada, Shuichi; Saito, Midori; Kobayashi, Makiko; Imai, Yutaka

    2007-12-20

    We evaluated the usefulness of radiological reporting that combines continuous speech recognition (CSR) with error correction by transcriptionists. Four transcriptionists (two with more than 10 years' and two with less than 3 months' transcription experience) listened to the same 100 dictation files and created radiological reports using conventional transcription and a method that combined CSR with manual error correction by the transcriptionists. We compared the two groups and the two methods for accuracy and report creation time, and evaluated how accuracy rate and report creation time depended on the individual transcriptionist. We used a CSR system that did not require training to recognize the user's voice. We observed no significant difference in accuracy between the two groups or the two methods tested, though transcriptionists with greater experience transcribed faster than those with less experience using conventional transcription. Using the combined method, error correction speed was not significantly different between the two groups of transcriptionists with different levels of experience. Combining CSR with manual error correction by transcriptionists enabled convenient and accurate radiological reporting.

  7. Meteor burst communications for LPI applications

    NASA Astrophysics Data System (ADS)

    Schilling, D. L.; Apelewicz, T.; Lomp, G. R.; Lundberg, L. A.

    A technique that enhances the performance of meteor-burst communications is described. The technique, the feedback adaptive variable rate (FAVR) system, maintains a feedback channel that allows the transmitted bit rate to mimic the time behavior of the received power so that a constant bit energy is maintained. This results in a constant probability of bit error in each transmitted bit. Experimentally determined meteor-burst channel characteristics and FAVR system simulation results are presented.
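
    The constant-energy-per-bit idea admits a one-line statement: if the receiver feeds back the received power P, the transmitter can sustain a bit rate R = P / Eb for whatever energy per bit Eb its detector requires, so the per-bit error probability stays fixed as the trail decays. A toy sketch (the exponential decay constant and power levels are assumed, not taken from the report):

    ```python
    import math

    def favr_bit_rate(received_power_w, target_eb_j):
        """FAVR idea: with received power P and required energy per bit Eb,
        the sustainable bit rate is R = P / Eb; tracking P with R holds Eb,
        and hence the per-bit error probability, constant."""
        return received_power_w / target_eb_j

    # Hypothetical exponentially decaying meteor-burst trail (assumed form):
    for t_ms in (0, 100, 200, 400):
        power = 1e-9 * math.exp(-t_ms / 150.0)                # watts
        print(t_ms, favr_bit_rate(power, target_eb_j=2e-14))  # bits/second
    ```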

  8. A negentropy minimization approach to adaptive equalization for digital communication systems.

    PubMed

    Choi, Sooyong; Lee, Te-Won

    2004-07-01

    In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve on a linear equalizer based on minimizing the minimum mean squared error (MMSE). Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance and accuracy compared with traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and an alternative one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has characteristics similar to those of the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
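
    As context for the MMSE baseline the paper compares against, the sketch below trains a conventional finite-length equalizer with LMS, a stochastic-gradient minimizer of the squared error. It is not the paper's NEGMIN update, whose criterion would replace the squared-error cost with an approximate negentropy of the error; the channel, noise level and tap count are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def lms_equalizer(received, training, n_taps=11, mu=0.01):
        """Adapt equalizer taps by LMS to minimize the mean squared error
        against known training symbols (the classic MMSE-style baseline)."""
        w = np.zeros(n_taps)
        for n in range(n_taps - 1, len(training)):
            x = received[n - n_taps + 1:n + 1][::-1]  # newest sample first
            err = training[n] - w @ x                 # estimation error
            w += mu * err * x                         # stochastic gradient
        return w

    # Hypothetical BPSK link through a short minimum-phase channel:
    symbols = rng.choice([-1.0, 1.0], size=5000)
    channel = np.array([1.0, 0.4, 0.2])
    received = np.convolve(symbols, channel)[:len(symbols)]
    received += rng.normal(0.0, 0.05, len(received))
    w = lms_equalizer(received, symbols)
    equalized = np.convolve(received, w)[:len(symbols)]
    print("training-set BER:", np.mean(np.sign(equalized) != symbols))
    ```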

  9. Smartphone-based photoplethysmographic imaging for heart rate monitoring.

    PubMed

    Alafeef, Maha

    2017-07-01

    The purpose of this study is to make use of visible-light reflected-mode photoplethysmographic (PPG) imaging for heart rate (HR) monitoring via smartphones. The system uses the built-in camera of the mobile phone to capture video from the subject's index fingertip. The video is processed, and the PPG signal resulting from the video stream processing is used to calculate the subject's heart rate. Records from 19 subjects were used to evaluate the system's performance. The HR values obtained by the proposed method were compared with the actual HR. The obtained results show an accuracy of 99.7% and a maximum absolute error of 0.4 beats/min, with most of the absolute errors lying in the range of 0.04-0.3 beats/min. Given the encouraging results, this type of HR measurement can be adopted with great benefit, especially for personal use or home-based care. The proposed method represents an efficient portable solution for accurate HR detection and recording.
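
    A common pipeline for this kind of measurement reduces each video frame to one sample (the mean fingertip intensity) and reads the heart rate off the dominant spectral peak in the physiological band. The sketch below illustrates that generic pipeline, not necessarily the authors' exact processing; the frame rate, band limits and synthetic clip are assumptions.

    ```python
    import numpy as np

    def heart_rate_from_ppg(frames, fps):
        """Average each frame to one PPG sample, then locate the dominant
        spectral peak in 0.7-3.5 Hz (42-210 beats/min)."""
        ppg = frames.mean(axis=(1, 2))        # frames: (n, height, width)
        ppg = ppg - ppg.mean()                # remove the DC component
        spec = np.abs(np.fft.rfft(ppg))
        freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 3.5)
        peak = freqs[band][np.argmax(spec[band])]
        return 60.0 * peak                    # Hz -> beats per minute

    # Hypothetical 10 s clip at 30 fps with a 1.2 Hz (72 bpm) pulsation:
    fps, hr_hz = 30.0, 1.2
    t = np.arange(0, 10, 1 / fps)
    pulse = 2 * np.sin(2 * np.pi * hr_hz * t)
    frames = 100 + pulse[:, None, None] * np.ones((1, 8, 8))
    print(heart_rate_from_ppg(frames, fps))   # ~72 bpm
    ```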

  10. Evaluation of analytical errors in a clinical chemistry laboratory: a 3 year experience.

    PubMed

    Sakyi, As; Laing, Ef; Ephraim, Rk; Asibey, Of; Sadique, Ok

    2015-01-01

    Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle, including the pre-, intra-, and post-analytical phases, and we discuss strategies pertinent to our setting to minimize their occurrence. We describe the pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January 2010 to December 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). A total of 589,510 tests was performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90) over the years. Errors are embedded within our total testing process, especially the pre-analytical and post-analytical phases. Strategic measures, including quality assessment programs for staff involved in pre-analytical processes, should be intensified.

  11. The Extended HANDS Characterization and Analysis of Metric Biases

    NASA Astrophysics Data System (ADS)

    Kelecy, T.; Knox, R.; Cognion, R.

    The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low-cost, high-accuracy optical telescopes designed to support space surveillance and the development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub-arc-second astrometric accuracy. The design and analysis team are in the process of characterizing the system through development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arc-second biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, the Earth-fixed topocentric frame, the topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods when actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. In turn, the actual right ascension and declination calibration residuals are also mapped to other relevant reference frames in an attempt to validate the source of the bias errors. These results will serve as the basis for more focused investigation into specific components embedded in the system and system processes that might contain the source of the observed biases.

  12. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    PubMed Central

    Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco

    2014-01-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance between two points of a given environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared with the noise added during simulation. The results show that for sufficiently large ratios of correlation distance to Reference Station (RS) separation, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
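
    Concretely, with a zero-mean Gauss-Markov prior the LMMSE estimator is x_hat = C_x (C_x + C_n)^(-1) y, where C_x has entries sigma^2 exp(-|d_i - d_j| / d_corr). A minimal sketch under those assumptions; the station positions, variances and correlation distance are invented for illustration and are not the paper's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def lmmse_estimate(y, positions, corr_dist, sigma_dc, sigma_noise):
        """LMMSE estimate of true DCs from noisy measurements, assuming an
        exponential (Gauss-Markov) spatial correlation of the true field."""
        d = np.abs(positions[:, None] - positions[None, :])
        c_x = sigma_dc**2 * np.exp(-d / corr_dist)
        c_n = sigma_noise**2 * np.eye(len(y))
        return c_x @ np.linalg.solve(c_x + c_n, y)

    # Hypothetical 1-D network of reference stations (km) and simulated DCs:
    positions = np.array([0.0, 25.0, 60.0, 110.0, 150.0])
    d = np.abs(positions[:, None] - positions[None, :])
    c_x = 1.0 * np.exp(-d / 80.0)                  # corr_dist = 80 km
    true_dc = rng.multivariate_normal(np.zeros(5), c_x)
    y = true_dc + rng.normal(0.0, 0.5, 5)
    est = lmmse_estimate(y, positions, 80.0, sigma_dc=1.0, sigma_noise=0.5)
    print(np.abs(y - true_dc).mean(), np.abs(est - true_dc).mean())
    ```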

  13. Sex differences in the shoulder joint position sense acuity: a cross-sectional study.

    PubMed

    Vafadar, Amir K; Côté, Julie N; Archambault, Philippe S

    2015-09-30

    Work-related musculoskeletal disorders (WMSD) are the most expensive form of work disability. Female sex has been considered an individual risk factor for the development of WMSD, specifically in the neck and shoulder region. One of the factors that might contribute to the higher injury rate in women is a possible difference in neuromuscular control. Accordingly, the purpose of this study was to estimate the effect of sex on shoulder joint position sense (JPS) acuity (as a part of shoulder neuromuscular control) in healthy individuals. Twenty-eight healthy participants, 14 females and 14 males, were recruited for this study. To test position sense acuity, subjects were asked to flex their dominant shoulder to one of three pre-defined angle ranges (low, mid and high ranges) with eyes closed, hold their arm in that position for three seconds, return to the starting position and then immediately replicate the same joint flexion angle; the difference between the reproduced and original angle was taken as the measure of position sense error. The errors were measured using a Vicon motion capture system. Subjects reproduced nine positions in total (3 ranges × 3 trials each). Calculation of absolute repositioning error (magnitude of error) showed no significant difference between men and women (p-value ≥ 0.05). However, analysis of the direction of error (constant error) showed a significant difference between the sexes, as women tended mostly to overestimate the target, whereas men tended to both overestimate and underestimate it (p-value ≤ 0.01, observed power = 0.79). The results also showed that men had a significantly more variable error, indicating more variability in their position sense compared with women (p-value ≤ 0.05, observed power = 0.78). The differences observed in the constant JPS error suggest that men and women might use different neuromuscular control strategies in the upper limb. In addition, the higher JPS variability observed in men might be one of the factors contributing to their lower rate of musculoskeletal disorders compared with women. The results of this study showed that shoulder position sense, as part of the neuromuscular control system, differs between men and women. This finding can help us better understand the reasons behind the higher rate of musculoskeletal disorders in women, especially in working environments.

  14. MBM fuel feeding system design and evaluation for FBG pilot plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, William A., E-mail: bill.campbell@usask.ca; Fonstad, Terry; Pugsley, Todd

    2012-06-15

    Highlights: A 1-5 g/s fuel feeding system for a pilot-scale FBG was designed, built and tested. Multiple conveying stages improve pressure balancing, flow control and stability. A secondary conveyor stage reduced output irregularity from 47% to 15%. Pneumatic air sparging was effective in dealing with the poor flowability of MBM powder. The pneumatic injection port plugs with char at the gasification temperature of 850 °C. - Abstract: A biomass fuel feeding system has been designed, constructed and evaluated for a fluidized bed gasifier (FBG) pilot plant at the University of Saskatchewan (Saskatoon, SK, Canada). The system was designed for meat and bone meal (MBM) to be injected into the gasifier at a mass flow-rate range of 1-5 g/s. The designed system consists of two stages of screw conveyors, including a metering stage which controlled the flow-rate of fuel, a rotary airlock, and an injection conveyor stage, which delivered that fuel at a consistent rate to the FBG. The rotary airlock, which was placed between these conveyors, proved unable to maintain a pressure seal, so the entire conveying system was sealed and pressurized. A pneumatic injection nozzle was also fabricated, tested and fitted to the end of the injection conveyor for direct injection and dispersal into the fluidized bed. The 150 mm metering screw conveyor was shown to effectively control the mass output rate of the system, across a fuel output range of 1-25 g/s, while the addition of the 50 mm injection screw conveyor reduced the irregularity (error) of the system output rate from 47% to 15%. Although material plugging was found to be an issue in the inlet hopper to the injection conveyor, the addition of air sparging ports and a system to pulse air into those ports successfully eliminated this issue. The addition of the pneumatic injection nozzle reduced the output irregularity further, to 13%, with an air supply of 50 slpm as the minimum required to drive this injector. After commissioning of this final system to the FBG reactor, however, the injection nozzle was found to plug with char, and it was subsequently removed from the system. Final operation of the reactor continues satisfactorily with the two screw conveyors operating at matching pressure with the fluidized bed, with the output rate of the system estimated from system characteristic equations and confirmed by static weight measurements made before and after testing. The error rate of this method is approximately 10%, which is slightly better than the estimated error rate of 15% for the conveyor system. The reliability of this prediction method relies upon the relative consistency of the physical properties of MBM with respect to its bulk density and feeding characteristics.

  15. Monte Carlo simulations of the impact of troposphere, clock and measurement errors on the repeatability of VLBI positions

    NASA Astrophysics Data System (ADS)

    Pany, A.; Böhm, J.; MacMillan, D.; Schuh, H.; Nilsson, T.; Wresnik, J.

    2011-01-01

    Within the International VLBI Service for Geodesy and Astrometry (IVS), Monte Carlo simulations have been carried out to design the next-generation VLBI system ("VLBI2010"). Simulated VLBI observables were generated taking into account the three most important stochastic error sources in VLBI, i.e. wet troposphere delay, station clock, and measurement error. Based on realistic physical properties of the troposphere and clocks, we ran simulations to investigate the influence of the troposphere on VLBI analyses, and to gain information about the role of clock performance and measurement errors of the receiving system in reaching VLBI2010's goal of mm position accuracy on a global scale. Our simulations confirm that the wet troposphere delay is the most important of these three error sources. We did not observe significant improvement of geodetic parameters when the clocks were simulated with an Allan standard deviation better than 1 × 10^-14 at 50 min, and we found the impact of measurement errors to be relatively small compared with the impact of the troposphere. Along with simulations to test different network sizes, scheduling strategies, and antenna slew rates, these studies were used as a basis for the definition and specification of the VLBI2010 antennas and recording system, and they might also serve as an example for other space geodetic techniques.

  16. Elimination of Emergency Department Medication Errors Due To Estimated Weights.

    PubMed

    Greenwalt, Mary; Griffen, David; Wilkerson, Jim

    2017-01-01

    From 7/2014 through 6/2015, 10 emergency department (ED) medication dosing errors were reported through the electronic incident reporting system of an urban academic medical center. Analysis of these medication errors identified inaccurate estimated weights on patients as the root cause. The goal of this project was to reduce weight-based dosing medication errors due to inaccurate estimated weights on patients presenting to the ED. Chart review revealed that 13.8% of estimated weights documented on admitted ED patients varied more than 10% from subsequent actual admission weights recorded. A random sample of 100 charts containing estimated weights revealed 2 previously unreported significant medication dosage errors (a significant-error rate of 0.02). Key improvements included removing barriers to weighing ED patients, storytelling to engage staff and change culture, and removal of the estimated-weight documentation field from the ED electronic health record (EHR) forms. With these improvements, estimated weights on ED patients, and the resulting medication errors, were eliminated.

  17. Development of an errorable car-following driver model

    NASA Astrophysics Data System (ADS)

    Yang, H.-H.; Peng, H.

    2010-06-01

    An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates a human driver's functions and can generate both nominal (error-free) and devious (with-error) behaviours. This model was developed for the evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on statistical analysis of the driving data. Finally, the time delay of human drivers was estimated through a recursive least-squares identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar to, but somewhat higher than, that reported in traffic statistics.

  18. Spread-spectrum multiple access using wideband noncoherent MFSK

    NASA Technical Reports Server (NTRS)

    Ha, Tri T.; Pratt, Timothy; Maggenti, Mark A.

    1987-01-01

    Two spread-spectrum multiple access systems which use wideband M-ary frequency shift keying (FSK) (MFSK) as the primary modulation are presented. A bit error rate performance analysis is presented and system throughput is calculated for sample C band and Ku band satellite systems. Sample link analyses are included to illustrate power and adjacent satellite interference considerations in practical multiple access systems.

  19. Robust Timing Synchronization in Aeronautical Mobile Communication Systems

    NASA Technical Reports Server (NTRS)

    Xiong, Fu-Qin; Pinchak, Stanley

    2004-01-01

    This work details a study of robust synchronization schemes suitable for satellite-to-mobile aeronautical applications. A new scheme, the Modified Sliding Window Synchronizer (MSWS), is devised and compared with existing schemes, including the traditional Early-Late Gate Synchronizer (ELGS), the Gardner Zero-Crossing Detector (GZCD), and the Sliding Window Synchronizer (SWS). Performance of the synchronization schemes is evaluated by a set of metrics that indicate performance in digital communications systems. The metrics are convergence time, mean square phase error (or root-mean-square phase error), lowest SNR for locking, initial frequency offset performance, midstream frequency offset performance, and system complexity. The performance of the synchronizers is evaluated by means of Matlab simulation models. A simulation platform is devised to model the satellite-to-mobile aeronautical channel, consisting of a Quadrature Phase Shift Keying modulator, an additive white Gaussian noise channel, and a demodulator front end. Simulation results show that the MSWS provides the most robust performance at the cost of system complexity. The GZCD provides a good tradeoff between robustness and system complexity for communication systems that require high symbol rates or low overall system costs. The ELGS has a high system complexity despite its average performance. Overall, the SWS, originally designed for multi-carrier systems, performs very poorly in single-carrier communications systems. Table 5.1 in Section 5 provides a ranking of each of the synchronization schemes in terms of the metrics set forth in Section 4.1. Details of the comparison are given in Section 5. Based on the results presented in Table 5.1, it is safe to say that the most robust synchronization scheme examined in this work is the high-sample-rate Modified Sliding Window Synchronizer. A close second is its low-sample-rate cousin. The tradeoff between complexity and lowest mean-square phase error determines the rankings of the Gardner Zero-Crossing Detector and both versions of the Early-Late Gate Synchronizer. The least robust models are the high- and low-sample-rate Sliding Window Synchronizers. Consequently, the recommended replacement synchronizer for NASA's Advanced Air Transportation Technologies mobile aeronautical communications system is the high-sample-rate Modified Sliding Window Synchronizer. By incorporating this synchronizer into their system, NASA can be assured that the system will be operational in extremely adverse conditions. The quick convergence time of the MSWS should allow the use of high-level protocols. However, if NASA feels that reduced system complexity is the most important aspect of the replacement synchronizer, the Gardner Zero-Crossing Detector would be the best choice.

  20. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    PubMed

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  1. Glaucoma and Driving: On-Road Driving Characteristics

    PubMed Central

    Wood, Joanne M.; Black, Alex A.; Mallon, Kerry; Thomas, Ravi; Owsley, Cynthia

    2016-01-01

    Purpose To comprehensively investigate the types of driving errors and locations that are most problematic for older drivers with glaucoma compared to those without glaucoma using a standardized on-road assessment. Methods Participants included 75 drivers with glaucoma (mean = 73.2±6.0 years) with mild to moderate field loss (better-eye MD = -1.21 dB; worse-eye MD = -7.75 dB) and 70 age-matched controls without glaucoma (mean = 72.6 ± 5.0 years). On-road driving performance was assessed in a dual-brake vehicle by an occupational therapist using a standardized scoring system which assessed the types of driving errors and the locations where they were made and the number of critical errors that required an instructor intervention. Driving safety was rated on a 10-point scale. Self-reported driving ability and difficulties were recorded using the Driving Habits Questionnaire. Results Drivers with glaucoma were rated as significantly less safe, made more driving errors, and had almost double the rate of critical errors than those without glaucoma. Driving errors involved lane positioning and planning/approach, and were significantly more likely to occur at traffic lights and yield/give-way intersections. There were few between group differences in self-reported driving ability. Conclusions Older drivers with glaucoma with even mild to moderate field loss exhibit impairments in driving ability, particularly during complex driving situations that involve tactical problems with lane-position, planning ahead and observation. These results, together with the fact that these drivers self-report their driving to be relatively good, reinforce the need for evidence-based on-road assessments for evaluating driving fitness. PMID:27472221

  2. Glaucoma and Driving: On-Road Driving Characteristics.

    PubMed

    Wood, Joanne M; Black, Alex A; Mallon, Kerry; Thomas, Ravi; Owsley, Cynthia

    2016-01-01

    To comprehensively investigate the types of driving errors and locations that are most problematic for older drivers with glaucoma compared to those without glaucoma using a standardized on-road assessment. Participants included 75 drivers with glaucoma (mean = 73.2±6.0 years) with mild to moderate field loss (better-eye MD = -1.21 dB; worse-eye MD = -7.75 dB) and 70 age-matched controls without glaucoma (mean = 72.6 ± 5.0 years). On-road driving performance was assessed in a dual-brake vehicle by an occupational therapist using a standardized scoring system which assessed the types of driving errors and the locations where they were made and the number of critical errors that required an instructor intervention. Driving safety was rated on a 10-point scale. Self-reported driving ability and difficulties were recorded using the Driving Habits Questionnaire. Drivers with glaucoma were rated as significantly less safe, made more driving errors, and had almost double the rate of critical errors than those without glaucoma. Driving errors involved lane positioning and planning/approach, and were significantly more likely to occur at traffic lights and yield/give-way intersections. There were few between group differences in self-reported driving ability. Older drivers with glaucoma with even mild to moderate field loss exhibit impairments in driving ability, particularly during complex driving situations that involve tactical problems with lane-position, planning ahead and observation. These results, together with the fact that these drivers self-report their driving to be relatively good, reinforce the need for evidence-based on-road assessments for evaluating driving fitness.

  3. Machine Learned Replacement of N-Labels for Basecalled Sequences in DNA Barcoding.

    PubMed

    Ma, Eddie Y T; Ratnasingham, Sujeevan; Kremer, Stefan C

    2018-01-01

    This study presents a machine learning method that increases the number of identified bases in Sanger sequencing. The system post-processes a KB-basecalled chromatogram. It selects a recoverable subset of N-labels in the KB-called chromatogram to replace with basecalls (A, C, G, T). An N-label correction is defined given an additional read of the same sequence and a human-finished sequence. Corrections are added to the dataset when an alignment determines that the additional read and the human agree on the identity of the N-label. KB must also rate the replacement with quality value of in the additional read. Corrections are only available during system training. To develop the system, nearly 850,000 N-labels were obtained from Barcode of Life Datasystems, the premier database of genetic markers called DNA Barcodes. Increasing the number of correct bases improves reference sequence reliability, increases sequence identification accuracy, and assures analysis correctness. Keeping with barcoding standards, our system maintains an error rate of percent. Our system only applies corrections when it estimates a low rate of error. Tested on this data, our automation selects and recovers: 79 percent of N-labels from COI (animal barcode); 80 percent from matK and rbcL (plant barcodes); and 58 percent from non-protein-coding sequences (across eukaryotes).

  4. Symbol interval optimization for molecular communication with drift.

    PubMed

    Kim, Na-Rae; Eckford, Andrew W; Chae, Chan-Byoung

    2014-09-01

    In this paper, we propose a symbol interval optimization algorithm for molecular communication with drift. Proper symbol intervals are important in practical communication systems, since information needs to be sent as fast as possible with low error rates. There is a trade-off, however, between symbol intervals and inter-symbol interference (ISI) from Brownian motion. Thus, we find proper symbol interval values considering the ISI inside two kinds of blood vessels, and also suggest a no-ISI system for strong drift models. Finally, isomer-based molecule shift keying (IMoSK) is applied to calculate achievable data transmission rates (achievable rates, hereafter). Normalized achievable rates are also obtained and compared between the one-symbol-ISI and no-ISI systems.
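
    The trade-off can be made concrete with a standard result: the first-passage time of Brownian motion with positive drift v over distance d (diffusion coefficient D) follows an inverse Gaussian (Wald) law with mean d/v and shape d^2/(2D), so the fraction of molecules spilling past their own symbol slot falls as the symbol interval grows. A sketch with invented vessel-like parameters (not the paper's values):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def isi_fraction(symbol_interval, distance, drift, diffusion, n=100_000):
        """Fraction of molecules arriving after their own symbol slot.
        First-passage time of drifting Brownian motion is inverse Gaussian
        with mean d/v and shape d^2/(2D)."""
        mean = distance / drift
        shape = distance**2 / (2.0 * diffusion)
        arrivals = rng.wald(mean, shape, size=n)
        return np.mean(arrivals > symbol_interval)

    # Illustrative parameters: d = 10 um, v = 1 um/ms, D = 0.5 um^2/ms.
    for interval_ms in (10.0, 20.0, 40.0):
        print(interval_ms,
              isi_fraction(interval_ms, distance=10.0, drift=1.0, diffusion=0.5))
    ```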

  5. Cardiorespiratory system monitoring using a developed acoustic sensor.

    PubMed

    Abbasi-Kesbi, Reza; Valipour, Atefeh; Imani, Khadije

    2018-02-01

    This Letter proposes a wireless acoustic sensor for monitoring heartbeat and respiration rate based on the phonocardiogram (PCG). The developed sensor comprises a processor, a transceiver operating in the industrial, scientific and medical band at 2.54 GHz, and two capacitor microphones, one for recording the heartbeat and the other for the respiration rate. To evaluate the precision of the presented sensor in estimating heartbeat and respiration rate, the sensor was tested on different volunteers and the obtained results were compared with a gold standard as a reference. The results reveal root-mean-square errors of <2.27 beats/min for the heartbeat and <0.92 breaths/min for the respiration rate, while the standard deviations of the error are <1.26 and <0.63 for heartbeat and respiration rate, respectively. Also, the sensor estimates heart sounds from the obtained PCG signal with sensitivity and specificity of 98.1% and 98.3%, respectively, a 3% improvement over previous works. The results show that the sensor is an appropriate candidate for recognising abnormal conditions in the cardiorespiratory system.

  6. 47 CFR 101.75 - Involuntary relocation procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... engineering, equipment, site and FCC fees, as well as any legitimate and prudent transaction expenses incurred... reliability of their system. For digital data systems, reliability is measured by the percent of time the bit error rate (BER) exceeds a desired value, and for analog or digital voice transmissions, it is measured...

  7. 78 FR 18974 - Increasing Market and Planning Efficiency Through Improved Software; Notice of Technical...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-28

    ... bring together experts from diverse backgrounds and experiences including electric system operators... transmission switching; AC optimal power flow modeling; and use of active and dynamic transmission ratings. In... variability of the system, including forecast error? [cir] How can outage probability be captured in...

  8. Fast QC-LDPC code for free space optical communication

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong

    2017-02-01

    Free-space optical (FSO) communication systems use the atmosphere as a propagation medium. The atmospheric turbulence effects therefore lead to multiplicative noise related to signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast quasi-cyclic (QC) low-density parity-check (LDPC) code for FSO communication systems. As a linear block code based on a sparse matrix, QC-LDPC performance is extremely close to the Shannon limit. Current studies of LDPC codes in FSO communications have mainly focused on the Gaussian and Rayleigh channels; in this study, the LDPC code is designed over an atmospheric turbulence channel, which is neither Gaussian nor Rayleigh and is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled as logarithmic-normal and K-distributions, we designed a special QC-LDPC code and deduced the log-likelihood ratio (LLR). An irregular QC-LDPC code with variable rates for fast coding is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes, with high efficiency at low rates, stability at high rates, and a smaller number of decoding iterations. The results of belief propagation (BP) decoding show that the bit error rate (BER) decreases markedly as the signal-to-noise ratio (SNR) increases. Therefore, LDPC channel coding can effectively improve the performance of FSO. Moreover, the post-decoding BER continues to fall as the SNR increases, exhibiting no error-floor phenomenon.
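
    As one illustration of the LLRs a BP decoder consumes: for on-off keying in additive white Gaussian noise with known instantaneous irradiance I, the channel LLR reduces to (2yI - I^2)/(2*sigma^2), and the log-normal fading below is normalized to unit mean. The log-amplitude spread and noise level are assumptions, not the paper's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def ook_llr(y, irradiance, sigma_n):
        """Per-bit LLR log p(y|bit=1)/p(y|bit=0) for on-off keying in AWGN
        when the instantaneous irradiance I is known at the receiver."""
        return (2.0 * y * irradiance - irradiance**2) / (2.0 * sigma_n**2)

    s = 0.3                                               # assumed log std
    irr = rng.lognormal(mean=-s**2 / 2, sigma=s, size=8)  # unit-mean fading
    bits = rng.integers(0, 2, 8)
    y = bits * irr + rng.normal(0.0, 0.2, 8)              # received samples
    print(ook_llr(y, irr, sigma_n=0.2))                   # decoder inputs
    ```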

  9. A correction method for the axial maladjustment of transmission-type optical system based on aberration theory

    NASA Astrophysics Data System (ADS)

    Xu, Chunmei; Huang, Fu-yu; Yin, Jian-ling; Chen, Yu-dan; Mao, Shao-juan

    2016-10-01

    The influence of aberration on the misalignment of an optical system is considered fully, the deficiencies of the Gaussian-optics correction method are pointed out, and a correction method for misaligned transmission-type optical systems is proposed based on aberration theory. The variation of single-lens aberration caused by axial displacement is analyzed, and the aberration effect is defined. On this basis, by calculating the lens adjustment required by the image-position error and the magnification error, a misalignment-correction formula based on the aberration constraints is deduced mathematically. Taking a three-lens collimation system as an example, a test is carried out that validates the method and demonstrates its superiority.

  10. A zero-error operational video data compression system

    NASA Technical Reports Server (NTRS)

    Kutz, R. L.

    1973-01-01

    A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system became the only source of near-real-time very high resolution radiometer (VHRR) image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50 kilobit per second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible when no compression is used, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and thus a capability of doubling the data throughput of the system has been achieved.
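
    The abstract does not name the coder, but the "zero error" requirement means lossless compression: the decoder must reproduce the input exactly, and the roughly 2:1 gain comes from redundancy in smooth imagery. A generic illustration of one such scheme (delta encoding plus run-length coding; entirely hypothetical, not the flight algorithm):

    ```python
    def delta_rle_encode(samples):
        """Losslessly encode a sequence of samples as (delta, run) pairs.
        Smooth imagery yields long runs of identical deltas, so the output
        is usually much shorter; decoding reproduces the input exactly."""
        deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
        encoded = []
        for d in deltas:
            if encoded and encoded[-1][0] == d:
                encoded[-1] = (d, encoded[-1][1] + 1)
            else:
                encoded.append((d, 1))
        return encoded

    def delta_rle_decode(encoded):
        """Expand the runs and re-accumulate the deltas."""
        deltas = [d for d, run in encoded for _ in range(run)]
        samples, acc = [], 0
        for d in deltas:
            acc += d
            samples.append(acc)
        return samples

    scanline = [12, 12, 12, 13, 14, 14, 14, 14, 20, 20]  # hypothetical data
    code = delta_rle_encode(scanline)
    assert delta_rle_decode(code) == scanline            # zero error
    print(code)
    ```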

  11. Integrated model reference adaptive control and time-varying angular rate estimation for micro-machined gyroscopes

    NASA Astrophysics Data System (ADS)

    Tsai, Nan-Chyuan; Sue, Chung-Yang

    2010-02-01

    Owing to imposed but undesired accelerations such as quadrature error and cross-axis perturbation, a micro-machined gyroscope cannot be unconditionally kept at its resonant mode. Once the preset resonance is not sustained, the performance of the micro-gyroscope is accordingly degraded. In this article, a direct model reference adaptive control loop, integrated with a modified disturbance estimating observer (MDEO), is proposed to guarantee the resonant oscillations at the drive mode and to counterbalance the undesired disturbance mainly caused by quadrature error and cross-axis perturbation. The controller parameters are updated online according to the dynamic error between the MDEO output and the expected response. In addition, Lyapunov stability theory is employed to examine the stability of the closed-loop control system. Finally, the efficacy of the scheme in estimating the exerted time-varying angular rate, which is to be detected and measured by the gyroscope, is verified by intensive simulations.
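
    As a reduced illustration of the model-reference idea (without the paper's MDEO disturbance observer), the sketch below adapts two scalar gains by the Lyapunov-derived law theta1' = -gamma*e*r, theta2' = gamma*e*x so that a first-order plant with unknown parameters tracks a reference model; all numerical values are invented.

    ```python
    # Minimal model-reference adaptive control sketch for a scalar plant
    # dx = a*x + b*u (a, b "unknown", b > 0) tracking dxm = -am*xm + am*r.
    dt, am, gamma = 1e-3, 20.0, 50.0
    a, b = -5.0, 3.0                 # true plant parameters, never used by
    x = xm = 0.0                     # the controller itself
    theta1 = theta2 = 0.0            # adaptive feedforward/feedback gains
    for k in range(20_000):          # 20 s of simulated time
        r = 1.0 if (k * dt) % 1.0 < 0.5 else -1.0   # square-wave command
        u = theta1 * r - theta2 * x
        e = x - xm                                  # tracking error
        theta1 += dt * (-gamma * e * r)             # Lyapunov-derived
        theta2 += dt * (gamma * e * x)              # adaptation laws
        x += dt * (a * x + b * u)                   # Euler integration
        xm += dt * (-am * xm + am * r)
    print("final tracking error:", e)
    ```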

  12. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-08-25

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we show that ego-motion tracking can be greatly enhanced using the proposed error correction model.
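
    Allan variance analysis, used above to identify the inertial sensor's stochastic noises, clusters the rate record at a range of averaging times and examines how the variance of cluster means decays. A minimal non-overlapping implementation; the sample rate and the synthetic noise-plus-drift record are assumptions.

    ```python
    import numpy as np

    def allan_variance(data, m):
        """Non-overlapping Allan variance at cluster size m: average the
        data in clusters of m samples, then take half the mean squared
        difference of successive cluster averages."""
        n_clusters = len(data) // m
        clusters = data[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(clusters) ** 2)

    # Hypothetical gyro rate record: white noise plus a slow drift, 100 Hz.
    rng = np.random.default_rng(5)
    fs = 100.0
    rate = rng.normal(0.0, 0.1, 200_000) + 1e-6 * np.arange(200_000)
    for m in (10, 100, 1000, 10_000):
        tau = m / fs                                       # seconds
        print(tau, np.sqrt(allan_variance(rate, m)))       # Allan deviation
    ```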

  13. System for and method of freezing biological tissue

    NASA Technical Reports Server (NTRS)

    Williams, T. E.; Cygnarowicz, T. A. (Inventor)

    1978-01-01

    Biological tissue is frozen while in a polyethylene bag placed in abutting relationship against opposed walls of a pair of heaters. The bag and tissue are cooled with refrigerating gas at a time-programmed rate at least equal to the maximum cooling rate needed at any time during the freezing process. The temperature of the bag, and hence of the tissue, is compared with a time-programmed desired value for the tissue temperature to derive an error indication. The heater is activated in response to the error indication so that the temperature of the tissue follows the desired value for the time-programmed tissue temperature. The tissue is heated to compensate for excessive cooling of the tissue as a result of the cooling by the refrigerating gas. In response to the error signal, the heater is deactivated while the latent heat of fusion is being removed from the tissue as it changes phase from liquid to solid.

  14. On the performance of dual-hop mixed RF/FSO wireless communication system in urban area over aggregated exponentiated Weibull fading channels with pointing errors

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian

    2018-03-01

    The performance of a decode-and-forward dual-hop mixed radio-frequency/free-space optical (RF/FSO) system in an urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, the average bit error rate (ABER) results without pointing errors (PE) and with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the ABER of the RF link is derived with the help of the hypergeometric function, and that of the FSO link is obtained with the Meijer G-function and generalized Gauss-Laguerre quadrature. Then, the end-to-end ABERs with binary phase shift keying modulation are obtained from the computed ABER results of the RF and FSO links. The end-to-end ABER performance is further analyzed for different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The results show that, with ZBPE and NBPE considered, the FSO link suffers severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in urban areas. However, aperture averaging can bring significant ABER improvement to this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.
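
    The RF half of such an analysis can be sanity-checked numerically: under Nakagami-m fading the instantaneous SNR is gamma-distributed, so Monte Carlo averaging of the conditional BPSK error probability approximates the ABER. The sketch below covers only this RF link with illustrative parameters; it is not the paper's closed-form derivation and ignores the FSO hop.

      import numpy as np
      from scipy.special import erfc

      # Monte Carlo ABER of BPSK over a Nakagami-m fading RF link.
      def aber_bpsk_nakagami(m, avg_snr_db, n=1_000_000, seed=1):
          avg_snr = 10 ** (avg_snr_db / 10)
          rng = np.random.default_rng(seed)
          # Instantaneous SNR under Nakagami-m fading ~ Gamma(m, avg_snr/m).
          snr = rng.gamma(shape=m, scale=avg_snr / m, size=n)
          # Conditional BPSK BER: Q(sqrt(2*snr)) = 0.5*erfc(sqrt(snr)).
          return np.mean(0.5 * erfc(np.sqrt(snr)))

      for m in (1.0, 2.0, 4.0):   # m = 1 reduces to Rayleigh fading
          print(f"m = {m}: ABER at 10 dB = {aber_bpsk_nakagami(m, 10.0):.4e}")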

  15. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates and, where possible, the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
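
    The recommended two-level (nested) external cross-validation, together with the label-permutation check, can be sketched as follows; the synthetic data, SVM classifier and parameter grid are illustrative stand-ins for the authors' PAMR-based setup.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV, cross_val_score
      from sklearn.svm import SVC

      # p >> n data, as in gene expression studies.
      X, y = make_classification(n_samples=100, n_features=500,
                                 n_informative=10, random_state=0)

      inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)  # model selection
      scores = cross_val_score(inner, X, y, cv=5)             # outer estimate
      print(f"nested-CV accuracy: {scores.mean():.3f}")

      # Permutation baseline: with shuffled labels the estimate should sit
      # near chance (0.5 here); a higher value signals optimization bias.
      rng = np.random.default_rng(0)
      perm_scores = cross_val_score(inner, X, rng.permutation(y), cv=5)
      print(f"permuted-label accuracy: {perm_scores.mean():.3f}")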

  16. Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.

    ERIC Educational Resources Information Center

    Anderson, Richard C.; And Others

    A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year-to-year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially the rate of oral reading errors, and gains in reading…

  17. Impact of nonzero boresight pointing errors on the performance of a relay-assisted free-space optical communication system over exponentiated Weibull fading channels.

    PubMed

    Wang, Ping; Liu, Xiaoxia; Cao, Tian; Fu, Huihua; Wang, Ranran; Guo, Lixin

    2016-09-20

    The impact of nonzero boresight pointing errors on the system performance of decode-and-forward protocol-based multihop parallel optical wireless communication systems is studied. For the aggregated fading channel, the atmospheric turbulence is simulated by an exponentiated Weibull model, and pointing errors are described by a recently proposed statistical model including both boresight and jitter. The binary phase-shift keying subcarrier intensity modulation-based analytical average bit error rate (ABER) and outage probability expressions are obtained for a nonidentically and independently distributed system. The ABER and outage probability are then analyzed with different turbulence strengths, receiving aperture sizes, structure parameters (P and Q), jitter variances, and boresight displacements. The results show that aperture averaging offers almost the same system performance improvement with boresight included or not, regardless of the values of P and Q. The performance enhancement owing to an increasing number of cooperative paths (P) is more evident with nonzero boresight than with zero boresight (jitter only), whereas the performance deterioration caused by increasing hops (Q) with nonzero boresight is almost the same as that with zero boresight. Monte Carlo simulation is offered to verify the validity of the ABER and outage probability expressions.

  18. Architectural elements of hybrid navigation systems for future space transportation

    NASA Astrophysics Data System (ADS)

    Trigo, Guilherme F.; Theil, Stephan

    2018-06-01

    The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest in GNSS-aided solutions. Combining inertial measurements with GNSS outputs allows on-line inertial calibration, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open-/closed-loop nature of the configuration, are discussed in light of the envisaged application. The importance of the inertial propagation algorithm and sensor class in the overall system is investigated, and the handling of sensor errors and uncertainties that arise with lower-grade sensors is also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange-rate and time-difference carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial-outage simulation. Finally, we discuss options for filter algorithm robustness against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.
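
    For intuition on the loosely coupled option, the toy filter below propagates a one-dimensional state with accelerometer readings and corrects it with occasional GNSS position fixes. All rates, noise levels and the two-state model are assumptions for illustration and are far simpler than a launcher-grade filter carrying attitude, bias and clock states.

      import numpy as np

      # Toy 1-D loosely coupled GNSS/INS Kalman filter.
      dt, q_acc, r_gps = 0.01, 0.1, 3.0       # 100 Hz IMU, 3 m GNSS fixes
      F = np.array([[1, dt], [0, 1]])         # state: [position, velocity]
      B = np.array([0.5 * dt**2, dt])
      H = np.array([[1.0, 0.0]])
      Q = q_acc**2 * np.outer(B, B)
      R = np.array([[r_gps**2]])

      x, P = np.zeros(2), np.eye(2) * 100.0
      rng = np.random.default_rng(2)
      true_pos = true_vel = 0.0
      for k in range(1000):
          accel = 0.5                          # true constant acceleration
          true_vel += accel * dt
          true_pos += true_vel * dt
          meas_acc = accel + q_acc * rng.standard_normal()  # noisy IMU
          x = F @ x + B * meas_acc             # inertial propagation
          P = F @ P @ F.T + Q
          if k % 100 == 99:                    # 1 Hz GNSS position fix
              z = true_pos + r_gps * rng.standard_normal()
              K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
              x = x + (K @ (z - H @ x)).ravel()
              P = (np.eye(2) - K @ H) @ P
      print(f"position error after 10 s: {x[0] - true_pos:+.2f} m")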

  19. Estimating genotype error rates from high-coverage next-generation sequence data.

    PubMed

    Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil

    2014-11-01

    Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods. © 2014 Wall et al.; Published by Cold Spring Harbor Laboratory Press.
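
    The core of a replicate-based lower bound reduces to a discordance count: compare genotype calls for the same sample across two runs and, assuming independent errors, take half the discordance rate as a lower bound on the per-call error rate. The calls below are fabricated placeholders; the actual study additionally handles depth, quality filtering and variant classes.

      import numpy as np

      # Toy replicate genotype calls for the same sample.
      rep1 = np.array(["AA", "AG", "GG", "AG", "AA", "GG"])
      rep2 = np.array(["AA", "AG", "GG", "AA", "AA", "GG"])

      mismatch = rep1 != rep2
      # Each discordant pair reflects at least one error among two calls,
      # so discordance/2 lower-bounds the per-call error rate
      # (a simplification of the paper's approach).
      print(f"discordance: {mismatch.mean():.3f}, "
            f"error-rate lower bound: {mismatch.mean() / 2:.3f}")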

  20. Demonstration of an 8 × 25-Gb/s optical time-division multiplexing system

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Huo, Li; Li, Yunbo; Wang, Lei; Li, Han; Jiang, Xiangyu; Chen, Xin; Lou, Caiyun

    2017-11-01

    An 8 × 25-Gb/s optical time-division multiplexing (OTDM) system is demonstrated experimentally. The optical pulse source is based on optical frequency comb (OFC) generation and pulse shaping, which can generate nearly chirp-free 25-GHz 1.6-ps optical Gaussian pulses. The eightfold optical time-division demultiplexer consists of a single-driven dual-parallel Mach-Zehnder modulator (DPMZM) and a Mamyshev reshaper. Error-free demultiplexing of the 8 × 25-Gb/s back-to-back (B2B) signal with a power penalty of 4.1 dB to 4.4 dB at a bit error rate (BER) of 10⁻⁹ is achieved, confirming the performance of the proposed system.

  1. Performance analysis of optical wireless communication system based on two-fold turbo code

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Huang, Dexiu; Yuan, Xiuhua

    2005-11-01

    Optical wireless communication (OWC) is beginning to emerge in the telecommunications market as a strategy to meet last-mile demand, owing to its unique combination of features. Turbo codes have an impressive near-Shannon-limit error-correcting performance, and twofold turbo codes have recently been introduced as the least complex member of the multifold turbo code family. In this paper, we first present a mathematical model of the signal and of the optical wireless channel with fading, together with a bit-error-rate model including scintillation; we then apply a new turbo-code method to the OWC system and show that the twofold turbo code yields a better BER curve than a common turbo code.

  2. A rate-controlled teleoperator task with simulated transport delays

    NASA Technical Reports Server (NTRS)

    Pennington, J. E.

    1983-01-01

    A teleoperator-system simulation was used to examine the effects of two control modes (joint-by-joint and resolved-rate), a proximity-display method, and time delays (up to 2 sec) on the control of a five-degree-of-freedom manipulator performing a probe-in-hole alignment task. Four subjects used proportional rotational control and discrete (on-off) translation control with computer-generated visual displays. The proximity display enabled subjects to separate rotational errors from displacement (translation) errors; thus, when the proximity display was used with resolved-rate control, the simulated task was trivial. The time required to perform the simulated task increased linearly with time delay, but time delays had no effect on alignment accuracy. Based on the results of this simulation, several future studies are recommended.

  3. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  4. Bluetooth Heart Rate Monitors For Spaceflight

    NASA Technical Reports Server (NTRS)

    Buxton, R. E.; West, M. R.; Kalogera, K. L.; Hanson, A. M.

    2016-01-01

    Heart rate monitoring is required for crewmembers during exercise aboard the International Space Station (ISS) and will be for future exploration missions. The cardiovascular system must be sufficiently stressed throughout a mission to maintain the ability to perform nominal and contingency/emergency tasks. High quality heart rate data are required to accurately determine the intensity of exercise performed by the crewmembers and show maintenance of VO2max. The quality of the data collected on ISS is subject to multiple limitations and is insufficient to meet current requirements. PURPOSE: To evaluate the performance of commercially available Bluetooth heart rate monitors (BT_HRM) and their ability to provide high quality heart rate data to monitor crew health aboard the ISS and during future exploration missions. METHODS: Nineteen subjects completed 30 data collection sessions of various intensities on the treadmill and/or cycle. Subjects wore several BT_HRM technologies for each testing session. One electrode-based chest strap (CS) was worn, while one or more optical sensors (OS) were worn. Subjects were instrumented with a 12-lead ECG to compare the heart rate data from the Bluetooth sensors. Each BT_HRM data set was time matched to the ECG data and a +/-5bpm threshold was applied to the difference between the 2 data sets. Percent error was calculated based on the number of data points outside the threshold and the total number of data points. RESULTS: The electrode-based chest straps performed better than the optical sensors. The best performing CS was CS1 (1.6% error), followed by CS4 (3.3% error), CS3 (6.4% error), and CS2 (9.2% error). The OS resulted in 10.4% error for OS1 and 14.9% error for OS2. CONCLUSIONS: The highest quality data came from CS1, but unfortunately it has been discontinued by the manufacturer. The optical sensors have not been ruled out for use, but more investigation is needed to determine how to obtain the best quality data. CS2 will be used in an ISS Bluetooth validation study, because it simultaneously transmits a magnetic pulse signal that is integrated with existing exercise hardware on ISS. The simultaneous data streams allow for beat-to-beat comparison between the current ISS standard and CS2. Upon Bluetooth validation aboard ISS, the research team will down-select a new BT_HRM for operational use.
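
    The percent-error metric described above is straightforward to reproduce. The sketch below time-matches two toy heart rate series, applies the +/-5 bpm threshold, and reports the fraction of points outside it.

      import numpy as np

      # Time-matched ECG reference and Bluetooth-monitor heart rates (toy data).
      ecg_hr = np.array([120, 135, 150, 162, 171, 180, 176, 168])
      bt_hr  = np.array([121, 133, 156, 161, 159, 181, 175, 169])

      outside = np.abs(bt_hr - ecg_hr) > 5          # +/-5 bpm threshold
      percent_error = 100.0 * outside.sum() / outside.size
      print(f"percent error: {percent_error:.1f}%")  # 25.0% for this toy data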

  5. Bluetooth(Registered Trademark) Heart Rate Monitors for Spaceflight

    NASA Technical Reports Server (NTRS)

    Buxton, Roxanne E.; West, Michael R.; Kalogera, Kent L.; Hanson, Andrea M.

    2016-01-01

    Heart rate monitoring is required during exercise for crewmembers aboard the International Space Station (ISS) and will be for future exploration missions. The cardiovascular system must be sufficiently stressed throughout a mission to maintain the ability to perform nominal and contingency/emergency tasks. High quality heart rate data are required to accurately determine the intensity of exercise performed by the crewmembers and show maintenance of VO2max. The quality of the data collected on ISS is subject to multiple limitations and is insufficient to meet current requirements. PURPOSE: To evaluate the performance of commercially available Bluetooth® heart rate monitors (BT_HRM) and their ability to provide high quality heart rate data to monitor crew health on board ISS and during future exploration missions. METHODS: Nineteen subjects completed 30 data collection sessions of various intensities on the treadmill and/or cycle. Subjects wore several BT_HRM technologies for each testing session. One electrode-based chest strap (CS) was worn, while one or more optical sensors (OS) were worn. Subjects were instrumented with a 12-lead ECG to compare the heart rate data from the Bluetooth sensors. Each BT_HRM data set was time matched to the ECG data and a +/-5bpm threshold was applied to the difference between the two data sets. Percent error was calculated based on the number of data points outside the threshold and the total number of data points. RESULTS: The electrode-based chest straps performed better than the optical sensors. The best performing CS was CS1 (1.6% error), followed by CS4 (3.3% error), CS3 (6.4% error), and CS2 (9.2% error). The OS resulted in 10.4% error for OS1 and 14.9% error for OS2. CONCLUSIONS: The highest quality data came from CS1, but unfortunately it has been discontinued by the manufacturer. The optical sensors have not been ruled out for use, but more investigation is needed to determine how to obtain the best quality data. CS2 will be used in an ISS Bluetooth validation study, because it simultaneously transmits a magnetic pulse signal that is integrated with existing exercise hardware on ISS. The simultaneous data streams allow for beat-to-beat comparison between the current ISS standard and CS2. Upon Bluetooth® validation aboard ISS, a new BT_HRM will be down-selected for operational use.

  6. A Real-Time Position-Locating Algorithm for CCD-Based Sunspot Tracking

    NASA Technical Reports Server (NTRS)

    Taylor, Jaime R.

    1996-01-01

    NASA Marshall Space Flight Center's (MSFC) EXperimental Vector Magnetograph (EXVM) polarimeter measures the sun's vector magnetic field. These measurements are taken to improve understanding of the sun's magnetic field in the hope of better predicting solar flares. Part of the procedure for the EXVM requires image motion stabilization over a period of a few minutes. A high speed tracker can be used to reduce image motion produced by wind loading on the EXVM, fluctuations in the atmosphere, and other vibrations. The tracker consists of two elements, an image motion detector and a control system. The image motion detector determines the image movement from one frame to the next and sends an error signal to the control system. The ground-based application of reducing image motion due to atmospheric fluctuations requires error determination at a rate of at least 100 Hz; a rate of 1 kHz would be desirable to ensure that higher-rate image motion is reduced and to increase control system stability. Two algorithms typically used for tracking are presented and examined for their applicability to tracking sunspots, specifically their accuracy when only one column and one row of CCD pixels are used. Two techniques are used to examine the accuracy of this method. One involves moving a sunspot image a known distance with computer software, then applying the particular algorithm to see how accurately it determines this movement. The second involves using a rate table to control the object motion, then applying the algorithms to see how accurately each determines the actual motion. Results from these two techniques are presented.
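
    A hedged sketch of the reduced-data idea examined here: a centroid computed from a single CCD row (and, symmetrically, a single column) can recover sub-pixel motion of a dark feature. The synthetic sunspot model and image size are invented for the example.

      import numpy as np

      def centroid_1d(profile):
          """Intensity-weighted position along one pixel line; the dark
          sunspot is inverted so it dominates the weighting."""
          w = profile.max() - profile
          return np.sum(np.arange(profile.size) * w) / np.sum(w)

      yy, xx = np.mgrid[0:64, 0:64]
      # Synthetic frame: bright background with a dark Gaussian sunspot.
      spot = lambda cx, cy: 1.0 - 0.8 * np.exp(-((xx - cx)**2
                                                 + (yy - cy)**2) / 18)

      img0, img1 = spot(30.0, 32.0), spot(31.4, 32.0)  # shift of +1.4 px
      row = 32                                          # tracked row
      dx = centroid_1d(img1[row, :]) - centroid_1d(img0[row, :])
      print(f"estimated x shift: {dx:+.2f} px (true +1.40 px)")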

  7. Liability claims and costs before and after implementation of a medical error disclosure program.

    PubMed

    Kachalia, Allen; Kaufman, Samuel R; Boothman, Richard; Anderson, Susan; Welch, Kathleen; Saint, Sanjay; Rogers, Mary A M

    2010-08-17

    Since 2001, the University of Michigan Health System (UMHS) has fully disclosed and offered compensation to patients for medical errors. To compare liability claims and costs before and after implementation of the UMHS disclosure-with-offer program. Retrospective before-after analysis from 1995 to 2007. Public academic medical center and health system. Inpatients and outpatients involved in claims made to UMHS. Number of new claims for compensation, number of claims compensated, time to claim resolution, and claims-related costs. After full implementation of a disclosure-with-offer program, the average monthly rate of new claims decreased from 7.03 to 4.52 per 100,000 patient encounters (rate ratio [RR], 0.64 [95% CI, 0.44 to 0.95]). The average monthly rate of lawsuits decreased from 2.13 to 0.75 per 100,000 patient encounters (RR, 0.35 [CI, 0.22 to 0.58]). Median time from claim reporting to resolution decreased from 1.36 to 0.95 years. Average monthly cost rates decreased for total liability (RR, 0.41 [CI, 0.26 to 0.66]), patient compensation (RR, 0.41 [CI, 0.26 to 0.67]), and non-compensation-related legal costs (RR, 0.39 [CI, 0.22 to 0.67]). The study design cannot establish causality. Malpractice claims generally declined in Michigan during the latter part of the study period. The findings might not apply to other health systems, given that UMHS has a closed staff model covered by a captive insurance company and often assumes legal responsibility. The UMHS implemented a program of full disclosure of medical errors with offers of compensation without increasing its total claims and liability costs. Blue Cross Blue Shield of Michigan Foundation.

  8. Leuconostoc mesenteroides growth in food products: prediction and sensitivity analysis by adaptive-network-based fuzzy inference systems.

    PubMed

    Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien

    2013-01-01

    An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R²). Graphical plots were also used for model comparison. The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive microbiology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions.

  9. Leuconostoc Mesenteroides Growth in Food Products: Prediction and Sensitivity Analysis by Adaptive-Network-Based Fuzzy Inference Systems

    PubMed Central

    Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien

    2013-01-01

    Background An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. Methods The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R²). Graphical plots were also used for model comparison. Conclusions The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive microbiology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. PMID:23705023

  10. Design of analytical failure detection using secondary observers

    NASA Technical Reports Server (NTRS)

    Sisar, M.

    1982-01-01

    The problem of designing analytical failure-detection systems (FDS) for sensors and actuators, using observers, is addressed. The use of observers in FDS is related to the examination of the n-dimensional observer error vector which carries the necessary information on possible failures. The problem is that in practical systems, in which only some of the components of the state vector are measured, one has access only to the m-dimensional observer-output error vector, with m ≤ n. In order to cope with these cases, a secondary observer is synthesized to reconstruct the entire observer-error vector from the observer output error vector. This approach leads toward the design of highly sensitive and reliable FDS, with the possibility of obtaining a unique fingerprint for every possible failure. In order to keep the observer's (or Kalman filter's) false-alarm rate under a certain specified value, it is necessary to have an acceptable matching between the observer (or Kalman filter) models and the system parameters. A previously developed adaptive observer algorithm is used to maintain the desired system-observer model matching, despite initial mismatching or system parameter variations. Conditions for convergence for the adaptive process are obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors, while accurate and fast parameter identification, in both deterministic and stochastic cases, is obtained.
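
    A minimal illustration (not the paper's secondary-observer design) of why the observer output error carries failure information: a Luenberger observer tracks a healthy plant with a residual near zero, while a sensor bias fault leaves a persistent nonzero residual. All matrices, gains and the fault size are invented.

      import numpy as np

      # Plant x' = A x, output y = C x; observer with gain L (A - L C stable).
      A = np.array([[0.0, 1.0], [-2.0, -0.5]])
      C = np.array([[1.0, 0.0]])
      L = np.array([[4.0], [6.0]])

      dt, n = 0.01, 1500
      x = np.array([1.0, 0.0])              # true state
      xh = np.zeros(2)                      # observer state
      for k in range(n):
          fault = 0.5 if k > 1000 else 0.0  # sensor bias appears at t = 10 s
          y = (C @ x)[0] + fault
          r = y - (C @ xh)[0]               # output residual
          x = x + dt * (A @ x)
          xh = xh + dt * (A @ xh + (L * r).ravel())
          if k in (999, 1499):
              print(f"t = {(k + 1) * dt:5.1f} s  residual = {r:+.4f}")
      # The residual sits near zero before the fault and settles at a
      # nonzero value afterwards, flagging the failed sensor.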

  11. Equalization for a page-oriented optical memory system

    NASA Astrophysics Data System (ADS)

    Trelewicz, Jennifer Q.; Capone, Jeffrey

    1999-11-01

    In this work, a method of decision-feedback equalization is developed for a digital holographic channel that experiences moderate-to-severe imaging errors. Decision feedback is utilized, not only where the channel is well-behaved, but also near the edges of the camera grid that are subject to a high degree of imaging error. In addition to these effects, the channel is worsened by typical problems of holographic channels, including non-uniform illumination, dropouts, and stuck bits. The approach described in this paper builds on established methods for performing trained and blind equalization on time-varying channels. The approach is tested on experimental data sets. On most of these data sets, the method of equalization described in this work delivers at least an order of magnitude improvement in bit-error rate (BER) before error-correction coding (ECC). When ECC is introduced, the approach is able to recover stored data with no errors for many of the tested data sets. Furthermore, a low BER was maintained even over a range of small alignment perturbations in the system. It is believed that this equalization method can allow cost reductions to be made in page-memory systems, by allowing for a larger image area per page or less complex imaging components, without sacrificing the low BER required by data storage applications.

  12. Online adaptation of a c-VEP Brain-Computer Interface (BCI) based on error-related potentials and unsupervised learning.

    PubMed

    Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin

    2012-01-01

    The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.

  13. Real time heart rate variability assessment from Android smartphone camera photoplethysmography: Postural and device influences.

    PubMed

    Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A

    2015-01-01

    The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU is used through the Renderscript API for high-performance frame-by-frame image acquisition and computation to obtain the PPG signal and PP interval time series. The relative error of mean heart rate is negligible. In addition, the influences of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices have been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture, a significant device influence on the SDE was found, the SDE being lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time Android camera frame processing.
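
    A hedged sketch of the PP-interval extraction step: peak detection on a 30 Hz PPG trace followed by differencing of the peak times. The synthetic waveform and detection thresholds are illustrative stand-ins for the camera-derived signal.

      import numpy as np
      from scipy.signal import find_peaks

      fs = 30.0                                 # camera frame rate, Hz
      t = np.arange(0, 30, 1 / fs)
      hr_hz = 1.2                               # ~72 bpm toy heart rate
      ppg = (np.sin(2 * np.pi * hr_hz * t)
             + 0.1 * np.random.default_rng(3).standard_normal(t.size))

      # Refractory distance (~0.4 s) and a prominence floor reject noise.
      peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.5)
      pp_ms = np.diff(t[peaks]) * 1000.0        # PP interval series, ms
      print(f"mean HR: {60000.0 / pp_ms.mean():.1f} bpm, "
            f"PP-interval spread: {pp_ms.std():.1f} ms")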

  14. Comparison of Meropenem MICs and Susceptibilities for Carbapenemase-Producing Klebsiella pneumoniae Isolates by Various Testing Methods▿

    PubMed Central

    Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.

    2010-01-01

    We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603

  15. Importance of DNA repair in tumor suppression

    NASA Astrophysics Data System (ADS)

    Brumer, Yisroel; Shakhnovich, Eugene I.

    2004-12-01

    The transition from a normal to cancerous cell requires a number of highly specific mutations that affect cell cycle regulation, apoptosis, differentiation, and many other cell functions. One hallmark of cancerous genomes is genomic instability, with mutation rates far greater than those of normal cells. In tumors with microsatellite instability (MIN), these rates are often caused by damage to mismatch repair genes, allowing further mutation of the genome and tumor progression. These mutation rates may lie near the error catastrophe found in the quasispecies model of adaptive RNA genomes, suggesting that further increasing mutation rates will destroy cancerous genomes. However, recent results have demonstrated that DNA genomes exhibit an error threshold at mutation rates far lower than their conservative counterparts. Furthermore, while the maximum viable mutation rate in conservative systems increases indefinitely with increasing master sequence fitness, the semiconservative threshold plateaus at a relatively low value. This implies a paradox, wherein inaccessible mutation rates are found in viable tumor cells. In this paper, we address this paradox, demonstrating an isomorphism between the conservatively replicating (RNA) quasispecies model and the semiconservative (DNA) model with post-methylation DNA repair mechanisms impaired. Thus, as DNA repair becomes inactivated, the maximum viable mutation rate increases smoothly to that of a conservatively replicating system on a transformed landscape, with an upper bound that is dependent on replication rates. On a specific single fitness peak landscape, the repair-free semiconservative system is shown to mimic a conservative system exactly. We postulate that inactivation of post-methylation repair mechanisms is fundamental to the progression of a tumor cell and hence these mechanisms act as a method for the prevention and destruction of cancerous genomes.
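
    The error catastrophe invoked here can be demonstrated with the textbook single-peak quasispecies recursion (back mutation neglected); the sketch below uses invented parameters and is a generic conservative-replication model, not the authors' semiconservative analysis.

      import numpy as np

      # Single-peak landscape: master sequence has fitness f, all mutants 1.
      # The master persists only while f * Q > 1, where Q is the probability
      # of an error-free copy.
      f, L = 10.0, 100          # fitness advantage, genome length
      for mu in (0.01, 0.02, 0.023, 0.03):     # per-base error rate
          Q = (1.0 - mu) ** L                  # error-free copy probability
          x = 0.5                              # master-sequence frequency
          for _ in range(5000):                # iterate selection-mutation map
              mean_fit = f * x + (1.0 - x)
              x = f * Q * x / mean_fit
          print(f"mu = {mu:.3f}  Q = {Q:.3f}  master freq -> {x:.3f}")
      # The threshold sits near Q = 1/f, i.e. mu ~ ln(f)/L ~ 0.023 here;
      # above it the master sequence is lost (the error catastrophe).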

  16. Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.

    PubMed

    Yang, Yingdong; Mao, Xuchu; Tian, Weifeng

    2016-06-08

    Global navigation satellite systems (GNSS) are well suited to attitude determination. In this study, we use the rotation matrix method to resolve the attitude angles. This method achieves better performance in reducing computational complexity and selecting satellites. A baseline-length condition is combined with the ambiguity function method (AFM) to search for the integer ambiguity, and it is validated as reducing the span of candidates. Noise error is always the key factor in the success rate, and it is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method obtains better results in relating the geometric model to the noise error; although the AFM is more flexible, it lacks analysis in this respect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail, and the computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment conducted on a selected campus proved the performance to be effective. Our results are based on simulated and real-time GNSS data and apply to single-frequency processing, which is known as one of the most challenging cases of GNSS attitude determination.

  17. Your Health Care May Kill You: Medical Errors.

    PubMed

    Anderson, James G; Abrahamson, Kathleen

    2017-01-01

    Recent studies of medical errors have estimated that errors may account for as many as 251,000 deaths annually in the United States (U.S.), making medical errors the third leading cause of death. Error rates are significantly higher in the U.S. than in other developed countries such as Canada, Australia, New Zealand, Germany and the United Kingdom (U.K.). At the same time, fewer than 10 percent of medical errors are reported. This study describes the results of an investigation of the effectiveness of the implementation of the MEDMARX Medication Error Reporting system in 25 hospitals in Pennsylvania. Data were collected on 17,000 errors reported by participating hospitals over a 12-month period. Latent growth curve analysis revealed that reporting of errors by health care providers increased significantly over the four quarters. At the same time, the proportion of corrective actions taken by the hospitals remained relatively constant over the 12 months. A simulation model was constructed to examine the effect of potential organizational changes resulting from error reporting. Four interventions were simulated. The results suggest that improving patient safety requires more than voluntary reporting; organizational changes need to be implemented and institutionalized as well.

  18. Feedback control laws for highly maneuverable aircraft

    NASA Technical Reports Server (NTRS)

    Garrard, William L.; Balas, Gary J.

    1994-01-01

    During the first half of the year, the investigators concentrated their efforts on completing the design of control laws for the longitudinal axis of the HARV. During the second half of the year they concentrated on the synthesis of control laws for the lateral-directional axes. The longitudinal control law design efforts can be briefly summarized as follows. Longitudinal control laws were developed for the HARV using mu synthesis design techniques coupled with dynamic inversion. An inner loop dynamic inversion controller was used to simplify the system dynamics by eliminating the aerodynamic nonlinearities and inertial cross coupling. Models of the errors resulting from uncertainties in the principal longitudinal aerodynamic terms were developed and included in the model of the HARV with the inner loop dynamic inversion controller. This resulted in an inner loop transfer function model which was an integrator with the modeling errors characterized as uncertainties in gain and phase. Outer loop controllers were then designed using mu synthesis to provide robustness to these modeling errors and give desired response to pilot inputs. Both pitch rate and angle of attack command following systems were designed. The following tasks have been accomplished for the lateral-directional controllers: inner and outer loop dynamic inversion controllers have been designed; an error model based on a linearized perturbation model of the inner loop system was derived; controllers for the inner loop system have been designed, using classical techniques, that control roll rate and Dutch roll response; the inner loop dynamic inversion and classical controllers have been implemented on the six degree of freedom simulation; and lateral-directional control allocation scheme has been developed based on minimizing required control effort.

  19. Link Performance Analysis and monitoring - A unified approach to divergent requirements

    NASA Astrophysics Data System (ADS)

    Thom, G. A.

    Link performance analysis and real-time monitoring are generally covered by a wide range of equipment. Bit error rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also available from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft-decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, real-time BER derived from frame sync errors, and data quality analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with his needs.

  20. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    NASA Astrophysics Data System (ADS)

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

    In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) scheme is proposed and investigated in a software-reconfigurable multiband UWB-over-fiber system. To compensate for the power fading and chromatic dispersion affecting the high-frequency multiband OFDM UWB signal transmitted over standard single-mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with a negative chirp parameter is utilized. A negative power penalty of -1 dB for the 128-QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10⁻³ after 50 km of SSMF transmission. The experimental results show that, compared to a fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128-QAM multiband OFDM UWB system after 100 km of SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km of SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).

  1. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.; hide

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.

  2. Usability Evaluation and Implementation of a Health Information Technology Dashboard of Evidence-Based Quality Indicators.

    PubMed

    Schall, Mark Christopher; Cullen, Laura; Pennathur, Priyadarshini; Chen, Howard; Burrell, Keith; Matthews, Grace

    2017-06-01

    Health information technology dashboards that integrate evidence-based quality indicators can efficiently and accurately display patient risk information to promote early intervention and improve overall quality of patient care. We describe the process of developing, evaluating, and implementing a dashboard designed to promote quality care through display of evidence-based quality indicators within an electronic health record. Clinician feedback was sought throughout the process. Usability evaluations were provided by three nurse pairs and one physician from medical-surgical areas. Task completion times, error rates, and ratings of system usability were collected to compare the use of quality indicators displayed on the dashboard to the indicators displayed in a conventional electronic health record across eight experimental scenarios. Participants rated the dashboard as "highly usable" following System Usability Scale (mean, 87.5 [SD, 9.6]) and Poststudy System Usability Questionnaire (mean, 1.7 [SD, 0.5]) criteria. Use of the dashboard led to reduced task completion times and error rates in comparison to the conventional electronic health record for quality indicator-related tasks. Clinician responses to the dashboard display capabilities were positive, and a multifaceted implementation plan has been used. Results suggest application of the dashboard in the care environment may lead to improved patient care.

  3. Nursing Home Levels of Care: Reimbursement of Resident Specific Costs

    PubMed Central

    Willemain, Thomas R.

    1980-01-01

    The companion paper on nursing home levels of care (Bishop, Plough and Willemain, 1980) recommended a “split-rate” approach to nursing home reimbursement that would distinguish between fixed and variable costs. This paper examines three alternative treatments of the variable cost component of the rate: a two-level system similar to the distinction between skilled and intermediate care facilities, an individualized (“patient-centered”) system, and a system that assigns a single facility-specific rate that depends on the facility's case-mix (“case-mix reimbursement”). The aim is to better understand the theoretical strengths and weaknesses of these three approaches. The comparison of reimbursement alternatives is framed in terms of minimizing reimbursement error, meaning overpayment and underpayment. We develop a conceptual model of reimbursement error that stresses that the features of the reimbursement scheme are only some of the factors contributing to over- and underpayment. The conceptual model is translated into a computer program for quantitative comparison of the alternatives. PMID:10309330

  4. Large-Area Visually Augmented Navigation for Autonomous Underwater Vehicles

    DTIC Science & Technology

    2005-06-01

    constrain position drift. Correction of errors in position and orientation are made each time the mosaic is updated, which occurs every Lth video frame. They...are the greatest strength of a VAN methodology. It is these measurements which help to correct dead-reckoned drift error and enforce recovery of a...systems. [Recovered fragment of a sensor-characteristics table with columns Instrument, Variable, Internal?, Update rate, Precision, Range, Drift; e.g., acoustic altimeter: Z (altitude), internal, update rate varies 0.1-10 Hz, precision 0.01-1.0 m, drift varies.]

  5. Optical communication with semiconductor laser diodes

    NASA Technical Reports Server (NTRS)

    Davidson, F.

    1988-01-01

    Slot timing recovery in a direct-detection optical PPM communication system can be achieved by processing the photodetector waveform with a nonlinear device whose output forms the input to a phase-lock loop. The choice of a simple transition detector as the nonlinearity is shown to give satisfactory synchronization performance. The rms phase error of the recovered slot clock and the effect of slot timing jitter on the bit error probability were directly measured. The experimental system consisted of an AlGaAs laser diode (lambda = 834 nm) and a silicon avalanche photodiode (APD) photodetector and used Q=4 PPM signaling operated at a source data rate of 25 megabits/second. The mathematical model developed to characterize system performance is shown to be in good agreement with actual performance measurements. The use of the recovered slot clock in the receiver resulted in no degradation in receiver sensitivity compared to a system with perfect slot timing. The system achieved a bit error probability of 10⁻⁶ at received signal energies corresponding to an average of fewer than 60 detected photons per information bit.

  6. Error mapping of high-speed AFM systems

    NASA Astrophysics Data System (ADS)

    Klapetek, Petr; Picco, Loren; Payton, Oliver; Yacoot, Andrew; Miles, Mervyn

    2013-02-01

    In recent years, there have been several advances in the development of high-speed atomic force microscopes (HSAFMs) to obtain images with nanometre vertical and lateral resolution at frame rates in excess of 1 fps. To date, these instruments are lacking in metrology for their lateral scan axes; however, by imaging a series of two-dimensional lateral calibration standards, it has been possible to obtain information about the errors associated with these HSAFM scan axes. Results from initial measurements are presented in this paper and show that the scan speed needs to be taken into account when performing a calibration as it can lead to positioning errors of up to 3%.

  7. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.

  8. Flight evaluation of differential GPS aided inertial navigation systems

    NASA Technical Reports Server (NTRS)

    Mcnally, B. David; Paielli, Russell A.; Bach, Ralph E., Jr.; Warner, David N., Jr.

    1992-01-01

    Algorithms are described for integration of Differential Global Positioning System (DGPS) data with Inertial Navigation System (INS) data to provide an integrated DGPS/INS navigation system. The objective is to establish the benefits that can be achieved through various levels of integration of DGPS with INS for precision navigation. An eight state Kalman filter integration was implemented in real-time on a twin turbo-prop transport aircraft to evaluate system performance during terminal approach and landing operations. A fully integrated DGPS/INS system is also presented which models accelerometer and rate-gyro measurement errors plus position, velocity, and attitude errors. The fully integrated system was implemented off-line using range-domain (seventeen-state) and position domain (fifteen-state) Kalman filters. Both filter integration approaches were evaluated using data collected during the flight test. Flight-test data consisted of measurements from a 5 channel Precision Code GPS receiver, a strap-down Inertial Navigation Unit (INU), and GPS satellite differential range corrections from a ground reference station. The aircraft was laser tracked to determine its true position. Results indicate that there is no significant improvement in positioning accuracy with the higher levels of DGPS/INS integration. All three systems provided high-frequency (e.g., 20 Hz) estimates of position and velocity. The fully integrated system provided estimates of inertial sensor errors which may be used to improve INS navigation accuracy should GPS become unavailable, and improved estimates of acceleration, attitude, and body rates which can be used for guidance and control. Precision Code DGPS/INS positioning accuracy (root-mean-square) was 1.0 m cross-track and 3.0 m vertical. (This AGARDograph was sponsored by the Guidance and Control Panel.)

  9. Performance enhancement of wireless mobile adhoc networks through improved error correction and ICI cancellation

    NASA Astrophysics Data System (ADS)

    Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar

    2012-12-01

    Mobile adhoc network (MANET) refers to an arrangement of wireless mobile nodes that dynamically and freely self-organize into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the physical layer owing to its high data-rate transmission and correspondingly high spectral efficiency. The downside of OFDM is its sensitivity to synchronization errors (frequency offset and symbol timing). Most present-day techniques employing OFDM for data transmission support mobility as a primary feature. This mobility causes small frequency offsets due to the production of Doppler frequencies, resulting in intercarrier interference (ICI), which degrades the signal quality through crosstalk between the subcarriers of the OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article, which removes the effect of channel frequency offsets from the received OFDM symbols. The second problem addressed in this article is the noise induced by different sources into the received symbol, increasing its bit error rate and making it unsuitable for many applications. Forward-error-correcting turbo codes are employed in the proposed model, adding redundant bits that are later used for error detection and correction. At the receiver end, the maximum a posteriori (MAP) decoding algorithm is implemented using two component MAP decoders, which exchange interleaved extrinsic soft information in the form of log-likelihood ratios, improving the estimate of each decoded bit in every iteration.
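
    As a hedged illustration of pilot-assisted offset estimation (a Moose-style repeated-pilot estimator, not necessarily the authors' exact block-type scheme), the sketch below recovers a carrier frequency offset from the phase of the correlation between two identical pilot blocks.

      import numpy as np

      rng = np.random.default_rng(4)
      N = 64                                    # pilot block length (samples)
      pilot = (rng.integers(0, 2, N) * 2 - 1).astype(complex)  # BPSK pilots
      tx = np.concatenate([pilot, pilot])       # repeated pilot block

      eps = 0.08                                # true CFO, in subcarrier units
      n = np.arange(2 * N)
      rx = tx * np.exp(2j * np.pi * eps * n / N)
      rx += 0.05 * (rng.standard_normal(2 * N)
                    + 1j * rng.standard_normal(2 * N))

      # Correlate the two halves; the phase advance over N samples
      # equals 2*pi*eps, so the angle of the correlation reveals eps.
      corr = np.vdot(rx[:N], rx[N:])
      eps_hat = np.angle(corr) / (2 * np.pi)
      print(f"true CFO: {eps:.3f}, estimated: {eps_hat:.3f}")
      # The received signal can then be derotated by
      # exp(-2j*pi*eps_hat*n/N) before demodulation.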

  10. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
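
    For context, a common way to express PCR fidelity (not necessarily the exact calculation used in this study) divides the mutations found by sequencing by the bases screened and by the number of template doublings; the numbers below are hypothetical.

```python
import math

mutations = 45             # mutations found by sequencing clones (hypothetical)
bases_screened = 1.2e6     # total bases of cloned PCR product sequenced
fold_amplification = 1e5   # amplification of the target (hypothetical)

d = math.log2(fold_amplification)             # template doublings
error_rate = mutations / (bases_screened * d) # errors per base per doubling
print(f"{error_rate:.2e} errors/base/doubling")
```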

  11. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with parameters such as temperature, air pressure, wind velocity, humidity, and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events that directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather with distinctive output, particularly a GIS-based mapping process that reports the current weather status at given coordinates of each region and can forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. Forecasting error is calculated as the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15; the error value for minimum humidity is 0.38 and for maximum humidity 0.04. The forecasting error for wind speed is 0.076. The lower the forecasting error rate, the better the accuracy.
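
    A minimal sketch of the BMA idea used here (hypothetical data and Gaussian member models, not the authors' implementation): each member forecast is weighted by its posterior probability given past performance, and the combined forecast is the weighted mean.

```python
import numpy as np
from scipy.stats import norm

# training: observed temperatures and two member forecasts (hypothetical)
y = np.array([30.1, 29.4, 31.0, 30.6])
f1 = np.array([29.8, 29.0, 31.5, 30.2])    # e.g. an openweathermap member
f2 = np.array([31.0, 30.2, 30.1, 31.4])    # e.g. a BMKG member
sigma = 0.8                                # assumed member spread

# posterior model weights from training likelihoods (uniform prior)
like = [norm.pdf(y, f, sigma).prod() for f in (f1, f2)]
w = np.array(like) / np.sum(like)

new = np.array([30.4, 29.6])               # today's member forecasts
bma_forecast = float(w @ new)              # BMA point forecast

mse = float(np.mean((w[0] * f1 + w[1] * f2 - y) ** 2))  # in-sample MSE
print(f"weights={w.round(3)}, forecast={bma_forecast:.2f}, MSE={mse:.3f}")
```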

  12. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
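
    A small simulation in the spirit of the analysis (synthetic data, not the GAW 19 set): regress a skewed gamma trait on a rare SNV under the null hypothesis of no association and tabulate how often p falls below the nominal 0.05 level; sample size and MAF are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, maf, reps, alpha = 1000, 0.005, 5000, 0.05

hits = valid = 0
for _ in range(reps):
    g = rng.binomial(2, maf, n)          # rare SNV genotypes (0/1/2)
    y = rng.gamma(shape=1.0, size=n)     # skewed trait, independent of g
    if g.std() == 0:                     # skip monomorphic draws
        continue
    valid += 1
    hits += stats.linregress(g, y).pvalue < alpha

print(f"empirical type I error: {hits / valid:.4f} (nominal {alpha})")
```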

  13. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    Adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes to the Voyager spacecraft and could serve as a backup to the current system without much expenditure, so its feasibility and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to extract the most information possible from the received symbol stream.
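
    A toy illustration of the statistic being exploited (not the MCD modification itself): when estimating a pixel's grey level from a noisy observation, each candidate value is scored by the channel likelihood plus a "nearby pixels are similar" prior centered on the previously decoded pixel; both noise parameters are assumptions.

```python
import numpy as np

levels = np.arange(256)              # 8-bit grey levels
sigma_ch, sigma_img = 20.0, 8.0      # channel noise / image smoothness (assumed)

def decode_pixel(observed, previous):
    """MAP grey level combining channel likelihood and pixel-similarity prior."""
    log_like = -(observed - levels) ** 2 / (2 * sigma_ch ** 2)
    log_prior = -(previous - levels) ** 2 / (2 * sigma_img ** 2)
    return int(np.argmax(log_like + log_prior))

print(decode_pixel(observed=150.0, previous=120))  # pulled toward the neighbor
```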

  14. Optical system components for navigation grade fiber optic gyroscopes

    NASA Astrophysics Data System (ADS)

    Heimann, Marcus; Liesegang, Maximilian; Arndt-Staufenbiel, Norbert; Schröder, Henning; Lang, Klaus-Dieter

    2013-10-01

    Interferometric fiber optic gyroscopes belong to the class of inertial sensors. Due to their high accuracy they are used for absolute position and rotation measurement in manned and unmanned vehicles, e.g. submarines, ground vehicles, aircraft, or satellites. The important system components are the light source, the electro-optical phase modulator, the optical fiber coil, and the photodetector. This paper focuses on approaches to realize a stable light source and fiber coil. A superluminescent diode and an erbium-doped fiber laser were studied for realizing an accurate and stable light source; the influence of the degree of polarization of the source and the effects of back reflections into the source were examined. During operation, thermal working conditions severely affect the accuracy and stability of the optical fiber coil, which is the sensing element. Thermal gradients applied to the fiber coil have large negative effects on the achievable system accuracy of the gyroscope. Therefore a way of calculating and compensating the rotation rate error of a fiber coil due to thermal change is introduced. A simplified 3-dimensional FEM of a quadrupole-wound fiber coil is used to determine the build-up of thermal fields in the polarization-maintaining fiber due to outside heating sources. The rotation rate error due to these sources is then calculated and compared to measurement data. A simple regression model is used to compensate the rotation rate error using temperature measurements at the outside of the fiber coil. To realize a compact and robust optical package for some of the relevant optical system components, an approach based on ion-exchanged waveguides in thin glass was developed. These waveguides are used to realize 1x2 and 1x4 splitters with a fiber coupling interface or direct photodiode coupling.
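
    The regression compensation can be sketched as follows (hypothetical calibration numbers, not the paper's data): fit the observed rate error against the externally measured coil temperature rate of change, then subtract the model prediction during operation.

```python
import numpy as np

# calibration: coil-surface dT/dt [K/s] vs. observed rate error [deg/h]
dTdt = np.array([0.00, 0.01, 0.02, 0.03, 0.04])
rate_err = np.array([0.00, 0.11, 0.21, 0.33, 0.42])

k, b = np.polyfit(dTdt, rate_err, 1)       # simple linear regression model

def compensate(measured_rate, measured_dTdt):
    """Subtract the thermally induced rate error predicted by the fit."""
    return measured_rate - (k * measured_dTdt + b)

print(compensate(measured_rate=5.30, measured_dTdt=0.02))
```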

  15. Effect of the incidence angle to free space optical communication based on cat-eye modulating retro-reflector

    NASA Astrophysics Data System (ADS)

    Zhang, Lai-xian; Sun, Hua-yan; Zhao, Yan-zhong; Zheng, Yong-hui; Shan, Cong-miao

    2013-08-01

    Based on the cat-eye effect of an optical system, free space optical communication using a cat-eye modulating retro-reflector can build a communication link rapidly. Compared to a classical free space optical communication system, a system based on a cat-eye modulating retro-reflector has notable advantages: the communication link can be established more rapidly, and the passive terminal is smaller, lighter, and consumes less power. The incidence angle is an important factor in the cat-eye effect, so it affects the retro-reflecting communication link. In this paper, the principle and work flow of free space optical communication based on a cat-eye modulating retro-reflector are introduced. Then, using the theory of geometric optics, an equivalent model of the modulating retro-reflector with incidence angle is presented, and analytical solutions for the active area and retro-reflected light intensity of the cat-eye modulating retro-reflector are given. The noise of the PIN photodetector is analyzed, and on that basis the bit error rate of the link is derived. Finally, simulations were run to study the effect of the incidence angle on the communication. The results show that the incidence angle has little effect on active area and retro-reflected light intensity when the incident beam is within the active field angle of the cat-eye modulating retro-reflector. With a given system and conditions, the communication link can be built rapidly when the incident light beam is within the field angle, and the bit error rate increases greatly with link range. When the link range is smaller than 35 km, the bit error rate is less than 10^-16.
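
    The BER analysis rests on standard PIN-receiver noise arithmetic. Below is a generic on-off-keying sketch (shot plus thermal noise under a Gaussian approximation; all parameter values are assumptions, not the authors' exact model).

```python
import math

q, kB = 1.602e-19, 1.381e-23                # electron charge, Boltzmann constant
R_resp, B, T, RL = 0.8, 1e9, 300.0, 50.0    # A/W, Hz, K, ohms (assumed)
P_rx = 5e-6                                 # received optical power [W] (assumed)

I = R_resp * P_rx                           # photocurrent
sigma_shot = math.sqrt(2 * q * I * B)       # shot-noise current std
sigma_th = math.sqrt(4 * kB * T * B / RL)   # thermal-noise current std
Qf = I / (sigma_shot + sigma_th)            # Q-factor for OOK
ber = 0.5 * math.erfc(Qf / math.sqrt(2))
print(f"Q = {Qf:.2f}, BER = {ber:.2e}")
```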

  16. Forensic surface metrology: tool mark evidence.

    PubMed

    Gambino, Carol; McLaughlin, Patrick; Kuo, Loretta; Kammerman, Frani; Shenkin, Peter; Diaczuk, Peter; Petraco, Nicholas; Hamby, James; Petraco, Nicholas D K

    2011-01-01

    Over the last several decades, forensic examiners of impression evidence have come under scrutiny in the courtroom due to analysis methods that rely heavily on subjective morphological comparisons. Currently, there is no universally accepted system that generates numerical data to independently corroborate visual comparisons. Our research attempts to develop such a system for tool mark evidence, proposing a methodology that objectively evaluates the association of striated tool marks with the tools that generated them. In our study, 58 primer shear marks on 9 mm cartridge cases, fired from four Glock model 19 pistols, were collected using high-resolution white light confocal microscopy. The resulting three-dimensional surface topographies were filtered to extract all "waviness surfaces", the essential "line" information that firearm and tool mark examiners view under a microscope. Extracted waviness profiles were processed with principal component analysis (PCA) for dimension reduction. Support vector machines (SVM) were used to make the profile-gun associations, and conformal prediction theory (CPT) was used to establish confidence levels. At the 95% confidence level, CPT coupled with PCA-SVM yielded an empirical error rate of 3.5%. Complementary bootstrap-based computations estimated the error rate at 0%, indicating that the error rate for the algorithmic procedure is likely to remain low on larger data sets. Finally, suggestions are made for practical courtroom application of CPT for assigning levels of confidence to SVM identifications of tool marks recorded with confocal microscopy. Copyright © 2011 Wiley Periodicals, Inc.
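
    A schematic pipeline in the spirit of the method (synthetic stand-in data, not confocal profiles; the CPT step is reduced to a minimal split-conformal procedure with "negative SVM margin" nonconformity, which simplifies the paper's approach).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
means = 3.0 * rng.normal(size=(4, 200))       # 4 "guns" (synthetic)
y = np.repeat(np.arange(4), 15)               # 15 marks per gun (hypothetical)
X = means[y] + rng.normal(size=(60, 200))     # stand-in waviness features
perm = rng.permutation(60)                    # shuffle before splitting
X, y = X[perm], y[perm]

train, cal, test = np.r_[0:40], np.r_[40:55], np.r_[55:60]
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    SVC(kernel="linear"))
clf.fit(X[train], y[train])

# split-conformal step: nonconformity = negative margin of the true class
nc_cal = -clf.decision_function(X[cal])[np.arange(len(cal)), y[cal]]

d_test = clf.decision_function(X[test])
for i, idx in enumerate(test):
    pvals = [((nc_cal >= -d_test[i, c]).sum() + 1) / (len(cal) + 1)
             for c in range(4)]
    kept = [c for c in range(4) if pvals[c] > 0.05]  # 95% confidence set
    print(f"mark {idx}: true gun {y[idx]}, prediction set {kept}")
```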

  17. Evaluation of Analytical Errors in a Clinical Chemistry Laboratory: A 3 Year Experience

    PubMed Central

    Sakyi, AS; Laing, EF; Ephraim, RK; Asibey, OF; Sadique, OK

    2015-01-01

    Background: Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. Aim: We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases, and we discuss strategies pertinent to our setting to minimize their occurrence. Materials and Methods: We described the occurrence of pre-analytical, analytical, and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). Results: A total of 589,510 tests were performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical, and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90). Conclusion: Errors are embedded throughout our total testing process, especially the pre-analytical and post-analytical phases. Strategic measures including quality assessment programs for staff involved in pre-analytical processes should be intensified. PMID:25745569

  18. Critical Mutation Rate Has an Exponential Dependence on Population Size in Haploid and Diploid Populations

    PubMed Central

    Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.

    2013-01-01

    Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline; this could affect extinction, recovery, and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
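
    A small sketch of the kind of exponential model reported above, fitted with scipy to hypothetical (population size, critical mutation rate) pairs; the data points and starting values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

N = np.array([10, 25, 50, 100, 250, 500])               # population sizes
u_c = np.array([0.30, 0.21, 0.15, 0.11, 0.095, 0.09])   # critical mutation rates

def model(n, a, b, c):
    return a * np.exp(-b * n) + c                        # u_c(N) = a*e^(-b*N) + c

(a, b, c), _ = curve_fit(model, N, u_c, p0=(0.3, 0.02, 0.09))
print(f"u_c(N) ~= {a:.3f}*exp(-{b:.4f}*N) + {c:.3f}")
```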

  19. Cost-Effectiveness Analysis of an Automated Medication System Implemented in a Danish Hospital Setting.

    PubMed

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    To evaluate the cost-effectiveness of an automated medication system (AMS) implemented in a Danish hospital setting. An economic evaluation was performed alongside a controlled before-and-after effectiveness study with one control ward and one intervention ward. The primary outcome measure was the number of errors in the medication administration process, observed prospectively before and after implementation. To determine the difference in the proportion of errors after implementation of the AMS, logistic regression was applied with the presence of error(s) as the dependent variable. Time, group, and the interaction between time and group were the independent variables. The cost analysis used the hospital perspective with a short-term incremental costing approach. The total 6-month costs with and without the AMS were calculated, as well as the incremental costs. The number of avoided administration errors was related to the incremental costs to obtain the cost-effectiveness ratio, expressed as the cost per avoided administration error. The AMS resulted in a statistically significant reduction in the proportion of errors in the intervention ward compared with the control ward. The cost analysis showed that the AMS increased the ward's 6-month cost by €16,843. The cost-effectiveness ratio was estimated at €2.01 per avoided administration error, €2.91 per avoided procedural error, and €19.38 per avoided clinical error. The AMS was effective in reducing errors in the medication administration process at a higher overall cost. The cost-effectiveness analysis showed that the AMS was associated with acceptable cost-effectiveness ratios. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
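
    The ratio above is a plain incremental calculation; a one-line check with the reported figures (the avoided-error count is back-calculated here, so treat it as approximate).

```python
incremental_cost = 16843.0               # EUR over 6 months, reported
cost_per_avoided_admin_error = 2.01      # EUR per avoided error, reported
avoided_admin_errors = incremental_cost / cost_per_avoided_admin_error
print(f"~{avoided_admin_errors:.0f} administration errors avoided")
```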

  20. Cross-layer Design for MIMO Systems with Transmit Antenna Selection and Imperfect CSI

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Liu, Yan; Rui, Yun; Zhou, Tingting; Yin, Xin

    2013-04-01

    In this paper, by combining adaptive modulation and automatic repeat request (ARQ), a cross-layer design (CLD) scheme for a multiple-input multiple-output (MIMO) system with transmit antenna selection (TAS) and imperfect channel state information (CSI) is presented. Based on the imperfect CSI, the probability density function of the effective signal-to-noise ratio (SNR) is derived, and the fading gain switching thresholds are derived subject to a target packet loss rate and a fixed power constraint. From these results, we further derive the average spectrum efficiency (SE) and packet error rate (PER) of the system, obtaining closed-form expressions for both. The derived expressions include those under perfect CSI as special cases and provide good performance evaluation for the CLD system with imperfect CSI. Simulation results verify the validity of the theoretical analysis. The results show that the CLD system with TAS provides better SE than that with space-time block coding, but the SE and PER performance of the system with imperfect CSI are worse than those with perfect CSI due to the estimation error.
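
    A standard closed-form route to such switching thresholds (a common approximation in the cross-layer literature, not necessarily the authors' exact derivation): fit each mode's PER as a_n*exp(-g_n*snr) and invert at the target packet loss rate; the fit parameters below are illustrative.

```python
import math

P_TARGET = 0.01                 # target packet loss rate (assumed)
modes = {                       # (a_n, g_n) PER-fit parameters (assumed)
    "BPSK":  (67.7, 0.9819),
    "QPSK":  (73.8, 0.4945),
    "16QAM": (191.9, 0.0968),
}

for name, (a, g) in modes.items():
    snr_th = math.log(a / P_TARGET) / g     # linear-SNR switching threshold
    print(f"{name}: switch in at {10 * math.log10(snr_th):.2f} dB")
```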
