Sample records for acceptable error range

  1. Acceptance threshold theory can explain occurrence of homosexual behaviour.

    PubMed

    Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
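
    The cost-sensitive threshold logic described above can be illustrated with a toy signal-detection sketch (not from the paper; the cue distributions and cost values below are invented): shifting a single acceptance threshold trades rejection errors against acceptance errors, and the optimal threshold moves as the costs change.

    ```python
    # Toy illustration of an acceptance-threshold model (illustrative
    # parameters, not taken from the study): overlapping recognition cues
    # mean that any threshold trades one error type against the other.
    import numpy as np
    from scipy.stats import norm

    female_cue = norm(loc=1.0, scale=1.0)   # cue distribution of females
    male_cue = norm(loc=-1.0, scale=1.0)    # overlapping male distribution

    def error_rates(threshold):
        """Accept every individual whose cue exceeds the threshold."""
        rejection_error = female_cue.cdf(threshold)   # females turned away
        acceptance_error = male_cue.sf(threshold)     # males accepted (SSB)
        return rejection_error, acceptance_error

    def optimal_threshold(cost_reject, cost_accept):
        """Pick the threshold minimizing expected cost on a coarse grid."""
        grid = np.linspace(-4, 4, 801)
        costs = [cost_reject * error_rates(t)[0]
                 + cost_accept * error_rates(t)[1] for t in grid]
        return grid[int(np.argmin(costs))]

    # When rejecting females is costly (short search time), the threshold
    # drops and SSB rises; when accepting males is costly, it rises.
    print(optimal_threshold(cost_reject=5.0, cost_accept=1.0))  # permissive
    print(optimal_threshold(cost_reject=1.0, cost_accept=5.0))  # restrictive
    ```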

  2. Hospital-based transfusion error tracking from 2005 to 2010: identifying the key errors threatening patient transfusion safety.

    PubMed

    Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie

    2014-01-01

    This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).

  3. Structure and dating errors in the geologic time scale and periodicity in mass extinctions

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1989-01-01

    Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently-dated flood-basalt volcanic horizons tends to favor the Odin time scale.

  4. Inflation of the type I error: investigations on regulatory recommendations for bioequivalence of highly variable drugs.

    PubMed

    Wonnemann, Meinolf; Frömke, Cornelia; Koch, Armin

    2015-01-01

    We investigated different evaluation strategies for bioequivalence trials with highly variable drugs (HVDs) on their resulting empirical type I error and empirical power. The classical 'unscaled' crossover design with average bioequivalence evaluation, the Add-on concept of the Japanese guideline, and the current 'scaling' approach of EMA were compared. Simulation studies were performed based on the assumption of a single-dose drug administration while changing the underlying intra-individual variability. Inclusion of Add-on subjects following the Japanese concept led to slight increases of the empirical α-error (≈7.5%). For the approach of EMA we noted an unexpected tremendous increase of the rejection rate at a geometric mean ratio of 1.25. Moreover, we detected error rates slightly above the pre-set limit of 5% even at the proposed 'scaled' bioequivalence limits. With the classical 'unscaled' approach and the Japanese guideline concept the goal of reduced subject numbers in bioequivalence trials of HVDs cannot be achieved. On the other hand, widening the acceptance range comes at the price that quite a number of products will be accepted as bioequivalent that would not have been accepted in the past. A two-stage design with control of the global α therefore seems the better alternative.
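
    The unscaled average-bioequivalence evaluation referred to above can be sketched as a small simulation (a simplified illustration, not the authors' code; sample size, CV and trial count are assumed): with the true geometric mean ratio fixed at the 1.25 limit, the fraction of trials declared bioequivalent estimates the empirical type I error.

    ```python
    # Minimal simulation of the empirical type I error of unscaled average
    # bioequivalence: the true geometric mean ratio sits exactly on the
    # 1.25 acceptance limit, so every BE declaration is a type I error.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def empirical_alpha(n_subjects=24, cv_intra=0.40, n_trials=20_000):
        log_limit = np.log(1.25)
        # SD of the within-subject log T/R difference in a 2x2 crossover
        sigma_d = np.sqrt(2 * np.log(1 + cv_intra**2))
        t_crit = stats.t.ppf(0.95, n_subjects - 1)
        hits = 0
        for _ in range(n_trials):
            d = rng.normal(log_limit, sigma_d, n_subjects)
            se = d.std(ddof=1) / np.sqrt(n_subjects)
            lo, hi = d.mean() - t_crit * se, d.mean() + t_crit * se
            hits += (lo > -log_limit) and (hi < log_limit)  # 90% CI inside
        return hits / n_trials

    print(empirical_alpha())  # should hover near the nominal 0.05
    ```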

  5. Color Compatibility of Gingival Shade Guides and Gingiva-Colored Dental Materials with Healthy Human Gingiva.

    PubMed

    Sarmast, Nima D; Angelov, Nikola; Ghinea, Razvan; Powers, John M; Paravina, Rade D

    The CIELab and CIEDE2000 coverage errors (ΔE*COV and ΔE'COV, respectively) of basic shades of different gingival shade guides and gingiva-colored restorative dental materials (n = 5) were calculated as compared to a previously compiled database on healthy human gingiva. Data were analyzed using analysis of variance with the Tukey-Kramer multiple-comparison test (P < .05). A 50:50% acceptability threshold of 4.6 for ΔE* and 4.1 for ΔE' was used to interpret the results. ΔE*COV/ΔE'COV ranged from 4.4/3.5 to 8.6/6.9. The majority of gingival shade guides and gingiva-colored restorative materials exhibited statistically significant coverage errors above the 50:50% acceptability threshold and uneven shade distribution.
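
    Coverage error as used above is the average, over a population of targets, of the best match achievable with a fixed shade guide. A minimal sketch using the simpler ΔE*ab metric (CIEDE2000 omitted for brevity; all Lab values below are invented):

    ```python
    # Sketch of a coverage error computation using CIELab Delta E*ab.
    import numpy as np

    def delta_e_ab(lab1, lab2):
        """Euclidean distance in CIELab space (Delta E*ab)."""
        return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2))

    def coverage_error(gingiva_samples, shade_tabs):
        """Mean, over samples, of the best (minimum) match to any tab."""
        return np.mean([min(delta_e_ab(s, t) for t in shade_tabs)
                        for s in gingiva_samples])

    shade_tabs = [(45.0, 30.0, 15.0), (50.0, 25.0, 12.0)]   # hypothetical
    gingiva = [(47.0, 28.0, 14.0), (52.0, 27.0, 13.0)]      # hypothetical
    print(coverage_error(gingiva, shade_tabs))  # compare to threshold 4.6
    ```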

  6. [Statistical Process Control (SPC) can help prevent treatment errors without increasing costs in radiotherapy].

    PubMed

    Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C

    2010-01-01

    Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier España. All rights reserved.
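
    A minimal sketch of the X-chart logic described above (a simple variant that derives limits from the spread of baseline subgroup means; all set-up error values are invented):

    ```python
    # Minimal X-bar control chart sketch for set-up errors. This variant
    # estimates sigma from baseline subgroup means; the clinic's actual
    # protocol used subgroups of three patients, three times per shift.
    import numpy as np

    def control_limits(baseline_subgroups):
        """Center line and 3-sigma limits from baseline subgroup means."""
        means = baseline_subgroups.mean(axis=1)
        center = means.mean()
        sigma = means.std(ddof=1)
        return center - 3 * sigma, center, center + 3 * sigma

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 2.0, size=(25, 3))  # mm, 25 subgroups of 3
    lcl, cl, ucl = control_limits(baseline)

    new_subgroup = np.array([1.8, 2.4, 2.1])       # one new 3-patient sample
    if not lcl <= new_subgroup.mean() <= ucl:
        print("out of control: stop, find the assignable cause")
    ```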

  7. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10⁻⁴ m², corresponding to ~300-500 m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in the different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of important levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement errors are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
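
    A toy version of the GA inversion, assuming invented satellite geometry and noise-free pseudo-ranges (the real problem adds clock and atmospheric terms, as the abstract notes):

    ```python
    # Toy genetic-algorithm search for a receiver position from four
    # ranges. Geometry and GA settings are illustrative, not the paper's.
    import numpy as np

    rng = np.random.default_rng(42)
    sats = np.array([[15e6, 10e6, 21e6], [-12e6, 18e6, 20e6],
                     [5e6, -20e6, 19e6], [-8e6, -9e6, 23e6]])
    truth = np.array([1.2e6, 2.3e6, 0.9e6])
    ranges = np.linalg.norm(sats - truth, axis=1)  # noise-free ranges

    def cost(pop):
        """Sum of squared range residuals for each candidate position."""
        d = np.linalg.norm(sats[None, :, :] - pop[:, None, :], axis=2)
        return ((d - ranges) ** 2).sum(axis=1)

    pop = rng.uniform(-3e6, 3e6, size=(1000, 3))   # N = 1000 individuals
    for _ in range(300):
        order = np.argsort(cost(pop))
        parents = pop[order[:500]]                 # keep the fitter half
        mates = parents[rng.permutation(500)]
        children = 0.5 * (parents + mates)         # blend crossover
        mutate = rng.random((500, 1)) < 0.35       # ~Pm = 35% of children
        children = children + mutate * rng.normal(0.0, 1e4, size=(500, 3))
        pop = np.vstack([parents, children])

    print(pop[np.argsort(cost(pop))[0]])           # best position estimate
    ```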

  8. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

    This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in TFT measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainties are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4] which in turn amplify curve fitting errors in the [TSH] domain in the lower [FT4] range, (5) memory effects (rate-independent hysteresis effect). When the main uncertainties in thyroid function tests (TFT) are identified and analyzed, we can find the most acceptable model space with which we can construct the best HP function and the related set point area.

  9. [Statistical approach to evaluate the occurrence of out-of-acceptable-range results and accuracy for antimicrobial susceptibility tests in an inter-laboratory quality control program].

    PubMed

    Ueno, Tamio; Matuda, Junichi; Yamane, Nobuhisa

    2013-03-01

    To evaluate the occurrence of out-of-acceptable-range results and the accuracy of antimicrobial susceptibility tests, we applied a new statistical tool to the Inter-Laboratory Quality Control Program established by the Kyushu Quality Control Research Group. First, we defined acceptable ranges of minimum inhibitory concentration (MIC) for broth microdilution tests and of inhibitory zone diameter for disk diffusion tests on the basis of Clinical and Laboratory Standards Institute (CLSI) document M100-S21. In the analysis, more than two out-of-acceptable-range results in the 20 tests were considered not allowable according to the CLSI document. Of the 90 participating laboratories, 46 (51%) experienced one or more occurrences of out-of-acceptable-range results. Then, a binomial test was applied to each participating laboratory. The results indicated that the occurrences of out-of-acceptable-range results in 11 laboratories were significantly higher than the CLSI recommendation (allowable rate ≤ 0.05). Standard deviation indices (SDI) were calculated using the reported results and the mean and standard deviation values for the respective antimicrobial agents tested. In the evaluation of accuracy, the mean SDI value from each laboratory was statistically compared with zero using a Student's t-test. The results revealed that 5 of the 11 above laboratories reported erroneous test results that systematically drifted to the side of resistance. In conclusion, our statistical approach has enabled us to detect significantly higher occurrences and sources of interpretive errors in antimicrobial susceptibility tests; therefore, this approach can provide us with additional information that can improve the accuracy of test results in clinical microbiology laboratories.
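
    The per-laboratory binomial test described above can be reproduced in a few lines (the count is hypothetical; 20 tests per survey and an allowable rate of 0.05 follow the abstract):

    ```python
    # Is a laboratory's count of out-of-acceptable-range results (out of
    # 20 tests) significantly higher than an allowable rate of 0.05?
    from scipy.stats import binomtest

    out_of_range = 4      # hypothetical count for one laboratory
    result = binomtest(out_of_range, n=20, p=0.05, alternative="greater")
    print(result.pvalue)  # flag the laboratory if below 0.05
    ```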

  10. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    PubMed Central

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications (e.g., an XY travel range of 50 mm × 50 mm with sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer, two-stage architecture) are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is demonstrated by a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axes, respectively. PMID:26761014

  11. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (the standard deviate). Various KE-values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
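
    The arithmetic behind the abstract's error estimate, reproduced using its own sensitivity figures (the linear combination below matches the quoted total; the extreme-temperature example values are illustrative):

    ```python
    # Error propagation from the abstract, as plain arithmetic.
    sens_KE = 0.5    # deg C of stream temperature per unit error in KE
    sens_Ta = 0.16   # deg C of stream temperature per deg C air-temp error
    dKE, dTa = 1.0, 2.0
    dTs = sens_KE * dKE + sens_Ta * dTa
    print(dTs)       # ~0.8 deg C, matching the abstract

    # The extreme-temperature estimate itself: mean of the partial maximum
    # series plus KE times its standard deviation (illustrative values).
    mean_Tmax, sd_Tmax, KE = 28.0, 1.5, 7.5
    print(mean_Tmax + KE * sd_Tmax)
    ```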

  12. Measuring human remains in the field: Grid technique, total station, or MicroScribe?

    PubMed

    Sládek, Vladimír; Galeta, Patrik; Sosna, Daniel

    2012-09-10

    Although three-dimensional (3D) coordinates for human intra-skeletal landmarks are among the most important data that anthropologists have to record in the field, little is known about the reliability of various measuring techniques. We compared the reliability of three techniques used for 3D measurement of human remains in the field: grid technique (GT), total station (TS), and MicroScribe (MS). We measured 365 field osteometric points on 12 skeletal sequences excavated at the Late Medieval/Early Modern churchyard in Všeruby, Czech Republic. We compared intra-observer, inter-observer, and inter-technique variation using mean difference (MD), mean absolute difference (MAD), standard deviation of difference (SDD), and limits of agreement (LA). All three measuring techniques can be used when accepted error ranges can be measured in centimeters. When a range of accepted error measurable in millimeters is needed, MS offers the best solution. TS can achieve the same reliability as does MS, but only when the laser beam is accurately pointed into the center of the prism. When the prism is not accurately oriented, TS produces unreliable data. TS is more sensitive to initialization than is MS. GT measures the human skeleton with acceptable reliability for general purposes but insufficiently when highly accurate skeletal data are needed. We observed high inter-technique variation, indicating that just one technique should be used when spatial data from one individual are recorded. Subadults are measured with slightly lower error than are adults. The effect of maximum excavated skeletal length has little practical significance in field recording. When MS is not available, we offer practical suggestions that can help to increase reliability when measuring the human skeleton in the field. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
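
    A short sketch of the agreement statistics named above, for paired coordinate measurements from two techniques (data invented):

    ```python
    # MD, MAD, SDD, and 95% limits of agreement for paired measurements.
    import numpy as np

    a = np.array([102.1, 98.4, 110.2, 95.7, 101.3])   # e.g. GT, in mm
    b = np.array([101.5, 99.0, 109.1, 96.2, 100.8])   # e.g. MS, in mm

    diff = a - b
    md = diff.mean()                          # mean difference (bias)
    mad = np.abs(diff).mean()                 # mean absolute difference
    sdd = diff.std(ddof=1)                    # SD of differences
    loa = (md - 1.96 * sdd, md + 1.96 * sdd)  # 95% limits of agreement
    print(md, mad, sdd, loa)
    ```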

  13. Freeform solar concentrator with a highly asymmetric acceptance cone

    NASA Astrophysics Data System (ADS)

    Wheelwright, Brian; Angel, J. Roger P.; Coughenour, Blake; Hammer, Kimberly

    2014-10-01

    A solar concentrator with a highly asymmetric acceptance cone is investigated. Concentrating photovoltaic systems require dual-axis sun tracking to maintain nominal concentration throughout the day. In addition to collecting direct rays from the solar disk, which subtends ~0.53 degrees, concentrating optics must allow for in-field tracking errors due to mechanical misalignment of the module, wind loading, and control loop biases. The angular range over which the concentrator maintains <90% of on-axis throughput is defined as the optical acceptance angle. Concentrators with substantial rotational symmetry likewise exhibit rotationally symmetric acceptance angles. In the field, this is sometimes a poor match with azimuth-elevation trackers, which have inherently asymmetric tracking performance. Pedestal-mounted trackers with low torsional stiffness about the vertical axis have better elevation tracking than azimuthal tracking. Conversely, trackers which rotate on large-footprint circular tracks are often limited by elevation tracking performance. We show that a line-focus concentrator, composed of a parabolic trough primary reflector and freeform refractive secondary, can be tailored to have a highly asymmetric acceptance angle. The design is suitable for a tracker with excellent tracking accuracy in the elevation direction, and poor accuracy in the azimuthal direction. In the 1000X design given, when trough optical errors (2 mrad rms slope deviation) are accounted for, the azimuthal acceptance angle is ±1.65°, while the elevation acceptance angle is only ±0.29°. This acceptance angle does not include the angular width of the sun, which consumes nearly all of the elevation tolerance at this concentration level. By decreasing the average concentration, the elevation acceptance angle can be increased. This is well-suited for a pedestal alt-azimuth tracker with a low cost slew bearing (without anti-backlash features).

  14. Ultrasound transducer function: annual testing is not sufficient.

    PubMed

    Mårtensson, Mattias; Olsson, Mats; Brodin, Lars-Åke

    2010-10-01

    The objective was to follow up the study 'High incidence of defective ultrasound transducers in use in routine clinical practice' and evaluate whether annual testing is good enough to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level. A total of 299 transducers were tested in 13 clinics at five hospitals in the Stockholm area. Approximately 7000-15,000 ultrasound examinations are carried out at these clinics every year. The transducers tested in the study had been tested and classified as fully operational 1 year before and had since been in normal use in routine clinical practice. The transducers were tested with the Sonora FirstCall Test System. There were 81 (27.1%) defective transducers found, giving a 95% confidence interval ranging from 22.1 to 32.1%. The most common transducer errors were 'delamination' of the ultrasound lens and 'break in the cable', which together constituted 82.7% of all transducer errors found. The highest error rate was found at the radiological clinics, with a mean error rate of 36.0%. There was a significant difference in error rate between the two observed ways the clinics handled the transducers. There was no significant difference in the error rates of the transducer brands or the transducer models. Annual testing is not sufficient to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level, and it is strongly advisable to create a user routine that minimizes the handling of the transducers.

  15. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing the model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
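
    A minimal sketch of the autoregressive error model's role (illustrative, not the paper's implementation): residuals are whitened with an AR(1) coefficient before the Gaussian likelihood is evaluated, so correlated errors are scored correctly.

    ```python
    # Conditional Gaussian log-likelihood of residuals under an AR(1)
    # error model (the first-sample stationary term is omitted here).
    import numpy as np

    def ar1_loglik(residuals, phi, sigma):
        innov = residuals[1:] - phi * residuals[:-1]  # whitened innovations
        n = innov.size
        return (-0.5 * n * np.log(2 * np.pi * sigma**2)
                - 0.5 * np.sum(innov**2) / sigma**2)

    rng = np.random.default_rng(3)
    e = np.zeros(200)
    for t in range(1, 200):              # simulate correlated errors
        e[t] = 0.6 * e[t - 1] + rng.normal(0, 0.1)

    # The correlated model should beat the uncorrelated (phi = 0) one.
    print(ar1_loglik(e, phi=0.6, sigma=0.1) > ar1_loglik(e, phi=0.0, sigma=0.1))
    ```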

  16. The Reliability of Pedalling Rates Employed in Work Tests on the Bicycle Ergometer.

    ERIC Educational Resources Information Center

    Bolonchuk, W. W.

    The purpose of this study was to determine whether a group of volunteer subjects could produce and maintain a pedalling cadence within an acceptable range of error. This, in turn, would aid in determining the reliability of pedalling rates employed in work tests on the bicycle ergometer. Forty male college students were randomly given four…

  17. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine interfaces; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  18. Evaluation of real-time data obtained from gravimetric preparation of antineoplastic agents shows medication errors with possible critical therapeutic impact: Results of a large-scale, multicentre, multinational, retrospective study.

    PubMed

    Terkola, R; Czejka, M; Bérubé, J

    2017-08-01

    Medication errors are a significant cause of morbidity and mortality especially with antineoplastic drugs, owing to their narrow therapeutic index. Gravimetric workflow software systems have the potential to reduce volumetric errors during intravenous antineoplastic drug preparation which may occur when verification is reliant on visual inspection. Our aim was to detect medication errors with possible critical therapeutic impact as determined by the rate of prevented medication errors in chemotherapy compounding after implementation of gravimetric measurement. A large-scale, retrospective analysis of data was carried out, related to medication errors identified during preparation of antineoplastic drugs in 10 pharmacy services ("centres") in five European countries following the introduction of an intravenous workflow software gravimetric system. Errors were defined as errors in dose volumes outside tolerance levels, identified during weighing stages of preparation of chemotherapy solutions which would not otherwise have been detected by conventional visual inspection. The gravimetric system detected that 7.89% of the 759 060 doses of antineoplastic drugs prepared at participating centres between July 2011 and October 2015 had error levels outside the accepted tolerance range set by individual centres, and prevented these doses from reaching patients. The proportion of antineoplastic preparations with deviations >10% ranged from 0.49% to 5.04% across sites, with a mean of 2.25%. The proportion of preparations with deviations >20% ranged from 0.21% to 1.27% across sites, with a mean of 0.71%. There was considerable variation in error levels for different antineoplastic agents. Introduction of a gravimetric preparation system for antineoplastic agents detected and prevented dosing errors which would not have been recognized with traditional methods and could have resulted in toxicity or suboptimal therapeutic outcomes for patients undergoing anticancer treatment. © 2017 The Authors. Journal of Clinical Pharmacy and Therapeutics Published by John Wiley & Sons Ltd.
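
    The gravimetric check itself reduces to simple arithmetic; a sketch with invented numbers and an assumed 5% tolerance:

    ```python
    # A dose is flagged (and the error prevented) when the weighed amount
    # deviates from the target by more than the centre's tolerance.
    def check_dose(target_g, measured_g, tolerance=0.05):
        deviation = (measured_g - target_g) / target_g
        return abs(deviation) <= tolerance, deviation

    ok, dev = check_dose(target_g=12.50, measured_g=11.80)
    print(ok, f"{dev:+.1%}")  # False, -5.6% -> stop before it reaches the patient
    ```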

  19. Pencil beam proton radiography using a multilayer ionization chamber

    NASA Astrophysics Data System (ADS)

    Farace, Paolo; Righetto, Roberto; Meijers, Arturs

    2016-06-01

    A pencil beam proton radiography (PR) method, using a commercial multilayer ionization chamber (MLIC) integrated with a treatment planning system (TPS), was developed. A Giraffe (IBA Dosimetry) MLIC (±0.5 mm accuracy) was used to obtain pencil beam PR by delivering spots uniformly positioned at a 5.0 mm distance in a 9 × 9 square of spots. PRs of an electron-density phantom (with tissue-equivalent inserts) and a head phantom were acquired. The integral depth dose (IDD) curves of the delivered spots were computed by the TPS in a volume of water simulating the MLIC, and virtually added to the CT at the exit side of the phantoms. For each spot, the measured and calculated IDDs were overlapped in order to compute a map of range errors. On the head phantom, the maximum dose from PR acquisition was estimated. Additionally, on the head phantom the impact of a 1 mm position misalignment on the range error map was estimated. In the electron-density phantom, range errors were within 1 mm in the soft-tissue rods, but greater in the dense rod. In the head phantom the range errors were -0.9 ± 2.7 mm on the whole map and within 1 mm in the brain area. On both phantoms greater errors were observed at inhomogeneity interfaces, due to sensitivity to small misalignments and inaccurate TPS dose computation. The effect of the 1 mm misalignment was clearly visible on the range error map and produced an increased spread of range errors (-1.0 ± 3.8 mm on the whole map). The dose to the patient from such PR acquisitions would be acceptable, as the maximum dose to the head phantom was <2 cGyE. With the described 2D method, which allows misalignments to be discriminated, range verification can be performed in selected areas to implement an in vivo quality assurance program.
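
    One plausible way to compute the per-spot range error described above is to find the depth shift that best aligns the calculated IDD with the measured one (a sketch with toy Bragg-like curves, not the authors' code):

    ```python
    # Range error as the depth shift minimizing the RMS difference
    # between measured and TPS-calculated integral depth dose curves.
    import numpy as np

    def range_error(depth, idd_meas, idd_calc,
                    shifts=np.linspace(-10, 10, 401)):
        """Depth shift (mm) giving the best match between the IDDs."""
        rms = [np.sqrt(np.mean(
                  (idd_meas - np.interp(depth - s, depth, idd_calc))**2))
               for s in shifts]
        return shifts[int(np.argmin(rms))]

    depth = np.linspace(0, 150, 301)                  # mm
    calc = np.exp(-0.5 * ((depth - 100) / 4) ** 2)    # toy Bragg-like peak
    meas = np.exp(-0.5 * ((depth - 102.5) / 4) ** 2)  # 2.5 mm deeper
    print(range_error(depth, meas, calc))             # ~2.5 mm
    ```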

  1. Evidence-based pathology: umbilical cord coiling.

    PubMed

    Khong, T Y

    2010-12-01

    The generation of a pathology test result must be based on criteria that are proven to be acceptably reproducible and clinically relevant to be evidence-based. This review de-constructs the umbilical cord coiling index to illustrate how it can stray from being evidence-based. Publications related to umbilical cord coiling were retrieved and analysed with regard to how the umbilical coiling index was calculated, abnormal coiling was defined and reference ranges were constructed. Errors and other influences that can occur with the measurement of the length of the umbilical cord or of the number of coils can compromise the generation of the coiling index. Definitions of abnormal coiling are not consistent in the literature. Reference ranges defining hypocoiling or hypercoiling have not taken those potential errors or the possible effect of gestational age into account. Even the way numerical test results are generated in anatomical pathology, as illustrated by the umbilical coiling index, warrants a critical analysis of its evidence base to ensure that results are reproducible and free from errors.

  2. Detection and avoidance of errors in computer software

    NASA Technical Reports Server (NTRS)

    Kinsler, Les

    1989-01-01

    The acceptance test errors of a computer software project were analyzed to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during acceptance testing. These acceptance test errors were first categorized by method of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness at avoiding and detecting errors.

  3. Evaluation of a point-of-care glucose and β-hydroxybutyrate meter operated in various environmental conditions in prepartum and postpartum sheep.

    PubMed

    Hornig, Katlin J; Byers, Stacey R; Callan, Robert J; Holt, Timothy; Field, Megan; Han, Hyungchul

    2013-08-01

    To compare β-hydroxybutyrate (BHB) and glucose concentrations measured with a dual-purpose point-of-care (POC) meter designed for use in humans and a laboratory biochemical analyzer (LBA) to determine whether the POC meter would be reliable for on-farm measurement of blood glucose and BHB concentrations in sheep in various environmental conditions and nutritional states. 36 pregnant mixed-breed ewes involved in a maternal feed restriction study. Blood samples were collected from each sheep at multiple points throughout gestation and lactation to allow for tracking of gradually increasing metabolic hardship. Whole blood glucose and BHB concentrations were measured with the POC meter and compared with serum results obtained with an LBA. 464 samples were collected. Whole blood BHB concentrations measured with the POC meter compared well with LBA results, and error grid analysis showed the POC values were acceptable. Whole blood glucose concentrations measured with the POC meter had more variation, compared with LBA values, over the glucose ranges evaluated. Results of error grid analysis of POC-measured glucose concentrations were not acceptable, indicating errors likely to result in needless treatment with glucose or other supplemental energy sources in normoglycemic sheep. The POC meter was user-friendly and performed well across a wide range of conditions. The meter was adequate for detection of pregnancy toxemia in sheep via whole blood BHB concentration. Results should be interpreted with caution when the POC meter is used to measure blood glucose concentrations.

  4. Test-retest reliability of jump execution variables using mechanography: a comparison of jump protocols.

    PubMed

    Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N

    2018-05-01

    Mechanography during the vertical jump may enhance screening and help determine mechanistic causes underlying changes in physical performance. The utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump with squat depth self-selected and controlled using a goniometer to 80° knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between the controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
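
    A sketch of the three reliability quantities named above for one force-time variable over two sessions (invented data; the ICC form here is the two-way consistency ICC(3,1), one common choice):

    ```python
    # Systematic error (effect size), random error (CV%), and ICC(3,1)
    # for a variable measured in two test-retest sessions.
    import numpy as np

    s1 = np.array([31.2, 28.5, 35.1, 29.9, 33.4, 30.8])  # session 1
    s2 = np.array([31.9, 28.1, 34.6, 30.5, 33.9, 31.0])  # session 2

    pooled_sd = np.sqrt((s1.var(ddof=1) + s2.var(ddof=1)) / 2)
    effect_size = (s2.mean() - s1.mean()) / pooled_sd    # systematic error

    diff = s2 - s1
    typical_error = diff.std(ddof=1) / np.sqrt(2)
    cv_pct = 100 * typical_error / np.mean((s1 + s2) / 2)  # random error

    x = np.vstack([s1, s2]).T                 # subjects x sessions
    n, k = x.shape
    msb = k * x.mean(axis=1).var(ddof=1)      # between-subjects mean square
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + x.mean()
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
    icc_31 = (msb - mse) / (msb + (k - 1) * mse)
    print(effect_size, cv_pct, icc_31)
    ```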

  5. [Comparability study of analytical results between a group of clinical laboratories].

    PubMed

    Alsius-Serra, A; Ballbé-Anglada, M; López-Yeste, M L; Buxeda-Figuerola, M; Guillén-Campuzano, E; Juan-Pereira, L; Colomé-Mallolas, C; Caballé-Martín, I

    2015-01-01

    To describe the study of the comparability of the measurement levels of biological tests processed in biochemistry in Catlab's 4 laboratories. Quality requirements, coefficients of variation and total error (CV% and TE%) were established. Controls were verified against the precision requirements (CV%) for each test and each individual laboratory analyser. Fresh serum samples were used for the comparability study. The differences were analysed using a Microsoft Access® application that produces modified Bland-Altman plots. The comparison of 32 biological parameters that are performed in more than one laboratory and/or analyser generated 306 Bland-Altman graphs. Of these, 101 (33.1%) fell within the accepted range of values based on biological variability, and 205 (66.9%) required revision. Data were re-analysed based on consensus minimum specifications for analytical quality (consensus of the Asociación Española de Farmacéuticos Analistas (AEFA), the Sociedad Española de Bioquímica Clínica y Patología Molecular (SEQC), the Asociación Española de Biopatología Médica (AEBM) and the Sociedad Española de Hematología y Hemoterapia (SEHH), October 2013). With the new specifications, 170 comparisons (56%) fitted the requirements and 136 (44%) required additional review. Taking into account the number of points that exceeded the requirement, the random errors, the range of results in which discrepancies were detected, and the range of clinical decision, it was shown that the 44% that required review were acceptable, and the 32 tests were comparable across all laboratories and analysers. The analysis of the results showed that the consensus requirements of the 4 scientific societies were met. However, each laboratory should aim to meet stricter criteria for total error. Copyright © 2015 SECA. Published by Elsevier España. All rights reserved.

  6. Boundary overlap for medical image segmentation evaluation

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina

    2017-03-01

    All medical image segmentation algorithms need to be validated and compared, and yet no evaluation framework is widely accepted within the imaging community. Collections of segmentation results often need to be compared and ranked by their effectiveness. Evaluation measures which are popular in the literature are based on region overlap or boundary distance. None of these are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, shape) but no single measure covers all error types. We introduce a new family of measures, with hybrid characteristics. These measures quantify similarity/difference of segmented regions by considering their overlap around the region boundaries. This family is more sensitive than other measures in the literature to combinations of segmentation error types. We compare measure performance on collections of segmentation results sourced from carefully compiled 2D synthetic data, and also on 3D medical image volumes. We show that our new measure: (1) penalises errors successfully, especially those around region boundaries; (2) gives a low similarity score when existing measures disagree, thus avoiding overly inflated scores; and (3) scores segmentation results over a wider range of values. We consider a representative measure from this family and the effect of its only free parameter on error sensitivity, typical value range, and running time.
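
    One plausible member of such a family restricts an overlap measure to a band around each region's boundary; a sketch (not the authors' exact definition), with band width as the free parameter:

    ```python
    # Dice overlap restricted to a band around each region's boundary.
    import numpy as np
    from scipy.ndimage import binary_dilation, binary_erosion

    def boundary_band(mask, width):
        """Voxels within `width` dilations/erosions of the boundary."""
        return (binary_dilation(mask, iterations=width)
                & ~binary_erosion(mask, iterations=width))

    def boundary_dice(seg, ref, width=2):
        a, b = boundary_band(seg, width), boundary_band(ref, width)
        inter = np.logical_and(a, b).sum()
        return 2 * inter / (a.sum() + b.sum())

    seg = np.zeros((64, 64), bool); seg[20:40, 20:40] = True
    ref = np.zeros((64, 64), bool); ref[22:42, 20:40] = True  # shifted by 2
    print(boundary_dice(seg, ref))
    ```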

  7. Design of a Pneumatic Tool for Manual Drilling Operations in Confined Spaces

    NASA Astrophysics Data System (ADS)

    Janicki, Benjamin

    This master's thesis describes the design process and testing results for a pneumatically actuated, manually-operated tool for confined space drilling operations. The purpose of this device is to back-drill pilot holes inside a commercial airplane wing. It is lightweight, and a "locator pin" enables the operator to align the drill over a pilot hole. A suction pad stabilizes the system, and an air motor and flexible drive shaft power the drill. Two testing procedures were performed to determine the practicality of this prototype. The first was the "offset drill test", which quantified the exit-hole position error due to an initial position error relative to the original pilot hole. The results displayed a linear relationship, and it was determined that position errors of less than .060" would prevent the need for rework, with errors of up to .030" considered acceptable. For the second test, a series of holes were drilled with the pneumatic tool and analyzed for position error, diameter range, and cycle time. The position errors and hole diameter range were within the allowed tolerances. The average cycle time was 45 seconds, 73 percent of which was for drilling the hole, and 27 percent of which was for positioning the device. Recommended improvements are discussed in the conclusion, and include a more durable flexible drive shaft, a damper for drill feed control, and a more stable locator pin.

  8. Precise Orbit Determination for GEOSAT Follow-On Using Satellite Laser Ranging Data and Intermission Altimeter Crossovers

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Rowlands, David D.; Luthcke, Scott B.; Zelensky, Nikita P.; Chinn, Douglas S.; Pavlis, Despina E.; Marr, Gregory

    2001-01-01

    The US Navy's GEOSAT Follow-On Spacecraft was launched on February 10, 1998 with the primary objective of the mission to map the oceans using a radar altimeter. Following an extensive set of calibration campaigns in 1999 and 2000, the US Navy formally accepted delivery of the satellite on November 29, 2000. Satellite laser ranging (SLR) and Doppler (Tranet-style) beacons track the spacecraft. Although limited amounts of GPS data were obtained, the primary mode of tracking remains satellite laser ranging. The GFO altimeter measurements are highly precise, with orbit error the largest component in the error budget. We have tuned the non-conservative force model for GFO and the gravity model using SLR, Doppler and altimeter crossover data sampled over one year. Gravity covariance projections to 70x70 show the radial orbit error on GEOSAT was reduced from 2.6 cm in EGM96 to 1.3 cm with the addition of SLR, GFO/GFO and TOPEX/GFO crossover data. Evaluation of the gravity fields using SLR and crossover data support the covariance projections and also show a dramatic reduction in geographically-correlated error for the tuned fields. In this paper, we report on progress in orbit determination for GFO using GFO/GFO and TOPEX/GFO altimeter crossovers. We will discuss improvements in satellite force modeling and orbit determination strategy, which allows reduction in GFO radial orbit error from 10-15 cm to better than 5 cm.

  9. Submillimeter, millimeter, and microwave spectral line catalogue

    NASA Technical Reports Server (NTRS)

    Poynter, R. L.; Pickett, H. M.

    1980-01-01

    A computer-accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 μm) is discussed. The catalogue was used as a planning guide and as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, the lower state energy, and the quantum number assignment. The catalogue was constructed by using theoretical least-squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances.

  10. A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.

    2007-01-01

    Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work attempts to address two primary objectives. The first objective is to develop a systematic methodology for approximating realistic 3-D fluid actuators, using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.

  11. Life satisfaction and self-reported problems after spinal cord injury: measurement of underlying dimensions.

    PubMed

    Krause, James S; Reed, Karla S

    2009-08-01

    Evaluate the utility of the current 7-scale structure of the Life Situation Questionnaire-Revised (LSQ-R) using confirmatory factor analysis (CFA) and explore the factor structure of each set of items. Adults (N = 1,543) with traumatic spinal cord injury (SCI) were administered the 20 satisfaction and 30 problems items from the LSQ-R. CFA suggests that the existing 7-scale structure across the 50 items was within the acceptable range (root-mean-square error of approximation [RMSEA] = 0.078), although it fell just outside of this range for women. Factor analysis revealed 3 satisfaction factors and 6 problems factors. The overall fit of the problems items (RMSEA = 0.070) was superior to that of the satisfaction items (RMSEA = 0.080). RMSEA fell just outside of the acceptable range for Whites and men on the satisfaction scales. All scales had acceptable internal consistency. Results suggest the original scoring of the LSQ-R remains viable, although individual results should be reviewed for special populations. Factor analysis of subsets of items allows satisfaction and problems items to be used independently, depending on the study purpose. (c) 2009 APA

  12. Estimating body fat in NCAA Division I female athletes: a five-compartment model validation of laboratory methods.

    PubMed

    Moon, Jordan R; Eckerson, Joan M; Tobkin, Sarah E; Smith, Abbie E; Lockwood, Christopher M; Walter, Ashley A; Cramer, Joel T; Beck, Travis W; Stout, Jeffrey R

    2009-01-01

    The purpose of the present study was to determine the validity of various laboratory methods for estimating percent body fat (%fat) in NCAA Division I college female athletes (n = 29; 20 ± 1 yr). Body composition was assessed via hydrostatic weighing (HW), air displacement plethysmography (ADP), and dual-energy X-ray absorptiometry (DXA), and estimates of %fat derived using 4-compartment (4C), 3C, and 2C models were compared to a criterion 5C model that included bone mineral content, body volume (BV), total body water, and soft tissue mineral. The Wang-4C and the Siri-3C models produced nearly identical values compared to the 5C model (r > 0.99, total error (TE) < 0.40%fat). For the remaining laboratory methods, constant error values (CE) ranged from -0.04%fat (HW-Siri) to -3.71%fat (DXA); r values ranged from 0.89 (ADP-Siri, ADP-Brozek) to 0.93 (DXA); standard error of estimate values ranged from 1.78%fat (DXA) to 2.19%fat (ADP-Siri, ADP-Brozek); and TE values ranged from 2.22%fat (HW-Brozek) to 4.90%fat (DXA). The limits of agreement for DXA (-10.10 to 2.68%fat) were the largest, with a significant trend of -0.43 (P < 0.05). With the exception of DXA, all of the equations resulted in acceptable TE values (<3.08%fat). However, the results for individual estimates of %fat using the Brozek equation indicated that the 2C models that derived BV from ADP and HW overestimated (5.38, 3.65%) and underestimated (5.19, 4.88%) %fat, respectively. The acceptable TE values for both HW and ADP suggest that these methods are valid for estimating %fat in college female athletes; however, the Wang-4C and Siri-3C models should be used to identify individual estimates of %fat in this population.
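
    The validity statistics reported above (CE, SEE, TE) can be computed as follows (illustrative %fat values, not study data):

    ```python
    # Constant error, standard error of estimate, and total error for a
    # set of %fat estimates against the criterion model.
    import numpy as np

    criterion = np.array([22.1, 25.4, 19.8, 27.3, 24.0])  # 5C model %fat
    estimate = np.array([21.5, 25.9, 18.9, 26.1, 23.2])   # 2C model %fat

    ce = (estimate - criterion).mean()                    # constant error
    slope, intercept = np.polyfit(criterion, estimate, 1)
    resid = estimate - (slope * criterion + intercept)
    see = resid.std(ddof=2)                               # SD about the line
    te = np.sqrt(((estimate - criterion) ** 2).mean())    # total error
    print(ce, see, te)
    ```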

  13. A regret-induced status-quo bias

    PubMed Central

    Nicolle, A.; Fleming, S.M.; Bach, D.R.; Driver, J.; Dolan, R. J.

    2011-01-01

    A suboptimal bias towards accepting the ‘status-quo’ option in decision-making is well established behaviorally, but the underlying neural mechanisms are less clear. Behavioral evidence suggests the emotion of regret is higher when errors arise from rejection rather than acceptance of a status-quo option. Such asymmetry in the genesis of regret might drive the status-quo bias on subsequent decisions, if indeed erroneous status-quo rejections have a greater neuronal impact than erroneous status-quo acceptances. To test this, we acquired human fMRI data during a difficult perceptual decision task that incorporated a trial-to-trial intrinsic status-quo option, with explicit signaling of outcomes (error or correct). Behaviorally, experienced regret was higher after an erroneous status-quo rejection compared to acceptance. Anterior insula and medial prefrontal cortex showed increased BOLD signal after such status-quo rejection errors. In line with our hypothesis, a similar pattern of signal change predicted acceptance of the status-quo on a subsequent trial. Thus, our data link a regret-induced status-quo bias to error-related activity on the preceding trial. PMID:21368043

  14. Glycosylated haemoglobin: measurement and clinical use.

    PubMed

    Peacock, I

    1984-08-01

    The discovery, biochemistry, laboratory determination, and clinical application of glycosylated haemoglobins are reviewed. Sources of error are discussed in detail. No single assay method is suitable for all purposes, and in the foreseeable future generally acceptable standards and reference ranges are unlikely to be agreed. Each laboratory must establish its own. Nevertheless, the development of glycosylated haemoglobin assays is an important advance. They offer the best available means of assessing diabetic control.

  15. Body position reproducibility and joint alignment stability criticality on a muscular strength research device

    NASA Astrophysics Data System (ADS)

    Nunez, F.; Romero, A.; Clua, J.; Mas, J.; Tomas, A.; Catalan, A.; Castellsaguer, J.

    2005-08-01

    MARES (Muscle Atrophy Research and Exercise System) is a computerized ergometer for neuromuscular research to be flown and installed onboard the International Space Station in 2007. The validity of the data acquired depends on controlling and reducing all significant error sources. One of them is the misalignment of the joint rotation axis with respect to the motor axis. The error induced on the measurements is proportional to the misalignment between the two axes. Therefore, the restraint system's performance is critical [1]. The MARES HRS (Human Restraint System) assures alignment within an acceptable range while performing the exercise (elbow movement: 13.94 ± 5.45 mm; knee movement: 22.36 ± 6.06 mm) and reproducibility of human positioning (elbow movement: 2.82 ± 1.56 mm; knee movement: 7.45 ± 4.8 mm). These results allow limiting the measurement errors induced by misalignment.

  16. Performance comparison for Barnes model 12-1000, Exotech model 100, and Ideas Inc. Biometer Mark 2

    NASA Technical Reports Server (NTRS)

    Robinson, B. (Principal Investigator)

    1981-01-01

    Results of the tests show that all channels of all instruments, except channel 3 of the Biometer Mark 2, were stable, were linear in response to input signals, and were adequately stable in response to temperature changes. The Biometer Mark 2 is labelled with an inappropriate description of the units measured, and its dynamic range is inappropriate for field measurements, causing unnecessarily high fractional errors. This instrument is, therefore, quantization limited. The dynamic range and noise performance of the Model 12-1000 are appropriate for remote sensing field research. The field of view and performance of the Model 100A and the Model 12-1000 are satisfactory. The Biometer Mark 2 has not, as yet, been satisfactorily equipped with an acceptable field-of-view determining device. Neither the widely used aperture plate nor the 24 deg cone is acceptable.

  17. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  18. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  19. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  20. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  1. The difficult task of assessing perimortem and postmortem fractures on the skeleton: a blind test on 210 fractures of known origin.

    PubMed

    Cappella, Annalisa; Amadasi, Alberto; Castoldi, Elisa; Mazzarelli, Debora; Gaudio, Daniel; Cattaneo, Cristina

    2014-11-01

    The distinction between perimortem and postmortem fractures is an important challenge for forensic anthropology. Such a crucial task is presently based on macro-morphological criteria widely accepted in the scientific community. However, these criteria suffer from several limitations that have not yet been investigated thoroughly. This study aims at highlighting the pitfalls and errors in evaluating perimortem or postmortem fractures. Two trained forensic anthropologists were asked to classify 210 fractures of known origin in four skeletons (three victims of blunt force trauma and one natural death) as perimortem, postmortem, or dubious, twice within 6 months, in order also to assess intraobserver error. Results show large errors, ranging from 14.8 to 37% for perimortem fractures and from 5.5 to 14.8% for postmortem ones; more than 80% of errors concerned trabecular bone. This supports the need for more objective and reliable criteria for a correct assessment of peri- and postmortem bone fractures. © 2014 American Academy of Forensic Sciences.

  2. Study on the Rationality and Validity of Probit Models of Domino Effect to Chemical Process Equipment caused by Overpressure

    NASA Astrophysics Data System (ADS)

    Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong

    2013-04-01

    Overpressure is one important cause of domino effects in accidents involving chemical process equipment. Models giving propagation probabilities and threshold values for the domino effect caused by overpressure have been proposed in previous studies. In order to test the rationality and validity of the models reported in the references, the two boundary values separating the three reported damage degrees were treated as random variables over the interval [0, 100%]. Based on the overpressure data for damage to the equipment, the observed damage states, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage-probability models were calculated with random boundary values, yielding a relationship between mean square error and the two boundary values. The minimum mean square error obtained in this way decreases by only about 3% compared with the result of the present work. The error is therefore within the acceptable range for engineering applications, and the reported models can be considered reasonable and valid.
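
    As a rough illustration of the boundary-value search described above, the sketch below treats the two damage-degree boundaries as free parameters and grid-searches for the minimum mean square error. All data, names, and the simple three-degree classifier are invented for illustration; they are not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations: overpressure (kPa) and observed damage degree (0, 1, 2).
overpressure = rng.uniform(5, 100, size=200)
true_b1, true_b2 = 30.0, 70.0  # "unknown" boundaries used to simulate damage states
damage = np.digitize(overpressure, [true_b1, true_b2])

def mse_for_boundaries(b1, b2, x, observed):
    """Mean square error between predicted and observed damage degree."""
    predicted = np.digitize(x, [b1, b2])
    return np.mean((predicted - observed) ** 2)

# Grid search over the two boundary values (b1 < b2), mirroring the abstract's
# treatment of the boundaries as random variables over their full interval.
b1_grid = np.linspace(1, 99, 99)
best = min(
    ((mse_for_boundaries(b1, b2, overpressure, damage), b1, b2)
     for b1 in b1_grid for b2 in b1_grid if b2 > b1),
    key=lambda t: t[0],
)
print(f"minimum MSE {best[0]:.4f} at boundaries ({best[1]:.0f}, {best[2]:.0f})")
```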

  3. Reevaluating Recovery: Perceived Violations and Preemptive Interventions on Emergency Psychiatry Rounds

    PubMed Central

    Cohen, Trevor; Blatter, Brett; Almeida, Carlos; Patel, Vimla L.

    2007-01-01

    Objective Contemporary error research suggests that the quest to eradicate error is misguided. Error commission, detection, and recovery are an integral part of cognitive work, even at the expert level. In collaborative workspaces, the perception of potential error is directly observable: workers discuss and respond to perceived violations of accepted practice norms. As perceived violations are captured and corrected preemptively, they do not fit Reason’s widely accepted definition of error as “failure to achieve an intended outcome.” However, perceived violations suggest the aversion of potential error, and consequently have implications for error prevention. This research aims to identify and describe perceived violations of the boundaries of accepted procedure in a psychiatric emergency department (PED), and how they are resolved in practice. Design Clinical discourse from fourteen PED patient rounds was audio-recorded. Excerpts from recordings suggesting perceived violations or incidents of miscommunication were extracted and analyzed using qualitative coding methods. The results are interpreted in relation to prior research on vulnerabilities to error in the PED. Results Thirty incidents of perceived violations or miscommunication are identified and analyzed. Of these, only one medication error was formally reported. Other incidents would not have been detected by a retrospective analysis. Conclusions The analysis of perceived violations expands the data available for error analysis beyond occasional reported adverse events. These data are prospective: responses are captured in real time. This analysis supports a set of recommendations to improve the quality of care in the PED and other critical care contexts. PMID:17329728

  4. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  5. Sweat Sodium Concentration: Inter-Unit Variability of a Low Cost, Portable, and Battery Operated Sodium Analyzer.

    PubMed

    Goulet, Eric D B; Baker, Lindsay B

    2017-12-01

    The B-722 Laqua Twin is a low-cost, portable, and battery-operated sodium analyzer, which can be used for the assessment of sweat sodium concentration. The Laqua Twin is reliable and provides a degree of accuracy similar to more expensive analyzers; however, its interunit measurement error remains unknown. The purpose of this study was to compare the sodium concentration values of 70 sweat samples measured using three different Laqua Twin units. Mean absolute errors, random errors and constant errors among the different Laqua Twins ranged from 1.7 to 3.5 mmol/L, from 2.5 to 3.7 mmol/L and from -0.6 to 3.9 mmol/L, respectively. Proportional errors among Laqua Twins were all < 2%. Based on a within-subject biological variability in sweat sodium concentration of ± 12%, the maximal allowable imprecision among instruments was considered to be ≤ 6%. In that respect, the within (2.9%), between (4.5%), and total (5.4%) measurement error coefficients of variation were all < 6%. For a given sweat sodium concentration value, the largest observed differences in mean and lower and upper bound error of measurements among instruments were, respectively, 4.7 mmol/L, 2.3 mmol/L, and 7.0 mmol/L. In conclusion, our findings show that the interunit measurement error of the B-722 Laqua Twin is low and methodologically acceptable.
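
    A minimal sketch of how within-, between-, and total-unit coefficients of variation of this kind can be computed from replicate readings, assuming a samples × units × replicates layout; the readings and the 6% allowable-imprecision check are illustrative of the abstract's criteria, not the study's data.

```python
import numpy as np

# Hypothetical sweat [Na+] readings (mmol/L): shape = (samples, units, replicates).
readings = np.array([
    [[42.0, 42.6], [43.5, 43.1], [41.0, 41.8]],
    [[65.0, 64.2], [66.5, 67.1], [63.5, 64.1]],
    [[30.5, 31.1], [31.0, 30.2], [29.5, 30.1]],
])

grand_mean = readings.mean()
# Within-unit SD: replicate scatter for each sample on each unit, pooled.
within_sd = np.sqrt(readings.var(axis=2, ddof=1).mean())
# Between-unit SD: scatter of the unit means within each sample, pooled.
between_sd = np.sqrt(readings.mean(axis=2).var(axis=1, ddof=1).mean())
total_sd = np.sqrt(within_sd**2 + between_sd**2)

for label, sd in [("within", within_sd), ("between", between_sd), ("total", total_sd)]:
    cv = 100.0 * sd / grand_mean
    flag = "OK" if cv <= 6.0 else "exceeds"
    print(f"{label:7s} CV = {cv:4.1f}%  ({flag} the 6% allowable imprecision)")
```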

  6. [CIRRNET® - learning from errors, a success story].

    PubMed

    Frank, O; Hochreutener, M; Wiederkehr, P; Staender, S

    2012-06-01

    CIRRNET® is the network of local error-reporting systems of the Swiss Patient Safety Foundation. The network has been running since 2006 together with the Swiss Society for Anaesthesiology and Resuscitation (SGAR), and network participants currently include 39 healthcare institutions from all four language regions of Switzerland. Further institutions can join at any time. Local error reports in CIRRNET® are bundled at a supraregional level, categorised in accordance with the WHO classification, and analysed by medical experts. The CIRRNET® database offers a solid pool of data with error reports from a wide range of medical specialty areas and provides the basis for identifying relevant problem areas in patient safety. These problem areas are then processed in cooperation with specialists with extremely varied areas of expertise, and recommendations for avoiding these errors are developed by changing care processes (Quick-Alerts®). Having been approved by medical associations and professional medical societies, Quick-Alerts® are widely supported and well accepted in professional circles. The CIRRNET® database also enables any affiliated CIRRNET® participant to access all error reports in the 'closed user area' of the CIRRNET® homepage and to use these error reports for in-house training. A healthcare institution does not have to make every mistake itself - it can learn from the errors of others, compare notes with other healthcare institutions, and use existing knowledge to advance its own patient safety.

  7. Definition of an Acceptable Glass composition Region (AGCR) via an Index System and a Partitioning Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D. K.; Taylor, A. S.; Edwards, T.B.

    2005-06-26

    The objective of this investigation was to appeal to the available ComPro™ database of glass compositions and measured PCTs that have been generated in the study of High Level Waste (HLW)/Low Activity Waste (LAW) glasses to define an Acceptable Glass Composition Region (AGCR). The term AGCR refers to a glass composition region in which the durability response (as defined by the Product Consistency Test (PCT)) is less than some pre-defined, acceptable value that satisfies the Waste Acceptance Product Specifications (WAPS); a value of 10 g/L was selected for this study. To assess the effectiveness of a specific classification or index system to differentiate between acceptable and unacceptable glasses, two types of errors (Type I and Type II errors) were monitored. A Type I error reflects that a glass with an acceptable durability response (i.e., a measured NL [B] < 10 g/L) is classified as unacceptable by the system of composition-based constraints. A Type II error occurs when a glass with an unacceptable durability response is classified as acceptable by the system of constraints. Over the course of the efforts to meet this objective, two approaches were assessed. The first (referred to as the "Index System") was based on the use of an evolving system of compositional constraints which were used to explore the possibility of defining an AGCR. This approach was primarily based on "glass science" insight to establish the compositional constraints. Assessments of the Brewer and Taylor Index Systems did not result in the definition of an AGCR. Although the Taylor Index System minimized Type I errors, which allowed access to composition regions of interest to improve melt rate or increase waste loadings for DWPF as compared to the current durability model, Type II errors were also committed. In the context of the application of a particular classification system in the process control system, Type II errors are much more serious than Type I errors. A Type I error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error results in a more serious misclassification that could result in allowing the transfer of a Slurry Mix Evaporator (SME) batch to the melter which is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined "acceptability" criteria; more specifically, a nondurable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data-driven and empirically derived; glass science was not a factor. In this approach, the collection of composition-durability data in ComPro was sequentially partitioned or split based on the best available specific criteria and variables. More specifically, the JMP software chose the oxide (Al2O3 for this dataset) that most effectively partitions the PCT responses (NL [B]'s), though perhaps not 100% effectively based on a single oxide. Based on this initial split, a second request was made to split a particular set of the "Y" values (good or bad PCTs based on the 10 g/L limit) based on the next most critical "X" variable. This "splitting" or "partitioning" process was repeated until an AGCR was defined based on the use of only three oxides (Al2O3, CaO, and MgO) and critical values of > 3.75 wt% Al2O3, ≥ 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned in such a way that no Type II errors were committed. The automated partitioning function screened or removed 978 of the 2406 ComPro glasses, which did cause some initial concerns regarding excessive conservatism regardless of its ability to identify an AGCR. However, a preliminary review of the 1428 "acceptable" glasses defining the AGCR includes glass systems of interest to support the accelerated mission.
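
    A minimal sketch of applying the reported three-oxide partition rule and tallying Type I and Type II errors; the critical values come from the abstract, but the glass compositions and durability responses below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical glasses: oxide contents (wt%) and measured PCT normalized boron release (g/L).
al2o3 = rng.uniform(0, 12, n)
cao = rng.uniform(0, 2, n)
mgo = rng.uniform(0, 6, n)
nl_b = rng.lognormal(mean=1.0, sigma=0.8, size=n)  # measured durability response

durable = nl_b < 10.0                    # acceptable by the 10 g/L WAPS-based limit
predicted_ok = (al2o3 > 3.75) & (cao >= 0.616) & (mgo < 3.521)  # partition rule

type_i = np.sum(durable & ~predicted_ok)   # acceptable glass rejected (conservative)
type_ii = np.sum(~durable & predicted_ok)  # unacceptable glass accepted (serious)
print(f"Type I errors: {type_i}, Type II errors: {type_ii} out of {n} glasses")
```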

  8. Development of a Rapid Derivative Spectrophotometric Method for Simultaneous Determination of Acetaminophen, Diphenhydramine and Pseudoephedrine in Tablets

    PubMed Central

    Souri, Effat; Rahimi, Aghil; Shabani Ravari, Nazanin; Barazandeh Tehrani, Maliheh

    2015-01-01

    A mixture of acetaminophen, diphenhydramine hydrochloride and pseudoephedrine hydrochloride is used for the symptomatic treatment of common cold. In this study, a derivative spectrophotometric method based on the zero-crossing technique was proposed for simultaneous determination of acetaminophen, diphenhydramine hydrochloride and pseudoephedrine hydrochloride. Determination of these drugs was performed using the first-derivative (¹D) value of acetaminophen at 281.5 nm, the second-derivative (²D) value of diphenhydramine hydrochloride at 226.0 nm and the fourth-derivative (⁴D) value of pseudoephedrine hydrochloride at 218.0 nm. The analysis method was linear over the range of 5-50, 0.25-4, and 0.5-5 µg/mL for acetaminophen, diphenhydramine hydrochloride and pseudoephedrine hydrochloride, respectively. The within-day and between-day CV and error values for all three compounds were within an acceptable range (CV < 2.2% and error < 3%). The developed method was used for simultaneous determination of these drugs in pharmaceutical dosage forms and no interference from excipients was observed. PMID:25901150

  9. Heterogenic Solid Biofuel Sampling Methodology and Uncertainty Associated with Prompt Analysis

    PubMed Central

    Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Patiño, David; Collazo, Joaquín

    2010-01-01

    Accurate determination of the properties of biomass is of particular interest in studies on biomass combustion or cofiring. The aim of this paper is to develop a methodology for prompt analysis of heterogeneous solid fuels with an acceptable degree of accuracy. Special care must be taken with the sampling procedure to achieve an acceptable degree of error and low statistical uncertainty. A sampling and error determination methodology for prompt analysis is presented and validated. Two approaches for the propagation of errors are also given and some comparisons are made in order to determine which may be better in this context. Results show in general low, acceptable levels of uncertainty, demonstrating that the samples obtained in the process are representative of the overall fuel composition. PMID:20559506
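
    The abstract does not spell out its two error-propagation approaches, but two common ones are first-order (Taylor) propagation and Monte Carlo sampling. The sketch below compares them on a made-up derived quantity; all values are illustrative.

```python
import numpy as np

# Made-up example: heating value per unit mass, q = H / m, with measured
# means and standard errors for H (MJ) and m (kg).
H_mean, H_se = 18.4, 0.35
m_mean, m_se = 1.02, 0.02

# Approach 1: first-order (Taylor) propagation for q = H / m.
q = H_mean / m_mean
q_se_analytic = q * np.sqrt((H_se / H_mean) ** 2 + (m_se / m_mean) ** 2)

# Approach 2: Monte Carlo propagation with the same input uncertainties.
rng = np.random.default_rng(42)
samples = rng.normal(H_mean, H_se, 100_000) / rng.normal(m_mean, m_se, 100_000)
q_se_mc = samples.std(ddof=1)

print(f"q = {q:.3f}; SE analytic = {q_se_analytic:.3f}, SE Monte Carlo = {q_se_mc:.3f}")
```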

  10. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. Cargo Movement Operations System (CMOS). Software Test Description

    DTIC Science & Technology

    1990-10-28

    ...resulting in errors in paragraph numbers and titles. ...location to test the update of the truck manifest. (The remainder of this excerpt consists of repeated comment-disposition form fields: "CMOS PMO ACCEPTS COMMENT: YES/NO", "ERCI ACCEPTS COMMENT: YES/NO", "COMMENT DISPOSITION", "COMMENT STATUS: OPEN/CLOSED".)

  12. [Whistleblowing: a difficult concept for nurses].

    PubMed

    Habermann, Monika; Cramer, Henning; Pielage, Friedhelm; Stagge, Maya

    2010-10-01

    Preventing errors and implementing risk management systems in health and nursing care requires knowledge about nurses' perceptions of errors, including how they handle and report them. Whistleblowing is a way of reporting serious deficits by leaving predetermined pathways and addressing persons, institutions or media outside the organisation. In eighteen semi-structured interviews, nurses were asked whether they could imagine acting as a whistleblower, or whether they had ever blown the whistle before. Their appraisals ranged from strictly disapproving of such behaviour (as most interviewees did) to approving of it only hesitantly because of the personal risks. Central themes were allegiance to the organisation, to the team and to colleagues; responsibility for the patients; and the consideration of personal risks. This corresponds to the results of other studies on whistleblowing, as described in the discussion. Nurses have to be encouraged to accept professional responsibility, and organisational channels of error reporting have to be found and discussed, e.g. in terms of best-practice examples. Whistleblowing should be regarded as an act that expresses patient advocacy.

  13. Computer-socket manufacturing error: How much before it is clinically apparent?

    PubMed Central

    Sanders, Joan E.; Severance, Michael R.; Allyn, Kathryn J.

    2015-01-01

    The purpose of this research was to pursue quality standards for computer-manufacturing of prosthetic sockets for people with transtibial limb loss. Thirty-three duplicates of study participants’ normally used sockets were fabricated using central fabrication facilities. Socket-manufacturing errors were compared with clinical assessments of socket fit. Of the 33 sockets tested, 23 were deemed clinically to need modification. All 13 sockets with mean radial error (MRE) greater than 0.25 mm were clinically unacceptable, and 11 of those were deemed in need of sizing reduction. Of the remaining 20 sockets, 5 sockets with interquartile range (IQR) greater than 0.40 mm were deemed globally or regionally oversized and in need of modification. Of the remaining 15 sockets, 5 sockets with closed contours of elevated surface normal angle error (SNAE) were deemed clinically to need shape modification at those closed contour locations. The remaining 10 sockets were deemed clinically acceptable and not in need of modification. MRE, IQR, and SNAE may serve as effective metrics to characterize quality of computer-manufactured prosthetic sockets, helping facilitate the development of quality standards for the socket manufacturing industry. PMID:22773260
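
    A minimal sketch of the three-stage screen implied by the abstract, using its reported cut-offs (MRE > 0.25 mm, IQR > 0.40 mm, closed SNAE contours); the SocketMetrics structure and example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SocketMetrics:
    mre: float            # mean radial error (mm)
    iqr: float            # interquartile range of radial error (mm)
    snae_contours: bool   # closed contours of elevated surface normal angle error

def assess(socket: SocketMetrics) -> str:
    """Classify a computer-manufactured socket using the abstract's thresholds."""
    if socket.mre > 0.25:
        return "unacceptable: most such sockets needed sizing reduction"
    if socket.iqr > 0.40:
        return "unacceptable: globally or regionally oversized"
    if socket.snae_contours:
        return "unacceptable: needs local shape modification"
    return "clinically acceptable"

print(assess(SocketMetrics(mre=0.31, iqr=0.22, snae_contours=False)))
print(assess(SocketMetrics(mre=0.10, iqr=0.18, snae_contours=False)))
```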

  14. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

    Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
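
    As a flavor of what an empirical test of a sampling plan involves, the sketch below Monte Carlo-estimates the Type I (producer's) and Type II (consumer's) risks of an attributes single-sampling plan; the plan parameters and quality levels are invented, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n, c = 80, 2                    # sample size and acceptance number (hypothetical plan)
aql, ltpd = 0.01, 0.08          # acceptable and rejectable quality levels (illustrative)
trials = 200_000

def acceptance_rate(p: float) -> float:
    """Estimate P(accept lot) when the true fraction defective is p."""
    defects = rng.binomial(n, p, size=trials)
    return float(np.mean(defects <= c))

alpha = 1.0 - acceptance_rate(aql)   # Type I risk: rejecting a good lot
beta = acceptance_rate(ltpd)         # Type II risk: accepting a bad lot
print(f"producer's risk alpha ≈ {alpha:.3f}, consumer's risk beta ≈ {beta:.3f}")
```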

  15. Survey and Method for Determination of Trajectory Predictor Requirements

    NASA Technical Reports Server (NTRS)

    Rentas, Tamika L.; Green, Steven M.; Cate, Karen Tung

    2009-01-01

    A survey of air-traffic-management researchers, representing a broad range of automation applications, was conducted to document trajectory-predictor requirements for future decision-support systems. Results indicated that the researchers were unable to articulate a basic set of trajectory-prediction requirements for their automation concepts. Survey responses showed the need to establish a process to help developers determine the trajectory-predictor-performance requirements for their concepts. Two methods for determining trajectory-predictor requirements are introduced. A fast-time simulation method is discussed that captures the sensitivity of a concept to the performance of its trajectory-prediction capability. A characterization method is proposed to provide quicker, yet less precise, results, based on analysis and simulation to characterize the trajectory-prediction errors associated with key modeling options for a specific concept. Concept developers can then identify the relative sizes of errors associated with key modeling options, and qualitatively determine which options lead to significant errors. The characterization method is demonstrated for a case study involving future airport surface traffic management automation. Of the top four sources of error, results indicated that the error associated with accelerations to and from turn speeds was unacceptable, the error associated with the turn path model was acceptable, and the error associated with taxi-speed estimation was of concern and needed a higher-fidelity concept simulation to obtain a more precise result.

  16. Improving the sensitivity and specificity of a bioanalytical assay for the measurement of certolizumab pegol.

    PubMed

    Smeraglia, John; Silva, John-Paul; Jones, Kieran

    2017-08-01

    In order to evaluate placental transfer of certolizumab pegol (CZP), a more sensitive and selective bioanalytical assay was required to accurately measure low CZP concentrations in infant and umbilical cord blood. Results & methodology: A new electrochemiluminescence immunoassay was developed to measure CZP levels in human plasma. Validation experiments demonstrated improved selectivity (no matrix interference observed) and a detection range of 0.032-5.0 μg/ml. Accuracy and precision met acceptance criteria (mean total error ≤20.8%). Dilution linearity and sample stability were acceptable and sufficient to support the method. The electrochemiluminescence immunoassay was validated for measuring low CZP concentrations in human plasma. The method demonstrated a more than tenfold increase in sensitivity compared with previous assays, and improved selectivity for intact CZP.

  17. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    PubMed

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.

  18. Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure Validation Simulation Study

    NASA Technical Reports Server (NTRS)

    Murdoch, Jennifer L.; Bussink, Frank J. L.; Chamberlain, James P.; Chartrand, Ryan C.; Palmer, Michael T.; Palmer, Susan O.

    2008-01-01

    The Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure (ITP) Validation Simulation Study investigated the viability of an ITP designed to enable oceanic flight level changes that would not otherwise be possible. Twelve commercial airline pilots with current oceanic experience flew a series of simulated scenarios involving either standard or ITP flight level change maneuvers and provided subjective workload ratings, assessments of ITP validity and acceptability, and objective performance measures associated with the appropriate selection, request, and execution of ITP flight level change maneuvers. In the majority of scenarios, subject pilots correctly assessed the traffic situation, selected an appropriate response (i.e., either a standard flight level change request, an ITP request, or no request), and executed their selected flight level change procedure, if any, without error. Workload ratings for ITP maneuvers were acceptable and not substantially higher than for standard flight level change maneuvers, and, for the majority of scenarios and subject pilots, subjective acceptability ratings and comments for ITP were generally high and positive. Qualitatively, the ITP was found to be valid and acceptable. However, the error rates for ITP maneuvers were higher than for standard flight level changes, and these errors may have design implications for both the ITP and the study's prototype traffic display. These errors and their implications are discussed.

  19. Overcoming status quo bias in the human brain.

    PubMed

    Fleming, Stephen M; Thomas, Charlotte L; Dolan, Raymond J

    2010-03-30

    Humans often accept the status quo when faced with conflicting choice alternatives. However, it is unknown how neural pathways connecting cognition with action modulate this status quo acceptance. Here we developed a visual detection task in which subjects tended to favor the default when making difficult, but not easy, decisions. This bias was suboptimal in that more errors were made when the default was accepted. A selective increase in subthalamic nucleus (STN) activity was found when the status quo was rejected in the face of heightened decision difficulty. Analysis of effective connectivity showed that inferior frontal cortex, a region more active for difficult decisions, exerted an enhanced modulatory influence on the STN during switches away from the status quo. These data suggest that the neural circuits required to initiate controlled, nondefault actions are similar to those previously shown to mediate outright response suppression. We conclude that specific prefrontal-basal ganglia dynamics are involved in rejecting the default, a mechanism that may be important in a range of difficult choice scenarios.

  20. Rectifying calibration error of Goldmann applanation tonometer is easy!

    PubMed

    Choudhari, Nikhil S; Moorthy, Krishna P; Tungikar, Vinod B; Kumar, Mohan; George, Ronnie; Rao, Harsha L; Senthil, Sirisha; Vijaya, Lingam; Garudadri, Chandra Sekhar

    2014-11-01

    Purpose: The Goldmann applanation tonometer (GAT) is the current gold standard tonometer. However, its calibration error is common and can go unnoticed in clinics. Repair by the manufacturer has limitations. The purpose of this report is to describe a self-taught technique of rectifying the calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique of rectification of calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of weights when lubrication alone did not suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60-mm Hg testing levels, respectively). Results: Twelve out of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20-mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counter-weight. Conclusions: Rectification of calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the gold standard tonometer.

  1. August median streamflow on ungaged streams in Eastern Coastal Maine

    USGS Publications Warehouse

    Lombard, Pamela J.

    2004-01-01

    Methods for estimating August median streamflow were developed for ungaged, unregulated streams in eastern coastal Maine. The methods apply to streams with drainage areas ranging in size from 0.04 to 73.2 square miles and fraction of basin underlain by a sand and gravel aquifer ranging from 0 to 71 percent. The equations were developed with data from three long-term (greater than or equal to 10 years of record) continuous-record streamflow-gaging stations, 23 partial-record streamflow-gaging stations, and 5 short-term (less than 10 years of record) continuous-record streamflow-gaging stations. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record streamflow-gaging stations and short-term continuous-record streamflow-gaging stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term continuous-record streamflow-gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at streamflow-gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for different periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Thirty-one stations were used for the final regression equations. Two basin characteristics, drainage area and fraction of basin underlain by a sand and gravel aquifer, are used in the calculated regression equation to estimate August median streamflow for ungaged streams. The equation has an average standard error of prediction from -27 to 38 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -30 to 43 percent. Model error is larger than sampling error for both equations, indicating that additional or improved estimates of basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow at partial-record or continuous-record gaging stations range from 0.003 to 31.0 cubic feet per second or from 0.1 to 0.6 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in eastern coastal Maine, within the range of acceptable explanatory variables, range from 0.003 to 45 cubic feet per second or 0.1 to 0.6 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as drainage area and fraction of basin underlain by a sand and gravel aquifer increase.

  2. Utilizing knowledge from prior plans in the evaluation of quality assurance

    NASA Astrophysics Data System (ADS)

    Stanhope, Carl; Wu, Q. Jackie; Yuan, Lulin; Liu, Jianfei; Hood, Rodney; Yin, Fang-Fang; Adamson, Justus

    2015-06-01

    Increased interest regarding the sensitivity of pre-treatment intensity modulated radiotherapy and volumetric modulated arc radiotherapy (VMAT) quality assurance (QA) to delivery errors has led to the development of dose-volume histogram (DVH) based analysis. This paradigm shift necessitates a change in the acceptance criteria and action tolerance for QA. Here we present a knowledge-based technique to objectively quantify degradations in DVH for prostate radiotherapy. Using machine learning, organ-at-risk (OAR) DVHs from a population of 198 prior patients’ plans were adapted to a test patient’s anatomy to establish patient-specific DVH ranges. This technique was applied to single arc prostate VMAT plans to evaluate various simulated delivery errors: systematic single leaf offsets, systematic leaf bank offsets, random normally distributed leaf fluctuations, systematic lag in gantry angle of the multi-leaf collimators (MLCs), fluctuations in dose rate, and delivery of each VMAT arc with a constant rather than variable dose rate. Quantitative Analyses of Normal Tissue Effects in the Clinic suggests V75Gy dose limits of 15% for the rectum and 25% for the bladder; however, the knowledge-based constraints were more stringent: 8.48 ± 2.65% for the rectum and 4.90 ± 1.98% for the bladder. Single leaf offsets of 19 ± 10 mm and single bank offsets of 1.9 ± 0.7 mm resulted in rectum DVHs worse than 97.7% (2σ) of clinically accepted plans. PTV degradations fell outside of the acceptable range for 0.6 ± 0.3 mm leaf offsets, 0.11 ± 0.06 mm bank offsets, 0.6 ± 1.3 mm of random noise, and 1.0 ± 0.7° of gantry-MLC lag. Utilizing a training set composed of prior treatment plans, machine learning is used to predict a range of achievable DVHs for the test patient’s anatomy. Consequently, degradations leading to statistical outliers may be identified. A knowledge-based QA evaluation enables customized QA criteria per treatment site, institution and/or physician and can often be more sensitive to errors than criteria based on organ complication rates.
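
    A minimal sketch of the knowledge-based acceptance idea: a test plan's DVH metric is flagged when it falls outside the mean ± 2σ range of the patient-specific population prediction. The population and test values are invented; only the 2σ criterion mirrors the abstract.

```python
import numpy as np

# Hypothetical rectum V75Gy (%) predicted for this anatomy from 198 prior plans.
prior_v75 = np.random.default_rng(3).normal(8.48, 2.65, size=198)

def within_knowledge_range(value: float, population: np.ndarray, n_sigma: float = 2.0) -> bool:
    """Flag a plan metric that falls outside the population mean +/- n_sigma."""
    mu, sigma = population.mean(), population.std(ddof=1)
    return abs(value - mu) <= n_sigma * sigma

for v75 in (9.1, 15.0):
    ok = within_knowledge_range(v75, prior_v75)
    status = "acceptable" if ok else "outlier: investigate possible delivery error"
    print(f"rectum V75Gy = {v75:.1f}% -> {status}")
```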

  3. Comparison of Accuracy in Intraocular Lens Power Calculation by Measuring Axial Length with Immersion Ultrasound Biometry and Partial Coherence Interferometry.

    PubMed

    Ruangsetakit, Varee

    2015-11-01

    To re-examine the relative accuracy of intraocular lens (IOL) power calculation by immersion ultrasound biometry (IUB) and partial coherence interferometry (PCI), based on a new approach that restricts attention to the cases in which the IUB and PCI IOL assignments disagree. Prospective observational study of 108 eyes that underwent cataract surgery at Taksin Hospital. Two halves of the randomly chosen sample eyes were implanted with the IUB- and PCI-assigned lenses. Postoperative refractive errors were measured in the fifth week. The more accurate calculation was identified by significantly smaller mean absolute errors (MAEs) and root mean squared errors (RMSEs) away from emmetropia. The distributions of the errors were examined to ensure that the higher accuracy was clinically as well as statistically significant. The MAE and RMSE were smaller for PCI (0.5106 diopter (D) and 0.6037 D) than for IUB (0.7000 D and 0.8062 D). The higher accuracy arose principally from negative errors, i.e., myopia. The MAEs for the negative errors of IUB and PCI were 0.7955 D and 0.5185 D, and the corresponding RMSEs were 0.8562 D and 0.5853 D. Their differences were significant. 72.34% of PCI errors fell within the clinically accepted range of ± 0.50 D, whereas 50% of IUB errors did. PCI's higher accuracy was statistically and clinically significant, meaning that lens implantation based on PCI's assignments could improve postoperative outcomes over those based on IUB's assignments.
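
    For reference, the error metrics used here (MAE, RMSE, and the fraction within ± 0.50 D) are straightforward to compute from signed postoperative refractive errors; the sketch below uses invented values.

```python
import numpy as np

# Hypothetical signed postoperative refractive errors (diopters) for one method.
errors = np.array([-0.75, -0.25, 0.10, -0.50, 0.35, -1.10, 0.20, -0.40])

mae = np.abs(errors).mean()
rmse = np.sqrt(np.mean(errors ** 2))
within_half_d = np.mean(np.abs(errors) <= 0.50)

print(f"MAE = {mae:.4f} D, RMSE = {rmse:.4f} D, within ±0.50 D: {100 * within_half_d:.1f}%")
```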

  4. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

    Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator’s error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator’s accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator’s targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot’s repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot’s accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
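
    The due-to-insertion estimate follows directly from the stated orthogonality assumption, as a quick check shows:

```latex
e_{\mathrm{due}} = \sqrt{e_{\mathrm{overall}}^{2} - e_{\mathrm{before}}^{2}}
                 = \sqrt{(2.5\,\mathrm{mm})^{2} - (1.3\,\mathrm{mm})^{2}}
                 \approx 2.13\,\mathrm{mm}
```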

  5. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Slotted rotatable target assembly and systematic error analysis for a search for long range spin dependent interactions from exotic vector boson exchange using neutron spin rotation

    NASA Astrophysics Data System (ADS)

    Haddock, C.; Crawford, B.; Fox, W.; Francis, I.; Holley, A.; Magers, S.; Sarsour, M.; Snow, W. M.; Vanderwerp, J.

    2018-03-01

    We discuss the design and construction of a novel target array of nonmagnetic test masses used in a neutron polarimetry measurement made in a search for possible new exotic spin-dependent neutron-atom interactions of Nature at sub-mm length scales. This target was designed to accept and efficiently transmit a transversely polarized slow neutron beam through a series of long open parallel slots bounded by flat rectangular plates. These openings possessed equal atom density gradients normal to the slots from the flat test masses, with dimensions optimized to achieve maximum sensitivity to an exotic spin-dependent interaction from vector boson exchanges with ranges in the mm to μm regime. The parallel slots were oriented differently in four quadrants that can be rotated about the neutron beam axis in discrete 90° increments using a Geneva drive. The spin rotation signals from the 4 quadrants were measured using a segmented neutron ion chamber to suppress possible systematic errors from stray magnetic fields in the target region. We discuss the per-neutron sensitivity of the target to the exotic interaction, the design constraints, the potential sources of systematic errors which could be present in this design, and our estimate of the achievable sensitivity using this method.

  7. Fundamental frequency estimation of singing voice

    NASA Astrophysics Data System (ADS)

    de Cheveigné, Alain; Henrich, Nathalie

    2002-05-01

    A method of fundamental frequency (F0) estimation recently developed for speech [de Cheveigné and Kawahara, J. Acoust. Soc. Am. (to be published)] was applied to singing voice. An electroglottograph signal recorded together with the microphone provided a reference by which estimates could be validated. Using standard parameter settings as for speech, error rates were low despite the wide range of F0s (about 100 to 1600 Hz). Most "errors" were due to irregular vibration of the vocal folds, a sharp formant resonance that reduced the waveform to a single harmonic, or fast F0 changes such as in high-amplitude vibrato. Our database (18 singers from baritone to soprano) included examples of diphonic singing, for which melody is carried by variations of the frequency of a narrow formant rather than F0. By varying a parameter (the ratio of inharmonic to total power), the algorithm could be tuned to follow either frequency. Although the method has not been formally tested on a wide range of instruments, it seems appropriate for musical applications because it is accurate, accepts a wide range of F0s, and can be implemented with low latency for interactive applications. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
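
    The referenced method is the difference-function estimator later published as YIN. A stripped-down sketch of its core steps (difference function, cumulative-mean normalization, absolute threshold) is given below; the threshold value and test signal are illustrative, and the full method adds refinements such as parabolic interpolation.

```python
import numpy as np

def estimate_f0(x: np.ndarray, fs: float, fmin: float = 100.0, fmax: float = 1600.0,
                threshold: float = 0.1) -> float:
    """Minimal YIN-style F0 estimate: difference function, cumulative-mean
    normalization, first dip below the threshold refined to its local minimum."""
    tau_min, tau_max = int(fs / fmax), int(fs / fmin)
    w = len(x) - tau_max                       # fixed integration window
    d = np.array([np.sum((x[:w] - x[tau:tau + w]) ** 2) for tau in range(tau_max + 1)])
    dprime = np.ones_like(d)                   # cumulative-mean-normalized difference
    dprime[1:] = d[1:] * np.arange(1, len(d)) / np.maximum(np.cumsum(d[1:]), 1e-12)
    below = np.where(dprime[tau_min:tau_max + 1] < threshold)[0]
    if below.size:                             # walk down to the local minimum
        tau = below[0] + tau_min
        while tau + 1 <= tau_max and dprime[tau + 1] < dprime[tau]:
            tau += 1
    else:
        tau = int(np.argmin(dprime[tau_min:tau_max + 1])) + tau_min
    return fs / tau

fs = 44100.0
t = np.arange(2048) / fs
print(f"{estimate_f0(np.sin(2 * np.pi * 440.0 * t), fs):.1f} Hz")  # ≈ 441 Hz
```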

  8. Use and limitations of ASHRAE solar algorithms in solar energy utilization studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, E.F.

    1978-01-01

    Algorithms for computer calculation of solar radiation based on cloud cover data, recommended by the ASHRAE Task Group on Energy Requirements for Buildings, are examined for applicability in solar utilization studies. The implementation is patterned after a well-known computer program, NBSLD. The results of these algorithms, including horizontal and tilted surface insolation and useful energy collectable, are compared to observations and results obtainable by the Liu and Jordan method. For purposes of comparison, data for Riverside, CA from 1960 through 1963 are examined. It is shown that horizontal values so predicted are frequently less than 10% and always less than 23% in error when compared to averages of hourly measurements during important collection hours in 1962. Average daily errors range from -14 to 9% over the year. When averaged on an hourly basis over four years, there is a 21% maximum discrepancy compared to the Liu and Jordan method. Corresponding tilted-surface discrepancies are slightly higher, as are those for useful energy collected. Possible sources of these discrepancies and errors are discussed. Limitations of the algorithms and various implementations are examined, and it is suggested that certain assumptions acceptable for building loads analysis may not be acceptable for solar utilization studies. In particular, it is shown that the method of separating diffuse and direct components in the presence of clouds requires careful consideration in order to achieve accuracy and efficiency in any implementation.

  9. Bench-to-bedside review: the importance of the precision of the reference technique in method comparison studies--with specific reference to the measurement of cardiac output.

    PubMed

    Cecconi, Maurizio; Rhodes, Andrew; Poloniecki, Jan; Della Rocca, Giorgio; Grounds, R Michael

    2009-01-01

    Bland-Altman analysis is used for assessing agreement between two measurements of the same clinical variable. In the field of cardiac output monitoring, its results, in terms of bias and limits of agreement, are often difficult to interpret, leading clinicians to use a cutoff of 30% in the percentage error in order to decide whether a new technique may be considered a good alternative. This percentage error of +/- 30% arises from the assumption that the commonly used reference technique, intermittent thermodilution, has a precision of +/- 20% or less. The combination of two precisions of +/- 20% equates to a total error of +/- 28.3%, which is commonly rounded up to +/- 30%. Thus, finding a percentage error of less than +/- 30% should equate to the new tested technique having an error similar to the reference, which therefore should be acceptable. In a worked example in this paper, we discuss the limitations of this approach, in particular in regard to the situation in which the reference technique may be either more or less precise than would normally be expected. This can lead to inappropriate conclusions being drawn from data acquired in validation studies of new monitoring technologies. We conclude that it is not acceptable to present comparison studies quoting percentage error as an acceptability criteria without reporting the precision of the reference technique.
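
    The quoted figure comes from combining the two precisions in quadrature:

```latex
e_{\mathrm{combined}} = \sqrt{e_{\mathrm{ref}}^{2} + e_{\mathrm{test}}^{2}}
                      = \sqrt{(20\%)^{2} + (20\%)^{2}}
                      \approx 28.3\%, \quad \text{commonly rounded up to } \pm 30\%.
```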

  10. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  11. Evaluation of Robustness to Setup and Range Uncertainties for Head and Neck Patients Treated With Pencil Beam Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyapa, Robert; Lowe, Matthew; Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester

    Purpose: To evaluate the robustness of head and neck plans for treatment with intensity modulated proton therapy to range and setup errors, and to establish robustness parameters for the planning of future head and neck treatments. Methods and Materials: Ten patients previously treated were evaluated in terms of robustness to range and setup errors. Error bar dose distributions were generated for each plan, from which several metrics were extracted and used to define a robustness database of acceptable parameters over all analyzed plans. The patients were treated in sequentially delivered series, and plans were evaluated both for the first series and for the combined error over the whole treatment. To demonstrate the application of such a database in the head and neck, for 1 patient, an alternative treatment plan was generated using a simultaneous integrated boost (SIB) approach and plans of differing numbers of fields. Results: The robustness database for the treatment of head and neck patients is presented. In an example case, comparison of single and multiple field plans against the database shows clear improvements in robustness by using multiple fields. A comparison of sequentially delivered series and an SIB approach for this patient shows both to be of comparable robustness, although the SIB approach shows a slightly greater sensitivity to uncertainties. Conclusions: A robustness database was created for the treatment of head and neck patients with intensity modulated proton therapy based on previous clinical experience. This will allow the identification of future plans that may benefit from alternative planning approaches to improve robustness.

  12. Characterisation of false-positive observations in botanical surveys

    PubMed Central

    2017-01-01

    Errors in botanical surveying are a common problem. The presence of a species is easily overlooked, leading to false absences, while misidentifications and other mistakes lead to false-positive observations. While it is common knowledge that these errors occur, there are few data that can be used to quantify and describe them. Here we characterise false-positive errors for a controlled set of surveys conducted as part of a field identification test of botanical skill. Surveys were conducted at sites with a verified list of vascular plant species. The candidates were asked to list all the species they could identify in a defined botanically rich area. They were told beforehand that their final score would be the sum of the correct species they listed, but false-positive errors counted against their overall grade. The number of errors varied considerably between people: some created a high proportion of false-positive errors, and these were scattered across all skill levels. Therefore, a person’s ability to correctly identify a large number of species is not a safeguard against the generation of false-positive errors. There was no phylogenetic pattern to falsely observed species; however, rare species are more likely to be false positives, as are species from species-rich genera. Raising the threshold for the acceptance of an observation reduced false-positive observations dramatically, but at the expense of more false-negative errors. False-positive errors are higher in field surveying of plants than many people may appreciate. Greater stringency is required before accepting species as present at a site, particularly for rare species. Combining multiple surveys resolves the problem, but requires a considerable increase in effort to achieve the same sensitivity as a single survey. Therefore, other methods should be used to raise the threshold for the acceptance of a species. For example, digital data input systems that can verify, feedback and inform the user are likely to reduce false-positive errors significantly. PMID:28533972
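
    A toy simulation of the threshold trade-off described above: accepting a species only when it is reported in at least k of m surveys suppresses false positives at the cost of false negatives. All rates and counts below are invented.

```python
import numpy as np

rng = np.random.default_rng(11)
n_species, n_surveys = 1000, 3
p_present, p_detect, p_false = 0.3, 0.8, 0.05

present = rng.random(n_species) < p_present
# Each survey reports a species if truly detected, or through a false-positive slip.
reports = np.where(present[:, None],
                   rng.random((n_species, n_surveys)) < p_detect,
                   rng.random((n_species, n_surveys)) < p_false)

for k in range(1, n_surveys + 1):   # accept a species reported in >= k surveys
    accepted = reports.sum(axis=1) >= k
    fp = np.mean(accepted[~present])   # falsely accepted absent species
    fn = np.mean(~accepted[present])   # missed present species
    print(f"threshold k={k}: false-positive rate {fp:.3f}, false-negative rate {fn:.3f}")
```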

  13. Results from the HARP Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catanesi, M. G.

    2008-02-21

    Hadron production is a key ingredient in many aspects of ν physics. Precise prediction of atmospheric ν fluxes, characterization of accelerator ν beams, and quantification of π production and capture for ν-factory designs would all profit from hadron production measurements. HARP at the CERN PS was the first hadron production experiment designed specifically to match all these requirements. It combines a large, full phase space acceptance with low systematic errors and high statistics. HARP was operated in the range from 3 GeV to 15 GeV. We briefly describe here the most recent results.

  14. Measurement of latent cognitive abilities involved in concept identification learning.

    PubMed

    Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B

    2015-01-01

    We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities: abstraction and set shifting. Latent abstraction ability was positively correlated with number of concepts learned, and latent set-shifting ability was negatively correlated with number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.

  15. Y-balance test: a reliability study involving multiple raters.

    PubMed

    Shaffer, Scott W; Teyhen, Deydre S; Lorenson, Chelsea L; Warren, Rick L; Koreerat, Christina M; Straseske, Crystal A; Childs, John D

    2013-11-01

    The Y-balance test (YBT) is one of the few field expedient tests that have shown predictive validity for injury risk in an athletic population. However, analysis of the YBT in a heterogeneous population of active adults (e.g., military, specific occupations) involving multiple raters with limited experience in a mass screening setting is lacking. The primary purpose of this study was to determine interrater test-retest reliability of the YBT in a military setting using multiple raters. Sixty-four service members (53 males, 11 females) actively conducting military training volunteered to participate. Interrater test-retest reliability of the maximal reach had intraclass correlation coefficients (2,1) of 0.80 to 0.85, with a standard error of measurement ranging from 3.1 to 4.2 cm for the 3 reach directions (anterior, posteromedial, and posterolateral). Interrater test-retest reliability of the average reach of 3 trials had intraclass correlation coefficients (2,3) of 0.85 to 0.93, with an associated standard error of measurement ranging from 2.0 to 3.5 cm. The YBT showed good interrater test-retest reliability with an acceptable level of measurement error among multiple raters screening active duty service members. In addition, 31.3% (n = 20 of 64) of participants exhibited an anterior reach asymmetry of >4 cm, suggesting impaired balance symmetry and potentially increased risk for injury. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
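
    For readers reproducing such reliability figures, the standard error of measurement follows directly from the reliability coefficient and the between-subject spread, SEM = SD·sqrt(1 − ICC). A minimal sketch, with hypothetical SD and ICC values rather than the study's raw data:

    ```python
    # SEM from reliability and between-subject SD. The SD below is a
    # hypothetical assumption; the study reports ICC(2,1) of 0.80-0.85
    # and SEM of 3.1-4.2 cm, not these inputs.
    import math

    def sem(sd_cm: float, icc: float) -> float:
        """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
        return sd_cm * math.sqrt(1.0 - icc)

    print(f"SEM = {sem(10.0, 0.85):.1f} cm")  # hypothetical SD of 10 cm -> ~3.9 cm
    ```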

  16. Simulation of aspheric tolerance with polynomial fitting

    NASA Astrophysics Data System (ADS)

    Li, Jing; Cen, Zhaofeng; Li, Xiaotong

    2018-01-01

    The shape of an aspheric lens changes as a result of machining errors, altering the optical transfer function and degrading image quality. At present, there is no universally recognized tolerance criterion for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, tolerances obtained from polynomial fitting are allocated to the aspheric surface, and imaging simulation is carried out in optical design software. The analysis is based on a set of aspheric imaging systems. An error is generated within the range of a given PV value and expressed in the form of a Zernike polynomial, which is added to the aspheric surface as a tolerance term. Through optical software analysis, the MTF of the optical system is obtained and used as the main evaluation index to judge whether the added error meets the requirements at the current PV value. The PV value is then changed and the operation repeated until the maximum acceptable PV value is obtained. In accordance with actual machining practice, errors of various shapes are considered, such as M-type, W-type, and random errors. The method provides a useful reference for practical freeform surface machining.

  17. Determining the optimal window length for pattern recognition-based myoelectric control: balancing the competing effects of classification error and controller delay.

    PubMed

    Smith, Lauren H; Hargrove, Levi J; Lock, Blair A; Kuiken, Todd A

    2011-04-01

    Pattern recognition-based control of myoelectric prostheses has shown great promise in research environments, but has not been optimized for use in a clinical setting. To explore the relationship between classification error, controller delay, and real-time controllability, 13 able-bodied subjects were trained to operate a virtual upper-limb prosthesis using pattern recognition of electromyogram (EMG) signals. Classification error and controller delay were varied by training different classifiers with a variety of analysis window lengths ranging from 50 to 550 ms and either two or four EMG input channels. Offline analysis showed that classification error decreased with longer window lengths (p < 0.01). Real-time controllability was evaluated with the target achievement control (TAC) test, which prompted users to maneuver the virtual prosthesis into various target postures. The results indicated that user performance improved with lower classification error (p < 0.01) and was reduced with longer controller delay (p < 0.01), as determined by the window length. Therefore, both of these effects should be considered when choosing a window length; it may be beneficial to increase the window length if this results in a reduced classification error, despite the corresponding increase in controller delay. For the system employed in this study, the optimal window length was found to be between 150 and 250 ms, which is within acceptable controller delays for conventional multistate amplitude controllers.
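
    One way to operationalize the stated trade-off is to accept a small error penalty in exchange for a shorter window, subject to a delay ceiling. A sketch under assumptions (hypothetical error rates, a simple window/2 delay approximation), not the study's actual selection procedure:

    ```python
    # Pick the shortest analysis window whose offline classification error is
    # within a tolerance of the best achievable error, subject to a maximum
    # acceptable controller delay. All numbers below are illustrative.

    def pick_window(windows_ms, errors, max_delay_ms=250.0, tol=0.01):
        # Controller delay is approximated here as half the window length;
        # processing time is ignored for simplicity.
        feasible = [(w, e) for w, e in zip(windows_ms, errors) if w / 2.0 <= max_delay_ms]
        best_err = min(e for _, e in feasible)
        # Shortest window that is "close enough" to the best error.
        return min(w for w, e in feasible if e <= best_err + tol)

    windows = [50, 150, 250, 350, 450, 550]           # ms
    errors  = [0.18, 0.09, 0.07, 0.065, 0.06, 0.058]  # hypothetical error rates
    print(pick_window(windows, errors))               # -> 250
    ```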

  18. Flight assessment of an atmospheric turbulence measurement system with emphasis on long wavelengths

    NASA Technical Reports Server (NTRS)

    Rhyne, R. H.

    1976-01-01

    A flight assessment has been made of a system for measuring the three components of atmospheric turbulence in the frequency range associated with airplane motions (0 to approximately 0.5 Hz). Results of the assessment indicate acceptable accuracy of the resulting time histories and power spectra. Small residual errors at the airplane short period and Dutch roll frequencies (0.5 and 0.25 Hz, respectively), as determined from in-flight maneuvers in smooth air, would not be detectable on the power spectra. However, errors at approximately 0.25 Hz can be present in the time history of the lateral turbulence component, particularly at the higher altitudes where airplane yawing motions are large. An assessment of the quantities comprising the vertical turbulence component leads to the conclusion that the vertical component is essentially accurate to zero frequency.

  19. Application of Near Infrared Spectroscopy Coupled with Fluidized Bed Enrichment and Chemometrics to Detect Low Concentration of β-Naphthalenesulfonic Acid.

    PubMed

    Li, Wei; Zhang, Xuan; Zheng, Kaiyi; Du, Yiping; Cap, Peng; Sui, Tao; Geng, Jinpei

    2015-01-01

    A fluidized bed enrichment technique was developed to improve the sensitivity of near infrared (NIR) spectroscopy, featuring rapid operation and large solution volumes. D301 resin was used as an adsorption material to preconcentrate β-naphthalenesulfonic acid from solutions in a concentration range of 2.0-100.0 μg/mL, and NIR spectra were measured directly on the β-naphthalenesulfonic acid adsorbed on the material. An improved partial least squares (PLS) model was attained with the aid of multiplicative scatter correction pretreatment and a stability competitive adaptive reweighted sampling wavenumber selection method. The root mean square error of cross validation was 1.87 μg/mL with 7 PLS factors. An independent test set was used to assess the model, with relative errors (RE) in an acceptable range of 0.46 to 10.03% and a mean RE of 3.72%. This study confirmed the viability of the proposed method for the measurement of a low content of β-naphthalenesulfonic acid in water.
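
    A minimal sketch of a PLS calibration with cross-validated RMSE, in the spirit of the model above; the spectra and concentrations are synthetic, and the paper's MSC pretreatment and wavenumber selection step are omitted:

    ```python
    # PLS regression with 10-fold cross-validated RMSE (RMSECV) on synthetic
    # "spectra". The 7-component choice mirrors the abstract; everything else
    # is a stand-in, not the study's data or pipeline.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 200))                          # 60 spectra x 200 wavenumbers
    y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)    # synthetic concentrations

    pls = PLSRegression(n_components=7)
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
    rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
    print(f"RMSECV = {rmsecv:.3f}")
    ```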

  20. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. A law is found whereby the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
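
    For orientation, the basic operation being optimized, sampling a volume at sub-voxel positions through a cubic B-spline, can be written with scipy's standard prefiltered interpolator; the paper's least-squares-optimized recursive filter and Gaussian weighting are not reproduced here:

    ```python
    # Cubic B-spline interpolation at a sub-voxel position in a 3-D volume.
    # map_coordinates applies the conventional B-spline prefilter internally;
    # the volume and coordinates below are synthetic illustrations.
    import numpy as np
    from scipy.ndimage import map_coordinates

    volume = np.random.rand(32, 32, 32)             # synthetic volumetric image
    coords = np.array([[10.25], [15.5], [20.75]])   # one sub-voxel position (z, y, x)
    value = map_coordinates(volume, coords, order=3, mode="nearest")
    print(value)
    ```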

  1. Sampling command generator corrects for noise and dropouts in recorded data

    NASA Technical Reports Server (NTRS)

    Anderson, T. O.

    1973-01-01

    Generator measures period between zero crossings of reference signal and accepts as correct timing points only those zero crossings which occur acceptably close to nominal time predicted from last accepted command. Unidirectional crossover points are used exclusively so errors from analog nonsymmetry of crossover detector are avoided.

  2. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

    For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. Human error rate was used to monitor and evaluate trayline employee performance and to evaluate layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
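
    The error-rate definition quoted above reduces to a one-line computation; the counts below are hypothetical:

    ```python
    # Human error rate = errors / opportunities for error, per the definition
    # in the abstract. The tray counts are made-up illustrations.
    def human_error_rate(errors: int, opportunities: int) -> float:
        return errors / opportunities

    daily_trays, tray_errors = 400, 12
    print(f"{human_error_rate(tray_errors, daily_trays):.1%}")  # 3.0%, the cited acceptable level
    ```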

  3. Reliability and measurement error of active knee extension range of motion in a modified slump test position: a pilot study.

    PubMed

    Tucker, Neil; Reid, Duncan; McNair, Peter

    2007-01-01

    The slump test is a tool to assess the mechanosensitivity of the neuromeningeal structures within the vertebral canal. While some studies have investigated the reliability of aspects of this test within the same day, few have assessed the reliability across days. Therefore, the purpose of this pilot study was to investigate reliability when measuring active knee extension range of motion (AROM) in a modified slump test position within trials on a single day and across days. Ten male and ten female asymptomatic subjects, ages 20-49 (mean age 30.1, SD 6.4) participated in the study. Knee extension AROM in a modified slump position with the cervical spine in a flexed position and then in an extended position was measured via three trials on two separate days. Across three trials, knee extension AROM increased significantly with a mean magnitude of 2 degrees within days for both cervical spine positions (P<0.05). The findings showed that there was no statistically significant difference in knee extension AROM measurements across days (P>0.05). The intraclass correlation coefficients for the mean of the three trials across days were 0.96 (lower limit 95% CI: 0.90) with the cervical spine flexed and 0.93 (lower limit 95% CI: 0.83) with cervical extension. Measurement error was calculated by way of the typical error and 95% limits of agreement, and visually represented in Bland and Altman plots. The typical error for the cervical flexed and extended positions averaged across trials was 2.6 degrees and 3.3 degrees, respectively. The limits of agreement were narrow, and the Bland and Altman plots also showed minimal bias in the joint angles across days with a random distribution of errors across the range of measured angles. This study demonstrated that knee extension AROM could be reliably measured across days in subjects without pathology and that the measurement error was acceptable. Implications of variability over multiple trials are discussed. The modified set-up for the test using the Kincom dynamometer and elevated thigh position may be useful to clinical researchers in determining the mechanosensitivity of the nervous system.
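
    The typical error and limits of agreement reported above come from simple difference statistics between days; a minimal sketch on synthetic angle data, not the study's measurements:

    ```python
    # Typical error (SD of between-day differences / sqrt(2)) and Bland-Altman
    # 95% limits of agreement for a test-retest design. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    day1 = rng.normal(160, 8, 20)            # knee extension AROM, degrees
    day2 = day1 + rng.normal(0, 3.5, 20)     # day-2 retest with random error

    diff = day2 - day1
    typical_error = diff.std(ddof=1) / np.sqrt(2)
    loa = (diff.mean() - 1.96 * diff.std(ddof=1),
           diff.mean() + 1.96 * diff.std(ddof=1))
    print(f"typical error = {typical_error:.1f} deg, LoA = ({loa[0]:.1f}, {loa[1]:.1f}) deg")
    ```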

  4. Reliability and Measurement Error of Active Knee Extension Range of Motion in a Modified Slump Test Position: A Pilot Study

    PubMed Central

    Tucker, Neil; Reid, Duncan; McNair, Peter

    2007-01-01

    The slump test is a tool to assess the mechanosensitivity of the neuromeningeal structures within the vertebral canal. While some studies have investigated the reliability of aspects of this test within the same day, few have assessed the reliability across days. Therefore, the purpose of this pilot study was to investigate reliability when measuring active knee extension range of motion (AROM) in a modified slump test position within trials on a single day and across days. Ten male and ten female asymptomatic subjects, ages 20–49 (mean age 30.1, SD 6.4) participated in the study. Knee extension AROM in a modified slump position with the cervical spine in a flexed position and then in an extended position was measured via three trials on two separate days. Across three trials, knee extension AROM increased significantly with a mean magnitude of 2° within days for both cervical spine positions (P<0.05). The findings showed that there was no statistically significant difference in knee extension AROM measurements across days (P>0.05). The intraclass correlation coefficients for the mean of the three trials across days were 0.96 (lower limit 95% CI: 0.90) with the cervical spine flexed and 0.93 (lower limit 95% CI: 0.83) with cervical extension. Measurement error was calculated by way of the typical error and 95% limits of agreement, and visually represented in Bland and Altman plots. The typical error for the cervical flexed and extended positions averaged across trials was 2.6° and 3.3°, respectively. The limits of agreement were narrow, and the Bland and Altman plots also showed minimal bias in the joint angles across days with a random distribution of errors across the range of measured angles. This study demonstrated that knee extension AROM could be reliably measured across days in subjects without pathology and that the measurement error was acceptable. Implications of variability over multiple trials are discussed. The modified set-up for the test using the Kincom dynamometer and elevated thigh position may be useful to clinical researchers in determining the mechanosensitivity of the nervous system. PMID:19066666

  5. Effect of Orthokeratology on myopia progression: twelve-year results of a retrospective cohort study.

    PubMed

    Lee, Yueh-Chang; Wang, Jen-Hung; Chiu, Cheng-Jen

    2017-12-08

    Several studies have reported the efficacy of orthokeratology for myopia control, but there are few publications with follow-up longer than 3 years. This study aims to determine whether overnight orthokeratology influences the progression rate of the manifest refractive error of myopic children over a longer follow-up period (up to 12 years) and, if changes in progression rate are found, to investigate the relationship between refractive changes and different baseline factors, including refractive error, wearing age and lens replacement frequency. In addition, this study collects a long-term safety profile of overnight orthokeratology. This is a retrospective study of sixty-six school-age children who received overnight orthokeratology correction between January 1998 and December 2013. Thirty-six subjects whose baseline age and refractive error matched those in the orthokeratology group were selected to form the control group. These subjects were followed up for at least 12 months. Manifest refractions, cycloplegic refractions, uncorrected and best-corrected visual acuities, power vector of astigmatism, corneal curvature, and lens replacement frequency were obtained for analysis. Data on 203 eyes were derived from the 66 orthokeratology subjects (31 males and 35 females) and 36 control subjects (22 males and 14 females) enrolled in this study. Wearing ages ranged from 7 years to 16 years (mean ± SE, 11.72 ± 0.18 years). The follow-up time ranged from 1 year to 13 years (mean ± SE, 6.32 ± 0.15 years). At baseline, myopia ranged from -0.5 D to -8.0 D (mean ± SE, -3.70 ± 0.12 D), and astigmatism ranged from 0 D to -3.0 D (mean ± SE, -0.55 ± 0.05 D). Compared with the control group, the orthokeratology group had a significantly (p < 0.001) lower trend of refractive error change during the follow-up period. According to the generalized estimating equation (GEE) model, greater astigmatism power was associated with increased change of refractive error during the follow-up years. Overnight orthokeratology was effective in slowing myopia progression over a twelve-year follow-up period and demonstrated a clinically acceptable safety profile. Higher initial astigmatism power was associated with increased change of refractive error during the follow-up years.

  6. A probabilistic approach to remote compositional analysis of planetary surfaces

    USGS Publications Warehouse

    Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.

    2017-01-01

    Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.

  7. Analysis of space telescope data collection system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    An analysis of the expected performance for the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed. A mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.

  8. August Median Streamflow on Ungaged Streams in Eastern Aroostook County, Maine

    USGS Publications Warehouse

    Lombard, Pamela J.; Tasker, Gary D.; Nielsen, Martha G.

    2003-01-01

    Methods for estimating August median streamflow were developed for ungaged, unregulated streams in the eastern part of Aroostook County, Maine, with drainage areas from 0.38 to 43 square miles and mean basin elevations from 437 to 1,024 feet. Few long-term, continuous-record streamflow-gaging stations with small drainage areas were available from which to develop the equations; therefore, 24 partial-record gaging stations were established in this investigation. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record stations was applied by relating base-flow measurements at these stations to concurrent daily flows at nearby long-term, continuous-record streamflow- gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for varying periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Twenty-three partial-record stations and one continuous-record station were used for the final regression equations. The basin characteristics of drainage area and mean basin elevation are used in the calculated regression equation for ungaged streams to estimate August median flow. The equation has an average standard error of prediction from -38 to 62 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -40 to 67 percent. Model error is larger than sampling error for both equations, indicating that additional basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow, which can be used when making estimates at partial-record or continuous-record gaging stations, range from 0.03 to 11.7 cubic feet per second or from 0.1 to 0.4 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in the eastern part of Aroostook County, within the range of acceptable explanatory variables, range from 0.03 to 30 cubic feet per second or 0.1 to 0.7 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as mean elevation and drainage area increase.
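
    Regional regression equations of this kind take a power-function form in the basin characteristics. A sketch of the functional form only; the coefficients below are placeholders for illustration, not the report's fitted values:

    ```python
    # August median streamflow as a power function of drainage area (DA) and
    # mean basin elevation (E): Q = a * DA**b * E**c. The values of a, b, c
    # here are hypothetical stand-ins, not the GLS regression results.
    def august_median_flow(drainage_area_mi2, elevation_ft, a=0.05, b=1.0, c=0.5):
        return a * drainage_area_mi2 ** b * elevation_ft ** c

    print(f"{august_median_flow(10.0, 800.0):.2f} ft^3/s")  # illustrative output only
    ```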

  9. An Analysis of Plan Robustness for Esophageal Tumors: Comparing Volumetric Modulated Arc Therapy Plans and Spot Scanning Proton Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Samantha, E-mail: samantha.warren@oncology.ox.ac.uk; Partridge, Mike; Bolsi, Alessandra

    Purpose: Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials: For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results: SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions: The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial.

  10. An Analysis of Plan Robustness for Esophageal Tumors: Comparing Volumetric Modulated Arc Therapy Plans and Spot Scanning Proton Planning

    PubMed Central

    Warren, Samantha; Partridge, Mike; Bolsi, Alessandra; Lomax, Anthony J.; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A.

    2016-01-01

    Purpose Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial. PMID:27084641

  11. An Analysis of Plan Robustness for Esophageal Tumors: Comparing Volumetric Modulated Arc Therapy Plans and Spot Scanning Proton Planning.

    PubMed

    Warren, Samantha; Partridge, Mike; Bolsi, Alessandra; Lomax, Anthony J; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A

    2016-05-01

    Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose-volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Development and validity of a method for the evaluation of printed education material

    PubMed Central

    Castro, Mauro Silveira; Pilger, Diogo; Fuchs, Flávio Danni; Ferreira, Maria Beatriz Cardoso

    Objectives To develop and study the validity of an instrument for the evaluation of Printed Education Materials (PEM); to evaluate the use of acceptability indices; to identify possible influences of professional aspects. Methods An instrument for PEM evaluation was developed in three steps: domain identification, item generation and instrument design. An easy-to-read PEM was developed for the education of patients with systemic hypertension and its treatment with hydrochlorothiazide. Construct validity was measured based on previously established errors purposively introduced into the PEM, which served as extreme groups. An acceptability index was applied taking into account the rate of professionals who should approve each item. Participants were 10 physicians (9 men) and 5 nurses (all women). Results Many professionals identified intentional errors of a crude character. Few participants identified errors that needed more careful evaluation, and no one detected the intentional error that required literature analysis. Physicians considered 95.8% of the items of the PEM acceptable, and nurses 29.2%. The differences between the scores were statistically significant for 27% of the items. In the overall evaluation, 66.6% of items were considered acceptable. The analysis of each item revealed a behavioral pattern for each professional group. Conclusions Instruments for the evaluation of printed education materials are needed and may improve the quality of the PEMs available to patients. Acceptability indices are not always totally correct, nor do they always represent high quality of information. The professional experience, the practice pattern, and perhaps the gender of the reviewers may influence their evaluation. An analysis of the PEM by professionals in communication and drug information, and by patients, should be carried out to improve the quality of the proposed material. PMID:25214924

  13. Discrepancies in reporting the CAG repeat lengths for Huntington's disease

    PubMed Central

    Quarrell, Oliver W; Handley, Olivia; O'Donovan, Kirsty; Dumoulin, Christine; Ramos-Arroyo, Maria; Biunno, Ida; Bauer, Peter; Kline, Margaret; Landwehrmeyer, G Bernhard

    2012-01-01

    Huntington's disease results from a CAG repeat expansion within the Huntingtin gene; this is measured routinely in diagnostic laboratories. The European Huntington's Disease Network REGISTRY project centrally measures CAG repeat lengths on fresh samples; these were compared with the original results from 121 laboratories across 15 countries. We report on 1326 duplicate results; a discrepancy in reporting the upper allele occurred in 51% of cases, this reduced to 13.3% and 9.7% when we applied acceptable measurement errors proposed by the American College of Medical Genetics and the Draft European Best Practice Guidelines, respectively. Duplicate results were available for 1250 lower alleles; discrepancies occurred in 40% of cases. Clinically significant discrepancies occurred in 4.0% of cases with a potential unexplained misdiagnosis rate of 0.3%. There was considerable variation in the discrepancy rate among 10 of the countries participating in this study. Out of 1326 samples, 348 were re-analysed by an accredited diagnostic laboratory, based in Germany, with concordance rates of 93% and 94% for the upper and lower alleles, respectively. This became 100% if the acceptable measurement errors were applied. The central laboratory correctly reported allele sizes for six standard reference samples, blind to the known result. Our study differs from external quality assessment (EQA) schemes in that these are duplicate results obtained from a large sample of patients across the whole diagnostic range. We strongly recommend that laboratories state an error rate for their measurement on the report, participate in EQA schemes and use reference materials regularly to adjust their own internal standards. PMID:21811303
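
    The guideline comparison above reduces to a simple tolerance check on duplicate repeat-length calls. A minimal sketch; the ±1-repeat tolerance is an assumed illustration, not a quotation of either guideline:

    ```python
    # Concordance check between a local laboratory's CAG repeat call and a
    # central re-measurement, within an acceptable measurement error. The
    # tolerance of 1 repeat is a hypothetical example value.
    def alleles_concordant(lab_repeats: int, central_repeats: int, tol: int = 1) -> bool:
        return abs(lab_repeats - central_repeats) <= tol

    print(alleles_concordant(42, 43))  # True under a +/-1 tolerance
    print(alleles_concordant(42, 45))  # False -> discrepant report
    ```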

  14. Thermal comfort study of hospital workers in Malaysia.

    PubMed

    Yau, Y H; Chew, B T

    2009-12-01

    This article presents findings of the thermal comfort study in hospitals. A field survey was conducted to investigate the temperature range for thermal comfort in hospitals in the tropics. Thermal acceptability assessment was conducted to examine whether the hospitals in the tropics met the ASHRAE Standard-55 80% acceptability criteria. A total of 114 occupants in four hospitals were involved in the study. The results of the field study revealed that only 44% of the examined locations met the comfort criteria specified in ASHRAE Standard 55. The survey also examined the predicted percentage of dissatisfied in the hospitals. The results showed that 49% of the occupants were satisfied with the thermal environments in the hospitals. The field survey analysis revealed that the neutral temperature for Malaysian hospitals was 26.4 degrees C. The comfort temperature range that satisfied 90% of the occupants in the space was in the range of 25.3-28.2 degrees C. The results from the field study suggested that a higher comfort temperature was required for Malaysians in hospital environments compared with the temperature criteria specified in ASHRAE Standard (2003). In addition, the significant deviation between actual mean vote and predicted mean vote (PMV) strongly implied that PMV could not be applied without errors in hospitals in the tropics. The new findings on thermal comfort temperature range in hospitals in the tropics could be used as an important guide for building services engineers and researchers who are intending to minimize energy usage in heating, ventilating and air conditioning systems in hospitals operating in the tropics with acceptable thermal comfort level and to improve the performance and well-being of its workers.
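
    The predicted mean vote analysis referenced above relies on the standard ISO 7730 relationship between PMV and the predicted percentage of dissatisfied (PPD); a short worked form:

    ```python
    # PPD as a function of PMV (ISO 7730 / Fanger form):
    # PPD = 100 - 95 * exp(-(0.03353*PMV^4 + 0.2179*PMV^2))
    import math

    def ppd(pmv: float) -> float:
        return 100.0 - 95.0 * math.exp(-(0.03353 * pmv**4 + 0.2179 * pmv**2))

    for v in (0.0, 0.5, 1.0):
        print(f"PMV {v:+.1f} -> PPD {ppd(v):.0f}%")   # 5%, 10%, 26%
    ```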

  15. Simulation and experimental research of 1MWe solar tower power plant in China

    NASA Astrophysics Data System (ADS)

    Yu, Qiang; Wang, Zhifeng; Xu, Ershu

    2016-05-01

    The establishment of a reliable simulation system for a solar tower power plant can greatly increase the economic and safety performance of the whole system. In this paper, a dynamic model of the 1MWe Solar Tower Power Plant at Badaling in Beijing is developed based on the "STAR-90" simulation platform, including the heliostat field, the central receiver system (water/steam), etc. The dynamic behavior of the global CSP plant can be simulated. In order to verify the validity of the simulation system, a complete experimental process was synchronously simulated by repeating the same operating steps on the simulation platform, including the locations and number of heliostats, the mass flow of the feed water, etc. From the simulation and experimental results, several important parameters were selected for detailed comparison. The results show good agreement between the simulations and the experiments, with deviations within an acceptable range given the accuracy of the models. Finally, a comprehensive analysis of the error sources is carried out on the basis of the comparative results.

  16. Submillimeter, millimeter, and microwave spectral line catalogue

    NASA Technical Reports Server (NTRS)

    Poynter, R. L.; Pickett, H. M.

    1984-01-01

    This report describes a computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 10000 GHz (i.e., wavelengths longer than 30 micrometers). The catalogue can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue has been constructed using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalogue will add more atoms and molecules and update the present listings (151 species) as new data appear. The catalogue is available from the authors as a magnetic tape recorded in card images and as a set of microfiche records.

  17. Submillimeter, millimeter, and microwave spectral line catalogue

    NASA Technical Reports Server (NTRS)

    Poynter, R. L.; Pickett, H. M.

    1981-01-01

    A computer accessible catalogue of submillimeter, millimeter and microwave spectral lines in the frequency range between 0 and 3000 GHZ (i.e., wavelengths longer than 100 mu m) is presented which can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalogue will add more atoms and molecules and update the present listings (133 species) as new data appear. The catalogue is available as a magnetic tape recorded in card images and as a set of microfiche records.

  18. NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid

    NASA Astrophysics Data System (ADS)

    Thomas, Togis; Gupta, K. K.

    2016-03-01

    Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently used for communication. In the smart grid, Power Line Communication (PLC) is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and its location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
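
    A minimal BER-versus-SNR simulation of the kind plotted in such studies, using QPSK on 64 OFDM subcarriers over plain AWGN; the paper's cyclostationary PLC noise and ABCD-parameter channel are deliberately not modelled here:

    ```python
    # OFDM over AWGN: map bits to QPSK, IFFT to time domain, add noise,
    # FFT back and hard-detect. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sc, n_sym = 64, 2000
    bits = rng.integers(0, 2, size=(n_sym, n_sc, 2))
    qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

    tx = np.fft.ifft(qpsk, axis=1) * np.sqrt(n_sc)   # unit-power OFDM symbols

    for snr_db in (0, 5, 10):
        noise_var = 10 ** (-snr_db / 10)
        noise = np.sqrt(noise_var / 2) * (rng.normal(size=tx.shape)
                                          + 1j * rng.normal(size=tx.shape))
        rx = np.fft.fft(tx + noise, axis=1) / np.sqrt(n_sc)
        bits_hat = np.stack([rx.real > 0, rx.imag > 0], axis=-1).astype(int)
        print(f"SNR {snr_db:2d} dB -> BER {np.mean(bits_hat != bits):.4f}")
    ```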

  19. In-situ Calibration Methods for Phased Array High Frequency Radars

    NASA Astrophysics Data System (ADS)

    Flament, P. J.; Flament, M.; Chavanne, C.; Flores-vidal, X.; Rodriguez, I.; Marié, L.; Hilmer, T.

    2016-12-01

    HF radars measure currents through the Doppler-shift of electromagnetic waves Bragg-scattered by surface gravity waves. While modern clocks and digital synthesizers yield range errors negligible compared to the bandwidth-limited range resolution, azimuth calibration issues arise for beam-forming phased arrays. Sources of errors in the phases of the received waves can be internal to the radar system (phase errors of filters, cable lengths, antenna tuning) and geophysical (standing waves, propagation and refraction anomalies). They result in azimuthal biases (which can be range-dependent) and beam-forming side-lobes (which induce Doppler ambiguities). We analyze the experimental calibrations of 17 deployments of WERA HF radars, performed between 2003 and 2012 in Hawaii, the Adriatic, France, Mexico and the Philippines. Several strategies were attempted: (i) passive reception of continuous multi-frequency transmitters on GPS-tracked boats, cars, and drones; (ii) bi-static calibrations of radars in mutual view; (iii) active echoes from vessels of opportunity of unknown positions or tracked through AIS; (iv) interference of unknown remote transmitters with the chirped local oscillator. We found that: (a) for antennas deployed on the sea shore, a single-azimuth calibration is sufficient to correct phases within a typical beam-forming azimuth range; (b) after applying this azimuth-independent correction, residual pointing errors are 1-2 deg. rms; (c) for antennas deployed on irregular cliffs or hills, back from shore, systematic biases appear for some azimuths at large incidence angles, suggesting that some of the ground-wave electromagnetic energy propagates in a terrain-following mode between the sea shore and the antennas; (d) for some sites, fluctuations of 10-25 deg. in radio phase at 20-40 deg. azimuthal period, not significantly correlated among antennas, are omnipresent in calibrations along a constant-range circle, suggesting standing waves or multiple paths in the presence of reflecting structures (buildings, fences), or possibly fractal nature of the wavefronts; (e) amplitudes lack stability in time and azimuth to be usable as a-priori calibrations, confirming the accepted method of re-normalizing amplitudes by the signal of nearby cells prior to beam-forming.

  20. Evaluation of the Regional Atmospheric Modeling System in the Eastern Range Dispersion Assessment System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan

    2000-01-01

    The Applied Meteorology Unit is conducting an evaluation of the Regional Atmospheric Modeling System (RAMS) contained within the Eastern Range Dispersion Assessment System (ERDAS). ERDAS provides emergency response guidance for operations at the Cape Canaveral Air Force Station and the Kennedy Space Center in the event of an accidental hazardous material release or aborted vehicle launch. The prognostic data from RAMS is available to ERDAS for display and is used to initialize the 45th Range Safety (45 SW/SE) dispersion model. Thus, the accuracy of the 45 SW/SE dispersion model is dependent upon the accuracy of RAMS forecasts. The RAMS evaluation task consists of an objective and subjective component for the Florida warm and cool seasons of 1999-2000. The objective evaluation includes gridded and point error statistics at surface and upper-level observational sites, a comparison of the model errors to a coarser grid configuration of RAMS, and a benchmark of RAMS against the widely accepted Eta model. The warm-season subjective evaluation involves a verification of the onset and movement of the Florida east coast sea breeze and RAMS forecast precipitation. This interim report provides a summary of the RAMS objective and subjective evaluation for the 1999 Florida warm season only.

  1. The evolution of Crew Resource Management training in commercial aviation

    NASA Technical Reports Server (NTRS)

    Helmreich, R. L.; Merritt, A. C.; Wilhelm, J. A.

    1999-01-01

    In this study, we describe changes in the nature of Crew Resource Management (CRM) training in commercial aviation, including its shift from cockpit to crew resource management. Validation of the impact of CRM is discussed. Limitations of CRM, including lack of cross-cultural generality are considered. An overarching framework that stresses error management to increase acceptance of CRM concepts is presented. The error management approach defines behavioral strategies taught in CRM as error countermeasures that are employed to avoid error, to trap errors committed, and to mitigate the consequences of error.

  2. Ultrasonic Blood Flow Measurement in Haemodialysis

    PubMed Central

    Sampson, D.; Papadimitriou, M.; Kulatilake, A. E.

    1970-01-01

    A 5-megacycle Doppler flow meter, calibrated in-vitro, was found to give a linear response to blood flow in the ranges commonly encountered in haemodialysis. With this, blood flow through artificial kidneys could be measured simply and with a clinically acceptable error. The method is safe, as blood lines do not have to be punctured or disconnected and hence there is no risk of introducing infection. Besides its value as a research tool the flow meter is useful in evaluating new artificial kidneys. Suitably modified it could form the basis of an arterial flow alarm system. PMID:5416812

  3. A directional cylindrical anemometer with four sets of differential pressure sensors

    NASA Astrophysics Data System (ADS)

    Liu, C.; Du, L.; Zhao, Z.

    2016-03-01

    This paper presents a solid-state directional anemometer for simultaneously measuring the speed and direction of wind in a plane over a speed range of 1-40 m/s. The instrument has a cylindrical shape and works by detecting the pressure differences across diameters of the cylinder when exposed to wind. By analyzing our experimental data in a Reynolds number regime of 1.7 × 10³-7 × 10⁴, we determine the relationship between the pressure difference distribution and the wind velocity. We propose a novel and simple solution based on this relationship and design an anemometer composed of a circular cylinder with four sets of differential pressure sensors, tubes connecting these sensors to the cylinder's surface, and corresponding circuits. Having no moving parts, the instrument is small and free of friction. It has a simple internal structure, and the fragile sensing elements are well protected. Prototypes were fabricated to evaluate the performance of the proposed approach. The power consumption of the prototype is less than 0.5 W, and the sample rate is up to 31 Hz. Test results in a wind tunnel indicate that the maximum relative speed measurement error is 5% and the direction error is no more than 5° in the speed range 2-40 m/s. In theory, the instrument is capable of measuring wind up to 60 m/s. When the air stream is slower than 2 m/s, the direction errors are slightly greater and the speed measurement performance degrades, but remains within an acceptable range of ±0.2 m/s.
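
    A much-simplified sketch of recovering direction and speed from two orthogonal diameter pressure differences (the four-sensor layout reduces to two orthogonal pairs): the cosine-type direction model, the stagnation-to-wake coefficient of 2, and the readings are rough assumptions, not the paper's calibration:

    ```python
    # Direction from the ratio of two orthogonal differential pressures and
    # speed from the resultant pressure, assuming dp across a diameter is
    # roughly twice the dynamic pressure. Illustrative values only.
    import math

    dp_x, dp_y = 42.0, 24.0                 # Pa, two orthogonal diameter pairs
    rho = 1.2                               # air density, kg/m^3

    direction = math.degrees(math.atan2(dp_y, dp_x)) % 360.0
    q = math.hypot(dp_x, dp_y) / 2.0        # crude dynamic-pressure estimate
    speed = math.sqrt(2.0 * q / rho)
    print(f"{speed:.1f} m/s from {direction:.0f} deg")
    ```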

  4. Development and validation of effective real-time and periodic interinstrument comparison method for automatic hematology analyzers.

    PubMed

    Park, Sang Hyuk; Park, Chan-Jeoung; Kim, Mi-Jeong; Choi, Mi-Ok; Han, Min-Young; Cho, Young-Uk; Jang, Seongsoo

    2014-12-01

    We developed an interinstrument comparison method for automatic hematology analyzers based on the 99th percentile coefficient of variation (CV) cutoff of daily means and validated it in both patient samples and quality control (QC) materials. A total of 120 patient samples were obtained over 6 months. Data from the first 3 months were used to determine 99th percentile CV cutoff values, and data obtained in the last 3 months were used to calculate acceptable ranges and rejection rates. Identical analyses were also performed using QC materials. Two-instrument comparisons were also performed, and the most appropriate allowable total error (ATE) values were determined. The rejection rates based on the 99th percentile cutoff values were within 10.00% and 9.30% for the patient samples and QC materials, respectively. The acceptable ranges of QC materials based on the currently used method were wider than those calculated from the 99th percentile CV cutoff values for most items. In two-instrument comparisons, 34.8% of all comparisons failed, and 87.0% of failed comparisons became successful when 4 SD was applied as the ATE value instead of 3 SD. The 99th percentile CV cutoff value-derived daily acceptable ranges can be used as a real-time interinstrument comparison method in both patient samples and QC materials. Applying 4 SD as the ATE value can significantly reduce unnecessary recalibration in the leukocyte differential counts, reticulocytes, and mean corpuscular volume. Copyright© by the American Society for Clinical Pathology.
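
    A sketch of the cutoff rule described above on synthetic daily means from two analyzers: compute each day's between-instrument CV, take the 99th percentile over a baseline period, and flag later days that exceed it:

    ```python
    # 99th percentile CV cutoff for interinstrument comparison. The daily
    # means are synthetic stand-ins, not laboratory data.
    import numpy as np

    rng = np.random.default_rng(2)
    baseline = rng.normal(7.0, 0.15, size=(90, 2))   # 90 days x 2 analyzers

    def daily_cv(means):
        # CV (%) of the instruments' daily means, day by day
        return means.std(axis=1, ddof=1) / means.mean(axis=1) * 100.0

    cutoff = np.percentile(daily_cv(baseline), 99)   # 99th percentile CV cutoff
    cv_today = daily_cv(np.array([[7.0, 7.9]]))[0]
    print(f"CV today = {cv_today:.2f}%, cutoff = {cutoff:.2f}%")
    print("flag for review" if cv_today > cutoff else "acceptable")
    ```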

  5. 3D point cloud analysis of structured light registration in computer-assisted navigation in spinal surgeries

    NASA Astrophysics Data System (ADS)

    Gupta, Shaurya; Guha, Daipayan; Jakubovic, Raphael; Yang, Victor X. D.

    2017-02-01

    Computer-assisted navigation is used by surgeons in spine procedures to guide pedicle screws, to improve placement accuracy and, in some cases, to better visualize the patient's underlying anatomy. Intraoperative registration is performed to establish a correlation between the patient's anatomy and the pre/intra-operative image. Current algorithms rely on seeding points obtained directly from the exposed spinal surface to achieve clinically acceptable registration accuracy. Registration of these three-dimensional surface point clouds is prone to various systematic errors. The goal of this study was to evaluate the robustness of surgical navigation systems by examining the relationship between the optical density of an acquired 3D point cloud and the corresponding surgical navigation error. A retrospective review was conducted of 48 registrations performed using an experimental structured light navigation system developed within our lab. For each registration, the number of points in the acquired point cloud was evaluated relative to whether the registration was acceptable, the corresponding system-reported error, and the target registration error. It was demonstrated that the number of points in the point cloud correlates with neither the acceptance/rejection of a registration nor the system-reported error. However, a negative correlation was observed between the number of points in the point cloud and the corresponding sagittal angular error. Thus, the system-reported registration point count and accuracy are insufficient to gauge the accuracy of a navigation system, and the operating surgeon must verify and validate registration against anatomical landmarks prior to commencing surgery.

  6. High-density force myography: A possible alternative for upper-limb prosthetic control.

    PubMed

    Radmand, Ashkan; Scheme, Erik; Englehart, Kevin

    2016-01-01

    Several multiple degree-of-freedom upper-limb prostheses that have the promise of highly dexterous control have recently been developed. Inadequate controllability, however, has limited adoption of these devices. Introducing more robust control methods will likely result in higher acceptance rates. This work investigates the suitability of using high-density force myography (HD-FMG) for prosthetic control. HD-FMG uses a high-density array of pressure sensors to detect changes in the pressure patterns between the residual limb and socket caused by the contraction of the forearm muscles. In this work, HD-FMG outperforms the standard electromyography (EMG)-based system in detecting different wrist and hand gestures. With the arm in a fixed, static position, eight hand and wrist motions were classified with 0.33% error using the HD-FMG technique. Comparatively, classification errors in the range of 2.2%-11.3% have been reported in the literature for multichannel EMG-based approaches. As with EMG, position variation in HD-FMG can introduce classification error, but incorporating position variation into the training protocol reduces this effect. Channel reduction was also applied to the HD-FMG technique to decrease the dimensionality of the problem as well as the size of the sensorized area. We found that with informed, symmetric channel reduction, classification error could be decreased to 0.02%.

  7. Estimation and Simulation of Slow Crack Growth Parameters from Constant Stress Rate Data

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Weaver, Aaron S.

    2003-01-01

    Closed form, approximate functions for estimating the variances and degrees of freedom associated with the slow crack growth parameters n, D, B, and A(sup *), as measured using constant stress rate ('dynamic fatigue') testing, were derived by using propagation of errors. Estimates made with the resulting functions and slow crack growth data for a sapphire window were compared to the results of Monte Carlo simulations. The functions for estimation of the variances of the parameters were derived both with and without logarithmic transformation of the initial slow crack growth equations. The transformation was performed to make the functions both more linear and more normal. Comparison of the Monte Carlo results and the closed form expressions derived with propagation of errors indicated that linearization is not required for good estimates of the variances of parameters n and D by the propagation of errors method. However, good estimates of the variances of the parameters B and A(sup *) could only be made when the starting slow crack growth equation was transformed and the coefficients of variation of the input parameters were not too large. This was partially a result of the skewed distributions of B and A(sup *). Parametric variation of the input parameters was used to determine an acceptable range for using the closed form approximate equations derived from propagation of errors.
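
    The propagation-of-errors calculation has a generic numeric form (the delta method): Var(f) ≈ gᵀCg, with g the gradient of the parameter function at the mean inputs and C the input covariance. A sketch with a placeholder function, not the paper's slow-crack-growth expressions:

    ```python
    # Numeric delta-method variance: finite-difference gradient at the mean
    # inputs, then g @ cov @ g. The function f is a hypothetical example.
    import numpy as np

    def delta_var(f, mu, cov, h=1e-6):
        mu = np.asarray(mu, dtype=float)
        g = np.empty_like(mu)
        for i in range(mu.size):            # central finite differences
            dp, dm = mu.copy(), mu.copy()
            dp[i] += h
            dm[i] -= h
            g[i] = (f(dp) - f(dm)) / (2 * h)
        return g @ np.asarray(cov) @ g

    f = lambda p: p[0] / (p[1] + 2.0)       # placeholder parameter function
    print(delta_var(f, [3.0, 18.0], np.diag([0.04, 1.0])))
    ```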

  8. Myopia, contact lens use and self-esteem

    PubMed Central

    Dias, Lynette; Manny, Ruth E; Weissberg, Erik; Fern, Karen D

    2013-01-01

    Purpose: To evaluate whether contact lens (CL) use was associated with self-esteem in myopic children originally enrolled in the Correction of Myopia Evaluation Trial (COMET), which after five years continued as an observational study of myopia progression with CL use permitted. Methods: Usable data at the six-year visit, one year after CL use was allowed (n = 423/469, age 12-17 years), included questions on CL use, refractive error measurements and self-reported self-esteem in several areas (scholastic/athletic competence, physical appearance, social acceptance, behavioural conduct and global self-worth). Self-esteem, scored from 1 (low) to 4 (high), was measured by the Self-Perception Profile for Children in participants under 14 years or the Self-Perception Profile for Adolescents in those 14 years and older. Multiple regression analyses were used to evaluate associations between self-esteem and relevant factors identified by univariate analyses (e.g., CL use, gender, ethnicity), while adjusting for baseline self-esteem prior to CL use. Results: Mean (±SD) self-esteem scores at the six-year visit (mean age = 15.3 ± 1.3 years; mean refractive error = −4.6 ± 1.5 D) ranged from 2.74 (±0.76) on athletic competence to 3.33 (±0.53) on global self-worth. CL wearers (n = 224) compared to eyeglass wearers (n = 199) were more likely to be female (p < 0.0001). Those who chose to wear CLs had higher social acceptance, athletic competence and behavioural conduct scores (p < 0.05) at baseline compared to eyeglass users. CL users continued to report higher social acceptance scores at the six-year visit (p = 0.03), after adjusting for baseline scores and other covariates. Ethnicity was also independently associated with social acceptance in the multivariable analyses (p = 0.011); African-Americans had higher scores than Asians, Whites and Hispanics. Age and refractive error were not associated with self-esteem or CL use. Conclusions: COMET participants who chose to wear CLs after five years of eyeglass use had higher self-esteem compared to those who remained in glasses both preceding and following CL use. This suggests that self-esteem may influence the decision to wear CLs and that CLs in turn are associated with higher self-esteem in individuals most likely to wear them. PMID:23763482

  9. A system of equations to approximate the pharmacokinetic parameters of lacosamide at steady state from one plasma sample.

    PubMed

    Cawello, Willi; Schäfer, Carina

    2014-08-01

    Frequent plasma sampling to monitor the pharmacokinetic (PK) profile of antiepileptic drugs (AEDs) is invasive, costly and time consuming. For drugs with a well-defined PK profile, such as the AED lacosamide, equations can accurately approximate PK parameters from one steady-state plasma sample. Equations were derived to approximate steady-state peak and trough lacosamide plasma concentrations (Cpeak,ss and Ctrough,ss, respectively) and the area under the concentration-time curve during the dosing interval (AUCτ,ss) from one plasma sample. Lacosamide (ka: ∼2 h(-1); ke: ∼0.05 h(-1), corresponding to a half-life of 13 h) was calculated to reach Cpeak,ss after ∼1 h (tmax,ss). The equations were validated by comparing approximations to reference PK parameters obtained from single plasma samples drawn 3-12 h following lacosamide administration, using data from a double-blind, placebo-controlled, parallel-group PK study. Values of relative bias (accuracy) between -15% and +15% and root mean square error (RMSE) values ≤15% (precision) were considered acceptable for validation. Thirty-five healthy subjects (12 young males; 11 elderly males; 12 elderly females) received lacosamide 100 mg/day for 4.5 days. Equation-derived PK values were compared to reference mean Cpeak,ss, Ctrough,ss and AUCτ,ss values. Equation-derived PK data had a precision of 6.2% and accuracies of -8.0%, 2.9%, and -0.11%, respectively. Equation-derived versus reference PK values for individual samples obtained 3-12 h after lacosamide administration showed a correlation (R2) range of 0.88-0.97 for AUCτ,ss. The correlation range for Cpeak,ss and Ctrough,ss was 0.65-0.87. Error analyses for individual sample comparisons were independent of time. The derived equations approximated lacosamide Cpeak,ss, Ctrough,ss and AUCτ,ss from one steady-state plasma sample within the validation range. Approximated PK parameters were within accepted validation criteria when compared to reference PK values. Copyright © 2014 Elsevier B.V. All rights reserved.
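
    A minimal sketch of the underlying idea, under a standard one-compartment oral-absorption model at steady state: with ka and ke treated as known, a single timed sample fixes the scale of the whole concentration-time profile, from which Cpeak,ss, Ctrough,ss and AUCτ,ss follow. The sample time and concentration below are hypothetical, and the paper's exact equations may differ:

    ```python
    import numpy as np

    # One-compartment oral model at steady state; ka, ke and the dosing interval
    # follow the abstract, everything else is hypothetical.
    ka, ke, tau = 2.0, 0.05, 24.0          # 1/h, 1/h, h (once-daily dosing)

    def shape(t):
        """Unscaled steady-state concentration-time profile over one interval."""
        return (np.exp(-ke * t) / (1 - np.exp(-ke * tau))
                - np.exp(-ka * t) / (1 - np.exp(-ka * tau)))

    # Standard closed form for the steady-state time of peak in this model:
    tmax = np.log(ka * (1 - np.exp(-ke * tau))
                  / (ke * (1 - np.exp(-ka * tau)))) / (ka - ke)

    # One measured steady-state sample (hypothetical time and concentration):
    t_sample, c_sample = 6.0, 7.9          # h, ug/mL
    scale = c_sample / shape(t_sample)     # the sample fixes the profile's scale

    c_peak = scale * shape(tmax)
    c_trough = scale * shape(tau)
    auc_tau = scale * (1 / ke - 1 / ka)    # closed-form integral of shape()

    print(f"tmax,ss = {tmax:.2f} h, Cpeak,ss = {c_peak:.2f}, "
          f"Ctrough,ss = {c_trough:.2f}, AUCtau,ss = {auc_tau:.1f}")
    ```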

  10. Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution

    NASA Astrophysics Data System (ADS)

    Samohyl, Robert Wayne

    2017-10-01

    This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States standard ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, it suggests using the hypergeometric distribution to calculate the parameters of sampling plans, avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, the discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, it elaborates the theory of acceptance sampling in terms of hypothesis testing, rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce, such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot tolerance percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be concerned with a value of AQL, as should the producer with LTPD. Furthermore, one can question why type I error is always uniquely associated with the producer as producer risk and, likewise, why consumer risk is necessarily associated with type II error. The resolution of these questions is new to the literature. The article presents R code throughout.
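
    The article presents R code; an equivalent sketch in Python of the exact (hypergeometric) calculation is below. The lot size, plan parameters, AQL and LTPD values are illustrative, not taken from the paper:

    ```python
    from scipy.stats import hypergeom

    # Exact operating characteristics of an attribute plan; all numbers are
    # illustrative, not from the paper.
    N = 1000              # lot size
    n, c = 80, 2          # sample size and acceptance number

    def p_accept(frac_defective):
        K = round(frac_defective * N)        # defective items in the lot
        return hypergeom(N, K, n).cdf(c)     # P(at most c defectives in sample)

    AQL, LTPD = 0.01, 0.06
    producer_risk = 1 - p_accept(AQL)        # type I error: good lot rejected
    consumer_risk = p_accept(LTPD)           # type II error: bad lot accepted
    print(f"alpha = {producer_risk:.3f}, beta = {consumer_risk:.3f}")
    ```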

  11. Clinically acceptable agreement between the ViMove wireless motion sensor system and the Vicon motion capture system when measuring lumbar region inclination motion in the sagittal and coronal planes.

    PubMed

    Mjøsund, Hanne Leirbekk; Boyle, Eleanor; Kjaer, Per; Mieritz, Rune Mygind; Skallgård, Tue; Kent, Peter

    2017-03-21

    Wireless, wearable, inertial motion sensor technology introduces new possibilities for monitoring spinal motion and pain in people during their daily activities of work, rest and play. There are many types of these wireless devices currently available, but the precision in measurement and the magnitude of measurement error from such devices are often unknown. This study investigated the concurrent validity of one inertial motion sensor system (ViMove) for its ability to measure lumbar inclination motion, compared with the Vicon motion capture system. To mimic the variability of movement patterns in a clinical population, a sample of 34 people was included - 18 with low back pain and 16 without low back pain. ViMove sensors were attached to each participant's skin at spinal levels T12 and S2, and Vicon surface markers were attached to the ViMove sensors. Three repetitions of end-range flexion inclination, extension inclination and lateral flexion inclination to both sides while standing were measured by both systems concurrently, with short rest periods in between. Measurement agreement through the whole movement range was analysed using a multilevel mixed-effects regression model to calculate the root mean squared errors, and the limits of agreement were calculated using the Bland-Altman method. We calculated root mean squared errors (standard deviation) of 1.82° (±1.00°) in flexion inclination, 0.71° (±0.34°) in extension inclination, 0.77° (±0.24°) in right lateral flexion inclination and 0.98° (±0.69°) in left lateral flexion inclination. The 95% limits of agreement ranged between -3.86° and 4.69° in flexion inclination, -2.15° and 1.91° in extension inclination, -2.37° and 2.05° in right lateral flexion inclination and -3.11° and 2.96° in left lateral flexion inclination. We found a clinically acceptable level of agreement between these two methods for measuring standing lumbar inclination motion in these two cardinal movement planes. Further research should investigate the ViMove system's ability to measure lumbar motion in more complex 3D functional movements and to measure changes of movement patterns related to treatment effects.
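
    Both agreement statistics reported here are easy to reproduce. A minimal sketch follows, using simulated paired angle measurements in place of the ViMove/Vicon data (the study itself obtained RMSE from a multilevel mixed-effects model, which this simple version does not attempt):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated paired angle measurements standing in for Vicon (reference)
    # and ViMove (test) data.
    vicon = rng.uniform(0, 60, 200)                    # reference angles (deg)
    vimove = vicon + rng.normal(0.4, 1.0, 200)         # sensor: bias + noise

    diff = vimove - vicon
    rmse = np.sqrt(np.mean(diff ** 2))

    # Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD.
    loa = (diff.mean() - 1.96 * diff.std(ddof=1),
           diff.mean() + 1.96 * diff.std(ddof=1))
    print(f"RMSE = {rmse:.2f} deg, LoA = [{loa[0]:.2f}, {loa[1]:.2f}] deg")
    ```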

  12. The application of Gaussian mixture models for signal quantification in MALDI-TOF mass spectrometry of peptides.

    PubMed

    Spainhour, John Christian G; Janech, Michael G; Schwacke, John H; Velez, Juan Carlos Q; Ramakrishnan, Viswanathan

    2014-01-01

    Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry coupled with stable isotope standards (SIS) has been used to quantify native peptides. This MALDI-TOF quantification approach has difficulty with samples containing peptides whose ion currents fall in overlapping spectra. In these overlapping spectra the currents sum together, which modifies the peak heights and makes normal SIS estimation problematic. An approach using Gaussian mixtures based on known physical constants to model the isotopic cluster of a known compound is proposed here. The characteristics of this approach are examined for single and overlapping compounds. The approach is compared to two commonly used SIS quantification methods for single compounds, namely the peak intensity method and the Riemann sum area under the curve (AUC) method. To study the characteristics of the Gaussian mixture method, Angiotensin II, Angiotensin-2-10, and Angiotensin-1-9 and their associated SIS peptides were used. The findings suggest that the Gaussian mixture method has characteristics similar to those of the two comparison methods when estimating the quantity of isolated isotopic clusters for single compounds. All three methods were tested using MALDI-TOF mass spectra collected for peptides of the renin-angiotensin system. The Gaussian mixture method accurately estimated the native-to-labeled ratio of several isolated angiotensin peptides (5.2% error in ratio estimation), with estimation errors similar to those calculated using the peak intensity and Riemann sum AUC methods (5.9% and 7.7%, respectively). For overlapping angiotensin peptides (where the other two methods are not applicable), the estimation error of the Gaussian mixture was 6.8%, which is within the acceptable range. In summary, for single compounds the Gaussian mixture method is equivalent or marginally superior to the existing methods of peptide quantification, and it is capable of quantifying overlapping (convolved) peptides within the acceptable margin of error.
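
    A toy version of the core idea, modelling an isotopic cluster as a mixture of Gaussians whose centres are pinned to the known isotopic spacing, can be written with an ordinary least-squares fit. The masses, peak widths, and heights below are invented; the paper's model is constrained by known physical constants rather than freely fitted heights:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    SPACING, SIGMA = 1.00335, 0.05   # isotopic spacing (Da); peak width (made up)

    def cluster(mz, mono_mz, *heights):
        """Gaussian mixture with peaks pinned to the isotopic spacing."""
        y = np.zeros_like(mz)
        for k, h in enumerate(heights):
            y += h * np.exp(-0.5 * ((mz - mono_mz - k * SPACING) / SIGMA) ** 2)
        return y

    # Synthetic cluster near the angiotensin II monoisotopic mass (~1046.5 Da):
    mz = np.linspace(1045.5, 1051.5, 600)
    spec = cluster(mz, 1046.54, 1.0, 0.55, 0.20, 0.05)
    spec += np.random.default_rng(4).normal(0, 0.01, mz.size)

    # Fit monoisotopic m/z plus four peak heights; ratios of fitted heights to
    # a labeled standard's heights would give the native:labeled quantification.
    popt, _ = curve_fit(cluster, mz, spec, p0=[1046.5, 1.0, 0.5, 0.2, 0.05])
    print("fitted m/z:", round(popt[0], 3), "heights:", np.round(popt[1:], 3))
    ```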

  13. Preemption versus Entrenchment: Towards a Construction-General Solution to the Problem of the Retreat from Verb Argument Structure Overgeneralization

    PubMed Central

    Ambridge, Ben; Bidgood, Amy; Twomey, Katherine E.; Pine, Julian M.; Rowland, Caroline F.; Freudenthal, Daniel

    2015-01-01

    Participants aged 5;2-6;8, 9;2-10;6 and 18;1-22;2 (72 at each age) rated verb argument structure overgeneralization errors (e.g., *Daddy giggled the baby) using a five-point scale. The study was designed to investigate the feasibility of two proposed construction-general solutions to the question of how children retreat from, or avoid, such errors. No support was found for the prediction of the preemption hypothesis that the greater the frequency of the verb in the single most nearly synonymous construction (for this example, the periphrastic causative; e.g., Daddy made the baby giggle), the lower the acceptability of the error. Support was found, however, for the prediction of the entrenchment hypothesis that the greater the overall frequency of the verb, regardless of construction, the lower the acceptability of the error, at least for the two older groups. Thus while entrenchment appears to be a robust solution to the problem of the retreat from error, and one that generalizes across different error types, we did not find evidence that this is the case for preemption. The implication is that the solution to the retreat from error lies not with specialized mechanisms, but rather in a probabilistic process of construction competition. PMID:25919003

  14. Preemption versus Entrenchment: Towards a Construction-General Solution to the Problem of the Retreat from Verb Argument Structure Overgeneralization.

    PubMed

    Ambridge, Ben; Bidgood, Amy; Twomey, Katherine E; Pine, Julian M; Rowland, Caroline F; Freudenthal, Daniel

    2014-01-01

    Participants aged 5;2-6;8, 9;2-10;6 and 18;1-22;2 (72 at each age) rated verb argument structure overgeneralization errors (e.g., *Daddy giggled the baby) using a five-point scale. The study was designed to investigate the feasibility of two proposed construction-general solutions to the question of how children retreat from, or avoid, such errors. No support was found for the prediction of the preemption hypothesis that the greater the frequency of the verb in the single most nearly synonymous construction (for this example, the periphrastic causative; e.g., Daddy made the baby giggle), the lower the acceptability of the error. Support was found, however, for the prediction of the entrenchment hypothesis that the greater the overall frequency of the verb, regardless of construction, the lower the acceptability of the error, at least for the two older groups. Thus while entrenchment appears to be a robust solution to the problem of the retreat from error, and one that generalizes across different error types, we did not find evidence that this is the case for preemption. The implication is that the solution to the retreat from error lies not with specialized mechanisms, but rather in a probabilistic process of construction competition.

  15. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  16. Registration of pencil beam proton radiography data with X-ray CT.

    PubMed

    Deffet, Sylvain; Macq, Benoît; Righetto, Roberto; Vander Stappen, François; Farace, Paolo

    2017-10-01

    Proton radiography seems to be a promising tool for assessing the quality of the stopping power computation in proton therapy. However, range error maps obtained on the basis of proton radiographs are very sensitive to small misalignments between the planning CT and the proton radiography acquisitions. In order to be able to mitigate misalignment in postprocessing, the authors implemented a fast method for registration between pencil-beam proton radiography data obtained with a multilayer ionization chamber (MLIC) and an X-ray CT acquired on a head phantom. The registration was performed by optimizing a cost function which compares the acquired data with simulated integral depth-dose curves. Two methodologies were considered, one based on dual orthogonal projections and the other on a single projection. For each methodology, the robustness of the registration algorithm with respect to three confounding factors (measurement noise, CT calibration errors, and spot spacing) was investigated by testing the accuracy of the method through simulations based on a CT scan of a head phantom. The present registration method showed robust convergence towards the optimal solution. For the level of measurement noise and the uncertainty in the stopping power computation expected in proton radiography using a MLIC, the accuracy appeared to be better than 0.3° for angles and 0.3 mm for translations with the appropriate cost function. The spot spacing analysis showed that a spacing larger than the 5 mm used by other authors for the investigation of a MLIC for proton radiography led to results with absolute accuracy better than 0.3° for angles and 1 mm for translations when orthogonal proton radiographs were fed into the algorithm. In the case of a single projection, 6 mm was the largest spot spacing with acceptable registration accuracy. For registration of proton radiography data with X-ray CT, the use of a direct ray-tracing algorithm to compute sums of squared differences and corrections of range errors showed very good accuracy and robustness with respect to the three confounding factors: measurement noise, calibration error, and spot spacing. It is therefore a suitable algorithm for the in vivo range verification framework, allowing the proton range uncertainty due to setup errors to be separated in postprocessing from the other sources of uncertainty. © 2017 American Association of Physicists in Medicine.
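
    The registration principle, searching for the rigid transform that minimizes a sum-of-squared-differences cost between measured and simulated data, can be sketched in a few lines. The 2D toy image below stands in for the integral depth-dose comparison actually used by the authors:

    ```python
    import numpy as np
    from scipy.ndimage import rotate, shift
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)

    # "Simulated" data from CT and "measured" data that is a rotated/shifted,
    # noisy copy of it; a smooth 2D blob stands in for the depth-dose data.
    yy, xx = np.mgrid[0:64, 0:64]
    simulated = np.exp(-((xx - 32) ** 2 + (yy - 28) ** 2) / 120.0)
    measured = shift(rotate(simulated, 1.5, reshape=False), (0.8, -1.2))
    measured += rng.normal(0, 0.01, measured.shape)

    def cost(params):
        """Sum-of-squared-differences cost over a rigid 2D transform."""
        dy, dx, angle = params
        moved = shift(rotate(simulated, angle, reshape=False), (dy, dx))
        return np.sum((moved - measured) ** 2)

    res = minimize(cost, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
    print("recovered (dy, dx, angle):", np.round(res.x, 2))  # ~ (0.8, -1.2, 1.5)
    ```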

  17. Context affects nestmate recognition errors in honey bees and stingless bees.

    PubMed

    Couvillon, Margaret J; Segers, Francisca H I D; Cooper-Bowman, Roseanne; Truslove, Gemma; Nascimento, Daniela L; Nascimento, Fabio S; Ratnieks, Francis L W

    2013-08-15

    Nestmate recognition studies, where a discriminator first recognises and then behaviourally discriminates (accepts/rejects) another individual, have used a variety of methodologies and contexts. This is potentially problematic because recognition errors in discrimination behaviour are predicted to be context-dependent. Here we compare the recognition decisions (accept/reject) of discriminators in two eusocial bees, Apis mellifera and Tetragonisca angustula, under different contexts. These contexts include natural guards at the hive entrance (control); natural guards held in plastic test arenas away from the hive entrance that vary either in the presence or absence of colony odour or the presence or absence of an additional nestmate discriminator; and, for the honey bee, the inside of the nest. For both honey bee and stingless bee guards, total recognition errors of behavioural discrimination made by guards (% nestmates rejected + % non-nestmates accepted) are much lower at the colony entrance (honey bee: 30.9%; stingless bee: 33.3%) than in the test arenas (honey bee: 60-86%; stingless bee: 61-81%; P<0.001 for both). Within the test arenas, the presence of colony odour specifically reduced the total recognition errors in honey bees, although this reduction still fell short of bringing error levels down to what was found at the colony entrance. Lastly, in honey bees, the data show that the in-nest collective behavioural discrimination by ca. 30 workers that contact an intruder is insufficient to achieve error-free recognition and is not as effective as the discrimination by guards at the entrance. Overall, these data demonstrate that context is a significant factor in a discriminator's ability to make appropriate recognition decisions, and should be considered when designing recognition study methodologies.

  18. SU-F-T-383: Robustness for Patient Setup Error in Total Body Irradiation Using Volumetric Modulated Arc Therapy (VMAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, Y; National Cancer Center, Kashiwa, Chiba; Tachibana, H

    Purpose: Total body irradiation (TBI) and total marrow irradiation (TMI) using Tomotherapy have been reported. A gantry-based linear accelerator uses one isocenter during one rotational irradiation, so 3-5 isocenter points must be used for a whole TBI-VMAT plan while smoothing out the junctional dose distribution. IGRT provides accurate and precise patient setup for the multiple junctions; however, some setup errors inevitably occur and affect the accuracy of the dose distribution in these regions. In this study, we evaluated the robustness of VMAT-TBI to patient setup error. Methods: VMAT-TBI planning was performed on an adult whole-body human phantom using Eclipse. Eight full arcs with four isocenter points using 6 MV X-rays were used to cover the whole body. The dose distribution was optimized using two structures, the patient's body as PTV and the lung. Two arcs shared each isocenter, and each pair of arcs overlapped the adjacent pair by 5 cm. Absolute point dose measurements using an ionization chamber and planar relative dose distribution measurements using film in the junctional regions were performed in a water-equivalent slab phantom. In the measurements, setup errors of +5 to -5 mm were introduced. Results: The chamber measurements show that the deviations were within ±3% when the setup errors were within ±3 mm. In the planar evaluation, the pass ratio of the gamma evaluation (3%/2 mm) was more than 90% when the errors were within ±3 mm. However, there were hot/cold areas at the edge of the junction even with an acceptable gamma pass ratio. A 5 mm setup error caused larger hot and cold areas, and the dosimetrically acceptable areas decreased in the overlapped regions. Conclusion: VMAT-TBI is clinically acceptable when the patient setup error is within ±3 mm. Averaging effects from random patient setup errors would help to blur the hot/cold areas in the junction.
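
    The planar comparison uses the standard gamma evaluation with 3%/2 mm criteria. A minimal 1D global-gamma sketch is below; clinical tools operate on 2D/3D dose grids with interpolation, and all dose profiles here are synthetic:

    ```python
    import numpy as np

    def gamma_pass_rate(ref, test, dx_mm, dose_tol=0.03, dist_tol_mm=2.0):
        """1D global gamma index: fraction of reference points with gamma <= 1."""
        x = np.arange(ref.size) * dx_mm
        norm = dose_tol * ref.max()                  # global dose normalization
        passed = []
        for i, d_ref in enumerate(ref):
            dose_term = (test - d_ref) / norm
            dist_term = (x - x[i]) / dist_tol_mm
            gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
            passed.append(gamma <= 1.0)
        return 100.0 * np.mean(passed)

    x = np.linspace(0.0, 100.0, 201)                 # 0.5 mm grid
    ref = np.exp(-(((x - 50.0) / 20.0) ** 2))        # synthetic junction profile
    test = np.exp(-(((x - 51.0) / 20.0) ** 2))       # same profile, 1 mm setup error
    print(f"gamma (3%/2 mm) pass rate: {gamma_pass_rate(ref, test, 0.5):.1f}%")
    ```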

  19. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    PubMed

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  20. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation

    PubMed Central

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám

    2016-01-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations. PMID:27493566

  1. Quality Assurance of Chemical Measurements.

    ERIC Educational Resources Information Center

    Taylor, John K.

    1981-01-01

    Reviews aspects of quality control (methods to control errors) and quality assessment (verification that systems are operating within acceptable limits) including an analytical measurement system, quality control by inspection, control charts, systematic errors, and use of SRMs, materials for which properties are certified by the National Bureau…

  2. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  3. Error compensation for thermally induced errors on a machine tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The main difficulty is deciding where to locate the temperature sensors and how many are required. This research develops a method to determine the number and location of temperature measurements.
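
    A minimal sketch of such a linear compensation model: regress measured deflection on the discrete temperature readings by least squares. The sensor count, noise level, and "true" weights are invented; the fitted weights also hint at which sensor locations carry information, which is one way to approach the placement question:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic machine-tool data: deflection depends linearly on a few of the
    # temperature sensors; counts, weights, and noise are invented.
    n_obs, n_sensors = 200, 10
    T = rng.normal(20.0, 3.0, (n_obs, n_sensors))              # readings (deg C)
    true_w = np.zeros(n_sensors)
    true_w[[1, 4]] = [1.8, -0.9]                               # only two sensors matter
    deflection = T @ true_w + 0.5 + rng.normal(0, 0.05, n_obs) # micrometres

    # Fit weights plus a constant offset by least squares.
    A = np.hstack([T, np.ones((n_obs, 1))])
    w, *_ = np.linalg.lstsq(A, deflection, rcond=None)

    # Near-zero fitted weights flag sensors (locations) that add little, which
    # is one crude handle on the number-and-placement question.
    print("weights:", np.round(w[:-1], 2), "offset:", round(w[-1], 2))
    ```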

  4. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    In order to reach the accuracy of the GRACE baseline predicted earlier from design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from the sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors, so their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are computed from each of these two datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals. Correlations between range frequency noise and range-rate residuals are also seen.

  5. Design of robust iterative learning control schemes for systems with polytopic uncertainties and sector-bounded nonlinearities

    NASA Astrophysics Data System (ADS)

    Boski, Marcin; Paszke, Wojciech

    2017-01-01

    This paper deals with the design of iterative learning control schemes for uncertain systems with static nonlinearities. More specifically, the nonlinear part is assumed to be sector bounded, and the system matrices are assumed to range over a polytope of matrices. For systems with such nonlinearities and uncertainties, the repetitive process setting is exploited to develop linear matrix inequality (LMI)-based conditions for computing the feedback and feedforward (learning) controllers. These controllers guarantee acceptable dynamics along the trials and ensure convergence of the trial-to-trial error dynamics, respectively. Numerical examples illustrate the theoretical results and confirm the effectiveness of the designed control scheme.

  6. Evaluating the effects of modeling errors for isolated finite three-dimensional targets

    NASA Astrophysics Data System (ADS)

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhou, Hui

    2017-10-01

    Optical three-dimensional (3-D) nanostructure metrology utilizes a model-based metrology approach to determine critical dimensions (CDs) that are well below the inspection wavelength. Our project at the National Institute of Standards and Technology is evaluating how to attain key CD and shape parameters from engineered in-die capable metrology targets. More specifically, the quantities of interest are determined by varying the input parameters for a physical model until the simulations agree with the actual measurements within acceptable error bounds. As in most applications, establishing a reasonable balance between model accuracy and time efficiency is a complicated task. A well-established simplification is to model the intrinsically finite 3-D nanostructures as either periodic or infinite in one direction, reducing the computationally expensive 3-D simulations to usually less complex two-dimensional (2-D) problems. Systematic errors caused by this simplified model can directly influence the fitting of the model to the measurement data and are expected to become more apparent with decreasing lengths of the structures. We identify these effects using selected simulation results and present experimental setups, e.g., illumination numerical apertures and focal ranges, that can increase the validity of the 2-D approach.

  7. A device for characterising the mechanical properties of the plantar soft tissue of the foot.

    PubMed

    Parker, D; Cooper, G; Pearson, S; Crofts, G; Howard, D; Busby, P; Nester, C

    2015-11-01

    The plantar soft tissue is a highly functional viscoelastic structure involved in transferring load to the human body during walking. A Soft Tissue Response Imaging Device was developed to apply a vertical compression to the plantar soft tissue whilst measuring the mechanical response via a combined load cell and ultrasound imaging arrangement. The device was assessed in terms of: accuracy of motion compared to input profiles; validation of the response measured for standard materials in compression; variability of force and displacement measures for consecutive compressive cycles; and implementation in vivo with five healthy participants. Static displacement displayed an average error of 0.04 mm (over a 15 mm range), and static load displayed an average error of 0.15 N (over a 250 N range). Validation tests showed acceptable agreement with a Hounsfield tensometer for both displacement (CMC > 0.99, RMSE < 0.18 mm) and load (CMC > 0.95, RMSE < 4.86 N). Device motion was highly repeatable for bench-top tests (ICC = 0.99) and participant trials (CMC = 1.00). The soft tissue response was found to be repeatable within trials (CMC > 0.98) and between trials (CMC > 0.70). The device has been shown to be capable of implementing complex loading patterns similar to gait, and of capturing the compressive response of the plantar soft tissue for a range of loading conditions in vivo. Copyright © 2015. Published by Elsevier Ltd.

  8. 22 CFR 34.18 - Waivers of indebtedness.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... known through the exercise of due diligence that an error existed but failed to take corrective action... elapsed between the erroneous payment and discovery of the error and notification of the employee; (D... to duty because of disability (supported by an acceptable medical certificate); and (D) Whether...

  9. Home medication support for childhood cancer: family-centered design and testing.

    PubMed

    Walsh, Kathleen E; Biggins, Colleen; Blasko, Deb; Christiansen, Steven M; Fischer, Shira H; Keuker, Christopher; Klugman, Robert; Mazor, Kathleen M

    2014-11-01

    Errors in the use of medications at home by children with cancer are common, and interventions to support correct use are needed. We sought to (1) engage stakeholders in the design and development of an intervention to prevent errors in home medication use, and (2) evaluate the acceptability and usefulness of the intervention. We convened a multidisciplinary team of parents, clinicians, technology experts, and researchers to develop an intervention using a two-step user-centered design process. First, parents and oncologists provided input on the design. Second, a parent panel and two oncology nurses refined draft materials. In a feasibility study, we used questionnaires to assess usefulness and acceptability. Medication error rates were assessed via monthly telephone interviews with parents. We successfully partnered with parents, clinicians, and IT experts to develop Home Medication Support (HoMeS), a family-centered Web-based intervention. HoMeS includes a medication calendar with decision support, a communication tool, adverse effect information, a metric conversion chart, and other information. The 15 families in the feasibility study gave HoMeS high ratings for acceptability and usefulness. Half recorded information on the calendar to indicate to other caregivers that doses were given; 34% brought it to the clinic to communicate with their clinician about home medication use. There was no change in the rate of medication errors in this feasibility study. We created and tested a stakeholder-designed, Web-based intervention to support home chemotherapy use, which parents rated highly. This tool may prevent serious medication errors in a larger study. Copyright © 2014 by American Society of Clinical Oncology.

  10. Derivative spectrophotometric method for simultaneous determination of clindamycin phosphate and tretinoin in pharmaceutical dosage forms.

    PubMed

    Barazandeh Tehrani, Maliheh; Namadchian, Melika; Fadaye Vatan, Sedigheh; Souri, Effat

    2013-04-10

    A derivative spectrophotometric method was proposed for the simultaneous determination of clindamycin and tretinoin in pharmaceutical dosage forms. The measurement was achieved using the first and second derivative signals of clindamycin at (1D) 251 nm and (2D) 239 nm and of tretinoin at (1D) 364 nm and (2D) 387 nm. The proposed method showed excellent linearity at both first and second derivative order in the ranges of 60-1200 and 1.25-25 μg/ml for clindamycin phosphate and tretinoin, respectively. The within-day and between-day precision and accuracy were in the acceptable range (CV < 3.81%, error < 3.20%). Good agreement between the found and added concentrations indicates successful application of the proposed method for the simultaneous determination of clindamycin and tretinoin in synthetic mixtures and a pharmaceutical dosage form.
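
    Derivative spectra of this kind are commonly computed with a Savitzky-Golay filter. The sketch below generates a synthetic two-band absorbance spectrum and takes its first and second derivatives; band positions, widths, and filter settings are illustrative, not the paper's:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    rng = np.random.default_rng(7)

    # Synthetic two-analyte absorbance spectrum; band centres/widths are made up.
    wl = np.arange(220, 420, 1.0)                       # wavelength grid (nm)

    def band(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    absorbance = 0.8 * band(251, 18) + 0.3 * band(364, 25)
    absorbance += rng.normal(0, 0.002, wl.size)

    # First- and second-derivative spectra (per nm, since the grid step is 1 nm).
    d1 = savgol_filter(absorbance, window_length=15, polyorder=3, deriv=1)
    d2 = savgol_filter(absorbance, window_length=15, polyorder=3, deriv=2)

    # In the zero-crossing technique, each analyte is read at a wavelength where
    # the other analyte's derivative signal is null (251/239 nm and 364/387 nm
    # in the abstract above).
    print("max |1D|:", round(np.abs(d1).max(), 4),
          "max |2D|:", round(np.abs(d2).max(), 5))
    ```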

  11. Joint multifractal analysis based on wavelet leaders

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Yang, Yan-Hong; Wang, Gang-Jin; Zhou, Wei-Xing

    2017-12-01

    Mutually interacting components form complex systems and these components usually have long-range cross-correlated outputs. Using wavelet leaders, we propose a method for characterizing the joint multifractal nature of these long-range cross correlations; we call this method joint multifractal analysis based on wavelet leaders (MF-X-WL). We test the validity of the MF-X-WL method by performing extensive numerical experiments on dual binomial measures with multifractal cross correlations and bivariate fractional Brownian motions (bFBMs) with monofractal cross correlations. Both experiments indicate that MF-X-WL is capable of detecting cross correlations in synthetic data with acceptable estimating errors. We also apply the MF-X-WL method to pairs of series from financial markets (returns and volatilities) and online worlds (online numbers of different genders and different societies) and determine intriguing joint multifractal behavior.

  12. SU-E-T-484: In Vivo Dosimetry Tolerances in External Beam Fast Neutron Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, L; Gopan, O

    Purpose: Optically stimulated luminescence (OSL) dosimetry with Landauer Al2O3:C nanodots was developed at our institution as a passive in vivo dosimetry (IVD) system for patients treated with fast neutron therapy. The purpose of this study was to establish clinically relevant tolerance limits for detecting treatment errors requiring further investigation. Methods: Tolerance levels were estimated by conducting a series of IVD expected dose calculations for square field sizes ranging between 2.8 and 28.8 cm. For each field size evaluated, doses were calculated for open fields and internal wedged fields with angles of 30°, 45°, or 60°. Theoretical errors were computed for variations of incorrect beam configurations. Dose errors, defined as the percent difference from the expected dose calculation, were measured with groups of three nanodots placed in a 30 x 30 cm solid water phantom at beam isocenter (150 cm SAD, 1.7 cm Dmax). The tolerances were applied to IVD patient measurements. Results: The overall accuracy of the nanodot measurements is 2-3% for open fields. Measurement errors agreed with calculated errors to within 3%. Theoretical estimates of dosimetric errors showed that IVD measurements with OSL nanodots will detect the absence of an internal wedge or a wrong wedge angle. Incorrect nanodot placement on a wedged field is more likely to be caught if the offset is in the direction of the “toe” of the wedge, where the dose difference is about 12%. Errors caused by an incorrect flattening filter size produced a 2% measurement error that is not detectable by IVD measurement alone. Conclusion: IVD with nanodots will detect treatment errors associated with the incorrect implementation of the internal wedge. The results of this study will streamline the physicists' investigations in determining the root cause of an IVD reading that is outside normally accepted tolerances.

  13. The values of the parameters of some multilayer distributed RC null networks

    NASA Technical Reports Server (NTRS)

    Huelsman, L. P.; Raghunath, S.

    1974-01-01

    In this correspondence, the values of the parameters of some multilayer distributed RC notch networks are determined, and the usually accepted values are shown to be in error. The magnitude of the error is illustrated by graphs of the frequency response of the networks.

  14. Reduction in Hospital-Wide Clinical Laboratory Specimen Identification Errors following Process Interventions: A 10-Year Retrospective Observational Study

    PubMed Central

    Ning, Hsiao-Chen; Lin, Chia-Ni; Chiu, Daniel Tsun-Yee; Chang, Yung-Ta; Wen, Chiao-Ni; Peng, Shu-Yu; Chu, Tsung-Lan; Yu, Hsin-Ming; Wu, Tsu-Lan

    2016-01-01

    Background: Accurate patient identification and specimen labeling at the time of collection are crucial steps in the prevention of medical errors, thereby improving patient safety. Methods: All patient specimen identification errors that occurred in the outpatient department (OPD), emergency department (ED), and inpatient department (IPD) of a 3,800-bed academic medical center in Taiwan were documented and analyzed retrospectively from 2005 to 2014. To reduce such errors, the following series of strategies were implemented: a restrictive specimen acceptance policy for the ED and IPD in 2006; a computer-assisted barcode positive patient identification system for the ED and IPD in 2007 and 2010; and automated sample labeling combined with electronic identification systems introduced to the OPD in 2009. Results: Of the 2,000,345 specimens collected in 2005, 1,023 (0.0511%) were identified as having patient identification errors, compared with 58 errors (0.0015%) among 3,761,238 specimens collected in 2014, after serial interventions; this represents a 97% relative reduction. The total numbers (rates) of institutional identification errors contributed by the ED, IPD, and OPD over the 10-year period were 423 (0.1058%), 556 (0.0587%), and 44 (0.0067%) errors before the interventions, and 3 (0.0007%), 52 (0.0045%) and 3 (0.0001%) after the interventions, representing relative reductions of 99%, 92%, and 98%, respectively. Conclusions: Accurate patient identification is a challenge of patient safety in different health settings. The data collected in our study indicate that a restrictive specimen acceptance policy, computer-generated positive identification systems, and interdisciplinary cooperation can significantly reduce patient identification errors. PMID:27494020

  15. Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.

    PubMed

    Liu, Siwei; Molenaar, Peter

    2016-01-01

    This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
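
    The core of phase resampling is generating surrogate series that preserve each series' amplitude spectrum while randomizing its phases. A minimal sketch follows; computing a causality statistic (e.g., generalized partial directed coherence) on each surrogate, which this sketch does not do, would then yield the null distribution for inference:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def phase_surrogate(x, rng):
        """Keep the amplitude spectrum of x, randomize the phases."""
        X = np.fft.rfft(x - x.mean())
        phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
        phases[0] = 0.0                      # keep the DC bin real
        if x.size % 2 == 0:
            phases[-1] = 0.0                 # keep the Nyquist bin real
        return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

    x = np.cumsum(rng.normal(size=512))      # toy time series
    surrogates = np.array([phase_surrogate(x, rng) for _ in range(200)])

    # A causality statistic computed on each surrogate gives the null
    # distribution against which the observed statistic is compared.
    print(surrogates.shape)                  # (200, 512)
    ```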

  16. Split torque transmission load sharing

    NASA Technical Reports Server (NTRS)

    Krantz, T. L.; Rashidi, M.; Kish, J. G.

    1992-01-01

    Split torque transmissions are attractive alternatives to conventional planetary designs for helicopter transmissions. The split torque designs can offer lighter weight and fewer parts but have not been used extensively for lack of experience, especially with obtaining proper load sharing. Two split torque designs that use different load sharing methods have been studied. Precise indexing and alignment of the geartrain to produce acceptable load sharing has been demonstrated. An elastomeric torque splitter that has large torsional compliance and damping produces even better load sharing while reducing dynamic transmission error and noise. However, the elastomeric torque splitter as now configured is not capable over the full range of operating conditions of a fielded system. A thrust balancing load sharing device was evaluated. Friction forces that oppose the motion of the balance mechanism are significant. A static analysis suggests increasing the helix angle of the input pinion of the thrust balancing design. Also, dynamic analysis of this design predicts good load sharing and significant torsional response to accumulative pitch errors of the gears.

  17. A simple algorithm for distance estimation without radar and stereo vision based on the bionic principle of bee eyes

    NASA Astrophysics Data System (ADS)

    Khamukhin, A. A.

    2017-02-01

    Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). These algorithms can be implemented in a small microprocessor with low power consumption, which helps to reduce the weight of the UAV's computing equipment and to increase the flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen, counted as the observer moves from location 1 to location 2 toward the target. The distance estimate is given relative to the distance between locations 1 and 2. A simple scheme of an appositional compound eye is proposed for developing the calculation formula. The distance estimation error analysis shows that the error decreases as the total number of opaque channels increases, up to a certain limit. An acceptable error of about 2% is achieved with an angle of view from 3° to 10° when the total number of opaque channels is 21,600.
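
    One way to arrive at a formula with this flavour, under a small-angle assumption (this is an illustration, not the paper's exact derivation): the angle a target subtends scales as 1/distance, and the subtended angle is observed as a count of opaque channels, so two counts taken a known baseline apart determine the relative distance:

    ```python
    # Small-angle toy: the angle a target subtends scales as 1/distance, and
    # that angle is observed as a count of opaque channels.
    baseline = 1.0        # distance moved from location 1 to location 2
    n1, n2 = 12, 16       # channels spanned by the target at locations 1 and 2

    # theta ~ n * delta_phi and theta ~ size / distance => d1 / d2 = n2 / n1;
    # with d1 = d2 + baseline this gives:
    d2 = baseline / (n2 / n1 - 1.0)
    print(f"distance from location 2: {d2:.2f} baselines")   # 3.00 here
    ```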

  18. WHEN HAS A MODEL BEEN SUFFICIENTLY CALIBRATED AND TESTED TO BE PUT TO EFFICIENT USE?

    EPA Science Inventory

    The question of what degree of predictive error is acceptable for environmental models is explored. Two schools of thought are presented. The universalist school would argue that it is possible to agree on general acceptance criteria for specific categories of models, particula...

  19. Extraction, Scrub, and Strip Test Results for the Salt Waste Processing Facility Caustic Side Solvent Extraction Solvent Sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, T. B.

    An Extraction, Scrub, and Strip (ESS) test was performed on a sample of Salt Waste Processing Facility (SWPF) Caustic-Side Solvent Extraction (CSSX) solvent and salt simulant to determine cesium distribution ratios (D(Cs)) and the cesium concentration in the strip effluent (SE) and decontaminated salt solution (DSS) streams; these data will be used by Parsons to help determine if the solvent is qualified for use at the SWPF. The ESS test showed acceptable performance of the solvent for extraction, scrub, and strip operations. The extraction D(Cs) measured 12.5, exceeding the required value of 8. This value is consistent with results from previous ESS tests using similar solvent formulations. Similarly, the scrub and strip cesium distribution ratios fell within acceptable ranges. This revision was created to correct an error: the previous revision used an incorrect set of temperature correction coefficients, which resulted in slight deviations from the correct D(Cs) results.

  20. Retrieval of carbon dioxide vertical profiles from solar occultation observations and associated error budgets for ACE-FTS and CASS-FTS

    NASA Astrophysics Data System (ADS)

    Sioris, C. E.; Boone, C. D.; Nassar, R.; Sutton, K. J.; Gordon, I. E.; Walker, K. A.; Bernath, P. F.

    2014-02-01

    An algorithm is developed to retrieve the vertical profile of carbon dioxide in the 5 to 25 km altitude range using mid-infrared solar occultation spectra from the main instrument of the ACE (Atmospheric Chemistry Experiment) mission, namely the Fourier Transform Spectrometer (FTS). The main challenge is to find an atmospheric phenomenon which can be used for accurate tangent height determination in the lower atmosphere, where the tangent heights (THs) calculated from geometric and timing information are not of sufficient accuracy. Error budgets for the retrieval of CO2 from ACE-FTS and the FTS on a potential follow-on mission named CASS (Chemical and Aerosol Sounding Satellite) are calculated and contrasted. Retrieved THs are typically within 60 m of those retrieved using the ACE version 3.x software after revisiting the temperature dependence of the N2 CIA (Collision-Induced Absorption) laboratory measurements and accounting for sulfate aerosol extinction. After correcting for the known residual high bias of ACE version 3.x THs expected from CO2 spectroscopic/isotopic inconsistencies, the remaining bias for tangent heights determined with the N2 CIA is -20 m. CO2 in the 5-13 km range in the 2009-2011 time frame is validated against aircraft measurements from CARIBIC, CONTRAIL and HIPPO, yielding typical biases of -1.7 ppm; the standard error of these biases in this vertical range is 0.4 ppm. The multi-year ACE-FTS dataset is valuable in determining the seasonal variation of the latitudinal gradient, which arises from the strong seasonal cycle in the Northern Hemisphere troposphere. The annual growth of CO2 in this time frame is determined to be 2.5 ± 0.7 ppm yr-1, in agreement with the currently accepted global growth rate based on ground-based measurements.

  1. Syntactic error modeling and scoring normalization in speech recognition: Error modeling and scoring normalization in the speech recognition task for adult literacy training

    NASA Technical Reports Server (NTRS)

    Olorenshaw, Lex; Trawick, David

    1991-01-01

    The purpose was to develop a speech recognition system able to detect speech that is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Better mechanisms are provided for using speech recognition in a literacy tutor application. Using a combination of scoring normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was provided. In continuous speech, the system was able to provide above 80 percent correct acceptance of words, while correctly rejecting over 80 percent of incorrectly pronounced words.

  2. Error, contradiction and reversal in science and medicine.

    PubMed

    Coccheri, Sergio

    2017-06-01

    Errors and contradictions are not "per se" detrimental in science and medicine. Going back to the history of philosophy, Sir Francis Bacon stated that "truth emerges more readily from error than from confusion", and more recently Popper introduced the concept of an approximate temporary truth that constitutes the engine of scientific progress. In biomedical research and in clinical practice, the last decades have seen many overturnings or reversals of concepts and practices. This phenomenon may discourage patients from accepting ordinary medical care and may favour the choice of alternative medicine. The media often amplify the disappointment over these discrepancies. In this note I recommend conveying to patients the concept of a confirmed and dependable knowledge at the present time. However, physicians should tolerate uncertainty and accept the idea that medical concepts and applications are subject to continuous progression, change and displacement. Copyright © 2017 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  3. Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-07-20

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
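
    A toy illustration of the comparison step described in the claim, estimating the residual range shift between slow-time range profiles by cross-correlation; the profiles and the correlation-based estimator here are invented stand-ins, not the patented processing chain:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Two slow-time range profiles of the same scene, one offset by a 3-bin
    # motion error (synthetic data).
    profile = np.convolve(rng.normal(size=256), np.ones(5), mode="same")
    shifted = np.roll(profile, 3)

    # Estimate the offset from the peak of the full cross-correlation.
    xcorr = np.correlate(shifted, profile, mode="full")
    lag = int(xcorr.argmax()) - (profile.size - 1)
    print("estimated range shift (bins):", lag)              # 3
    ```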

  4. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  5. Performance Evaluation of Five Turbidity Sensors in Three Primary Standards

    USGS Publications Warehouse

    Snazelle, Teri T.

    2015-10-28

    Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO-AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine whether turbidity measurements in the three primary standards are comparable to each other, and to ascertain whether the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased, and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh the day of testing. StablCal and AMCO Clear (for Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated at turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab. The DTS-12 also demonstrated good accuracy, with an average percent error of 2.02 percent and a maximum relative standard deviation of 0.51 percent over its operating range, which was limited to 0.01–1600 NTU at the time of this report. Test results indicated an average percent error of 19.81 percent in the three standards for the EXO turbidity sensor and 9.66 percent for the YSI 6136. The significant variability in sensor performance in the three primary standards suggests that although all three types are accepted as primary calibration standards, they are not interchangeable, and sensor results in the three types of standards are not directly comparable.
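
    The report's error metric is spelled out above and is trivial to compute; note the sign is kept (a negative result means the sensor read low). The values below are made up:

    ```python
    # Signed percent error as defined in the report.
    def percent_error(measured, standard):
        return 100.0 * (measured - standard) / standard

    print(percent_error(measured=812.0, standard=800.0))     # 1.5 (%)
    ```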

  6. Myopia, contact lens use and self-esteem.

    PubMed

    Dias, Lynette; Manny, Ruth E; Weissberg, Erik; Fern, Karen D

    2013-09-01

    To evaluate whether contact lens (CL) use was associated with self-esteem in myopic children originally enrolled in the Correction of Myopia Evaluation Trial (COMET), which after 5 years continued as an observational study of myopia progression with CL use permitted. Usable data at the 6-year visit, one year after CL use was allowed (n = 423/469, age 12-17 years), included questions on CL use, refractive error measurements and self-reported self-esteem in several areas (scholastic/athletic competence, physical appearance, social acceptance, behavioural conduct and global self-worth). Self-esteem, scored from 1 (low) to 4 (high), was measured by the Self-Perception Profile for Children in participants under 14 years or the Self-Perception Profile for Adolescents in those 14 years and older. Multiple regression analyses were used to evaluate associations between self-esteem and relevant factors identified by univariate analyses (e.g., CL use, gender, ethnicity), while adjusting for baseline self-esteem prior to CL use. Mean (±S.D.) self-esteem scores at the 6-year visit (mean age = 15.3 ± 1.3 years; mean refractive error = -4.6 ± 1.5 D) ranged from 2.74 (± 0.76) on athletic competence to 3.33 (± 0.53) on global self-worth. CL wearers (n = 224) compared to eyeglass wearers (n = 199) were more likely to be female (p < 0.0001). Those who chose to wear CLs had higher social acceptance, athletic competence and behavioural conduct scores (p < 0.05) at baseline compared to eyeglass users. CL users continued to report higher social acceptance scores at the 6-year visit (p = 0.03), after adjusting for baseline scores and other covariates. Ethnicity was also independently associated with social acceptance in the multivariable analyses (p = 0.011); African-Americans had higher scores than Asians, Whites and Hispanics. Age and refractive error were not associated with self-esteem or CL use. COMET participants who chose to wear CLs after 5 years of eyeglass use had higher self-esteem compared to those who remained in glasses both preceding and following CL use. This suggests that self-esteem may influence the decision to wear CLs and that CLs in turn are associated with higher self-esteem in individuals most likely to wear them. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.

  7. Phantom feet on digital radionuclide images and other scary computer tales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitas, J.E.; Dworkin, H.J.; Dees, S.M.

    1989-09-01

    Malfunction of a computer-assisted digital gamma camera is reported. Despite what appeared to be adequate acceptance testing, an error in the system gave rise to switching of images and identification text. A suggestion is made for using a hot marker, which would avoid the potential error of misinterpretation of patient images.

  8. Two Cultures in Modern Science and Technology: For Safety and Validity Does Medicine Have to Update?

    PubMed

    Becker, Robert E

    2016-01-11

    Two different scientific cultures go unreconciled in modern medicine. Each culture accepts that scientific knowledge and technologies are vulnerable to and easily invalidated by methods and conditions of acquisition, interpretation, and application. How these vulnerabilities are addressed separates the 2 cultures and potentially explains medicine's difficulties eradicating errors. A traditional culture, dominant in medicine, leaves error control in the hands of individual and group investigators and practitioners. A competing modern scientific culture accepts errors as inevitable, pernicious, and pervasive sources of adverse events throughout medical research and patient care too malignant for individuals or groups to control. Error risks to the validity of scientific knowledge and safety in patient care require systemwide programming able to support a culture in medicine grounded in tested, continually updated, widely promulgated, and uniformly implemented standards of practice for research and patient care. Experiences from successes in other sciences and industries strongly support the need for leadership from the Institute of Medicine's recommended Center for Patient Safety within the Federal Executive branch of government.

  9. Quality Leadership and Quality Control

    PubMed Central

    Badrick, Tony

    2003-01-01

    Different quality control rules detect different analytical errors with varying levels of efficiency depending on the type of error present, its prevalence and the number of observations. The efficiency of a rule can be gauged by inspection of a power function graph. Control rules are only part of a process and not an end in themselves; just as important are the trouble-shooting systems employed when a failure occurs. 'Average of patient normals' may develop as a useful adjunct to conventional quality control serum-based programmes. Acceptable error can be based on various criteria; biological variation is probably the most sensible. Once determined, acceptable error can be used as limits in quality control rule systems. A key aspect of an organisation is leadership, which links the various components of the quality system. Leadership is difficult to characterise but its key aspects include trust, setting an example, developing staff and critically setting the vision for the organisation. Organisations also have internal characteristics such as the degree of formalisation, centralisation, and complexity. Medical organisations can have internal tensions because of the dichotomy between the bureaucratic and the shadow medical structures. PMID:18568046
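
    Where the abstract notes that acceptable error can be derived from biological variation, a widely cited formulation (commonly attributed to Fraser) combines allowable imprecision and allowable bias into a desirable total allowable error. A sketch of my reading of that formulation, with illustrative CVs rather than values from the article:

    ```python
    import math

    def allowable_total_error(cv_i, cv_g):
        """Desirable total allowable error from biological variation:
        allowable imprecision 0.5*CVi, allowable bias
        0.25*sqrt(CVi^2 + CVg^2), combined as TEa = 1.65*imprecision + bias.
        cv_i: within-subject CV (%), cv_g: between-subject CV (%)."""
        imprecision = 0.5 * cv_i
        bias = 0.25 * math.sqrt(cv_i ** 2 + cv_g ** 2)
        return 1.65 * imprecision + bias

    # Illustrative CVs of 5% (within-subject) and 10% (between-subject):
    print(allowable_total_error(5.0, 10.0))   # ~6.9% total allowable error
    ```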

  10. Step-Count Accuracy of 3 Motion Sensors for Older and Frail Medical Inpatients.

    PubMed

    McCullagh, Ruth; Dillon, Christina; O'Connell, Ann Marie; Horgan, N Frances; Timmons, Suzanne

    2017-02-01

    To measure the step-count accuracy of an ankle-worn accelerometer, a thigh-worn accelerometer, and a pedometer in older and frail inpatients. Cross-sectional design study. Research room within a hospital. Convenience sample of inpatients (N=32; age, ≥65 years) who were able to walk 20 m independently with or without a walking aid. Patients completed a 40-minute program of predetermined tasks while wearing the 3 motion sensors simultaneously. Video recording of the procedure provided the criterion measurement of step count. Mean percentage errors were calculated for all tasks, for slow versus fast walkers, for independent walkers versus walking-aid users, and over shorter versus longer distances. The intraclass correlation was calculated, and accuracy was graphically displayed by Bland-Altman plots. Thirty-two patients (mean age, 78.1±7.8y) completed the study. Fifteen (47%) were women, and 17 (51%) used walking aids. Their median speed was 0.46 m/s (interquartile range [IQR], 0.36-0.66 m/s). The ankle-worn accelerometer overestimated steps (median error, 1% [IQR, -3% to 13%]). The other motion sensors underestimated steps (median error, -40% [IQR, -51% to -35%] and -38% [IQR, -93% to -27%], respectively). The ankle-worn accelerometer proved to be more accurate over longer distances (median error, 3% [IQR, 0%-9%]) than over shorter distances (median error, -10% [IQR, -23% to 9%]). The ankle-worn accelerometer gave the most accurate step-count measurement and was most accurate over longer distances. Neither of the other motion sensors had acceptable margins of error. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
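
    The accuracy statistic used here (signed median percent error with IQR) is straightforward to compute. A minimal sketch, assuming per-task step counts are available as arrays:

    ```python
    import numpy as np

    def step_count_error(device_steps, video_steps):
        """Signed percent error per task (negative = undercount relative to
        the video criterion), summarized as median and interquartile range."""
        d = np.asarray(device_steps, float)
        v = np.asarray(video_steps, float)
        err = 100.0 * (d - v) / v
        q25, q50, q75 = np.percentile(err, [25, 50, 75])
        return q50, (q25, q75)
    ```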

  11. LWIR pupil imaging and prospects for background compensation

    NASA Astrophysics Data System (ADS)

    LeVan, Paul; Sakoglu, Ünal; Stegall, Mark; Pierce, Greg

    2015-08-01

    A previous paper described LWIR Pupil Imaging with a sensitive, low-flux focal plane array, and behavior of this type of system for higher flux operations as understood at the time. We continue this investigation, and report on a more detailed characterization of the system over a broad range of pixel fluxes. This characterization is then shown to enable non-uniformity correction over the flux range, using a standard approach. Since many commercial tracking platforms include a "guider port" that accepts pulse width modulation (PWM) error signals, we have also investigated a variation on the use of this port to "dither" the tracking platform in synchronization with the continuous collection of infrared images. The resulting capability has a broad range of applications that extend from generating scene motion in the laboratory for quantifying performance of "realtime, scene-based non-uniformity correction" approaches, to effectuating subtraction of bright backgrounds by alternating viewing aspect between a point source and adjacent, source-free backgrounds.

  12. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W [Albuquerque, NM; Heard, Freddie E [Albuquerque, NM; Cordaro, J Thomas [Albuquerque, NM

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  13. Beam pointing angle optimization and experiments for vehicle laser Doppler velocimetry

    NASA Astrophysics Data System (ADS)

    Fan, Zhe; Hu, Shuling; Zhang, Chunxi; Nie, Yanju; Li, Jun

    2015-10-01

    Beam pointing angle (BPA) is one of the key parameters that affect the operating performance of a laser Doppler velocimetry (LDV) system. By considering velocity sensitivity and echo power, the optimized BPA of a vehicle LDV is analyzed for the first time. Assuming the mounting error is within ±1.0 deg, and with reflectivity and roughness varying across scenarios, the optimized BPA is obtained in the range from 29 to 43 deg. Accordingly, the velocity sensitivity is in the range of 1.25 to 1.76 MHz/(m/s), and the normalized echo power at the optimized BPA is greater than 53.49% of that at 0 deg. Laboratory experiments with a rotating table were done at BPAs of 10, 35, and 66 deg, and the results coincide with the theoretical analysis. Further, a vehicle experiment with the optimized BPA of 35 deg was conducted, with a microwave radar (accuracy of ±0.5% full-scale output) for comparison. The root-mean-square errors were 0.0202 m/s for the LDV and 0.1495 m/s for the Microstar II, and the mean velocity discrepancy was 0.032 m/s. It is also proven that with the optimized BPA, both high velocity sensitivity and acceptable echo power can be guaranteed simultaneously.
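
    The quoted velocity sensitivity follows from the reference-beam Doppler relation f_D = 2·v·cos(θ)/λ. A sketch; the 1.55 µm wavelength is an assumed value for illustration, since the abstract does not state the laser wavelength:

    ```python
    import numpy as np

    def doppler_sensitivity(bpa_deg, wavelength_m=1.55e-6):
        """Doppler sensitivity f_D / v = 2*cos(theta)/lambda, in Hz per (m/s),
        for a beam pointed at bpa_deg from the velocity vector."""
        return 2.0 * np.cos(np.radians(bpa_deg)) / wavelength_m

    for angle in (10, 35, 66):                 # the experimental BPAs
        print(angle, doppler_sensitivity(angle) / 1e6, "MHz/(m/s)")
    ```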

  14. Characterization of performance-emission indices of a diesel engine using ANFIS operating in dual-fuel mode with LPG

    NASA Astrophysics Data System (ADS)

    Chakraborty, Amitav; Roy, Sumit; Banerjee, Rahul

    2018-03-01

    This experimental work highlights the inherent capability of an adaptive neuro-fuzzy inference system (ANFIS) based model to act as a robust system identification tool (SIT) in prognosticating the performance and emission parameters of an existing diesel engine running in diesel-LPG dual-fuel mode. The developed model proved its adeptness by successfully harnessing the effects of the input parameters of load, injection duration and LPG energy share on the output parameters of BSFCEQ, BTE, NOX, SOOT, CO and HC. Successive evaluation of the ANFIS model revealed high levels of resemblance with the previously forecasted ANN results for the same input parameters, and it was evident that, like the ANN, the ANFIS has the innate ability to act as a robust SIT. The ANFIS-predicted data harmonized with the experimental data with high overall accuracy: the correlation coefficient (R) values ranged from 0.99207 to 0.999988, the mean absolute percentage error (MAPE) values were recorded in the range of 0.02-0.173%, and the root mean square errors (RMSE) were within acceptable margins. Hence the developed model is capable of emulating the actual engine parameters with commendable accuracy, which in turn would make it a robust prediction platform in future optimization work.
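
    The two headline accuracy metrics, MAPE and RMSE, are easy to reproduce for any predicted-versus-measured series. A minimal sketch:

    ```python
    import numpy as np

    def mape(actual, predicted):
        """Mean absolute percentage error, in percent."""
        a, p = np.asarray(actual, float), np.asarray(predicted, float)
        return 100.0 * np.mean(np.abs((a - p) / a))

    def rmse(actual, predicted):
        """Root mean square error."""
        a, p = np.asarray(actual, float), np.asarray(predicted, float)
        return np.sqrt(np.mean((a - p) ** 2))
    ```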

  15. Metadata-driven Delphi rating on the Internet.

    PubMed

    Deshpande, Aniruddha M; Shiffman, Richard N; Nadkarni, Prakash M

    2005-01-01

    Paper-based data collection and analysis for consensus development is inefficient and error-prone. Computerized techniques that could improve efficiency, however, have been criticized as costly, inconvenient and difficult to use. We designed and implemented a metadata-driven Web-based Delphi rating and analysis tool, employing the flexible entity-attribute-value schema to create generic, reusable software. The software can be applied to various domains by altering the metadata; the programming code remains intact. This approach greatly reduces the marginal cost of re-using the software. We implemented our software to prepare for the Conference on Guidelines Standardization. Twenty-three invited experts completed the first round of the Delphi rating on the Web. For each participant, the software generated individualized reports that described the median rating and the disagreement index (calculated from the Interpercentile Range Adjusted for Symmetry) as defined by the RAND/UCLA Appropriateness Method. We evaluated the software with a satisfaction survey using a five-level Likert scale. The panelists felt that Web data entry was convenient (median 4, interquartile range [IQR] 4.0-5.0), acceptable (median 4.5, IQR 4.0-5.0) and easily accessible (median 5, IQR 4.0-5.0). We conclude that Web-based Delphi rating for consensus development is a convenient and acceptable alternative to the traditional paper-based method.
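
    The disagreement index mentioned here has a published formulation in the RAND/UCLA Appropriateness Method. A sketch of my reading of that formula (IPR between the 30th and 70th percentiles; IPRAS = 2.35 + 1.5 × the asymmetry of the IPR about the scale midpoint of 5); the tool's exact implementation may differ:

    ```python
    import numpy as np

    def disagreement_index(ratings):
        """RAND/UCLA-style disagreement index for 1-9 panel ratings:
        DI = IPR / IPRAS, where IPR is the 30th-70th interpercentile range
        and IPRAS = 2.35 + 1.5 * |5 - IPR centre point|. DI >= 1 is
        conventionally read as disagreement."""
        r = np.asarray(ratings, float)
        p30, p70 = np.percentile(r, [30, 70])
        ipr = p70 - p30
        ipr_cp = (p70 + p30) / 2.0
        ipras = 2.35 + 1.5 * abs(5.0 - ipr_cp)
        return ipr / ipras
    ```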

  16. Maintaining data integrity in a rural clinical trial.

    PubMed

    Van den Broeck, Jan; Mackay, Melanie; Mpontshane, Nontobeko; Kany Kany Luabeya, Angelique; Chhagan, Meera; Bennish, Michael L

    2007-01-01

    Clinical trials conducted in rural resource-poor settings face special challenges in ensuring quality of data collection and handling. The variable nature of these challenges, ways to overcome them, and the resulting data quality are rarely reported in the literature. To provide a detailed example of establishing local data handling capacity for a clinical trial conducted in a rural area, to highlight challenges and solutions in establishing such capacity, and to report the data quality obtained by the trial. We provide a descriptive case study of a data system for biological samples and questionnaire data, and the problems encountered during its implementation. To determine the quality of data, we analyzed test-retest studies using Kappa statistics of inter- and intra-observer agreement on categorical data, calculated Technical Errors of Measurement for anthropometric measurements, performed audit trail analysis to assess error correction rates, and calculated residual error rates by database-to-source-document comparison. Initial difficulties included the unavailability of experienced research nurses, programmers and data managers in this rural area and the difficulty of designing new software tools and a complex database while making them error-free. National and international collaboration and external monitoring helped ensure good data handling and implementation of good clinical practice. Data collection, fieldwork supervision and query handling depended on streamlined transport over large distances. The involvement of a community advisory board was helpful in addressing cultural issues and establishing community acceptability of data collection methods. Data accessibility for safety monitoring required special attention. Kappa values and Technical Errors of Measurement showed acceptable values. Residual error rates in key variables were low. The article describes the experience of a single-site trial and does not address challenges particular to multi-site trials. Obtaining and maintaining data integrity in rural clinical trials is feasible, can result in acceptable data quality and can be used to develop capacity in developing country sites. It does, however, involve special challenges and requirements.
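
    Of the quality measures named here, the kappa statistic is the most self-contained. A minimal sketch of Cohen's kappa for two raters over categorical codes (the trial's exact agreement statistics may have been computed differently):

    ```python
    import numpy as np

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: (observed - chance agreement) / (1 - chance)."""
        a, b = np.asarray(rater_a), np.asarray(rater_b)
        cats = np.union1d(a, b)
        po = np.mean(a == b)                                   # observed
        pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance
        return (po - pe) / (1.0 - pe)
    ```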

  17. Type I and Type II error concerns in fMRI research: re-balancing the scale

    PubMed Central

    Cunningham, William A.

    2009-01-01

    Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
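
    The recommended joint threshold (P < 0.005 with a 10-voxel extent) can be expressed as a two-step mask. A sketch using scipy's connected-component labelling; the array name and shape are illustrative:

    ```python
    import numpy as np
    from scipy import ndimage

    def joint_threshold(p_map, p_crit=0.005, min_voxels=10):
        """Apply the combined intensity/extent threshold: keep voxels with
        p < p_crit only if they belong to a connected cluster of at least
        min_voxels voxels. Returns a boolean mask of surviving voxels."""
        mask = p_map < p_crit
        labels, n_clusters = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n_clusters + 1))
        big = 1 + np.flatnonzero(np.asarray(sizes) >= min_voxels)
        return np.isin(labels, big)
    ```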

  18. Determining suitable lego-structures to estimate stability of larger peptide nanostructures using computational methods.

    PubMed

    Beke, Tamás; Czajlik, András; Csizmadia, Imre G; Perczel, András

    2006-02-02

    Nanofibers, nanofilms and nanotubes constructed of one to four strands of oligo-alpha- and oligo-beta-peptides were obtained by using carefully selected building units. Lego-type approaches based on thermoneutral isodesmic reactions can be used to reconstruct the total energies of both linear and tubular periodic nanostructures with acceptable accuracy. Total energies of several different nanostructures were accurately determined with errors typically falling in the subchemical range. Thus, attention will be focused on the description of suitable isodesmic reactions that have enabled the determination of the total energy of polypeptides and therefore offer a very fast, efficient and accurate method to obtain energetic information on large and even very large nanosystems.

  19. Relaxing the rule of ten events per variable in logistic and Cox regression.

    PubMed

    Vittinghoff, Eric; McCulloch, Charles E

    2007-03-15

    The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.

  20. Cost-effective surgical registration using consumer depth cameras

    NASA Astrophysics Data System (ADS)

    Potter, Michael; Yaniv, Ziv

    2016-03-01

    The high costs associated with technological innovation have been previously identified as both a major contributor to the rise of health care expenses, and as a limitation for widespread adoption of new technologies. In this work we evaluate the use of two consumer grade depth cameras, the Microsoft Kinect v1 and 3DSystems Sense, as a means for acquiring point clouds for registration. These devices have the potential to replace professional grade laser range scanning devices in medical interventions that do not require sub-millimetric registration accuracy, and may do so at a significantly reduced cost. To facilitate the use of these devices we have developed a near real-time (1-4 sec/frame) rigid registration framework combining several alignment heuristics with the Iterative Closest Point (ICP) algorithm. Using nearest neighbor registration error as our evaluation criterion we found the optimal scanning distances for the Sense and Kinect to be 50-60cm and 70-80cm respectively. When imaging a skull phantom at these distances, RMS error values of 1.35mm and 1.14mm were obtained. The registration framework was then evaluated using cranial MR scans of two subjects. For the first subject, the RMS error using the Sense was 1.28 +/- 0.01 mm. Using the Kinect this error was 1.24 +/- 0.03 mm. For the second subject, whose MR scan was significantly corrupted by metal implants, the errors increased to 1.44 +/- 0.03 mm and 1.74 +/- 0.06 mm but the system nonetheless performed within acceptable bounds.
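
    The registration framework is built around ICP, whose core loop is compact. A bare-bones sketch (nearest-neighbour matching plus a closed-form Kabsch/SVD rigid fit); the paper's additional alignment heuristics are not reproduced here:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iters=30):
        """Bare-bones rigid ICP: pair each source point with its nearest
        target point, solve for the rigid transform in closed form
        (Kabsch/SVD), apply it, and repeat. Points are (N, 3) arrays."""
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iters):
            _, idx = tree.query(src)           # nearest-neighbour matches
            tgt = target[idx]
            mu_s, mu_t = src.mean(0), tgt.mean(0)
            H = (src - mu_s).T @ (tgt - mu_t)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = mu_t - R @ mu_s
            src = src @ R.T + t
        _, idx = tree.query(src)
        rms = np.sqrt(np.mean(np.sum((src - target[idx]) ** 2, axis=1)))
        return src, rms
    ```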

  1. Derivative spectrophotometric method for simultaneous determination of clindamycin phosphate and tretinoin in pharmaceutical dosage forms

    PubMed Central

    2013-01-01

    A derivative spectrophotometric method was proposed for the simultaneous determination of clindamycin and tretinoin in pharmaceutical dosage forms. The measurement was achieved using the first and second derivative signals of clindamycin at (1D) 251 nm and (2D) 239 nm and tretinoin at (1D) 364 nm and (2D) 387 nm. The proposed method showed excellent linearity at both the first and second derivative orders in the ranges of 60–1200 and 1.25–25 μg/ml for clindamycin phosphate and tretinoin, respectively. The within-day and between-day precision and accuracy were within the acceptable range (CV<3.81%, error<3.20%). Good agreement between the found and added concentrations indicates successful application of the proposed method for simultaneous determination of clindamycin and tretinoin in synthetic mixtures and pharmaceutical dosage form. PMID:23575006
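
    Derivative signals of the kind used here are commonly obtained with a smoothing differentiator. A sketch using a Savitzky-Golay filter on a synthetic two-band spectrum; the spectrum, window length and polynomial order are illustrative choices, not the paper's:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    # Illustrative two-band absorbance spectrum on a 1 nm grid:
    wl = np.arange(220, 421)
    spectrum = (np.exp(-((wl - 251) / 15.0) ** 2)
                + 0.6 * np.exp(-((wl - 364) / 20.0) ** 2))

    # First and second derivative spectra:
    d1 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=1)
    d2 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=2)

    # Derivative amplitudes at the working wavelengths quoted in the abstract:
    print(d1[wl == 251][0], d2[wl == 239][0])
    ```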

  2. Comparison of predictive equations for resting metabolic rate in healthy nonobese and obese adults: a systematic review.

    PubMed

    Frankenfield, David; Roth-Yousey, Lori; Compher, Charlene

    2005-05-01

    An assessment of energy needs is a necessary component in the development and evaluation of a nutrition care plan. The metabolic rate can be measured or estimated by equations, but estimation is by far the more common method. However, predictive equations might generate errors large enough to impact outcome. Therefore, a systematic review of the literature was undertaken to document the accuracy of predictive equations preliminary to deciding on the imperative to measure metabolic rate. As part of a larger project to determine the role of indirect calorimetry in clinical practice, an evidence team identified published articles that examined the validity of various predictive equations for resting metabolic rate (RMR) in nonobese and obese people and also in individuals of various ethnic and age groups. Articles were accepted based on defined criteria and abstracted using evidence analysis tools developed by the American Dietetic Association. Because these equations are applied by dietetics practitioners to individuals, a key inclusion criterion was research reports of individual data. The evidence was systematically evaluated, and a conclusion statement and grade were developed. Four prediction equations were identified as the most commonly used in clinical practice (Harris-Benedict, Mifflin-St Jeor, Owen, and World Health Organization/Food and Agriculture Organization/United Nations University [WHO/FAO/UNU]). Of these equations, the Mifflin-St Jeor equation was the most reliable, predicting RMR within 10% of measured in more nonobese and obese individuals than any other equation, and it also had the narrowest error range. No validation work concentrating on individual errors was found for the WHO/FAO/UNU equation. Older adults and US-residing ethnic minorities were underrepresented both in the development of predictive equations and in validation studies. The Mifflin-St Jeor equation is more likely than the other equations tested to estimate RMR to within 10% of that measured, but noteworthy errors and limitations exist when it is applied to individuals and possibly when it is generalized to certain age and ethnic groups. RMR estimation errors would be eliminated by valid measurement of RMR with indirect calorimetry, using an evidence-based protocol to minimize measurement error. The Expert Panel advises clinical judgment regarding when to accept estimated RMR using predictive equations in any given individual. Indirect calorimetry may be an important tool when, in the judgment of the clinician, the predictive methods fail an individual in a clinically relevant way. For members of groups that are greatly underrepresented by existing validation studies of predictive equations, a high level of suspicion regarding the accuracy of the equations is warranted.
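
    The Mifflin-St Jeor equation singled out by the review is simple to state. A sketch of the standard published form:

    ```python
    def mifflin_st_jeor(weight_kg, height_cm, age_yr, sex):
        """Resting metabolic rate (kcal/day) by the Mifflin-St Jeor equation:
        10*W + 6.25*H - 5*A, plus 5 for men or minus 161 for women."""
        rmr = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
        return rmr + (5.0 if sex == "male" else -161.0)

    print(mifflin_st_jeor(70, 175, 40, "male"))   # 1598.75 kcal/day
    ```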

  3. A Cycle of Redemption in a Medical Error Disclosure and Apology Program.

    PubMed

    Carmack, Heather J

    2014-06-01

    Physicians accept that they have an ethical responsibility to disclose and apologize for medical errors; however, when physicians make a medical error, they are often not given the opportunity to disclose and apologize for the mistake. In this article, I explore how one hospital negotiated the aftermath of medical mistakes through a disclosure and apology program. Specifically, I used Burke's cycle of redemption to position the hospital's disclosure and apology program as a redemption process and explore how the hospital physicians and administrators worked through the experiences of disclosing and apologizing for medical errors. © The Author(s) 2014.

  4. 50 Gb/s NRZ and 4-PAM data transmission over OM5 fiber in the SWDM wavelength range

    NASA Astrophysics Data System (ADS)

    Agustin, M.; Ledentsov, N.; Kropp, J.-R.; Shchukin, V. A.; Kalosha, V. P.; Chi, K. L.; Khan, Z.; Shi, J. W.; Ledentsov, N. N.

    2018-02-01

    The development of advanced OM5 wideband multimode fiber (WBMMF) allowing high modal bandwidth in the spectral range 840-950 nm motivates research in vertical-cavity surface-emitting lasers (VCSELs) at wavelengths beyond those previously accepted for short-reach communications. Thus, short wavelength division multiplexing (SWDM) solutions can be implemented as a strategy to satisfy the increasing demand for data rate in datacenter environments. As an alternative solution to 850 nm parallel links, four wavelengths with 30 nm separation between 850 nm and 940 nm can be multiplexed on a single OM5-MMF, so the number of fibers deployed is reduced by a factor of four. In this paper high speed transmission is studied for VCSELs in the 850 nm - 950 nm range. The devices had a modulation bandwidth of 26-28 GHz. 50 Gb/s non-return-to-zero (NRZ) operation is demonstrated at each wavelength without preemphasis and equalization, with a bit-error rate (BER) below the 7% forward error correction (FEC) threshold. Furthermore, the use of single-mode VCSELs (SM-VCSELs) as a way to mitigate the effects of chromatic dispersion in order to extend the maximum transmission distance over OM5 is explored. Analysis of loss as a function of wavelength in OM5 fiber is also performed. A significant decrease is observed, from 2.2 dB/km to less than 1.7 dB/km at the 910 nm VCSEL wavelength.

  5. Transferring Error Characteristics of Satellite Rainfall Data from Ground Validation (gauged) into Non-ground Validation (ungauged)

    NASA Astrophysics Data System (ADS)

    Tang, L.; Hossain, F.

    2009-12-01

    Understanding the error characteristics of satellite rainfall data at different spatial/temporal scales is critical, especially as the scheduled Global Precipitation Mission (GPM) plans to provide High Resolution Precipitation Products (HRPPs) at global scales. Satellite rainfall data contain errors which need ground validation (GV) data for characterization, yet satellite rainfall data will be most useful in the regions that are lacking in GV. Therefore, a critical step is to develop a spatial interpolation scheme for transferring the error characteristics of satellite rainfall data from GV regions to non-GV regions. As a prelude to GPM, the TRMM Multi-satellite Precipitation Analysis (TMPA) products of 3B41RT and 3B42RT (Huffman et al., 2007) over the US, spanning a record of 6 years, are used as a representative example of satellite rainfall data. Next Generation Radar (NEXRAD) Stage IV rainfall data are used as the reference GV data. Initial work by the authors (Tang et al., 2009, GRL) has shown promise in transferring error from GV to non-GV regions, based on a six-year climatologic average of satellite rainfall data assuming only 50% GV coverage. However, this transfer of error characteristics needs to be investigated for a range of GV data coverages. In addition, it is also important to investigate whether proxy-GV data from an accurate space-borne sensor, such as the TRMM PR (or the GPM DPR), can be leveraged for the transfer of error in sparsely gauged regions. The specific question we ask in this study is, "what is the minimum coverage of GV data required for the error transfer scheme to be implemented with acceptable accuracy at hydrologically relevant scales?" Three geostatistical interpolation methods are compared: ordinary kriging, indicator kriging and disjunctive kriging. Various error metrics are assessed for transfer, such as Probability of Detection for rain and no rain, False Alarm Ratio, Frequency Bias, Critical Success Index, RMSE, etc. Understanding the proper space-time scales at which these metrics can be reasonably transferred is also explored in this study. Keywords: Satellite rainfall, error transfer, spatial interpolation, kriging methods.
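
    Several of the error metrics to be transferred (POD, FAR, frequency bias, CSI) come from a standard rain/no-rain contingency table. A minimal sketch:

    ```python
    import numpy as np

    def categorical_scores(sat_rain, gv_rain):
        """Contingency-table scores for rain/no-rain detection, taking the
        GV (ground validation) field as truth. Inputs are boolean arrays."""
        hits = np.sum(sat_rain & gv_rain)
        misses = np.sum(~sat_rain & gv_rain)
        false_al = np.sum(sat_rain & ~gv_rain)
        pod = hits / (hits + misses)                # probability of detection
        far = false_al / (hits + false_al)          # false alarm ratio
        bias = (hits + false_al) / (hits + misses)  # frequency bias
        csi = hits / (hits + misses + false_al)     # critical success index
        return pod, far, bias, csi
    ```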

  6. Weighted linear regression using D2H and D2 as the independent variables

    Treesearch

    Hans T. Schreuder; Michael S. Williams

    1998-01-01

    Several error structures for weighted regression equations used for predicting volume were examined for 2 large data sets of felled and standing loblolly pine trees (Pinus taeda L.). The generally accepted model with variance of error proportional to the value of the covariate squared ( D2H = diameter squared times height or D...

  7. Minimizing Experimental Error in Thinning Research

    Treesearch

    C. B. Briscoe

    1964-01-01

    Many diverse approaches have been made to prescribing and evaluating thinnings on an objective basis. None of the techniques proposed has been widely accepted. Indeed, none has been proven superior to the others nor even widely applicable. There are at least two possible reasons for this: none of the techniques suggested is of any general utility and/or experimental error...

  8. Grammaticality Judgments of an Extended Optional Infinitive Grammar: Evidence from English-Speaking Children with Specific Language Impairment.

    ERIC Educational Resources Information Center

    Rice, Mabel L.; Wexler, Kenneth; Redmond, Sean M.

    1999-01-01

    This longitudinal study evaluated grammatical judgments of "well formedness" of children (N=21) with specific language impairment (SLI). Comparison with two control groups found that children with SLI rejected morphosyntactic errors they didn't commit but accepted errors they were likely to make. Findings support the extended optional infinitive…

  9. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    NASA Astrophysics Data System (ADS)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g., a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e., a self-learning system which gradually reduces error based on historical process information. What is proposed in the paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes, as sketched below. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
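
    The proposed update can be sketched as a non-linear least-squares fit of process parameters to the reference flange geometry. The run_process interface below is an assumption standing in for a forming simulation or the press itself; the paper's actual update law may differ in detail:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def ilc_update(prev_params, run_process, reference_edge):
        """One learning iteration: choose process parameters minimising the
        least-squares gap between the produced flange edge and the reference
        geometry. run_process(params) -> flange edge coordinates is an
        assumed (hypothetical) interface."""
        residual = lambda p: run_process(p) - np.asarray(reference_edge)
        return least_squares(residual, np.asarray(prev_params)).x
    ```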

  10. Engineering Test Report Paint Waste Reduction Fluidized Bed Process Demonstration at Letterkenny Army Depot Chambersburg, Pennsylvania

    DTIC Science & Technology

    1991-07-01

    [Garbled extract of a calibration data table: analyzer calibration error (percent of span) predicted from calibration-gas responses, with pretest/posttest chart divisions, drift, correlation coefficients, and acceptable limits; the original layout is not recoverable.]

  11. In Search of Grid Converged Solutions

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2010-01-01

    Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
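
    The truncation-error evaluations described here rest on the classic observed-order and Richardson-style estimates from a mesh sequence. A sketch for three meshes with a constant refinement ratio:

    ```python
    import numpy as np

    def observed_order(f_coarse, f_medium, f_fine, r=2.0):
        """Observed order of accuracy from solutions on three meshes with a
        constant refinement ratio r (coarse -> medium -> fine)."""
        return np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)

    def fine_grid_error_estimate(f_medium, f_fine, p, r=2.0):
        """Richardson-style estimate of the remaining error on the fine mesh,
        given the observed order p."""
        return (f_medium - f_fine) / (r ** p - 1.0)
    ```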

  12. Let it be and keep on going! Acceptance and daily occupational well-being in relation to negative work events.

    PubMed

    Kuba, Katharina; Scheibe, Susanne

    2017-01-01

    [Correction Notice: An Erratum for this article was reported in Vol 22(1) of Journal of Occupational Health Psychology (see record 2016-25216-001). In the article, there were errors in the Participants subsection in the Method section. The last three sentences should read "Job tenure ranged from less than 1 year to 32 years, with an average of 8.83 years (SD 7.80). Participants interacted with clients on average 5.44 hr a day (SD 2.41). The mean working time was 7.36 hr per day (SD 1.91)."] Negative work events can diminish daily occupational well-being, yet the degree to which they do so depends on the way in which people deal with their emotions. The aim of the current study was to examine the role of acceptance in the link between daily negative work events and occupational well-being. We hypothesized that acceptance would be associated with better daily occupational well-being, operationalized as low end-of-day negative emotions and fatigue, and high work engagement. Furthermore, we predicted that acceptance would buffer the adverse impact of negative work events on daily well-being. A microlongitudinal study across 10 work days was carried out with 92 employees of the health care sector, yielding a total of 832 daily observations. As expected, acceptance was associated with lower end-of-day negative emotions and fatigue (though there was no association with work engagement) across the 10-day period. Furthermore, acceptance moderated the effect of negative event occurrence on daily well-being: Highly accepting employees experienced less increase in negative emotions and less reduction in work engagement (though comparable end-of-day fatigue) on days with negative work events, relative to days without negative work events, than did less accepting employees. These findings highlight affective, resource-saving, and motivational benefits of acceptance for daily occupational well-being and demonstrate that acceptance is associated with enhanced resilience to daily negative work events. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Flight Test Results of an Angle of Attack and Angle of Sideslip Calibration Method Using Output-Error Optimization

    NASA Technical Reports Server (NTRS)

    Siu, Marie-Michele; Martos, Borja; Foster, John V.

    2013-01-01

    As part of a joint partnership between the NASA Aviation Safety Program (AvSP) and the University of Tennessee Space Institute (UTSI), research on advanced air data calibration methods has been in progress. This research was initiated to expand a novel pitot-static calibration method that was developed to allow rapid in-flight calibration for the NASA Airborne Subscale Transport Aircraft Research (AirSTAR) facility. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. Subscale flight tests demonstrated small 2-σ error bounds with a significant reduction in test time compared to other methods. Recent UTSI full-scale flight tests have shown airspeed calibrations with accuracy the same as or better than the Federal Aviation Administration (FAA) accepted GPS 'four-leg' method, in a smaller test area and in less time. The current research was motivated by the desire to extend this method to in-flight calibration of angle of attack (AOA) and angle of sideslip (AOS) flow vanes. An instrumented Piper Saratoga research aircraft from the UTSI was used to collect the flight test data and evaluate flight test maneuvers. Results showed that the output-error approach produces good results for flow vane calibration. In addition, maneuvers for pitot-static and flow vane calibration can be integrated to enable simultaneous and efficient testing of each system.

  14. Without his shirt off he saved the child from almost drowning: interpreting an uncertain input

    PubMed Central

    Frazier, Lyn; Clifton, Charles

    2014-01-01

    Unedited speech and writing often contain errors, e.g., the blending of alternative ways of expressing a message. As a result comprehenders are faced with decisions about what the speaker may have intended, which may not be the same as the grammatically-licensed compositional interpretation of what was said. Two experiments investigated the comprehension of inputs that may have resulted from blending two syntactic forms. The results of the experiments suggest that readers and listeners tend to repair such utterances, restoring them to the presumed intended structure, and they assign the interpretation of the corrected utterance. Utterances that are repaired are expected to also be acceptable when they are easy to diagnose/repair and they are "familiar", i.e., they correspond to natural speech errors. The results of the experiments established a continuum ranging from outright linguistic illusions with no indication that listeners and readers detected the error (the inclusion of almost in A passerby rescued a child from almost being run over by a bus.), to a majority of unblended interpretations for doubled quantifier sentences (Many students often turn in their assignments late), to only a third of interpretations being undoubled for implicit negation (I just like the way the president looks without his shirt off.). The repair or speech-error-reversal account offered here is contrasted with the noisy channel approach (Gibson et al., 2013) and the good enough processing approach (Ferreira et al., 2002). PMID:25984551

  15. Error Analysis and Validation for Insar Height Measurement Induced by Slant Range

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Li, T.; Fan, W.; Geng, X.

    2018-04-01

    The InSAR technique is an important method for large-area DEM extraction, and several factors significantly influence the accuracy of its height measurements. In this research, the effect of slant range measurement error on InSAR height measurement was analysed and discussed. Based on the theory of InSAR height measurement, an error propagation model was derived assuming no coupling among the different factors, which directly characterises the relationship between slant range error and height measurement error. A theoretical analysis in combination with TanDEM-X parameters was then carried out to quantitatively evaluate the influence of slant range error on height measurement. In addition, a simulation validation of the slant-range-induced InSAR error model was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and the error propagation behaviour of InSAR height measurement were further discussed and evaluated.

  16. iGen: An automated generator of simplified models with provable error bounds.

    NASA Astrophysics Data System (ADS)

    Tang, D.; Dobbie, S.

    2009-04-01

    Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led on to work, currently underway, to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.

  17. Assessing the Impact of Analytical Error on Perceived Disease Severity.

    PubMed

    Kroll, Martin H; Garber, Carl C; Bi, Caixia; Suffin, Stephen C

    2015-10-01

    The perception of the severity of disease from laboratory results assumes that the results are free of analytical error; however, analytical error creates a spread of results into a band and thus a range of perceived disease severity. To assess the impact of analytical errors by calculating the change in perceived disease severity, represented by the hazard ratio, using non-high-density lipoprotein (nonHDL) cholesterol as an example. We transformed nonHDL values into ranges using the assumed total allowable errors for total cholesterol (9%) and high-density lipoprotein cholesterol (13%). Using a previously determined relationship between the hazard ratio and nonHDL, we calculated a range of hazard ratios for specified nonHDL concentrations affected by analytical error. Analytical error, within allowable limits, created a band of values of nonHDL, with a width spanning 30 to 70 mg/dL (0.78-1.81 mmol/L), depending on the cholesterol and high-density lipoprotein cholesterol concentrations. Hazard ratios ranged from 1.0 to 2.9, a 16% to 50% error. Increased bias widens this range and decreased bias narrows it. Error-transformed results produce a spread of values that straddle the various cutoffs for nonHDL. The range of the hazard ratio obscures the meaning of results, because the spread of ratios at different cutoffs overlap. The magnitude of the perceived hazard ratio error exceeds that for the allowable analytical error, and significantly impacts the perceived cardiovascular disease risk. Evaluating the error in the perceived severity (eg, hazard ratio) provides a new way to assess the impact of analytical error.
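
    The error band on nonHDL cholesterol follows from pairing the allowable errors of its two components in opposite directions. A sketch reproducing that band arithmetic (the mapping from nonHDL to hazard ratio is specific to the paper's fitted relationship and is not reproduced here):

    ```python
    def nonhdl_band(tc, hdl, tc_tea=0.09, hdl_tea=0.13):
        """Range of non-HDL cholesterol (TC - HDL) consistent with total
        allowable error of 9% on total cholesterol and 13% on HDL. The
        extremes pair a low TC error with a high HDL error and vice versa."""
        lo = tc * (1 - tc_tea) - hdl * (1 + hdl_tea)
        hi = tc * (1 + tc_tea) - hdl * (1 - hdl_tea)
        return lo, hi

    # Illustrative values: TC 200 mg/dL, HDL 50 mg/dL -> nominal nonHDL 150
    print(nonhdl_band(200, 50))   # ~(125.5, 174.5), a 49 mg/dL band
    ```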

  18. SU-E-T-192: FMEA Severity Scores - Do We Really Know?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonigan, J; Johnson, J; Kry, S

    2014-06-01

    Purpose: Failure modes and effects analysis (FMEA) is a subjective risk mitigation technique that has not been applied to physics-specific quality management practices. There is a need for quantitative FMEA data, as called for in the literature. This work focuses specifically on quantifying FMEA severity scores for physics components of IMRT delivery and comparing them to subjective scores. Methods: Eleven physical failure modes (FMs) for head and neck IMRT dose calculation and delivery are examined near commonly accepted tolerance criteria levels. Phantom treatment planning studies and dosimetry measurements (requiring decommissioning in several cases) are performed to determine the magnitude of dose delivery errors for the FMs (i.e., the severity of the FM). Resultant quantitative severity scores are compared to FMEA scores obtained through an international survey and focus group studies. Results: Physical measurements for six FMs resulted in significant PTV dose errors of up to 4.3%, as well as close to 1 mm of significant distance-to-agreement error between PTV and OAR. Of the 129 survey responses, the vast majority of responders used Varian machines with Pinnacle and Eclipse planning systems. The average experience was 17 years, yet familiarity with FMEA was less than expected. The survey shows that the perceived magnitude of dose delivery errors varies widely, with in some cases a 50% difference in expected dose delivery error among respondents. Substantial variance is also seen for all FMs in the occurrence, detectability, and severity scores assigned, with average variance values of 5.5, 4.6, and 2.2, respectively. For the MLC positional FM (2 mm), the survey shows an average expected dose error of 7.6% (range, 0-50%) compared to the 2% error seen in measurement. Analysis of the rankings in the survey, the treatment planning studies, and the quantitative value comparison will be presented. Conclusion: The resultant quantitative severity scores will expand the utility of FMEA for radiotherapy and verify the accuracy of FMEA results compared to highly variable subjective scores.

  19. Validity of Torque-Data Collection at Multiple Sites: A Framework for Collaboration on Clinical-Outcomes Research in Sports Medicine.

    PubMed

    Kuenze, Christopher; Eltouhky, Moataz; Thomas, Abbey; Sutherlin, Mark; Hart, Joseph

    2016-05-01

    Collecting torque data using a multimode dynamometer is common in sports-medicine research. The error in torque measurements across multiple sites and dynamometers has not been established. To assess the validity of 2 calibration protocols across 3 dynamometers and the error associated with torque measurement for each system. Observational study. 3 university laboratories at separate institutions. 2 Biodex System 3 dynamometers and 1 Biodex System 4 dynamometer. System calibration was completed using the manufacturer-recommended single-weight method and an experimental calibration method using a series of progressive weights. Both calibration methods were compared with a manually calculated theoretical torque across a range of applied weights. Relative error, absolute error, and percent error were calculated at each weight. Each outcome variable was compared between systems using 95% confidence intervals across low (0-65 Nm), moderate (66-110 Nm), and high (111-165 Nm) torque categorizations. Calibration coefficients were established for each system using both calibration protocols. However, within each system the calibration coefficients generated using the single-weight (System 4 = 2.42 [0.90], System 3a = 1.37 [1.11], System 3b = -0.96 [1.45]) and experimental calibration protocols (System 4 = 3.95 [1.08], System 3a = -0.79 [1.23], System 3b = 2.31 [1.66]) were similar and displayed acceptable mean relative error compared with calculated theoretical torque values. Overall, percent error was greatest for all 3 systems in low-torque conditions (System 4 = 11.66% [6.39], System 3a = 6.82% [11.98], System 3b = 4.35% [9.49]). The System 4 significantly overestimated torque across all 3 weight increments, and the System 3b overestimated torque over the moderate-torque increment. Conversion of raw voltage to torque values using the single-calibration-weight method is valid and comparable to a more complex multiweight calibration process; however, it is clear that calibration must be done for each individual system to ensure accurate data collection.
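
    The experimental multi-weight calibration described here amounts to regressing theoretical torque (weight × g × lever arm) on the raw dynamometer output. A sketch; the variable names and the linear model are illustrative assumptions:

    ```python
    import numpy as np

    def calibration_fit(raw_outputs, weights_kg, lever_arm_m):
        """Least-squares calibration line over a series of progressive
        weights; a single-weight calibration is the same line forced
        through one point."""
        torque_nm = np.asarray(weights_kg) * 9.81 * lever_arm_m  # N*m
        slope, intercept = np.polyfit(raw_outputs, torque_nm, 1)
        return slope, intercept
    ```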

  20. Detecting imipenem resistance in Acinetobacter baumannii by automated systems (BD Phoenix, Microscan WalkAway, Vitek 2); high error rates with Microscan WalkAway

    PubMed Central

    2009-01-01

    Background Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern, and reliable susceptibility testing results remain critical for clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracy of three widely used automated susceptibility testing methods for testing the imipenem susceptibility of A. baumannii isolates, by comparison with validated test methods. Methods A selection of 112 clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan correctly identified all A. baumannii strains, while Vitek 2 failed to identify one strain and Phoenix failed to identify two strains and misidentified two strains. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance, with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly, by 0.3%, above the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing, with unacceptable error rates: 28 very major errors (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest that clinical laboratories using the MicroScan system routinely should consider a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298
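
    The error categories used in this study follow the usual susceptibility-testing convention. A sketch of the classification logic for a single isolate:

    ```python
    def ast_error(reference, test):
        """Classify a susceptibility-testing discrepancy ('S', 'I', 'R'):
        very major = false susceptible (reference R, test S),
        major = false resistant (reference S, test R),
        minor = either result is intermediate."""
        if reference == test:
            return "agreement"
        if reference == "R" and test == "S":
            return "very major"
        if reference == "S" and test == "R":
            return "major"
        return "minor"
    ```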

  1. Early Career Teachers' Ability to Focus on Typical Students Errors in Relation to the Complexity of a Mathematical Topic

    ERIC Educational Resources Information Center

    Pankow, Lena; Kaiser, Gabriele; Busse, Andreas; König, Johannes; Blömeke, Sigrid; Hoth, Jessica; Döhrmann, Martina

    2016-01-01

    The paper presents results from a computer-based assessment in which 171 early career mathematics teachers from Germany were asked to anticipate typical student errors on a given mathematical topic and identify them under time constraints. Fast and accurate perception and knowledge-based judgments are widely accepted characteristics of teacher…

  2. Fine-resolution imaging of solar features using Phase-Diverse Speckle

    NASA Technical Reports Server (NTRS)

    Paxman, Richard G.

    1995-01-01

    Phase-diverse speckle (PDS) is a novel imaging technique intended to overcome the degrading effects of atmospheric turbulence on fine-resolution imaging. As its name suggests, PDS is a blend of phase-diversity and speckle-imaging concepts. PDS reconstructions on solar data were validated by simulation, by demonstrating internal consistency of PDS estimates, and by comparing PDS reconstructions with those produced from well accepted speckle-imaging processing. Several sources of error in data collected with the Swedish Vacuum Solar Telescope (SVST) were simulated: CCD noise, quantization error, image misalignment, and defocus error, as well as atmospheric turbulence model error. The simulations demonstrate that fine-resolution information can be reliably recovered out to at least 70% of the diffraction limit without significant introduction of image artifacts. Additional confidence in the SVST restoration is obtained by comparing its spatial power spectrum with previously-published power spectra derived from both space-based images and earth-based images corrected with traditional speckle-imaging techniques; the shape of the spectrum is found to match well the previous measurements. In addition, the imagery is found to be consistent with, but slightly sharper than, imagery reconstructed with accepted speckle-imaging techniques.

  3. A simulation of GPS and differential GPS sensors

    NASA Technical Reports Server (NTRS)

    Rankin, James M.

    1993-01-01

    The Global Positioning System (GPS) is a revolutionary advance in navigation. Users can determine latitude, longitude, and altitude by receiving range information from at least four satellites. The statistical accuracy of the user's position is directly proportional to the statistical accuracy of the range measurement. Range errors are caused by clock errors, ephemeris errors, atmospheric delays, multipath errors, and receiver noise. Selective Availability, which the military uses to intentionally degrade accuracy for non-authorized users, is a major error source. The proportionality constant relating position errors to range errors is the Dilution of Precision (DOP) which is a function of the satellite geometry. Receivers separated by relatively short distances have the same satellite and atmospheric errors. Differential GPS (DGPS) removes these errors by transmitting pseudorange corrections from a fixed receiver to a mobile receiver. The corrected pseudorange at the moving receiver is now corrupted only by errors from the receiver clock, multipath, and measurement noise. This paper describes a software package that models position errors for various GPS and DGPS systems. The error model is used in the Real-Time Simulator and Cockpit Technology workstation simulations at NASA-LaRC. The GPS/DGPS sensor can simulate enroute navigation, instrument approaches, or on-airport navigation.
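
    The proportionality constant described here, DOP, is computed from the satellite line-of-sight geometry. A sketch of GDOP for four or more satellites, with positions given in a common ECEF frame:

    ```python
    import numpy as np

    def gdop(sat_positions, user_position):
        """Geometric dilution of precision (needs at least four satellites).
        Rows of the measurement matrix H are the unit line-of-sight vectors
        plus a clock-bias column; GDOP = sqrt(trace((H^T H)^-1))."""
        los = np.asarray(sat_positions, float) - np.asarray(user_position, float)
        unit = los / np.linalg.norm(los, axis=1, keepdims=True)
        H = np.hstack([unit, np.ones((unit.shape[0], 1))])
        return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))
    ```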

  4. Adaptive Impact-Driven Detection of Silent Data Corruption for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

    For exascale HPC applications, silent data corruption (SDC) is one of the most dangerous problems because there is no indication that there are errors during the execution. We propose an adaptive impact-driven method that can detect SDCs dynamically. The key contributions are threefold. (1) We carefully characterize 18 real-world HPC applications and discuss the runtime data features, as well as the impact of the SDCs on their execution results. (2) We propose an impact-driven detection model that does not blindly improve the prediction accuracy, but instead detects only influential SDCs to guarantee user-acceptable execution results. (3) Our solution can adapt to dynamic prediction errors based on local runtime data and can automatically tune detection ranges for guaranteeing low false alarms. Experiments show that our detector can detect 80-99.99% of SDCs with a false alarm rate of less than 1% of iterations for most cases. The memory cost and detection overhead are reduced to 15% and 6.3%, respectively, for a large majority of applications.
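
    A minimal sketch of an impact-driven detector in this spirit (not the authors' implementation): each new data value is predicted by linear extrapolation from the previous two iterations, and only deviations outside an adaptive range, scaled from recent prediction errors by a tunable multiplier, are flagged.

    from collections import deque

    class ImpactDrivenSDCDetector:
        """Sketch: predict each value by linear extrapolation from the two
        previous iterations; flag it only if it falls outside an adaptive
        range derived from recent (presumed clean) prediction errors."""

        def __init__(self, k=4.0, window=32):
            self.k = k                          # range multiplier: larger k -> fewer false alarms
            self.history = deque(maxlen=2)      # last two values for extrapolation
            self.errors = deque(maxlen=window)  # recent absolute prediction errors

        def check(self, value):
            if len(self.history) < 2:
                self.history.append(value)
                return False  # not enough data to predict yet
            prev, last = self.history
            predicted = 2 * last - prev         # linear extrapolation
            err = abs(value - predicted)
            # Detection range adapts to the observed local prediction error.
            radius = self.k * (sum(self.errors) / len(self.errors)) if self.errors else float("inf")
            suspicious = err > radius
            if not suspicious:                  # only learn from values believed to be clean
                self.errors.append(err)
                self.history.append(value)
            return suspicious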

  5. Submillimeter, millimeter, and microwave spectral line catalogue, revision 3

    NASA Technical Reports Server (NTRS)

    Pickett, H. M.; Poynter, R. L.; Cohen, E. A.

    1992-01-01

    A computer-accessible catalog of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 10,000 GHz (i.e., wavelengths longer than 30 micrometers) is described. The catalog can be used as a planning tool or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, the lower state energy, and the quantum number assignment. This edition of the catalog has information on 206 atomic and molecular species and includes a total of 630,924 lines. The catalog was constructed by using theoretical least square fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalog will add more atoms and molecules and update the present listings as new data appear. The catalog is available as a magnetic data tape recorded in card images, with one card image per spectral line, from the National Space Science Data Center, located at Goddard Space Flight Center.

  6. Beam dynamics and electromagnetic studies of a 3 MeV, 325 MHz radio frequency quadrupole accelerator

    NASA Astrophysics Data System (ADS)

    Gaur, Rahul; Kumar, Vinit

    2018-05-01

    We present the beam dynamics and electromagnetic studies of a 3 MeV, 325 MHz H- radio frequency quadrupole (RFQ) accelerator for the proposed Indian Spallation Neutron Source project. We have followed a design approach, where the emittance growth and the losses are minimized by keeping the tune depression ratio larger than 0.5. The transverse cross-section of the RFQ is designed at a frequency lower than the operating frequency, so that the tuners have their nominal position inside the RFQ cavity. This has resulted in an improvement of the tuning range, and the efficiency of tuners to correct the field errors in the RFQ. The vane-tip modulations have been modelled in the CST-MWS code, and their effect on the field flatness and the resonant frequency has been studied. The deterioration in the field flatness due to vane-tip modulations is reduced to an acceptable level with the help of tuners. Details of the error study and the higher order mode study, along with the mode stabilization technique, are also described in the paper.

  7. What is the acceptable hemolysis index for the measurements of plasma potassium, LDH and AST?

    PubMed

    Rousseau, Nathalie; Pige, Raphaëlle; Cohen, Richard; Pecquet, Matthieu

    2016-06-01

    Hemolysis is a cause of variability in test results for plasma potassium, LDH and AST and is a non-negligible part of measurement uncertainty. However, allowable levels of hemolysis provided by reagent suppliers take neither analytical variability (trueness and precision) nor the measurand into account. Using a calibration range of hemolysis, we measured the plasma concentrations of potassium, LDH and AST, and hemolysis indices with a Cobas C501 analyzer (Roche Diagnostics(®), Meylan, France). Based on the allowable total error (according to Ricós et al.) and the expanded measurement uncertainty equation, we calculated the maximum allowable bias for two concentrations of each measurand. Finally, we determined the allowable hemolysis indices for all three measurands. We observed a linear relationship between the observed concentration increases and the hemolysis indices. The LDH measurement was the most sensitive to hemolysis, followed by AST and potassium measurements. The determination of the allowable hemolysis index depends on the targeted measurand, its concentration and the chosen level of requirement of allowable total error.
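
    To make the final step concrete: once the interference is linear in the hemolysis index, the allowable index follows directly from the maximum allowable bias divided by the fitted slope. The slopes and bias budgets below are hypothetical illustrations, not values from the study.

    # slope = concentration increase per unit of hemolysis index, from a linear
    # fit of spiked-hemolysis calibrators; bias = maximum allowable bias at a
    # given concentration. All numbers here are made up for illustration.
    def allowable_h_index(allowable_bias, slope):
        """Largest hemolysis index whose interference stays within the bias budget."""
        return allowable_bias / slope

    examples = {"LDH (U/L)": (22.0, 30.0), "AST (U/L)": (1.1, 4.0), "K+ (mmol/L)": (0.004, 0.2)}
    for name, (slope, bias) in examples.items():
        print(f"{name}: hemolysis index must stay below {allowable_h_index(bias, slope):.0f}")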

  8. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the CO modelled error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
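
    The direction of these biases can be reproduced with a toy Monte Carlo. The sketch below uses a plain linear model rather than the paper's Poisson GLM of emergency department visits, which is enough to show the core mechanism: classical-type error attenuates the estimated coefficient, while Berkson-type error leaves a linear-regression slope approximately unbiased.

    import numpy as np

    rng = np.random.default_rng(0)
    n, beta, sigma = 20_000, 0.4, 0.5
    x_true = rng.normal(0.0, 1.0, n)                 # true log-exposure
    y = beta * x_true + rng.normal(0.0, 1.0, n)      # outcome driven by the true exposure

    # Classical error: measured = true + noise (multiplicative on the raw scale,
    # i.e. additive on the log scale, as in the paper).
    x_classical = x_true + rng.normal(0.0, sigma, n)

    # Berkson error: true = assigned + noise; the assigned value is what we analyse.
    x_assigned = rng.normal(0.0, 1.0, n)
    y_berkson = beta * (x_assigned + rng.normal(0.0, sigma, n)) + rng.normal(0.0, 1.0, n)

    slope_classical = np.polyfit(x_classical, y, 1)[0]
    slope_berkson = np.polyfit(x_assigned, y_berkson, 1)[0]
    print(f"true slope {beta}, classical {slope_classical:.3f} (attenuated), "
          f"Berkson {slope_berkson:.3f} (approximately unbiased in a linear model)")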

  9. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm with fully convolutional networks (FCN) in a binocular imaging system under various circumstances. The image segmentation task is treated as semantic segmentation: the FCN classifies individual pixels, achieving segmentation at the semantic level. Different from the classical convolutional neural networks (CNN), FCN uses convolution layers instead of the fully connected layers. So it can accept images of arbitrary size. In this paper, we combine the convolutional neural network and scale invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system and several groups of test data are collected to verify this method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.

  10. Field experiment of 800× off-axis XR-Köhler concentrator module on a carousel tracker

    NASA Astrophysics Data System (ADS)

    Yamada, Noboru; Okamoto, Kazuya; Ijiro, Toshikazu; Suzuki, Takao; Maemura, Toshihiko; Kawaguchi, Takashi; Takahashi, Hiroshi; Sato, Takashi; Hernandez, Maikel; Benitez, Pablo; Chaves, Julio; Cvetkovic, Aleksandra; Vilaplana, Juan; Mohedano, Ruben; Mendes-Lopes, Joao; Miñano, Juan Carlos

    2013-09-01

    This paper presents the design and preliminary experimental results of a concentrator-type photovoltaic module based on a free-form off-axis 800×XR-Köhler concentrator. The off-axis XR-Köhler concentrator is one of the advanced concentrators that perform high concentration with a large acceptance angle and excellent irradiance uniformity on a solar cell. As a result of on-sun characterization of the unglazed single-cell unit test rig, the temperature-corrected DC module efficiency was 32.2% at 25 °C without an anti-reflective (AR) coating on the secondary optics, and the acceptance angle was more than ±1.0°. In addition, the non-corrected DC efficiency of an individual cell in a glazed 8-cell unit module mounted on a carousel tracking system was measured. The individual efficiency deviated in the range of 24.3-27.4%, owing to the mirror shape and alignment errors. The resultant series-connected efficiency was approximately 25% at direct normal irradiation (DNI) of 770 W/m2.

  11. Total energy based flight control system

    NASA Technical Reports Server (NTRS)

    Lambregts, Antonius A. (Inventor)

    1985-01-01

    An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error, and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode the outer-loop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
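
    A highly simplified sketch of the total-energy idea (gains, signs, and function names are illustrative assumptions, not the patented control laws): normalizing the longitudinal acceleration by g makes the flight-path-angle error and acceleration error commensurate, so their sum drives thrust and their difference drives the elevator.

    G = 9.81  # m/s^2

    def tecs_commands(gamma_cmd, gamma, accel_cmd, accel, k_t=0.5, k_e=0.8):
        """Sketch of total-energy control: thrust drives the total specific-energy
        rate; the elevator redistributes energy between flight path and speed.
        Gains k_t and k_e are illustrative placeholders."""
        gamma_err = gamma_cmd - gamma            # flight-path-angle error (rad)
        accel_err = (accel_cmd - accel) / G      # normalized longitudinal-acceleration error
        energy_rate_err = gamma_err + accel_err          # total energy rate error
        energy_distribution_err = accel_err - gamma_err  # energy rate distribution error
        thrust_cmd = k_t * energy_rate_err
        elevator_cmd = k_e * energy_distribution_err
        return thrust_cmd, elevator_cmd

    print(tecs_commands(gamma_cmd=0.03, gamma=0.01, accel_cmd=0.0, accel=-0.2))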

  12. A Technological Innovation to Reduce Prescribing Errors Based on Implementation Intentions: The Acceptability and Feasibility of MyPrescribe.

    PubMed

    Keyworth, Chris; Hart, Jo; Thoong, Hong; Ferguson, Jane; Tully, Mary

    2017-08-01

    Although prescribing of medication in hospitals is rarely an error-free process, prescribers receive little feedback on their mistakes and ways to change future practices. Audit and feedback interventions may be an effective approach to modifying the clinical practice of health professionals, but these may pose logistical challenges when used in hospitals. Moreover, such interventions are often labor intensive. Consequently, there is a need to develop effective and innovative interventions to overcome these challenges and to improve the delivery of feedback on prescribing. Implementation intentions, which have been shown to be effective in changing behavior, link critical situations with an appropriate response; however, these have rarely been used in the context of improving prescribing practices. Semistructured qualitative interviews were conducted to evaluate the acceptability and feasibility of providing feedback on prescribing errors via MyPrescribe, a mobile-compatible website informed by implementation intentions. Data relating to 200 prescribing errors made by 52 junior doctors were collected by 11 hospital pharmacists. These errors were populated into MyPrescribe, where prescribers were able to construct their own personalized action plans. Qualitative interviews with a subsample of 15 junior doctors were used to explore issues regarding feasibility and acceptability of MyPrescribe and their experiences of using implementation intentions to construct prescribing action plans. Framework analysis was used to identify prominent themes, with findings mapped to the behavioral components of the COM-B model (capability, opportunity, motivation, and behavior) to inform the development of future interventions. MyPrescribe was perceived to be effective in providing opportunities for critical reflection on prescribing errors and to complement existing training (such as junior doctors' e-portfolio). The participants were able to provide examples of how they would use "If-Then" plans for patient management. Technology, as opposed to other methods of learning (eg, traditional "paper based" learning), was seen as a positive advancement for continued learning. MyPrescribe was perceived as an acceptable and feasible learning tool for changing prescribing practices, with participants suggesting that it would make an important addition to medical prescribers' training in reflective practice. MyPrescribe is a novel theory-based technological innovation that provides the platform for doctors to create personalized implementation intentions. Applying the COM-B model allows for a more detailed understanding of the perceived mechanisms behind prescribing practices and the ways in which interventions aimed at changing professional practice can be implemented. ©Chris Keyworth, Jo Hart, Hong Thoong, Jane Ferguson, Mary Tully. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 01.08.2017.

  13. A Technological Innovation to Reduce Prescribing Errors Based on Implementation Intentions: The Acceptability and Feasibility of MyPrescribe

    PubMed Central

    Hart, Jo; Thoong, Hong; Ferguson, Jane; Tully, Mary

    2017-01-01

    Background: Although prescribing of medication in hospitals is rarely an error-free process, prescribers receive little feedback on their mistakes and ways to change future practices. Audit and feedback interventions may be an effective approach to modifying the clinical practice of health professionals, but these may pose logistical challenges when used in hospitals. Moreover, such interventions are often labor intensive. Consequently, there is a need to develop effective and innovative interventions to overcome these challenges and to improve the delivery of feedback on prescribing. Implementation intentions, which have been shown to be effective in changing behavior, link critical situations with an appropriate response; however, these have rarely been used in the context of improving prescribing practices. Objective: Semistructured qualitative interviews were conducted to evaluate the acceptability and feasibility of providing feedback on prescribing errors via MyPrescribe, a mobile-compatible website informed by implementation intentions. Methods: Data relating to 200 prescribing errors made by 52 junior doctors were collected by 11 hospital pharmacists. These errors were populated into MyPrescribe, where prescribers were able to construct their own personalized action plans. Qualitative interviews with a subsample of 15 junior doctors were used to explore issues regarding feasibility and acceptability of MyPrescribe and their experiences of using implementation intentions to construct prescribing action plans. Framework analysis was used to identify prominent themes, with findings mapped to the behavioral components of the COM-B model (capability, opportunity, motivation, and behavior) to inform the development of future interventions. Results: MyPrescribe was perceived to be effective in providing opportunities for critical reflection on prescribing errors and to complement existing training (such as junior doctors’ e-portfolio). The participants were able to provide examples of how they would use “If-Then” plans for patient management. Technology, as opposed to other methods of learning (eg, traditional “paper based” learning), was seen as a positive advancement for continued learning. Conclusions: MyPrescribe was perceived as an acceptable and feasible learning tool for changing prescribing practices, with participants suggesting that it would make an important addition to medical prescribers’ training in reflective practice. MyPrescribe is a novel theory-based technological innovation that provides the platform for doctors to create personalized implementation intentions. Applying the COM-B model allows for a more detailed understanding of the perceived mechanisms behind prescribing practices and the ways in which interventions aimed at changing professional practice can be implemented. PMID:28765104

  14. Research on Modelling of Aviation Piston Engine for the Hardware-in-the-loop Simulation

    NASA Astrophysics Data System (ADS)

    Yu, Bing; Shu, Wenjun; Bian, Wenchao

    2016-11-01

    In order to build an aero piston engine model that runs in real time and reproduces the operating conditions of the real engine accurately enough for hardware-in-the-loop simulation, the mean value model is studied. Firstly, the air-inlet model, the fuel model and the power-output model are established separately. Then, these sub models are combined and verified in MATLAB/SIMULINK. The results show that the model could reflect the steady-state and dynamic performance of an aero engine, and the errors between the simulation results and the bench test data are within the acceptable range. The model could be applied to verify the logic performance and control strategy of the controller in the hardware-in-the-loop (HIL) simulation.

  15. Amorphous In-Ga-Zn-O Thin Film Transistor Current-Scaling Pixel Electrode Circuit for Active-Matrix Organic Light-Emitting Displays

    NASA Astrophysics Data System (ADS)

    Chen, Charlene; Abe, Katsumi; Fung, Tze-Ching; Kumomi, Hideya; Kanicki, Jerzy

    2009-03-01

    In this paper, we analyze the application of amorphous In-Ga-Zn-O thin film transistors (a-InGaZnO TFTs) to a current-scaling pixel electrode circuit that could be used for 3-in. quarter video graphics array (QVGA) full color active-matrix organic light-emitting displays (AM-OLEDs). Simulation results, based on a-InGaZnO TFT and OLED experimental data, show that both device sizes and operational voltages can be reduced when compared to the same circuit using hydrogenated amorphous silicon (a-Si:H) TFTs. Moreover, the a-InGaZnO TFT pixel circuit can compensate for the drive TFT threshold voltage variation (ΔVT) within an acceptable operating error range.

  16. Scheduling periodic jobs using imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1987-01-01

    One approach to avoid timing faults in hard, real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.
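
    For type N jobs whose error is proportional to the undone optional work, the flavor of such heuristics can be sketched as follows: schedule every mandatory part, then give the leftover processor time to the optional parts with the steepest error reduction. This greedy sketch illustrates the idea only; it is not one of the paper's three algorithms, and greedy allocation is optimal only for linear error functions.

    def schedule_imprecise(jobs, capacity):
        """Sketch for type-N jobs: run every mandatory part, then spend the
        remaining processor time on the optional parts that reduce error fastest.
        jobs: list of (name, mandatory, optional, error_per_unit_left_undone)."""
        slack = capacity - sum(m for _, m, _, _ in jobs)
        if slack < 0:
            raise ValueError("mandatory parts alone exceed capacity: no valid schedule")
        allotted = {name: m for name, m, _, _ in jobs}
        # Greedy: largest error-reduction rate first.
        for name, _, optional, rate in sorted(jobs, key=lambda j: -j[3]):
            take = min(optional, slack)
            allotted[name] += take
            slack -= take
        total_error = sum(rate * (optional - (allotted[name] - mandatory))
                          for name, mandatory, optional, rate in jobs)
        return allotted, total_error / len(jobs)  # schedule and average error

    jobs = [("A", 2, 3, 1.0), ("B", 1, 4, 0.5), ("C", 2, 2, 2.0)]
    print(schedule_imprecise(jobs, capacity=9))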

  17. Non-destructive analysis of sucrose, caffeine and trigonelline on single green coffee beans by hyperspectral imaging.

    PubMed

    Caporaso, Nicola; Whitworth, Martin B; Grebby, Stephen; Fisk, Ian D

    2018-04-01

    Hyperspectral imaging (HSI) is a novel technology for the food sector that enables rapid non-contact analysis of food materials. HSI was applied for the first time to whole green coffee beans, at a single seed level, for quantitative prediction of sucrose, caffeine and trigonelline content. In addition, the intra-bean distribution of coffee constituents was analysed in Arabica and Robusta coffees on a large sample set from 12 countries, using a total of 260 samples. Individual green coffee beans were scanned by reflectance HSI (980-2500 nm) and then the concentration of sucrose, caffeine and trigonelline analysed with a reference method (HPLC-MS). Quantitative prediction models were subsequently built using Partial Least Squares (PLS) regression. Large variations in sucrose, caffeine and trigonelline were found between different species and origin, but also within beans from the same batch. It was shown that estimation of sucrose content is possible for screening purposes (R² = 0.65; prediction error of ~0.7% w/w coffee, with an observed range of ~6.5%), while the performance of the PLS model was better for caffeine and trigonelline prediction (R² = 0.85 and R² = 0.82, respectively; prediction errors of 0.2 and 0.1%, on ranges of 2.3 and 1.1% w/w coffee, respectively). The prediction error is acceptable mainly for laboratory applications, with the potential application to breeding programmes and for screening purposes for the food industry. The spatial distribution of coffee constituents was also successfully visualised for single beans and this enabled mapping of the analytes across the bean structure at single pixel level. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
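
    The modelling step is standard PLS regression from spectra to a reference concentration. A minimal sketch with scikit-learn, using synthetic spectra in place of the calibrated HSI reflectance data and HPLC-MS reference values:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # Synthetic stand-in for mean reflectance spectra (980-2500 nm) of single beans.
    rng = np.random.default_rng(1)
    n_beans, n_bands = 260, 200
    X = rng.normal(size=(n_beans, n_bands))
    true_coef = rng.normal(size=n_bands) * (rng.random(n_bands) < 0.05)
    y = X @ true_coef + rng.normal(scale=0.5, size=n_beans)   # e.g. sucrose, % w/w

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
    y_hat = pls.predict(X_te).ravel()
    rmsep = np.sqrt(np.mean((y_te - y_hat) ** 2))  # root-mean-square prediction error
    print(f"R2 = {r2_score(y_te, y_hat):.2f}, RMSEP = {rmsep:.2f}")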

  18. Simulation study of communication link for Pioneer Saturn/Uranus atmospheric entry probe. [signal acquisition by candidate modem for radio link

    NASA Technical Reports Server (NTRS)

    Hinrichs, C. A.

    1974-01-01

    A digital simulation is presented for a candidate modem in a modeled atmospheric scintillation environment with Doppler, Doppler rate, and signal attenuation typical of the radio link conditions for an outer planets atmospheric entry probe. The results indicate that the signal acquisition characteristics and the channel error rate are acceptable for the system requirements of the radio link. The simulation also outputs data for calculating other error statistics and a quantized symbol stream from which error correction decoding can be analyzed.

  19. Simulation of rare events in quantum error correction

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Vargo, Alexander

    2013-12-01

    We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability P_L for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay P_L ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
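
    Given simulated (d, P_L) pairs at a fixed error rate p, the decay rate α(p) follows from a log-linear fit; the numbers below are hypothetical stand-ins for the Monte Carlo output, not values from the paper.

    import numpy as np

    # Hypothetical (code distance, logical error probability) pairs at fixed p.
    d = np.array([5, 7, 9, 11, 13])
    p_logical = np.array([3e-3, 6e-4, 1.1e-4, 2.2e-5, 4.5e-6])

    # P_L ~ exp(-alpha(p) * d)  =>  log P_L is linear in d with slope -alpha(p).
    slope, intercept = np.polyfit(d, np.log(p_logical), 1)
    alpha = -slope
    print(f"decay rate alpha(p) = {alpha:.3f}")
    print("extrapolated P_L at d = 20:", np.exp(intercept - alpha * 20))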

  20. Voice recognition software can be used for scientific articles.

    PubMed

    Pommergaard, Hans-Christian; Huang, Chenxi; Burcharth, Jacob; Rosenberg, Jacob

    2015-02-01

    Dictation of scientific articles has been recognised as an efficient method for producing high-quality, first article drafts. However, standardised transcription service by a secretary may not be available for all researchers and voice recognition software (VRS) may therefore be an alternative. The purpose of this study was to evaluate the out-of-the-box accuracy of VRS. Eleven young researchers without dictation experience dictated the first draft of their own scientific article after thorough preparation according to a pre-defined schedule. The dictation transcribed by VRS was compared with the same dictation transcribed by an experienced research secretary, and the effect of adding words to the vocabulary of the VRS was investigated. The number of errors per hundred words was used as outcome. Furthermore, three experienced researchers assessed the subjective readability using a Likert scale (0-10). Dragon Nuance Premium version 12.5 was used as VRS. The median number of errors per hundred words was 18 (range: 8.5-24.3), which improved when 15,000 words were added to the vocabulary. Subjective readability assessment showed that the texts were understandable with a median score of five (range: 3-9), which was improved with the addition of 5,000 words. The out-of-the-box performance of VRS was acceptable and improved after additional words were added. Further studies are needed to investigate the effect of additional software accuracy training.
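
    The outcome measure can be sketched as a word-level edit distance between the secretary's transcript (reference) and the VRS transcript, scaled per hundred reference words; the study's exact scoring rules may differ.

    def errors_per_hundred_words(reference, hypothesis):
        """Word-level Levenshtein distance between a reference transcript and
        the VRS output, scaled to errors per hundred reference words."""
        ref, hyp = reference.split(), hypothesis.split()
        # Classic dynamic-programming edit distance over words.
        prev = list(range(len(hyp) + 1))
        for i, r in enumerate(ref, 1):
            cur = [i]
            for j, h in enumerate(hyp, 1):
                cur.append(min(prev[j] + 1,              # deletion
                               cur[j - 1] + 1,           # insertion
                               prev[j - 1] + (r != h)))  # substitution
            prev = cur
        return 100.0 * prev[-1] / len(ref)

    print(errors_per_hundred_words("the patients were randomised to two groups",
                                   "the patience were randomised into two groups"))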

  1. Assessing performance of closed-loop insulin delivery systems by continuous glucose monitoring: drawbacks and way forward.

    PubMed

    Hovorka, Roman; Nodale, Marianna; Haidar, Ahmad; Wilinska, Malgorzata E

    2013-01-01

    We investigated whether continuous glucose monitoring (CGM) levels can accurately assess glycemic control while directing closed-loop insulin delivery. Data were analyzed retrospectively from 33 subjects with type 1 diabetes who underwent closed-loop and conventional pump therapy on two separate nights. Glycemic control was evaluated by reference plasma glucose and contrasted against three methods based on Navigator (Abbott Diabetes Care, Alameda, CA) CGM levels. Glucose mean and variability were estimated by unmodified CGM levels with acceptable clinical accuracy. Time when glucose was in target range was overestimated by CGM during closed-loop nights (CGM vs. plasma glucose median [interquartile range], 86% [65-97%] vs. 75% [59-91%]; P=0.04) but not during conventional pump therapy (57% [32-72%] vs. 51% [29-68%]; P=0.82) providing comparable treatment effect (mean [SD], 28% [29%] vs. 23% [21%]; P=0.11). Using the CGM measurement error of 15% derived from plasma glucose-CGM pairs (n=4,254), stochastic interpretation of CGM gave unbiased estimate of time in target during both closed-loop (79% [62-86%] vs. 75% [59-91%]; P=0.24) and conventional pump therapy (54% [33-66%] vs. 51% [29-68%]; P=0.44). Treatment effect (23% [24%] vs. 23% [21%]; P=0.96) and time below target were accurately estimated by stochastic CGM. Recalibrating CGM using reference plasma glucose values taken at the start and end of overnight closed-loop was not superior to stochastic CGM. CGM is acceptable to estimate glucose mean and variability, but without adjustment it may overestimate benefit of closed-loop. Stochastic CGM provided unbiased estimate of time when glucose is in target and below target and may be acceptable for assessment of closed-loop in the outpatient setting.
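
    One way to read the stochastic interpretation is to score each CGM sample by the probability that the true glucose lies in the target range given the measurement error, rather than counting it as fully in or out. The sketch below assumes a normal error with SD equal to 15% of the reading, which is a simplifying assumption, not the authors' exact model.

    import math

    def prob_in_target(cgm, lo, hi, cv=0.15):
        """Probability that true glucose lies in [lo, hi] given a CGM reading,
        modelling the ~15% measurement error as normal with SD = cv * reading."""
        sd = cv * cgm
        phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        return phi((hi - cgm) / sd) - phi((lo - cgm) / sd)

    readings = [4.2, 5.1, 6.8, 9.5, 3.8]   # hypothetical overnight CGM trace, mmol/L
    target = (3.9, 8.0)
    time_in_target = sum(prob_in_target(g, *target) for g in readings) / len(readings)
    print(f"stochastic time-in-target: {100 * time_in_target:.0f}%")
    # Naive counting would score each reading as entirely in or out of range.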

  2. Magnetoencephalography Phantom Comparison and Validation: Hospital Universiti Sains Malaysia (HUSM) Requisite.

    PubMed

    Omar, Hazim; Ahmad, Alwani Liyan; Hayashi, Noburo; Idris, Zamzuri; Abdullah, Jafri Malin

    2015-12-01

    Magnetoencephalography (MEG) has been extensively used to measure small-scale neuronal brain activity. Although it is widely acknowledged as a sensitive tool for deciphering brain activity and source localisation, the accuracy of the MEG system must be critically evaluated. Typically, on-site calibration with the provided phantom (LocalPhantom) is used. However, this method is still questionable due to the uncertainty that may originate from the phantom itself. Ideally, the validation of MEG data measurements would require cross-site comparability. A simple method of phantom testing was used twice in addition to a measurement taken with a calibrated reference phantom (RefPhantom) obtained from Elekta Oy of Helsinki, Finland. The comparisons of two main aspects were made in terms of the dipole moment (Qpp) and the difference in the dipole distance from the origin (d) after the tests of statistically equal means and variance were confirmed. The results of the Qpp measurements for the LocalPhantom and RefPhantom were 978 (SD 24) nAm and 988 (SD 32) nAm, respectively, and were still optimally within the accepted range of 900 to 1100 nAm. Moreover, the shifted d results for the LocalPhantom and RefPhantom were 1.84 mm (SD 0.53) and 2.14 mm (SD 0.78), respectively, and these values were within the maximum acceptance limit of 5.0 mm from the nominal dipole location. The LocalPhantom seems to outperform the reference phantom as indicated by the small standard error of the former (SE 0.094) compared with the latter (SE 0.138). The results indicated that the HUSM MEG system was in excellent working condition in terms of the dipole magnitude and localisation measurements as these values passed the acceptance limits criteria of the phantom test.

  3. Measurement error of mean sac diameter and crown-rump length among pregnant women at Mulago hospital, Uganda.

    PubMed

    Ali, Sam; Byanyima, Rosemary Kusaba; Ononge, Sam; Ictho, Jerry; Nyamwiza, Jean; Loro, Emmanuel Lako Ernesto; Mukisa, John; Musewa, Angella; Nalutaaya, Annet; Ssenyonga, Ronald; Kawooya, Ismael; Temper, Benjamin; Katamba, Achilles; Kalyango, Joan; Karamagi, Charles

    2018-05-04

    Ultrasonography is essential in the prenatal diagnosis and care for the pregnant mothers. However, the measurements obtained often contain a small percentage of unavoidable error that may have serious clinical implications if substantial. We therefore evaluated the level of intra- and inter-observer error in measuring mean sac diameter (MSD) and crown-rump length (CRL) in women between 6 and 10 weeks' gestation at Mulago hospital. This was a cross-sectional study conducted from January to March 2016. We enrolled 56 women with an intrauterine single viable embryo. The women were scanned using a transvaginal (TVS) technique by two observers who were blinded of each other's measurements. Each observer measured the CRL twice and the MSD once for each woman. Intra-class correlation coefficients (ICCs), 95% limits of agreement (LOA) and technical error of measurement (TEM) were used for analysis. Intra-observer ICCs for CRL measurements were 0.995 and 0.993 while inter-observer ICCs were 0.988 for CRL and 0.955 for MSD measurements. Intra-observer 95% LOA for CRL were ± 2.04 mm and ± 1.66 mm. Inter-observer LOA were ± 2.35 mm for CRL and ± 4.87 mm for MSD. The intra-observer relative TEM for CRL were 4.62% and 3.70% whereas inter-observer relative TEM were 5.88% and 5.93% for CRL and MSD respectively. Intra- and inter-observer error of CRL and MSD measurements among pregnant women at Mulago hospital were acceptable. This implies that at Mulago hospital, the error in pregnancy dating is within acceptable margins of ± 3 days in the first trimester, and the CRL and MSD cut-offs of ≥ 7 mm and ≥ 25 mm respectively are fit for diagnosis of miscarriage on TVS. These findings should be extrapolated to the whole country with caution. Sonographers can achieve acceptable and comparable diagnostic accuracy levels of MSD and CRL measurements with proper training and adherence to practice guidelines.
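
    The technical error of measurement used above has a standard closed form, TEM = sqrt(sum(d^2) / 2n) over the within-pair differences d, with relative TEM expressed against the pooled mean. A small sketch with hypothetical CRL pairs:

    import math

    def technical_error_of_measurement(pairs):
        """Intra- or inter-observer TEM for duplicated measurements:
        TEM = sqrt(sum(d^2) / (2n)), with d the within-pair difference."""
        n = len(pairs)
        tem = math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))
        mean = sum(a + b for a, b in pairs) / (2 * n)
        return tem, 100.0 * tem / mean   # absolute TEM and relative TEM (%)

    # Hypothetical repeated CRL measurements (mm) by the same observer.
    crl_pairs = [(18.2, 18.6), (25.1, 24.6), (31.0, 31.5), (12.4, 12.1)]
    tem, rel_tem = technical_error_of_measurement(crl_pairs)
    print(f"TEM = {tem:.2f} mm, relative TEM = {rel_tem:.1f}%")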

  4. Retrieval of carbon dioxide vertical profiles from solar occultation observations and associated error budgets for ACE-FTS and CASS-FTS

    NASA Astrophysics Data System (ADS)

    Sioris, C. E.; Boone, C. D.; Nassar, R.; Sutton, K. J.; Gordon, I. E.; Walker, K. A.; Bernath, P. F.

    2014-07-01

    An algorithm is developed to retrieve the vertical profile of carbon dioxide in the 5 to 25 km altitude range using mid-infrared solar occultation spectra from the main instrument of the ACE (Atmospheric Chemistry Experiment) mission, namely the Fourier transform spectrometer (FTS). The main challenge is to find an atmospheric phenomenon which can be used for accurate tangent height determination in the lower atmosphere, where the tangent heights (THs) calculated from geometric and timing information are not of sufficient accuracy. Error budgets for the retrieval of CO2 from ACE-FTS and the FTS on a potential follow-on mission named CASS (Chemical and Aerosol Sounding Satellite) are calculated and contrasted. Retrieved THs have typical biases of 60 m relative to those retrieved using the ACE version 3.x software after revisiting the temperature dependence of the N2 CIA (collision-induced absorption) laboratory measurements and accounting for sulfate aerosol extinction. After correcting for the known residual high bias of ACE version 3.x THs expected from CO2 spectroscopic/isotopic inconsistencies, the remaining bias for tangent heights determined with the N2 CIA is -20 m. CO2 in the 5-13 km range in the 2009-2011 time frame is validated against aircraft measurements from CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container), CONTRAIL (Comprehensive Observation Network for Trace gases by Airline), and HIPPO (HIAPER Pole-to-Pole Observations), yielding typical biases of -1.7 ppm. The standard error of these biases in this vertical range is 0.4 ppm. The multi-year ACE-FTS data set is valuable in determining the seasonal variation of the latitudinal gradient which arises from the strong seasonal cycle in the Northern Hemisphere troposphere. The annual growth of CO2 in this time frame is determined to be 2.6 ± 0.4 ppm/year, in agreement with the currently accepted global growth rate based on ground-based measurements.

  5. Older Adults' Acceptance of Activity Trackers

    PubMed Central

    Preusse, Kimberly C.; Mitzner, Tracy L.; Fausset, Cara Bailey; Rogers, Wendy A.

    2016-01-01

    Objective: To assess the usability and acceptance of activity tracking technologies by older adults. Method: First in our multi-method approach, we conducted heuristic evaluations of two activity trackers that revealed potential usability barriers to acceptance. Next, questionnaires and interviews were administered to 16 older adults (M_age = 70, SD_age = 3.09, age range = 65-75) before and after a 28-day field study to understand facilitators and additional barriers to acceptance. These measurements were supplemented with diary and usage data and assessed if and why users overcame usability issues. Results: The heuristic evaluation revealed usability barriers in System Status Visibility; Error Prevention; and Consistency and Standards. The field study revealed additional barriers (e.g., accuracy, format), and acceptance-facilitators (e.g., goal-tracking, usefulness, encouragement). Discussion: The acceptance of wellness management technologies, such as activity trackers, may be increased by addressing acceptance-barriers during deployment (e.g., providing tutorials on features that were challenging, communicating usefulness). PMID:26753803

  6. Quality of Impressions and Work Authorizations Submitted by Dental Students Supervised by Prosthodontists and General Dentists.

    PubMed

    Imbery, Terence A; Diaz, Nicholas; Greenfield, Kristy; Janus, Charles; Best, Al M

    2016-10-01

    Preclinical fixed prosthodontics is taught by Department of Prosthodontics faculty members at Virginia Commonwealth University School of Dentistry; however, 86% of all clinical cases in academic year 2012 were staffed by faculty members from the Department of General Practice. The aims of this retrospective study were to quantify the quality of impressions, accuracy of laboratory work authorizations, and most common errors and to determine if there were differences between the rate of errors in cases supervised by the prosthodontists and the general dentists. A total of 346 Fixed Prosthodontic Laboratory Tracking Sheets for the 2012 academic year were reviewed. The results showed that, overall, 73% of submitted impressions were acceptable at initial evaluation, 16% had to be poured first and re-evaluated for quality prior to pindexing, 7% had multiple impressions submitted for transfer dies, and 4% were rejected for poor quality. There were higher acceptance rates for impressions and work authorizations for cases staffed by prosthodontists than by general dentists, but the differences were not statistically significant (p=0.0584 and p=0.0666, respectively). Regarding the work authorizations, 43% overall did not provide sufficient information or had technical errors that delayed prosthesis fabrication. The most common errors were incorrect mountings, absence of solid casts, inadequate description of margins for porcelain fused to metal crowns, inaccurate die trimming, and margin marking. The percentages of errors in cases supervised by general dentists and prosthodontists were similar for 17 of the 18 types of errors identified; only for margin description was the percentage of errors statistically significantly higher for general dentist-supervised than prosthodontist-supervised cases. These results highlighted the ongoing need for faculty development and calibration to ensure students receive the highest quality education from all faculty members teaching fixed prosthodontics.

  7. Screening of the spine in adolescents: inter- and intra-rater reliability and measurement error of commonly used clinical tests.

    PubMed

    Aartun, Ellen; Degerfalk, Anna; Kentsdotter, Linn; Hestbaek, Lise

    2014-02-10

    Evidence on the reliability of clinical tests used for the spinal screening of children and adolescents is currently lacking. The aim of this study was to determine the inter- and intra-rater reliability and measurement error of clinical tests commonly used when screening young spines. Two experienced chiropractors independently assessed 111 adolescents aged 12-14 years who were recruited from a primary school in Denmark. A standardised examination protocol was used to test inter-rater reliability including tests for scoliosis, hypermobility, general mobility, inter-segmental mobility and end range pain in the spine. Seventy-five of the 111 subjects were re-examined after one to four hours to test intra-rater reliability. Percentage agreement and Cohen's Kappa were calculated for binary variables, and intraclass correlation (ICC) and Bland-Altman plots with Limits of Agreement (LoA) were calculated for continuous measures. Inter-rater percentage agreement for binary data ranged from 59.5% to 100%. Kappa ranged from 0.06-1.00. Kappa ≥ 0.40 was seen for elbow, thumb, fifth finger and trunk/hip flexion hypermobility, pain response in inter-segmental mobility and end range pain in lumbar flexion and extension. For continuous data, ICCs ranged from 0.40-0.95. Only forward flexion as measured by finger-to-floor distance reached an acceptable ICC (≥ 0.75). Overall, results for intra-rater reliability were better than for inter-rater reliability but for both components, the LoA were quite wide compared with the range of assessments. Some clinical tests showed good, and some tests poor, reliability when applied in a spinal screening of adolescents. The results could probably be improved by additional training and further test standardization. This is the first step in evaluating the value of these tests for the spinal screening of adolescents. Future research should determine the association between these tests and current and/or future neck and back pain.
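
    For reference, the two agreement statistics reported above can be computed as follows; the rater data here are invented for illustration.

    import numpy as np

    def cohens_kappa(a, b):
        """Cohen's kappa for two raters' binary findings (1 = positive)."""
        a, b = np.asarray(a), np.asarray(b)
        po = np.mean(a == b)                            # observed agreement
        p_yes = np.mean(a) * np.mean(b)                 # chance agreement, both positive
        p_no = (1 - np.mean(a)) * (1 - np.mean(b))      # chance agreement, both negative
        pe = p_yes + p_no
        return (po - pe) / (1 - pe)

    def limits_of_agreement(x, y):
        """Bland-Altman 95% limits of agreement for two continuous ratings."""
        d = np.asarray(x) - np.asarray(y)
        return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

    rater1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    rater2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
    print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")
    print("LoA (cm):", limits_of_agreement([2.0, 5.5, 0.0, 8.1], [2.4, 5.0, 0.5, 7.9]))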

  8. A cloud medication safety support system using QR code and Web services for elderly outpatients.

    PubMed

    Tseng, Ming-Hseng; Wu, Hui-Ching

    2014-01-01

    Medication is an important part of disease treatment, but medication errors happen frequently and have significant clinical and financial consequences. The prevalence of prescription medication use among the ambulatory adult population increases with advancing age. Because of the global aging society, outpatients need to improve medication safety more than inpatients. The elderly with multiple chronic conditions face the complex task of medication management. To reduce medication errors for elderly outpatients with chronic diseases, a cloud medication safety support system is designed, demonstrated and evaluated. The proposed system is composed of a three-tier architecture: the front-end tier, the mobile tier and the cloud tier. The mobile tier will host the personalized medication safety support application on Android platforms that provides some primary functions including reminders for medication, assistance with pill-dispensing, recording of medications, position of medications and notices of forgotten medications for elderly outpatients. Finally, the hybrid technology acceptance model is employed to understand the intention and satisfaction level of the potential users to use this mobile medication safety support application system. The result of the system acceptance testing indicates that this developed system, implementing patient-centered services, is highly accepted by the elderly. This proposed M-health system could assist elderly outpatients' homecare in preventing medication errors and improving their medication safety.

  9. Correlations for CO2 production from combustion of Turkish coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oezdogan, S.

    1996-12-31

    Carbon dioxide is identified as the major contributor to greenhouse gas emissions. About 75% of the anthropogenic CO2 emissions are due to energy use, and primarily due to fossil-fuel combustion. Future patterns of energy use will dominate the global climate change. Within this frame, comparative evaluation of various carbon-based primary energy sources and related utilization options is of utmost importance. The amount of CO2 emission per unit energy production is considered as the mutual basis of evaluation among the fuel options. In this study, 39 Turkish coals were selected to represent the broad spectrum of Turkish coal characteristics. The lower heating values of the samples range from 6.8 to 30.6 MJ/kg on the as-received basis. The corresponding higher heating value range is 8.2 to 31.6 MJ/kg. The volatile matter to fixed carbon ratios of the selected coals change between 0.520 and 2.05 whereas the C to H weight ratios of dry coals cover a range from 16.4 to 9.8. The exact amount of CO2 emission per unit heating value is calculated from experimental data. The analysis of the results indicates that linear correlations exist between CO2 emissions per unit amount of lower or higher heating value and the inverse of heating values. The calculated standard errors of estimate are within acceptable limits. The average and maximum errors are 3% and 11%, respectively. The developed formulas are applied to different ranks of coal from Turkey and abroad and results are interpreted.
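
    The reported correlation form can be reproduced by regressing CO2 emitted per unit heating value on the inverse of the heating value. The coal data below are hypothetical placeholders with roughly the right ranges, not the study's 39 coals.

    import numpy as np

    # Hypothetical (lower heating value MJ/kg, kg CO2 emitted per MJ) pairs;
    # the paper's Turkish coals span LHV of 6.8-30.6 MJ/kg.
    lhv = np.array([6.8, 10.5, 15.2, 21.0, 26.3, 30.6])
    co2_per_mj = np.array([0.125, 0.112, 0.103, 0.098, 0.095, 0.093])

    # Linear correlation between CO2 per unit heating value and 1/LHV, as reported.
    b, a = np.polyfit(1.0 / lhv, co2_per_mj, 1)
    pred = a + b / lhv
    rel_err = 100 * np.abs(pred - co2_per_mj) / co2_per_mj
    print(f"fit: CO2/MJ = {a:.4f} + {b:.4f}/LHV; max error {rel_err.max():.1f}%")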

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, R; Zhu, X; Li, S

    Purpose: High Dose Rate (HDR) brachytherapy forward planning is principally an iterative process; hence, plan quality is affected by planners’ experiences and limited planning time. Thus, this may lead to sporadic errors and inconsistencies in planning. A statistical tool based on previous approved clinical treatment plans would help to maintain the consistency of planning quality and improve the efficiency of second checking. Methods: An independent dose calculation tool was developed from commercial software. Thirty-three previously approved cervical HDR plans with the same prescription dose (550cGy), applicator type, and treatment protocol were examined, and ICRU defined reference point doses (bladder, vaginal mucosa, rectum, and points A/B) along with dwell times were collected. The dose calculation tool then calculated an appropriate range with a 95% confidence interval for each parameter obtained, which would be used as the benchmark for evaluation of those parameters in future HDR treatment plans. Model quality was verified using five randomly selected approved plans from the same dataset. Results: Dose variations appear to be larger at the reference points of the bladder and mucosa as compared with the rectum. Most reference point doses from verification plans fell within the predicted range, except the doses of two points of the rectum and two points of reference position A (owing to rectal anatomical variations and clinical adjustment in prescription points, respectively). Similar results were obtained for tandem and ring dwell times despite relatively larger uncertainties. Conclusion: This statistical tool provides an insight into the clinically acceptable range of cervical HDR plans, which could be useful in plan checking and identifying potential planning errors, thus improving the consistency of plan quality.
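
    The statistical tool described above amounts to deriving a 95% interval per plan parameter from approved plans and flagging out-of-range values in new plans. A minimal sketch with hypothetical reference-point doses:

    import numpy as np

    def benchmark_ranges(approved, z=1.96):
        """95% range for each plan parameter from previously approved plans.
        approved: dict of parameter -> list of values across approved plans."""
        return {k: (np.mean(v) - z * np.std(v, ddof=1),
                    np.mean(v) + z * np.std(v, ddof=1)) for k, v in approved.items()}

    def check_plan(plan, ranges):
        """Flag any parameter of a new plan outside its benchmark range."""
        return [k for k, v in plan.items() if not ranges[k][0] <= v <= ranges[k][1]]

    # Hypothetical reference-point doses (cGy) from approved 550 cGy cervical plans.
    approved = {"bladder": [310, 295, 330, 305, 320], "rectum": [260, 250, 275, 255, 265]}
    ranges = benchmark_ranges(approved)
    print(check_plan({"bladder": 315, "rectum": 340}, ranges))  # -> ['rectum']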

  11. Atmospheric refraction effects on baseline error in satellite laser ranging systems

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Gardner, C. S.

    1982-01-01

    Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.

  12. Human Factors Process Task Analysis Liquid Oxygen Pump Acceptance Test Procedure for the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.

    2002-01-01

    A process task analysis effort was undertaken by Dynacs Inc. commencing in June 2002 under contract from NASA YA-D6. Funding was provided through NASA's Ames Research Center (ARC), Code M/HQ, and Industrial Engineering and Safety (IES). The John F. Kennedy Space Center (KSC) Engineering Development Contract (EDC) Task Order was 5SMA768. The scope of the effort was to conduct a Human Factors Process Failure Modes and Effects Analysis (HF PFMEA) of a hazardous activity and provide recommendations to eliminate or reduce the effects of errors caused by human factors. The Liquid Oxygen (LOX) Pump Acceptance Test Procedure (ATP) was selected for this analysis. The HF PFMEA table (see appendix A) provides an analysis of six major categories evaluated for this study. These categories include Personnel Certification, Test Procedure Format, Test Procedure Safety Controls, Test Article Data, Instrumentation, and Voice Communication. For each specific requirement listed in appendix A, the following topics were addressed: Requirement, Potential Human Error, Performance-Shaping Factors, Potential Effects of the Error, Barriers and Controls, Risk Priority Numbers, and Recommended Actions. This report summarizes findings and gives recommendations as determined by the data contained in appendix A. It also includes a discussion of technology barriers and challenges to performing task analyses, as well as lessons learned. The HF PFMEA table in appendix A recommends the use of accepted and required safety criteria in order to reduce the risk of human error. The items with the highest risk priority numbers should receive the greatest amount of consideration. Implementation of the recommendations will result in a safer operation for all personnel.

  13. Technical Quality of Root Canal Treatment Performed by Undergraduate Clinical Students of Isfahan Dental School.

    PubMed

    Saatchi, Masoud; Mohammadi, Golshan; Vali Sichani, Armita; Moshkforoush, Saba

    2018-01-01

    The aim of the present study was to evaluate the radiographic quality of RCTs performed by undergraduate clinical students of Dental School of Isfahan University of Medical Sciences. In this cross-sectional study, records and periapical radiographs of 1200 root filled teeth were randomly selected from the records of patients who had received RCTs in Dental School of Isfahan University of Medical Sciences from 2013 to 2015. After excluding 416 records, the final sample consisted of 784 root-treated teeth (1674 root canals). Two variables including the length and the density of the root fillings were examined. Moreover, the presence of ledge, foramen perforation, root perforation and fractured instruments were also evaluated as procedural errors. Descriptive statistics were used for expressing the frequencies of criteria and the chi-square test was used for comparing tooth types, tooth locations and academic level of students (P < 0.05). The frequency of root canals with acceptable filling was 54.1%. Overfilling was found in 11% of root canals, underfilling in 8.3% and inadequate density in 34.6%. No significant difference was found between the frequency of acceptable root fillings in the maxilla and mandible (P = 0.072). More acceptable fillings were found in the root canals of premolars (61.3%) than molars (51.3%) (P = 0.001). The frequency of procedural errors was 18.6%. Ledge was found in 12.5% of root canals, foramen perforation in 2%, root perforation in 2.4% and fractured instrument in 2%. Procedural errors were more frequent in the root canals of molars (22.5%) than the anterior teeth (12.3%) (P = 0.003) and the premolars (9.5%) (P < 0.001). Technical quality of RCTs performed by clinical students was not satisfactory and incidence of procedural errors was considerable.

  14. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    PubMed

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)

  15. Can an online clinical data management service help in improving data collection and data quality in a developing country setting?

    PubMed

    Wildeman, Maarten A; Zandbergen, Jeroen; Vincent, Andrew; Herdini, Camelia; Middeldorp, Jaap M; Fles, Renske; Dalesio, Otilia; van der Donk, Emile; Tan, I Bing

    2011-08-08

    Data collection by electronic medical record (EMR) systems has been proven to be helpful in data collection for scientific research and in improving healthcare. For a multi-centre trial in Indonesia and the Netherlands a web based system was selected to enable all participating centres to easily access data. This study assesses whether the introduction of a clinical trial data management service (CTDMS) composed of electronic case report forms (eCRF) can result in effective data collection and treatment monitoring. Data items entered were checked for inconsistencies automatically when submitted online. The data were divided into primary and secondary data items. We analysed both the total number of errors and the change in error rate, for both primary and secondary items, over the first five months of the trial. In the first five months, 51 patients were entered. The primary data error rate was 1.6%, whilst that for secondary data was 2.7%, against acceptable error rates for analysis of 1% and 2.5% respectively. The presented analysis shows that five months after the introduction of the CTDMS the primary and secondary data error rates reflect acceptable levels of data quality. Furthermore, these error rates were decreasing over time. The digital nature of the CTDMS, as well as the online availability of that data, gives fast and easy insight into adherence to treatment protocols. As such, the CTDMS can serve as a tool to train and educate medical doctors and can improve treatment protocols.

  16. The NEEDS Data Base Management and Archival Mass Memory System

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.; Bryant, S. B.; Thomas, D. T.; Wagnon, F. W.

    1980-01-01

    A Data Base Management System and an Archival Mass Memory System are being developed that will have a 10^12-bit on-line and a 10^13-bit off-line storage capacity. The integrated system will accept packetized data from the data staging area at 50 Mbps, create a comprehensive directory, provide for file management, record the data, perform error detection and correction, accept user requests, retrieve the requested data files and provide the data to multiple users at a combined rate of 50 Mbps. Stored and replicated data files will have a bit error rate of less than 10^-9 even after ten years of storage. The integrated system will be demonstrated to prove the technology late in 1981.

  17. Validity of an ultra-wideband local positioning system to measure locomotion in indoor sports.

    PubMed

    Serpiello, F R; Hopkins, W G; Barnes, S; Tavrou, J; Duthie, G M; Aughey, R J; Ball, K

    2018-08-01

    The validity of an Ultra-wideband (UWB) positioning system was investigated during linear and change-of-direction (COD) running drills. Six recreationally-active men performed ten repetitions of four activities (walking, jogging, maximal acceleration, and 45° COD) on an indoor court. Activities were repeated twice, in the centre of the court and on the side. Participants wore a receiver tag (Clearsky T6, Catapult Sports) and two reflective markers placed on the tag to allow for comparisons with the criterion system (Vicon). Distance, mean and peak velocity, acceleration, and deceleration were assessed. Validity was assessed via percentage least-square means difference (Clearsky-Vicon) with 90% confidence interval and magnitude-based inference; typical error was expressed as within-subject standard deviation. The mean differences for distance, mean/peak speed, and mean/peak accelerations in the linear drills were in the range of 0.2-12%, with typical errors between 1.2 and 9.3%. Mean and peak deceleration had larger differences and errors between systems. In the COD drill, moderate-to-large differences were detected for the activity performed in the centre of the court, increasing to large/very large on the side. When filtered and smoothed following a similar process, the UWB-based positioning system had acceptable validity, compared to Vicon, to assess movements representative of indoor sports.

  18. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    NASA Astrophysics Data System (ADS)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 10^2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined from the actual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.
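
    The embedding dimension m discussed above comes from phase-space (time-delay) reconstruction of the observed series. The following Python sketch builds the matrix of delay vectors; the series, the delay tau, and m are placeholder choices for illustration, not the NCPE model itself.

        import numpy as np

        def delay_embed(x, m, tau=1):
            """Return the (len(x) - (m-1)*tau) x m matrix of delay vectors."""
            x = np.asarray(x, float)
            rows = x.size - (m - 1) * tau
            return np.column_stack([x[i * tau : i * tau + rows] for i in range(m)])

        t = np.linspace(0, 40 * np.pi, 4000)
        series = np.sin(t) + 0.1 * np.random.default_rng(4).normal(size=t.size)
        print(delay_embed(series, m=4).shape)   # phase points in a 4-D reconstructed attractor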

  19. Tolerance assignment in optical design

    NASA Astrophysics Data System (ADS)

    Youngworth, Richard Neil

    2002-09-01

    Tolerance assignment is necessary in any engineering endeavor because fabricated systems, due to the stochastic nature of manufacturing and assembly processes, necessarily deviate from the nominal design. This thesis addresses the problem of optical tolerancing. The work can logically be split into three components that all play an essential role. The first addresses the modeling of manufacturing errors in contemporary fabrication and assembly methods. The second is derived from the design aspect: the development of a cost-based tolerancing procedure. The third addresses the modeling of image quality in a manner efficient enough to be conducive to the tolerance assignment process. The purpose of the first component, modeling manufacturing errors, is twofold: to determine the most critical tolerancing parameters and to better understand the effects of fabrication errors. Specifically, mid-spatial-frequency errors, typically introduced in sub-aperture grinding and polishing fabrication processes, are modeled. The implication is that improving process control and better understanding the effects of the errors makes the task of tolerance assignment more manageable. Conventional tolerancing methods do not directly incorporate cost; consequently, tolerancing approaches tend to focus more on image quality. The goal of the second part of the thesis is to develop cost-based tolerancing procedures that facilitate optimum system fabrication by generating the loosest acceptable tolerances. This work has the potential to impact a wide range of optical designs. The third element, efficient modeling of image quality, is directly related to the cost-based tolerancing method, which requires efficient and accurate modeling of the effects of errors on the performance of optical systems. Thus it is important to be able to compute the gradient and the Hessian, with respect to the parameters being toleranced, of the figure of merit that measures the image quality of the system. An algebraic method for computing the gradient and the Hessian is developed using perturbation theory.
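
    Cost-based tolerancing needs the gradient and Hessian of the image-quality merit function with respect to the toleranced parameters. The thesis derives these algebraically via perturbation theory; the sketch below instead uses generic central finite differences on a toy quadratic merit function, purely to illustrate the quantities involved.

        import numpy as np

        def gradient(f, x, h=1e-6):
            """Central-difference gradient of a scalar merit function f at x."""
            x = np.asarray(x, dtype=float)
            g = np.zeros_like(x)
            for i in range(x.size):
                e = np.zeros_like(x); e[i] = h
                g[i] = (f(x + e) - f(x - e)) / (2 * h)
            return g

        def hessian(f, x, h=1e-4):
            """Central-difference Hessian of a scalar merit function f at x."""
            x = np.asarray(x, dtype=float)
            n = x.size
            H = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    ei = np.zeros(n); ei[i] = h
                    ej = np.zeros(n); ej[j] = h
                    H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                               - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
            return H

        # Toy merit function: RMS wavefront error as a quadratic in two tolerance parameters.
        f = lambda t: 0.5 * t[0]**2 + 2.0 * t[1]**2 + 0.3 * t[0] * t[1]
        print(gradient(f, [0.1, 0.2]))
        print(hessian(f, [0.1, 0.2]))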

  20. CO2 laser ranging systems study

    NASA Technical Reports Server (NTRS)

    Filippi, C. A.

    1975-01-01

    The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.

  1. The relationship between hand hygiene and health care-associated infection: it’s complicated

    PubMed Central

    McLaws, Mary-Louise

    2015-01-01

    The reasoning that improved hand hygiene compliance contributes to the prevention of health care-associated infections is widely accepted. It is also accepted that high hand hygiene compliance alone cannot counter formidable risk factors, such as older age, immunosuppression, admission to the intensive care unit, longer length of stay, and indwelling devices. When hand hygiene interventions are undertaken concurrently with other routine or special preventive strategies, there is a potential for these concurrent strategies to confound the effect of the hand hygiene program. The result may be an overestimation of the effect of the hand hygiene intervention unless the design of the intervention or the analysis controls for the potential confounders. Other epidemiologic principles that may also affect the result of a hand hygiene program include failure to consider measurement error in the content of the hand hygiene program and measurement error in compliance. Some epidemiological errors in hand hygiene programs aimed at reducing health care-associated infections are inherent and not easily controlled. Nevertheless, the inadvertent omission by authors to report these common epidemiological errors, including concurrent infection prevention strategies, suggests to readers that the effect of hand hygiene is greater than the sum of all infection prevention strategies. Worse still, this omission does not assist evidence-based practice. PMID:25678805

  2. [Analysis of drug-related problems in a tertiary university hospital in Barcelona (Spain)].

    PubMed

    Ferrández, Olivia; Casañ, Borja; Grau, Santiago; Louro, Javier; Salas, Esther; Castells, Xavier; Sala, Maria

    2018-05-07

    To describe drug-related problems identified in hospitalized patients and to assess physicians' acceptance rate of pharmacists' recommendations. Retrospective observational study that included all drug-related problems detected in hospitalized patients during 2014-2015. Statistical analysis included a descriptive analysis of the data and a multivariate logistic regression to evaluate the association between pharmacists' recommendation acceptance rate and the variables of interest. During the study period 4587 drug-related problems were identified in 44,870 hospitalized patients. The main drug-related problems were prescription errors due to incorrect use of the computerized physician order entry (18.1%), inappropriate drug-drug combinations (13.3%) and dose adjustment for renal and/or hepatic function (11.5%). The acceptance rate of pharmacist therapy advice in evaluable cases was 81.0%. Medical versus surgical admitting department, specific types of intervention (addition of a new drug, drug discontinuation and correction of a prescription error) and oral communication of the recommendation were associated with a higher acceptance rate. The results of this study allow areas to be identified on which to implement optimization strategies. These include training courses for physicians on the computerized physician order entry, on drugs that need dose adjustment with renal impairment, and on relevant drug interactions. Copyright © 2018 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  3. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2014-01-01

    Human error cannot be defined unambiguously in advance of its happening; an action often becomes an error only after the fact. The same action can result in a tragic accident in one situation or count as heroic given a more favorable outcome. People often forget that we employ humans in business and industry precisely for their flexibility and capability to change when needed. In complex systems, operations are driven by the system's specifications and structure; people provide the flexibility to make it work. Human error has been reported as responsible for 60%-80% of failures, accidents and incidents in high-risk industries. We do not have to accept that all human errors are inevitable. Through the use of some basic techniques, many potential human error events can be addressed, and there are actions that can be taken to reduce the risk of human error.

  4. P-value interpretation and alpha allocation in clinical trials.

    PubMed

    Moyé, L A

    1998-08-01

    Although much value has been placed on type I error event probabilities in clinical trials, interpretive difficulties often arise that are directly related to clinical trial complexity. Deviations of the trial execution from its protocol, the presence of multiple treatment arms, and the inclusion of multiple end points complicate the interpretation of an experiment's reported alpha level. The purpose of this manuscript is to formulate the discussion of P values (and power for studies showing no significant differences) on the basis of the event whose relative frequency they represent. Experimental discordance (discrepancies between the protocol's directives and the experiment's execution) is linked to difficulty in alpha and beta interpretation. Mild experimental discordance leads to an acceptable adjustment for alpha or beta, while severe discordance results in their corruption. Finally, guidelines are provided for allocating type I error among a collection of end points in a prospectively designed, randomized controlled clinical trial. When considering secondary end point inclusion in clinical trials, investigators should increase the sample size to preserve the type I error rates at acceptable levels.

  5. Evaluation of an Airborne Spacing Concept, On-Board Spacing Tool, and Pilot Interface

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt; Murdoch, Jennifer L.; Baxley, Brian; Hubbs, Clay

    2011-01-01

    The number of commercial aircraft operations is predicted to increase in the next ten years, creating a need for improved operational efficiency. Two areas believed to offer significant increases in efficiency are optimized profile descents and dependent parallel runway operations. It is envisioned that during both of these types of operations, flight crews will precisely space their aircraft behind preceding aircraft at air traffic control assigned intervals to increase runway throughput and maximize the use of existing infrastructure. This paper describes a human-in-the-loop experiment designed to study the performance of an onboard spacing algorithm and pilots' ratings of the usability and acceptability of an airborne spacing concept that supports dependent parallel arrivals. Pilot participants flew arrivals into the Dallas/Fort Worth terminal environment using one of three different simulators located at the National Aeronautics and Space Administration's (NASA) Langley Research Center. Scenarios were flown using Interval Management with Spacing (IM-S) and Required Time of Arrival (RTA) control methods during conditions of no error, error in the forecast wind, and offset (disturbance) to the arrival flow. Results indicate that pilots delivered their aircraft to the runway threshold within +/- 3.5 seconds of their assigned arrival time and reported that both the IM-S and RTA procedures were associated with low workload levels. In general, pilots found the IM-S concept, procedures, speeds, and interface acceptable; with 92% of pilots rating the procedures as complete and logical, 218 out of 240 responses agreeing that the IM-S speeds were acceptable, and 63% of pilots reporting that the displays were easy to understand and displayed in appropriate locations. The 22 (out of 240) responses indicating that the commanded speeds were not acceptable or appropriate occurred during scenarios containing wind error and offset error. Concerns cited included the occurrence of multiple speed changes within a short time period, speed changes required within twenty miles of the runway, and an increase in airspeed followed shortly by a decrease in airspeed. Within this paper, appropriate design recommendations are provided, and the need for continued, iterative human-centered design is discussed.

  6. [Clinical economics: a concept to optimize healthcare services].

    PubMed

    Porzsolt, F; Bauer, K; Henne-Bruns, D

    2012-03-01

    Clinical economics strives to support healthcare decisions with economic considerations. Making economic decisions does not mean cutting costs but rather comparing the added value gained with the burden that has to be accepted. The necessary rules are offered by various disciplines, such as economics, epidemiology and ethics. Medical doctors recognize these rules but do not apply them in daily clinical practice. This lack of orientation leads to preventable errors. Examples of such errors are shown for diagnosis, screening, prognosis and therapy. As these errors can be prevented by applying clinical economic principles, the possible consequences for the optimization of healthcare are discussed.

  7. Decision-making when data and inferences are not conclusive: risk-benefit and acceptable regret approach.

    PubMed

    Hozo, Iztok; Schell, Michael J; Djulbegovic, Benjamin

    2008-07-01

    The absolute truth in research is unobtainable, as no evidence or research hypothesis is ever 100% conclusive. Therefore, all data and inferences can in principle be considered as "inconclusive." Scientific inference and decision-making need to take into account errors, which are unavoidable in the research enterprise. The errors can occur at the level of conclusions that aim to discern the truthfulness of research hypothesis based on the accuracy of research evidence and hypothesis, and decisions, the goal of which is to enable optimal decision-making under present and specific circumstances. To optimize the chance of both correct conclusions and correct decisions, the synthesis of all major statistical approaches to clinical research is needed. The integration of these approaches (frequentist, Bayesian, and decision-analytic) can be accomplished through formal risk:benefit (R:B) analysis. This chapter illustrates the rational choice of a research hypothesis using R:B analysis based on decision-theoretic expected utility theory framework and the concept of "acceptable regret" to calculate the threshold probability of the "truth" above which the benefit of accepting a research hypothesis outweighs its risks.
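
    Under linear utilities, the threshold probability mentioned above takes a simple closed form: accept the hypothesis when p times the benefit of a correct acceptance exceeds (1 - p) times the risk of a wrong one. The sketch below encodes that standard expected-utility threshold as an illustration of the general idea; it is not the chapter's full acceptable-regret derivation, and the numbers are placeholders.

        def threshold_probability(risk, benefit):
            """Probability of the hypothesis being true above which acceptance has
            higher expected utility than rejection (linear utilities assumed):
            accept when p * benefit > (1 - p) * risk, i.e. p > risk / (risk + benefit)."""
            return risk / (risk + benefit)

        # Example: risk of wrongly accepting is 1 unit, benefit of rightly accepting is 4.
        print(threshold_probability(risk=1.0, benefit=4.0))   # accept when p > 0.2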

  8. Uncertainty reduction in intensity modulated proton therapy by inverse Monte Carlo treatment planning

    NASA Astrophysics Data System (ADS)

    Morávek, Zdenek; Rickhey, Mark; Hartmann, Matthias; Bogner, Ludwig

    2009-08-01

    Treatment plans for intensity-modulated proton therapy may be sensitive to several sources of uncertainty. One source is correlated with approximations in the algorithms applied in the treatment planning system, and another depends on how robust the optimization is with regard to intra-fractional tissue movements. The delivered dose distribution may deteriorate substantially from the plan when systematic errors occur in the dose algorithm. Such errors can influence proton ranges and lead to improper modeling of the Bragg-peak degradation in heterogeneous structures, of particle scatter, or of the nuclear interaction component. Additionally, systematic errors influence the optimization process, which leads to the convergence error. Uncertainties with regard to organ movements are related to the robustness of a chosen beam setup to tissue movements during irradiation. We present the inverse Monte Carlo treatment planning system IKO for protons (IKO-P), which tries to minimize the errors described above to a large extent. Additionally, robust planning is introduced by beam angle optimization according to an objective function penalizing paths representing strong longitudinal and transversal tissue heterogeneities. The same score function is applied to optimize spot planning by selecting a robust set of spots. As spots can be positioned on different energy grids or on geometric grids with different space filling factors, a variety of grids were used to investigate the influence on the spot-weight distribution resulting from optimization. A tighter distribution of spot weights was assumed to result in a plan more robust to movements. IKO-P is described in detail and demonstrated on a test case as well as a lung cancer case. Different options of spot planning and grid types are evaluated, yielding a superior plan quality with dose delivery to the spots from all beam directions over optimized beam directions. This option shows a tighter spot-weight distribution and should therefore be less sensitive to movements compared to optimized directions. But accepting a slight loss in plan quality, the latter choice could potentially improve robustness even further by accepting only spots from the most suitable direction. The choice of a geometric grid instead of an energy grid for spot positioning has only a minor influence on the plan quality, at least for the investigated lung case.

  9. Use of localized performance-based functions for the specification and correction of hybrid imaging systems

    NASA Astrophysics Data System (ADS)

    Lisson, Jerold B.; Mounts, Darryl I.; Fehniger, Michael J.

    1992-08-01

    Localized wavefront performance analysis (LWPA) is a system that allows the full utilization of the system optical transfer function (OTF) for the specification and acceptance of hybrid imaging systems. We show that LWPA dictates the correction of the wavefront errors with the greatest impact on critical imaging spatial frequencies. This is accomplished by the generation of an imaging performance map, analogous to a map of the optic pupil error, using a local OTF. The resulting performance map, a function of transfer-function spatial frequency, is directly relatable to the primary viewing condition of the end user. In addition to optimizing quality for the viewer, the system has the potential for improved matching of the optical and electronic bandpasses of the imager and for the development of more realistic acceptance specifications. The LWPA system generates a local optical quality factor (LOQF) in the form of a map analogous to that used for the presentation and evaluation of wavefront errors. In conjunction with the local phase transfer function (LPTF), it can be used for maximally efficient specification and correction of imaging system pupil errors. The LOQF and LPTF are respectively equivalent to the global modulation transfer function (MTF) and phase transfer function (PTF) parts of the OTF. The LPTF is related to the difference of the average errors in separated regions of the pupil.
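
    For an incoherent system the OTF is the normalized autocorrelation of the pupil function, equivalently the Fourier transform of the point-spread function. A minimal numerical sketch with a circular pupil and a small astigmatic wavefront error; the grid size, aperture, and aberration amplitude are all illustrative choices, not values from the paper.

        import numpy as np

        n = 256
        y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
        pupil = ((x**2 + y**2) <= 0.25).astype(complex)      # circular aperture, radius 0.5
        pupil *= np.exp(2j * np.pi * 0.05 * (x**2 - y**2))   # ~0.05 waves of astigmatism

        psf = np.abs(np.fft.fft2(pupil)) ** 2                # incoherent PSF (unshifted)
        otf = np.fft.fft2(psf)                               # OTF = Fourier transform of PSF
        mtf = np.abs(otf) / np.abs(otf[0, 0])                # normalized MTF
        print(mtf[0, :5].round(3))                           # MTF along one frequency axis, DC = 1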

  10. Fusion of range camera and photogrammetry: a systematic procedure for improving 3-D models metric accuracy.

    PubMed

    Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C

    2003-01-01

    Three-dimensional (3-D) digital models produced by optical technologies in some cases contain metric errors. This happens when small, high-resolution 3-D images are assembled together to model a large object. In some applications, for example 3-D modeling of Cultural Heritage, metric accuracy is a major issue, and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close-range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of a few key range maps, each including a portion of the targets, into the global reference system defined by photogrammetry. The other 3-D images are then aligned around these locked images with the usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment methods, are finally reported.
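
    The rigid motion that locks a key range map to the photogrammetric targets can be computed in closed form from matched point pairs. The sketch below uses the standard Kabsch/Horn absolute-orientation solution via SVD; the point coordinates are made-up examples, not the paper's data.

        import numpy as np

        def rigid_align(source, target):
            """Least-squares rigid motion (R, t) mapping source points onto target
            points (Kabsch/Horn absolute orientation).  Rows are matched 3-D points."""
            src, tgt = np.asarray(source, float), np.asarray(target, float)
            cs, ct = src.mean(axis=0), tgt.mean(axis=0)
            H = (src - cs).T @ (tgt - ct)                    # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = ct - R @ cs
            return R, t

        # Targets in a range map's local frame vs. the same targets measured by
        # photogrammetry in the global frame (synthetic example).
        local = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
        R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
        glob = local @ R_true.T + np.array([0.5, -0.2, 1.0])
        R, t = rigid_align(local, glob)
        print(np.allclose(local @ R.T + t, glob))            # True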

  11. Differential sea-state bias: A case study using TOPEX/POSEIDON data

    NASA Technical Reports Server (NTRS)

    Stewart, Robert H.; Devalla, B.

    1994-01-01

    We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.
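
    The first-order ionospheric group delay scales as 1/f^2, so measuring the same range at two frequencies determines both the delay and an ionosphere-free range. A sketch of that standard dual-frequency combination with TOPEX-like Ku- and C-band frequencies; the TEC-like constant is an illustrative night-time value, not data from the study.

        # First-order model: measured_range_i = true_range + A / f_i**2.
        F_KU = 13.575e9   # Ku-band frequency, Hz
        F_C = 5.3e9       # C-band frequency, Hz

        def iono_free_range(r_ku, r_c, f1=F_KU, f2=F_C):
            """Ionosphere-free range from the dual-frequency combination."""
            return (f1**2 * r_ku - f2**2 * r_c) / (f1**2 - f2**2)

        def ku_iono_delay(r_ku, r_c, f1=F_KU, f2=F_C):
            """First-order ionospheric delay on the Ku-band range."""
            return (r_ku - r_c) * f2**2 / (f2**2 - f1**2)

        r_true = 1.335e6                   # metres (placeholder orbit altitude scale)
        A = 40.3 * 5e16                    # 40.3 * TEC, a low night-time electron content
        r_ku = r_true + A / F_KU**2
        r_c = r_true + A / F_C**2
        print(ku_iono_delay(r_ku, r_c))    # ~0.011 m, consistent with "< 2 cm" above
        print(iono_free_range(r_ku, r_c) - r_true)   # ~0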

  12. 3He(γ,pd) cross sections with tagged photons below the Δ resonance

    NASA Astrophysics Data System (ADS)

    Kolb, N. R.; Cairns, E. B.; Hackett, E. D.; Korkmaz, E.; Nakano, T.; Opper, A. K.; Quraan, M. A.; Rodning, N. L.; Rozon, F. M.; Asai, J.; Feldman, G.; Hallin, E.; O'rielly, G. V.; Pywell, R. E.; Skopik, D. M.

    1994-05-01

    The reaction cross section for 3He(γ,pd) has been measured using the Saskatchewan-Alberta Large Acceptance Detector (SALAD) with tagged photons in the energy range from 166 to 213 MeV. The energy and angle of the proton and the deuteron were measured with SALAD while the tagger determined the photon energy. Differential cross sections have been determined for 40°<θ*p<150°. The results are in agreement with the Bonn and Saclay photodisintegration measurements. The most recent photodisintegration measurement performed at Bates is higher by a factor of 1.3, which is just within the combined errors of the experiments. The proton capture results differ by a factor of 1.7 from the present experiment. Comparisons are made with microscopic calculations of the cross sections.

  13. Three Dimensional Visualization of GOES Cloud Data Using Octrees

    DTIC Science & Technology

    1993-06-01

    structure for CAD of integrated circuits that can subdivide the cubes into more complex polyhedrons. Medical imaging is also taking advantage of the... [The remainder of the indexed excerpt is OCR residue from the report's FORTRAN listing: image-name prompts and file-open calls such as CALL OPENDB('PARAM', ISTATRM), with a handler reporting 'Error opening database'.]

  14. Software reliability experiments data analysis and investigation

    NASA Technical Reports Server (NTRS)

    Walker, J. Leslie; Caglayan, Alper K.

    1991-01-01

    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
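
    A recovery block of the kind compared above runs a primary routine and alternates in order until an acceptance test, independent of the component logic, passes. A minimal sketch with a deliberately faulty primary; all routines here are toy examples, not the experiment's programs.

        def recovery_block(x, versions, acceptance_test):
            """Try independently developed versions in order; return the first
            result that passes the acceptance test (classic recovery block)."""
            for version in versions:
                try:
                    result = version(x)
                except Exception:
                    continue                       # a crash counts as a failed version
                if acceptance_test(x, result):
                    return result
            raise RuntimeError("all versions failed the acceptance test")

        def primary(x):                            # simulated faulty primary version
            raise ValueError("seeded fault")

        def alternate(x):                          # independently developed alternate
            return x ** 0.5

        def accept(x, r):                          # check independent of the versions
            return abs(r * r - x) < 1e-6

        print(recovery_block(2.0, [primary, alternate], accept))   # 1.41421...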

  15. How accurate are quotations and references in medical journals?

    PubMed

    de Lacey, G; Record, C; Wade, J

    1985-09-28

    The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in citation of references occurred in 24%, of which 8% were major errors--that is, they prevented immediate identification of the source of the reference. Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become "accepted fact." Some suggestions for reducing these high levels of inaccuracy are that papers scheduled for publication with errors of citation should be returned to the author and checked completely and a permanent column specifically for misquotations could be inserted into the journal.

  17. Eliminative Argumentation: A Basis for Arguing Confidence in System Properties

    DTIC Science & Technology

    2015-02-01

    errors to acceptable system reliability is unsound. But this is not an acceptable undercutting defeater; it does not put the conclusion about system...first to note sources of unsoundness in arguments, namely, questionable inference rules and weaknesses in proffered evidence. However, the notions of...This material is based upon work funded and supported by the Department of Defense under Contract No. FA8721-05-C-0003 with Carnegie Mellon University

  18. Authorities to Use US Military Force Since the Passage of the 1973 War Powers Resolution

    DTIC Science & Technology

    2016-05-26

    Cambridge University Press, 2013), 55. 59 Third, the legislative and executive branches’ acceptance of and reliance on the “all volunteer...Andrew Bacevich recently explained the disadvantages of the All-Volunteer Force: “Today, the people have by-and-large tuned out war or accept it as...than replicating the errors of Vietnam, the All-Volunteer Force has fostered new ones, chief among them a collective abrogation of civic

  19. Contamination characteristics and source apportionment of trace metals in soils around Miyun Reservoir.

    PubMed

    Chen, Haiyang; Teng, Yanguo; Chen, Ruihui; Li, Jiao; Wang, Jinsheng

    2016-08-01

    Due to their toxicity and bioaccumulation, trace metals in soils can result in a wide range of toxic effects on animals, plants, microbes, and even humans. Recognizing the contamination characteristics of soil metals, and especially apportioning their potential sources, are necessary preconditions for pollution prevention and control. Over the past decades, several receptor models have been developed for source apportionment. Among them, positive matrix factorization (PMF) has gained popularity and was recommended by the US Environmental Protection Agency as a general modeling tool. In this study, an extended chemometrics model, multivariate curve resolution-alternating least squares based on maximum likelihood principal component analysis (MCR-ALS/MLPCA), was proposed for source apportionment of soil metals and applied to identify the potential sources of trace metals in soils around Miyun Reservoir. Similar to PMF, the MCR-ALS/MLPCA model can incorporate measurement error information and non-negativity constraints in its calculation procedures. Model validation with a synthetic dataset suggested that MCR-ALS/MLPCA could extract acceptable recovered source profiles even at relatively large error levels. When applied to identify the sources of trace metals in soils around Miyun Reservoir, the MCR-ALS/MLPCA model obtained profiles highly similar to those from PMF. On the other hand, the assessment of contamination status showed that the soils around the reservoir were slightly to moderately polluted by trace metals but posed acceptable potential risks to the public. Mining activities, fertilizers and agrochemicals, and atmospheric deposition were identified as the potential anthropogenic sources, with contributions of 24.8, 14.6, and 13.3 %, respectively. In order to protect the drinking water source of Beijing, special attention should be paid to the metal inputs to soils from mining and agricultural activities.
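
    The MCR-ALS/MLPCA model itself is not in standard libraries, but the decomposition it shares with PMF, of a samples-by-metals concentration matrix X into non-negative source contributions G and source profiles F with X approximately G @ F, can be illustrated with plain non-negative matrix factorization as a simplified stand-in. All data below are random placeholders, not the Miyun Reservoir measurements.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        G_true = rng.gamma(2.0, 1.0, (60, 3))     # 60 soil samples, 3 sources
        F_true = rng.gamma(2.0, 1.0, (3, 8))      # 3 source profiles over 8 metals
        X = (G_true @ F_true + rng.normal(0, 0.05, (60, 8))).clip(min=0)

        model = NMF(n_components=3, init="nndsvda", max_iter=1000)
        G = model.fit_transform(X)                # estimated source contributions
        F = model.components_                     # estimated source profiles
        # Percent contribution of each recovered source, analogous to the
        # apportionment percentages quoted in the abstract.
        print(np.round(100 * G.sum(axis=0) / G.sum(), 1))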

  20. Ozone Profile Retrievals from the OMPS on Suomi NPP

    NASA Astrophysics Data System (ADS)

    Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.

    2017-12-01

    We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals using an optimal-estimation-based inversion method in the 302.5-340 nm fitting window. The evaluation of available slit functions demonstrates that preflight-measured slit functions represent OMPS measurements better than derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength- and cross-track-dependent biases, and thereby serious cross-track striping errors are found in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply "soft calibration" to OMPS radiances. With the soft calibration, the amplitude of fitting residuals decreases from 1% to 0.2% over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes, where the amplitude of fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from standard deviations of the fitting residuals. The derived error in the Huggins band (about 0.1%) is about 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals, maximizing measurement information and stabilizing our fitting residuals. The final precision of the fitting residuals is less than 0.1% in the low/mid latitudes, with about 1 degree of freedom for signal for tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals. To assess whether the quality of OMPS ozone retrievals could be acceptable for scientific use, we will characterize OMPS ozone profile retrievals, present an error analysis, and validate retrievals against a reference dataset. From OMPS NM measurements alone, useful information on the vertical distribution of ozone is limited to altitudes below 40 km because the Hartley-band wavelengths are absent. This shortcoming will be improved with a joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250 to 310 nm range.
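
    The optimal-estimation inversion referred to above combines a prior state and covariance with the measurement covariance and the forward-model Jacobian; the trace of the resulting averaging kernel gives the degrees of freedom for signal quoted in the abstract. A one-step sketch in the standard Rodgers formalism, with tiny placeholder matrices instead of a real radiative transfer model:

        import numpy as np

        def oe_update(x_a, S_a, y, y_model, K, S_e):
            """x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(x_a))."""
            Se_inv, Sa_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
            S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior covariance
            x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - y_model)
            A = S_hat @ K.T @ Se_inv @ K                       # averaging kernel
            return x_hat, S_hat, np.trace(A)                   # trace = DOF for signal

        x_a = np.array([300.0, 30.0])              # prior state (placeholder units)
        S_a = np.diag([50.0**2, 10.0**2])          # prior covariance
        K = np.array([[0.8, 0.1], [0.2, 0.6], [0.1, 0.3]])     # Jacobian of forward model
        S_e = np.diag([1.0, 1.0, 1.0])             # measurement noise covariance
        y = np.array([250.0, 80.0, 45.0])
        x_hat, S_hat, dofs = oe_update(x_a, S_a, y, K @ x_a, K, S_e)
        print(x_hat, dofs)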

  1. Portfolio: a prototype workstation for development and evaluation of tools for analysis and management of digital portal images.

    PubMed

    Boxwala, A A; Chaney, E L; Fritsch, D S; Friedman, C P; Rosenman, J G

    1998-09-01

    The purpose of this investigation was to design and implement a prototype physician workstation, called PortFolio, as a platform for developing and evaluating, by means of controlled observer studies, user interfaces and interactive tools for analyzing and managing digital portal images. The first observer study was designed to measure physician acceptance of workstation technology, as an alternative to a view box, for inspection and analysis of portal images for detection of treatment setup errors. The observer study was conducted in a controlled experimental setting to evaluate physician acceptance of the prototype workstation technology exemplified by PortFolio. PortFolio incorporates a windows user interface, a compact kit of carefully selected image analysis tools, and an object-oriented data base infrastructure. The kit evaluated in the observer study included tools for contrast enhancement, registration, and multimodal image visualization. Acceptance was measured in the context of performing portal image analysis in a structured protocol designed to simulate clinical practice. The acceptability and usage patterns were measured from semistructured questionnaires and logs of user interactions. Radiation oncologists, the subjects for this study, perceived the tools in PortFolio to be acceptable clinical aids. Concerns were expressed regarding user efficiency, particularly with respect to the image registration tools. The results of our observer study indicate that workstation technology is acceptable to radiation oncologists as an alternative to a view box for clinical detection of setup errors from digital portal images. Improvements in implementation, including more tools and a greater degree of automation in the image analysis tasks, are needed to make PortFolio more clinically practical.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorpe, J. I.; Livas, J.; Maghami, P.

    Arm locking is a proposed laser frequency stabilization technique for the Laser Interferometer Space Antenna (LISA), a gravitational-wave observatory sensitive in the milliHertz frequency band. Arm locking takes advantage of the geometric stability of the triangular constellation of three spacecraft that compose LISA to provide a frequency reference with a stability in the LISA measurement band that exceeds that available from a standard reference such as an optical cavity or molecular absorption line. We have implemented a time-domain simulation of a Kalman-filter-based arm-locking system that includes the expected limiting noise sources as well as the effects of imperfect a priori knowledge of the constellation geometry on which the design is based. We use the simulation to study aspects of the system performance that are difficult to capture in a steady-state frequency-domain analysis, such as frequency pulling of the master laser due to errors in estimates of heterodyne frequency. We find that our implementation meets requirements on both the noise and dynamic range of the laser frequency with acceptable tolerances and that the design is sufficiently insensitive to errors in the estimated constellation geometry that the required performance can be maintained for the longest continuous measurement intervals expected for the LISA mission.
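
    A minimal scalar Kalman predict/update cycle, sketching the kind of filter the arm-locking simulation is built around. The process and measurement models and all numbers are generic placeholders, not the LISA design values.

        def kalman_step(x, P, z, q=1e-4, r=1e-2, a=1.0, h=1.0):
            """One predict/update cycle for the model x' = a*x + w, z = h*x + v,
            with process noise variance q and measurement noise variance r."""
            x_pred, P_pred = a * x, a * P * a + q        # predict
            K = P_pred * h / (h * P_pred * h + r)        # Kalman gain
            x_new = x_pred + K * (z - h * x_pred)        # update with the innovation
            P_new = (1 - K * h) * P_pred
            return x_new, P_new

        x, P = 0.0, 1.0
        for z in [0.9, 1.1, 0.95, 1.05]:                 # synthetic measurements
            x, P = kalman_step(x, P, z)
        print(x, P)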

  3. Parameter Estimation for GRACE-FO Geometric Ranging Errors

    NASA Astrophysics Data System (ADS)

    Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.

    2017-12-01

    Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument to provide an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its (expectedly) two largest error sources are laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are widely understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data.Means to do so will be discussed. In particular, the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit from combining ranging data from LRI with ranging data from the established microwave ranging, will be mentioned.
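
    If TTL coupling is modeled as a linear combination of the DWS pointing angles with unknown coefficients, calibration maneuvers that modulate the angles let ordinary least squares recover the couplings, which can then be subtracted from the ranging data. The model and all numbers below are assumptions for illustration, not the actual GRACE-FO LRI processing.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000
        pitch = rng.normal(0, 1e-4, n)            # rad, as would come from DWS
        yaw = rng.normal(0, 1e-4, n)
        c_pitch, c_yaw = 150e-6, -80e-6           # m/rad, "true" couplings to recover
        ranging = c_pitch * pitch + c_yaw * yaw + rng.normal(0, 1e-9, n)   # m

        A = np.column_stack([pitch, yaw])
        coef, *_ = np.linalg.lstsq(A, ranging, rcond=None)
        print(coef)                                # ~ [1.5e-4, -8e-5]
        corrected = ranging - A @ coef             # TTL-corrected range residual
        print(corrected.std())                     # ~ 1e-9 m noise floor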

  4. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system operating over an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, comprising a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10^-15 with a 10-second averaging time. Ranging and range-rate errors as functions of the bit error rate of the communication link are reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10^-15 with a 10-second averaging time. We identified the major noise sources in the current system as noise injected by the transmitter modulation and noise generated by the receiver electronics. A new, improved system will be constructed to further improve the performance in both operating modes.
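
    The paper quotes the modified Allan deviation; the simpler, non-overlapping Allan deviation below illustrates the structure of the statistic (the square root of half the mean-squared difference of successive cluster averages) on placeholder white noise, not the paper's data. For white frequency noise the value falls as 1/sqrt(m).

        import numpy as np

        def allan_deviation(y, m):
            """Allan deviation of fractional-frequency samples y at cluster size m."""
            y = np.asarray(y, float)
            n = (y.size // m) * m
            means = y[:n].reshape(-1, m).mean(axis=1)    # cluster averages
            return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

        rng = np.random.default_rng(2)
        y = rng.normal(0, 1e-14, 10_000)                 # white-noise placeholder series
        for m in (1, 10, 100):
            print(m, allan_deviation(y, m))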

  5. Development and Validation of a Spanish Version of the Grit-S Scale

    PubMed Central

    Arco-Tirado, Jose L.; Fernández-Martín, Francisco D.; Hoyle, Rick H.

    2018-01-01

    This paper describes the development and initial validation of a Spanish version of the Short Grit (Grit-S) Scale. The Grit-S Scale was adapted and translated into Spanish using the Translation, Review, Adjudication, Pre-testing, and Documentation model and responses to a preliminary set of items from a large sample of university students (N = 1,129). The resultant measure was validated using data from a large stratified random sample of young adults (N = 1,826). Initial validation involved evaluating the internal consistency of the adapted scale and its subscales and comparing the factor structure of the adapted version to that of the original scale. The results were comparable to results from similar analyses of the English version of the scale. Although the internal consistency of the subscales was low, the internal consistency of the full scale was well within the acceptable range. A two-factor model offered an acceptable account of the data; however, when a single correlated error involving two highly similar items was included, a single-factor model fit the data very well. The results support the use of overall scores from the Spanish Grit-S Scale in future research. PMID:29467705
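
    The internal consistency reported above is conventionally Cronbach's alpha. A minimal implementation on a respondents-by-items matrix; the simulated responses are placeholders, not the Grit-S data.

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array, rows = respondents, columns = scale items."""
            items = np.asarray(items, float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(3)
        latent = rng.normal(size=(500, 1))                        # common trait
        responses = latent + rng.normal(0, 1.0, size=(500, 8))    # 8 correlated items
        print(cronbach_alpha(responses))                          # ~0.85-0.9 here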

  6. Anatomical Society core regional anatomy syllabus for undergraduate medicine: the Delphi process.

    PubMed

    Smith, C F; Finn, G M; Stewart, J; McHanwell, S

    2016-01-01

    A modified Delphi method was employed to seek consensus when revising the UK and Ireland's core syllabus for regional anatomy in undergraduate medicine. A Delphi panel was constructed involving 'experts' (individuals with at least 5 years' experience in teaching medical students anatomy at the level required for graduation). The panel (n = 39) was selected and nominated by members of Council and/or the Education Committee of the Anatomical Society and included a range of specialists, including surgeons, radiologists and anatomists. The experts were asked in two stages to 'accept', 'reject' or 'modify' (first stage only) each learning outcome. A third stage, which was not part of the Delphi method, then allowed the original authors of the syllabus to make changes, either to correct anatomical errors or to make minor syntax changes. From the original syllabus of 182 learning outcomes, excluding the neuroanatomy component (163), 23 learning outcomes (15%) remained unchanged, seven learning outcomes were removed and two new learning outcomes were added. The remaining 133 learning outcomes were modified. All learning outcomes on the new core syllabus achieved over 90% acceptance by the panel. © 2015 Anatomical Society.

  7. Validation of the Malay version of the Inventory of Functional Status after Childbirth questionnaire.

    PubMed

    Noor, Norhayati Mohd; Aziz, Aniza Abd; Mostapa, Mohd Rosmizaki; Awang, Zainudin

    2015-01-01

    This study was designed to examine the psychometric properties of Malay version of the Inventory of Functional Status after Childbirth (IFSAC). A cross-sectional study. A total of 108 postpartum mothers attending Obstetrics and Gynaecology Clinic, in a tertiary teaching hospital in Malaysia, were involved. Construct validity and internal consistency were performed after the translation, content validity, and face validity process. The data were analyzed using Analysis of Moment Structure version 18 and Statistical Packages for the Social Sciences version 20. The final model consists of four constructs, namely, infant care, personal care, household activities, and social and community activities, with 18 items demonstrating acceptable factor loadings, domain to domain correlation, and best fit (Chi-squared/degree of freedom = 1.678; Tucker-Lewis index = 0.923; comparative fit index = 0.936; and root mean square error of approximation = 0.080). Composite reliability and average variance extracted of the domains ranged from 0.659 to 0.921 and from 0.499 to 0.628, respectively. The study suggested that the four-factor model with 18 items of the Malay version of IFSAC was acceptable to be used to measure functional status after childbirth because it is valid, reliable, and simple.

  8. TLD postal dose intercomparison for megavoltage units in Poland.

    PubMed

    Izewska, J; Gajewski, R; Gwiazdowska, B; Kania, M; Rostkowska, J

    1995-08-01

    The aim of the TLD pilot study was to investigate and reduce the uncertainties involved in the measurement of absorbed dose and to improve the consistency of dose determination in the regional radiotherapy centres in Poland. The intercomparison was organized by the SSDL. It covered absorbed dose measurements under reference conditions for Co-60, high-energy X-rays and electron beams. LiF powder type MT-N was used for the irradiations and read with the Harshaw TLD reader model 2000B/2000C. The TLD system was set up, and an analysis of the factors influencing the accuracy of absorbed dose measurements with TL detectors was performed to evaluate and minimize the measurement uncertainty. A fading not exceeding 2% in 12 weeks was found. The relative energy correction factor did not exceed 3% for X-rays in the range 4-15 MV, and 4% for electron beams between 6 and 20 MeV. A total of 34 beams was checked. A deviation of +/- 3.5% between stated and evaluated dose was considered acceptable for photons, and +/- 5% for electron beams. The results for Co-60, high-energy X-rays and electron beams showed that there were two, three and no centres, respectively, beyond acceptance levels. The sources of error for all deviations outside this range were thoroughly investigated, discussed and corrected; however, two deviations remained unexplained. The pilot study resulted in an improvement of the accuracy and consistency of dosimetry in Poland.
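
    The acceptance check described above reduces to a percent deviation between the TLD-evaluated and stated dose, tested against the +/-3.5% (photon) or +/-5% (electron) limits. A sketch with invented beam records, not the intercomparison's data:

        def deviation_pct(evaluated, stated):
            """Percent deviation of the TLD-evaluated dose from the stated dose."""
            return 100.0 * (evaluated - stated) / stated

        LIMITS = {"photon": 3.5, "electron": 5.0}   # acceptance limits, percent

        beams = [  # (centre, modality, stated dose Gy, TLD-evaluated dose Gy)
            ("centre A", "photon", 2.00, 2.03),
            ("centre B", "photon", 2.00, 2.09),
            ("centre C", "electron", 2.00, 1.93),
        ]
        for centre, modality, stated, evaluated in beams:
            d = deviation_pct(evaluated, stated)
            ok = abs(d) <= LIMITS[modality]
            print(f"{centre}: {d:+.1f}% -> {'within' if ok else 'OUTSIDE'} tolerance")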

  9. Single neutral pion electroproduction off the proton in the resonance region

    NASA Astrophysics Data System (ADS)

    Markov, Nikolay

    We study pi0 electroproduction off the proton in the invariant mass range W = 1.1 -- 1.8 GeV for the p pi0 system, over a broad range of photon virtualities, Q2 = 0.4 -- 1.0 GeV2. The experiment was conducted in Hall B at Jefferson Lab with the CEBAF Large Acceptance Spectrometer (CLAS) detector, which is uniquely suited for spectroscopic measurements. The channel is identified by determining the electron using information from the forward-angle electromagnetic calorimeter and the drift chambers, and the proton using time-of-flight and drift chamber signals. Kinematical relations between the charged particles separate the single-pion events. The detector efficiency and the geometrical acceptance are studied with a GEANT simulation of CLAS. Radiative corrections for the exclusive channel are developed and applied. The full differential cross section of pi0 electroproduction is measured with high statistical accuracy and small systematic error. The quality of the overall data analysis is checked against firmly established benchmark reactions. The structure functions and Legendre multipoles are extracted and show the sensitivity of our measurements to the different resonance electroproduction amplitudes. An advanced phenomenological approach will be used to extract the Q2 evolution of the electromagnetic transition form factors of the different resonance states in a combined analysis of the major exclusive channels. This information will notably improve our understanding of the structure of the nucleon.

  10. Development of primary standards for mass spectrometry to increase accuracy in quantifying environmental contaminants.

    PubMed

    Oates, R P; Mcmanus, Michelle; Subbiah, Seenivasan; Klein, David M; Kobelski, Robert

    2017-07-14

    Internal standards are essential in electrospray ionization liquid chromatography-mass spectrometry (ESI-LC-MS) to correct for systematic error associated with ionization suppression and/or enhancement. A wide array of instrument setups and interfaces has created difficulty in comparing the quantitation of absolute analyte response across laboratories. This communication demonstrates the use of primary standards as operational qualification standards for LC-MS instruments and their comparison with commonly accepted internal standards. In monitoring the performance of internal standards for perfluorinated compounds, potassium hydrogen phthalate (KHP) presented lower inter-day variability in instrument response than a commonly accepted deuterated perfluorinated internal standard (d3-PFOS), with percent relative standard deviations less than or equal to 6%. The inter-day precision of KHP was greater than that of d3-PFOS over a 28-day monitoring of perfluorooctanesulfonic acid (PFOS), across concentrations ranging from 0 to 100 μg/L. The primary standard trometamol (Trizma) performed as well as the known internal standards simeton and tris(2-chloroisopropyl) phosphate (TCPP), with intra-day precision of the Trizma response as low as 7% RSD on day 28. The inter-day precision of the Trizma response was found to be greater than that of simeton and TCPP, across concentrations of neonicotinoids ranging from 1 to 100 μg/L. This study explores the potential of primary standards to be incorporated into LC-MS/MS methodology to improve quantitative accuracy in environmental contaminant analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or the β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure the safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
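
    A β-content (0.9) criterion of the kind discussed above accepts a method when a two-sided tolerance interval, covering proportion p of future results with 90% confidence, lies within the total-error limits. The sketch below uses Howe's classical approximation for the tolerance factor k; the recovery data and the 95-105% limits are invented for illustration, and the paper's generalized-pivotal-quantity test is not reproduced here.

        import numpy as np
        from scipy import stats

        def howe_k(n, p=0.90, conf=0.90):
            """Approximate two-sided beta-content tolerance factor (Howe, 1969)."""
            nu = n - 1
            z = stats.norm.ppf((1 + p) / 2)
            chi2 = stats.chi2.ppf(1 - conf, nu)          # lower chi-square quantile
            return z * np.sqrt(nu * (1 + 1 / n) / chi2)

        recoveries = np.array([98.2, 101.5, 99.0, 100.8, 97.9, 100.1,
                               99.6, 101.0, 98.8, 100.4])    # % of nominal, invented
        m, s, n = recoveries.mean(), recoveries.std(ddof=1), recoveries.size
        k = howe_k(n)
        lo, hi = m - k * s, m + k * s
        print(f"90%/90% tolerance interval: [{lo:.1f}, {hi:.1f}] %")
        # Validity check: does the interval sit inside the total-error limits?
        print("accept" if lo >= 95.0 and hi <= 105.0 else "reject")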

  12. Active Correction of Aberrations of Low-Quality Telescope Optics

    NASA Technical Reports Server (NTRS)

    Hemmati, Hamid; Chen, Yijian

    2007-01-01

    A system of active optics that includes a wavefront sensor and a deformable mirror has been demonstrated to be an effective means of partly correcting wavefront aberrations introduced by fixed optics (lenses and mirrors) in telescopes. It is envisioned that after further development, active optics would be used to reduce wavefront aberrations of about one wave or less in telescopes having aperture diameters of the order of meters or tens of meters. Although this remaining amount of aberration would be considered excessive in scientific applications in which diffraction-limited performance is required, it would be acceptable for free-space optical-communication applications at wavelengths of the order of 1 μm. To prevent misunderstanding, it is important to state the following: The technological discipline of active optics, in which the primary or secondary mirror of a telescope is directly and dynamically tilted, distorted, and/or otherwise varied to reduce wavefront aberrations, has existed for decades. The term active optics does not necessarily mean the same thing as does adaptive optics, even though active optics and adaptive optics are related. The term "adaptive optics" is often used to refer to wavefront correction at speeds characterized by frequencies ranging up to between hundreds of hertz and several kilohertz, high enough to enable mitigation of the adverse effects of fluctuations in atmospheric refraction upon propagation of light beams. The term active optics usually appears in reference to wavefront correction at significantly lower speeds, characterized by times ranging from about 1 second to as long as minutes. Hence, the novelty of the present development lies not in the basic concept of active or adaptive optics, but in the envisioned application of active optics in conjunction with a deformable mirror to achieve acceptably small wavefront errors in free-space optical communication systems that include multi-meter-diameter telescope mirrors that are relatively inexpensive because their surface figures are characterized by errors as large as about 10 waves. Figure 1 schematically depicts the apparatus used in an experiment to demonstrate such an application on a reduced scale involving a 30-cm-diameter aperture.

  13. Artifact-resistant superimposition of digital dental models and cone-beam computed tomography images.

    PubMed

    Lin, Hsiu-Hsia; Chiang, Wen-Chung; Lo, Lun-Jou; Sheng-Pin Hsu, Sam; Wang, Chien-Hsuan; Wan, Shu-Yen

    2013-11-01

    Combining the maxillofacial cone-beam computed tomography (CBCT) model with its corresponding digital dental model enables an integrated 3-dimensional (3D) representation of skeletal structures, teeth, and occlusions. Undesired artifacts, however, introduce difficulties in the superimposition of the two models. We have proposed an artifact-resistant surface-based registration method that is robust and clinically applicable and that does not require markers. A CBCT bone model and a laser-scanned dental model obtained from the same patient were used in developing the method and examining the accuracy of the superimposition. Our method included 4 phases. The first phase was to segment the maxilla from the mandible in the CBCT model. The second phase was to conduct an initial registration to bring the digital dental model and the maxilla and mandible sufficiently close to each other. Third, we manually selected at least 3 corresponding regions on both models by smearing patches on the 3D surfaces. The last phase was to superimpose the digital dental model into the maxillofacial model. Each superimposition process was performed twice by 2 operators on the same object to investigate the intra- and interoperator differences. All collected objects were divided into 3 groups with various degrees of artifacts: artifact-free, critical artifacts, and severe artifacts. The mean errors and root-mean-square (RMS) errors were used to evaluate the accuracy of the superimposition results. Repeated measures analysis of variance and the Wilcoxon rank sum test were used to calculate the intraoperator reproducibility and interoperator reliability. Twenty-four maxilla and mandible objects for evaluation were obtained from 14 patients. The experimental results showed that the mean errors between the 2 original models in the resulting fused model ranged from 0.10 to 0.43 mm and that the RMS errors ranged from 0.13 to 0.53 mm. These data were consistent with previously used methods and were clinically acceptable. All measurements of the proposed study exhibited desirable intraoperator reproducibility and interoperator reliability. Regarding the intra- and interoperator mean errors and RMS errors in the nonartifact or critical artifact group, no significant difference between the repeated trials or between operators was observed (P > .05). The results of the present study have shown that the proposed regional surface-based registration can robustly and accurately superimpose a digital dental model into its corresponding CBCT model. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  14. Attenuation-emission alignment in cardiac PET/CT based on consistency conditions

    PubMed Central

    Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.

    2010-01-01

    Purpose: In cardiac PET and PET/CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and measured patient data from multiple cardiac ammonia PET/CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global-minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET/CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256

  15. Electro-driven extraction of polar compounds using agarose gel as a new membrane: Determination of amino acids in fruit juice and human plasma samples.

    PubMed

    Sedehi, Samira; Tabani, Hadi; Nojavan, Saeed

    2018-03-01

    In this work, polypropylene hollow fiber was replaced by agarose gel in conventional electro membrane extraction (EME) to develop a novel approach. The proposed EME method was then employed to extract two amino acids (tyrosine and phenylalanine) as model polar analytes, followed by HPLC-UV. The method showed acceptable results under optimized conditions. This green methodology outperformed conventional EME, and required neither organic solvents nor carriers. The effective parameters such as the pH values of the acceptor and the donor solutions, the thickness and pH of the gel, the extraction voltage, the stirring rate, and the extraction time were optimized. Under the optimized conditions (acceptor solution pH 1.5; donor solution pH 2.5; agarose gel thickness 7 mm; agarose gel pH 1.5; stirring rate of the sample solution 1000 rpm; extraction potential 40 V; and extraction time 15 min), the limits of detection and quantification were 7.5 ng mL⁻¹ and 25 ng mL⁻¹, respectively. The extraction recoveries were between 56.6% and 85.0%, and the calibration curves were linear with correlation coefficients above 0.996 over a concentration range of 25.0-1000.0 ng mL⁻¹ for both amino acids. The intra- and inter-day precisions were in the range of 5.5-12.5%, and relative errors were smaller than 12.0%. Finally, the optimized method was successfully applied to preconcentrate, clean up, and quantify amino acids in watermelon and grapefruit juices as well as a plasma sample, and acceptable relative recoveries in the range of 53.9-84.0% were obtained. Copyright © 2017 Elsevier B.V. All rights reserved.
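
    As a rough illustration of the validation arithmetic reported above, the sketch below fits a linear calibration curve, checks the correlation coefficient, and computes a relative recovery; the concentrations and detector responses are hypothetical, not the study's data.

        import numpy as np

        conc = np.array([25.0, 50.0, 100.0, 250.0, 500.0, 1000.0])  # ng/mL
        area = np.array([1.1, 2.0, 4.2, 10.4, 20.9, 41.5])  # detector response

        slope, intercept = np.polyfit(conc, area, 1)   # calibration curve
        r = np.corrcoef(conc, area)[0, 1]              # linearity check

        def relative_recovery(found, spiked):
            """Relative recovery (%) = found / spiked x 100."""
            return 100.0 * found / spiked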

  16. An alternative sensor-based method for glucose monitoring in children and young people with diabetes

    PubMed Central

    Edge, Julie; Acerini, Carlo; Campbell, Fiona; Hamilton-Shield, Julian; Moudiotis, Chris; Rahman, Shakeel; Randell, Tabitha; Smith, Anne; Trevelyan, Nicola

    2017-01-01

    Objective: To determine accuracy, safety and acceptability of the FreeStyle Libre Flash Glucose Monitoring System in the paediatric population. Design, setting and patients: Eighty-nine study participants, aged 4–17 years, with type 1 diabetes were enrolled across 9 diabetes centres in the UK. A factory calibrated sensor was inserted on the back of the upper arm and used for up to 14 days. Sensor glucose measurements were compared with capillary blood glucose (BG) measurements. Sensor results were masked to participants. Results: Clinical accuracy of sensor results versus BG results was demonstrated, with 83.8% of results in zone A and 99.4% of results in zones A and B of the consensus error grid. Overall mean absolute relative difference (MARD) was 13.9%. Sensor accuracy was unaffected by patient factors such as age, body weight, sex, method of insulin administration or time of use (day vs night). Participants were in the target glucose range (3.9–10.0 mmol/L) ∼50% of the time (mean 12.1 hours/day), with an average of 2.2 hours/day and 9.5 hours/day in hypoglycaemia and hyperglycaemia, respectively. Sensor application, wear/use of the device and comparison to self-monitoring of blood glucose were rated favourably by most participants/caregivers (84.3–100%). Five device-related adverse events were reported across a range of participant ages. Conclusions: Accuracy, safety and user acceptability of the FreeStyle Libre System were demonstrated for the paediatric population. Accuracy of the system was unaffected by subject characteristics, making it suitable for a broad range of children and young people with diabetes. Trial registration number: NCT02388815. PMID:28137708
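
    The headline accuracy figure, mean absolute relative difference (MARD), is straightforward to compute. A minimal sketch with invented paired readings (not trial data):

        import numpy as np

        def mard(sensor, reference):
            """MARD (%) = mean(|sensor - reference| / reference) x 100."""
            s, r = np.asarray(sensor), np.asarray(reference)
            return 100.0 * np.mean(np.abs(s - r) / r)

        sensor_bg = [5.1, 7.8, 3.6, 11.2]      # mmol/L, hypothetical readings
        capillary_bg = [5.5, 7.0, 4.0, 10.1]   # paired reference values
        print(f"MARD = {mard(sensor_bg, capillary_bg):.1f}%")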

  17. Post-manufacturing, 17-times acceptable raw bit error rate enhancement, dynamic codeword transition ECC scheme for highly reliable solid-state drives, SSDs

    NASA Astrophysics Data System (ADS)

    Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken

    2011-04-01

    A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte, ..., 32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate (BER) before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital still camera, and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is thus improved after manufacturing without cost penalty. Compared with the conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte, or 2 KByte is used, and 98% lower power consumption is realized. At the end of the SSD's life, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.
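
    The core idea, growing the codeword only when wear demands it, can be sketched as a simple policy. The thresholds below are invented for illustration; the paper derives its transitions from the monitored error number or write/erase cycles.

        CODEWORDS = [512, 1024, 2048, 4096, 8192, 16384, 32768]  # bytes

        # Hypothetical acceptable raw BER per codeword size; longer codewords
        # tolerate a higher raw BER at a fixed user-data-to-parity ratio.
        ACCEPTABLE_BER = {512: 1e-4, 1024: 2e-4, 2048: 4e-4, 4096: 8e-4,
                          8192: 1.6e-3, 16384: 3.2e-3, 32768: 6.4e-3}

        def next_codeword(current: int, measured_raw_ber: float) -> int:
            """Step the codeword up one size when the monitored raw BER
            exceeds what the current codeword can tolerate."""
            if (measured_raw_ber > ACCEPTABLE_BER[current]
                    and current != CODEWORDS[-1]):
                return CODEWORDS[CODEWORDS.index(current) + 1]
            return current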

  18. Test and Evaluation Master Plan (TEMP) for the Navy Occupational Health Information Management System (NOHIMS). Appendix A through Appendix U.

    DTIC Science & Technology

    1985-04-24

    reliability / downtime / communication lines / man-machine interface / other. 2. A noticeable (to the user) failure happens about ... and that number has been ... improving / steady / getting worse. 3. The number of failures/errors for NOHIMS is acceptable / somewhat acceptable / somewhat unacceptable / unacceptable ... somewhat fast / somewhat slow / slow. 7. When a NOHIMS failure occurs, it affects the day-to-day provision of medical care because work procedures must

  19. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  20. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    PubMed

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

    Three-dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  1. Use of error grid analysis to evaluate acceptability of a point of care prothrombin time meter.

    PubMed

    Petersen, John R; Vonmarensdorf, Hans M; Weiss, Heidi L; Elghetany, M Tarek

    2010-02-01

    Statistical methods (linear regression, correlation analysis, etc.) are frequently employed in comparing methods in the central laboratory (CL). Assessing acceptability of point of care testing (POCT) equipment, however, is more difficult because statistically significant biases may not have an impact on clinical care. We showed how error grid (EG) analysis can be used to evaluate POCT PT INR against the CL. We compared results from 103 patients seen in an anti-coagulation clinic who were on Coumadin maintenance therapy, using fingerstick samples for POCT (Roche CoaguChek XS and S) and citrated venous blood samples for the CL (Stago STAR). To compare clinical acceptability of results we developed an EG with zones A, B, C and D. Using second-order polynomial equation analysis, POCT results correlate highly with the CL for the CoaguChek XS (R² = 0.955) and CoaguChek S (R² = 0.93), respectively, but this does not indicate whether POCT results are clinically interchangeable with the CL. Using the EG it is readily apparent which levels can be considered clinically identical to the CL despite analytical bias. We have demonstrated the usefulness of the EG in determining acceptability of POCT PT INR testing and how it can be used to determine cut-offs where differences in POCT results may impact clinical care. Copyright 2009 Elsevier B.V. All rights reserved.

  2. Large-scale retrospective evaluation of regulated liquid chromatography-mass spectrometry bioanalysis projects using different total error approaches.

    PubMed

    Tan, Aimin; Saffaj, Taoufiq; Musuku, Adrien; Awaiye, Kayode; Ihssane, Bouchaib; Jhilal, Fayçal; Sosse, Saad Alaoui; Trabelsi, Fethi

    2015-03-01

    The current approach in regulated LC-MS bioanalysis, which evaluates the precision and trueness of an assay separately, has long been criticized for inadequate balancing of lab-customer risks. Accordingly, different total error approaches have been proposed. The aims of this research were to evaluate the aforementioned risks in reality and the difference among four common total error approaches (β-expectation, β-content, uncertainty, and risk profile) through retrospective analysis of regulated LC-MS projects. Twenty-eight projects (14 validations and 14 productions) were randomly selected from two GLP bioanalytical laboratories, which represent a wide variety of assays. The results show that the risk of accepting unacceptable batches did exist with the current approach (9% and 4% of the evaluated QC levels failed for validation and production, respectively). The fact that the risk was not widespread was only because the precision and bias of modern LC-MS assays are usually much better than the minimum regulatory requirements. Despite minor differences in magnitude, very similar accuracy profiles and/or conclusions were obtained from the four different total error approaches. High correlation was even observed in the width of bias intervals. For example, the mean width of SFSTP's β-expectation is 1.10-fold (CV=7.6%) of that of Saffaj-Ihssane's uncertainty approach, while the latter is 1.13-fold (CV=6.0%) of that of Hoffman-Kringle's β-content approach. To conclude, the risk of accepting unacceptable batches was real with the current approach, suggesting that total error approaches should be used instead. Moreover, any of the four total error approaches may be used because of their overall similarity. Lastly, the difficulties/obstacles associated with the application of total error approaches in routine analysis and their desirable future improvements are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
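
    For readers unfamiliar with the first of these approaches, the sketch below computes a single-series β-expectation tolerance interval for one QC level. This is a simplification: the full SFSTP accuracy profile pools between- and within-run variance components, and the QC values here are invented.

        import numpy as np
        from scipy import stats

        def beta_expectation_interval(measured, nominal, beta=0.95):
            """Relative-bias interval (%) expected to contain a proportion
            beta of future single results (single-series form)."""
            rel = 100.0 * (np.asarray(measured) - nominal) / nominal
            n = rel.size
            half = (stats.t.ppf((1 + beta) / 2, n - 1)
                    * rel.std(ddof=1) * np.sqrt(1 + 1 / n))
            return rel.mean() - half, rel.mean() + half

        qc = [98.2, 101.5, 97.8, 102.3, 100.1, 99.4]   # nominal 100 ng/mL
        lo, hi = beta_expectation_interval(qc, 100.0)
        # Accept the level if [lo, hi] lies within the acceptance limits
        # (e.g. +/-15% in regulated bioanalysis).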

  3. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
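
    The control-chart quantities named here follow standard individuals-chart formulas. A minimal sketch, not the poster's code; the measurements and specification limits are placeholders:

        import numpy as np

        def spc_summary(x, lsl, usl):
            """Individuals-chart control limits from the mean moving range,
            plus process capability (Cp) and acceptability (Cpk) ratios."""
            mr = np.abs(np.diff(x)).mean()
            sigma = mr / 1.128                 # d2 constant for subgroups of 2
            ucl, lcl = x.mean() + 3 * sigma, x.mean() - 3 * sigma
            cp = (usl - lsl) / (6 * sigma)
            cpk = min(usl - x.mean(), x.mean() - lsl) / (3 * sigma)
            return ucl, lcl, cp, cpk

        energy = np.array([1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97])
        print(spc_summary(energy, lsl=0.90, usl=1.10))  # Cp, Cpk > 1: capable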

  4. Percent body fat estimations in college women using field and laboratory methods: a three-compartment model approach

    PubMed Central

    Moon, Jordan R; Hull, Holly R; Tobkin, Sarah E; Teramoto, Masaru; Karabulut, Murat; Roberts, Michael D; Ryan, Eric D; Kim, So Jung; Dalbo, Vincent J; Walter, Ashley A; Smith, Abbie T; Cramer, Joel T; Stout, Jeffrey R

    2007-01-01

    Background: Methods used to estimate percent body fat can be classified as a laboratory or field technique. However, the validity of these methods compared to multiple-compartment models has not been fully established. This investigation sought to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age women compared to the Siri three-compartment model (3C). Methods: Thirty Caucasian women (21.1 ± 1.5 yrs; 164.8 ± 4.7 cm; 61.2 ± 6.8 kg) had their %fat estimated by BIA using the BodyGram™ computer program (BIA-AK) and population-specific equation (BIA-Lohman), NIR (Futrex® 6100/XL), a quadratic (SF3JPW) and linear (SF3WB) skinfold equation, air-displacement plethysmography (BP), and hydrostatic weighing (HW). Results: All methods produced acceptable total error (TE) values compared to the 3C model. Both laboratory methods produced similar TE values (HW, TE = 2.4%fat; BP, TE = 2.3%fat) when compared to the 3C model, though a significant constant error (CE) was detected for HW (1.5%fat, p ≤ 0.006). The field methods produced acceptable TE values ranging from 1.8-3.8%fat. BIA-AK (TE = 1.8%fat) yielded the lowest TE among the field methods, while BIA-Lohman (TE = 2.1%fat) and NIR (TE = 2.7%fat) produced lower TE values than both skinfold equations (TE > 2.7%fat) compared to the 3C model. Additionally, the SF3JPW %fat estimation equation resulted in a significant CE (2.6%fat, p ≤ 0.007). Conclusion: Data suggest that the BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian women. When the use of a laboratory method is not feasible, NIR, BIA-AK, BIA-Lohman, SF3JPW, and SF3WB are acceptable field methods to estimate %fat in this population. PMID:17988393

  5. Online Deviation Detection for Medical Processes

    PubMed Central

    Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.

    2014-01-01

    Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343

  6. June and August median streamflows estimated for ungaged streams in southern Maine

    USGS Publications Warehouse

    Lombard, Pamela J.

    2010-01-01

    Methods for estimating June and August median streamflows were developed for ungaged, unregulated streams in southern Maine. The methods apply to streams with drainage areas ranging in size from 0.4 to 74 square miles, with percentage of basin underlain by a sand and gravel aquifer ranging from 0 to 84 percent, and with distance from the centroid of the basin to a Gulf of Maine line paralleling the coast ranging from 14 to 94 miles. Equations were developed with data from 4 long-term continuous-record streamgage stations and 27 partial-record streamgage stations. Estimates of median streamflows at the continuous-record and partial-record stations are presented. A mathematical technique for estimating standard low-flow statistics, such as June and August median streamflows, at partial-record streamgage stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term (at least 10 years of record) continuous-record streamgage stations (index stations). Weighted least-squares regression analysis (WLS) was used to relate estimates of June and August median streamflows at streamgage stations to basin characteristics at these same stations to develop equations that can be used to estimate June and August median streamflows on ungaged streams. WLS accounts for different periods of record at the gaging stations. Three basin characteristics (drainage area, percentage of basin underlain by a sand and gravel aquifer, and distance from the centroid of the basin to a Gulf of Maine line paralleling the coast) are used in the final regression equation to estimate June and August median streamflows for ungaged streams. The three-variable equation to estimate June median streamflow has an average standard error of prediction from -35 to 54 percent. The three-variable equation to estimate August median streamflow has an average standard error of prediction from -45 to 83 percent. Simpler one-variable equations that use only drainage area to estimate June and August median streamflows were developed for use when less accuracy is acceptable. These equations have average standard errors of prediction from -46 to 87 percent and from -57 to 133 percent, respectively.
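
    The regression step can be sketched in a few lines. The station data and coefficients below are illustrative only, not the report's; weighting stations by record length approximates the WLS scheme described.

        import numpy as np

        # Station data (invented): drainage area (mi^2), % of basin underlain
        # by a sand and gravel aquifer, distance to the coastal line (mi).
        X = np.array([[3.2, 10.0, 25.0],
                      [12.5, 40.0, 60.0],
                      [25.0, 20.0, 45.0],
                      [48.0, 5.0, 80.0],
                      [70.0, 60.0, 30.0]])
        y = np.log(np.array([1.8, 9.5, 15.0, 20.0, 55.0]))  # log June median
        years = np.array([12.0, 35.0, 15.0, 20.0, 28.0])    # record lengths

        # WLS via scaled ordinary least squares: each station's weight is its
        # record length (sqrt enters because lstsq minimizes squared residuals).
        A = np.column_stack([np.ones(len(y)), np.log(X[:, 0]), X[:, 1], X[:, 2]])
        w = np.sqrt(years)
        coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)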

  7. Minimizing the Disruptive Effects of Prospective Memory in Simulated Air Traffic Control

    PubMed Central

    Loft, Shayne; Smith, Rebekah E.; Remington, Roger

    2015-01-01

    Prospective memory refers to remembering to perform an intended action in the future. Failures of prospective memory can occur in air traffic control. In two experiments, we examined the utility of external aids for facilitating air traffic management in a simulated air traffic control task with prospective memory requirements. Participants accepted and handed off aircraft and detected aircraft conflicts. The prospective memory task involved remembering to deviate from a routine operating procedure when accepting target aircraft. External aids that contained details of the prospective memory task appeared and flashed when target aircraft needed acceptance. In Experiment 1, external aids presented either adjacent or non-adjacent to each of the 20 target aircraft presented over the 40-min test phase reduced prospective memory error by 11% compared to a condition without external aids. In Experiment 2, only a single target aircraft was presented a significant time (39–42 min) after presentation of the prospective memory instruction, and the external aids reduced prospective memory error by 34%. In both experiments, costs to the efficiency of non-prospective memory air traffic management (non-target aircraft acceptance response time, conflict detection response time) were reduced by non-adjacent aids compared to no aids or adjacent aids. In contrast, in both experiments, the efficiency of the prospective memory air traffic management (target aircraft acceptance response time) was facilitated by adjacent aids compared to non-adjacent aids. Together, these findings have potential implications for the design of automated alerting systems to maximize multi-task performance in work settings where operators monitor and control demanding perceptual displays. PMID:24059825

  8. 17 CFR Appendix B to Part 37 - Guidance on, and Acceptable Practices in, Compliance with Core Principles

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... should include full customer restitution where customer harm is demonstrated, except where the amount of... or external audit findings, self-reported errors, or through validated complaints. (C) Requirements...

  9. Analysis of case-only studies accounting for genotyping error.

    PubMed

    Cheng, K F

    2007-03-01

    The case-only design provides one approach to assess possible interactions between genetic and environmental factors. It has been shown that if these factors are conditionally independent, then a case-only analysis is not only valid but also very efficient. However, a drawback of the case-only approach is that its conclusions may be biased by genotyping errors. In this paper, our main aim is to propose a method for analysis of case-only studies when these errors occur. We show that the bias can be adjusted through the use of internal validation data, which are obtained by genotyping some sampled individuals twice. Our analysis is based on a simple and yet highly efficient conditional likelihood approach. Simulation studies considered in this paper confirm that the new method has acceptable performance under genotyping errors.

  10. Microscopic saw mark analysis: an empirical approach.

    PubMed

    Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles

    2015-01-01

    Microscopic saw mark analysis is a well published and generally accepted qualitative analytical method. However, little research has focused on identifying and mitigating potential sources of error associated with the method. The present study proposes the use of classification trees and random forest classifiers as an optimal, statistically sound approach to mitigating the potential for inter-observer variability and outcome error in microscopic saw mark analysis. The statistical model was applied to 58 experimental saw marks created with four types of saws. The saw marks were made in fresh human femurs obtained through anatomical gift and were analyzed using a Keyence digital microscope. The statistical approach weighed the variables based on discriminatory value and produced decision trees with an associated outcome error rate of 8.62-17.82%. © 2014 American Academy of Forensic Sciences.
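
    As an illustration of the statistical approach named here, the sketch below trains a random forest on invented saw-mark features and reports the out-of-bag error as a stand-in for the study's outcome error rate; the feature importances mirror the weighting of variables by discriminatory value.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        X = rng.normal(size=(58, 6))      # 58 marks x 6 measured characteristics
        y = rng.integers(0, 4, size=58)   # four saw types

        clf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                     random_state=0)
        clf.fit(X, y)
        print(f"Out-of-bag error rate: {1 - clf.oob_score_:.2%}")
        print("Feature importances:", clf.feature_importances_)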

  11. Pesticides, Neurodevelopmental Disagreement, and Bradford Hill's Guidelines.

    PubMed

    Shrader-Frechette, Kristin; ChoGlueck, Christopher

    2016-06-27

    Neurodevelopmental disorders such as autism affect one-eighth of all U.S. newborns. Yet scientists, accessing the same data and using Bradford-Hill guidelines, draw different conclusions about the causes of these disorders. They disagree about the pesticide-harm hypothesis, that typical United States prenatal pesticide exposure can cause neurodevelopmental damage. This article aims to discover whether apparent scientific disagreement about this hypothesis might be partly attributable to questionable interpretations of the Bradford-Hill causal guidelines. Key scientists, who claim to employ Bradford-Hill causal guidelines, yet fail to accept the pesticide-harm hypothesis, fall into errors of trimming the guidelines, requiring statistically-significant data, and ignoring semi-experimental evidence. However, the main scientists who accept the hypothesis appear to commit none of these errors. Although settling disagreement over the pesticide-harm hypothesis requires extensive analysis, this article suggests that at least some conflicts may arise because of questionable interpretations of the guidelines.

  12. Thin film concentrator panel development

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. K.

    1982-01-01

    The development and testing of a rigid panel concept that utilizes a thin film reflective surface for application to a low-cost point-focusing solar concentrator is discussed. It is shown that a thin film reflective surface is acceptable for use on solar concentrators, including 1500 °F applications. Additionally, it is shown that a formed steel sheet substrate is a good choice for concentrator panels. The panel has good optical properties, acceptable forming tolerances, an environmentally resistant substrate and stiffeners, and adaptability to production rates ranging from low-volume to mass production. Computer simulations of the concentrator optics were run using the selected reflector panel design. Experimentally determined values for reflector surface specularity and reflectivity along with dimensional data were used in the analysis. The simulations provided intercept factor and net energy into the aperture as a function of aperture size for different surface errors and pointing errors. Point source and Sun source optical tests were also performed.

  13. Effect of Processing on Postprandial Glycemic Response and Consumer Acceptability of Lentil-Containing Food Items.

    PubMed

    Ramdath, D Dan; Wolever, Thomas M S; Siow, Yaw Chris; Ryland, Donna; Hawke, Aileen; Taylor, Carla; Zahradka, Peter; Aliani, Michel

    2018-05-11

    The consumption of pulses is associated with many health benefits. This study assessed post-prandial blood glucose response (PPBG) and the acceptability of food items containing green lentils. In human trials we: (i) defined processing methods (boiling, pureeing, freezing, roasting, spray-drying) that preserve the PPBG-lowering feature of lentils; (ii) used an appropriate processing method to prepare lentil food items, and compared the PPBG and relative glycemic responses (RGR) of lentil and control foods; and (iii) conducted consumer acceptability of the lentil foods. Eight food items were formulated from either whole lentil puree (test) or instant potato (control). In separate PPBG studies, participants consumed fixed amounts of available carbohydrates from test foods, control foods, or a white bread standard. Finger prick blood samples were obtained at 0, 15, 30, 45, 60, 90, and 120 min after the first bite, analyzed for glucose, and used to calculate incremental area under the blood glucose response curve and RGR; glycemic index (GI) was measured only for processed lentils. Mean GI (± standard error of the mean) of processed lentils ranged from 25 ± 3 (boiled) to 66 ± 6 (spray-dried); the GI of spray-dried lentils was significantly (p < 0.05) higher than boiled, pureed, or roasted lentil. Overall, lentil-based food items all elicited significantly lower RGR compared to potato-based items (40 ± 3 vs. 73 ± 3%; p < 0.001). Apricot chicken, chicken pot pie, and lemony parsley soup had the highest overall acceptability corresponding to "like slightly" to "like moderately". Processing influenced the PPBG of lentils, but food items formulated from lentil puree significantly attenuated PPBG. Formulation was associated with significant differences in sensory attributes.
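
    The glycemic quantities reported above reduce to simple area calculations. The sketch below computes an incremental area under the curve (iAUC) with the trapezoidal rule, clipping excursions below the fasting baseline (a common simplification of the cut-off iAUC), and a relative glycemic response; all readings are invented.

        import numpy as np

        def iauc(t, g):
            """Incremental AUC by the trapezoidal rule, ignoring area
            below the fasting baseline."""
            inc = np.clip(np.asarray(g, float) - g[0], 0.0, None)
            return float(np.sum((inc[1:] + inc[:-1]) / 2.0 * np.diff(t)))

        t = np.array([0, 15, 30, 45, 60, 90, 120.0])             # minutes
        test = np.array([4.8, 6.0, 6.9, 6.4, 5.9, 5.2, 4.9])     # mmol/L
        control = np.array([4.8, 7.2, 8.6, 8.0, 7.1, 5.9, 5.0])  # mmol/L
        rgr = 100.0 * iauc(t, test) / iauc(t, control)           # RGR (%)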

  14. Effect of Processing on Postprandial Glycemic Response and Consumer Acceptability of Lentil-Containing Food Items

    PubMed Central

    Wolever, Thomas M. S.; Hawke, Aileen; Zahradka, Peter; Aliani, Michel

    2018-01-01

    The consumption of pulses is associated with many health benefits. This study assessed post-prandial blood glucose response (PPBG) and the acceptability of food items containing green lentils. In human trials we: (i) defined processing methods (boiling, pureeing, freezing, roasting, spray-drying) that preserve the PPBG-lowering feature of lentils; (ii) used an appropriate processing method to prepare lentil food items, and compared the PPBG and relative glycemic responses (RGR) of lentil and control foods; and (iii) conducted consumer acceptability of the lentil foods. Eight food items were formulated from either whole lentil puree (test) or instant potato (control). In separate PPBG studies, participants consumed fixed amounts of available carbohydrates from test foods, control foods, or a white bread standard. Finger prick blood samples were obtained at 0, 15, 30, 45, 60, 90, and 120 min after the first bite, analyzed for glucose, and used to calculate incremental area under the blood glucose response curve and RGR; glycemic index (GI) was measured only for processed lentils. Mean GI (± standard error of the mean) of processed lentils ranged from 25 ± 3 (boiled) to 66 ± 6 (spray-dried); the GI of spray-dried lentils was significantly (p < 0.05) higher than boiled, pureed, or roasted lentil. Overall, lentil-based food items all elicited significantly lower RGR compared to potato-based items (40 ± 3 vs. 73 ± 3%; p < 0.001). Apricot chicken, chicken pot pie, and lemony parsley soup had the highest overall acceptability corresponding to “like slightly” to “like moderately”. Processing influenced the PPBG of lentils, but food items formulated from lentil puree significantly attenuated PPBG. Formulation was associated with significant differences in sensory attributes. PMID:29751679

  15. Exploiting data representation for fault tolerance

    DOE PAGES

    Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...

    2015-01-06

    Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
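
    The fault model is easy to reproduce. The following minimal sketch flips a chosen bit of an IEEE 754 double, showing why the resulting error is either tiny or very large depending on which field of the representation the flip lands in.

        import struct

        def flip_bit(x, bit):
            """Flip bit 0..63 of the binary64 representation of x."""
            (u,) = struct.unpack("<Q", struct.pack("<d", x))
            (y,) = struct.unpack("<d", struct.pack("<Q", u ^ (1 << bit)))
            return y

        print(flip_bit(1.0, 0))    # low mantissa bit: 1.0000000000000002
        print(flip_bit(1.0, 52))   # lowest exponent bit: 0.5
        print(flip_bit(1.0, 62))   # top exponent bit: overflows to inf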

  16. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and mitigate the ranging error without the recognition of the channel conditions. The entropy is used to measure the randomness of the received signals and the FP can be determined by the decision of the sample which is followed by a great entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by the modeling of the regressor between the characteristics of received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
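
    A hedged sketch of the entropy test described above: slide a window along the received samples, histogram each window, and flag the first sample followed by a sharp entropy decrease. Window length, bin count, and threshold are illustrative choices, not the paper's values.

        import numpy as np

        def window_entropy(x, bins=16):
            """Shannon entropy (bits) of the amplitude histogram of x."""
            counts, _ = np.histogram(x, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return float(-(p * np.log2(p)).sum())

        def detect_first_path(samples, win=64, drop=1.0):
            """Index of the first window followed by a sharp entropy drop."""
            h = np.array([window_entropy(samples[i:i + win])
                          for i in range(len(samples) - win)])
            jumps = np.where(h[:-1] - h[1:] > drop)[0]
            return int(jumps[0]) if jumps.size else None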

  17. Modeling methodology for MLS range navigation system errors using flight test data

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.

  18. User type certification for advanced flight control systems

    NASA Technical Reports Server (NTRS)

    Gilson, Richard D.; Abbott, David W.

    1994-01-01

    Advanced avionics through flight management systems (FMS) coupled with autopilots can now precisely control aircraft from takeoff to landing. Clearly, this has been the most important improvement in aircraft since the jet engine. Regardless of the eventual capabilities of this technology, it is doubtful that society will soon accept pilotless airliners with the same aplomb they accept driverless passenger trains. Flight crews are still needed to deal with inputting clearances, taxiing, in-flight rerouting, unexpected weather decisions, and emergencies; yet it is well known that the contribution of human error far exceeds that of current hardware or software systems. Thus human error remains, and is even increasing in percentage terms, the largest contributor to total system error. Currently, the flight crew is regulated by a layered system of certification: by operation, e.g., airline transport pilot versus private pilot; by category, e.g., airplane versus helicopter; by class, e.g., single engine land versus multi-engine land; and by type (for larger aircraft and jet powered aircraft), e.g., Boeing 767 or Airbus A320. Nothing in the certification process now requires in-depth proficiency with specific types of avionics systems despite their prominent role in aircraft control and guidance.

  19. A large-scale test of free-energy simulation estimates of protein-ligand binding affinities.

    PubMed

    Mikulskis, Paulius; Genheden, Samuel; Ryde, Ulf

    2014-10-27

    We have performed a large-scale test of alchemical perturbation calculations with the Bennett acceptance-ratio (BAR) approach to estimate relative affinities for the binding of 107 ligands to 10 different proteins. Employing 20-Å truncated spherical systems and only one intermediate state in the perturbations, we obtain an error of less than 4 kJ/mol for 54% of the studied relative affinities and a precision of 0.5 kJ/mol on average. However, only four of the proteins gave acceptable errors, correlations, and rankings. The results could be improved by using nine intermediate states in the simulations or including the entire protein in the simulations using periodic boundary conditions. However, 27 of the calculated affinities still gave errors of more than 4 kJ/mol, and for three of the proteins the results were not satisfactory. This shows that the performance of BAR calculations depends on the target protein and that several transformations gave poor results owing to limitations in the molecular-mechanics force field or the restricted sampling possible within a reasonable simulation time. Still, the BAR results are better than docking calculations for most of the proteins.

  20. Modeling Water Temperature in the Yakima River, Washington, from Roza Diversion Dam to Prosser Dam, 2005-06

    USGS Publications Warehouse

    Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.

    2008-01-01

    A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from -1.3 to 1.6 °C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2 °C and mean error ranged from 0.1 to 1.3 °C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).

  1. SU-F-J-65: Prediction of Patient Setup Errors and Errors in the Calibration Curve from Prompt Gamma Proton Range Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J; Labarbe, R; Sterpin, E

    2016-06-15

    Purpose: To understand the extent to which the prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s, θ, along with an offset u in the CC, were optimized using a simple Euclidian distance between the default ranges and the ranges produced by the given RM. Results: The application of our method lead to the maximal overrange of 2.0 mm and underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded: 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity to correct for setup errors and the errors in the calibration curve. The simplicity and speed of our method makes it a good candidate for being implemented as a tool for in-room adaptive therapy. This work also demonstrates that the prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.

  2. Posterior error probability in the Mu-II Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-II Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit, as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and gave false indications of error on 0.2% of the acquisitions.

  3. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.

  4. Effect of optical aberrations on intraocular pressure measurements using a microscale optical implant in ex vivo rabbit eyes

    NASA Astrophysics Data System (ADS)

    Han, Samuel J.; Park, Haeri; Lee, Jeong Oen; Choo, Hyuck

    2018-04-01

    Elevated intraocular pressure (IOP) is the only modifiable major risk factor of glaucoma. Recently, accurate and continuous IOP monitoring has been demonstrated in vivo using an implantable sensor based on optical resonance with remote optical readout to improve patient outcomes. Here, we investigate the relationship between optical aberrations of ex vivo rabbit eyes and the performance of the IOP sensor using a custom-built setup integrated with a Shack-Hartmann sensor. The sensor readouts became less accurate as the aberrations increased in magnitude, but they remained within the clinically acceptable range. For root-mean-square wavefront errors of 0.10 to 0.94 μm, the accuracy and the signal-to-noise ratio were 0.58 ± 0.32 mm Hg and 15.57 ± 4.85 dB, respectively.

  5. Study of the performance of image restoration under different wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Wang, Xinqiu; Hu, Xinqi

    2016-10-01

    Image restoration is an effective way to improve the quality of images degraded by wavefront aberrations; if the wavefront aberration is too large, however, restoration performance degrades. In this paper, the relationship between the performance of image restoration and the degree of wavefront aberration is studied. A set of different wavefront aberrations is constructed by Zernike polynomials, and the corresponding PSF under white-light illumination is calculated. A set of blurred images is then obtained through convolution. Next we recover the images with the regularized Richardson-Lucy algorithm and use the RMS difference between the original image and the corresponding deblurred image to evaluate the quality of restoration. Consequently, we determine the range of wavefront errors within which the recovered images are acceptable.

  6. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

    This paper presents an improved imaging algorithm for bridge crack detection that optimizes the eight-direction Sobel edge detection operator, making the positioning of edge points more accurate than without the optimization and effectively reducing false edge information, so as to facilitate follow-up processing. In calculating the crack geometry characteristics, we extract the skeleton of a single crack to measure its length. To calculate crack area, we construct an area template by a logical bitwise AND operation on the crack image. Experiments show that the differences between this crack detection method and actual manual measurement are within an acceptable range, meeting the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and it can provide more valid data for proper planning and appropriate performance of bridge maintenance and rehabilitation processes.
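
    The gradient stage of such a detector can be sketched with a standard two-direction Sobel operator; the paper's eight-direction optimization and the skeleton-based length and area measurements are beyond this illustration, and the threshold fraction is an arbitrary choice.

        import numpy as np
        from scipy import ndimage

        def edge_map(image, thresh=0.2):
            """Binary edge map from the gradient magnitude of two
            orthogonal Sobel responses; thresh is a fraction of the max."""
            gx = ndimage.sobel(image.astype(float), axis=1)  # horizontal grad
            gy = ndimage.sobel(image.astype(float), axis=0)  # vertical grad
            mag = np.hypot(gx, gy)
            return mag > thresh * mag.max()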

  7. Prescriptions analysis by clinical pharmacists in the post-operative period: a 4-year prospective study.

    PubMed

    Charpiat, B; Goutelle, S; Schoeffler, M; Aubrun, F; Viale, J-P; Ducerf, C; Leboucher, G; Allenet, B

    2012-09-01

    Clinical pharmacists can help prevent medication errors. However, data are scarce on their role in preventing medication prescription errors in the post-operative period, a high-risk period, as at least two prescribers can intervene, the surgeon and the anesthetist. We aimed to describe and quantify clinical pharmacists' interventions (PIs) during validation of drug prescriptions on a computerized physician order entry system in a post-surgical and post-transplantation ward. We illustrate these interventions, focusing on one clearly identified recurrent problem. In a prospective study lasting 4 years, we recorded drug-related problems (DRPs) detected by pharmacists and whether the physician accepted the PI when prescription modification was suggested. Among 7005 orders, 1975 DRPs were detected. The frequency of PIs remained constant throughout the study period, with 921 PIs (47%) accepted, 383 (19%) refused and 671 (34%) not assessable. The most frequent DRPs concerned improper administration mode (26%), drug interactions (21%) and overdosage (20%). These resulted in a change in the method of administration (25%), dose adjustment (24%) and drug discontinuation (23%), with 307 drugs being concerned by at least one PI. Paracetamol was involved in 26% of overdosage PIs. Erythromycin, as a prokinetic agent, presented a recurrent risk of potentially severe drug-drug interactions, especially with other QT interval-prolonging drugs. Following an educational seminar targeting this problem, the rate of acceptance of PIs concerning this DRP increased. Pharmacists detected many prescription errors that may have clinical implications and could be the basis for educational measures. © 2012 The Authors. Acta Anaesthesiologica Scandinavica © 2012 The Acta Anaesthesiologica Scandinavica Foundation.

  8. A high speed sequential decoder

    NASA Technical Reports Server (NTRS)

    Lum, H., Jr.

    1972-01-01

    The performance and theory of operation for the High Speed Hard Decision Sequential Decoder are delineated. The decoder is a forward error correction system which is capable of accepting data from binary-phase-shift-keyed and quadriphase-shift-keyed modems at input data rates up to 30 megabits per second. Test results show that the decoder is capable of maintaining a composite error rate of 0.00001 at an input Eb/N0 of 5.6 dB. This performance has been obtained with minimum circuit complexity.

  9. Analysis of a range estimator which uses MLS angle measurements

    NASA Technical Reports Server (NTRS)

    Downing, David R.; Linse, Dennis

    1987-01-01

    A concept that uses the azimuth signal from a microwave landing system (MLS) combined with onboard airspeed and heading data to estimate the horizontal range to the runway threshold is investigated. The absolute range error is evaluated for trajectories typical of General Aviation (GA) and commercial airline operations (CAO). These include constant intercept angles for GA and CAO, and complex curved trajectories for CAO. It is found that range errors of 4000 to 6000 feet at the entry of MLS coverage which then reduce to 1000-foot errors at runway centerline intercept are possible for GA operations. For CAO, errors at entry into MLS coverage of 2000 feet which reduce to 300 feet at runway centerline interception are possible.

  10. Treatable inborn errors of metabolism causing intellectual disability: a systematic literature review.

    PubMed

    van Karnebeek, Clara D M; Stockler, Sylvia

    2012-03-01

    Intellectual disability ('developmental delay' at age < 5 years) affects 2.5% of the population worldwide. Recommendations to investigate genetic causes of intellectual disability are based on frequencies of single conditions and on the yield of diagnostic methods, rather than availability of causal therapy. Inborn errors of metabolism constitute a subgroup of rare genetic conditions for which an increasing number of treatments has become available. To identify all currently treatable inborn errors of metabolism presenting with predominantly intellectual disability, we performed a systematic literature review. We applied Cochrane Collaboration guidelines in formulation of PICO and definitions, and searched in Pubmed (1960-2011) and relevant (online) textbooks to identify 'all inborn errors of metabolism presenting with intellectual disability as major feature'. We assessed levels of evidence of treatments and characterised the effect of treatments on IQ/development and related outcomes. We identified a total of 81 'treatable inborn errors of metabolism' presenting with intellectual disability as a major feature, including disorders of amino acids (n=12), cholesterol and bile acid (n=2), creatine (n=3), fatty aldehydes (n=1); glucose homeostasis and transport (n=2); hyperhomocysteinemia (n=7); lysosomes (n=12), metals (n=3), mitochondria (n=2), neurotransmission (n=7); organic acids (n=19), peroxisomes (n=1), pyrimidines (n=2), urea cycle (n=7), and vitamins/co-factors (n=8). 62% (n=50) of all disorders are identified by metabolic screening tests in blood (plasma amino acids, homocysteine) and urine (creatine metabolites, glycosaminoglycans, oligosaccharides, organic acids, pyrimidines). For the remaining disorders (n=31) a 'single test per single disease' approach including primary molecular analysis is required. Therapeutic modalities include: sick-day management, diet, co-factor/vitamin supplements, substrate inhibition, stem cell transplant, gene therapy. Therapeutic effects include improvement and/or stabilisation of psychomotor/cognitive development, behaviour/psychiatric disturbances, seizures, neurologic and systemic manifestations. The levels of available evidence for the various treatments range from Level 1b,c (n=5); Level 2a,b,c (n=14); Level 4 (n=45); to Level 4-5 (n=27). In clinical practice more than 60% of treatments with evidence level 4-5 are internationally accepted as 'standard of care'. This literature review generated the evidence to prioritise treatability in the diagnostic evaluation of intellectual disability. Our results were translated into digital information tools for the clinician (www.treatable-id.org), which are part of a diagnostic protocol, currently implemented for evaluation of effectiveness in our institution. Treatments for these disorders are relatively accessible, affordable and with acceptable side-effects. Evidence for the majority of the therapies is limited however; international collaborations, patient registries, and novel trial methodologies are key in turning the tide for rare diseases such as these. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Research on Measurement Accuracy of Laser Tracking System Based on Spherical Mirror with Rotation Errors of Gimbal Mount Axes

    NASA Astrophysics Data System (ADS)

    Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang

    2018-02-01

    This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influences of rotation errors of the gimbal mount axes on measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, we can easily extract the laser ranging measurement error caused by rotation errors of the gimbal mount axes from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and perpendicular to the optical axis are driven by error motions of the gimbal mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes is recorded in the readings of the laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm if the radial and axial error motions were within ±10 μm. The experimental method simplified the experimental procedure, and the spherical mirror reduced the influences of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.

  12. Optimization of traffic data collection for specific pavement design applications.

    DOT National Transportation Integrated Search

    2006-05-01

    The objective of this study is to establish the minimum traffic data collection effort required for pavement design applications satisfying a maximum acceptable error under a prescribed confidence level. The approach consists of simulating the traffi...
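
    The study's objective, the smallest collection effort that satisfies a maximum acceptable error at a prescribed confidence level, reduces in its simplest classical form to a sample-size calculation; the report's own simulation-based approach is truncated above. A minimal sketch under a normal-error assumption (the function and numbers are illustrative):

        import math
        from scipy.stats import norm

        def min_sample_size(sigma, max_error, confidence=0.95):
            # Smallest n so that the two-sided confidence-interval
            # half-width on a mean, z * sigma / sqrt(n), is <= max_error.
            z = norm.ppf(0.5 + confidence / 2.0)
            return math.ceil((z * sigma / max_error) ** 2)

        # e.g. a traffic load factor with sigma = 20% of the mean,
        # tolerating 5% error at 95% confidence:
        print(min_sample_size(sigma=0.20, max_error=0.05))  # -> 62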

  13. Measurement error is often neglected in medical literature: a systematic review.

    PubMed

    Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten

    2018-06-01

    In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
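
    As a toy illustration of why neglected covariate measurement error matters, the sketch below simulates classical error on an exposure: the naive regression slope is attenuated by the reliability ratio, and dividing by that ratio (regression calibration in its simplest form) approximately recovers the true slope. All numbers are invented:

        import numpy as np

        rng = np.random.default_rng(0)
        n, beta = 100_000, 1.0
        x = rng.normal(size=n)                      # true exposure
        x_obs = x + rng.normal(scale=0.5, size=n)   # classical measurement error
        y = beta * x + rng.normal(scale=1.0, size=n)

        naive = np.polyfit(x_obs, y, 1)[0]          # attenuated slope, ~0.8
        reliability = x.var() / x_obs.var()         # lambda = var(X)/var(X*)
        print(naive, naive / reliability)           # corrected slope, ~1.0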

  14. Extrapolative capability of two models for estimating the soil water retention curve between saturation and oven dryness.

    PubMed

    Lu, Sen; Ren, Tusheng; Lu, Yili; Meng, Ping; Sun, Shiyou

    2014-01-01

    Accurate estimation of the soil water retention curve (SWRC) in the dry region is required to describe the relation between soil water content and matric suction from saturation to oven dryness. In this study, the extrapolative capability of two models for predicting the complete SWRC from limited ranges of soil water retention data was evaluated. When the model parameters were obtained from SWRC data in the 0-1500 kPa range, the FX model (Fredlund and Xing, 1994) estimations agreed well with measurements from saturation to oven dryness, with RMSEs less than 0.01. The GG model (Groenevelt and Grant, 2004) produced larger errors in the dry region, with significantly larger RMSEs and MEs than the FX model. Further evaluations indicated that when SWRC measurements in the 0-100 kPa suction range were used for model establishment, the FX model was capable of producing acceptable SWRCs across the entire water content range. For a higher accuracy, the FX model requires soil water retention data at least in the 0- to 300-kPa range to extend the SWRC to oven dryness. Compared with the Khlosi et al. (2006) model, which requires measurements in the 0-500 kPa range to reproduce the complete SWRCs, the FX model has the advantage of requiring fewer SWRC measurements. Thus the FX modeling approach has the potential to eliminate the need to measure soil water retention in the dry range.
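
    For readers unfamiliar with the FX model, it expresses water content as theta(psi) = C(psi) * theta_s / {ln[e + (psi/a)^n]}^m, where the correction factor C(psi) drives theta to zero at oven dryness (about 10^6 kPa). A minimal fitting sketch follows; the retention data, the fixed theta_s and psi_r, and the starting values are illustrative assumptions, not the paper's soils:

        import numpy as np
        from scipy.optimize import curve_fit

        def fx_swrc(psi, a, n, m, theta_s=0.45, psi_r=1500.0):
            # Fredlund-Xing (1994) curve, psi in kPa; C(psi) forces zero
            # water content at oven dryness (~1e6 kPa).
            C = 1.0 - np.log(1.0 + psi / psi_r) / np.log(1.0 + 1e6 / psi_r)
            return C * theta_s / np.log(np.e + (psi / a) ** n) ** m

        # Hypothetical retention data restricted to the 0-1500 kPa range
        psi_obs = np.array([1, 10, 33, 100, 300, 500, 1000, 1500.0])
        theta_obs = np.array([0.44, 0.41, 0.36, 0.30, 0.24, 0.21, 0.17, 0.15])

        (a, n, m), _ = curve_fit(fx_swrc, psi_obs, theta_obs, p0=[100.0, 1.0, 1.0])
        print(a, n, m, fx_swrc(1e5, a, n, m))  # extrapolated theta near dryness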

  15. Test-retest reliability of 3D ultrasound measurements of the thoracic spine.

    PubMed

    Fölsch, Christian; Schlögel, Stefanie; Lakemeier, Stefan; Wolf, Udo; Timmesfeld, Nina; Skwara, Adrian

    2012-05-01

    To explore the reliability of the Zebris CMS 20 ultrasound analysis system with pointer application for measuring end-range flexion, end-range extension, and neutral kyphosis angle of the thoracic spine. The study was performed within the School of Physiotherapy in cooperation with the Orthopedic Department at a University Hospital. The thoracic spines of 28 healthy subjects were measured. Measurements for neutral kyphosis angle, end-range flexion, and end-range extension were taken once at each time point. The bone landmarks were palpated by one examiner and marked with a pointer containing 2 transmitters using a frequency of 40 kHz. A third transmitter was fixed to the pelvis, and 3 microphones were used as receivers. The angles were calculated by the software. Bland-Altman plots with 95% limits of agreement, intraclass correlations (ICC), standard deviations of mean measurements, and standard errors of measurement were used for statistical analyses. The test-retest reliability in this study was measured within a 24-hour interval. Statistical parameters were used to judge reliability. The mean kyphosis angle was 44.8° with a standard deviation of 17.3° at the first measurement and a mean of 45.8° with a standard deviation of 16.2° the following day. The ICC was high at 0.95 for the neutral kyphosis angle, and the Bland-Altman 95% limits of agreement were within clinically acceptable margins. The ICC was 0.71 for end-range flexion and 0.34 for end-range extension, whereas the Bland-Altman 95% limits of agreement were wider than with the static measurement of kyphosis. Compared with static measurements, the analysis of motion with 3-dimensional ultrasound showed an increased standard deviation for test-retest measurements. The test-retest reliability of ultrasound measurement of the neutral kyphosis angle of the thoracic spine was demonstrated within 24 hours. Bland-Altman 95% limits of agreement and the standard deviation of differences did not appear to be clinically acceptable for measuring flexion and extension. Copyright © 2012 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
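
    The Bland-Altman limits of agreement used here are simple to compute: the mean of the day-to-day differences plus or minus 1.96 times their standard deviation. A minimal sketch with invented kyphosis angles, not the study's data:

        import numpy as np

        def bland_altman_loa(day1, day2):
            # 95% limits of agreement for test-retest data:
            # mean difference +/- 1.96 * SD of the differences.
            d = np.asarray(day1, float) - np.asarray(day2, float)
            bias, spread = d.mean(), 1.96 * d.std(ddof=1)
            return bias - spread, bias + spread

        # Hypothetical kyphosis angles (degrees) for 5 subjects on two days
        print(bland_altman_loa([44, 52, 38, 61, 47], [46, 50, 40, 60, 49]))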

  16. Reliability of the method of levels for determining cutaneous temperature sensitivity

    NASA Astrophysics Data System (ADS)

    Jakovljević, Miroljub; Mekjavić, Igor B.

    2012-09-01

    Determination of thermal thresholds is used clinically for evaluation of peripheral nervous system function. The aim of this study was to evaluate the reliability of the method of levels performed with a new, low-cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of the mid-forearm, the lateral surface of the mid-upper arm and the front of the mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, coefficient of variation (CV), coefficient of repeatability (CR), intraclass correlation coefficient (ICC), mean difference between sessions (S1-S2diff), standard error of measurement (SEM) and minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates for warm thresholds ranged from 0.74°C to 1.06°C and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
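
    The SEM and MDC reported above follow directly from the session standard deviation and the ICC: SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM for a two-occasion design. A quick check with values in the reported ranges (the 0.6°C SD is an assumed input, not a figure from the abstract):

        import math

        def sem_and_mdc(sd, icc, z=1.96):
            # SEM = SD * sqrt(1 - ICC); MDC95 = z * sqrt(2) * SEM,
            # the sqrt(2) reflecting two measurement occasions.
            sem = sd * math.sqrt(1.0 - icc)
            return sem, z * math.sqrt(2.0) * sem

        # e.g. a cold-threshold SD of 0.6 deg C with ICC = 0.84
        print(sem_and_mdc(0.6, 0.84))  # -> (0.24, ~0.67), within reported ranges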

  17. Measuring personal beliefs and perceived norms about intimate partner violence: Population-based survey experiment in rural Uganda

    PubMed Central

    Kakuhikire, Bernard; McDonough, Amy Q.; Ogburn, Elizabeth L.; Downey, Jordan M.; Bangsberg, David R.

    2017-01-01

    Background: Demographic and Health Surveys (DHS) conducted throughout sub-Saharan Africa indicate there is widespread acceptance of intimate partner violence, contributing to an adverse health risk environment for women. While qualitative studies suggest important limitations in the accuracy of the DHS methods used to elicit attitudes toward intimate partner violence, to date there has been little experimental evidence from sub-Saharan Africa that can be brought to bear on this issue. Methods and findings: We embedded a randomized survey experiment in a population-based survey of 1,334 adult men and women living in Nyakabare Parish, Mbarara, Uganda. The primary outcomes were participants’ personal beliefs about the acceptability of intimate partner violence and perceived norms about intimate partner violence in the community. To elicit participants’ personal beliefs and perceived norms, we asked about the acceptability of intimate partner violence in five different vignettes. Study participants were randomly assigned to one of three survey instruments, each of which contained varying levels of detail about the extent to which the wife depicted in the vignette intentionally or unintentionally violated gendered standards of behavior. For the questions about personal beliefs, the mean (standard deviation) number of items where intimate partner violence was endorsed as acceptable was 1.26 (1.58) among participants assigned to the DHS-style survey variant (which contained little contextual detail about the wife’s intentions), 2.74 (1.81) among participants assigned to the survey variant depicting the wife as intentionally violating gendered standards of behavior, and 0.77 (1.19) among participants assigned to the survey variant depicting the wife as unintentionally violating these standards. In a partial proportional odds regression model adjusting for sex and village of residence, with participants assigned to the DHS-style survey variant as the referent group, participants assigned the survey variant that depicted the wife as intentionally violating gendered standards of behavior were more likely to condone intimate partner violence in a greater number of vignettes (adjusted odds ratios [AORs] ranged from 3.87 to 5.74, with all p < 0.001), while participants assigned the survey variant that depicted the wife as unintentionally violating these standards were less likely to condone intimate partner violence (AORs ranged from 0.29 to 0.70, with p-values ranging from <0.001 to 0.07). The analysis of perceived norms displayed similar patterns, but the effects were slightly smaller in magnitude: participants assigned to the “intentional” survey variant were more likely to perceive intimate partner violence as normative (AORs ranged from 2.05 to 3.51, with all p < 0.001), while participants assigned to the “unintentional” survey variant were less likely to perceive intimate partner violence as normative (AORs ranged from 0.49 to 0.65, with p-values ranging from <0.001 to 0.14). The primary limitations of this study are that our assessments of personal beliefs and perceived norms could have been measured with error and that our findings may not generalize beyond rural Uganda. Conclusions: Contextual information about the circumstances under which women in hypothetical vignettes were perceived to violate gendered standards of behavior had a significant influence on the extent to which study participants endorsed the acceptability of intimate partner violence.
Researchers aiming to assess personal beliefs or perceived norms about intimate partner violence should attempt to eliminate, as much as possible, ambiguities in vignettes and questions administered to study participants. Trial registration: ClinicalTrials.gov NCT02202824. PMID:28542176

  18. Measuring personal beliefs and perceived norms about intimate partner violence: Population-based survey experiment in rural Uganda.

    PubMed

    Tsai, Alexander C; Kakuhikire, Bernard; Perkins, Jessica M; Vořechovská, Dagmar; McDonough, Amy Q; Ogburn, Elizabeth L; Downey, Jordan M; Bangsberg, David R

    2017-05-01

    Demographic and Health Surveys (DHS) conducted throughout sub-Saharan Africa indicate there is widespread acceptance of intimate partner violence, contributing to an adverse health risk environment for women. While qualitative studies suggest important limitations in the accuracy of the DHS methods used to elicit attitudes toward intimate partner violence, to date there has been little experimental evidence from sub-Saharan Africa that can be brought to bear on this issue. We embedded a randomized survey experiment in a population-based survey of 1,334 adult men and women living in Nyakabare Parish, Mbarara, Uganda. The primary outcomes were participants' personal beliefs about the acceptability of intimate partner violence and perceived norms about intimate partner violence in the community. To elicit participants' personal beliefs and perceived norms, we asked about the acceptability of intimate partner violence in five different vignettes. Study participants were randomly assigned to one of three survey instruments, each of which contained varying levels of detail about the extent to which the wife depicted in the vignette intentionally or unintentionally violated gendered standards of behavior. For the questions about personal beliefs, the mean (standard deviation) number of items where intimate partner violence was endorsed as acceptable was 1.26 (1.58) among participants assigned to the DHS-style survey variant (which contained little contextual detail about the wife's intentions), 2.74 (1.81) among participants assigned to the survey variant depicting the wife as intentionally violating gendered standards of behavior, and 0.77 (1.19) among participants assigned to the survey variant depicting the wife as unintentionally violating these standards. In a partial proportional odds regression model adjusting for sex and village of residence, with participants assigned to the DHS-style survey variant as the referent group, participants assigned the survey variant that depicted the wife as intentionally violating gendered standards of behavior were more likely to condone intimate partner violence in a greater number of vignettes (adjusted odds ratios [AORs] ranged from 3.87 to 5.74, with all p < 0.001), while participants assigned the survey variant that depicted the wife as unintentionally violating these standards were less likely to condone intimate partner violence (AORs ranged from 0.29 to 0.70, with p-values ranging from <0.001 to 0.07). The analysis of perceived norms displayed similar patterns, but the effects were slightly smaller in magnitude: participants assigned to the "intentional" survey variant were more likely to perceive intimate partner violence as normative (AORs ranged from 2.05 to 3.51, with all p < 0.001), while participants assigned to the "unintentional" survey variant were less likely to perceive intimate partner violence as normative (AORs ranged from 0.49 to 0.65, with p-values ranging from <0.001 to 0.14). The primary limitations of this study are that our assessments of personal beliefs and perceived norms could have been measured with error and that our findings may not generalize beyond rural Uganda. Contextual information about the circumstances under which women in hypothetical vignettes were perceived to violate gendered standards of behavior had a significant influence on the extent to which study participants endorsed the acceptability of intimate partner violence. 
Researchers aiming to assess personal beliefs or perceived norms about intimate partner violence should attempt to eliminate, as much as possible, ambiguities in vignettes and questions administered to study participants. ClinicalTrials.gov NCT02202824.

  19. Managing human fallibility in critical aerospace situations

    NASA Astrophysics Data System (ADS)

    Tew, Larry

    2014-11-01

    Human fallibility is pervasive in the aerospace industry, where over 50% of failures are attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably brings schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human error were achieved after workers received adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries like medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporate the steps needed to manage and minimize error.

  20. Impact of geophysical model error for recovering temporal gravity field model

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang

    2016-07-01

    The impact of geophysical model error on recovered temporal gravity field models is assessed in this paper with both real and simulated GRACE observations. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies in the annual mass variability amplitudes in six river basins between the HUST08a and HUST11a models and between the HUST04 and HUST05 models are all smaller than 1 cm, which demonstrates that geophysical model error only slightly affects the current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for the current mission, with a range rate accuracy of 2.5 × 10⁻⁷ m/s, observation error is the main source of stripe error. However, when the range rate accuracy improves to 5.0 × 10⁻⁸ m/s in a future mission, geophysical model error will be the main source of stripe error, which will limit the accuracy and spatial resolution of the temporal gravity field model. Therefore, observation error should be the primary error source taken into account at the current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for future missions.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, X; Li, Z; Zheng, D

    Purpose: In the context of evaluating dosimetric impacts of a variety of uncertainties involved in HDR Tandem-and-Ovoid treatment, to study the correlations between conventional point doses and 3D volumetric doses. Methods: For 5 cervical cancer patients treated with HDR T&O, 150 plans were retrospectively created to study dosimetric impacts of the following uncertainties: (1) inter-fractional applicator displacement between two treatment fractions within a single insertion, by applying the Fraction#1 plan to the Fraction#2 CT; (2) positional dwell error simulated from −5mm to 5mm in 1mm steps; (3) simulated temporal dwell error of 0.05s, 0.1s, 0.5s, and 1s. The original plans were based on point dose prescription, from which the volume covered by the prescription dose was generated as the pseudo target volume to study the 3D target dose effect. OARs were contoured. The point and volumetric dose errors were calculated by taking the differences between original and simulated plans. The correlations between the point and volumetric dose errors were analyzed. Results: For the most clinically relevant positional dwell uncertainty of 1mm, temporal uncertainty of 0.05s, and inter-fractional applicator displacement within the same insertion, the mean target D90 and V100 deviations were within 1%. Among these uncertainties, the applicator displacement showed the largest potential target coverage impact (2.6% on D90) as well as the largest OAR dose impact (2.5% and 3.4% on bladder D2cc and rectum D2cc). The Spearman correlation analysis shows a correlation coefficient of 0.43 with a p-value of 0.11 between target D90 coverage and H point dose. Conclusion: With the most clinically relevant positional and temporal dwell uncertainties and patient inter-fractional applicator displacement within the same insertion, the dose error is within the clinically acceptable range. The lack of correlation between H point and 3D volumetric dose errors is a motivator for the use of 3D treatment planning in cervical HDR brachytherapy.
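
    The rank correlation reported in the conclusion can be reproduced with a few lines of SciPy; the paired dose-error values below are invented stand-ins, purely to show the call:

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical paired errors (%) from perturbed plans: H-point dose
        # error vs target D90 error. Values are illustrative only.
        h_point = np.array([0.2, 1.1, -0.5, 2.3, 0.8, -1.2, 0.4, 1.9,
                            -0.3, 0.6, 2.1, -0.9, 1.4, 0.1, -1.6])
        d90 = np.array([0.5, 0.3, -0.8, 1.2, 1.8, -0.2, 0.9, 0.7,
                        -1.1, 0.2, 1.5, 0.4, -0.6, 1.0, -0.4])

        rho, p = spearmanr(h_point, d90)
        print(f"rho = {rho:.2f}, p = {p:.2f}")  # weak correlation favors 3D metrics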

  2. Improved Quality in Aerospace Testing Through the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
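
    The effect described here is easy to demonstrate: if an instrument drifts slowly during a test, running the set points in monotonic order confounds the drift with the variable's effect, while a randomized run order largely decouples them. A toy simulation (the slope, drift magnitude and noise level are all invented):

        import numpy as np

        rng = np.random.default_rng(0)
        levels = np.repeat(np.linspace(0.0, 1.0, 10), 5)  # 10 set points x 5 reps
        n = len(levels)

        def fitted_slope(order):
            # Simulate y = 2*x plus a slow instrument drift over the run,
            # then fit a line; the true slope is 2.
            t = np.arange(n)
            y = 2.0 * order + 0.5 * t / n + rng.normal(scale=0.05, size=n)
            return np.polyfit(order, y, 1)[0]

        print(fitted_slope(np.sort(levels)))          # sequential order: biased (~2.5)
        print(fitted_slope(rng.permutation(levels)))  # randomized order: ~2.0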

  3. Correction to: Attrition after Acceptance onto a Publicly Funded Bariatric Surgery Program.

    PubMed

    Taylor, Tamasin; Wang, Yijiao; Rogerson, William; Bavin, Lynda; Sharon, Cindy; Beban, Grant; Evennett, Nicholas; Gamble, Greg; Cundy, Timothy

    2018-03-20

    Unfortunately, the original version of this article contained an error. The Methods section's first sentence and Table 1 both mistakenly contained the letters XXXX in place of the district health board and hospital city names.

  4. Developing and Validating a Tablet Version of an Illness Explanatory Model Interview for a Public Health Survey in Pune, India

    PubMed Central

    Giduthuri, Joseph G.; Maire, Nicolas; Joseph, Saju; Kudale, Abhay; Schaetti, Christian; Sundaram, Neisha; Schindler, Christian; Weiss, Mitchell G.

    2014-01-01

    Background: Mobile electronic devices are replacing paper-based instruments and questionnaires for epidemiological and public health research. The elimination of a data-entry step after an interview is a notable advantage over paper, saving investigator time, decreasing the time lags in managing and analyzing data, and potentially improving the data quality by removing the error-prone data-entry step. Research has not yet provided adequate evidence, however, to substantiate the claim of fewer errors for computerized interviews. Methodology: We developed an Android-based illness explanatory interview for influenza vaccine acceptance and tested the instrument in a field study in Pune, India, for feasibility and acceptability. Error rates for tablet and paper were compared with reference to the voice recording of the interview as gold standard to assess discrepancies. We also examined the preference of interviewers for the classical paper-based or the electronic version of the interview and compared the costs of research with both data collection devices. Results: In 95 interviews with household respondents, total error rates with paper and tablet devices were nearly the same (2.01% and 1.99% respectively). Most interviewers indicated no preference for a particular device, but those with a preference opted for tablets. The initial investment in tablet-based interviews was higher compared to paper, while the recurring costs per interview were lower with the use of tablets. Conclusion: An Android-based tablet version of a complex interview was developed and successfully validated. Advantages were not compromised by increased errors, and field research assistants with a preference preferred the Android device. Use of tablets may be more costly than paper for small samples and less costly for large studies. PMID:25233212

  5. Accuracy of a Basketball Indoor Tracking System Based on Standard Bluetooth Low Energy Channels (NBN23®).

    PubMed

    Figueira, Bruno; Gonçalves, Bruno; Folgado, Hugo; Masiulis, Nerijus; Calleja-González, Julio; Sampaio, Jaime

    2018-06-14

    The present study aims to identify the accuracy of the NBN23® system, an indoor tracking system based on radio-frequency and standard Bluetooth Low Energy channels. Twelve capture tags were attached to a custom cart at fixed distances of 0.5, 1.0, 1.5, and 1.8 m. The cart was pushed along a predetermined course following the lines of a basketball court of standard dimensions. The course was performed at low speed (<10.0 km/h), medium speed (>10.0 km/h and <20.0 km/h) and high speed (>20.0 km/h). Root mean square error (RMSE) and percentage of variance accounted for (%VAF) were used as accuracy measures. The obtained data showed acceptable accuracy results for both RMSE and %VAF, despite the expected degree of error in position measurement at higher speeds. The RMSE for all distances and velocities presented an average absolute error of 0.30 ± 0.13 cm with a %VAF of 90.61 ± 8.34, in line with most available systems and considered acceptable for indoor sports. The processing of data with filter correction seemed to reduce the noise and promote a lower relative error, increasing the %VAF for each measured distance. Research using position-derived variables in basketball is still very scarce; thus, this independent test of the NBN23® tracking system provides accuracy details and opens up opportunities to develop new performance indicators that help to optimize training adaptations and performance.
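
    Both accuracy measures are straightforward to compute from paired true and measured positions. The sketch below uses one common %VAF definition (variants exist in the literature) and synthetic positions along a court, not the study's data:

        import numpy as np

        def rmse(measured, reference):
            e = np.asarray(measured) - np.asarray(reference)
            return np.sqrt(np.mean(e ** 2))

        def pct_vaf(measured, reference):
            # Percentage of variance accounted for: residual variance
            # compared with the variance of the reference signal.
            e = np.asarray(measured) - np.asarray(reference)
            return 100.0 * (1.0 - np.var(e) / np.var(reference))

        ref = np.linspace(0, 28, 200)  # true positions along a 28 m court
        meas = ref + np.random.default_rng(1).normal(scale=0.3, size=ref.size)
        print(rmse(meas, ref), pct_vaf(meas, ref))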

  6. [Detection and classification of medication errors at Joan XXIII University Hospital].

    PubMed

    Jornet Montaña, S; Canadell Vilarrasa, L; Calabuig Muñoz, M; Riera Sendra, G; Vuelta Arce, M; Bardají Ruiz, A; Gallart Mora, M J

    2004-01-01

    Medication errors are multifactorial and multidisciplinary, and may originate in processes such as drug prescription, transcription, dispensation, preparation and administration. The goal of this work was to measure the incidence of detectable medication errors that arise within a unit dose drug distribution and control system, from drug prescription to drug administration, by means of an observational method confined to the Pharmacy Department, as well as a voluntary, anonymous report system. The acceptance of this voluntary report system's implementation was also assessed. A prospective descriptive study was conducted. Data collection was performed at the Pharmacy Department from a review of prescribed medical orders, a review of pharmaceutical transcriptions, a review of dispensed medication and a review of medication returned in unit dose medication carts. A voluntary, anonymous report system centralized in the Pharmacy Department was also set up to detect medication errors. Prescription errors were the most frequent (1.12%), closely followed by dispensation errors (1.04%). Transcription errors (0.42%) and administration errors (0.69%) had the lowest overall incidence. Voluntary reporting accounted for only 4.25% of all detected errors, whereas unit dose medication cart review contributed the most to error detection. Recognizing the incidence and types of medication errors that occur in a health-care setting allows us to analyze their causes and effect changes in different stages of the process in order to ensure maximal patient safety.

  7. Impact of gradient timing error on the tissue sodium concentration bioscale measured using flexible twisted projection imaging

    NASA Astrophysics Data System (ADS)

    Lu, Aiming; Atkinson, Ian C.; Vaughn, J. Thomas; Thulborn, Keith R.

    2011-12-01

    The rapid biexponential transverse relaxation of the sodium MR signal from brain tissue requires efficient k-space sampling for quantitative imaging in a time that is acceptable for human subjects. The flexible twisted projection imaging (flexTPI) sequence has been shown to be suitable for quantitative sodium imaging with an ultra-short echo time to minimize signal loss. The fidelity of the k-space center location is affected by the readout gradient timing errors on the three physical axes, which is known to cause image distortion for projection-based acquisitions. This study investigated the impact of these timing errors on the voxel-wise accuracy of the tissue sodium concentration (TSC) bioscale measured with the flexTPI sequence. Our simulations show greater than 20% spatially varying quantification errors when the gradient timing errors are larger than 10 μs on all three axes. The quantification is more tolerant of gradient timing errors on the Z-axis. An existing method was used to measure the gradient timing errors with <1 μs error. The gradient timing error measurement is shown to be RF coil dependent, and timing error differences of up to ˜16 μs have been observed between different RF coils used on the same scanner. The measured timing errors can be corrected prospectively or retrospectively to obtain accurate TSC values.

  8. Does self-administered vaginal misoprostol result in cervical ripening in postmenopausal women after 14 days of pre-treatment with estradiol? Trial protocol for a randomised, placebo-controlled sequential trial.

    PubMed

    Oppegaard, K S; Lieng, M; Berg, A; Istre, O; Qvigstad, E; Nesheim, B-I

    2008-06-01

    To compare the impact of 1000 micrograms of self-administered vaginal misoprostol versus self-administered vaginal placebo on preoperative cervical ripening after pre-treatment with estradiol vaginal tablets at home in postmenopausal women prior to day-care operative hysteroscopy. Randomised double-blind placebo-controlled sequential trial. The boundaries for the sequential trial were calculated on the primary outcome of a difference in cervical dilatation of ≥ 1 mm, with the assumption of a type 1 error of 0.05 and a power of 0.95. Norwegian university teaching hospital. Postmenopausal women referred for day-care operative hysteroscopy. The women were randomised to either 1000 micrograms of self-administered vaginal misoprostol or self-administered vaginal placebo the evening before day-care operative hysteroscopy. All women had self-administered a 25-microgram vaginal estradiol tablet daily for 14 days prior to the operation. Preoperative cervical dilatation (difference between misoprostol and placebo group, primary outcome), difference in dilatation before and after administration of misoprostol or placebo, number of women who achieve a preoperative cervical dilatation of ≥ 5 mm, acceptability, complications and side effects (secondary outcomes). Intra-operative findings and distribution of cervical dilatation in the two treatment groups: values are given as median (range) or n (%). Difference in dilatation before and after administration of misoprostol and placebo: values are given as median (range) of intraindividual differences. Percentage of women who achieve a cervical dilatation of ≥ 5 mm, percentage of women who were difficult to dilate. Acceptability in the two treatment groups: values are given as completely acceptable n (%), fairly acceptable n (%), fairly unacceptable n (%), completely unacceptable n (%). Pain in the two treatment groups: pain was measured with a visual analogue scale ranging from 0 (no pain) to 10 (unbearable pain); values are given as median (range). Occurrence of side effects in the two treatment groups: values are given as n (%). Complications given as n (%). No pharmaceutical company was involved in this study. A research grant from the regional research board of Northern Norway has been awarded to finance Dr K.S.O.'s leave from Hammerfest hospital as well as travel expenses between Hammerfest and Oslo, and research courses. The research grant from Prof B.I.N. (Helse Øst) funded the purchase of estradiol tablets, the manufacturing costs of misoprostol and placebo capsules from the hospital pharmacy, as well as the costs incurred for preparing the randomisation schedule and distribution of containers containing capsules to hospital. Prof B.I.N.'s research grant also funded insurance for the study participants. Estimated completion date: 31 December 2008.

  9. Adolescents’ experience of a rapid HIV self-testing device in youth-friendly clinic settings in Cape Town South Africa: a cross-sectional community based usability study

    PubMed Central

    Smith, Philip; Wallace, Melissa; Bekker, Linda-Gail

    2016-01-01

    Introduction: Since HIV testing in South African adolescents and young adults is sub-optimal, the objective of the current study was to investigate the feasibility and acceptability of an HIV rapid self-testing device in adolescents and young people at the Desmond Tutu HIV Foundation Youth Centre and Mobile Clinic. Methods: Self-presenting adolescents and young adults were invited to participate in a study investigating the fidelity, usability and acceptability of the AtomoRapid HIV Rapid self-testing device. Healthcare workers trained participants to use the device before each participant conducted the HIV self-test using the device usage instructions. The healthcare worker then conducted a questionnaire-based survey to assess outcomes. Results: Of the 224 enrolled participants between 16 and 24 years of age, 155 (69.2%) were female. Overall, fidelity was high; 216 (96.4%) participants correctly completed the test and correctly read and interpreted the HIV test result. There were eight (3.6%) user errors overall; six participants failed to prick their finger even though the lancet fired correctly. There were two user errors where participants failed to use the capillary tube correctly. Participants rated acceptability and usability highly, with debut testers giving significantly higher ratings for both. Younger participants gave significantly higher ratings of acceptability. Conclusions: Adolescents and young adults found HIV self-testing with the AtomoRapid highly acceptable, and they used the device accurately. Further research should investigate how, where and when to deploy HIV self-testing as a means to accompany existing strategies in reaching the UNAIDS goal to test 90% of all individuals worldwide. PMID:28406597

  10. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates, for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.
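
    One way to make such a comparison concrete: for a fixed (possibly suboptimal) gain evaluated against an assumed true model, the steady-state prediction-error covariance solves a discrete Lyapunov equation, and sweeping the gain or the model matrices over their error range traces out the performance. A minimal sketch with illustrative matrices, not the paper's notation or examples:

        import numpy as np
        from scipy.linalg import solve_discrete_lyapunov

        # True model (A, H, Q, R) and a fixed suboptimal gain K (e.g. one
        # designed for a mismatched nominal model). All values illustrative.
        A = np.array([[1.0, 1.0], [0.0, 0.9]])
        H = np.array([[1.0, 0.0]])
        Q = np.diag([0.01, 0.01])
        R = np.array([[0.25]])
        K = np.array([[0.5], [0.1]])

        Acl = A @ (np.eye(2) - K @ H)        # closed-loop error transition
        W = Q + A @ K @ R @ K.T @ A.T        # injected noise covariance
        P = solve_discrete_lyapunov(Acl, W)  # solves P = Acl P Acl' + W
        print(np.trace(P))                   # steady-state mean-squared error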

  11. An error covariance model for sea surface topography and velocity derived from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tsaoussi, Lucia S.; Koblinsky, Chester J.

    1994-01-01

    In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
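
    The formal propagation step has a compact linear-algebra core: if the recovered coefficients depend approximately linearly on a set of unadjusted parameters through a Jacobian G, a parameter covariance S contributes G S G^T to the total error covariance. A schematic sketch; the matrices are illustrative, not TOPEX/POSEIDON values:

        import numpy as np

        # 3 topography coefficients depending on 2 unadjusted parameters
        G = np.array([[1.0, 0.3], [0.0, 1.0], [0.5, 0.2]])  # Jacobian
        S = np.diag([4.0e-4, 1.0e-4])   # e.g. orbit, tide variances (m^2)
        total = G @ S @ G.T             # propagated error covariance
        print(np.sqrt(np.diag(total)))  # per-coefficient sigma (m)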

  12. Which is the most useful patient-reported outcome in femoroacetabular impingement? Test-retest reliability of six questionnaires.

    PubMed

    Hinman, Rana S; Dobson, Fiona; Takla, Amir; O'Donnell, John; Bennell, Kim L

    2014-03-01

    Which patient-reported outcomes (PROs) are most reliable for people with femoroacetabular impingement (FAI) is unknown because there have been no direct comparisons of questionnaires. Thus, the aim was to evaluate the test-retest reliability of six existing PROs in a single cohort of young active people with hip/groin pain consistent with a clinical diagnosis of FAI. Young adults with clinical FAI completed six PRO questionnaires on two occasions, 1-2 weeks apart. The PROs were the modified Harris Hip Score, Hip dysfunction and Osteoarthritis Score, Hip Outcome Score, Non-Arthritic Hip Score, International Hip Outcome Tool, and Copenhagen Hip and Groin Outcome Score. 30 young adults (mean age 24 years, SD 4 years, range 18-30 years; 15 men) with stable symptoms participated. Intraclass correlation coefficient, ICC(3,1), values ranged from 0.73 to 0.93 (95% CI 0.38 to 0.98), indicating that most questionnaires reached minimal reliability benchmarks. Measurement error at the individual level was quite large for most questionnaires (minimal detectable change (MDC95) 12.4-35.6, 95% CI 8.7 to 54.0). In contrast, measurement error at the group level was quite small for most questionnaires (MDC95 2.2-7.3, 95% CI 1.6 to 11). The majority of the questionnaires were reliable and precise enough for use at the group level. Samples of only 23-30 individuals were required to achieve acceptable measurement variation at the group level. Further direct comparisons of these questionnaires are required to assess other measurement properties such as validity, responsiveness and meaningful change in young people with FAI.

  13. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
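
    The two hypotheses differ in a single line of the weight update: under TER every presented cue shares one compound-prediction error, as in the Rescorla-Wagner rule, while under LER each cue is corrected only against its own prediction. A minimal sketch in which the learning rate, weights and outcome value are arbitrary:

        import numpy as np

        def trial_updates(w, present, outcome, alpha=0.1):
            # One learning trial under both rules. w: cue weights;
            # present: 0/1 cue vector; outcome: delivered outcome (lambda).
            w_ter, w_ler = w.copy(), w.copy()
            # TER: one shared error term for the whole stimulus compound
            shared_err = outcome - np.dot(w, present)
            w_ter += alpha * shared_err * present
            # LER: each present cue corrects only its own prediction
            w_ler += alpha * (outcome - w) * present
            return w_ter, w_ler

        w = np.array([0.3, 0.6])
        print(trial_updates(w, present=np.array([1.0, 1.0]), outcome=1.0))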

  14. Growth of Errors and Uncertainties in Medium Range Ensemble Forecasts of U.S. East Coast Cool Season Extratropical Cyclones

    NASA Astrophysics Data System (ADS)

    Zheng, Minghua

    Cool-season extratropical cyclones near the U.S. East Coast often have significant impacts on the safety, health, environment and economy of this most densely populated region. Hence it is of vital importance to forecast these high-impact winter storm events as accurately as possible by numerical weather prediction (NWP), including in the medium range. Ensemble forecasts are appealing to operational forecasters when forecasting such events because they can provide an envelope of likely solutions to serve user communities. However, it is generally accepted that ensemble outputs are not used efficiently in NWS operations, mainly due to the lack of simple and quantitative tools to communicate forecast uncertainties and of ensemble verification to assess model errors and biases. Ensemble sensitivity analysis (ESA), which employs a linear correlation and regression between a chosen forecast metric and the forecast state vector, can be used to analyze the development of forecast uncertainty for both short- and medium-range forecasts. The application of ESA to a high-impact winter storm in December 2010 demonstrated that the sensitivity signals based on different forecast metrics are robust. In particular, the ESA based on the leading two EOF PCs can separate sensitive regions associated with cyclone amplitude and intensity uncertainties, respectively. The sensitivity signals were verified using the leave-one-out cross validation (LOOCV) method based on a multi-model ensemble from CMC, ECMWF, and NCEP. The climatology of ensemble sensitivities for the leading two EOF PCs based on 3-day and 6-day forecasts of historical cyclone cases was presented. It was found that the EOF1 pattern often represents the intensity variations while the EOF2 pattern represents the track variations along the west-southwest to east-northeast direction. For PC1, the upper-level trough associated with the East Coast cyclone and its downstream ridge are important to the forecast uncertainty in cyclone strength. The initial differences in forecasting the ridge along the west coast of North America impact the EOF1 pattern most. For PC2, it was shown that the shift of the tri-polar structure is most significantly related to the cyclone track forecasts. The EOF/fuzzy clustering tool was applied to diagnose the scenarios in operational ensemble forecasts of East Coast winter storms. It was shown that the clustering method could efficiently separate the forecast scenarios associated with East Coast storms based on the 90-member multi-model ensemble. A scenario-based ensemble verification method has been proposed and applied to examine the capability of different EPSs in capturing the analysis scenarios for historical East Coast cyclone cases at lead times of 1-9 days. The results suggest that the NCEP model performs better in short-range forecasts in capturing the analysis scenario although it is under-dispersed. The ECMWF ensemble shows the best performance in the medium range. The CMC model is found to show the smallest percentage of members in the analysis group and a relatively high missing rate, suggesting that it is less reliable in capturing the analysis scenario when compared with the other two EPSs. A combination of the NCEP and CMC models has been found to reduce the missing rate and improve the error-spread skill in medium- to extended-range forecasts. Based on the orthogonal features of the EOF patterns, the model errors for 1-6-day forecasts have been decomposed for the leading two EOF patterns. 
The results of the error decomposition show that the NCEP model tends to better represent both the EOF1 and EOF2 patterns, with smaller intensity and displacement errors during days 1-3. The ECMWF model is found to have the smallest errors in both EOF1 and EOF2 patterns during days 4-6. We have also found that East Coast cyclones in the ECMWF forecast tend to lie to the southwest of those in the other two models in representing the EOF2 pattern, which is associated with the southwest-northeast shifting of the cyclone. This result suggests that the ECMWF model may have a tendency to show a closer-to-shore solution in forecasting East Coast winter storms. The downstream impacts of Rossby wave packets (RWPs) on the predictability of winter storms are investigated to explore the source of ensemble uncertainties. The composited Rossby wave packet amplitude (RWPA) anomalies show that there are enhanced RWPs propagating across the Pacific in both large-error and large-spread cases over the verification regions. There are also indications that the errors might propagate with a speed comparable to the group velocity of RWPs. Based on the composite results as well as our observations of the operational daily RWPA, a conceptual model of error/uncertainty development associated with RWPs has been proposed to serve as a practical tool to understand the evolution of forecast errors and uncertainties associated with coherent RWPs originating from as far upstream as the western Pacific. (Abstract shortened by ProQuest.)
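
    The EOF/PC machinery used throughout this abstract reduces to an SVD of the ensemble anomaly matrix; member loadings on the leading modes (PC1, PC2) then feed the sensitivity, clustering and error-decomposition steps. A minimal sketch with a random stand-in for the 90-member ensemble:

        import numpy as np

        def eof_decomposition(field, n_modes=2):
            # Leading EOFs/PCs of an ensemble of forecast fields via SVD.
            # field: (members, gridpoints); anomalies are taken about the
            # ensemble mean, as in ensemble sensitivity workflows.
            anom = field - field.mean(axis=0)
            u, s, vt = np.linalg.svd(anom, full_matrices=False)
            pcs = u[:, :n_modes] * s[:n_modes]     # member loadings (PC1, PC2)
            eofs = vt[:n_modes]                    # spatial patterns
            var_frac = s[:n_modes] ** 2 / np.sum(s ** 2)
            return pcs, eofs, var_frac

        ens = np.random.default_rng(2).normal(size=(90, 500))  # toy ensemble
        pcs, eofs, vf = eof_decomposition(ens)
        print(pcs.shape, eofs.shape, vf)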

  15. Electronic device for endosurgical skills training (EDEST): study of reliability.

    PubMed

    Pagador, J B; Uson, J; Sánchez, M A; Moyano, J L; Moreno, J; Bustos, P; Mateos, J; Sánchez-Margallo, F M

    2011-05-01

    Minimally Invasive Surgery procedures are commonly used in many surgical practices, but surgeons need specific training models and devices due to the difficulty and complexity of these procedures. In this paper, an innovative electronic device for endosurgical skills training (EDEST) is presented, and a study of its reliability was performed. Different electronic components were used to compose this new training device. The EDEST focuses on two basic laparoscopic tasks: triangulation and coordination manoeuvres. Configuration and statistics software was developed to complement the functionality of the device. A calibration method was used to assure the proper working of the device. A total of 35 subjects (8 experts and 27 novices) were used to check the reliability of the system using mean-time-between-failures (MTBF) analysis. Configuration values for the triangulation and coordination exercises were calculated as a 0.5 s limit threshold and an 800-11,000 lux range of light intensity, respectively. Zero errors in 1,050 executions (0%) for triangulation and 21 errors in 5,670 executions (0.37%) for coordination were obtained. An MTBF of 2.97 h was obtained. The results show that the reliability of the EDEST device is acceptable when used under the previously defined light conditions. These results, along with previous work, suggest that the EDEST device can help surgeons during the first training stages.
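
    MTBF here is simply accumulated operating time divided by the number of failures. Working backwards from the reported figures (21 coordination errors, MTBF 2.97 h) implies roughly 62.4 device-hours of testing; that total is our inference, not a number stated in the abstract:

        def mtbf(total_operating_hours, n_failures):
            # Mean time between failures; undefined when nothing failed.
            if n_failures == 0:
                return float("inf")
            return total_operating_hours / n_failures

        # ~62.4 device-hours (inferred) over the 21 recorded coordination errors
        print(round(mtbf(62.4, 21), 2))  # -> 2.97 h, matching the reported MTBF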

  16. Friendship, cliquishness, and the emergence of cooperation.

    PubMed

    Hruschka, Daniel J; Henrich, Joseph

    2006-03-07

    The evolution of cooperation is a central problem in biology and the social sciences. While theoretical work using the iterated prisoner's dilemma (IPD) has shown that cooperation among non-kin can be sustained among reciprocal strategies (e.g. tit-for-tat), these results are sensitive to errors in strategy execution, cyclical invasions by free riders, and the specific ecology of strategies. Moreover, the IPD assumes that a strategy's probability of playing the PD game with other individuals is independent of the decisions made by others. Here, we remove the assumption of independent pairing by studying a more plausible cooperative dilemma in which players can preferentially interact with a limited set of known partners and also deploy longer-term accounting strategies that can counteract the effects of random errors. We show that cooperative strategies readily emerge and persist in a range of noisy environments, with successful cooperative strategies (henceforth, cliquers) maintaining medium-term memories for partners and low thresholds for acceptable cooperation (i.e. forgiveness). The success of these strategies relies on their cliquishness: a propensity to defect with strangers if they already have an adequate number of partners. Notably, this combination of medium-term accounting, forgiveness, and cliquishness fits with empirical studies of friendship and other long-term relationships among humans.

  17. Continuous haematic pH monitoring in extracorporeal circulation using a disposable fluorescence sensing element

    NASA Astrophysics Data System (ADS)

    Ferrari, Luca; Rovati, Luigi; Fabbri, Paola; Pilati, Francesco

    2013-02-01

    During extracorporeal circulation (ECC), blood is periodically sampled and analyzed to maintain the blood-gas status of the patient within acceptable limits. This protocol has well-known drawbacks that may be overcome by continuous monitoring. We present the characterization of a new pH sensor for continuous monitoring in ECC. This monitoring device includes a disposable fluorescence-sensing element directly in contact with the blood, whose fluorescence intensity is strictly related to the pH of the blood. In vitro experiments show no significant difference between the blood gas analyzer values and the sensor readings; after proper calibration, the sensor gives a correlation of R>0.9887, and measuring errors were lower than 3% of the pH range of interest (RoI) with respect to a commercial blood gas analyzer. This performance was also confirmed by simulating a moderate hypothermia condition, i.e., a blood temperature of 32°C, frequently used in cardiac surgery. In ex vivo experiments, performed with animal models, the sensor was continuously operated in an extracorporeal undiluted blood stream for a maximum of 11 h. It gives a correlation of R>0.9431, and a measuring error lower than 3% of the pH RoI with respect to laboratory techniques.

  18. Continuous haematic pH monitoring in extracorporeal circulation using a disposable fluorescence sensing element.

    PubMed

    Ferrari, Luca; Rovati, Luigi; Fabbri, Paola; Pilati, Francesco

    2013-02-01

    During extracorporeal circulation (ECC), blood is periodically sampled and analyzed to maintain the blood-gas status of the patient within acceptable limits. This protocol has well-known drawbacks that may be overcome by continuous monitoring. We present the characterization of a new pH sensor for continuous monitoring in ECC. This monitoring device includes a disposable fluorescence-sensing element directly in contact with the blood, whose fluorescence intensity is strictly related to the pH of the blood. In vitro experiments show no significant difference between the blood gas analyzer values and the sensor readings; after proper calibration, the sensor gives a correlation of R>0.9887, and measuring errors were lower than 3% of the pH range of interest (RoI) with respect to a commercial blood gas analyzer. This performance was also confirmed by simulating a moderate hypothermia condition, i.e., a blood temperature of 32°C, frequently used in cardiac surgery. In ex vivo experiments, performed with animal models, the sensor was continuously operated in an extracorporeal undiluted blood stream for a maximum of 11 h. It gives a correlation of R>0.9431, and a measuring error lower than 3% of the pH RoI with respect to laboratory techniques.

  19. Enhanced noise tolerance for 10 Gb/s Bi-directional cross-wavelength reuse colorless WDM-PON by using spectrally shaped OFDM signals

    NASA Astrophysics Data System (ADS)

    Choudhury, Pallab K.

    2018-05-01

    A spectrally shaped orthogonal frequency division multiplexing (OFDM) signal for a symmetric 10 Gb/s cross-wavelength reuse, reflective semiconductor optical amplifier (RSOA) based colorless wavelength division multiplexed passive optical network (WDM-PON) is proposed and analyzed to support the broadband services of next-generation high-speed optical access networks. The generated OFDM signal has subcarriers in separate frequency ranges for downstream and upstream, such that the re-modulation noise can be effectively minimized in the upstream data receiver. Moreover, the cross-wavelength reuse approach improves the tolerance against Rayleigh backscattering noise due to the propagation of different wavelengths in the same feeder fiber. The proposed WDM-PON is successfully demonstrated over 25 km of fiber with a 16-QAM (quadrature amplitude modulation) OFDM signal having a bandwidth of 2.5 GHz for 10 Gb/s operation, and subcarrier frequencies of 3-5.5 GHz and DC-2.5 GHz for downstream (DS) and upstream (US) transmission, respectively. The results show that the proposed scheme maintains a good bit error rate (BER) performance below the forward error correction (FEC) limit of 3.8 × 10⁻³ at acceptable receiver sensitivity and provides high resilience against re-modulation and Rayleigh backscattering noise as well as chromatic dispersion.

  20. Three-dimensional analysis of the surface registration accuracy of electromagnetic navigation systems in live endoscopic sinus surgery.

    PubMed

    Chang, C M; Fang, K M; Huang, T W; Wang, C T; Cheng, P W

    2013-12-01

    Studies on the performance of surface registration with electromagnetic tracking systems are lacking in both live surgery and the laboratory setting. This study presents the efficiency in time of the system preparation as well as the navigational accuracy of surface registration using electromagnetic tracking systems. Forty patients with bilateral chronic paranasal pansinusitis underwent endoscopic sinus surgery after undergoing sinus computed tomography scans. The surgeries were performed under electromagnetic navigation guidance after the surface registration had been carried out on all of the patients. The intraoperative measurements indicate the time taken for equipment set-up, surface registration and surgical procedure, as well as the degree of navigation error along 3 axes. The time taken for equipment set-up, surface registration and the surgical procedure was 179 ± 23 seconds, 39 ± 4.8 seconds and 114 ± 36 minutes, respectively. A comparison of the navigation error along the 3 axes showed that the deviation in the medial-lateral direction was significantly less than that in the anterior-posterior and cranial-caudal directions. The procedures of equipment set-up and surface registration in electromagnetic navigation tracking are efficient, convenient and easy to manipulate. The system accuracy is within the acceptable ranges, especially on the medial-lateral axis.

  1. Neural network for image compression

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.

    1992-09-01

    In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimization of the amount of data required to represent an image while maintaining an acceptable quality. Several image compression techniques have been developed in recent years. We note that the coding performance of these techniques may be improved by employing adaptivity. Over the last few years neural networks have emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields a good coding performance. However, this algorithm requires a long training time because the network starts with random initial weights. In this paper we use the backward error propagation algorithm (BEP) to quickly obtain the initial weights, which are then used to speed up the training required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves a good coding performance in a shorter training time. Our simulation results demonstrate the potential gains using the proposed technique.
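
    The two-stage BEP-SOFM idea above can be sketched compactly: a codebook is refined by a self-organizing map whose weights start from a backprop-derived rather than random initialization. The following is a minimal, hypothetical Python/NumPy illustration of the SOFM refinement stage only; the 1-D map topology, parameter values, and function names are assumptions, not the authors' implementation.

        import numpy as np

        def train_sofm_codebook(blocks, init_codebook, epochs=10, lr0=0.2, sigma0=2.0):
            """Refine a vector-quantization codebook with a 1-D SOFM.

            blocks:        (N, D) training vectors, e.g. flattened 4x4 image blocks.
            init_codebook: (K, D) starting weights; BEP-derived weights here
                           replace the random start that slows plain SOFM training.
            """
            codebook = init_codebook.copy()
            K = len(codebook)
            for epoch in range(epochs):
                lr = lr0 * (1.0 - epoch / epochs)                  # decaying step size
                sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)  # shrinking neighborhood
                for x in np.random.permutation(blocks):
                    winner = np.argmin(((codebook - x) ** 2).sum(axis=1))
                    dist = np.abs(np.arange(K) - winner)           # distance on the 1-D map
                    h = np.exp(-dist ** 2 / (2.0 * sigma ** 2))
                    codebook += lr * h[:, None] * (x - codebook)
            return codebook

    Compression then amounts to replacing each image block by the index of its nearest codeword.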

  2. Acceptable range of speech level in noisy sound fields for young adults and elderly persons.

    PubMed

    Sato, Hayato; Morimoto, Masayuki; Ota, Ryo

    2011-09-01

    The acceptable range of speech level as a function of background noise level was investigated on the basis of word intelligibility scores and listening difficulty ratings. In the present study, the acceptable range is defined as the range that maximizes word intelligibility scores and simultaneously does not cause a significant increase in listening difficulty ratings from the minimum ratings. Listening tests with young adult and elderly listeners demonstrated the following. (1) The acceptable range of speech level for elderly listeners overlapped that for young listeners. (2) The lower limit of the acceptable speech level for both young and elderly listeners was 65 dB (A-weighted) for noise levels of 40 and 45 dB (A-weighted), a level with a speech-to-noise ratio of +15 dB for noise levels of 50 and 55 dB, and a level with a speech-to-noise ratio of +10 dB for noise levels from 60 to 70 dB. (3) The upper limit of the acceptable speech level for both young and elderly listeners was 80 dB for noise levels from 40 to 55 dB and 85 dB or above for noise levels from 55 to 70 dB. © 2011 Acoustical Society of America
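
    The reported limits form a simple piecewise rule in the noise level; below is a small, hypothetical Python helper encoding them (the handling of the 55 dB boundary, where the reported ranges overlap, is an assumption):

        def acceptable_speech_level_db(noise_db):
            """Return (lower, upper) acceptable speech levels in dB (A-weighted)
            for a background noise level between 40 and 70 dB, per the reported data."""
            if not 40 <= noise_db <= 70:
                raise ValueError("reported data cover noise levels of 40-70 dB only")
            if noise_db <= 45:
                lower = 65.0                 # fixed floor at low noise levels
            elif noise_db <= 55:
                lower = noise_db + 15.0      # speech-to-noise ratio of +15 dB
            else:
                lower = noise_db + 10.0      # speech-to-noise ratio of +10 dB
            upper = 80.0 if noise_db <= 55 else 85.0
            return lower, upper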

  3. Geodetic positioning using a global positioning system of satellites

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1980-01-01

    Geodetic positioning using range, integrated Doppler, and interferometric observations from a constellation of twenty-four Global Positioning System satellites is analyzed. A summary of the proposals for geodetic positioning and baseline determination is given which includes a description of measurement techniques and comments on rank deficiency and error sources. An analysis of variance comparison of range, Doppler, and interferometric time delay to determine their relative geometric strength for baseline determination is included. An analytic examination of the effect of a priori constraints on positioning using simultaneous observations from two stations is presented. Dynamic point positioning and baseline determination using range and Doppler is examined in detail. Models for the error sources influencing dynamic positioning are developed. Included is a discussion of atomic clock stability, and range and Doppler observation error statistics based on random correlated atomic clock error are derived.

  4. Clinical Implications of TiGRT Algorithm for External Audit in Radiation Oncology.

    PubMed

    Shahbazi-Gahrouei, Daryoush; Saeb, Mohsen; Monadi, Shahram; Jabbari, Iraj

    2017-01-01

    Performing audits plays an important role in a quality assurance program in radiation oncology. Among different algorithms, TiGRT is one of the software applications commonly used for dose calculation. This study aimed to assess the clinical implications of the TiGRT algorithm by measuring doses and comparing them to the calculated doses delivered to patients for a variety of cases, with and without the presence of inhomogeneities and beam modifiers. A nonhomogeneous phantom as quality dose verification phantom, Farmer ionization chambers, and a PC-electrometer (Sun Nuclear, USA) as a reference class electrometer were employed throughout the audit in linear accelerators at 6 and 18 MV energies (Siemens ONCOR Impression Plus, Germany). Seven test cases were performed using a semi CIRS phantom. In homogeneous regions and simple plans for both energies, there was a good agreement between measured and treatment planning system calculated dose. Their relative error was found to be between 0.8% and 3%, which is acceptable for an audit, but in nonhomogeneous organs, such as lung, a few errors were observed. In complex treatment plans, when a wedge or shield was placed in the beam path, the error was within the accepted criteria. In complex beam plans, the difference between measured and calculated dose was found to be 2%-3%. All other differences were between 0.4% and 1%. A good consistency was observed for the same type of energy in the homogeneous and nonhomogeneous phantom for the three-dimensional conformal field with a wedge, shield, or asymmetric field using the TiGRT treatment planning software in the studied center. The results revealed that the national status of TPS calculations and dose delivery for 3D conformal radiotherapy was globally within acceptable standards with no major causes for concern.

  5. Colour compatibility between teeth and dental shade guides in Quinquagenarians and Septuagenarians.

    PubMed

    Cocking, C; Cevirgen, E; Helling, S; Oswald, M; Corcodel, N; Rammelsberg, P; Reinelt, G; Hassel, A J

    2009-11-01

    The aim of this investigation was to determine colour compatibility between dental shade guides, namely, VITA Classical (VC) and VITA 3D-Master (3D), and human teeth in quinquagenarians and septuagenarians. Tooth colour, described in terms of L*a*b* values of the middle third of the facial tooth surface of 1391 teeth, was measured using VITA Easyshade in 195 subjects (48% female). These were compared with the colours (L*a*b* values) of the shade tabs of VC and 3D. The mean coverage error and the percentage of tooth colours within a given colour difference (ΔE(ab)) from the tabs of VC and 3D were calculated. For comparison, hypothetical, optimized, population-specific shade guides were additionally calculated based on discrete optimization techniques for optimizing coverage. The mean coverage error was ΔE(ab) = 3.51 for VC and ΔE(ab) = 2.96 for 3D. Coverage of tooth colours by the tabs of VC and 3D within ΔE(ab) = 2 was 23% and 24%, respectively, (ΔE(ab)
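
    The ΔE(ab) metric in this record is the Euclidean distance in L*a*b* space (CIE76). Below is a minimal, hypothetical sketch of the coverage computation; the ΔE(ab) = 2 cut-off mirrors the record, while the array shapes and names are assumptions:

        import numpy as np

        def delta_e_cie76(lab1, lab2):
            """CIE76 colour difference: Euclidean distance in L*a*b* space."""
            return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

        def coverage(tooth_labs, tab_labs, cutoff=2.0):
            """tooth_labs: (N, 3) tooth colours; tab_labs: (K, 3) shade-tab colours.
            Returns the mean coverage error and the fraction of teeth within the cut-off."""
            d = delta_e_cie76(tooth_labs[:, None, :], tab_labs[None, :, :])
            nearest = d.min(axis=1)          # best-matching tab per tooth
            return nearest.mean(), (nearest <= cutoff).mean()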

  6. Clinical Implications of TiGRT Algorithm for External Audit in Radiation Oncology

    PubMed Central

    Shahbazi-Gahrouei, Daryoush; Saeb, Mohsen; Monadi, Shahram; Jabbari, Iraj

    2017-01-01

    Background: Performing audits plays an important role in a quality assurance program in radiation oncology. Among different algorithms, TiGRT is one of the software applications commonly used for dose calculation. This study aimed to assess the clinical implications of the TiGRT algorithm by measuring doses and comparing them to the calculated doses delivered to patients for a variety of cases, with and without the presence of inhomogeneities and beam modifiers. Materials and Methods: A nonhomogeneous phantom as quality dose verification phantom, Farmer ionization chambers, and a PC-electrometer (Sun Nuclear, USA) as a reference class electrometer were employed throughout the audit in linear accelerators at 6 and 18 MV energies (Siemens ONCOR Impression Plus, Germany). Seven test cases were performed using a semi CIRS phantom. Results: In homogeneous regions and simple plans for both energies, there was a good agreement between measured and treatment planning system calculated dose. Their relative error was found to be between 0.8% and 3%, which is acceptable for an audit, but in nonhomogeneous organs, such as lung, a few errors were observed. In complex treatment plans, when a wedge or shield was placed in the beam path, the error was within the accepted criteria. In complex beam plans, the difference between measured and calculated dose was found to be 2%–3%. All other differences were between 0.4% and 1%. Conclusions: A good consistency was observed for the same type of energy in the homogeneous and nonhomogeneous phantom for the three-dimensional conformal field with a wedge, shield, or asymmetric field using the TiGRT treatment planning software in the studied center. The results revealed that the national status of TPS calculations and dose delivery for 3D conformal radiotherapy was globally within acceptable standards with no major causes for concern. PMID:28989910

  7. Validation of the Malay Version of the Inventory of Functional Status after Childbirth Questionnaire

    PubMed Central

    Noor, Norhayati Mohd; Aziz, Aniza Abd.; Mostapa, Mohd Rosmizaki; Awang, Zainudin

    2015-01-01

    Objective. This study was designed to examine the psychometric properties of Malay version of the Inventory of Functional Status after Childbirth (IFSAC). Design. A cross-sectional study. Materials and Methods. A total of 108 postpartum mothers attending Obstetrics and Gynaecology Clinic, in a tertiary teaching hospital in Malaysia, were involved. Construct validity and internal consistency were performed after the translation, content validity, and face validity process. The data were analyzed using Analysis of Moment Structure version 18 and Statistical Packages for the Social Sciences version 20. Results. The final model consists of four constructs, namely, infant care, personal care, household activities, and social and community activities, with 18 items demonstrating acceptable factor loadings, domain to domain correlation, and best fit (Chi-squared/degree of freedom = 1.678; Tucker-Lewis index = 0.923; comparative fit index = 0.936; and root mean square error of approximation = 0.080). Composite reliability and average variance extracted of the domains ranged from 0.659 to 0.921 and from 0.499 to 0.628, respectively. Conclusion. The study suggested that the four-factor model with 18 items of the Malay version of IFSAC was acceptable to be used to measure functional status after childbirth because it is valid, reliable, and simple. PMID:25667932

  8. Applications of inertial-sensor high-inheritance instruments to DSN precision antenna pointing

    NASA Technical Reports Server (NTRS)

    Goddard, R. E.

    1992-01-01

    Laboratory test results of the initialization and tracking performance of an existing inertial-sensor-based instrument are given. The instrument, although not primarily designed for precision antenna pointing applications, demonstrated an on-average 10-hour tracking error of several millidegrees. The system-level instrument performance is shown by analysis to be sensor limited. Simulated instrument improvements show a tracking error of less than 1 mdeg, which would provide acceptable performance, i.e., low pointing loss, for the DSN 70-m antenna subnetwork, operating at Ka-band (1-cm wavelength).

  9. Applications of inertial-sensor high-inheritance instruments to DSN precision antenna pointing

    NASA Technical Reports Server (NTRS)

    Goddard, R. E.

    1992-01-01

    Laboratory test results of the initialization and tracking performance of an existing inertial-sensor-based instrument are given. The instrument, although not primarily designed for precision antenna pointing applications, demonstrated an on-average 10-hour tracking error of several millidegrees. The system-level instrument performance is shown by analysis to be sensor limited. Simulated instrument improvements show a tracking error of less than 1 mdeg, which would provide acceptable performance, i.e., low pointing loss, for the Deep Space Network 70-m antenna subnetwork, operating at Ka-band (1-cm wavelength).

  10. The nearest neighbor and the Bayes error rates.

    PubMed

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ d·E*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and d·E*(λ) are equal.
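
    Leaving aside the reject option, the classical relation between nearest-neighbor and Bayes error rates is easy to observe numerically. The sketch below is an assumption-laden toy (two 1-D Gaussian classes, scikit-learn's 1-NN, and the Cover-Hart bound E* ≤ E_NN ≤ 2E*(1−E*)); it illustrates the general relationship only and does not implement the (k, l) rule of this record:

        import numpy as np
        from scipy.stats import norm
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        n = 20000
        y = rng.integers(0, 2, n)
        x = rng.normal(loc=2.0 * y, scale=1.0).reshape(-1, 1)  # class means 0 and 2

        bayes_error = norm.cdf(-1.0)  # class overlap at the midpoint x = 1

        knn = KNeighborsClassifier(n_neighbors=1).fit(x[:n // 2], y[:n // 2])
        nn_error = 1.0 - knn.score(x[n // 2:], y[n // 2:])

        print(f"Bayes error ~ {bayes_error:.3f}, 1-NN error ~ {nn_error:.3f}")
        # The empirical 1-NN error lands between E* and 2 E* (1 - E*).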

  11. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  12. Applying an overstress principle in accelerated testing of absorbing mechanisms

    NASA Astrophysics Data System (ADS)

    Tsyss, V. G.; Sergaeva, M. Yu; Sergaev, A. A.

    2018-04-01

    The relevance of using overstress testing as a form of forced (accelerated) testing to determine the pneumatic absorber lifespan was studied. The obtained results demonstrated that at low load overstress the relative error of the absorber lifespan evaluation is no more than 3%. This means that the spread in test results has almost no effect on the lifespan evaluation, and this effect is several times smaller than that in high load overstress tests. Accelerated testing of absorbers with low load overstress is therefore preferable, since the relative error of the lifespan evaluation is negligible.

  13. Safe and effective error rate monitors for SS7 signaling links

    NASA Astrophysics Data System (ADS)

    Schmidt, Douglas C.

    1994-04-01

    This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. An SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models are in the form of recursive digital filters. Time is divided into sequential intervals. The filter's input is the number of errors which have occurred in each interval. The output is the corresponding change in transmit queue length. Engineered EIMs are constructed by comparing an estimated changeover transient with a threshold T, using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover will be initiated and the link will be removed from service. EIMs can be differentiated from SUERMs by the fact that EIMs monitor errors over an interval while SUERMs count errored messages. EIMs offer several advantages over SUERMs, including the fact that they are safe and effective, impose uniform standards of link quality, are easily implemented, and make minimal use of real-time resources.
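
    In the spirit of the EIM described above, a minimal, hypothetical sketch of an interval-based monitor: errors counted per interval drive a first-order recursive filter whose output stands in for the estimated changeover transient, and the link is taken out of service when it crosses a threshold. The coefficient, threshold, and interval length are illustrative assumptions, not engineered SS7 values.

        def error_interval_monitor(errors_per_interval, gain=0.9, impact=1.0, threshold=10.0):
            """First-order recursive estimate of the changeover transient.

            estimate[t] = gain * estimate[t-1] + impact * errors[t]
            Returns the interval index at which changeover would be initiated,
            or None if the threshold is never crossed.
            """
            estimate = 0.0
            for t, n_errors in enumerate(errors_per_interval):
                estimate = gain * estimate + impact * n_errors
                if estimate > threshold:
                    return t  # remove the link from service
            return None

        print(error_interval_monitor([5, 0, 0, 0, 0, 0]))        # short burst decays: None
        print(error_interval_monitor([3, 3, 3, 3, 3, 3, 3, 3]))  # sustained errors trip it: 3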

  14. Interactive Video Coding and Transmission over Heterogeneous Wired-to-Wireless IP Networks Using an Edge Proxy

    NASA Astrophysics Data System (ADS)

    Pei, Yong; Modestino, James W.

    2004-12-01

    Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.

  15. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation for a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass over the island of Bermuda. By far the largest error source, based on the current state of the art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  16. Acceptance threshold hypothesis is supported by chemical similarity of cuticular hydrocarbons in a stingless bee, Melipona asilvai.

    PubMed

    Nascimento, D L; Nascimento, F S

    2012-11-01

    The ability to discriminate nestmates from non-nestmates in insect societies is essential to protect colonies from conspecific invaders. The acceptance threshold hypothesis predicts that organisms whose recognition systems classify recipients without errors should optimize the balance between acceptance and rejection. In this process, cuticular hydrocarbons play an important role as cues of recognition in social insects. The aims of this study were to determine whether guards exhibit a restrictive level of rejection towards chemically distinct individuals, becoming more permissive during the encounters with either nestmate or non-nestmate individuals bearing chemically similar profiles. The study demonstrates that Melipona asilvai (Hymenoptera: Apidae: Meliponini) guards exhibit a flexible system of nestmate recognition according to the degree of chemical similarity between the incoming forager and its own cuticular hydrocarbons profile. Guards became less restrictive in their acceptance rates when they encounter non-nestmates with highly similar chemical profiles, which they probably mistake for nestmates, hence broadening their acceptance level.

  17. Heterodyne range imaging as an alternative to photogrammetry

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian; Cree, Michael; Carnegie, Dale; Payne, Andrew; Conroy, Richard

    2007-01-01

    Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.

  18. Proton range shift analysis on brain pseudo-CT generated from T1 and T2 MR.

    PubMed

    Pileggi, Giampaolo; Speier, Christoph; Sharp, Gregory C; Izquierdo Garcia, David; Catana, Ciprian; Pursley, Jennifer; Amato, Francesco; Seco, Joao; Spadea, Maria Francesca

    2018-05-29

    In radiotherapy, MR imaging is used because it has significantly better soft-tissue contrast than CT, but it lacks the electron density information needed for dose calculation. This work assesses the feasibility of using pseudo-CT (pCT) generated from T1w/T2w MR for proton treatment planning, where proton range comparisons are performed between standard CT and pCT. MR and CT data from 14 glioblastoma patients were used in this study. The pCT was generated by using conversion libraries obtained from tissue segmentation and anatomical regioning of the T1w/T2w MR. For each patient, a plan consisting of three 18 Gy beams was designed on the pCT, for a total of 42 analyzed beams. The plan was then transferred onto the CT, which represented the ground truth. Range shift (RS) between pCT and CT was computed at R80 over 10 slices. The acceptance threshold for RS was set according to the clinical guidelines of two institutions. A γ-index test was also performed on the total dose for each patient. Mean absolute error and bias for the pCT were 124 ± 10 and -16 ± 26 Hounsfield Units (HU), respectively. The median and interquartile range of RS were 0.5 and 1.4 mm, with the highest absolute value being 4.4 mm. Of the 42 beams, 40 showed RS less than the clinical range margin. The two beams with larger RS were both in the cranio-caudal direction and had segmentation errors due to the partial volume effect, leading to misassignment of the HU. This study showed the feasibility of using T1w and T2w MRI to generate a pCT for proton therapy treatment, thus avoiding the use of a planning CT and allowing better target definition and possibilities for online adaptive therapies. Further improvements of the methodology are still required to improve the conversion from MRI intensities to HUs.
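
    R80 here is the depth on the distal dose fall-off at which the dose drops to 80% of its maximum. A minimal, hypothetical way to extract it from a sampled depth-dose curve (the linear interpolation and all names are assumptions):

        import numpy as np

        def r80(depths_mm, dose):
            """Depth at which the distal fall-off crosses 80% of the maximum dose."""
            dose = np.asarray(dose, dtype=float)
            depths_mm = np.asarray(depths_mm, dtype=float)
            i_max = dose.argmax()
            target = 0.8 * dose[i_max]
            distal_dose, distal_depth = dose[i_max:], depths_mm[i_max:]
            # np.interp needs increasing x, so interpolate on the reversed fall-off
            return float(np.interp(target, distal_dose[::-1], distal_depth[::-1]))

        # Range shift for one beam: rs = r80(depths, dose_pct) - r80(depths, dose_ct)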

  19. Medication errors in anesthesia: unacceptable or unavoidable?

    PubMed

    Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra

    Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, the problem needs attention on a priority basis, since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and system is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, under-dosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Similar developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.

  20. Truncation of CPC solar collectors and its effect on energy collection

    NASA Astrophysics Data System (ADS)

    Carvalho, M. J.; Collares-Pereira, M.; Gordon, J. M.; Rabl, A.

    1985-01-01

    Analytic expressions are derived for the angular acceptance function of two-dimensional compound parabolic concentrator solar collectors (CPC's) of arbitrary degree of truncation. Taking into account the effect of truncation on both optical and thermal losses in real collectors, the increase in monthly and yearly collectible energy is also evaluated. Prior analyses that have ignored the correct behavior of the angular acceptance function at large angles for truncated collectors are shown to be in error by 0-2 percent in calculations of yearly collectible energy for stationary collectors.

  1. Guidelines for the assessment and acceptance of potential brain-dead organ donors

    PubMed Central

    Westphal, Glauco Adrieno; Garcia, Valter Duro; de Souza, Rafael Lisboa; Franke, Cristiano Augusto; Vieira, Kalinca Daberkow; Birckholz, Viviane Renata Zaclikevis; Machado, Miriam Cristine; de Almeida, Eliana Régia Barbosa; Machado, Fernando Osni; Sardinha, Luiz Antônio da Costa; Wanzuita, Raquel; Silvado, Carlos Eduardo Soares; Costa, Gerson; Braatz, Vera; Caldeira Filho, Milton; Furtado, Rodrigo; Tannous, Luana Alves; de Albuquerque, André Gustavo Neves; Abdala, Edson; Gonçalves, Anderson Ricardo Roman; Pacheco-Moreira, Lúcio Filgueiras; Dias, Fernando Suparregui; Fernandes, Rogério; Giovanni, Frederico Di; de Carvalho, Frederico Bruzzi; Fiorelli, Alfredo; Teixeira, Cassiano; Feijó, Cristiano; Camargo, Spencer Marcantonio; de Oliveira, Neymar Elias; David, André Ibrahim; Prinz, Rafael Augusto Dantas; Herranz, Laura Brasil; de Andrade, Joel

    2016-01-01

    Organ transplantation is the only alternative for many patients with terminal diseases. The increasing disproportion between the high demand for organ transplants and the low rate of transplants actually performed is worrisome. Some of the causes of this disproportion are errors in the identification of potential organ donors and in the determination of contraindications by the attending staff. Therefore, the aim of the present document is to provide guidelines for intensive care multi-professional staffs for the recognition, assessment and acceptance of potential organ donors. PMID:27737418

  2. Monte Carlo tests of the ELIPGRID-PC algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
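
    The validation idea reproduces in miniature: drop a randomly placed and oriented elliptical hot spot onto a square sampling grid many times and count how often at least one grid node falls inside. All parameters below (semi-axes, spacing, trial count) are illustrative assumptions, not the ELIPGRID test cases:

        import numpy as np

        def hit_probability(a, b, spacing, n_trials=100_000, seed=1):
            """Monte Carlo probability that a square grid detects an elliptical hot spot."""
            rng = np.random.default_rng(seed)
            cx = rng.uniform(0, spacing, n_trials)       # centre within one grid cell
            cy = rng.uniform(0, spacing, n_trials)       # (sufficient by symmetry)
            theta = rng.uniform(0, np.pi, n_trials)      # random orientation
            k = int(np.ceil(max(a, b) / spacing)) + 1    # nodes near enough to matter
            nodes = np.arange(-k, k + 2) * spacing
            gx, gy = (g.ravel() for g in np.meshgrid(nodes, nodes))
            dx = gx[None, :] - cx[:, None]
            dy = gy[None, :] - cy[:, None]
            # Rotate node offsets into the ellipse frame and test the ellipse equation
            u = dx * np.cos(theta)[:, None] + dy * np.sin(theta)[:, None]
            v = -dx * np.sin(theta)[:, None] + dy * np.cos(theta)[:, None]
            inside = (u / a) ** 2 + (v / b) ** 2 <= 1.0
            return inside.any(axis=1).mean()

        print(hit_probability(a=0.6, b=0.2, spacing=1.0))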

  3. Mitigate the impact of transmitter finite extinction ratio using K-means clustering algorithm for 16QAM signal

    NASA Astrophysics Data System (ADS)

    Yu, Miao; Li, Yan; Shu, Tong; Zhang, Yifan; Hong, Xiaobin; Qiu, Jifang; Zuo, Yong; Guo, Hongxiang; Li, Wei; Wu, Jian

    2018-02-01

    A method of recognizing 16QAM signals based on the k-means clustering algorithm is proposed to mitigate the impact of finite transmitter extinction ratio. Pilot symbols with 0.39% overhead are assigned as the initial centroids of the k-means clustering algorithm. Simulation results in a 10 GBaud 16QAM system show that the proposed method obtains higher identification precision than the traditional decision method under finite ER and IQ mismatch. Specifically, the proposed method improves the required OSNR by 5.5 dB, 4.5 dB, 4 dB and 3 dB at the FEC limit with ER = 12 dB, 16 dB, 20 dB and 24 dB, respectively, and the acceptable bias error and IQ mismatch ranges are widened by 767% and 360% with ER = 16 dB, respectively.
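
    A minimal sketch of the idea: pilot-initialized k-means recovers the distorted decision centroids of a 16QAM constellation. The scikit-learn usage mirrors the pilot-seeding step; the toy channel distortion and all parameter values are illustrative assumptions:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(7)
        levels = np.array([-3.0, -1.0, 1.0, 3.0])
        ideal = np.array([(i, q) for i in levels for q in levels])  # 16 constellation points

        # Toy channel: gain/offset distortion (a finite-ER-like bias) plus Gaussian noise
        tx_idx = rng.integers(0, 16, 5000)
        distort = lambda s: s * 0.85 + np.array([0.4, -0.3])
        rx = distort(ideal[tx_idx]) + rng.normal(scale=0.25, size=(5000, 2))

        # Pilot symbols reveal the distorted location of each constellation point;
        # they seed k-means, so learned cluster j tracks constellation point j.
        km = KMeans(n_clusters=16, init=distort(ideal), n_init=1).fit(rx)
        ser = (km.labels_ != tx_idx).mean()
        print(f"symbol error rate with clustered decisions: {ser:.4f}")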

  4. Identification of poor households for premium exemptions in Ghana's National Health Insurance Scheme: empirical analysis of three strategies.

    PubMed

    Aryeetey, Genevieve Cecilia; Jehu-Appiah, Caroline; Spaan, Ernst; D'Exelle, Ben; Agyepong, Irene; Baltussen, Rob

    2010-12-01

    To evaluate the effectiveness of three alternative strategies to identify poor households: means testing (MT), proxy means testing (PMT) and participatory wealth ranking (PWR) in urban, rural and semi-urban settings in Ghana. The primary motivation was to inform implementation of the National Health Insurance policy of premium exemptions for the poorest households. Survey of 145-147 households per setting to collect data on consumption expenditure to estimate MT measures and on household assets to estimate PMT measures. We organized focus group discussions to derive PWR measures. We compared errors of inclusion and exclusion of PMT and PWR relative to MT, the latter being considered the gold standard measure to identify poor households. Compared to MT, the errors of exclusion and inclusion of PMT ranged between 0.46-0.63 and 0.21-0.36, respectively, and of PWR between 0.03-0.73 and 0.17-0.60, respectively, depending on the setting. Proxy means testing and PWR have considerable errors of exclusion and inclusion in comparison with MT. PWR is a subjective measure of poverty and has appeal because it reflects the community's perceptions of poverty. However, as its definition of the poor varies across settings, its acceptability as a uniform strategy to identify the poor in Ghana may be questionable. PMT and MT are potential strategies to identify the poor, and their relative societal attractiveness should be judged in a broader economic analysis. This study also holds relevance to other programmes that require identification of the poor in low-income countries. © 2010 Blackwell Publishing Ltd.

  5. Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2017-05-01

    The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data were evaluated in a bluff body wake. To allow for the ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
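
    The Eulerian material acceleration feeding such pressure estimators is Du/Dt = ∂u/∂t + (u·∇)u; below is a minimal, hypothetical central-difference sketch for gridded planar two-component data (the array layout, spacings, and names are assumptions):

        import numpy as np

        def material_acceleration(u, v, dx, dy, dt):
            """Central-difference Du/Dt for planar two-component velocity fields.

            u, v: arrays of shape (nt, ny, nx), velocity snapshots in time.
            Returns (ax, ay) at the interior time steps 1..nt-2.
            """
            dudt = (u[2:] - u[:-2]) / (2.0 * dt)
            dvdt = (v[2:] - v[:-2]) / (2.0 * dt)
            # Spatial gradients evaluated at the same interior time steps
            dudy, dudx = np.gradient(u[1:-1], dy, dx, axis=(1, 2))
            dvdy, dvdx = np.gradient(v[1:-1], dy, dx, axis=(1, 2))
            uc, vc = u[1:-1], v[1:-1]
            ax = dudt + uc * dudx + vc * dudy
            ay = dvdt + uc * dvdx + vc * dvdy
            return ax, ay

        # The planar pressure gradient then follows from -rho * (ax, ay)
        # (plus viscous terms), to be integrated spatially over the PIV domain.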

  6. Clinical value of CT-based preoperative software assisted lung lobe volumetry for predicting postoperative pulmonary function after lung surgery

    NASA Astrophysics Data System (ADS)

    Wormanns, Dag; Beyer, Florian; Hoffknecht, Petra; Dicken, Volker; Kuhnigk, Jan-Martin; Lange, Tobias; Thomas, Michael; Heindel, Walter

    2005-04-01

    This study was aimed to evaluate a morphology-based approach for prediction of postoperative forced expiratory volume in one second (FEV1) after lung resection from preoperative CT scans. Fifteen patients with surgically treated (lobectomy or pneumonectomy) bronchogenic carcinoma were enrolled in the study. A preoperative chest CT and pulmonary function tests before and after surgery were performed. CT scans were analyzed by prototype software: automated segmentation and volumetry of lung lobes was performed with minimal user interaction. The determined volumes of the different lung lobes were used to predict postoperative FEV1 as a percentage of the preoperative values. Predicted FEV1 values were compared to the observed postoperative values as the standard of reference. Patients underwent lobectomy in twelve cases (6 upper lobes; 1 middle lobe; 5 lower lobes; 6 right side; 6 left side) and pneumonectomy in three cases. Automated calculation of predicted postoperative lung function was successful in all cases. Predicted FEV1 ranged from 54% to 95% (mean 75% ± 11%) of the preoperative values. Two cases with obviously erroneous lung function tests were excluded from analysis. The mean error of predicted FEV1 was 20 ± 160 ml, indicating the absence of systematic error; the mean absolute error was 7.4 ± 3.3%, or 137 ± 77 ml, respectively. The 200 ml reproducibility criterion for FEV1 was met in 11 of 13 cases (85%). In conclusion, software-assisted prediction of postoperative lung function yielded a clinically acceptable agreement with the observed postoperative values. This method might add useful information for the evaluation of functional operability of patients with lung cancer.
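
    A common volumetry-based prediction scales the preoperative FEV1 by the fraction of functional lung volume that remains; the sketch below assumes that simple proportional model (the study's exact formula may differ), with hypothetical lobe volumes:

        def predicted_postop_fev1(fev1_pre, lobe_volumes_ml, resected_lobes):
            """ppoFEV1 = FEV1_pre * (1 - resected volume / total lung volume)."""
            total = sum(lobe_volumes_ml.values())
            resected = sum(lobe_volumes_ml[lobe] for lobe in resected_lobes)
            return fev1_pre * (1.0 - resected / total)

        # Hypothetical CT-derived lobe volumes (ml), right upper lobectomy:
        volumes = {"RUL": 900, "RML": 400, "RLL": 1100, "LUL": 1000, "LLL": 1000}
        print(predicted_postop_fev1(2.4, volumes, ["RUL"]))   # about 1.91 litres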

  7. Research of laser echo signal simulator

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Shi, Rui; Wang, Xin; Li, Zhou

    2015-11-01

    The laser echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR. A system model and a time series model of the laser echo signal simulator are established. Influential factors that could induce fixed and random errors in the simulated return signals are analyzed, and these system insertion errors are then quantified. Using this theoretical model, the simulation system is investigated experimentally. The results, corrected by subtracting the fixed error, indicate that the range error of the simulated laser return signal is less than 0.25 m and that the system can simulate distances from 50 m to 20 km.

  8. Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method

    NASA Astrophysics Data System (ADS)

    Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu

    2017-10-01

    Due to its high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research areas. However, the error caused by the lever structure has a great influence on profile measurement, so this paper analyzes the error of a high-precision, large-range lever-type stylus profilometer. The errors are corrected by the Nelder-Mead simplex method, and the results are verified by a spherical surface calibration. The method effectively reduces the measurement error and improves the accuracy of the stylus profilometer in large-range measurement.
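
    In outline, such a correction fits the parameters of a lever-error model so that a measured calibration-sphere profile best matches its nominal form, and SciPy's Nelder-Mead simplex search can drive the fit. The error model, parameters, and data below are all illustrative assumptions, not the paper's model:

        import numpy as np
        from scipy.optimize import minimize

        def corrected_height(z_raw, x, arm_len, tilt):
            """Toy lever-error model: arc-shortening term plus a tilt term."""
            return z_raw + z_raw ** 2 / (2.0 * arm_len) + tilt * x

        def sphere_misfit(params, x, z_raw, radius):
            arm_len, tilt, x0, z0 = params
            z = corrected_height(z_raw, x, arm_len, tilt)
            r = np.hypot(x - x0, z - z0)       # distance to the assumed sphere centre
            return np.sum((r - radius) ** 2)

        # Synthetic scan of a radius-5 calibration sphere, distorted by the model
        x = np.linspace(-4.0, 4.0, 200)
        z_true = np.sqrt(25.0 - x ** 2) - 5.0
        z_raw = z_true - z_true ** 2 / (2.0 * 60.0) - 0.01 * x   # approximate inverse

        res = minimize(sphere_misfit, x0=[50.0, 0.0, 0.0, -5.0],
                       args=(x, z_raw, 5.0), method="Nelder-Mead")
        print(res.x)  # fitted arm length, tilt, and sphere centre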

  9. A Posteriori Correction of Forecast and Observation Error Variances

    NASA Technical Reports Server (NTRS)

    Rukhovets, Leonid

    2005-01-01

    The proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a₃ = μ₃/σ³ and the kurtosis a₄ = μ₄/σ⁴ − 3, where μᵢ is the i-th order central moment and σ is the standard deviation. It is well known that for a normal distribution a₃ = a₄ = 0.
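
    Checking that assumption on a residual series is direct with SciPy's sample skewness and excess kurtosis, both near zero for normal data; the synthetic residuals below are an assumption for illustration:

        import numpy as np
        from scipy.stats import skew, kurtosis

        rng = np.random.default_rng(42)
        omf = rng.normal(loc=0.0, scale=1.5, size=10_000)   # stand-in O-F residuals

        a3 = skew(omf)                    # mu_3 / sigma^3
        a4 = kurtosis(omf, fisher=True)   # mu_4 / sigma^4 - 3 (excess kurtosis)
        print(f"skewness a3 = {a3:.3f}, excess kurtosis a4 = {a4:.3f}")
        # Values near zero support the normality assumption behind the correction.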

  10. Methodology evaluation: Effects of independent verification and integration on one class of application

    NASA Technical Reports Server (NTRS)

    Page, J.

    1981-01-01

    The effects of an independent verification and integration (V and I) methodology on one class of application are described. Resource profiles are discussed. The development environment is reviewed. Seven measures are presented to test the hypothesis that V and I improve the development and product. The V and I methodology provided: (1) a decrease in requirements ambiguities and misinterpretation; (2) no decrease in design errors; (3) no decrease in the cost of correcting errors; (4) a decrease in the cost of system and acceptance testing; (5) an increase in early discovery of errors; (6) no improvement in the quality of software put into operation; and (7) a decrease in productivity and an increase in cost.

  11. Correcting for particle counting bias error in turbulent flow

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    Even an ideal seeding device generating particles that exactly follow the flow would still leave a major source of error, namely a particle counting bias wherein the probability of measuring velocity is a function of velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way, and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation was constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.

  12. Accounting for measurement error in log regression models with applications to accelerated testing.

    PubMed

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
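
    The weighted-regression approximation can be sketched as iteratively re-weighted least squares: alternate a weighted fit of log(y) with a weight update from the current fitted means. The weight rule below (squared fitted means, the delta-method weighting when the underlying error is additive) is an illustrative assumption, not the paper's exact estimator:

        import numpy as np

        def irls_log_regression(X, y, n_iter=20, ridge=1e-8):
            """Weighted fit of log(y) ~ X beta via IRLS.

            Weights w = mu^2 (delta method) approximate additive error on the
            original scale when fitting on the log scale.
            """
            log_y = np.log(y)
            beta = np.linalg.lstsq(X, log_y, rcond=None)[0]   # OLS start
            for _ in range(n_iter):
                mu = np.exp(X @ beta)            # fitted means on the original scale
                w = mu ** 2
                WX = X * w[:, None]
                beta = np.linalg.solve(X.T @ WX + ridge * np.eye(X.shape[1]),
                                       WX.T @ log_y)
            return beta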

  13. Improved Calibration through SMAP RFI Change Detection

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey; De Amici, Giovanni; Mohammed, Priscilla; Peng, Jinzheng

    2017-01-01

    Anthropogenic Radio-Frequency Interference (RFI) drove both the SMAP (Soil Moisture Active Passive) microwave radiometer hardware and Level 1 science algorithm designs to use new technology and techniques for the first time on a spaceflight project. Care was taken to provide special features allowing the detection and removal of harmful interference in order to meet the error budget. Nonetheless, the project accepted a risk that RFI and its mitigation would exceed the 1.3-K error budget. Thus, RFI will likely remain a challenge afterwards due to its changing and uncertain nature. To address the challenge, we seek to answer the following questions: How does RFI evolve over the SMAP lifetime? What calibration error does the changing RFI environment cause? Can time series information be exploited to reduce these errors and improve calibration for all science products reliant upon SMAP radiometer data? In this talk, we address the first question.

  14. Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc

    2013-11-15

    Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, both for arbitrary templates or template-free implants (such as robotic templates). Methods: Eight clinical cases were chosen randomly from a bank of patients previously treated in our clinic to test our method. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated from the algorithm for different numbers of catheters. The best plan is chosen from different dosimetry criteria and will automatically provide the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, the algorithm was validated against prostate clinical cases, using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested in breast interstitial HDR brachytherapy cases. Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with better or equivalent dose distributions could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable since no statistical difference was found when compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the computation time decreases with the number of points and no effects were observed on the dosimetric indices when varying the number of sampling points and the number of iterations, they were fixed to 2500 and 100, respectively. The computation time to obtain ten complete treatment plans ranging from 9 to 18 catheters, with the corresponding dosimetric indices, was 90 s. However, 93% of the computation time is used by a research version of IPSA. For the breast, on average, the Radiation Therapy Oncology Group recommendations would be satisfied down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of V100, dose homogeneity index, and D90. Conclusions: The authors have devised a simple, fast and efficient method to optimize the number and position of catheters in interstitial HDR brachytherapy. The method was shown to be robust for both prostate and breast HDR brachytherapy. More importantly, the computation time of the algorithm is acceptable for clinical use. Ultimately, this catheter optimization algorithm could be coupled with a 3D ultrasound system to allow real-time guidance and planning in HDR brachytherapy.
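
    A centroidal Voronoi tessellation of a target region can be approximated by running k-means on dense uniform samples of the region: the converged cluster centres are approximately the centroids of their own Voronoi cells, i.e. uniformly spread candidate catheter positions. The mask, counts, and scikit-learn usage below are illustrative assumptions, not the authors' adapted algorithm:

        import numpy as np
        from sklearn.cluster import KMeans

        def cvt_positions(inside, n_catheters, n_samples=20_000, seed=3):
            """Approximate 2-D CVT points of a region via k-means.

            inside: callable (x, y) -> boolean array, True inside the target contour.
            Returns an (n_catheters, 2) array of uniformly spread positions.
            """
            rng = np.random.default_rng(seed)
            pts = np.empty((0, 2))
            while len(pts) < n_samples:                  # rejection-sample the region
                cand = rng.uniform(-1, 1, size=(n_samples, 2))
                pts = np.vstack([pts, cand[inside(cand[:, 0], cand[:, 1])]])
            km = KMeans(n_clusters=n_catheters, n_init=5).fit(pts[:n_samples])
            return km.cluster_centers_

        # Example: an elliptical stand-in for a target contour
        ellipse = lambda x, y: (x / 0.9) ** 2 + (y / 0.6) ** 2 <= 1.0
        print(cvt_positions(ellipse, n_catheters=12))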

  15. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.

  16. Detection, prevention, and rehabilitation of amblyopia.

    PubMed

    Spiritus, M

    1997-10-01

    The necessity of visual preschool screening for reducing the prevalence of amblyopia is widely accepted. The beneficial results of large-scale screening programs conducted in Scandinavia are reported. Screening monocular visual acuity at 3.5 to 4 years of age appears to be an excellent basis for detecting and treating amblyopia and an acceptable compromise between the pitfalls encountered in screening younger children and the cost-to-benefit ratio. In this respect, several preschoolers' visual acuity charts have been evaluated. New recently developed small-target random stereotests and binocular suppression tests have also been developed with the aim of correcting the many false negatives (anisometropic amblyopia or bilateral high ametropia) induced by the usual stereotests. Longitudinal studies demonstrate that correction of high refractive errors decreases the risk of amblyopia and does not impede emmetropization. The validity of various photoscreening and videoscreening procedures for detecting refractive errors in infants prior to the onset of strabismus or amblyopia, as well as alternatives to conventional occlusion therapy, is discussed.

  17. Mars approach navigation using Doppler and range measurements to surface beacons and orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Thurman, Sam W.; Estefan, Jeffrey A.

    1991-01-01

    Approximate analytical models are developed and used to construct an error covariance analysis for investigating the range of orbit determination accuracies which might be achieved for typical Mars approach trajectories. The sensitivity of orbit determination accuracy to beacon/orbiter position errors and to small spacecraft force modeling errors is also investigated. The results indicate that the orbit determination performance obtained from both Doppler and range data is a strong function of the inclination of the approach trajectory to the Martian equator, for surface beacons, and for orbiters, the inclination relative to the orbital plane. Large variations in performance were also observed for different approach velocity magnitudes; Doppler data in particular were found to perform poorly in determining the downtrack (along the direction of flight) component of spacecraft position. In addition, it was found that small spacecraft acceleration modeling errors can induce large errors in the Doppler-derived downtrack position estimate.

  18. Study of an instrument for sensing errors in a telescope wavefront

    NASA Technical Reports Server (NTRS)

    Golden, L. J.; Shack, R. V.; Slater, D. N.

    1973-01-01

    Partial results of theoretical and experimental investigations of different focal plane sensor configurations for determining the error in a telescope wavefront are presented. The coarse-range and fine-range sensors are used in the experimentation. The design of a wavefront error simulator is presented, along with the Hartmann test, the shearing polarization interferometer, the Zernike test, and the Zernike polarization test.

  19. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Mark; Tuen Mun Hospital, Hong Kong; Grehn, Melanie

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  20. A 1000-year record of dry conditions in the eastern Canadian prairies reconstructed from oxygen and carbon isotope measurements on Lake Winnipeg sediment organics

    USGS Publications Warehouse

    Buhay, W.M.; Simpson, S.; Thorleifson, H.; Lewis, M.; King, J.; Telka, A.; Wilkinson, Philip M.; Babb, J.; Timsic, S.; Bailey, D.

    2009-01-01

    A short sediment core (162 cm), covering the period AD 920-1999, was sampled from the south basin of Lake Winnipeg for a suite of multi-proxy analyses leading towards a detailed characterisation of the recent millennial lake environment and hydroclimate of southern Manitoba, Canada. Information on the frequency and duration of major dry periods in southern Manitoba, in light of the changes that are likely to occur as a result of an increasingly warming atmosphere, is of specific interest in this study. Intervals of relatively enriched lake sediment cellulose oxygen isotope values (δ18Ocellulose) were found to occur from AD 1180 to 1230 (error range: AD 1104-1231 to 1160-1280), 1610-1640 (error range: AD 1571-1634 to 1603-1662), 1670-1720 (error range: AD 1643-1697 to 1692-1738) and 1750-1780 (error range: AD 1724-1766 to 1756-1794). Regional water balance, inferred from calculated Lake Winnipeg water oxygen isotope values (δ18Oinf-lw), suggests that the ratio of lake evaporation to catchment input may have been 25-40% higher during these isotopically distinct periods. Associated with the enriched δ18Ocellulose intervals are some depleted carbon isotope values associated with more abundantly preserved sediment organic matter (δ13COM). These suggest reduced microbial oxidation of terrestrially derived organic matter and/or subdued lake productivity during periods of minimised input of nutrients from the catchment area. With reference to other corroborating evidence, it is suggested that the AD 1180-1230, 1610-1640, 1670-1720 and 1750-1780 intervals represent four distinctly drier periods (droughts) in southern Manitoba, Canada. Additionally, dry periods of lower magnitude and duration may have also occurred from 1320 to 1340 (error range: AD 1257-1363), 1530-1540 (error range: AD 1490-1565 to 1498-1572) and 1570-1580 (error range: AD 1531-1599 to 1539-1606). © 2009 John Wiley & Sons, Ltd.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Y; Macq, B; Bondar, L

    Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of errors. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT-errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.
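
    A minimal Python sketch of the shift comparison described above: the per-spot prediction error is the absolute difference between the prompt-gamma (PG) profile shift and the Bragg peak (BP) shift, and the two shift series are tested with a Pearson correlation. The per-spot values below are invented for illustration; this is not the authors' simulation code.

        import numpy as np
        from scipy.stats import pearsonr

        def prediction_error(pg_shift_mm, bp_shift_mm):
            # absolute difference between PG profile shift and BP shift per spot
            return np.abs(np.asarray(pg_shift_mm) - np.asarray(bp_shift_mm))

        pg = np.array([1.9, 2.2, -0.8, 3.1, 0.4])   # hypothetical PG shifts (mm)
        bp = np.array([2.0, 2.1, -1.0, 3.3, 0.2])   # hypothetical BP shifts (mm)

        r, p = pearsonr(pg, bp)                     # strength of the PG/BP association
        err = prediction_error(pg, bp)
        print(r, p, err.mean(), err.std())          # mean and SD of prediction error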

  2. Day-to-day reliability of gait characteristics in rats.

    PubMed

    Raffalt, Peter C; Nielsen, Louise R; Madsen, Stefan; Munk Højberg, Laurits; Pingel, Jessica; Nielsen, Jens Bo; Wienecke, Jacob; Alkjær, Tine

    2018-04-27

    The purpose of the present study was to determine the day-to-day reliability in stride characteristics in rats during treadmill walking obtained with two-dimensional (2D) motion capture. Kinematics were recorded from 26 adult rats during walking at 8 m/min, 12 m/min and 16 m/min on two separate days. Stride length, stride time, contact time, swing time and hip, knee and ankle joint range of motion were extracted from 15 strides. The relative reliability was assessed using intra-class correlation coefficients (ICC(1,1) and ICC(3,1)). The absolute reliability was determined using measurement error (ME). Across walking speeds, the relative reliability ranged from fair to good (ICCs between 0.4 and 0.75). The ME was below 91 mm for stride lengths, below 55 ms for the temporal stride variables and below 6.4° for the joint angle range of motion. In general, the results indicated an acceptable day-to-day reliability of the gait pattern parameters observed in rats during treadmill walking. The results of the present study may serve as a reference material that can help future intervention studies on rat gait characteristics both with respect to the selection of outcome measures and in the interpretation of the results. Copyright © 2018 Elsevier Ltd. All rights reserved.
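
    For reference, ICC(3,1) follows from the two-way ANOVA decomposition of Shrout and Fleiss. The sketch below is a generic implementation, not the authors' code, and the stride-length values are hypothetical.

        import numpy as np

        def icc_3_1(x):
            # ICC(3,1): two-way mixed model, single measures (Shrout & Fleiss)
            x = np.asarray(x, float)
            n, k = x.shape                          # subjects x sessions
            grand = x.mean()
            ss_total = ((x - grand) ** 2).sum()
            ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
            ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
            ms_rows = ss_rows / (n - 1)
            ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

        # hypothetical stride lengths (mm) for four rats on two days
        print(icc_3_1([[120, 124], [135, 131], [118, 121], [142, 140]]))  # ~0.93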

  3. Multi-GNSS signal-in-space range error assessment - Methodology and results

    NASA Astrophysics Data System (ADS)

    Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André

    2018-06-01

    The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
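
    A minimal sketch of a global-average SISRE computation from broadcast-minus-precise orbit and clock differences. The weighting factors used below (0.98 for the radial component, 1/49 for the along/cross contribution) are the values commonly quoted for GPS altitude; the paper derives constellation-specific weights, so treat these numbers as illustrative.

        import numpy as np

        def sisre_gps(dr, da, dc, dt_m):
            # dr, da, dc: radial/along-track/cross-track orbit errors (m)
            # dt_m: clock error expressed in metres (c * dt)
            # weights below are the commonly quoted GPS (MEO) values
            return np.sqrt((0.98 * dr - dt_m) ** 2 + (da ** 2 + dc ** 2) / 49.0)

        # hypothetical per-epoch errors; an RMS over epochs mirrors the yearly statistics
        e = sisre_gps(np.array([0.3, -0.2]), np.array([1.1, 0.7]),
                      np.array([-0.5, 0.4]), np.array([0.2, -0.1]))
        print(np.sqrt((e ** 2).mean()))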

  4. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  5. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  6. Prospective memory in an air traffic control simulation: External aids that signal when to act

    PubMed Central

    Loft, Shayne; Smith, Rebekah E.; Bhaskara, Adella

    2011-01-01

    At work and in our personal life we often need to remember to perform intended actions at some point in the future, referred to as Prospective Memory. Individuals sometimes forget to perform intentions in safety-critical work contexts. Holding intentions can also interfere with ongoing tasks. We applied theories and methods from the experimental literature to test the effectiveness of external aids in reducing prospective memory error and costs to ongoing tasks in an air traffic control simulation. Participants were trained to accept and hand-off aircraft, and to detect aircraft conflicts. For the prospective memory task participants were required to substitute alternative actions for routine actions when accepting target aircraft. Across two experiments, external display aids were provided that presented the details of target aircraft and associated intended actions. We predicted that aids would only be effective if they provided information that was diagnostic of target occurrence and in this study we examined the utility of aids that directly cued participants when to allocate attention to the prospective memory task. When aids were set to flash when the prospective memory target aircraft needed to be accepted, prospective memory error and costs to ongoing tasks of aircraft acceptance and conflict detection were reduced. In contrast, aids that did not alert participants specifically when the target aircraft were present provided no advantage compared to when no aids were used. These findings have practical implications for the potential relative utility of automated external aids for occupations where individuals monitor multi-item dynamic displays. PMID:21443381

  7. Prospective memory in an air traffic control simulation: external aids that signal when to act.

    PubMed

    Loft, Shayne; Smith, Rebekah E; Bhaskara, Adella

    2011-03-01

    At work and in our personal life we often need to remember to perform intended actions at some point in the future, referred to as Prospective Memory. Individuals sometimes forget to perform intentions in safety-critical work contexts. Holding intentions can also interfere with ongoing tasks. We applied theories and methods from the experimental literature to test the effectiveness of external aids in reducing prospective memory error and costs to ongoing tasks in an air traffic control simulation. Participants were trained to accept and hand-off aircraft and to detect aircraft conflicts. For the prospective memory task, participants were required to substitute alternative actions for routine actions when accepting target aircraft. Across two experiments, external display aids were provided that presented the details of target aircraft and associated intended actions. We predicted that aids would only be effective if they provided information that was diagnostic of target occurrence, and in this study, we examined the utility of aids that directly cued participants when to allocate attention to the prospective memory task. When aids were set to flash when the prospective memory target aircraft needed to be accepted, prospective memory error and costs to ongoing tasks of aircraft acceptance and conflict detection were reduced. In contrast, aids that did not alert participants specifically when the target aircraft were present provided no advantage compared to when no aids were used. These findings have practical implications for the potential relative utility of automated external aids for occupations where individuals monitor multi-item dynamic displays.

  8. Synthesis and analysis of precise spaceborne laser ranging systems, volume 1. [link analysis

    NASA Technical Reports Server (NTRS)

    Paddon, E. A.

    1977-01-01

    Measurement accuracy goals of 2 cm rms range estimation error and 0.003 cm/sec rms range rate estimation error, with no more than 1 cm (range) static bias error are requirements for laser measurement systems to be used in planned space-based earth physics investigations. Constraints and parameters were defined for links between a high altitude, transmit/receive satellite (HATRS), and one of three targets: a low altitude target satellite, passive (LATS), and active low altitude target, and a ground-based target, as well as with operations with a primary transmit/receive terminal intended to be carried as a shuttle payload, in conjunction with the Spacelab program.

  9. Seed Placement in Permanent Breast Seed Implant Brachytherapy: Are Concerns Over Accuracy Valid?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, Daniel, E-mail: dmorton@bccancer.bc.ca; Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia; Hilts, Michelle

    Purpose: To evaluate seed placement accuracy in permanent breast seed implant brachytherapy (PBSI), to identify any systematic errors and evaluate their effect on dosimetry. Methods and Materials: Treatment plans and postimplant computed tomography scans for 20 PBSI patients were spatially registered and used to evaluate differences between planned and implanted seed positions, termed seed displacements. For each patient, the mean total and directional seed displacements were determined in both standard room coordinates and in needle coordinates relative to needle insertion angle. Seeds were labeled according to their proximity to the anatomy within the breast, to evaluate the influence of anatomic regions on seed placement. Dosimetry within an evaluative target volume (seroma + 5 mm), skin, breast, and ribs was evaluated to determine the impact of seed placement on the treatment. Results: The overall mean (±SD) difference between implanted and planned positions was 9 ± 5 mm for the aggregate seed population. No significant systematic directional displacements were observed for this whole population. However, for individual patients, systematic displacements were observed, implying that intrapatient offsets occur during the procedure. Mean displacements for seeds in the different anatomic areas were not found to be significantly different from the mean for the entire seed population. However, small directional trends were observed within the anatomy, potentially indicating some bias in the delivery. Despite observed differences between the planned and implanted seed positions, the median (range) V90 for the 20 patients was 97% (66%-100%), and acceptable dosimetry was achieved for critical structures. Conclusions: No significant trends or systematic errors were observed in the placement of seeds in PBSI, including seeds implanted directly into the seroma. Recorded seed displacements may be related to intrapatient setup adjustments. Despite observed seed displacements, acceptable postimplant dosimetry was achieved.

  10. Evaluation of the Intel RealSense SR300 camera for image-guided interventions and application in vertebral level localization

    NASA Astrophysics Data System (ADS)

    House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras, can capture 3D pose information at several magnitudes lower cost, size, and weight. We propose to use the Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparing to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and display the CT image overlaid on the optical image. RESULTS: The camera yielded a median position error of 3.3 mm (95th percentile 6.7 mm) and orientation error of 1.6° (95th percentile 4.3°) in a 20 × 16 × 10 cm workspace, constantly maintaining proper marker orientation. The model and surface aligned correctly, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
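
    A generic sketch of how position and orientation errors against a reference tracker can be computed (this is not the PLUS toolkit implementation; the poses below are hypothetical, with quaternions in x, y, z, w order):

        import numpy as np
        from scipy.spatial.transform import Rotation as R

        def pose_errors(p_test, q_test, p_ref, q_ref):
            # position error: Euclidean distance between reported positions
            pos_err = np.linalg.norm(np.asarray(p_test) - np.asarray(p_ref))
            # orientation error: angle of the relative rotation, in degrees
            rel = R.from_quat(q_test) * R.from_quat(q_ref).inv()
            return pos_err, np.degrees(rel.magnitude())

        # ~3 degree rotation about z plus a few millimetres of offset
        print(pose_errors([10.1, 5.0, 2.2], [0, 0, 0.0262, 0.9997],
                          [10.4, 5.1, 2.0], [0, 0, 0.0, 1.0]))

        # medians and 95th percentiles over many samples: np.median, np.percentile(x, 95)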

  11. The maximal width of the base of support (BSW): clinical applicability and reliability of a preferred-standing test for measuring the risk of falling.

    PubMed

    Swanenburg, Jaap; Nevzati, Arian; Mittaz Hager, Anne Gabrielle; de Bruin, Eling D; Klipstein, Andreas

    2013-01-01

    The aim of this study was to test the reliability and validity of a preferred-standing test for measuring the risk of falling. The preferred-standing position of elderly fallers and non-fallers and healthy young adults was measured, and the maximal BSW was determined. The absolute and relative reliability and discriminant validity were assessed. The expanded timed get-up-and-go test (ETGUG), one-leg stance test (OLS), tandem stance (TS), and falls efficacy scale international version (FES-I) were used to determine criterion validity. In total, 146 persons (102 females, 44 males; mean age 55±22 years, range 20-94) were recruited. Forty elderly community dwellers (8 fallers) and 26 young adults were tested twice to determine the test-retest reliability. The BSW showed acceptable test-retest reliability (intraclass correlation coefficient, ICC2,1=0.77-0.83) and inter-rater reliability (ICC3,1=0.77-0.95) for all groups. The standard error of measurement (SEM) was between 0.77 and 1.87, and the smallest detectable change (SDC) was between 2.14 cm and 5.19 cm. The Bland-Altman plot revealed no systematic errors. There was a significant difference between elderly fallers and non-fallers (F(1,75)=11.951; p=0.001). Spearman's rho coefficient values showed no correlation between the BSW and the ETGUG (-0.17, p=0.47), OLS (-0.04, p=0.65), TS (-0.11, p=0.21), and FES-I (-0.10; p=0.27). Only the BSW was a significant predictor for falling (odds ratio=0.736, p=0.007). The reliability and validity of the BSW protocol were acceptable overall. Prospective studies are warranted to evaluate the predictive value of the BSW for determining the risk of falling. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
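
    The SEM and SDC reported above follow the standard formulas SEM = SD x sqrt(1 - ICC) and SDC = 1.96 x sqrt(2) x SEM. A small sketch with hypothetical BSW scores (cm) and a plausible ICC:

        import numpy as np

        def sem_sdc(scores, icc):
            # SEM = SD * sqrt(1 - ICC); SDC = 1.96 * sqrt(2) * SEM
            sd = np.std(scores, ddof=1)
            sem = sd * np.sqrt(1.0 - icc)
            return sem, 1.96 * np.sqrt(2.0) * sem

        print(sem_sdc([32.5, 29.0, 35.1, 30.8, 33.4], icc=0.80))  # ~ (1.1, 2.9)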

  12. Navigated total knee arthroplasty: is it error-free?

    PubMed

    Chua, Kerk Hsiang Zackary; Chen, Yongsheng; Lingaraj, Krishna

    2014-03-01

    The aim of this study was to determine whether errors do occur in navigated total knee arthroplasty (TKAs) and to study whether errors in bone resection or implantation contribute to these errors. A series of 20 TKAs was studied using computer navigation. The coronal and sagittal alignments of the femoral and tibial cutting guides, the coronal and sagittal alignments of the final tibial implant and the coronal alignment of the final femoral implant were compared with that of the respective bone resections. To determine the post-implantation mechanical alignment of the limb, the coronal alignment of the femoral and tibial implants was combined. The median deviation between the femoral cutting guide and bone resection was 0° (range -0.5° to +0.5°) in the coronal plane and 1.0° (range -2.0° to +1.0°) in the sagittal plane. The median deviation between the tibial cutting guide and bone resection was 0.5° (range -1.0° to +1.5°) in the coronal plane and 1.0° (range -1.0° to +3.5°) in the sagittal plane. The median deviation between the femoral bone resection and the final implant was 0.25° (range -2.0° to 3.0°) in the coronal plane. The median deviation between the tibial bone resection and the final implant was 0.75° (range -3.0° to +1.5°) in the coronal plane and 1.75° (range -4.0° to +2.0°) in the sagittal plane. The median post-implantation mechanical alignment of the limb was 0.25° (range -3.0° to +2.0°). When navigation is used only to guide the positioning of the cutting jig, errors may arise in the manual, non-navigated steps of the procedure. Our study showed increased cutting errors in the sagittal plane for both the femur and the tibia, and following implantation, the greatest error was seen in the sagittal alignment of the tibial component. Computer navigation should be used not only to guide the positioning of the cutting jig, but also to check the bone resection and implant position during TKA. IV.

  13. Adaptive clinical trial designs for European marketing authorization: a survey of scientific advice letters from the European Medicines Agency.

    PubMed

    Elsäßer, Amelie; Regnstrom, Jan; Vetter, Thorsten; Koenig, Franz; Hemmings, Robert James; Greco, Martina; Papaluca-Amati, Marisa; Posch, Martin

    2014-10-02

    Since the first methodological publications on adaptive study design approaches in the 1990s, the application of these approaches in drug development has raised increasing interest among academia, industry and regulators. The European Medicines Agency (EMA) as well as the Food and Drug Administration (FDA) have published guidance documents addressing the potentials and limitations of adaptive designs in the regulatory context. Since there is limited experience in the implementation and interpretation of adaptive clinical trials, early interaction with regulators is recommended. The EMA offers such interactions through scientific advice and protocol assistance procedures. We performed a text search of scientific advice letters issued between 1 January 2007 and 8 May 2012 that contained relevant key terms. Letters containing questions related to adaptive clinical trials in phases II or III were selected for further analysis. From the selected letters, important characteristics of the proposed design and its context in the drug development program, as well as the responses of the Committee for Human Medicinal Products (CHMP)/Scientific Advice Working Party (SAWP), were extracted and categorized. For 41 more recent procedures (1 January 2009 to 8 May 2012), additional details of the trial design and the CHMP/SAWP responses were assessed. In addition, case studies are presented as examples. Over a range of 5½ years, 59 scientific advices were identified that address adaptive study designs in phase II and phase III clinical trials. Almost all were proposed as confirmatory phase III or phase II/III studies. The most frequently proposed adaptation was sample size reassessment, followed by dropping of treatment arms and population enrichment. While 12 (20%) of the 59 proposals for an adaptive clinical trial were not accepted, the great majority of proposals were accepted (15, 25%) or conditionally accepted (32, 54%). In the more recent 41 procedures, the most frequent concerns raised by CHMP/SAWP were insufficient justifications of the adaptation strategy, type I error rate control and bias. For the majority of proposed adaptive clinical trials, an overall positive opinion was given albeit with critical comments. Type I error rate control, bias and the justification of the design are common issues raised by the CHMP/SAWP.

  14. Geometric Quality Assessment of LIDAR Data Based on Swath Overlap

    NASA Astrophysics Data System (ADS)

    Sampath, A.; Heidemann, H. K.; Stensaas, G. L.

    2016-06-01

    This paper provides guidelines on quantifying the relative horizontal and vertical errors observed between conjugate features in the overlapping regions of lidar data. The quantification of these errors is important because their presence quantifies the geometric quality of the data. A data set can be said to have good geometric quality if measurements of identical features, regardless of their position or orientation, yield identical results. Good geometric quality indicates that the data are produced using sensor models that are working as they are mathematically designed, and data acquisition processes are not introducing any unforeseen distortion in the data. High geometric quality also leads to high geolocation accuracy of the data when the data acquisition process includes coupling the sensor with geopositioning systems. Current specifications (e.g. Heidemann 2014) do not provide adequate means to quantitatively measure these errors, even though they are required to be reported. Current accuracy measurement and reporting practices followed in the industry and as recommended by data specification documents also potentially underestimate the inter-swath errors, including the presence of systematic errors in lidar data. Hence they pose a risk to the user in terms of data acceptance (i.e. a higher potential for Type II error indicating risk of accepting potentially unsuitable data). For example, if the overlap area is too small or if the sampled locations are close to the center of overlap, or if the errors are sampled in flat regions when there are residual pitch errors in the data, the resultant Root Mean Square Differences (RMSD) can still be small. To avoid this, the following are suggested as criteria for defining the inter-swath quality of data: a) Median Discrepancy Angle, b) Mean and RMSD of Horizontal Errors using DQM measured on sloping surfaces, c) RMSD for sampled locations from flat areas (defined as areas with less than 5 degrees of slope). It is suggested that 4000-5000 points, depending on the surface roughness, be uniformly sampled in the overlapping regions of the point cloud to measure the discrepancy between swaths. Care must be taken to sample only areas of single-return points. Point-to-plane distance based data quality measures are determined for each sample point. These measurements are used to determine the above-mentioned parameters. This paper details the measurements and analysis of measurements required to determine these metrics, i.e. Discrepancy Angle, Mean and RMSD of errors in flat regions and horizontal errors obtained using measurements extracted from sloping regions (slope greater than 10 degrees). The research is a result of an ad-hoc joint working group of the US Geological Survey and the American Society for Photogrammetry and Remote Sensing (ASPRS) Airborne Lidar Committee.
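
    A minimal sketch of the point-to-plane discrepancy measure: fit a local plane to neighbouring points from one swath (least squares via SVD) and measure sample points from the overlapping swath against it. The synthetic data below are illustrative only.

        import numpy as np

        def fit_plane(pts):
            # least-squares plane through a local neighbourhood
            c = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - c)
            return c, vt[-1]                    # centroid and unit normal

        def point_to_plane(points, plane_point, plane_normal):
            # signed distances from sample points to the fitted plane
            n = plane_normal / np.linalg.norm(plane_normal)
            return (np.asarray(points) - plane_point) @ n

        rng = np.random.default_rng(0)
        swath_a = rng.normal(size=(50, 3)) * [5, 5, 0.02]   # near-planar patch
        swath_b = swath_a + [0, 0, 0.05]                    # 5 cm vertical offset
        d = point_to_plane(swath_b, *fit_plane(swath_a))
        print(np.sqrt((d ** 2).mean()))                     # RMSD, ~0.05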

  15. Reliability of fish size estimates obtained from multibeam imaging sonar

    USGS Publications Warehouse

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄  =  −8.34, SE  =  2.39) and white perch (x̄  = 14.48, SE  =  3.99) but not striped bass (x̄  =  3.71, SE  =  2.58) or channel catfish (x̄  = 3.97, SE  =  5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of bias are apparent when files are processed manually and can be filtered out when producing automated software estimates. Multibeam sonar estimates of fish size should be useful for research and management if these potential sources of bias and imprecision are addressed.

  16. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors

    PubMed Central

    Kwon, Heon-Ju; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-01-01

    Background/Aims Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Methods Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP–VR|/W∙100. Results Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. Conclusions There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane. PMID:28759989

  17. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1°, 3°, and 5° can respectively introduce up to 2.6%, 7.7%, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
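
    For the direct component, the worst-case tilt error (sensor tilted toward the sun) follows from simple geometry: the measured signal scales with cos(sza - tilt) instead of cos(sza). A sketch under that direct-beam-only assumption; it slightly overstates the totals above because the diffuse component is comparatively insensitive to small tilts.

        import numpy as np

        def direct_tilt_error(sza_deg, tilt_deg):
            # relative error when the sensor normal leans toward the sun
            sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
            return np.cos(sza - tilt) / np.cos(sza) - 1.0

        for t in (1, 3, 5):
            print(t, 100 * direct_tilt_error(60, t))  # ~3%, ~9%, ~15%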

  18. Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca

    NASA Astrophysics Data System (ADS)

    Matteo, N. A.; Morton, Y. T.

    2010-12-01

    The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.

  19. Adverse Drug Events and Medication Errors in African Hospitals: A Systematic Review.

    PubMed

    Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E

    2018-03-01

    Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge/training) and environmental factors, such as workplace distraction and high workload. Medication errors in the African healthcare setting are relatively common, and the impact of adverse drug events is substantial but many are preventable. This review supports the design and implementation of preventative strategies targeting the most likely contributing factors.

  20. SURBAL: computerized metes and bounds surveying

    Treesearch

    Roger N. Baughman; James H. Patric

    1970-01-01

    A computer program has been developed at West Virginia University for use in metes and bounds surveying. Stations, slope distances, slope angles, and bearings are primary information needed for this program. Other information needed may include magnetic deviation, acceptable closure error, desired map scale, and title designation. SURBAL prints out latitudes and...
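
    The core metes-and-bounds computation that a program of this kind performs can be sketched as follows: reduce slope distances to horizontal, accumulate latitudes and departures from the bearings, and report the linear error of closure for comparison with the acceptable closure error. This is a generic reconstruction, not SURBAL itself, and the traverse values are invented.

        import numpy as np

        def closure_error(slope_dist, slope_ang_deg, bearing_deg):
            s = np.asarray(slope_dist, float)
            h = s * np.cos(np.radians(slope_ang_deg))   # horizontal distances
            b = np.radians(bearing_deg)                 # bearings from north
            lat, dep = (h * np.cos(b)).sum(), (h * np.sin(b)).sum()
            err = np.hypot(lat, dep)                    # linear error of closure
            return err, err / h.sum()                   # and its ratio to perimeter

        # hypothetical four-leg traverse; ~0.18 m misclosure over ~400 m
        print(closure_error([100.2, 99.9, 100.1, 100.0],
                            [2.0, 1.0, 3.0, 2.0],
                            [0.0, 90.0, 180.0, 270.0]))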

  1. Resolving Ethical Disputes Through Arbitration: An Alternative to Code Penalties.

    ERIC Educational Resources Information Center

    Barwis, Gail Lund

    Arbitration cases involving journalism ethics can be grouped into three major categories: outside activities that lead to conflicts of interest, acceptance of gifts that compromise journalistic objectivity, and writing false or misleading information or failing to check facts or correct errors. In most instances, failure to adhere to ethical…

  2. Junior High Student Responsibilities for Basic Skills.

    ERIC Educational Resources Information Center

    Parker, Charles C.

    This paper advances the thesis that students should be trained to recognize acceptable and unacceptable performances in basic skill areas and should assume responsibility for attaining proficiency in these areas. Among the topics discussed are the value of having junior high school students check their own assignments, discover their errors, and…

  3. 49 CFR 236.1023 - Errors and malfunctions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., the railroad shall maintain a database of all safety-relevant hazards as set forth in the PTCSP and... next business day; (2) Be transmitted in a manner and form acceptable to the Associate Administrator... information shall be forwarded to the Associate Administrator as soon as practicable in supplemental reports...

  4. 49 CFR 236.1023 - Errors and malfunctions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., the railroad shall maintain a database of all safety-relevant hazards as set forth in the PTCSP and... next business day; (2) Be transmitted in a manner and form acceptable to the Associate Administrator... information shall be forwarded to the Associate Administrator as soon as practicable in supplemental reports...

  5. 49 CFR 236.1023 - Errors and malfunctions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., the railroad shall maintain a database of all safety-relevant hazards as set forth in the PTCSP and... next business day; (2) Be transmitted in a manner and form acceptable to the Associate Administrator... information shall be forwarded to the Associate Administrator as soon as practicable in supplemental reports...

  6. 49 CFR 236.1023 - Errors and malfunctions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., the railroad shall maintain a database of all safety-relevant hazards as set forth in the PTCSP and... next business day; (2) Be transmitted in a manner and form acceptable to the Associate Administrator... information shall be forwarded to the Associate Administrator as soon as practicable in supplemental reports...

  7. 49 CFR 236.1023 - Errors and malfunctions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., the railroad shall maintain a database of all safety-relevant hazards as set forth in the PTCSP and... next business day; (2) Be transmitted in a manner and form acceptable to the Associate Administrator... information shall be forwarded to the Associate Administrator as soon as practicable in supplemental reports...

  8. A Productivity Analysis of Nonprocedural Languages.

    DTIC Science & Technology

    1982-12-01

    abstracts. The tools they work with are up-to-date, well documented, and from acceptable/reliable sources. ... inversion is possible at any level. Additionally, any field can be indexed at any level. b. Online operation with interactive error-correc...

  9. The Prevalence and Special Educational Requirements of Dyscompetent Physicians

    ERIC Educational Resources Information Center

    Williams, Betsy W.

    2006-01-01

    Underperformance among physicians is not well studied or defined; yet, the identification and remediation of physicians who are not performing up to acceptable standards is central to quality care and patient safety. Methods for estimating the prevalence of dyscompetence include evaluating available data on medical errors, malpractice claims,…

  10. An assessment of the intra- and inter-reliability of the lumbar paraspinal muscle parameters using CT scan and magnetic resonance imaging.

    PubMed

    Hu, Zhi-Jun; He, Jian; Zhao, Feng-Dong; Fang, Xiang-Qian; Zhou, Li-Na; Fan, Shun-Wu

    2011-06-01

    A reliability study was conducted. To estimate the intra- and intermeasurement errors in the measurements of functional cross-sectional area (FCSA), density, and T2 signal intensity of paraspinal muscles using computed tomography (CT) scan and magnetic resonance imaging (MRI). CT scan and MRI have been used widely to measure the cross-sectional area and degeneration of the back muscles in spine and muscle research, but there is still no systematic study analyzing the reliability of these measurements. This study measured the FCSA and fatty infiltration (density on CT scan and T2 signal intensity on MRI) of the paraspinal muscles at L3-L4, L4-L5, and L5-S1 in 29 patients with chronic low back pain. Two experienced musculoskeletal radiologists and one senior spine surgeon traced the region of interest twice within 3 weeks for measurement of the intra- and interobserver reliability. The intraclass correlation coefficients (ICCs) of the intra-reliability ranged from fair to excellent for FCSA, and good to excellent for fatty infiltration. The ICCs of the inter-reliability ranged from fair to excellent for FCSA, and good to excellent for fatty infiltration. There were no significant differences between CT scan and MRI in reliability results, except in the relative standard error of fatty infiltration measurement. The ICCs of the FCSA measurement between CT scan and MRI ranged from poor to good. The reliabilities of the CT scan and MRI for measuring the FCSA and fatty infiltration of the atrophied lumbar paraspinal muscles were acceptable. Using a uniform single-image method was reliable for evaluating a single paraspinal muscle. The authors recommend MRI rather than CT scan for measuring the FCSA and fatty infiltration of the paraspinal muscles.

  11. Improved accuracy of methemoglobin detection by pulse CO-oximetry during hypoxia.

    PubMed

    Feiner, John R; Bickler, Philip E

    2010-11-01

    Methemoglobin in the blood cannot be detected by conventional pulse oximetry and may bias the oximeter's estimate (Spo(2)) of the true arterial functional oxygen saturation (Sao(2)). A recently introduced "pulse CO-oximeter" (Masimo Rainbow SET® Radical-7) that measures SpMet, a noninvasive measurement of the percentage of methemoglobin in arterial blood (%MetHb), was shown to read spuriously high values during hypoxia. In this study we sought to determine whether the manufacturer's modifications have improved the device's ability to detect and accurately measure methemoglobin and deoxyhemoglobin simultaneously. Twelve healthy adult volunteer subjects were fitted with sensors on the middle finger of each hand, and a radial arterial catheter was placed for blood sampling. Intravenous administration of ∼300 mg of sodium nitrite elevated subjects' methemoglobin levels to a 7% to 11% target level, and hypoxia was induced to different levels of Sao(2) (70% to 100%) by varying fractional inspired oxygen. Pulse CO-oximeter readings were compared with arterial blood values measured with a Radiometer ABL800 FLEX multi-wavelength oximeter. Pulse CO-oximeter methemoglobin reading performance was analyzed by the bias (SpMet − %MetHb), and by observing the incidence of meaningful reading errors and predictive value at the various hypoxia levels. Spo(2) bias (Spo(2) − Sao(2)), precision, and root-mean-square error were evaluated during conditions of elevated methemoglobin. Observations spanned 74% to 100% Sao(2) and 0.4% to 14.4% methemoglobin with 307 blood draws and 602 values from the 2 oximeters. Masimo methemoglobin reading bias and precision over the full Sao(2) span was 0.16% and 0.83%, respectively, and was similar across the span. Masimo Spo(2) readings were biased -1.93% across the 70% to 100% Sao(2) range. The Rainbow's methemoglobin readings are acceptably accurate over an oxygen saturation range of 74%-100% and a methemoglobin range of 0%-14%.

  12. The Impact of Trajectory Prediction Uncertainty on Air Traffic Controller Performance and Acceptability

    NASA Technical Reports Server (NTRS)

    Mercer, Joey S.; Bienert, Nancy; Gomez, Ashley; Hunt, Sarah; Kraut, Joshua; Martin, Lynne; Morey, Susan; Green, Steven M.; Prevot, Thomas; Wu, Minghong G.

    2013-01-01

    A Human-In-The-Loop air traffic control simulation investigated the impact of uncertainties in trajectory predictions on NextGen Trajectory-Based Operations concepts, seeking to understand when the automation would become unacceptable to controllers or when performance targets could no longer be met. Retired air traffic controllers staffed two en route transition sectors, delivering arrival traffic to the northwest corner-post of Atlanta approach control under time-based metering operations. Using trajectory-based decision-support tools, the participants worked the traffic under varying levels of wind forecast error and aircraft performance model error, impacting the ground automation's ability to make accurate predictions. Results suggest that the controllers were able to maintain high levels of performance, despite even the highest levels of trajectory prediction errors.

  13. Quality by design for herbal drugs: a feedforward control strategy and an approach to define the acceptable ranges of critical quality attributes.

    PubMed

    Yan, Binjun; Li, Yao; Guo, Zhengtai; Qu, Haibin

    2014-01-01

    The concept of quality by design (QbD) has been widely accepted and applied in the pharmaceutical manufacturing industry. There are still two key issues to be addressed in the implementation of QbD for herbal drugs. The first issue is the quality variation of herbal raw materials and the second issue is the difficulty in defining the acceptable ranges of critical quality attributes (CQAs). To propose a feedforward control strategy and a method for defining the acceptable ranges of CQAs for the two issues. In the case study of the ethanol precipitation process of Danshen (Radix Salvia miltiorrhiza) injection, regression models linking input material attributes and process parameters to CQAs were built first and an optimisation model for calculating the best process parameters according to the input materials was established. Then, the feasible material space was defined and the acceptable ranges of CQAs for the previous process were determined. In the case study, satisfactory regression models were built with cross-validated regression coefficients (Q(2)) all above 91%. The feedforward control strategy was applied successfully to compensate the quality variation of the input materials, which was able to control the CQAs in the 90-110% ranges of the desired values. In addition, the feasible material space for the ethanol precipitation process was built successfully, which showed the acceptable ranges of the CQAs for the concentration process. The proposed methodology can help to promote the implementation of QbD for herbal drugs. Copyright © 2013 John Wiley & Sons, Ltd.

  14. SU-F-T-24: Impact of Source Position and Dose Distribution Due to Curvature of HDR Transfer Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, A; Yue, N

    2016-06-15

    Purpose: Brachytherapy is a highly targeted form of radiotherapy. While this may lead to ideal dose distributions on the treatment planning system, a small error in source location can lead to a change in the dose distribution. The purpose of this study is to quantify the source position error due to curvature of the transfer tubes and the impact this may have on the dose distribution. Methods: Since the source travels along the midline of the tube, an estimate of the positioning error for various angles of curvature was determined using geometric properties of the tube. Based on the range of values, a specific shift was chosen to alter the treatment plans for a number of cervical cancer patients who had undergone HDR brachytherapy boost using tandem and ovoids. Impact on dose to target and organs at risk was determined and checked against guidelines outlined by the radiation oncologist. Results: The estimate of the positioning error was 2 mm short of the expected position (a curved tube can only cause the source to fall short of the position it would reach in a straight tube). The quantitative impact on the dose distribution is still being analyzed. Conclusion: The accepted positioning tolerance for the source position of an HDR brachytherapy unit is plus or minus 1 mm. If there is an additional 2 mm discrepancy due to tube curvature, this can result in a source being 1 mm to 3 mm short of the expected location. While we always attempt to keep the tubes straight, in some cases, such as with tandem and ovoids, the tandem connector does not extend as far out from the patient, so the ovoid tubes always contain some degree of curvature. The dose impact of this may be significant.
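
    One plausible reading of the geometric estimate in the Methods is an arc-versus-chord calculation: a source cable pushed a fixed length along a circularly bent tube segment ends up short of the straight-tube position by (arc length - chord length). The segment length and bend angle below are hypothetical, chosen only to show that a modest bend reproduces a shortfall of about 2 mm.

        import numpy as np

        def curvature_shortfall(arc_len_mm, bend_angle_deg):
            # shortfall = arc length - chord length of a circular bend
            theta = np.radians(bend_angle_deg)
            if theta == 0.0:
                return 0.0
            radius = arc_len_mm / theta
            return arc_len_mm - 2.0 * radius * np.sin(theta / 2.0)

        print(curvature_shortfall(200.0, 30.0))   # ~2.3 mm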

  15. Do surveys with paper and electronic devices differ in quality and cost? Experience from the Rufiji Health and demographic surveillance system in Tanzania.

    PubMed

    Mukasa, Oscar; Mushi, Hildegalda P; Maire, Nicolas; Ross, Amanda; de Savigny, Don

    2017-01-01

    Data entry at the point of collection using mobile electronic devices may make data-handling processes more efficient and cost-effective, but there is little literature to document and quantify gains, especially for longitudinal surveillance systems. To examine the potential of mobile electronic devices compared with paper-based tools in health data collection. Using data from 961 households from the Rufiji Household and Demographic Survey in Tanzania, the quality and costs of data collected on paper forms and electronic devices were compared. We also documented, using qualitative approaches, field workers, whom we called 'enumerators', and households' members on the use of both methods. Existing administrative records were combined with logistics expenditure measured directly from comparison households to approximate annual costs per 1,000 households surveyed. Errors were detected in 17% (166) of households for the paper records and 2% (15) for the electronic records (p < 0.001). There were differences in the types of errors (p = 0.03). Of the errors occurring, a higher proportion were due to accuracy in paper surveys (79%, 95% CI: 72%, 86%) compared with electronic surveys (58%, 95% CI: 29%, 87%). Errors in electronic surveys were more likely to be related to completeness (32%, 95% CI 12%, 56%) than in paper surveys (11%, 95% CI: 7%, 17%).The median duration of the interviews ('enumeration'), per household was 9.4 minutes (90% central range 6.4, 12.2) for paper and 8.3 (6.1, 12.0) for electronic surveys (p = 0.001). Surveys using electronic tools, compared with paper-based tools, were less costly by 28% for recurrent and 19% for total costs. Although there were technical problems with electronic devices, there was good acceptance of both methods by enumerators and members of the community. Our findings support the use of mobile electronic devices for large-scale longitudinal surveys in resource-limited settings.

  16. Convergence study of global meshing on enamel-cement-bracket finite element model

    NASA Astrophysics Data System (ADS)

    Samshuri, S. F.; Daud, R.; Rojan, M. A.; Basaruddin, K. S.; Abdullah, A. B.; Ariffin, A. K.

    2017-09-01

    This paper presents a meshing convergence analysis of a finite element (FE) model used to simulate enamel-cement-bracket fracture. Three different materials are involved in the interface fracture studied here. The complex behaviour of interface fracture under stress concentration is the reason a well-constructed meshing strategy is needed. In FE analysis, meshing size is a critical factor that influences the accuracy and computational time of the analysis. The convergence study uses a meshing scheme with critical areas (CA) and non-critical areas (NCA) to ensure optimum meshing sizes are acquired for this FE model. For NCA meshing, the areas of interest are the back of the enamel, the bracket ligature groove and the bracket wing. For CA meshing, the areas of interest are the enamel close to the cement layer, the cement layer and the bracket base. The constant NCA meshing sizes tested are 1 and 0.4; the constant CA meshing sizes tested are 0.4 and 0.1. Manipulated variables were selected randomly, subject to the rule that the NCA size must be larger than the CA size. This study employed first principal stresses because of the brittle failure nature of the materials used. The best meshing sizes were selected according to a convergence error analysis. Results show that constant CA meshing is more stable than constant NCA meshing. A constant CA mesh of 0.05 was then tested to assess the accuracy of smaller meshing; however, the results were unpromising as the errors increased. Thus, a constant CA mesh of 0.1 with NCA meshes of 0.15 to 0.3 is the most stable combination, as the errors in this region are lowest. A convergence test was conducted on three selected coarse, medium and fine meshes over the NCA range of 0.15 to 0.3 with the CA mesh held constant at 0.1. The result shows that, at the coarse mesh of 0.3, the error is 0.0003%, well within the 3% acceptable error. Hence, the global meshing converges at a CA meshing size of 0.1 and an NCA size of 0.15 for this model.
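
    The convergence check used above (relative change between successive refinements against the 3% acceptance threshold) can be stated compactly; a generic sketch with hypothetical first principal stress values:

        def converged(results, tol=0.03):
            # relative change of the result between the last two refinements
            f_coarse, f_fine = results[-2], results[-1]
            rel_err = abs(f_fine - f_coarse) / abs(f_fine)
            return rel_err, rel_err < tol

        print(converged([41.8, 40.9, 40.92]))   # (~0.0005, True)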

  17. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.

    PubMed

    Liu, Zhiyuan; Wang, Changhui

    2015-10-23

    In this paper, a new method for compensating mass air flow (MAF) sensor error and updating the error map (or lookup table) online, accounting for installation and aging effects in a diesel engine, is developed. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method.
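
    The key representation, a piecewise-bilinear 2D map written as a dot product between a membership (regression) vector and a parameter vector, can be sketched as below. Because the map value is linear in the parameters, an adaptive observer can update them online. The grids and inputs are hypothetical; this is not the authors' implementation.

        import numpy as np

        def regressor(x, y, xg, yg):
            # membership vector phi(x, y): map value = phi . theta
            i = int(np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2))
            j = int(np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2))
            tx = (x - xg[i]) / (xg[i + 1] - xg[i])
            ty = (y - yg[j]) / (yg[j + 1] - yg[j])
            phi = np.zeros((len(xg), len(yg)))
            phi[i, j] = (1 - tx) * (1 - ty)
            phi[i + 1, j] = tx * (1 - ty)
            phi[i, j + 1] = (1 - tx) * ty
            phi[i + 1, j + 1] = tx * ty
            return phi.ravel()

        xg = np.array([0.0, 10.0, 20.0])          # fuel mass grid (hypothetical)
        yg = np.array([800.0, 1600.0, 2400.0])    # engine speed grid (hypothetical)
        theta = np.zeros(xg.size * yg.size)       # map parameters, adapted online
        value = regressor(12.0, 1500.0, xg, yg) @ theta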

  18. The error in total error reduction.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
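    The contrast between the two assumptions can be made concrete with single-trial update rules; the sketch below uses a Rescorla-Wagner-style rule for TER and a per-cue rule for LER (the learning rate and values are hypothetical, and this is not the authors' simulation code):

```python
import numpy as np

def ter_update(w, x, lam, alpha=0.1):
    # Total error reduction: all cues share the compound prediction error.
    return w + alpha * (lam - np.dot(w, x)) * x

def ler_update(w, x, lam, alpha=0.1):
    # Local error reduction: each cue learns from its own prediction error.
    return w + alpha * (lam - w) * x

w = np.array([0.3, 0.4])      # associative strengths of two cues
x = np.array([1.0, 1.0])      # both cues present on this trial
print(ter_update(w, x, lam=1.0))   # shared error term: smaller increments
print(ler_update(w, x, lam=1.0))   # per-cue error terms: larger increments
```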

  19. Evaluating mixed samples as a source of error in non-invasive genetic studies using microsatellites

    USGS Publications Warehouse

    Roon, David A.; Thomas, M.E.; Kendall, K.C.; Waits, L.P.

    2005-01-01

    The use of noninvasive genetic sampling (NGS) for surveying wild populations is increasing rapidly. Currently, only a limited number of studies have evaluated potential biases associated with NGS. This paper evaluates the potential errors associated with analysing mixed samples drawn from multiple animals. Most NGS studies assume that mixed samples will be identified and removed during the genotyping process. We evaluated this assumption by creating 128 mixed samples of extracted DNA from brown bear (Ursus arctos) hair samples. These mixed samples were genotyped and screened for errors at six microsatellite loci according to protocols consistent with those used in other NGS studies. Five mixed samples produced acceptable genotypes after the first screening. However, all mixed samples produced multiple alleles at one or more loci, amplified as only one of the source samples, or yielded inconsistent electropherograms by the final stage of the error-checking process. These processes could potentially reduce the number of individuals observed in NGS studies, but errors should be conservative within demographic estimates. Researchers should be aware of the potential for mixed samples and carefully design gel analysis criteria and error checking protocols to detect mixed samples.

  20. An alternative sensor-based method for glucose monitoring in children and young people with diabetes.

    PubMed

    Edge, Julie; Acerini, Carlo; Campbell, Fiona; Hamilton-Shield, Julian; Moudiotis, Chris; Rahman, Shakeel; Randell, Tabitha; Smith, Anne; Trevelyan, Nicola

    2017-06-01

    To determine accuracy, safety and acceptability of the FreeStyle Libre Flash Glucose Monitoring System in the paediatric population. Eighty-nine study participants, aged 4-17 years, with type 1 diabetes were enrolled across 9 diabetes centres in the UK. A factory calibrated sensor was inserted on the back of the upper arm and used for up to 14 days. Sensor glucose measurements were compared with capillary blood glucose (BG) measurements. Sensor results were masked to participants. Clinical accuracy of sensor results versus BG results was demonstrated, with 83.8% of results in zone A and 99.4% of results in zones A and B of the consensus error grid. Overall mean absolute relative difference (MARD) was 13.9%. Sensor accuracy was unaffected by patient factors such as age, body weight, sex, method of insulin administration or time of use (day vs night). Participants were in the target glucose range (3.9-10.0 mmol/L) ∼50% of the time (mean 12.1 hours/day), with an average of 2.2 hours/day and 9.5 hours/day in hypoglycaemia and hyperglycaemia, respectively. Sensor application, wear/use of the device and comparison to self-monitoring of blood glucose were rated favourably by most participants/caregivers (84.3-100%). Five device related adverse events were reported across a range of participant ages. Accuracy, safety and user acceptability of the FreeStyle Libre System were demonstrated for the paediatric population. Accuracy of the system was unaffected by subject characteristics, making it suitable for a broad range of children and young people with diabetes. NCT02388815. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
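    The headline accuracy statistic, mean absolute relative difference (MARD), is straightforward to compute from paired sensor and reference readings; the sketch below uses made-up values for illustration, not study data:

```python
import numpy as np

def mard(sensor, reference):
    # Mean absolute relative difference, expressed as a percentage.
    sensor, reference = np.asarray(sensor), np.asarray(reference)
    return np.mean(np.abs(sensor - reference) / reference) * 100.0

sensor_mmol = [5.2, 8.9, 3.8, 11.4]   # hypothetical sensor glucose (mmol/L)
bg_mmol = [5.6, 8.1, 4.2, 10.5]       # hypothetical capillary BG (mmol/L)
print(f"MARD = {mard(sensor_mmol, bg_mmol):.1f}%")
```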

  1. [Analysis of the results of the SEIMC External Quality Control Program for HIV-1 and HCV viral loads. Year 2008].

    PubMed

    Mira, Nieves Orta; Serrano, María del Remedio Guna; Martínez, José Carlos Latorre; Ovies, María Rosario; Pérez, José L; Cardona, Concepción Gimeno

    2010-01-01

    Human immunodeficiency virus type 1 (HIV-1) and hepatitis C virus (HCV) viral load determinations are among the most relevant markers for the follow-up of patients infected with these viruses. External quality control tools are crucial to ensure the accuracy of results obtained by microbiology laboratories. This article summarizes the results obtained from the 2008 SEIMC External Quality Control Program for HIV-1 and HCV viral loads. In the HIV-1 program, a total of five standards were sent. One standard consisted of seronegative human plasma, while the remaining four contained plasma from 3 different viremic patients, in the range of 2-5 log(10) copies/mL; two of these standards were identical, in order to assess repeatability. The specificity was complete for all commercial methods, and no false positive results were reported by the participants. A significant proportion of the laboratories (24% on average) obtained values out of the accepted range (mean +/- 0.2 log(10) copies/mL), depending on the standard and on the method used for quantification. Repeatability was very good, with up to 95% of laboratories reporting results within the limits (D < 0.5 log(10) copies/mL). The HCV program consisted of two standards with different viral load contents. Most of the participants (88.7%) obtained results within the accepted range (mean +/- 1.96 SD log(10) IU/mL). Post-analytical errors due to mistranscription of the results were detected for HCV, but not for the HIV-1 program. Data from this analysis reinforce the utility of proficiency programmes to ensure the quality of the results obtained by a particular laboratory, as well as the importance of the post-analytical phase in overall quality. Due to the remarkable interlaboratory variability, it is advisable to use the same method and the same laboratory for patient follow-up. 2010 Elsevier España S.L. All rights reserved.

  2. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.7, 8.1, and 13.5% error, respectively, into the measured irradiance, and similar errors into the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
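    A direct-beam-only geometric sketch reproduces the order of magnitude of these numbers (it slightly overstates them, since the diffuse component dilutes the tilt effect); the parameters below are illustrative:

```python
import numpy as np

# Worst-case direct-beam tilt error: a cosine-response sensor tilted toward
# the sun by "tilt" measures cos(sza - tilt) instead of cos(sza).
def direct_tilt_error_pct(sza_deg, tilt_deg):
    sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
    return (np.cos(sza - tilt) / np.cos(sza) - 1.0) * 100.0

for tilt in (1, 3, 5):
    print(f"tilt {tilt} deg at SZA 60 deg: {direct_tilt_error_pct(60, tilt):+.1f}%")
```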

  3. Geometrical correction factors for heat flux meters

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Papell, S. S.

    1974-01-01

    General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. The local averaging error e(x) is defined as the difference between the measured value of the heat flux and the local value which occurs at the center of the gage. In terms of e(x), a correction procedure is presented which allows a better estimate for the true value of the local heat flux. For many practical problems, it is possible to use relatively large gages to obtain acceptable heat flux measurements.
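    The definition of e(x) lends itself to a short numerical sketch (the flux profile and gage width below are hypothetical): the gage reading is the average of the flux over the strip face, and e(x) is that average minus the local value at the gage center:

```python
import numpy as np

def gage_error(q, x, a, npts=201):
    # e(x) = gage-averaged flux over [x - a, x + a] minus local flux q(x).
    s = np.linspace(x - a, x + a, npts)
    return q(s).mean() - q(x)

q = lambda s: np.exp(-s**2)          # example 1-D heat flux distribution
print(gage_error(q, x=0.5, a=0.2))   # sign depends on the local curvature of q
```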

  4. SABRINA - an interactive geometry modeler for MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, J.T.; Murphy, J.

    One of the most difficult tasks when analyzing a complex three-dimensional system with Monte Carlo is geometry model development. SABRINA attempts to make the modeling process more user-friendly and less of an obstacle. It accepts both combinatorial solid bodies and MCNP surfaces and produces MCNP cells. The model development process in SABRINA is highly interactive and gives the user immediate feedback on errors. Users can view their geometry from arbitrary perspectives while the model is under development and interactively find and correct modeling errors. An example of a SABRINA display is shown. It represents a complex three-dimensional shape.

  5. Flight calibration of compensated and uncompensated pitot-static airspeed probes and application of the probes to supersonic cruise vehicles

    NASA Technical Reports Server (NTRS)

    Webb, L. D.; Washington, H. P.

    1972-01-01

    Static pressure position error calibrations for a compensated and an uncompensated XB-70 nose boom pitot static probe were obtained in flight. The methods (Pacer, acceleration-deceleration, and total temperature) used to obtain the position errors over a Mach number range from 0.5 to 3.0 and an altitude range from 25,000 feet to 70,000 feet are discussed. The error calibrations are compared with the position error determined from wind tunnel tests, theoretical analysis, and a standard NACA pitot static probe. Factors which influence position errors, such as angle of attack, Reynolds number, probe tip geometry, static orifice location, and probe shape, are discussed. Also included are examples showing how the uncertainties caused by position errors can affect the inlet controls and vertical altitude separation of a supersonic transport.

  6. Comparison of Agar Dilution, Disk Diffusion, MicroScan, and Vitek Antimicrobial Susceptibility Testing Methods to Broth Microdilution for Detection of Fluoroquinolone-Resistant Isolates of the Family Enterobacteriaceae

    PubMed Central

    Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.

    1999-01-01

    Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809

  7. Limitations of Surface Mapping Technology in Accurately Identifying Critical Errors in Dental Students' Crown Preparations.

    PubMed

    Furness, Alan R; Callan, Richard S; Mackert, J Rodway; Mollica, Anthony G

    2018-01-01

    The aim of this study was to evaluate the effectiveness of the Planmeca Compare software in identifying and quantifying a common critical error in dental students' crown preparations. In 2014-17, a study was conducted at one U.S. dental school that compared an ideal crown preparation, made by a faculty member on a dentoform, with modified preparations. Two types of preparation errors were created by the addition of flowable composite to the occlusal surface of identical dies of the preparations to represent underreduction of the distolingual cusp. The error was divided into two classes: the minor class allowed for 1 mm of occlusal clearance, and the major class allowed for no occlusal clearance. The preparations were then digitally evaluated against the ideal preparation using Planmeca Compare. Percent comparison values were obtained from each trial and averaged together. False positives and false negatives were also identified and used to determine the accuracy of the evaluation. Critical errors that did not involve a substantial change in the surface area of the preparation were inconsistently identified. Within the limitations of this study, the authors concluded that the Compare software was unable to consistently identify common critical errors within an acceptable degree of error.

  8. Evaluation of Trajectory Errors in an Automated Terminal-Area Environment

    NASA Technical Reports Server (NTRS)

    Oseguera-Lohr, Rosa M.; Williams, David H.

    2003-01-01

    A piloted simulation experiment was conducted to document the trajectory errors associated with use of an airplane's Flight Management System (FMS) in conjunction with a ground-based ATC automation system, Center-TRACON Automation System (CTAS) in the terminal area. Three different arrival procedures were compared: current-day (vectors from ATC), modified (current-day with minor updates), and data link with FMS lateral navigation. Six active airline pilots flew simulated arrivals in a fixed-base simulator. The FMS-datalink procedure resulted in the smallest time and path distance errors, indicating that use of this procedure could reduce the CTAS arrival-time prediction error by about half over the current-day procedure. Significant sources of error contributing to the arrival-time error were crosstrack errors and early speed reduction in the last 2-4 miles before the final approach fix. Pilot comments were all very positive, indicating the FMS-datalink procedure was easy to understand and use, and the increased head-down time and workload did not detract from the benefit. Issues that need to be resolved before this method of operation would be ready for commercial use include development of procedures acceptable to controllers, better speed conformance monitoring, and FMS database procedures to support the approach transitions.

  9. Automated estimation of hip prosthesis migration: a feasibility study

    NASA Astrophysics Data System (ADS)

    Vandemeulebroucke, Jef; Deklerck, Rudi; Temmermans, Frederik; Van Gompel, Gert; Buls, Nico; Scheerlinck, Thierry; de Mey, Johan

    2013-09-01

    A common complication associated with hip arthroplasty is prosthesis migration, and for most cemented components a migration greater than 0.85 mm within the first six months after surgery is an indicator of prosthesis failure. Currently, prosthesis migration is evaluated using X-ray images, which can only reliably estimate migrations larger than 5 mm. We propose an automated method for estimating prosthesis migration more accurately, using CT images and image registration techniques. We report on the results obtained using an experimental set-up, in which a metal prosthesis can be translated and rotated with respect to a cadaver femur, over distances and angles applied using a combination of positioning stages. Images are first preprocessed to reduce artefacts. Bone and prosthesis are extracted using consecutive thresholding and morphological operations. Two registrations are performed, one aligning the bones and the other aligning the prostheses. The migration is estimated as the difference between the transformations found. We use a robust, multi-resolution, stochastic optimization approach, and compare mean squared intensity differences (MS) to mutual information (MI). 30 high-resolution helical CT scans were acquired for prosthesis translations ranging from 0.05 mm to 4 mm, and rotations ranging from 0.3° to 3°. For the translations, the mean 3D registration error was found to be 0.22 mm for MS, and 0.15 mm for MI. For the rotations, the standard deviation of the estimation error was 0.18° for MS, and 0.08° for MI. The results show that the proposed approach is feasible and that clinically acceptable accuracies can be obtained. Clinical validation studies on patient images will now be undertaken.

  10. Model Based Verification of Cyber Range Event Environments

    DTIC Science & Technology

    2015-12-10

    Model Based Verification of Cyber Range Event Environments Suresh K. Damodaran MIT Lincoln Laboratory 244 Wood St., Lexington, MA, USA...apply model based verification to cyber range event environment configurations, allowing for the early detection of errors in event environment...Environment Representation (CCER) ontology. We also provide an overview of a methodology to specify verification rules and the corresponding error

  11. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar.

    PubMed

    Li, Zhan; Jupp, David L B; Strahler, Alan H; Schaaf, Crystal B; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S; Chakrabarti, Supriya; Cook, Timothy A; Paynter, Ian; Saenz, Edward J; Schaefer, Michael

    2016-03-02

    Radiometric calibration of the Dual-Wavelength Echidna(®) Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρ(app)), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρ(app) are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρ(app) error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρ(app) from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars.
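    One plausible reading of the stated model form is sketched below; all coefficients are hypothetical placeholders, not the paper's fitted values:

```python
import numpy as np

def range_response(r, K=1.0, B=1.5, M=3.0, nu=1.0, k=0.05):
    # Generalized logistic term: models near-range telescopic defocusing,
    # rising from ~0 toward K as range r increases past M.
    g = K / (1.0 + np.exp(-B * (r - M))) ** (1.0 / nu)
    # Negative exponential term: models the fall-off of return intensity.
    return g * np.exp(-k * r)

for r in (1.0, 3.0, 10.0, 50.0):
    print(f"r = {r:5.1f} m -> relative response {range_response(r):.3f}")
```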

  13. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  14. Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.

    PubMed

    Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B

    2017-01-01

    In a medical environment such as an intensive care unit, there are many possible causes of errors, and one important cause is the effect of human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic human intellectual tasks, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the complete medical CPHSystem design models. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by human intellectual tasks.

  15. A new accuracy measure based on bounded relative error for time series forecasting

    PubMed Central

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
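    The proposed measure can be sketched as follows (a sketch based on the description above, not the authors' code): each forecast error is bounded by comparison with a benchmark method's error on the same point, and the mean bounded error is then unscaled:

```python
import numpy as np

def umbrae(errors, benchmark_errors):
    # Bounded relative absolute error: |e_t| / (|e_t| + |e*_t|), in [0, 1].
    e = np.abs(np.asarray(errors, dtype=float))
    eb = np.abs(np.asarray(benchmark_errors, dtype=float))
    mbrae = np.mean(e / (e + eb))
    return mbrae / (1.0 - mbrae)   # unscale the mean bounded error

# Forecast errors vs. the errors of a user-selected benchmark (e.g. naive).
print(umbrae([0.5, -1.2, 0.3], [1.0, -1.0, 0.8]))
```

A value below 1 indicates the evaluated method outperforms the chosen benchmark on this measure.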

  17. Evaluation of a Teleform-based data collection system: a multi-center obesity research case study.

    PubMed

    Jenkins, Todd M; Wilson Boyce, Tawny; Akers, Rachel; Andringa, Jennifer; Liu, Yanhong; Miller, Rosemary; Powers, Carolyn; Ralph Buncher, C

    2014-06-01

    Utilizing electronic data capture (EDC) systems in data collection and management allows automated validation programs to preemptively identify and correct data errors. For our multi-center, prospective study we chose to use TeleForm, paper-based data capture software that uses recognition technology to create case report forms (CRFs) with similar functionality to EDC, including custom scripts to identify entry errors. We quantified the accuracy of the optimized system through a data audit of CRFs and the study database, examining selected critical variables for all subjects in the study, as well as an audit of all variables for 25 randomly selected subjects. Overall we found 6.7 errors per 10,000 fields, with similar estimates for critical (6.9/10,000) and non-critical (6.5/10,000) variables, values that fall below the acceptable quality threshold of 50 errors per 10,000 established by the Society for Clinical Data Management. However, error rates were found to vary widely by type of data field, with the highest rate observed with open text fields. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Effect of phase errors in stepped-frequency radar systems

    NASA Astrophysics Data System (ADS)

    Vanbrundt, H. E.

    1988-04-01

    Stepped-frequency waveforms are being considered for inverse synthetic aperture radar (ISAR) imaging from ship and airborne platforms and for detailed radar cross section (RCS) measurements of ships and aircraft. These waveforms make it possible to achieve resolutions of 1.0 foot by using existing radar designs and processing technology. One problem not yet fully resolved in using stepped-frequency waveforms for ISAR imaging is the deterioration in signal level caused by random frequency error. Random frequency error of the stepped-frequency source results in reduced peak responses and increased null responses. The resulting reduced signal-to-noise ratio is range dependent. Two of the major concerns addressed in this report are radar range limitations for ISAR and the error in calibration for RCS measurements caused by differences in range between a passive reflector used as an RCS reference and the target to be measured. In addressing these concerns, NOSC developed an analysis to assess the tolerable frequency error in terms of the resulting loss in signal power and signal-to-phase noise.

  19. Analysis of the technology acceptance model in examining hospital nurses' behavioral intentions toward the use of bar code medication administration.

    PubMed

    Song, Lunar; Park, Byeonghwa; Oh, Kyeung Mi

    2015-04-01

    Serious medication errors continue to exist in hospitals, even though there is technology that could potentially eliminate them, such as bar code medication administration. Little is known about the degree to which the culture of patient safety is associated with behavioral intention to use bar code medication administration. Based on the Technology Acceptance Model, this study evaluated the relationships among patient safety culture, perceived usefulness, perceived ease of use, and behavioral intention to use bar code medication administration technology among nurses in hospitals. Cross-sectional surveys of a convenience sample of 163 nurses using bar code medication administration were conducted. Feedback and communication about errors had a positive impact in predicting perceived usefulness (β=.26, P<.01) and perceived ease of use (β=.22, P<.05). In a multiple regression model predicting behavioral intention, age had a negative impact (β=-.17, P<.05); however, teamwork within hospital units (β=.20, P<.05) and perceived usefulness (β=.35, P<.01) both had a positive impact. The overall behavioral intention model explained 24% (P<.001) of the variance. The identified factors influencing behavioral intention can help hospitals develop tailored interventions for RNs to reduce medication administration errors and increase patient safety by using this technology.

  20. Acceptance, commissioning and clinical use of the WOmed T-200 kilovoltage X-ray therapy unit

    PubMed Central

    Zucchetti, Paolo

    2015-01-01

    Objective: The objective of this work was to characterize the performance of the WOmed T-200-kilovoltage (kV) therapy machine. Methods: Mechanical functionality, radiation leakage, alignment and interlocks were investigated. Half-value layers (HVLs) (first and second HVLs) from X-ray beams generated from tube potentials between 30 and 200 kV were measured. Reference dose was determined in water. Beam start-up characteristics, dose linearity and reproducibility, beam flatness, and uniformity as well as deviations from inverse square law were assessed. Relative depth doses (RDDs) were determined in water and water-equivalent plastic. The quality assurance program included a dosimetry audit with thermoluminescent dosemeters. Results: All checks on machine performance were satisfactory. HVLs ranged between 0.45–4.52 mmAl and 0.69–1.78 mmCu. Dose rates varied between 0.2 and 3 Gy min−1 with negligible time-end errors. There were differences in measured RDDs from published data. Beam outputs were confirmed with the dosimetry audit. The use of published backscatter factors was implemented to account for changes in phantom scatter for treatments with irregularly shaped fields. Conclusion: Guidance on the determination of HVL and RDD in kV beams can be contradictory. RDDs were determined through measurement and curve fitting. These differed from published RDD data, and the differences observed were larger in the low-kV energy range. Advances in knowledge: This article reports on the comprehensive and novel approach to the acceptance, commissioning and clinical use of a modern kV therapy machine. The challenges in the dosimetry of kV beams faced by the medical physicist in the clinic are highlighted. PMID:26224430

  1. Cross-cultural adaptation and psychometric evaluations of the Turkish version of Parkinson Fatigue Scale.

    PubMed

    Ozturk, Erhan Arif; Kocer, Bilge Gonenli; Umay, Ebru; Cakci, Aytul

    2018-06-07

    The objectives of the present study were to translate and cross-culturally adapt the English version of the Parkinson Fatigue Scale into Turkish, to evaluate its psychometric properties, and to compare them with those of other language versions. A total of 144 patients with idiopathic Parkinson disease were included in the study. The Turkish version of the Parkinson Fatigue Scale was evaluated for data quality, scaling assumptions, acceptability, reliability, and validity. The questionnaire response rate was 100% for both test and retest. The percentage of missing data was zero for all items, and the percentage of computable scores was full. Floor and ceiling effects were absent. The Parkinson Fatigue Scale provides acceptable internal consistency (Cronbach's alpha was 0.974 for the first test and 0.964 for the retest, and corrected item-to-total correlations ranged from 0.715 to 0.906) and test-retest reliability (Cohen's kappa coefficients ranged from 0.632 to 0.786 for individual items, and the intraclass correlation coefficient was 0.887 for the overall Parkinson Fatigue Scale score). An exploratory factor analysis of the items revealed a single factor explaining 71.7% of the variance. The goodness-of-fit statistics for the one-factor confirmatory factor analysis were Tucker-Lewis index = 0.961, comparative fit index = 0.971 and root mean square error of approximation = 0.077 for a single factor. The average Parkinson Fatigue Scale score was correlated significantly with sociodemographic data, clinical characteristics and scores of rating scales. The Turkish version of the Parkinson Fatigue Scale seems to be culturally well adapted and to have good psychometric properties. The scale can be used in further studies to assess fatigue in patients with Parkinson's disease.

  2. Performance Errors in Weight Training and Their Correction.

    ERIC Educational Resources Information Center

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent- over and seated row…

  3. Suppression of the Nonlinear Zeeman Effect and Heading Error in Earth-Field-Range Alkali-Vapor Magnetometers.

    PubMed

    Bao, Guzhi; Wickenbrock, Arne; Rochester, Simon; Zhang, Weiping; Budker, Dmitry

    2018-01-19

    The nonlinear Zeeman effect can induce splitting and asymmetries of magnetic-resonance lines in the geophysical magnetic-field range. This is a major source of "heading error" for scalar atomic magnetometers. We demonstrate a method to suppress the nonlinear Zeeman effect and heading error based on spin locking. In an all-optical synchronously pumped magnetometer with separate pump and probe beams, we apply a radio-frequency field which is in phase with the precessing magnetization. This results in the collapse of the multicomponent asymmetric magnetic-resonance line with ∼100  Hz width in the Earth-field range into a single peak with a width of 22 Hz, whose position is largely independent of the orientation of the sensor within a range of orientation angles. The technique is expected to be broadly applicable in practical magnetometry, potentially boosting the sensitivity and accuracy of Earth-surveying magnetometers by increasing the magnetic-resonance amplitude, decreasing its width, and removing the important and limiting heading-error systematic.

  4. Analysis of GRACE Range-rate Residuals with Emphasis on Reprocessed Star-Camera Datasets

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.; Naeimi, M.; Bandikova, T.; Guerr, T. M.; Klinger, B.

    2015-12-01

    Since March 2002 the two GRACE satellites have orbited the Earth at relatively low altitude. Determination of the gravity field of the Earth, including its temporal variations, from the satellites' orbits and the inter-satellite measurements is the goal of the mission. Yet the time-variable gravity signal has not been fully exploited. This can be seen in the computed post-fit range-rate residuals. The errors reflected in the range-rate residuals are due to different sources, such as systematic errors, mismodelling errors and tone errors. Here, we analyse the effect of three different star-camera data sets on the post-fit range-rate residuals. On the one hand, we consider the available attitude data; on the other hand, we take two different data sets that have been reprocessed at the Institute of Geodesy, Hannover, and the Institute of Theoretical Geodesy and Satellite Geodesy, TU Graz, Austria, respectively. The differences in the range-rate residuals computed from the different attitude datasets are then analyzed in this study. Details will be given and results will be discussed.

  5. Enhancing Deep-Water Low-Resolution Gridded Bathymetry Using Single Image Super-Resolution

    NASA Astrophysics Data System (ADS)

    Elmore, P. A.; Nock, K.; Bonanno, D.; Smith, L.; Ferrini, V. L.; Petry, F. E.

    2017-12-01

    We present research to employ single-image super-resolution (SISR) algorithms to enhance knowledge of the seafloor using the 1-minute GEBCO 2014 grid when 100 m grids from high-resolution sonar systems are available for training. Our numerical experiments perform x15 upscaling of the GEBCO grid over three areas of the Eastern Pacific Ocean along mid-ocean ridge systems where we have these 100 m gridded bathymetry data sets, which we accept as ground truth. We show that four SISR algorithms can enhance this low-resolution knowledge of bathymetry relative to bicubic or Spline-In-Tension algorithms through upscaling under these conditions: 1) rough topography is present in both training and testing areas, and 2) the range of depths and features in the training area contains the range of depths in the enhancement area. We judged SISR enhancement successful versus bicubic interpolation when Student's hypothesis tests showed significant improvement of the root-mean-squared error (RMSE) between upscaled bathymetry and the 100 m gridded ground-truth bathymetry at p < 0.05. In addition, we found evidence that random-forest-based SISR methods may provide more robust enhancements than non-forest-based SISR algorithms.
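    The acceptance criterion described above can be sketched with synthetic arrays (illustration only, not the study data or grids): compute the RMSE of each upscaling method against ground truth and compare the per-node absolute errors with a paired test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
truth = rng.normal(-3000.0, 200.0, size=10_000)            # depths (m)
bicubic = truth + rng.normal(0.0, 30.0, size=truth.size)   # larger errors
sisr = truth + rng.normal(0.0, 20.0, size=truth.size)      # smaller errors

rmse = lambda est: np.sqrt(np.mean((est - truth) ** 2))
t, p = stats.ttest_rel(np.abs(sisr - truth), np.abs(bicubic - truth))
print(f"RMSE bicubic {rmse(bicubic):.1f} m, SISR {rmse(sisr):.1f} m, p = {p:.1e}")
```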

  6. MEASUREMENT OF THE INTENSITY OF THE PROTON BEAM OF THE HARVARD UNIVERSITY SYNCHROCYCLOTRON FOR ENERGY-SPECTRAL MEASUREMENTS OF NUCLEAR SECONDARIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoro, R.T.; Peelle, R.W.

    1964-03-01

    Two thin helium-filled parallel-plate ionization chambers were designed for use in continuously monitoring the 160-MeV proton beam of the Harvard University Synchrocyclotron over an intensity range from 10^5 to 10^10 protons/sec. The ionization chambers were calibrated by two independent methods. In four calibrations, the charge collected in the ionization chambers was compared with that deposited in a Faraday cup which followed the ionization chambers in the proton beam. In the second method, a calibration was made by individually counting beam protons with a pair of thin scintillation detectors. The ionization chamber response was found to be flat within 2% over a five-decade range of beam intensity. Comparison of the Faraday-cup calibrations with that from proton counting shows agreement to within 5%, which is considered satisfactory. The experimental results were also in agreement, within estimated errors, with the ionization chamber response calculated using an accepted value of the average energy loss per ion pair for helium. A slow shift in the calibrations with time is ascribed to gradual contamination of the helium in the chambers by air leakage.

  7. Catadioptric lenses in Visible Light Communications

    NASA Astrophysics Data System (ADS)

    Garcia-Marquez, J.; Valencia, J. C.; Perez, H.; Topsu, S.

    2015-04-01

    Over the past few years, visible light communications (VLC) have attracted accelerating interest from a research point of view. The beginning of this decade has seen many improvements in VLC at the electronic level: high rates of transmission at low bit error ratios (BER) have been reported. A number of start-ups have initiated activities to offer a variety of applications ranging from indoor geo-localization to internet access, but in spite of these advancements, other problems arise. Long-range transmission means a high BER, which reduces the number of applications. In this sense, newly redesigned optical collectors or, in some cases, optical reflectors must be considered to ensure a low BER at longer transmission distances. Here we also present a preliminary design of a catadioptric, monolithic lens for a Li-Fi receiver with two rotationally symmetrical main piecewise surfaces za and zb. These surfaces are represented in a system of cylindrical coordinates, with an anterior surface za comprising a central refractive sector surrounded by a peripheral reflective sector, and a back piecewise surface zb with a central refractive sector and a reflective sector, both characterized as ideal for capturing light within large acceptance angles.

  8. Bias Reduction and Filter Convergence for Long Range Stereo

    NASA Technical Reports Server (NTRS)

    Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav

    2005-01-01

    We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, there are two problems that arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range dependent statistical bias; when filtering this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
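    The first effect is easy to reproduce numerically. The sketch below (hypothetical camera parameters, not the authors' setup) shows that zero-mean disparity noise biases the mean triangulated range long, because range z = f·b/d is a convex function of disparity d:

```python
import numpy as np

rng = np.random.default_rng(0)
f_px, baseline_m = 800.0, 0.12            # focal length (px) and baseline (m)
z_true = 40.0                             # true range (m)
d_true = f_px * baseline_m / z_true       # true disparity (px)
d_noisy = d_true + rng.normal(0.0, 0.25, size=1_000_000)  # 0.25 px noise
z_est = f_px * baseline_m / d_noisy       # triangulated ranges
print(f"true {z_true:.1f} m, mean estimate {z_est.mean():.2f} m (biased long)")
```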

  9. SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range- Finding Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platt, M; Platt, M; Lamba, M

    2016-06-15

    Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions, and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter, the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor, the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10-image averages. For areas ranging from one pixel to 9 × 9 pixels, the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters, the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees, the range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.

  10. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    PubMed

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms "mistake," "error," "incorrect," "inadvertent," and "iatrogenic" to survey several sets of narrative reports including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the reporting of physicians. We identified 222 explicitly reported medical errors. The positive predictive value varied with different keywords. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most common reported errors and these reported therapeutic-related errors were mainly medication errors. Keyword searches combined with manual review indicated some medical errors that were reported in medical records. It had a low sensitivity and a moderate positive predictive value, which varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.

  11. Being a Victim of Medical Error in Brazil: An (Un)Real Dilemma

    PubMed Central

    Mendonça, Vitor Silva; Custódio, Eda Marconi

    2016-01-01

    Medical error stems from inadequate professional conduct capable of producing harm to life or exacerbating the health of another, whether through act or omission. This situation has become increasingly common in Brazil and worldwide. In this study, the aim was to understand what being the victim of medical error is like and to investigate the circumstances imposed on victims in this condition in Brazil. A semi-structured interview was conducted with twelve people who had gone through situations of medical error in their lives, creating a space for narratives of their experiences and deep reflection on the phenomenon. The concept of medical error has a negative connotation, often being associated with the incompetence of a medical professional. Medical error in Brazil is demonstrated by low-quality professional performance and reflects the current reality of the country because of the common lack of respect and consideration for patients. Victims often remark on their loss of identity, as their social functions have been interrupted and they do not expect to regain them. Little acknowledgment of error was found, however, in the discourses and attitudes of the doctors involved, and victims felt a need to have the medical conduct judged in an attempt to assert their rights. Medical error in Brazil carries a punitive character and is little discussed in medical and scientific circles. The stigma of medical error is closely connected to the value and cultural judgments of the country, making it difficult to accept, both for victims and professionals. PMID:27403461

  12. Test-Retest Reliability and Minimal Detectable Change of the D2 Test of Attention in Patients with Schizophrenia.

    PubMed

    Lee, Posen; Lu, Wen-Shian; Liu, Chin-Hsuan; Lin, Hung-Yu; Hsieh, Ching-Lin

    2017-12-08

    The d2 Test of Attention (D2) is a commonly used measure of selective attention for patients with schizophrenia. However, its test-retest reliability and minimal detectable change (MDC) are unknown in patients with schizophrenia, limiting its utility in both clinical and research settings. The aim of the present study was to examine the test-retest reliability and MDC of the D2 in patients with schizophrenia. A rater administered the D2 to 108 patients with schizophrenia twice at a 1-month interval. Test-retest reliability was determined through calculation of the intra-class correlation coefficient (ICC). We also carried out Bland-Altman analysis, which included a scatter plot of the differences between test and retest against their mean. Systematic biases were evaluated by use of a paired t-test. The ICCs for the D2 ranged from 0.78 to 0.94. The MDCs (MDC%) of the seven subscores were 102.3 (29.7), 19.4 (85.0), 7.2 (94.6), 21.0 (69.0), 104.0 (33.1), 105.0 (35.8), and 7.8 (47.8), which represented limited-to-acceptable random measurement error. Trends in the Bland-Altman plots of the omissions (E1), commissions (E2), and errors (E) were noted, indicating heteroscedasticity in the data. According to the results, the D2 had good test-retest reliability, especially in the scores of TN, TN-E, and CP. For further research, finding a way to improve the administration procedure to reduce random measurement error would be important for the E1, E2, E, and FR subscores. © The Author(s) 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
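    The reported MDC values follow from standard formulas linking the ICC and the sample standard deviation; the sketch below applies those formulas to hypothetical subscore statistics (not the study's data):

```python
import numpy as np

def mdc95(sd, icc):
    # SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM.
    sem = sd * np.sqrt(1.0 - icc)
    return 1.96 * np.sqrt(2.0) * sem

sd, icc, mean_score = 96.0, 0.85, 344.0   # hypothetical subscore statistics
mdc = mdc95(sd, icc)
print(f"MDC95 = {mdc:.1f}, MDC% = {100.0 * mdc / mean_score:.1f}%")
```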

  13. A Prospective Multicenter Evaluation of the Accuracy of a Novel Implanted Continuous Glucose Sensor: PRECISE II.

    PubMed

    Christiansen, Mark P; Klaff, Leslie J; Brazg, Ronald; Chang, Anna R; Levy, Carol J; Lam, David; Denham, Douglas S; Atiee, George; Bode, Bruce W; Walters, Steven J; Kelley, Lynne; Bailey, Timothy S

    2018-03-01

    Persistent use of real-time continuous glucose monitoring (CGM) improves diabetes control in individuals with type 1 diabetes (T1D) and type 2 diabetes (T2D). PRECISE II was a nonrandomized, blinded, prospective, single-arm, multicenter study that evaluated the accuracy and safety of the implantable Eversense CGM system among adult participants with T1D and T2D (NCT02647905). The primary endpoint was the mean absolute relative difference (MARD) between paired Eversense and Yellow Springs Instrument (YSI) reference measurements through 90 days postinsertion for reference glucose values from 40 to 400 mg/dL. Additional endpoints included Clarke Error Grid analysis and sensor longevity. The primary safety endpoint was the incidence of device-related or sensor insertion/removal procedure-related serious adverse events (SAEs) through 90 days postinsertion. Ninety participants received the CGM system. The overall MARD value against reference glucose values was 8.8% (95% confidence interval: 8.1%-9.3%), which was significantly lower than the prespecified 20% performance goal for accuracy (P < 0.0001). Ninety-three percent of CGM values were within 20/20% of reference values over the total glucose range of 40-400 mg/dL. Clarke Error Grid analysis showed 99.3% of samples in the clinically acceptable error zones A (92.8%) and B (6.5%). Ninety-one percent of sensors were functional through day 90. One related SAE (1.1%) occurred during the study for removal of a sensor. The PRECISE II trial demonstrated that the Eversense CGM system provided accurate glucose readings through the intended 90-day sensor life with a favorable safety profile.

  14. Concurrent validation of an inertial measurement system to quantify kicking biomechanics in four football codes.

    PubMed

    Blair, Stephanie; Duthie, Grant; Robertson, Sam; Hopkins, William; Ball, Kevin

    2018-05-17

    Wearable inertial measurement systems (IMS) allow for three-dimensional analysis of human movements in a sport-specific setting. This study examined the concurrent validity of an IMS (Xsens MVN system) for measuring lower extremity and pelvis kinematics in comparison to a Vicon motion analysis system (MAS) during kicking. Thirty footballers from Australian football (n = 10), soccer (n = 10), rugby league and rugby union (n = 10) clubs completed 20 kicks across four conditions. Concurrent validity was assessed using a linear mixed-modelling approach, which allowed the partition of between- and within-subject variance from the device measurement error. Results were expressed in raw and standardised units for assessments of differences in means and measurement error, and interpreted via non-clinical magnitude-based inferences. Trivial to small differences were found in linear velocities (foot and pelvis), angular velocities (knee, shank and thigh), sagittal joint (knee and hip) and segment angle (shank and pelvis) means (mean difference: 0.2-5.8%) between the IMS and MAS in Australian football, soccer and the rugby codes. Trivial to small measurement errors (from 0.1 to 5.8%) were found between the IMS and MAS in all kinematic parameters. The IMS demonstrated acceptable levels of concurrent validity compared to a MAS when measuring kicking biomechanics across the four football codes. Wearable IMS offers various benefits over MAS, such as out-of-laboratory testing, larger measurement range and quick data output, to help improve the ecological validity of biomechanical testing and the timing of feedback. The results advocate the use of IMS to quantify biomechanics of high-velocity movements in sport-specific settings. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    PubMed

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Test-retest reliability of sudden ankle inversion measurements in subjects with healthy ankle joints.

    PubMed

    Eechaute, Christophe; Vaes, Peter; Duquet, William; Van Gheluwe, Bart

    2007-01-01

    Sudden ankle inversion tests have been used to investigate whether the onset of peroneal muscle activity is delayed in patients with chronically unstable ankle joints. Before test results on latency times in patients with chronic ankle instability and healthy subjects can be interpreted, the reliability of these measures must first be demonstrated. To investigate the test-retest reliability of variables measured during a sudden ankle inversion movement in standing subjects with healthy ankle joints. Validation study. Research laboratory. 15 subjects with healthy ankle joints (30 ankles). Subjects stood on an ankle inversion platform with both feet tightly fixed to independently moveable trapdoors. An unexpected sudden ankle inversion of 50 degrees was imposed. We measured latency and motor response times and the electromechanical delay of the peroneus longus muscle, along with the time and angular position of the first and second decelerating moments, the mean and maximum inversion speed, and the total inversion time. Correlation coefficients and standard errors of measurement were calculated. Intraclass correlation coefficients ranged from 0.17 for the electromechanical delay of the peroneus longus muscle (standard error of measurement = 2.7 milliseconds) to 0.89 for the maximum inversion speed (standard error of measurement = 34.8 milliseconds). The reliability of the latency and motor response times of the peroneus longus muscle, the time of the first and second decelerating moments, and the mean and maximum inversion speed was acceptable in subjects with healthy ankle joints, which supports investigating the reliability of these measures in subjects with chronic ankle instability. The lower reliability of the electromechanical delay of the peroneus longus muscle and the angular positions of both decelerating moments calls the use of these variables into question.
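    Reports like this pair each intraclass correlation coefficient (ICC) with a standard error of measurement (SEM); the two are linked by SEM = SD·sqrt(1 - ICC). A minimal sketch of that relation, assuming an ICC(2,1)-style two-way model and invented test-retest data (not this study's measurements):

    ```python
    # ICC(2,1) from a subjects-by-sessions table, and the derived SEM.
    import numpy as np

    def icc_and_sem(test, retest):
        x = np.column_stack([test, retest])            # subjects x sessions
        n, k = x.shape
        ms_rows = k * np.var(x.mean(axis=1), ddof=1)   # between-subjects mean square
        ms_cols = n * np.var(x.mean(axis=0), ddof=1)   # between-sessions mean square
        resid = (x - x.mean(axis=1, keepdims=True)
                   - x.mean(axis=0, keepdims=True) + x.mean())
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
        icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                    + k * (ms_cols - ms_err) / n)
        sem = x.std(ddof=1) * np.sqrt(1 - icc)         # same units as the measure
        return icc, sem

    test = np.array([58.0, 63.5, 49.2, 71.8, 60.3])    # e.g. latency times, ms
    retest = np.array([60.1, 62.0, 51.5, 70.2, 58.9])
    print(icc_and_sem(test, retest))
    ```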

  17. Comparison of pitot traverses taken at varying distances downstream of obstructions.

    PubMed

    Guffey, S E; Booth, D W

    1999-01-01

    This study determined the deviations between pitot traverses taken under "ideal" conditions--at least seven duct diameters (i.e., distance = 7D) from obstructions, elbows, junction fittings, and other disturbances to flow--and those taken downstream from commonplace disturbances. Two perpendicular 10-point, log-linear velocity pressure traverses were taken at various distances downstream of the tested upstream conditions. Upstream conditions included a plain duct opening, a junction fitting, a single 90-degree elbow, and two elbows rotated 90 degrees from each other into two orthogonal planes. Airflows determined from those values were compared with the values measured more than 40D downstream of the same obstructions under ideal conditions. The ideal measurements were taken on three traverse diameters in the same plane separated by 120 degrees in honed drawn-over-mandrel tubing. In all cases the pitot tubes were held in place by devices that effectively eliminated alignment and insertion depth errors. Duct velocities ranged from 1500 to 4500 ft/min. Results were surprisingly good when two perpendicular traverses were employed: when the average of two perpendicular traverses was taken, deviations from the ideal value were 6% or less even for traverses taken as close as 2D from the upstream disturbances. At 3D, deviations seldom exceeded 5%. With single-diameter traverses, errors seldom exceeded 5% at 6D or more downstream from the disturbance. Interestingly, percentage deviations were about the same at high and low velocities. This study demonstrated that two perpendicular pitot traverses can be taken as close as 3D from these disturbances with acceptable (< or = 5%) deviations from measurements taken under ideal conditions.
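    The flow computation behind such a comparison is straightforward; a sketch assuming standard air density (so velocity in ft/min is 4005 times the square root of the velocity pressure in inches w.g.) and invented traverse readings, with the log-linear insertion-depth spacing omitted:

    ```python
    # Duct airflow from two perpendicular 10-point pitot traverses.
    # The 4005*sqrt(VP) relation assumes standard air; readings are invented.
    import math

    def traverse_airflow(vp_a, vp_b, duct_diameter_ft):
        """Average two perpendicular velocity-pressure traverses -> flow in cfm."""
        velocities = [4005.0 * math.sqrt(vp) for vp in vp_a + vp_b]
        v_avg = sum(velocities) / len(velocities)
        area = math.pi * (duct_diameter_ft / 2) ** 2
        return v_avg * area

    vp_a = [0.52, 0.60, 0.66, 0.70, 0.72, 0.71, 0.69, 0.65, 0.59, 0.50]
    vp_b = [0.50, 0.58, 0.65, 0.69, 0.73, 0.72, 0.68, 0.64, 0.58, 0.49]
    print(round(traverse_airflow(vp_a, vp_b, duct_diameter_ft=0.5)), "cfm")
    ```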

  18. A comparison between a new model and current models for estimating trunk segment inertial parameters.

    PubMed

    Wicke, Jason; Dumas, Genevieve A; Costigan, Patrick A

    2009-01-05

    Modeling of the body segments to estimate segment inertial parameters is required in the kinetic analysis of human motion. A new geometric model for the trunk has been developed that uses various cross-sectional shapes to estimate segment volume and adopts a non-uniform, gender-specific density function. The goal of this study was to test the accuracy of the new model for estimating the trunk's inertial parameters by comparing it to models currently used in biomechanical research. Trunk inertial parameters estimated from dual X-ray absorptiometry (DXA) were used as the standard. Twenty-five female and 24 male college-aged participants were recruited for the study. The new model was compared to the accepted models by determining the error between each model's trunk inertial estimates and those from DXA. Results showed that the new model was more accurate across all inertial estimates than the other models. The new model had errors within 6.0% for both genders, whereas the other models had higher average errors ranging from 10% to over 50% and were much more inconsistent between the genders. In addition, there was little consistency in the level of accuracy of the other models when estimating the different inertial parameters. These results suggest that the new model provides more accurate and consistent trunk inertial estimates than the other models for both female and male college-aged individuals. However, similar studies need to be performed using other populations, such as the elderly or individuals with a distinct morphology (e.g., obesity). In addition, the effect of using different models on the outcome of kinetic parameters, such as joint moments and forces, needs to be assessed.

  19. Non-coding glucometers among pediatric patients with diabetes: looking for the target population and an accuracy evaluation of no-coding personal glucometer.

    PubMed

    Fendler, Wojciech; Hogendorf, Anna; Szadkowska, Agnieszka; Młynarski, Wojciech

    2011-01-01

    Self-monitoring of blood glucose (SMBG) is one of the cornerstones of diabetes management. The aims were to evaluate the potential for miscoding of a personal glucometer, to define a target population for a non-coding glucometer among pediatric patients with diabetes, and to assess the accuracy of the Contour TS non-coding system. The potential for miscoding during self-monitoring of blood glucose was evaluated by means of an anonymous questionnaire, with worst and best case scenarios evaluated depending on the response patterns. Testing of the Contour TS system was performed according to guidelines set by the national committee for clinical laboratory standards. The estimated frequency of individuals prone to non-coding ranged from 68.21% (95% CI 60.70-75.72%) to 7.95% (95% CI 3.86-12.31%) for the worst and best case scenarios, respectively. Factors associated with an increased likelihood of non-coding were: a smaller number of tests per day, a greater number of individuals involved in testing, and self-testing by the patient with diabetes. The Contour TS device showed intra- and inter-assay accuracy of 95%, a linear association with laboratory measurements (R2 = 0.99, p < 0.0001) and a consistent but small bias of -1.12% (95% confidence interval -3.27 to 1.02%). Clarke error grid analysis showed 4% of values within the benign error zone (B), with the other measurements yielding an acceptably accurate result (zone A). The Contour TS system showed sufficient accuracy to be safely used in the monitoring of pediatric patients with diabetes. Patients from families with a high throughput of test strips or multiple individuals involved in SMBG using the same meter are candidates for clinical use of such devices due to an increased risk of calibration errors.

  20. Analysing the accuracy of machine learning techniques to develop an integrated influent time series model: case study of a sewage treatment plant, Malaysia.

    PubMed

    Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed

    2018-04-01

    The function of a sewage treatment plant is to treat the sewage to acceptable standards before it is discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R²) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. The RE results also show that the SVM model's frequency of errors above 10% or below -10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow, while NAR fit well for the low and average inflow ranges. The integrated model therefore includes the NAR model for low and average influent and the SVM model for peak inflow.
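    The evaluation measures named here, and the regime-switching idea behind the integrated model, have simple forms; a sketch with stand-in models and an assumed peak-flow threshold (neither taken from the paper):

    ```python
    # RMSE, R^2 and relative error, plus an integrated predictor that uses the
    # SVM model at peak flow and the NAR model otherwise (threshold assumed).
    import numpy as np

    def rmse(obs, pred):
        return np.sqrt(np.mean((obs - pred) ** 2))

    def r2(obs, pred):
        return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def relative_error(obs, pred):
        return (pred - obs) / obs

    def integrated_predict(x, nar_model, svm_model, peak_threshold):
        nar, svm = nar_model(x), svm_model(x)
        return np.where(nar >= peak_threshold, svm, nar)  # switch by predicted regime
    ```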

  1. Nonlinear analysis and dynamic compensation of stylus scanning measurement with wide range

    NASA Astrophysics Data System (ADS)

    Hui, Heiyang; Liu, Xiaojun; Lu, Wenlong

    2011-12-01

    Surface topography is an important geometrical feature of a workpiece that influences its quality and functions such as friction, wear, lubrication and sealing. Precision measurement of surface topography is fundamental for characterizing and assuring product quality. The stylus scanning technique is a widely used method for surface topography measurement and is also regarded as the international standard method for 2-D surface characterization. Usually surface topography, including the primary profile, waviness and roughness, can be measured precisely and efficiently by this method. However, when the stylus scanning method is used to measure curved surface topography, a nonlinear error is unavoidable: it arises both from the horizontal offset of the actual measured point from the given sampling point and from the nonlinear transformation from the vertical displacement of the stylus tip to the angular displacement of the stylus arm, and it increases with the measuring range. In this paper, a wide-range stylus scanning measurement system based on the cylindrical grating interference principle is constructed, the origins of the nonlinear error are analyzed, an error model is established, and a solution to decrease the nonlinear error is proposed, through which the error in the collected data is dynamically compensated.
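    The second error source, the vertical-to-angular transformation, can be illustrated with basic arc geometry (a simplification of mine, not the paper's grating-based model): for a pivoted stylus arm of effective length $L$ rotated through angle $\theta$,

    \[
    z = L\sin\theta, \qquad \Delta x = L\,(1-\cos\theta) \approx \frac{z^{2}}{2L},
    \]

    so the tip height $z$ varies nonlinearly with $\theta$ while the tip simultaneously shifts horizontally by $\Delta x$ away from the intended sampling abscissa, and both effects grow with the vertical measuring range.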

  2. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    NASA Astrophysics Data System (ADS)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.

  3. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error; Type 2: gantry angle-dependent MLC error; Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics agreed within a tolerance of 3%. In the error-detection comparison of the Delta4PT and MFX, the gamma passing rates of the two dosimetry systems failed the 90% acceptance criterion once the magnitude of the introduced error exceeded 2 mm (Types 1 and 2) or 1.5° (Type 3). For delivery with all error types, the average dose difference of the PTV due to error magnitude showed agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
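    The gamma analysis behind these passing rates combines a dose tolerance and a distance-to-agreement tolerance into one index. A toy 1-D version (real QA systems such as those named above work on 2-D/3-D dose grids; the criteria, local-dose normalization and profiles here are assumptions):

    ```python
    # 1-D gamma index: a reference point passes if some evaluated point lies
    # within the combined dose/distance ellipse (gamma <= 1). Local dose
    # normalization is assumed; doses must be positive.
    import numpy as np

    def gamma_pass_rate(ref, meas, positions, dose_tol=0.03, dist_tol=3.0):
        passes = 0
        for x_r, d_r in zip(positions, ref):
            dose_term = (meas - d_r) / (dose_tol * d_r)
            dist_term = (positions - x_r) / dist_tol
            gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
            passes += gamma <= 1.0
        return passes / len(ref)

    x = np.linspace(0.0, 100.0, 101)                 # positions in mm
    ref = 1.0 + np.exp(-((x - 50.0) / 20.0) ** 2)    # toy dose profiles
    meas = 1.0 + np.exp(-((x - 51.0) / 20.0) ** 2)
    print(gamma_pass_rate(ref, meas, x))
    ```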

  4. The impact of registration accuracy on imaging validation study design: A novel statistical power calculation.

    PubMed

    Gibson, Eli; Fenster, Aaron; Ward, Aaron D

    2013-10-01

    Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Balancing the books - a statistical theory of prospective budgets in Earth System science

    NASA Astrophysics Data System (ADS)

    O'Kane, J. Philip

    An honest declaration of the error in a mass, momentum or energy balance, ɛ, simply raises the question of its acceptability: "At what value of ɛ is the attempted balance to be rejected?" Answering this question requires a reference quantity against which to compare ɛ. This quantity must be a mathematical function of all the data used in making the balance. To deliver this function, a theory grounded in a workable definition of acceptability is essential. A distinction must be drawn between a retrospective balance and a prospective budget in relation to any natural space-filling body. Balances look to the past; budgets look to the future. The theory is built on the application of classical sampling theory to the measurement and closure of a prospective budget. It satisfies R.A. Fisher's "vital requirement that the actual and physical conduct of experiments should govern the statistical procedure of their interpretation". It provides a test, which rejects, or fails to reject, the hypothesis that the closing error on the budget, when realised, was due to sampling error only. By increasing the number of measurements, the discrimination of the test can be improved, controlling both the precision and accuracy of the budget and its components. The cost-effective design of such measurement campaigns is discussed briefly. This analysis may also show when campaigns to close a budget on a particular space-filling body are not worth the effort for either scientific or economic reasons. Other approaches, such as those based on stochastic processes, lack this finality, because they fail to distinguish between different types of error in the mismatch between a set of realisations of the process and the measured data.
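    The abstract stops short of writing the test down; under the stated sampling-theory framing, one minimal form it could take (my notation, not the author's) is to reject closure when the realised closing error is large relative to its sampling variance:

    \[
    z = \frac{\varepsilon}{\sigma_{\varepsilon}}, \qquad
    \sigma_{\varepsilon}^{2} = \sum_{i} \sigma_{i}^{2}, \qquad
    \text{reject closure if } |z| > z_{1-\alpha/2},
    \]

    where the $\sigma_i^2$ are the sampling variances of the independently measured budget components; adding measurements shrinks $\sigma_{\varepsilon}$ and so improves the discrimination of the test, as the abstract notes.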

  6. [Errors in laboratory daily practice].

    PubMed

    Larrose, C; Le Carrer, D

    2007-01-01

    Legislation set by the GBEA (Guide de bonne exécution des analyses) requires that, before performing analyses, laboratory directors check both the nature of the samples and the patient's identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and 2002 by the specialized biochemistry laboratory, with the contribution of the reception centre for biological samples. The laboratory follows strict acceptability criteria at reception when checking requisition forms and biological samples. All errors are logged in the laboratory database, and analysis reports are sent to the care unit specifying the problems and their consequences for the analysis. The data are then assessed by the laboratory directors to produce monthly or annual statistical reports. These indicate the number of errors, indexed to patient files to reveal the specific problem areas, thereby allowing the laboratory directors to train the nurses and enable corrective action.

  7. Sonority contours in word recognition

    NASA Astrophysics Data System (ADS)

    McLennan, Sean

    2003-04-01

    Contrary to the Generativist distinction between competence and performance which asserts that speech or perception errors are due to random, nonlinguistic factors, it seems likely that errors are principled and possibly governed by some of the same constraints as language. A preliminary investigation of errors modeled after the child's "Chain Whisper" game (a degraded stimulus task) suggests that a significant number of recognition errors can be characterized as an improvement in syllable sonority contour towards the linguistically least-marked, voiceless-stop-plus-vowel syllable. An independent study of sonority contours showed that approximately half of the English lexicon can be uniquely identified by their contour alone. Additionally, "sororities" (groups of words that share a single sonority contour), surprisingly, show no correlation to familiarity or frequency in either size or membership. Together these results imply that sonority contours may be an important factor in word recognition and in defining word "neighborhoods." Moreover, they suggest that linguistic markedness constraints may be more prevalent in performance-related phenomena than previously accepted.

  8. When is Testing Sufficient

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush

    1999-01-01

    The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and the last phase of the software development, testing, must be decreased. Two questions are frequently asked: "To what extent is the software error-free?" and "How much time and effort is required to detect and remove the remaining errors?" Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Secondly, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what is sufficient testing and when is the most cost-effective time to stop testing.

  9. Analysis of tractable distortion metrics for EEG compression applications.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-07-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
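    Both criteria have simple closed forms, which makes the contrast easy to see in code (x is the original EEG, y the reconstruction; a mean-removed PRD variant also exists, and the toy signal below is invented):

    ```python
    # PRD is relative and dimensionless; RMSE is in signal units (e.g. microvolts),
    # so it can be checked directly against clinical noise limits.
    import numpy as np

    def prd(x, y):
        return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

    def rmse(x, y):
        return np.sqrt(np.mean((x - y) ** 2))

    x = np.sin(np.linspace(0.0, 20.0, 1000)) * 50.0   # toy "EEG", +/-50 uV
    y = x + np.random.default_rng(0).normal(0.0, 2.0, x.size)  # coding error
    print(f"PRD = {prd(x, y):.2f} %, RMSE = {rmse(x, y):.2f} uV")
    ```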

  10. Evaluation of drug content (potency) for compounded and FDA-approved formulations of doxycycline on receipt and after 21 days of storage.

    PubMed

    KuKanich, Kate; KuKanich, Butch; Slead, Tanner; Warner, Matt

    2017-10-01

    OBJECTIVE To determine drug content (potency) of compounded doxycycline formulations for veterinary use and of US FDA-approved doxycycline formulations for human use < 24 hours after receipt (day 1) and after 21 days of storage under recommended conditions (day 21). DESIGN Evaluation study. SAMPLE FDA-approved doxycycline tablets (100 mg), capsules (100 mg), and liquid suspension (10 mg/mL) and compounded doxycycline formulations from 3 pharmacies (tablets [25, 100, and 150 mg; 1 product/source], chews [100 mg; 1 product/source], and liquid suspensions or solution [6 mg/mL {2 sources} and 50 mg/mL {1 source}]). PROCEDURES Doxycycline content was measured in 5 samples of each tablet, chew, or capsule formulation and 5 replicates/bottle of liquid formulation on days 1 and 21 by liquid chromatography and compared with US Pharmacopeia acceptable ranges. RESULTS All FDA-approved formulations had acceptable content on days 1 and 21. On day 1, mean doxycycline content for the 3 compounded tablet formulations was 89%, 98%, and 116% (3/5, 5/5, and 1/5 samples within acceptable ranges); day 21 content range was 86% to 112% (1/5, 5/5, and 4/5 samples within acceptable ranges). Day 1 content of chews was 81%, 78%, and 98% (0/5, 0/5, and 5/5 samples within acceptable ranges), and that of compounded liquids was 50%, 52%, and 85% (no results within acceptable ranges). No chews or compounded liquid formulations met USP standards on day 21. CONCLUSIONS AND CLINICAL RELEVANCE FDA-approved doxycycline should be prescribed when possible. Whole tablets yielded the most consistent doxycycline content for compounded formulations.

  11. Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada

    USGS Publications Warehouse

    Hess, G.W.; Bohman, L.R.

    1996-01-01

    Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.

  12. Inter-satellite links for satellite autonomous integrity monitoring

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pérez, Irma; García-Serrano, Cristina; Catalán Catalán, Carlos; García, Alvaro Mozo; Tavella, Patrizia; Galleani, Lorenzo; Amarillo, Francisco

    2011-01-01

    A new integrity monitoring mechanism to be implemented on board a GNSS, taking advantage of inter-satellite links, is introduced. It is based on accurate range and Doppler measurements affected neither by atmospheric delays nor by local ground degradation (multipath and interference). By a linear combination of the inter-satellite link observables, appropriate observables for both satellite orbit and clock monitoring are obtained, and the proposed algorithms make it possible to reduce the time-to-alarm and the probability of undetected satellite anomalies. Several test cases have been run to assess the performance of the new orbit and clock monitoring algorithms in a complete scenario (satellite-to-satellite and satellite-to-ground links) and in a satellite-only scenario. The results of this experimentation campaign demonstrate that the orbit monitoring algorithm is able to detect orbital feared events while the position error at the worst user location is still under acceptable limits. For instance, an unplanned manoeuvre in the along-track direction is detected (with a probability of false alarm of 5 × 10⁻⁹) when the position error at the worst user location is 18 cm. The experimentation also reveals that the clock monitoring algorithm is able to detect phase jumps, frequency jumps and instability degradation in the clocks, but the latency and performance of detection strongly depend on the noise added by the clock measurement system.

  13. Trimming algorithm of frequency modulation for CIAE-230 MeV proton superconducting synchrocyclotron model cavity

    NASA Astrophysics Data System (ADS)

    Li, Pengzhan; Zhang, Tianjue; Ji, Bin; Hou, Shigang; Guo, Juanjuan; Yin, Meng; Xing, Jiansheng; Lv, Yinlong; Guan, Fengping; Lin, Jun

    2017-01-01

    A new project, the 230 MeV proton superconducting synchrocyclotron for cancer therapy, was proposed at CIAE in 2013. A model cavity was designed to verify the frequency modulation trimming algorithm, featuring a half-wave structure and eight sets of rotating blades for 1 kHz frequency modulation. Based on the electromagnetic (EM) field distribution analysis of the model cavity, the variable capacitor works as a function of time and the frequency can be written as a Maclaurin series. Curve fitting is applied to the theoretical frequency and the original simulated frequency; the second-order fit gives the best approximation, having the minimum variance. Constant equivalent inductance is an important condition in the calculation, and the equivalent parameters of the theoretical frequency can be obtained through this conversion. The trimming formula for the rotor blade outer radius is then found by discretization in the time domain. Simulation verification has been performed, and the results show that offsetting the calculated radius by -0.012 m yields an acceptable result. The trimming amendment in the time range of 0.328-0.4 ms, with an increment of 0.075 mm per 0.001 ms, helps to reduce the frequency error to 0.69% in Simulation C, half of the error in Simulation A (constant radius over 0.328-0.4 ms). The verification confirms the feasibility of the trimming algorithm for synchrocyclotron frequency modulation.

  14. Compressed storage of arterial pressure waveforms by selection of significant points.

    PubMed

    de Graaf, P M; van Goudoever, J; Wesseling, K H

    1997-09-01

    Continuous records of arterial blood pressure can be obtained non-invasively with Finapres, even for periods of 24 hours. Increasingly, storage of such records is done digitally, requiring large disc capacities. It is therefore necessary to find methods to store blood pressure waveforms in compressed form. The method of selection of significant points known from ECG data compression is adapted. Points are selected as significant wherever the first derivative of the pressure wave changes sign. As a second stage recursive partitioning is used to select additional points such that the difference between the selected points, linearly interpolated, and the original curve remains below a maximum. This method is tested on finger arterial pressure waveform epochs of 60 s duration taken from 32 patients with a wide range of blood pressures and heart rates. An average compression factor of 4.6 (SD 1.0) is obtained when accepting a maximum difference of 3 mmHg. The root mean squared error is 1 mmHg averaged over the group of patient waveforms. Clinically relevant parameters such as systolic, diastolic and mean pressure are reproduced with an offset error of less than 0.5 (0.3) mmHg and scatter less than 0.6 (0.1) mmHg. It is concluded that a substantial compression factor can be achieved with a simple and computationally fast algorithm and little deterioration in waveform quality and pressure level accuracy.
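    The two-stage selection is described precisely enough to sketch: keep the samples where the first derivative changes sign, then recursively add points until linear interpolation stays within the tolerance (e.g. 3 mmHg). A simplified reconstruction of mine, not the authors' code, on an invented waveform:

    ```python
    # Significant-point compression: extrema first, then recursive refinement
    # until the piecewise-linear reconstruction error is below `tol`.
    import numpy as np

    def significant_points(y, tol=3.0):
        keep = {0, len(y) - 1}
        d = np.diff(y)
        keep.update(np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1)

        def refine(i, j):
            if j - i < 2:
                return
            xs = np.arange(i, j + 1)
            interp = np.interp(xs, [i, j], [y[i], y[j]])
            k = int(np.argmax(np.abs(y[i:j + 1] - interp)))
            if abs(y[i + k] - interp[k]) > tol:
                keep.add(i + k)
                refine(i, i + k)
                refine(i + k, j)

        for a, b in zip(sorted(keep)[:-1], sorted(keep)[1:]):
            refine(a, b)
        return np.array(sorted(keep))

    t = np.linspace(0.0, 1.0, 200)
    wave = 80.0 + 40.0 * np.maximum(np.sin(2 * np.pi * t), 0.0) ** 2  # toy pulse
    idx = significant_points(wave, tol=3.0)
    print(len(idx), "of", len(wave), "samples kept")
    ```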

  15. Problems in separating species with similar habits and vocalizations

    USGS Publications Warehouse

    Robbins, C.S.; Stallcup, R.W.; Ralph, C. John; Scott, J. Michael

    1981-01-01

    The possibilities for species misidentification based on vocalization or habitat association are high. However, the magnitude of the errors actually perpetrated is generally within an acceptable range in most types of bird survey work. Examples of problems discussed are: congeners that are similar in appearance or in song (such as Chimney and Vaux's Swifts, Chaetura pelagica, C. vauxi; Hammond's, Dusky and Gray Flycatchers, Empidonax hammondii, E. oberholseri, E. wrightii; Willow and Alder Flycatchers, E. traillii, E. alnorum; Common and Fish Crows, Corvus brachyrhynchos, C. ossifragus); birds that are misidentified because they are not expected by the observer (House Finches, Carpodacus mexicanus, invading new areas of eastern U.S.); birds that imitate other species (especially Starling, Sturnus vulgaris, and Mockingbird, Mimus polyglottos); birds in mixed flocks; birds with geographic differences in vocalizations (Solitary Vireo, Vireo solitarius); woodpeckers that are only heard drumming; and nests or eggs that are misidentified. Equally serious problems are the errors resulting from undetected species and from careless recording or failure to check manuscripts against original data. The quality of published count work can be improved considerably by (1) recognizing the problems that exist, (2) standardizing techniques for dealing with situations where not all birds can be identified, and (3) routinely applying all appropriate safeguards such as verification by mist netting and measuring, photography, tape recording or playback, additional observations, and careful verification of all entries in the final manuscript.

  16. Quality Control of Meteorological Observations

    NASA Technical Reports Server (NTRS)

    Collins, William; Dee, Dick; Rukhovets, Leonid

    1999-01-01

    The problem of meteorological observation quality control (QC) was first formulated by L.S. Gandin at the Main Geophysical Observatory in the 1970s. Later, in 1988, Gandin began adapting his ideas on complex quality control (CQC) to the operational environment at the National Centers for Environmental Prediction. The CQC was first applied by Gandin and his colleagues to the detection and correction of errors in rawinsonde heights and temperatures using a complex of hydrostatic residuals. Later, a full complex of residuals, vertical and horizontal optimal interpolations, and baseline checks were added for the checking and correction of a wide range of meteorological variables. Several other of Gandin's ideas were applied and substantially developed at other meteorological centers, and a new statistical QC was recently implemented in the Goddard Data Assimilation System. The central component of any quality control is the buddy check, a test of individual suspect observations against available nearby non-suspect observations. A novel feature of this test is that the error variances used for the QC decision are re-estimated on-line. As a result, the allowed tolerances for suspect observations can depend on local atmospheric conditions, and the system is better able to accept extreme values observed in deep cyclones, jet streams and so on. The basic statements of this adaptive buddy check are described, and some results of the on-line QC, including moisture QC, are presented.
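    A toy version of the adaptive buddy check conveys the idea: the tolerance is scaled by an error spread estimated on-line from the neighbouring data rather than taken from a fixed table (the threshold factor and values here are assumptions):

    ```python
    # Suspect observation accepted if within k standard deviations of its
    # neighbours, with the spread re-estimated from the local data themselves.
    import numpy as np

    def buddy_check(suspect, neighbors, k=3.0):
        mu = np.mean(neighbors)
        sigma = np.std(neighbors, ddof=1)   # on-line estimate of local error spread
        return abs(suspect - mu) <= k * sigma

    print(buddy_check(21.4, [20.8, 21.1, 20.5, 21.0]))   # True: consistent
    print(buddy_check(27.9, [20.8, 21.1, 20.5, 21.0]))   # False: flagged
    ```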

  17. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. A reference solution, against which the implicit second-order Crank-Nicolson scheme was compared, was computed with an explicit second-order Runge-Kutta scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey only temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted, and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time-accurate capture of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. The focus is on the sensitivity of solution properties to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to the highly complex structure of the basins of attraction of the iterative method.
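    The stability contrast driving this step-size gain can be shown on the linear test equation y' = λy with λ < 0, where the per-step amplification factors of both schemes are known in closed form; a small sketch (values chosen for illustration, not taken from the paper):

    ```python
    # Explicit RK2 vs implicit Crank-Nicolson on the stiff test equation
    # y' = lam * y with lam < 0: per-step amplification factors applied directly.
    lam, dt, steps = -50.0, 0.1, 20
    z = lam * dt                          # z = -5: outside RK2 stability region
    g_rk2 = 1 + z + z**2 / 2              # RK2 amplification, needs |g| <= 1
    g_cn = (1 + z / 2) / (1 - z / 2)      # Crank-Nicolson amplification, A-stable
    print("RK2:", g_rk2 ** steps)         # diverges at this step size
    print("CN: ", g_cn ** steps)          # decays in magnitude, as it should
    ```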

  18. Temporal consistent depth map upscaling for 3DTV

    NASA Astrophysics Data System (ADS)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibilities of a convincing 3D experience at home, such as three-dimensional television (3DTV), have generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight camera (ToF), can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.

  19. Human machine interface by using stereo-based depth extraction

    NASA Astrophysics Data System (ADS)

    Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan

    2014-03-01


  20. Large Aperture "Photon Bucket" Optical Receiver Performance in High Background Environments

    NASA Technical Reports Server (NTRS)

    Vilnrotter, Victor A.; Hoppe, D.

    2011-01-01

    The potential development of large-aperture ground-based "photon bucket" optical receivers for deep space communications, with acceptable performance even when pointing close to the sun, is receiving considerable attention. Sunlight scattered by the atmosphere becomes significant at micron wavelengths when pointing within a few degrees of the sun, even with the narrowest-bandwidth optical filters. In addition, high-quality optical apertures in the 10-30 meter range are costly and difficult to build with surfaces accurate enough to ensure narrow fields-of-view (FOV). One approach currently under consideration is to polish the aluminum reflector panels of large 34-meter microwave antennas to high reflectance and accept the relatively large FOV generated by state-of-the-art polished aluminum panels with rms surface accuracies on the order of a few microns, corresponding to a several-hundred-microradian FOV and hence centimeter-diameter focused spots at the Cassegrain focus of 34-meter antennas. Assuming pulse-position modulation (PPM) and Poisson-distributed photon-counting detection, a "polished panel" photon-bucket receiver with a large FOV will collect hundreds of background photons per PPM slot, along with comparable signal photons due to its large aperture. It is demonstrated that communications performance, in terms of PPM symbol-error probability in high-background, high-signal environments, depends more strongly on signal than on background photons, implying that large increases in background energy can be compensated by a disproportionately small increase in signal energy. This surprising result suggests that large optical apertures with relatively poor surface quality may nevertheless provide acceptable performance for deep-space optical communications, potentially enabling the construction of cost-effective hybrid RF/optical receivers in the future.
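    The background-versus-signal claim can be probed with a small Monte Carlo sketch of Poisson photon counting for M-ary PPM (parameter values are illustrative, not the paper's): the signal slot collects Ks + Kb mean photons, the other M - 1 slots Kb each, and the receiver picks the largest count with random tie-breaking.

    ```python
    # PPM symbol-error probability with Poisson counts, estimated by simulation.
    import numpy as np

    rng = np.random.default_rng(1)

    def ppm_ser(M, Ks, Kb, trials=100_000):
        counts = rng.poisson(Kb, (trials, M))        # background in every slot
        counts[:, 0] += rng.poisson(Ks, trials)      # signal photons in slot 0
        best = counts.max(axis=1)
        ties = (counts == best[:, None]).sum(axis=1)
        # error if another slot wins outright, or a tie is broken against slot 0
        err = np.where(counts[:, 0] < best, 1.0, (ties - 1) / ties)
        return err.mean()

    for Kb in (0.1, 1.0, 10.0):   # a 100-fold background increase...
        print(f"Kb={Kb:5.1f}  Ks=10: {ppm_ser(16, 10.0, Kb):.4f}"
              f"  Ks=14: {ppm_ser(16, 14.0, Kb):.4f}")  # ...vs a modest signal boost
    ```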

  1. Enhanced orbit determination filter sensitivity analysis: Error budget development

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Burkhart, P. D.

    1994-01-01

    An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.

  2. Electronic acquisition of OSCE performance using tablets.

    PubMed

    Hochlehnert, Achim; Schultz, Jobst-Hendrik; Möltner, Andreas; Tımbıl, Sevgi; Brass, Konstantin; Jünger, Jana

    2015-01-01

    Objective Structured Clinical Examinations (OSCEs) often involve a considerable amount of resources in terms of materials and organization since the scores are often recorded on paper. Computer-assisted administration is an alternative with which the need for material resources can be reduced. In particular, the use of tablets seems sensible because these are easy to transport and flexible to use. User acceptance concerning the use of tablets during OSCEs has not yet been extensively investigated. The aim of this study was to evaluate tablet-based OSCEs from the perspective of the user (examiner) and the student examinee. For two OSCEs in Internal Medicine at the University of Heidelberg, user acceptance was analyzed regarding tablet-based administration (satisfaction with functionality) and the subjective amount of effort as perceived by the examiners. Standardized questionnaires and semi-standardized interviews were conducted (complete survey of all participating examiners). In addition, for one OSCE, the subjective evaluation of this mode of assessment was gathered from a random sample of participating students in semi-standardized interviews. Overall, the examiners were very satisfied with using tablets during the assessment. The subjective amount of effort to use the tablet was found on average to be "hardly difficult". The examiners identified the advantages of this mode of administration as being in particular the ease of use and low rate of error. During the interviews of the examinees, acceptance for the use of tablets during the assessment was also detected. Overall, it was found that the use of tablets during OSCEs was well accepted by both examiners and examinees. We expect that this mode of assessment also offers advantages regarding assessment documentation, use of resources, and rate of error in comparison with paper-based assessments; all of these aspects should be followed up on in further studies.

  3. Ten Ways to Cope with Foreign Language Anxiety.

    ERIC Educational Resources Information Center

    Donley, Philip

    1997-01-01

    Proposes strategies for reducing foreign language anxiety in the classroom: (1) discuss feelings with the instructor and other students; (2) relax, exercise, and eat well; (3) prepare for and attend every class; (4) keep the foreign language class in perspective; (5) seek opportunities to practice the language and accept that errors are a part of the learning…

  4. Adaptive Methods for Compressible Flow

    DTIC Science & Technology

    1994-03-01

    The purpose of this work is to demonstrate the advantages of integrating the CAD/CAM representation with the labor-intensive task of generating acceptable surface triangulations. [The remainder of this abstract is garbled OCR in the source; only fragments concerning the boundary error of a MUSCL scheme and flow variable definitions are discernible.]

  5. Experimental comparison of icing cloud instruments

    NASA Technical Reports Server (NTRS)

    Olsen, W.; Takeuchi, D. M.; Adams, K.

    1983-01-01

    Icing cloud instruments were tested in the spray cloud of the Icing Research Tunnel (IRT) in order to determine their relative accuracy and their limitations over a broad range of conditions. It was found that the average of the readings from each of the liquid water content (LWC) instruments tested agreed closely with the others and with the IRT calibration, but all had a data scatter (±one standard deviation) of about ±20 percent. The effect of this ±20 percent uncertainty is probably acceptable in aero-penalty and deicer experiments. Existing laser spectrometers proved to be too inaccurate for LWC measurements, and the error due to water runoff was the same for all ice accretion LWC instruments. Any given laser spectrometer proved to be highly repeatable in its indications of volume median drop size (DVM), LWC and drop size distribution. However, there was a significant disagreement between different spectrometers of the same model, even after careful standard calibration and data analysis. The scatter about the mean of the DVM data from five Axial Scattering Spectrometer Probes was ±20 percent (±one standard deviation), and the average was 20 percent higher than the old IRT calibration. The ±20 percent uncertainty in DVM can cause an unacceptable variation in the drag coefficient of an airfoil with ice; however, the variation in a deicer performance test may be acceptable.

  6. Dimensions and predictors of disability—A baseline study of patients entering somatic rehabilitation in secondary care

    PubMed Central

    2018-01-01

    Purpose The purpose of this study was to investigate disability among patients who were accepted for admission to a Norwegian rehabilitation center and to identify predictors of disability. Materials and methods In a cross-sectional study including 967 adult participants, the 36-item version of the World Health Organization Disability Assessment Schedule 2.0 was used for assessing overall and domain-specific disability as outcome variables. Patients completed the Hospital Anxiety and Depression Scale (HADS), the EuroQoL EQ-5D-5L and questions about multi-morbidity, smoking and perceived physical fitness. Additionally, the main health condition and the sociodemographic and environmental variables obtained from referrals and public registers were used as predictor variables. Descriptive statistics and linear regression analyses were performed. Results The mean (standard error) overall disability score was 30.0 (0.5); domain scores ranged from 11.9 to 44.7. Neurological diseases, multi-morbidity, low education, impaired physical fitness, pain, and a higher HADS depressive score increased the overall disability score. A low HADS depressive score predicted a lower disability score in all domains. Conclusions A moderate overall disability score was found among patients accepted for admission to a rehabilitation center, but "life activities" and "participation in society" had the highest domain scores. This should be taken into account when rehabilitation strategies are developed. PMID:29499064

  7. Analysis of ionospheric refraction error corrections for GRARR systems

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.

    1971-01-01

    A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.

  8. Predictability of CFSv2 in the tropical Indo-Pacific region, at daily and subseasonal time scales

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, V.

    2018-06-01

    The predictability of a coupled climate model is evaluated at daily and intraseasonal time scales in the tropical Indo-Pacific region during boreal summer and winter. This study assessed the daily retrospective forecasts of the Climate Forecast System version 2 from the National Centers for Environmental Prediction for the period 1982-2010. The growth of errors in the forecasts of daily precipitation, the monsoon intraseasonal oscillation (MISO) and the Madden-Julian oscillation (MJO) is studied. The seasonal cycle of the daily climatology of precipitation is reasonably well predicted, except for an underestimation during the peak of summer. The anomalies follow the typical pattern of error growth in nonlinear systems and show no difference between summer and winter. The initial errors in all cases are found to be in the nonlinear phase of the error growth. The doubling time of small errors is estimated by applying the Lorenz error formula. For summer and winter, the doubling time of the forecast errors is in the range of 4-7 and 5-14 days, while the doubling time of the predictability errors is 6-8 and 8-14 days, respectively. The doubling time in MISO during the summer and MJO during the winter is in the range of 12-14 days, indicating higher predictability and providing optimism for long-range prediction. There is no significant difference in the growth of forecast errors originating from different phases of MISO and MJO, although the prediction of the active phase seems to be slightly better.
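    The Lorenz error-growth law invoked here is commonly written in its quadratic form (the paper may use an equivalent variant):

    \[
    \frac{dE}{dt} = aE\left(1 - \frac{E}{E_{\infty}}\right),
    \]

    so small errors grow approximately as $E(t) \approx E(0)\,e^{at}$, and the doubling time follows as $t_{d} = (\ln 2)/a$, which is how fitted growth rates translate into ranges such as 4-7 days.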

  9. Why You Should Believe Cold Fusion is Real

    NASA Astrophysics Data System (ADS)

    Storms, Edmund K.

    2005-03-01

    Nuclear reactions are now claimed to be initiated in certain solid materials at an energy too low to overcome the Coulomb barrier. These reactions include fusion, accelerated radioactive decay, and transmutation involving heavy elements. The evidence is based on hundreds of measurements of anomalous energy, using a variety of calorimeters, at levels far in excess of error; measurement of nuclear products using many normally accepted techniques; observations of many patterns of behavior common to all studies; measurement of anomalous energetic emissions using accepted techniques; and an understanding of most variables that have hindered reproducibility in the past. This evidence can be found at www.LENR-CANR.org. Except for the lack of an accepted theory, the claims have met all requirements normally imposed before a new idea is accepted by conventional science, yet rejection continues. How long can the US afford to reject a clean and potentially cheap source of energy, especially when other nations are attempting to develop this energy and the need for such an energy source is so great?

  10. Bolus Guide: A Novel Insulin Bolus Dosing Decision Support Tool Based on Selection of Carbohydrate Ranges

    PubMed Central

    Shapira, Gali; Yodfat, Ofer; HaCohen, Arava; Feigin, Paul; Rubin, Richard

    2010-01-01

    Background Optimal continuous subcutaneous insulin infusion (CSII) therapy emphasizes the relationship between insulin dose and carbohydrate consumption. One widely used tool (the bolus calculator) requires the user to enter discrete carbohydrate values; however, many patients might not estimate carbohydrates accurately. This study assessed carbohydrate estimation accuracy in type 1 diabetes CSII users and compared simulated blood glucose (BG) outcomes using the bolus calculator and the "bolus guide," an alternative system based on ranges of carbohydrate load. Methods Patients (n = 60) estimated the carbohydrate load of a representative sample of meals of known carbohydrate value. The estimated error distribution [coefficient of variation (CV)] was the basis for a computer simulation (n = 1.6 million observations) of insulin recommendations for the bolus guide and the bolus calculator, translated into outcome blood glucose (OBG) ranges (≤60, 61-200, >200 mg/dl). Patients (n = 30) completed questionnaires assessing satisfaction with the bolus guide. Results The CV of typical meals ranged from 27.9% to 44.5%. The percentages of simulated OBG for the calculator and the bolus guide in the ≤60 mg/dl range were 20.8% and 17.2%, respectively, and 13.8% and 15.8%, respectively, in the >200 mg/dl range. The mean and median scores of all bolus guide satisfaction items, including ease of learning and use, were 4.17 and 4.2, respectively (out of 5.0). Conclusion The bolus guide recommendation based on carbohydrate range selection is substantially similar to the calculator based on carbohydrate point estimation and appears to be highly accepted by type 1 diabetes insulin pump users. PMID:20663453

  11. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach is presented for improving orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping, and a less reliable grid-search method that handles the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency at the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and by rigorous geometric modelling of the orbital error signal, but it does not account for interfering large-scale deformation effects. However, a separation may be feasible in combined processing with persistent scatterer approaches or by temporal filtering of the estimates.

  12. LiDAR error estimation with WAsP engineering

    NASA Astrophysics Data System (ADS)

    Bingöl, F.; Mann, J.; Foussekis, D.

    2008-05-01

    LiDAR measurements of the vertical wind profile at any height between 10 and 150 m are based on the assumption that the measured wind is homogeneous. In reality, many factors affect the wind at each measurement point, with the terrain playing the main role. To model LiDAR measurements and predict the possible error in different wind directions for a given terrain, we have analyzed two experimental data sets from Greece. At both sites, LiDAR and met-mast data were collected, and the same conditions were simulated with the Risø/DTU software WAsP Engineering 2.0. Finally, the measurement data were compared with the model results. The model results are acceptable and very close for one site, while the more complex site returns higher errors at greater heights and in some wind directions.

  13. Inverse sequential detection of parameter changes in developing time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1992-01-01

    Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second is that of erroneously accepting the estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945), in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the combined probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
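    For context, the classical Wald SPRT that the paper inverts prescribes the two error probabilities and compares the cumulative log-likelihood ratio against fixed boundaries. Below is a minimal sketch for a shift in a Gaussian mean; the parameters are illustrative, and the paper's 'inverted' variant instead computes the error probabilities from the data.

```python
import numpy as np

def sprt_llr(x, mu0, mu1, sigma):
    """Cumulative log-likelihood ratio of H1: N(mu1, sigma^2) vs H0: N(mu0, sigma^2)."""
    return np.cumsum((mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma**2)

alpha, beta = 0.05, 0.05                    # prescribed error probabilities (classical SPRT)
A = np.log((1 - beta) / alpha)              # upper (accept H1) boundary
B = np.log(beta / (1 - alpha))              # lower (accept H0) boundary

rng = np.random.default_rng(1)
x = rng.normal(0.8, 1.0, size=50)           # data drawn under a shifted mean
llr = sprt_llr(x, mu0=0.0, mu1=1.0, sigma=1.0)

hit = (llr >= A) | (llr <= B)
if hit.any():
    n = int(np.argmax(hit))                 # first boundary crossing
    print(f"decision at sample {n + 1}: {'change (H1)' if llr[n] >= A else 'no change (H0)'}")
else:
    print("no decision within the sample; continue observing")
```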

  14. Biomass Thermogravimetric Analysis: Uncertainty Determination Methodology and Sampling Maps Generation

    PubMed Central

    Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín

    2010-01-01

    The objective of this study was to develop a methodology for determining the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a proximate analysis, and a correlation between the mean values and maximum sampling errors of the two methods was not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass, with precise confidence intervals, is of particular interest in energetic biomass applications. PMID:20717532

  15. Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support

    PubMed Central

    Seidling, Hanna M; Phansalkar, Shobha; Seger, Diane L; Paterno, Marilyn D; Shaykevich, Shimon; Haefeli, Walter E

    2011-01-01

    Background Clinical decision support systems can prevent knowledge-based prescription errors and improve patient outcomes. The clinical effectiveness of these systems, however, is substantially limited by poor user acceptance of presented warnings. To enhance alert acceptance it may be useful to quantify the impact of potential modulators of acceptance. Methods We built a logistic regression model to predict alert acceptance of drug–drug interaction (DDI) alerts in three different settings. Ten variables from the clinical and human factors literature were evaluated as potential modulators of provider alert acceptance. ORs were calculated for the impact of knowledge quality, alert display, textual information, prioritization, setting, patient age, dose-dependent toxicity, alert frequency, alert level, and required acknowledgment on acceptance of the DDI alert. Results 50 788 DDI alerts were analyzed. Providers accepted only 1.4% of non-interruptive alerts. For interruptive alerts, user acceptance positively correlated with frequency of the alert (OR 1.30, 95% CI 1.23 to 1.38), quality of display (4.75, 3.87 to 5.84), and alert level (1.74, 1.63 to 1.86). Alert acceptance was higher in inpatients (2.63, 2.32 to 2.97) and for drugs with dose-dependent toxicity (1.13, 1.07 to 1.21). The textual information influenced the mode of reaction and providers were more likely to modify the prescription if the message contained detailed advice on how to manage the DDI. Conclusion We evaluated potential modulators of alert acceptance by assessing content and human factors issues, and quantified the impact of a number of specific factors which influence alert acceptance. This information may help improve clinical decision support systems design. PMID:21571746
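    The odds ratios above are exponentiated coefficients of an ordinary logistic regression. The sketch below simulates three of the reported modulators and approximately recovers their ORs; the data are synthetic, and scikit-learn is an assumed tool choice, not the paper's software.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5000
# Synthetic stand-ins for three of the paper's modulators
X = np.column_stack([
    rng.normal(size=n),          # alert frequency (standardised)
    rng.integers(0, 2, size=n),  # high-quality display (0/1)
    rng.integers(1, 4, size=n),  # alert level (1-3)
])
# Generative coefficients chosen as log of the reported ORs (1.30, 4.75, 1.74)
logit = -2.0 + 0.26 * X[:, 0] + 1.56 * X[:, 1] + 0.55 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # simulated alert acceptance

model = LogisticRegression().fit(X, y)
print("odds ratios:", np.exp(model.coef_).round(2))  # ~ [1.30, 4.75, 1.74]
```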

  16. Assumption-versus data-based approaches to summarizing species' ranges.

    PubMed

    Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro

    2018-06-01

    For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.

  17. Dynamic Calibration and Verification Device of Measurement System for Dynamic Characteristic Coefficients of Sliding Bearing

    PubMed Central

    Chen, Runlin; Wei, Yangyang; Shi, Zhaoyang; Yuan, Xiaoyang

    2016-01-01

    The identification accuracy of dynamic characteristic coefficients is difficult to guarantee because of errors in the measurement system itself. A novel dynamic calibration method for such measurement systems is proposed in this paper to eliminate these errors. Compared with the suspended-mass calibration method, this novel method differs in that the verification device is a spring-mass system, which can simulate the dynamic characteristics of a sliding bearing. The verification device was built, and the calibration experiment was implemented over a wide frequency range, with the bearing stiffness simulated by disc springs. The experimental results show that the amplitude errors of the measurement system are small in the frequency range of 10 Hz–100 Hz, while the phase errors increase with frequency. A simulated identification experiment of dynamic characteristic coefficients in the 10 Hz–30 Hz range preliminarily verifies that the calibration data can support dynamic characteristics testing of sliding bearings in that range. Bearing experiments over greater frequency ranges will require higher manufacturing and installation precision of the calibration device, and the calibration procedures should be improved. PMID:27483283

  18. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to a target. The modulation frequency is swept, and the frequency values at which the transmitted and received signals are in phase are measured; the distance can then be calculated from these values. This method achieves much higher theoretical accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation arises in the measurement optical path when optical elements are imperfectly manufactured or installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built using the Jones matrix formalism, and the additional phase retardation of the λ/4 wave plate and the polarizing beam splitter (PBS), together with its impact on measurement performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limit on ranging accuracy. According to the system design targets, element tolerances and an error-correction method are proposed; a ranging system is built and a ranging experiment performed. Experimental results show that, with the proposed tolerances, the system satisfies the accuracy requirement. The present work provides guidance for further research on system design and error allocation.
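    The distance recovery reduces to a simple relation: the round-trip phase 2πf(2d/c) is a multiple of 2π at each in-phase frequency, so consecutive in-phase frequencies are spaced Δf = c/2d. A minimal sketch under that idealisation, ignoring the additional retardation analysed in the paper (numbers illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def distance_from_inphase(freqs_hz):
    """Distance from consecutive in-phase modulation frequencies.
    In-phase condition: 2*pi*f*(2d/c) = 2*pi*k, i.e. f_k = k*c/(2d), so the
    spacing of consecutive in-phase frequencies is df = c/(2d)."""
    df = (freqs_hz[-1] - freqs_hz[0]) / (len(freqs_hz) - 1)  # mean spacing
    return C / (2.0 * df)

# Illustrative sweep: a ~15 m path makes in-phase frequencies ~10 MHz apart
print(distance_from_inphase([100e6, 110e6, 120e6, 130e6]))  # ~14.99 m
```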

  19. Reducing Error Rates for Iris Image using higher Contrast in Normalization process

    NASA Astrophysics Data System (ADS)

    Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa

    2017-08-01

    Iris recognition is among the most secure and fastest means of identification and authentication. However, iris recognition systems suffer from blurring, low contrast, and poor illumination in low-quality images, which compromises the accuracy of the system. The acceptance or rejection of a verified user depends largely on the quality of the image; in many cases, an iris recognition system working with low image contrast can falsely accept or reject a user. This paper therefore adopts histogram equalization to address the problems of False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. Histogram equalization enhances image quality and neutralizes the low contrast of the image at the normalization stage. Experimental results show that histogram equalization reduces FRR and FAR compared with existing techniques.
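    The enhancement step named above is standard histogram equalization. A minimal sketch using OpenCV follows; the paper does not specify an implementation, so the library choice and file names are illustrative.

```python
import cv2

# Load the normalised iris strip as 8-bit greyscale; the file name is illustrative.
iris = cv2.imread("normalized_iris.png", cv2.IMREAD_GRAYSCALE)
assert iris is not None, "image not found"

# Histogram equalization redistributes intensities over the full 0-255 range,
# boosting contrast in low-contrast captures before feature encoding.
equalized = cv2.equalizeHist(iris)

cv2.imwrite("normalized_iris_eq.png", equalized)
```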

  20. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    NASA Astrophysics Data System (ADS)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly correlated stochastic noise are more insidious, and less attention is drawn to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors, thanks to averaging effects. A new source of systematic error in the range and relative stopping powers (RSP) has been highlighted and shown not to be negligible compared with the 3.5% uncertainty reference value used for safety-margin design. Specifically, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors in RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and of the range in the continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
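    The mechanism is a Jensen-type bias: averaging a piecewise-linear HU-to-RSP curve over zero-mean HU noise shifts the mean RSP wherever the curve has an angular point. A minimal numerical sketch follows; the calibration nodes and noise level are illustrative, not the paper's stoichiometric curve.

```python
import numpy as np

# Illustrative piecewise-linear HU -> RSP curve with an angular point at HU = 0
hu_nodes = np.array([-1000.0, 0.0, 1500.0])
rsp_nodes = np.array([0.001, 1.0, 1.85])

def rsp(hu):
    return np.interp(hu, hu_nodes, rsp_nodes)

rng = np.random.default_rng(42)
sigma = 30.0                           # CT noise SD in HU (illustrative level)
hu_true = 0.0                          # material lying exactly on the kink

noisy = rsp(hu_true + rng.normal(0.0, sigma, size=1_000_000))
bias = noisy.mean() - rsp(hu_true)     # nonzero: the kink rectifies zero-mean noise
print(f"systematic RSP error at the angular point: {bias * 100:+.2f}%")  # ~ -0.5%
```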

  1. Cost effectiveness of a pharmacist-led information technology intervention for reducing rates of clinically important errors in medicines management in general practices (PINCER).

    PubMed

    Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J

    2014-06-01

    We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with a cost per error avoided of £79 (US$131). We aimed to estimate the cost effectiveness of the PINCER intervention by combining its effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from the literature and costing tariffs. A composite probabilistic model combined the patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite these extremely small differences in costs and outcomes, PINCER dominated simple feedback, with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches a 59% probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of the data available to inform the effect of avoiding errors.
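    The headline quantities are the incremental cost-effectiveness ratio, ICER = ΔC/ΔQALY, and the cost-effectiveness acceptability curve, which reports the share of probabilistic-sensitivity draws with positive net monetary benefit at a given willingness-to-pay. A minimal sketch with illustrative draws, not the trial's model:

```python
import numpy as np

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return delta_cost / delta_qaly

rng = np.random.default_rng(7)
# Illustrative probabilistic-sensitivity-analysis draws per practice
d_cost = rng.normal(-4.20, 3000.0, size=10_000)   # negative = cost saving, GBP
d_qaly = rng.normal(0.001, 0.05, size=10_000)     # incremental QALYs

print(f"mean ICER: GBP {icer(d_cost.mean(), d_qaly.mean()):,.0f} per QALY")

# Acceptability: share of draws with positive net monetary benefit at the ceiling
wtp = 20_000.0                                    # willingness-to-pay, GBP/QALY
nmb = wtp * d_qaly - d_cost
print(f"P(cost-effective at GBP {wtp:,.0f}/QALY) = {(nmb > 0).mean():.2f}")
```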

  2. Evaluation of SMART sensor displays for multidimensional precision control of Space Shuttle remote manipulator

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.; Brown, J. W.; Lewis, J. L.

    1982-01-01

    An enhanced proximity sensor and display system was developed at the Jet Propulsion Laboratory (JPL) and tested on the full-scale Space Shuttle Remote Manipulator at the Johnson Space Center (JSC) Manipulator Development Facility (MDF). The sensor system, integrated with a four-claw end effector, measures range errors up to 6 inches and pitch and yaw alignment errors within ±15 deg, and displays the error data on both graphic and numeric displays. The errors are referenced to the end effector control axes through appropriate data processing by a dedicated microcomputer acting on the sensor data in real time. Both display boxes contain a green lamp which indicates whether the combination of range, pitch and yaw errors will assure a successful grapple. More than 200 test runs were completed in early 1980 by three operators at JSC for grasping static and capturing slowly moving targets. The tests indicated that the use of graphic/numeric displays of proximity sensor information improves precision control of the grasp/capture range by more than a factor of two for both static and dynamic grapple conditions.

  3. Accuracy and Repeatability of Trajectory Rod Measurement Using Laser Scanners.

    PubMed

    Liscio, Eugene; Guryn, Helen; Stoewner, Daniella

    2017-12-22

    Three-dimensional (3D) technologies contribute greatly to bullet trajectory analysis and shooting reconstruction. Few papers address the errors associated with using laser scanning for bullet trajectory documentation. This study examined the accuracy and precision of laser scanning for documenting trajectory rods in drywall at angles between 25° and 90°. An inherent error range of 0.02°-2.10° was noted, while the overall error for laser scanning ranged between 0.04° and 1.98°. The inter- and intraobserver errors for trajectory rod placement and virtual trajectory marking showed that the range of variation for rod placement was between 0.1° and 1° in drywall and between 0.05° and 0.5° in plywood. Virtual trajectory marking accuracy tests showed that 75% of data values were below 0.91° and 0.61° for azimuth and vertical angles, respectively. In conclusion, many contributing factors affect bullet trajectory analysis, and the use of 3D technologies can aid in reducing the errors associated with documentation. © 2017 American Academy of Forensic Sciences.

  4. Acoustic sensor for real-time control for the inductive heating process

    DOEpatents

    Kelley, John Bruce; Lu, Wei-Yang; Zutavern, Fred J.

    2003-09-30

    Disclosed is a system and method for providing closed-loop control of the heating of a workpiece by an induction heating machine, including generating an acoustic wave in the workpiece with a pulsed laser; optically measuring displacements of the surface of the workpiece in response to the acoustic wave; calculating a sub-surface material property by analyzing the measured surface displacements; creating an error signal by comparing an attribute of the calculated sub-surface material properties with a desired attribute; and reducing the error signal below an acceptable limit by adjusting, in real-time, as often as necessary, the operation of the inductive heating machine.

  5. Measurement of diffusion coefficients from solution rates of bubbles

    NASA Technical Reports Server (NTRS)

    Krieger, I. M.

    1979-01-01

    The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.

  6. Studies of atmospheric refraction effects on laser data

    NASA Technical Reports Server (NTRS)

    Dunn, P. J.; Pearce, W. A.; Johnson, T. S.

    1982-01-01

    The refraction effect was considered from three perspectives. An analysis of the axioms on which the accepted correction algorithms are based was the first priority. The integrity of the meteorological measurements on which the correction model is based was also considered, and a large quantity of laser observations was processed in an effort to detect any serious anomalies in them. The effect of refraction errors on geodetic parameters estimated from laser data using the most recent analysis procedures was the focus of the third element of the study. The results concentrate on refraction errors, which were found to be critical in the eventual use of the data for measurements of crustal dynamics.

  7. Synopsis of timing measurement techniques used in telecommunications

    NASA Technical Reports Server (NTRS)

    Zampetti, George

    1993-01-01

    Historically, Maximum Time Interval Error (MTIE) and Maximum Relative Time Interval Error (MRTIE) have been the main measurement techniques used to characterize timing performance in telecommunications networks. Recently, a new measurement technique, Time Variance (TVAR), has gained acceptance in the North American (ANSI) standards body. TVAR was developed in conjunction with NIST to address certain inadequacies in the MTIE approach. The advantages and disadvantages of each of these approaches are described. Real measurement examples are presented to illustrate the critical issues in actual telecommunication applications. Finally, a new MTIE-like measurement (ZTIE) is proposed that complements TVAR. Together, TVAR and ZTIE provide a very good characterization of network timing.
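    MTIE has a direct computational definition: the largest peak-to-peak excursion of the time-error sequence within any observation window of a given length, maximised over window positions. A minimal sketch follows; the data are illustrative, and production implementations use O(n) sliding-window extrema rather than this quadratic loop.

```python
import numpy as np

def mtie(te, window):
    """Maximum Time Interval Error: the largest peak-to-peak time error
    observed in any sliding window of `window` samples."""
    spans = [te[i:i + window].max() - te[i:i + window].min()
             for i in range(len(te) - window + 1)]
    return max(spans)

rng = np.random.default_rng(3)
te = np.cumsum(rng.normal(0.0, 1.0, size=2_000))   # random-walk time error, ns
for w in (10, 100, 1000):
    print(f"MTIE over {w:5d}-sample windows: {mtie(te, w):7.1f} ns")
```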

  8. Good people who try their best can have problems: recognition of human factors and how to minimise error.

    PubMed

    Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David

    2016-01-01

    Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation, where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes that result from individual human factors and from surgical teams should be better recognised and emphasised. Attitudes towards, and acceptance of, preoperative briefing have improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, highlight how they can lead to error, and suggest how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"? Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine

    PubMed Central

    Liu, Zhiyuan; Wang, Changhui

    2015-01-01

    In this paper, a new method is developed for compensating mass air flow (MAF) sensor error in a diesel engine and for online updating of the corresponding error map (or lookup table), which drifts with installation and aging. Since the MAF sensor error depends on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. The 2D map representing the MAF sensor error is described by a piecewise bilinear interpolation model, which can be written as a dot product between a regression vector and a parameter vector using a membership function. Combining the 2D map regression model with the diesel engine air-path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating-point-dependent error of the MAF sensor can be approximated acceptably by the 2D map obtained from the proposed method. PMID:26512675
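    The enabling representation is that bilinear interpolation of a 2D map can be written as a dot product φ(u)ᵀθ between a sparse membership (regression) vector and the vector of grid-node parameters, which is what allows the adaptive observer to update the map online. A minimal sketch of that factorisation follows; the grid and the operating point are illustrative.

```python
import numpy as np

def membership(q, n, q_grid, n_grid):
    """Sparse regression vector phi such that bilinear interpolation of a
    2D map with node values theta equals phi @ theta."""
    i = int(np.clip(np.searchsorted(q_grid, q) - 1, 0, len(q_grid) - 2))
    j = int(np.clip(np.searchsorted(n_grid, n) - 1, 0, len(n_grid) - 2))
    a = (q - q_grid[i]) / (q_grid[i + 1] - q_grid[i])   # cell-local coordinates
    b = (n - n_grid[j]) / (n_grid[j + 1] - n_grid[j])
    phi = np.zeros(len(q_grid) * len(n_grid))
    for di, dj, w in [(0, 0, (1 - a) * (1 - b)), (1, 0, a * (1 - b)),
                      (0, 1, (1 - a) * b), (1, 1, a * b)]:
        phi[(i + di) * len(n_grid) + (j + dj)] = w      # four nonzero weights
    return phi

# Illustrative operating-point grid: fuel quantity (mg/stroke) x engine speed (rpm)
q_grid = np.array([0.0, 20.0, 40.0, 60.0])
n_grid = np.array([800.0, 1600.0, 2400.0, 3200.0])
theta = np.zeros(q_grid.size * n_grid.size)             # MAF error-map parameters

phi = membership(30.0, 2000.0, q_grid, n_grid)
print("interpolated MAF error:", phi @ theta)           # observer updates theta online
```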

  10. Medication administration errors from a nursing viewpoint: a formal consensus of definition and scenarios using a Delphi technique.

    PubMed

    Shawahna, Ramzi; Masri, Dina; Al-Gharabeh, Rawan; Deek, Rawan; Al-Thayba, Lama; Halaweh, Masa

    2016-02-01

    To develop and achieve formal consensus on a definition of medication administration errors and on scenarios that should or should not be considered medication administration errors in hospitalised patient settings. Medication administration errors occur frequently in hospitalised patient settings; currently, there is no formal consensus on a definition of medication administration errors or on the scenarios that should or should not be considered as such. This was a descriptive study using the Delphi technique. A panel of experts (n = 50) recruited from major hospitals, nursing schools and universities in Palestine took part in the study. Three Delphi rounds were conducted to achieve consensus on a proposed definition of medication administration errors and on a series of 61 scenarios representing potential medication administration error situations, formulated into a questionnaire. In the first Delphi round, key contact nurses' views on medication administration errors were explored. In the second Delphi round, consensus was achieved to accept the proposed definition of medication administration errors and to include 36 (59%) scenarios and exclude 1 (1·6%) as medication administration errors. In the third Delphi round, consensus was achieved to include a further 14 (23%) and exclude 2 (3·3%) as medication administration errors, while the remaining eight (13·1%) were considered equivocal. Of the 61 scenarios included in the Delphi process, experts decided to include 50 scenarios as medication administration errors, to exclude three scenarios, and to include or exclude eight scenarios depending on the individual clinical situation. Consensus on a definition and on scenarios representing medication administration errors can be achieved using formal consensus techniques. Researchers should be aware that using different definitions of medication administration errors, or including or excluding different medication administration error situations, could significantly affect the rate of medication administration errors reported in their studies. Consensual definitions and medication administration error situations can be used in future epidemiology studies investigating medication administration errors in hospitalised patient settings, which may permit and promote direct comparisons of different studies. © 2015 John Wiley & Sons Ltd.

  11. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
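    A toy illustration of the dictionary idea: key each entry by a canonical signature of an erroneous subgraph and store the edit that repairs it. This sketch uses exact lookup on edge sets; the paper's entries carry richer labels and are matched by subgraph search, and both patterns below are invented for illustration.

```python
# Toy graph-edit dictionary. An undirected roof topology graph is modelled as a
# set of edges between face ids; each entry maps the canonical signature of an
# erroneous pattern to the corrective edit.

def signature(edges):
    """Canonical, order-independent signature of an edge set."""
    return frozenset(frozenset(e) for e in edges)

EDIT_DICTIONARY = {
    # spurious edge between faces that only meet at a corner -> remove it
    signature([("A", "B")]): ("remove_edge", ("A", "B")),
    # two facets sharing a neighbour but missing their ridge -> insert it
    signature([("A", "C"), ("B", "C")]): ("add_edge", ("A", "B")),
}

def correct(graph_edges, erroneous_subgraph):
    """Apply the dictionary edit registered for one erroneous subgraph."""
    op, (u, v) = EDIT_DICTIONARY[signature(erroneous_subgraph)]
    edges = set(signature(graph_edges))
    if op == "remove_edge":
        edges.discard(frozenset((u, v)))
    else:
        edges.add(frozenset((u, v)))
    return edges

roof = [("A", "B"), ("A", "C"), ("B", "C")]
print(correct(roof, [("A", "B")]))   # edge A-B removed by the first entry
```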

  12. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull-base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all plans and by defining metrics to aid plan assessment. Additionally, an example of how to clinically use the resulting robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified, and the advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up errors in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in setting plan robustness aims in these volumes, resulting in the definition of site-specific robustness protocols. The use of robustness constraints allowed the identification of a specific patient who may have benefited from a more individualized treatment; a new beam arrangement proved preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. This process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. Such protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties; for such cases, different beam start conditions may improve robustness to set-up and range uncertainties.

  13. Moment expansion for ionospheric range error

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A.; Reich, R.; Parker, H.; Berbert, J.

    1972-01-01

    On a plane earth, the ionospheric or tropospheric range error depends only on the total refractivity content, or zeroth moment, of the refracting layer and on the elevation angle. On a spherical earth, however, the dependence is more complex, so for more accurate results it has been necessary to resort to complex ray-tracing calculations. A simple, high-accuracy alternative to the ray-tracing calculation is presented. By appropriate expansion of the angular dependence in the ray-tracing integral as a power series in height, an expression is obtained for the range error in terms of a simple function of the elevation angle E at the expansion height and of the mth moment of the refractivity distribution N about the expansion height. The rapidity of convergence depends heavily on the choice of expansion height. For expansion heights in the neighborhood of the centroid of the layer (300-490 km), the expansion to m = 2 (three terms) gives results accurate to about 0.4% at E = 10 deg. As an analytic tool, the expansion affords some insight into the influence of layer shape on range errors in special problems.
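    Schematically, the expansion has the form below, where h_0 is the expansion height, N(h) the refractivity profile, and a_m(E) the elevation-dependent coefficients derived in the paper (notation paraphrased from the abstract, not reproduced from the paper).

```latex
\Delta R(E) \;\approx\; \sum_{m=0}^{M} a_m(E)\,\mu_m ,
\qquad
\mu_m \;=\; \int N(h)\,(h - h_0)^m \,\mathrm{d}h .
```

    Truncation at M = 2 gives the three-term expansion quoted above.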

  14. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    PubMed

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.

  15. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors.

    PubMed

    Kwon, Heon-Ju; Kim, Kyoung Won; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-03-01

    Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Errors and percentage (%) errors in VP and VR were evaluated against the intraoperatively measured graft weight (W). Plane-dependent error in VP was defined as the absolute difference between VP and VR, and % plane-dependent error was defined as |VP − VR|/W × 100. Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g, respectively. Mean error and % error in VP were 73.3 mL and 10.7%; mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL, and mean % plane-dependent error was 4.7%. There was thus approximately 5% plane-dependent error in VP on CT volumetry, and plane-dependent error in VP exceeded 10% of W in approximately 10% of the LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.
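    As a rough check on the reported figures, dividing the mean plane-dependent error by the mean graft weight gives 32.4/696.9 × 100 ≈ 4.6%, consistent with the reported mean % plane-dependent error of 4.7% (the small gap reflects averaging the per-donor ratios rather than dividing the means).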

  16. Accuracy of Digital vs Conventional Implant Impression Approach: A Three-Dimensional Comparative In Vitro Analysis.

    PubMed

    Basaki, Kinga; Alkumru, Hasan; De Souza, Grace; Finer, Yoav

    To assess the three-dimensional (3D) accuracy and clinical acceptability of implant definitive casts fabricated using a digital impression approach and to compare the results with those of a conventional impression method in a partially edentulous condition. A mandibular reference model was fabricated with implants in the first premolar and molar positions to simulate a patient with bilateral posterior edentulism. Ten implant-level impressions per method were made using either an intraoral scanner with scanning abutments for the digital approach or an open-tray technique and polyvinylsiloxane material for the conventional approach. 3D analysis and comparison of implant locations on the resultant definitive casts were performed using a laser scanner and quality-control software. The inter-implant distances and inter-implant angulations for each implant pair were measured for the reference model and for each definitive cast (n = 20 per group); these measurements were compared to calculate the magnitude of error in 3D for each definitive cast. The influence of implant angulation on definitive cast accuracy was evaluated for both the digital and conventional approaches. Statistical analysis was performed using the t test (α = .05) for implant position and angulation. Clinical qualitative assessment of accuracy was done via assessment of the passivity of a master verification stent for each implant pair, and significance was analyzed using the chi-square test (α = .05). A 3D error of implant positioning was observed for the two impression techniques vs the reference model, with mean ± standard deviation (SD) errors of 116 ± 94 μm and 56 ± 29 μm for the digital and conventional approaches, respectively (P = .01). In contrast, the inter-implant angulation errors were not significantly different between the two techniques (P = .83). Implant angulation did not have a significant influence on definitive cast accuracy within either technique (P = .64). The verification stent demonstrated acceptable passive fit for 11 of 20 casts and 18 of 20 casts for the digital and conventional methods, respectively (P = .01). Definitive casts fabricated using the digital impression approach were less accurate than those fabricated from the conventional impression approach for this simulated clinical scenario. A significant number of definitive casts generated by the digital technique did not meet clinically acceptable accuracy for the fabrication of a multiple implant-supported restoration.

  17. Quantification of errors in ordinal outcome scales using Shannon entropy: effect on sample size calculations.

    PubMed

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysing the full range of scores ("shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that the errors caused by these uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties, and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied by the noise distribution from published mRS inter-rater variability to generate an error percentage for the "shift" and for dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1% ± 5.31% (mean ± SD). Error rates were lower for all dichotomizations tested using cut-points (e.g., mRS 1: 6.8% ± 2.89%; overall p < 0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates occur with dichotomization. While using the full range of mRS is conceptually appealing, the gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
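    The error model reduces to weighting each trial's mRS distribution by the chance that inter-rater noise changes a subject's assigned category, or, for a dichotomized endpoint, moves the subject across the cut-point. A minimal sketch with an illustrative confusion matrix follows; the published inter-rater distributions are not reproduced here.

```python
import numpy as np

# p[k]: illustrative trial distribution over mRS 0..6
p = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])

# q[k, j]: illustrative inter-rater confusion matrix, P(assigned j | true k)
q = np.zeros((7, 7))
for k in range(7):
    q[k, k] = 0.80                       # 80% exact agreement (assumption)
    if k > 0: q[k, k - 1] = 0.10
    if k < 6: q[k, k + 1] = 0.10
q /= q.sum(axis=1, keepdims=True)        # renormalise the edge rows

full_scale_error = float(p @ (1.0 - np.diag(q)))   # any off-diagonal assignment

def dichotomized_error(cut):
    """Error only when noise moves a subject across the cut (mRS <= cut vs > cut)."""
    good = np.arange(7) <= cut
    cross = np.array([q[k, ~good].sum() if good[k] else q[k, good].sum()
                      for k in range(7)])
    return float(p @ cross)

print(f"full-scale ('shift') error rate: {full_scale_error:.1%}")
for cut in range(6):
    print(f"dichotomized at mRS {cut}: {dichotomized_error(cut):.1%}")
```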

  18. Prescribing errors during hospital inpatient care: factors influencing identification by pharmacists.

    PubMed

    Tully, Mary P; Buchan, Iain E

    2009-12-01

    To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. The setting was an 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median 9 pharmacists (range 4-17) collecting data per day) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Outcomes were the proportion of new medication orders in which an error was identified, and predictors of the error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward and stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at a patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors, but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.

  19. Performance Analysis of Ranging Techniques for the KPLO Mission

    NASA Astrophysics Data System (ADS)

    Park, Sungjoon; Moon, Sangman

    2018-03-01

    In this study, the performance of ranging techniques for the Korea Pathfinder Lunar Orbiter (KPLO) space communication system is investigated. KPLO is the first lunar mission of Korea, and pseudo-noise (PN) ranging will be used to support the mission along with sequential ranging. We compared the performance of both ranging techniques using the criteria of accuracy, acquisition probability, and measurement time. First, we investigated the end-to-end accuracy error of a ranging technique incorporating all sources of errors such as from ground stations and the spacecraft communication system. This study demonstrates that increasing the clock frequency of the ranging system is not required when the dominant factor of accuracy error is independent of the thermal noise of the ranging technique being used in the system. Based on the understanding of ranging accuracy, the measurement time of PN and sequential ranging are further investigated and compared, while both techniques satisfied the accuracy and acquisition requirements. We demonstrated that PN ranging performed better than sequential ranging in the signal-to-noise ratio (SNR) regime where KPLO will be operating, and we found that the T2B (weighted-voting balanced Tausworthe, voting v = 2) code is the best choice among the PN codes available for the KPLO mission.

  20. Computer adaptive test approach to the assessment of children and youth with brachial plexus birth palsy.

    PubMed

    Mulcahey, M J; Merenda, Lisa; Tian, Feng; Kozin, Scott; James, Michelle; Gogola, Gloria; Ni, Pengsheng

    2013-01-01

    This study examined the psychometric properties of item pools relevant to upper-extremity function and activity performance and evaluated simulated 5-, 10-, and 15-item computer adaptive tests (CATs). In a multicenter, cross-sectional study of 200 children and youth with brachial plexus birth palsy (BPBP), parents responded to upper-extremity (n = 52) and activity (n = 34) items using a 5-point response scale. We used confirmatory and exploratory factor analysis, ordinal logistic regression, item maps, and standard errors to evaluate the psychometric properties of the item banks. Validity was evaluated using analysis of variance and Pearson correlation coefficients. Results show that the two item pools have acceptable model fit, scaled well for children and youth with BPBP, and had good validity, content range, and precision. Simulated CATs performed comparably to the full item banks, suggesting that a reduced number of items provide similar information to the entire set of items. Copyright © 2013 by the American Occupational Therapy Association, Inc.
