Cecconi, Maurizio; Rhodes, Andrew; Poloniecki, Jan; Della Rocca, Giorgio; Grounds, R Michael
2009-01-01
Bland-Altman analysis is used for assessing agreement between two measurements of the same clinical variable. In the field of cardiac output monitoring, its results, in terms of bias and limits of agreement, are often difficult to interpret, leading clinicians to use a cutoff of 30% in the percentage error in order to decide whether a new technique may be considered a good alternative. This percentage error of +/- 30% arises from the assumption that the commonly used reference technique, intermittent thermodilution, has a precision of +/- 20% or less. The combination of two precisions of +/- 20% equates to a total error of +/- 28.3%, which is commonly rounded up to +/- 30%. Thus, finding a percentage error of less than +/- 30% should equate to the new tested technique having an error similar to the reference, which therefore should be acceptable. In a worked example in this paper, we discuss the limitations of this approach, in particular in regard to the situation in which the reference technique may be either more or less precise than would normally be expected. This can lead to inappropriate conclusions being drawn from data acquired in validation studies of new monitoring technologies. We conclude that it is not acceptable to present comparison studies quoting percentage error as an acceptability criterion without reporting the precision of the reference technique.
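To make the arithmetic explicit: the +/- 28.3% figure is the quadrature combination sqrt(20² + 20²) ≈ 28.3 of two +/- 20% precisions. A minimal sketch of that combination; the function name is ours, not the authors':

```python
import math

def combined_percentage_error(precision_ref, precision_test):
    """Combine two independent relative precisions in quadrature."""
    return math.sqrt(precision_ref**2 + precision_test**2)

# Two techniques, each with +/-20% precision, as assumed above for
# intermittent thermodilution and a comparable test method.
total = combined_percentage_error(20.0, 20.0)
print(f"combined error: +/-{total:.1f}%")  # +/-28.3%, commonly rounded up to 30%
```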
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, S
2015-06-15
Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two-dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data were tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data were normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements, as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration: 6 MeV was the least sensitive to array calibration selection, while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data were normally distributed, that the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
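As an illustration of the kind of computation involved, a sketch of individuals-chart control limits and a Cp-style capability ratio against a specification window. The +/-2% window and the readings are hypothetical stand-ins, not TG-142 values for any particular energy:

```python
import numpy as np

def individuals_control_limits(x):
    """Individuals (I) chart limits: mean +/- 3*sigma, with sigma
    estimated from the mean moving range (d2 = 1.128 for pairs)."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))          # moving ranges between consecutive points
    sigma_hat = mr.mean() / 1.128
    center = x.mean()
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

def capability(x, lsl, usl):
    """Cp-style capability ratio against two-sided specification limits."""
    x = np.asarray(x, dtype=float)
    return (usl - lsl) / (6 * x.std(ddof=1))

# Hypothetical daily energy-constancy readings (ratio of wedge-side signals)
readings = np.random.default_rng(1).normal(1.000, 0.003, 30)
lcl, center, ucl = individuals_control_limits(readings)
print(f"LCL={lcl:.4f}  center={center:.4f}  UCL={ucl:.4f}")
print(f"capability vs +/-2% spec: {capability(readings, 0.98, 1.02):.2f}")
```

A ratio greater than one, as reported above, means the control limits sit inside the specification window.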
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent further set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using subgroups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability; if and when stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified, and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of on patient-to-patient variability, which normally does not exist. Compared with weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
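A minimal sketch of the subgroup monitoring step described above, assuming subgroups of three set-up measurements per sampling and control limits already established from a stable baseline; all numbers are illustrative:

```python
import numpy as np

# Limits established beforehand from a stable baseline period (mm),
# for one dimension (e.g. ml); illustrative values around a 0 mm center
LCL, UCL = -4.5, 4.5

def subgroup_in_control(errors_mm):
    """X-bar plot-point test: flag when the subgroup mean escapes the limits."""
    mean = float(np.mean(errors_mm))
    return LCL <= mean <= UCL, mean

# Three patients measured in one of the three samplings of a shift
ok, mean = subgroup_in_control([1.2, -0.8, 2.1])
print(f"subgroup mean {mean:+.2f} mm:",
      "continue treatment" if ok else "stop and find the assignable cause")
```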
Quality Assurance of Chemical Measurements.
ERIC Educational Resources Information Center
Taylor, John K.
1981-01-01
Reviews aspects of quality control (methods to control errors) and quality assessment (verification that systems are operating within acceptable limits), including an analytical measurement system, quality control by inspection, control charts, systematic errors, and use of SRMs, materials for which properties are certified by the National Bureau of Standards.
Wonnemann, Meinolf; Frömke, Cornelia; Koch, Armin
2015-01-01
We investigated different evaluation strategies for bioequivalence trials with highly variable drugs (HVDs) with respect to their resulting empirical type I error and empirical power. The classical 'unscaled' crossover design with average bioequivalence evaluation, the Add-on concept of the Japanese guideline, and the current 'scaling' approach of the EMA were compared. Simulation studies were performed based on the assumption of single-dose drug administration while varying the underlying intra-individual variability. Inclusion of Add-on subjects following the Japanese concept led to slight increases of the empirical α-error (≈7.5%). For the EMA approach we noted an unexpected tremendous increase of the rejection rate at a geometric mean ratio of 1.25. Moreover, we detected error rates slightly above the pre-set limit of 5% even at the proposed 'scaled' bioequivalence limits. With the classical 'unscaled' approach and the Japanese guideline concept, the goal of reduced subject numbers in bioequivalence trials of HVDs cannot be achieved. On the other hand, widening the acceptance range comes at the price that quite a number of products will be accepted as bioequivalent that would not have been accepted in the past. A two-stage design with control of the global α therefore seems the better alternative.
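As an illustration of how such empirical type I error rates are obtained, a bare-bones simulation of unscaled average bioequivalence with the two one-sided tests (TOST) at a true geometric mean ratio of 1.25. The sample size, CV, and the simplified 2x2-crossover error model are our assumptions, not the paper's settings:

```python
import numpy as np
from scipy import stats

def tost_accept(log_ratio_hat, se, df, limits=(np.log(0.8), np.log(1.25))):
    """Two one-sided tests: accept bioequivalence if the 90% CI lies within limits."""
    t = stats.t.ppf(0.95, df)
    lo, hi = log_ratio_hat - t * se, log_ratio_hat + t * se
    return limits[0] <= lo and hi <= limits[1]

def empirical_alpha(n_subjects=24, cv_intra=0.4, n_sim=5000, seed=7):
    """Acceptance rate when the true ratio sits exactly on the 1.25 limit."""
    rng = np.random.default_rng(seed)
    sigma2 = np.log(1 + cv_intra**2)      # intra-subject log-scale variance
    df = n_subjects - 2
    true_se = np.sqrt(2 * sigma2 / n_subjects)
    accepted = 0
    for _ in range(n_sim):
        # Simplified sampling model for the estimated log-ratio and its SE
        est = rng.normal(np.log(1.25), true_se)
        se = true_se * np.sqrt(stats.chi2.rvs(df, random_state=rng) / df)
        accepted += tost_accept(est, se, df)
    return accepted / n_sim

print(f"empirical type I error: {empirical_alpha():.3f}")  # at or below 0.05 for unscaled ABE
```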
The evolution of Crew Resource Management training in commercial aviation
NASA Technical Reports Server (NTRS)
Helmreich, R. L.; Merritt, A. C.; Wilhelm, J. A.
1999-01-01
In this study, we describe changes in the nature of Crew Resource Management (CRM) training in commercial aviation, including its shift from cockpit to crew resource management. Validation of the impact of CRM is discussed. Limitations of CRM, including lack of cross-cultural generality are considered. An overarching framework that stresses error management to increase acceptance of CRM concepts is presented. The error management approach defines behavioral strategies taught in CRM as error countermeasures that are employed to avoid error, to trap errors committed, and to mitigate the consequences of error.
1991-07-01
[Residue of a calibration data form; the original table did not survive extraction. Recoverable content: for each calibration gas, the analyzer concentration is predicted from the chart response (pretest and posttest chart divisions recorded) using the calibration equation, and the analyzer calibration error is computed as Calibration error (% span) = (calibration gas concentration - predicted concentration) / span value x 100, then compared against an acceptable limit expressed as a percentage of span; drift is tracked per cylinder.]
Self-calibrating multiplexer circuit
Wahl, Chris P.
1997-01-01
A time-domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction comprises a time-domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two-point linear calibration curve defining acceptable multiplexer voltage limits may be defined by the computer by determining the voltage output of the multiplexer for very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits expected during normal operation are exceeded, or the relationship defined by the calibration curve is invalidated.
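A sketch of the two-point calibration and drift-detection logic described in the patent abstract; the current levels, resistance, and drift tolerance are hypothetical:

```python
def two_point_calibration(i1, i2, v1, v2, r_series):
    """Fit V_out = gain * V_in + offset from two known test currents
    driven across an accurately known series resistance."""
    x1, x2 = i1 * r_series, i2 * r_series        # accurately known input voltages
    gain = (v2 - v1) / (x2 - x1)
    offset = v1 - gain * x1
    return gain, offset

def check_drift(v_measured, v_in, gain, offset, tol):
    """Flag drift when the output departs from the calibration line by more than tol."""
    v_expected = gain * v_in + offset
    return abs(v_measured - v_expected) > tol

gain, offset = two_point_calibration(1e-3, 2e-3, 0.998, 1.994, 1000.0)
print(f"gain={gain:.4f}  offset={offset:+.4f} V")
print("drift detected" if check_drift(1.52, 1.5, gain, offset, 0.01) else "within limits")
```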
Quality Leadership and Quality Control
Badrick, Tony
2003-01-01
Different quality control rules detect different analytical errors with varying levels of efficiency depending on the type of error present, its prevalence and the number of observations. The efficiency of a rule can be gauged by inspection of a power function graph. Control rules are only part of a process, not an end in themselves; just as important are the trouble-shooting systems employed when a failure occurs. 'Average of patient normals' may develop as a useful adjunct to conventional serum-based quality control programmes. Acceptable error can be based on various criteria; biological variation is probably the most sensible. Once determined, acceptable error can be used to set limits in quality control rule systems. A key aspect of an organisation is leadership, which links the various components of the quality system. Leadership is difficult to characterise, but its key aspects include trust, setting an example, developing staff and, critically, setting the vision for the organisation. Organisations also have internal characteristics such as the degree of formalisation, centralisation, and complexity. Medical organisations can have internal tensions because of the dichotomy between the bureaucratic and the shadow medical structures. PMID:18568046
Adherence to balance tolerance limits at the Upper Mississippi Science Center, La Crosse, Wisconsin.
Myers, C.T.; Kennedy, D.M.
1998-01-01
Verification of balance accuracy entails applying a series of standard masses to a balance prior to use and recording the measured values. The recorded values for each standard should have lower and upper weight limits, or tolerances, that are accepted as verification of balance accuracy under normal operating conditions. Balance logbooks for seven analytical balances at the Upper Mississippi Science Center were checked over a 3.5-year period to determine if the recorded weights were within the established tolerance limits. A total of 9435 measurements were checked. There were 14 instances in which the balance malfunctioned and operators recorded a rationale in the balance logbook. Sixty-three recording errors were found. Twenty-eight operators were responsible for two types of recording errors: measurements were recorded outside of the tolerance limit but not acknowledged as an error by the operator (n = 40), and measurements were recorded with the wrong number of decimal places (n = 23). The adherence rate for following tolerance limits was 99.3%. To ensure continued adherence to tolerance limits, the quality-assurance unit revised standard operating procedures to require more frequent review of balance logbooks.
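The logbook check lends itself to a small sketch: compare each recorded weight against the tolerance window for its standard mass and tally the adherence rate. Tolerances and readings here are invented:

```python
# Acceptance tolerances per standard mass, in grams (hypothetical values)
TOLERANCES = {1.0: (0.9995, 1.0005), 10.0: (9.999, 10.001), 100.0: (99.998, 100.002)}

def adherence_rate(records):
    """records: (standard_mass, recorded_value) pairs from the balance logbook."""
    within = 0
    for mass, value in records:
        lo, hi = TOLERANCES[mass]
        within += lo <= value <= hi
    return within / len(records)

log = [(1.0, 1.0002), (10.0, 10.0004), (100.0, 100.003), (1.0, 0.9998)]
print(f"adherence rate: {adherence_rate(log):.1%}")  # the 100 g reading is out of tolerance
```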
Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution
NASA Astrophysics Data System (ADS)
Samohyl, Robert Wayne
2017-10-01
This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig, generally follows the international standard ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States standard ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, it suggests using the hypergeometric distribution to calculate the parameters of sampling plans, avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, the discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing, rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing in the NP tradition can produce a better understanding of applications even beyond the usual areas of industry and commerce, such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot tolerance percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be concerned with a value of AQL, as should the producer with LTPD. Furthermore, one can question why type I error is always uniquely associated with the producer as producer risk, and likewise why consumer risk is necessarily associated with type II error. The resolution of these questions is new to the literature. The article presents R code throughout.
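To illustrate the first point, a brief sketch comparing the exact hypergeometric acceptance probability with its binomial approximation for a single-sampling plan; the plan parameters (N, n, c) are made up for illustration:

```python
from scipy.stats import binom, hypergeom

def accept_prob_exact(N, D, n, c):
    """P(accept) = P(X <= c) when drawing n items without replacement
    from a lot of N containing D defectives (hypergeometric)."""
    return hypergeom.cdf(c, N, D, n)

def accept_prob_binomial(p, n, c):
    """Binomial approximation: sampling with replacement at defect rate p = D/N."""
    return binom.cdf(c, n, p)

N, n, c = 500, 80, 2                 # illustrative single-sampling plan
for D in (5, 15, 30):                # defectives actually in the lot
    exact = accept_prob_exact(N, D, n, c)
    approx = accept_prob_binomial(D / N, n, c)
    print(f"D={D:3d}  exact={exact:.4f}  binomial={approx:.4f}")
```

Sweeping D traces the operating characteristic (OC) curve of the plan, from which AQL- and LTPD-type points can be read off directly.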
Wang, Li; Sievenpiper, John L; de Souza, Russell J; Thomaz, Michele; Blatz, Susan; Grey, Vijaylaxmi; Fusch, Christoph; Balion, Cynthia
2013-08-01
The lack of accuracy of point-of-care (POC) glucose monitors has limited their use in the diagnosis of neonatal hypoglycemia. Hematocrit plays an important role in explaining discordant results. The objective of this study was to assess the effect of hematocrit on the diagnostic performance of the Abbott Precision Xceed Pro (PXP) and Nova StatStrip (StatStrip) monitors in neonates. All blood samples ordered for laboratory glucose measurement were analyzed using the PXP and StatStrip and compared with the laboratory analyzer (ABL 800 Blood Gas analyzer [ABL]). Acceptable error targets were ±15% for glucose monitoring and ±5% for diagnosis. A total of 307 samples from 176 neonates were analyzed. Overall, 90% of StatStrip and 75% of PXP values met the 15% error limit, and 45% of StatStrip and 32% of PXP values met the 5% error limit. At glucose concentrations ≤4 mmol/L, 83% of StatStrip and 79% of PXP values met the 15% error limit, while 37% of StatStrip and 38% of PXP values met the 5% error limit. Hematocrit explained 7.4% of the difference between the PXP and ABL, whereas it accounted for only 0.09% of the difference between the StatStrip and ABL. ROC analysis showed the screening cut point with the best performance for identifying neonatal hypoglycemia was 3.2 mmol/L for StatStrip and 3.3 mmol/L for PXP. Despite a negligible hematocrit effect for the StatStrip, it did not achieve the recommended error limits. The StatStrip and PXP glucose monitors remain suitable only for neonatal hypoglycemia screening, with confirmation required from a laboratory analyzer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, D. K.; Taylor, A. S.; Edwards, T.B.
2005-06-26
The objective of this investigation was to appeal to the available ComPro™ database of glass compositions and measured PCTs that have been generated in the study of High Level Waste (HLW)/Low Activity Waste (LAW) glasses to define an Acceptable Glass Composition Region (AGCR). The term AGCR refers to a glass composition region in which the durability response (as defined by the Product Consistency Test (PCT)) is less than some pre-defined, acceptable value that satisfies the Waste Acceptance Product Specifications (WAPS); a value of 10 g/L was selected for this study. To assess the effectiveness of a specific classification or index system to differentiate between acceptable and unacceptable glasses, two types of errors (Type I and Type II errors) were monitored. A Type I error reflects that a glass with an acceptable durability response (i.e., a measured NL [B] < 10 g/L) is classified as unacceptable by the system of composition-based constraints. A Type II error occurs when a glass with an unacceptable durability response is classified as acceptable by the system of constraints. Over the course of the efforts to meet this objective, two approaches were assessed. The first (referred to as the "Index System") was based on the use of an evolving system of compositional constraints which were used to explore the possibility of defining an AGCR. This approach was primarily based on "glass science" insight to establish the compositional constraints. Assessments of the Brewer and Taylor Index Systems did not result in the definition of an AGCR. Although the Taylor Index System minimized Type I errors, which allowed access to composition regions of interest to improve melt rate or increase waste loadings for DWPF as compared to the current durability model, Type II errors were also committed. In the context of the application of a particular classification system in the process control system, Type II errors are much more serious than Type I errors. A Type I error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error is a more serious misclassification that could allow the transfer of a Slurry Mix Evaporator (SME) batch to the melter which is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined "acceptability" criteria. More specifically, a nondurable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data-driven and empirically derived; glass science was not a factor. In this approach, the collection of composition-durability data in ComPro was sequentially partitioned or split based on the best available specific criteria and variables. More specifically, the JMP software chose the oxide (Al2O3 for this dataset) that most effectively partitions the PCT responses (NL [B]'s), though perhaps not 100% effectively based on a single oxide. Based on this initial split, a second request was made to split a particular set of the "Y" values (good or bad PCTs based on the 10 g/L limit) based on the next most critical "X" variable.
This "splitting" or "partitioning" process was repeated until an AGCR was defined based on the use of only 3 oxides (Al2O3, CaO, and MgO) and critical values of > 3.75 wt% Al2O3, ≥ 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned in a way that committed no Type II errors. The automated partitioning function screened or removed 978 of the 2406 ComPro glasses, which did cause some initial concerns regarding excessive conservatism regardless of its ability to identify an AGCR. However, a preliminary review of the 1428 "acceptable" glasses defining the AGCR shows that they include glass systems of interest to support the accelerated mission.
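A sketch of applying the reported three-oxide screen and tallying Type I and Type II misclassifications against measured durability; the cutoffs are those quoted above, while the compositions and NL [B] values are invented:

```python
def acceptable_by_rule(comp):
    """Three-oxide AGCR screen (wt%), using the cutoffs reported above."""
    return (comp["Al2O3"] > 3.75 and comp["CaO"] >= 0.616 and comp["MgO"] < 3.521)

def classification_errors(glasses, limit=10.0):
    """Type I: durable glass rejected by the rule. Type II: nondurable glass accepted."""
    type1 = type2 = 0
    for comp, nl_b in glasses:                 # nl_b: measured PCT NL [B], g/L
        predicted_ok, actually_ok = acceptable_by_rule(comp), nl_b < limit
        type1 += actually_ok and not predicted_ok
        type2 += predicted_ok and not actually_ok
    return type1, type2

# Hypothetical (composition, measured NL [B]) pairs
glasses = [({"Al2O3": 4.2, "CaO": 0.9, "MgO": 1.0}, 6.0),
           ({"Al2O3": 3.0, "CaO": 1.2, "MgO": 0.5}, 8.0),   # durable but rejected: Type I
           ({"Al2O3": 4.5, "CaO": 0.7, "MgO": 2.0}, 7.5)]
t1, t2 = classification_errors(glasses)
print(f"Type I errors: {t1}, Type II errors: {t2}")
```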
Fine-resolution imaging of solar features using Phase-Diverse Speckle
NASA Technical Reports Server (NTRS)
Paxman, Richard G.
1995-01-01
Phase-diverse speckle (PDS) is a novel imaging technique intended to overcome the degrading effects of atmospheric turbulence on fine-resolution imaging. As its name suggests, PDS is a blend of phase-diversity and speckle-imaging concepts. PDS reconstructions on solar data were validated by simulation, by demonstrating internal consistency of PDS estimates, and by comparing PDS reconstructions with those produced from well accepted speckle-imaging processing. Several sources of error in data collected with the Swedish Vacuum Solar Telescope (SVST) were simulated: CCD noise, quantization error, image misalignment, and defocus error, as well as atmospheric turbulence model error. The simulations demonstrate that fine-resolution information can be reliably recovered out to at least 70% of the diffraction limit without significant introduction of image artifacts. Additional confidence in the SVST restoration is obtained by comparing its spatial power spectrum with previously-published power spectra derived from both space-based images and earth-based images corrected with traditional speckle-imaging techniques; the shape of the spectrum is found to match well the previous measurements. In addition, the imagery is found to be consistent with, but slightly sharper than, imagery reconstructed with accepted speckle-imaging techniques.
Applications of inertial-sensor high-inheritance instruments to DSN precision antenna pointing
NASA Technical Reports Server (NTRS)
Goddard, R. E.
1992-01-01
Laboratory test results of the initialization and tracking performance of an existing inertial-sensor-based instrument are given. The instrument, although not primarily designed for precision antenna pointing applications, demonstrated an on-average 10-hour tracking error of several millidegrees. The system-level instrument performance is shown by analysis to be sensor limited. Simulated instrument improvements show a tracking error of less than 1 mdeg, which would provide acceptable performance, i.e., low pointing loss, for the DSN 70-m antenna subnetwork operating at Ka-band (1-cm wavelength).
Furness, Alan R; Callan, Richard S; Mackert, J Rodway; Mollica, Anthony G
2018-01-01
The aim of this study was to evaluate the effectiveness of the Planmeca Compare software in identifying and quantifying a common critical error in dental students' crown preparations. In 2014-17, a study was conducted at one U.S. dental school that compared modified preparations against an ideal crown preparation made by a faculty member on a dentoform. Two types of preparation errors were created by the addition of flowable composite to the occlusal surface of identical dies of the preparations to represent underreduction of the distolingual cusp. The error was divided into two classes: the minor class allowed for 1 mm of occlusal clearance, and the major class allowed for no occlusal clearance. The preparations were then digitally evaluated against the ideal preparation using Planmeca Compare. Percent comparison values were obtained from each trial and averaged together. False positives and false negatives were also identified and used to determine the accuracy of the evaluation. Critical errors that did not involve a substantial change in the surface area of the preparation were inconsistently identified. Within the limitations of this study, the authors concluded that the Compare software was unable to consistently identify common critical errors within an acceptable degree of error.
Measurement accuracies in band-limited extrapolation
NASA Technical Reports Server (NTRS)
Kritikos, H. N.
1982-01-01
The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation, the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval L/b ≤ |x| ≤ L, b > 1, the signal must be known within an error ε_N given by ε_N² ≈ (1/4)(2kL')³ (e/(8b) · L/L')^(2kL'), where L is the physical aperture, L' is the extrapolated aperture, and k = 2π/λ.
Freeform solar concentrator with a highly asymmetric acceptance cone
NASA Astrophysics Data System (ADS)
Wheelwright, Brian; Angel, J. Roger P.; Coughenour, Blake; Hammer, Kimberly
2014-10-01
A solar concentrator with a highly asymmetric acceptance cone is investigated. Concentrating photovoltaic systems require dual-axis sun tracking to maintain nominal concentration throughout the day. In addition to collecting direct rays from the solar disk, which subtends ~0.53 degrees, concentrating optics must allow for in-field tracking errors due to mechanical misalignment of the module, wind loading, and control loop biases. The angular range over which the concentrator maintains >90% of on-axis throughput is defined as the optical acceptance angle. Concentrators with substantial rotational symmetry likewise exhibit rotationally symmetric acceptance angles. In the field, this is sometimes a poor match with azimuth-elevation trackers, which have inherently asymmetric tracking performance. Pedestal-mounted trackers with low torsional stiffness about the vertical axis have better elevation tracking than azimuthal tracking. Conversely, trackers which rotate on large-footprint circular tracks are often limited by elevation tracking performance. We show that a line-focus concentrator, composed of a parabolic trough primary reflector and a freeform refractive secondary, can be tailored to have a highly asymmetric acceptance angle. The design is suitable for a tracker with excellent tracking accuracy in the elevation direction and poor accuracy in the azimuthal direction. In the 1000X design given, when trough optical errors (2 mrad rms slope deviation) are accounted for, the azimuthal acceptance angle is +/-1.65°, while the elevation acceptance angle is only +/-0.29°. This acceptance angle does not include the angular width of the sun, which consumes nearly all of the elevation tolerance at this concentration level. By decreasing the average concentration, the elevation acceptance angle can be increased. This is well-suited for a pedestal alt-azimuth tracker with a low-cost slew bearing (without anti-backlash features).
NASA Astrophysics Data System (ADS)
Jiao, Yi; Duan, Zhe
2017-01-01
In a diffraction-limited storage ring, half-integer resonances can have strong effects on the beam dynamics, associated with the large detuning terms from the strong focusing and strong sextupoles required for an ultralow emittance. In this study, the limitation imposed by half-integer resonances on the available momentum acceptance (MA) was statistically analyzed based on one design of the High Energy Photon Source (HEPS). It was found that the probability of MA reduction due to crossing of half-integer resonances is closely correlated with the level of beta beats at the nominal tunes, but independent of the error sources. The analysis indicated that for the presented HEPS lattice design, the rms amplitude of the beta beats should be kept below 1.5% horizontally and 2.5% vertically to reach a small MA reduction probability of about 1%.
Use and limitations of ASHRAE solar algorithms in solar energy utilization studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sowell, E.F.
1978-01-01
Algorithms for computer calculation of solar radiation based on cloud cover data, recommended by the ASHRAE Task Group on Energy Requirements for Buildings, are examined for applicability in solar utilization studies. The implementation is patterned after a well-known computer program, NBSLD. The results of these algorithms, including horizontal and tilted surface insolation and useful energy collectable, are compared to observations and to results obtainable by the Liu and Jordan method. For purposes of comparison, data for Riverside, CA from 1960 through 1963 are examined. It is shown that horizontal values so predicted are frequently less than 10% and always less than 23% in error when compared to averages of hourly measurements during important collection hours in 1962. Average daily errors range from -14 to 9% over the year. When averaged on an hourly basis over four years, there is a 21% maximum discrepancy compared to the Liu and Jordan method. Corresponding tilted-surface discrepancies are slightly higher, as are those for useful energy collected. Possible sources of these discrepancies and errors are discussed. Limitations of the algorithms and various implementations are examined, and it is suggested that certain assumptions acceptable for building loads analysis may not be acceptable for solar utilization studies. In particular, it is shown that the method of separating diffuse and direct components in the presence of clouds requires careful consideration in order to achieve accuracy and efficiency in any implementation.
2009-01-01
Background: Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracies of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparing them to validated test methods. Methods: Selected 112 clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results: MicroScan performed true identification of all A. baumannii strains, while Vitek 2 failed to identify one strain, and Phoenix failed to identify two strains and misidentified two strains. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly higher (0.3%) than the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing, with unacceptable error rates: 28 very major errors (25%) and 50 minor errors (44.6%). Conclusion: Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest that clinical laboratories using the MicroScan system for routine use should consider using a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.; Voska, Ned (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N
2018-05-01
Mechanography during the vertical jump may enhance screening and help determine mechanistic causes underlying changes in physical performance. The utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump, with squat depth either self-selected or controlled using a goniometer to 80° of knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between the controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
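A sketch of how the three reliability quantities named above can be computed for one variable across two sessions, assuming ICC(2,1) and a within-subject CV; the data are simulated, not the study's:

```python
import numpy as np

def test_retest_stats(s1, s2):
    """Systematic error (Cohen's d), random error (within-subject CV%),
    and ICC(2,1) for paired scores from two sessions."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    n = len(s1)
    # Systematic error: standardized mean difference between sessions
    pooled_sd = np.sqrt((s1.var(ddof=1) + s2.var(ddof=1)) / 2)
    d = (s2.mean() - s1.mean()) / pooled_sd
    # Random error: within-subject SD from pair differences, as a percentage
    within_sd = np.sqrt(np.mean((s2 - s1) ** 2) / 2)
    cv = 100 * within_sd / np.mean((s1 + s2) / 2)
    # ICC(2,1) from a two-way ANOVA decomposition (k = 2 sessions)
    data = np.stack([s1, s2], axis=1)
    grand = data.mean()
    msr = 2 * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((data.mean(axis=0) - grand) ** 2)             # sessions
    mse = np.sum((data - data.mean(1, keepdims=True)
                  - data.mean(0, keepdims=True) + grand) ** 2) / (n - 1)
    icc = (msr - mse) / (msr + mse + 2 * (msc - mse) / n)
    return d, cv, icc

rng = np.random.default_rng(3)
true = rng.normal(30, 5, 20)                     # hypothetical jump heights (cm)
d, cv, icc = test_retest_stats(true + rng.normal(0, 1.5, 20),
                               true + rng.normal(0, 1.5, 20))
print(f"effect size={d:+.2f}  CV={cv:.1f}%  ICC={icc:.2f}")
```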
A large-scale test of free-energy simulation estimates of protein-ligand binding affinities.
Mikulskis, Paulius; Genheden, Samuel; Ryde, Ulf
2014-10-27
We have performed a large-scale test of alchemical perturbation calculations with the Bennett acceptance-ratio (BAR) approach to estimate relative affinities for the binding of 107 ligands to 10 different proteins. Employing 20-Å truncated spherical systems and only one intermediate state in the perturbations, we obtain an error of less than 4 kJ/mol for 54% of the studied relative affinities and a precision of 0.5 kJ/mol on average. However, only four of the proteins gave acceptable errors, correlations, and rankings. The results could be improved by using nine intermediate states in the simulations or including the entire protein in the simulations using periodic boundary conditions. However, 27 of the calculated affinities still gave errors of more than 4 kJ/mol, and for three of the proteins the results were not satisfactory. This shows that the performance of BAR calculations depends on the target protein and that several transformations gave poor results owing to limitations in the molecular-mechanics force field or the restricted sampling possible within a reasonable simulation time. Still, the BAR results are better than docking calculations for most of the proteins.
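For readers unfamiliar with the estimator, a minimal sketch of the Bennett acceptance-ratio self-consistency condition applied to synthetic Gaussian work distributions; this is a generic illustration, not the authors' simulation pipeline, and the function names are ours:

```python
import numpy as np
from scipy.optimize import brentq

def bar_delta_f(w_f, w_r, beta=1.0):
    """Solve the BAR self-consistency condition for dF:
    sum_F fermi(beta*(w_f - dF) + M) = sum_R fermi(beta*(w_r + dF) - M),
    with M = ln(n_f/n_r) the sample-size correction."""
    M = np.log(len(w_f) / len(w_r))
    fermi = lambda x: 1.0 / (1.0 + np.exp(x))
    def imbalance(df):
        return (fermi(beta * (w_f - df) + M).sum()
                - fermi(beta * (w_r + df) - M).sum())
    return brentq(imbalance, -50.0, 50.0)

# Synthetic forward/reverse work values (kT units) consistent with dF = 2:
# Gaussian work with variance sigma^2 has mean +/-dF + sigma^2/2
rng = np.random.default_rng(0)
w_forward = rng.normal(3.0, np.sqrt(2.0), 2000)
w_reverse = rng.normal(-1.0, np.sqrt(2.0), 2000)
print(f"BAR estimate: {bar_delta_f(w_forward, w_reverse):.3f} kT (true 2.0)")
```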
Ma, Pei-Luen; Jheng, Yan-Wun; Jheng, Bi-Wei; Hou, I-Ching
2017-01-01
Bar code medication administration (BCMA) could reduce medical errors and promote patient safety. This research uses a modified information systems success model (M-ISS model) to evaluate nurses' acceptance of BCMA. The results showed moderate correlations between medication administration safety (MAS) and system quality, information quality, service quality, user satisfaction, and limited satisfaction.
Modified SPC for short run test and measurement process in multi-stations
NASA Astrophysics Data System (ADS)
Koh, C. K.; Chin, J. F.; Kamaruddin, S.
2018-03-01
Due to short production runs and the measurement error inherent in electronic test and measurement (T&M) processes, continuous quality monitoring through real-time statistical process control (SPC) is challenging. Industry practice allows the installation of a guard band, using measurement uncertainty to reduce the width of the acceptance limit, as an indirect way to compensate for measurement errors. This paper presents a new SPC model combining a modified guard band and control charts (Z-bar chart and W chart) for short runs in T&M processes in multi-stations. The proposed model standardizes the observed value with the measurement target (T) and the ratioed measurement uncertainty (U). An S-factor (S_f) is introduced into the control limits to improve the sensitivity in detecting small shifts. The model was embedded in an automated quality control system and verified with a case study in real industry.
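A sketch of the standardization idea as we read it: each observation is centered on its measurement target T and scaled by its uncertainty U, so short runs of different products can share one chart, with an S-factor tightening the limits; the S_f value and the readings are assumptions, not the paper's:

```python
S_FACTOR = 0.8          # assumed S_f < 1, tightening the limits for small shifts

def z_value(x, target, uncertainty):
    """Standardize an observation by its target T and measurement uncertainty U."""
    return (x - target) / uncertainty

def in_control(z, s_factor=S_FACTOR):
    """Z-chart plot-point test with S-factor-adjusted 3-sigma limits."""
    return abs(z) <= 3.0 * s_factor

# Measurements of different short-run products on the same station: (x, T, U)
readings = [(10.02, 10.0, 0.05), (4.97, 5.0, 0.02), (25.4, 25.0, 0.1)]
for x, t, u in readings:
    z = z_value(x, t, u)
    print(f"x={x}  z={z:+.2f}  {'in control' if in_control(z) else 'out of control'}")
```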
Acoustic sensor for real-time control for the inductive heating process
Kelley, John Bruce; Lu, Wei-Yang; Zutavern, Fred J.
2003-09-30
Disclosed is a system and method for providing closed-loop control of the heating of a workpiece by an induction heating machine, including generating an acoustic wave in the workpiece with a pulsed laser; optically measuring displacements of the surface of the workpiece in response to the acoustic wave; calculating a sub-surface material property by analyzing the measured surface displacements; creating an error signal by comparing an attribute of the calculated sub-surface material properties with a desired attribute; and reducing the error signal below an acceptable limit by adjusting, in real-time, as often as necessary, the operation of the inductive heating machine.
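The claimed closed-loop logic reduces naturally to a feedback sketch; the property measurement and heater interface below are hypothetical stand-ins, and the proportional gain is assumed:

```python
def control_loop(measure_property, adjust_heater, setpoint, tol, max_iter=100):
    """Drive the error between the measured sub-surface property and its desired
    value below the acceptable limit by adjusting the induction heater."""
    gain = 0.5                                  # assumed proportional gain
    for _ in range(max_iter):
        error = setpoint - measure_property()   # laser-acoustic measurement step
        if abs(error) <= tol:                   # error signal within acceptable limit
            return True
        adjust_heater(gain * error)             # real-time adjustment step
    return False

# Toy stand-in for the workpiece: heater power maps linearly to the property
class FakeWorkpiece:
    def __init__(self):
        self.power = 0.0
    def measure(self):
        return 40.0 + 2.0 * self.power
    def adjust(self, delta):
        self.power += delta

wp = FakeWorkpiece()
print("converged:", control_loop(wp.measure, wp.adjust, setpoint=55.0, tol=0.5))
```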
Measurement of diffusion coefficients from solution rates of bubbles
NASA Technical Reports Server (NTRS)
Krieger, I. M.
1979-01-01
The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.
Detection and avoidance of errors in computer software
NASA Technical Reports Server (NTRS)
Kinsler, Les
1989-01-01
The acceptance test errors of a computer software project were examined to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized into methods of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors remaining at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.
Vibration characteristics of teak wood filled steel tubes
NASA Astrophysics Data System (ADS)
Danawade, Bharatesh Adappa; Malagi, Ravindra Rachappa
2018-05-01
The objective of this paper is to determine the fundamental frequency and damping ratio of teak-wood-filled steel tubes. Mechanically bonded teak-wood-filled steel tubes were evaluated by an experimental impact hammer test using modal analysis. The results of the impact hammer test were verified and validated with the finite-element tool ANSYS using harmonic analysis. The error between the two methods was observed to be within the acceptable limit.
Ali, Sam; Byanyima, Rosemary Kusaba; Ononge, Sam; Ictho, Jerry; Nyamwiza, Jean; Loro, Emmanuel Lako Ernesto; Mukisa, John; Musewa, Angella; Nalutaaya, Annet; Ssenyonga, Ronald; Kawooya, Ismael; Temper, Benjamin; Katamba, Achilles; Kalyango, Joan; Karamagi, Charles
2018-05-04
Ultrasonography is essential in prenatal diagnosis and the care of pregnant mothers. However, the measurements obtained often contain a small percentage of unavoidable error that may have serious clinical implications if substantial. We therefore evaluated the level of intra- and inter-observer error in measuring mean sac diameter (MSD) and crown-rump length (CRL) in women between 6 and 10 weeks' gestation at Mulago hospital. This was a cross-sectional study conducted from January to March 2016. We enrolled 56 women with an intrauterine single viable embryo. The women were scanned using a transvaginal (TVS) technique by two observers who were blinded to each other's measurements. Each observer measured the CRL twice and the MSD once for each woman. Intra-class correlation coefficients (ICCs), 95% limits of agreement (LOA) and the technical error of measurement (TEM) were used for analysis. Intra-observer ICCs for CRL measurements were 0.995 and 0.993, while inter-observer ICCs were 0.988 for CRL and 0.955 for MSD measurements. Intra-observer 95% LOA for CRL were ± 2.04 mm and ± 1.66 mm. Inter-observer LOA were ± 2.35 mm for CRL and ± 4.87 mm for MSD. The intra-observer relative TEMs for CRL were 4.62% and 3.70%, whereas the inter-observer relative TEMs were 5.88% and 5.93% for CRL and MSD respectively. Intra- and inter-observer errors of CRL and MSD measurements among pregnant women at Mulago hospital were acceptable. This implies that at Mulago hospital, the error in pregnancy dating is within the acceptable margin of ±3 days in the first trimester, and the CRL and MSD cut-offs of ≥7 mm and ≥25 mm respectively are fit for the diagnosis of miscarriage on TVS. These findings should be extrapolated to the whole country with caution. Sonographers can achieve acceptable and comparable diagnostic accuracy levels of MSD and CRL measurements with proper training and adherence to practice guidelines.
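For reference, the technical error of measurement used above is conventionally computed as TEM = sqrt(Σd²/2n) over duplicate measurements, with relative TEM expressing it as a percentage of the mean. A small sketch under that convention, with invented CRL pairs:

```python
import numpy as np

def tem(pairs):
    """Technical error of measurement for duplicate measurements:
    TEM = sqrt(sum(d^2) / (2n)), d = difference within each pair."""
    d = np.diff(np.asarray(pairs, float), axis=1).ravel()
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

def relative_tem(pairs):
    """TEM as a percentage of the grand mean (relative TEM)."""
    return 100 * tem(pairs) / np.asarray(pairs, float).mean()

# Hypothetical duplicate CRL measurements (mm) by one observer
crl_pairs = [(10.2, 10.5), (15.1, 14.8), (22.3, 22.9), (8.7, 8.6)]
print(f"TEM = {tem(crl_pairs):.2f} mm, relative TEM = {relative_tem(crl_pairs):.2f}%")
```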
Preventing medication errors in cancer chemotherapy.
Cohen, M R; Anderson, R W; Attilio, R M; Green, L; Muller, R J; Pruemer, J M
1996-04-01
Recommendations for preventing medication errors in cancer chemotherapy are made. Before a health care provider is granted privileges to prescribe, dispense, or administer antineoplastic agents, he or she should undergo a tailored educational program and possibly testing or certification. Appropriate reference materials should be developed. Each institution should develop a dose-verification process with as many independent checks as possible. A detailed checklist covering prescribing, transcribing, dispensing, and administration should be used. Oral orders are not acceptable. All doses should be calculated independently by the physician, the pharmacist, and the nurse. Dosage limits should be established and a review process set up for doses that exceed the limits. These limits should be entered into pharmacy computer systems, listed on preprinted order forms, stated on the product packaging, placed in strategic locations in the institution, and communicated to employees. The prescribing vocabulary must be standardized. Acronyms, abbreviations, and brand names must be avoided and steps taken to avoid other sources of confusion in the written orders, such as trailing zeros. Preprinted antineoplastic drug order forms containing checklists can help avoid errors. Manufacturers should be encouraged to avoid or eliminate ambiguities in drug names and dosing information. Patients must be educated about all aspects of their cancer chemotherapy, as patients represent a last line of defense against errors. An interdisciplinary team at each practice site should review every medication error reported. Pharmacists should be involved at all sites where antineoplastic agents are dispensed. Although it may not be possible to eliminate all medication errors in cancer chemotherapy, the risk can be minimized through specific steps. Because of their training and experience, pharmacists should take the lead in this effort.
A regret-induced status-quo bias
Nicolle, A.; Fleming, S.M.; Bach, D.R.; Driver, J.; Dolan, R. J.
2011-01-01
A suboptimal bias towards accepting the ‘status-quo’ option in decision-making is well established behaviorally, but the underlying neural mechanisms are less clear. Behavioral evidence suggests the emotion of regret is higher when errors arise from rejection rather than acceptance of a status-quo option. Such asymmetry in the genesis of regret might drive the status-quo bias on subsequent decisions, if indeed erroneous status-quo rejections have a greater neuronal impact than erroneous status-quo acceptances. To test this, we acquired human fMRI data during a difficult perceptual decision task that incorporated a trial-to-trial intrinsic status-quo option, with explicit signaling of outcomes (error or correct). Behaviorally, experienced regret was higher after an erroneous status-quo rejection compared to acceptance. Anterior insula and medial prefrontal cortex showed increased BOLD signal after such status-quo rejection errors. In line with our hypothesis, a similar pattern of signal change predicted acceptance of the status-quo on a subsequent trial. Thus, our data link a regret-induced status-quo bias to error-related activity on the preceding trial. PMID:21368043
NASA Astrophysics Data System (ADS)
Nunez, F.; Romero, A.; Clua, J.; Mas, J.; Tomas, A.; Catalan, A.; Castellsaguer, J.
2005-08-01
MARES (Muscle Atrophy Research and Exercise System) is a computerized ergometer for neuromuscular research to be flown and installed onboard the International Space Station in 2007. The validity of the data acquired depends on controlling and reducing all significant error sources. One of them is the misalignment of the joint rotation axis with respect to the motor axis. The error induced on the measurements is proportional to the misalignment between the two axes. Therefore, the restraint system's performance is critical [1]. The MARES HRS (Human Restraint System) assures alignment within an acceptable range while performing the exercise (elbow movement: 13.94 mm ± 5.45; knee movement: 22.36 mm ± 6.06) and reproducibility of human positioning (elbow movement: 2.82 mm ± 1.56; knee movement: 7.45 mm ± 4.8). These results allow limiting the measurement errors induced by misalignment.
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve (AUC) for saroglitazar. Healthy-subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and the corresponding AUC(0-t) (i.e., 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of correlations of 1, 2, and 3 concentration-time points with the AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R² > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (the predefined criterion) and the correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC(0-t) prediction of saroglitazar. The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30% and a correlation (r) of at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time-point limited sampling model predicts the exposure of saroglitazar (i.e., AUC(0-t)) within the predefined acceptable bias and imprecision limits. The same model was also used to predict AUC(0-∞). The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
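A generic sketch of how such a limited sampling model is built and validated: regress AUC on the concentrations at the three chosen time points, then compute the prediction-error metrics on a held-out set. The data below are simulated, not the study's:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated concentrations at 0.5, 2 and 8 h plus "true" AUC(0-t) per subject
n = 50
C = rng.lognormal(mean=[1.0, 1.5, 0.5], sigma=0.3, size=(n, 3))
auc = C @ np.array([1.5, 4.0, 10.0]) + rng.normal(0, 2.0, n)

train, test = slice(0, 25), slice(25, 50)       # mimic the 25/25 internal split
X = np.column_stack([np.ones(n), C])            # intercept + 3 time points
coef, *_ = np.linalg.lstsq(X[train], auc[train], rcond=None)

pred = X[test] @ coef
err = (pred - auc[test]) / auc[test] * 100
print(f"MPE  = {err.mean():+.1f}%")                            # bias
print(f"MAPE = {np.abs(err).mean():.1f}%")                     # absolute error
print(f"RMSE = {np.sqrt(np.mean((pred - auc[test])**2)):.2f}") # in AUC units
```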
Acceptance threshold theory can explain occurrence of homosexual behaviour.
Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra
2015-01-01
Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Wasserman, Melanie; Renfrew, Megan R; Green, Alexander R; Lopez, Lenny; Tan-McGrory, Aswita; Brach, Cindy; Betancourt, Joseph R
2014-01-01
Since the 1999 Institute of Medicine (IOM) report To Err is Human, progress has been made in patient safety, but few efforts have focused on safety in patients with limited English proficiency (LEP). This article describes the development, content, and testing of two new evidence-based Agency for Healthcare Research and Quality (AHRQ) tools for LEP patient safety. In the content development phase, a comprehensive mixed-methods approach was used to identify common causes of errors for LEP patients, high-risk scenarios, and evidence-based strategies to address them. Based on our findings, Improving Patient Safety Systems for Limited English Proficient Patients: A Guide for Hospitals contains recommendations to improve detection and prevention of medical errors across diverse populations, and TeamSTEPPS Enhancing Safety for Patients with Limited English Proficiency Module trains staff to improve safety through team communication and incorporating interpreters in the care process. The Hospital Guide was validated with leaders in quality and safety at diverse hospitals, and the TeamSTEPPS LEP module was field-tested in varied settings within three hospitals. Both tools were found to be implementable, acceptable to their audiences, and conducive to learning. Further research on the impact of the combined use of the guide and module would shed light on their value as a multifaceted intervention. © 2014 National Association for Healthcare Quality.
Cappella, Annalisa; Amadasi, Alberto; Castoldi, Elisa; Mazzarelli, Debora; Gaudio, Daniel; Cattaneo, Cristina
2014-11-01
The distinction between perimortem and postmortem fractures is an important challenge for forensic anthropology. Such a crucial task is presently based on macro-morphological criteria widely accepted in the scientific community. However, several limits affect these parameters which have not yet been investigated thoroughly. This study aims at highlighting the pitfalls and errors in evaluating perimortem or postmortem fractures. Two trained forensic anthropologists were asked to classify 210 fractures of known origin in four skeletons (three victims of blunt force trauma and one natural death) as perimortem, postmortem, or dubious, twice in 6 months in order to assess intraobserver error also. Results show large errors, ranging from 14.8 to 37% for perimortem fractures and from 5.5 to 14.8% for postmortem ones; more than 80% of errors concerned trabecular bone. This supports the need for more objective and reliable criteria for a correct assessment of peri- and postmortem bone fractures. © 2014 American Academy of Forensic Sciences.
Analysis of tractable distortion metrics for EEG compression applications.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-07-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
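The two criteria side by side, as a brief sketch: PRD is relative to signal energy, while RMSE stays in the signal's own units (microvolts for EEG), which is the paper's point about interpretability. The toy trace and noise level are invented:

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference (relative, dimensionless)."""
    e = original - reconstructed
    return 100 * np.sqrt(np.sum(e**2) / np.sum(original**2))

def rmse(original, reconstructed):
    """Root-mean-square error, in the same units as the signal (e.g. uV)."""
    return np.sqrt(np.mean((original - reconstructed)**2))

t = np.linspace(0, 1, 256)
eeg = 50 * np.sin(2 * np.pi * 10 * t) + 20 * np.sin(2 * np.pi * 3 * t)  # toy trace, uV
recon = eeg + np.random.default_rng(2).normal(0, 3, eeg.size)           # coding noise
print(f"PRD  = {prd(eeg, recon):.2f}%")
print(f"RMSE = {rmse(eeg, recon):.2f} uV")  # directly comparable to noise guidelines
```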
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
Evaluating mixed samples as a source of error in non-invasive genetic studies using microsatellites
Roon, David A.; Thomas, M.E.; Kendall, K.C.; Waits, L.P.
2005-01-01
The use of noninvasive genetic sampling (NGS) for surveying wild populations is increasing rapidly. Currently, only a limited number of studies have evaluated potential biases associated with NGS. This paper evaluates the potential errors associated with analysing mixed samples drawn from multiple animals. Most NGS studies assume that mixed samples will be identified and removed during the genotyping process. We evaluated this assumption by creating 128 mixed samples of extracted DNA from brown bear (Ursus arctos) hair samples. These mixed samples were genotyped and screened for errors at six microsatellite loci according to protocols consistent with those used in other NGS studies. Five mixed samples produced acceptable genotypes after the first screening. However, all mixed samples produced multiple alleles at one or more loci, amplified as only one of the source samples, or yielded inconsistent electropherograms by the final stage of the error-checking process. These processes could potentially reduce the number of individuals observed in NGS studies, but errors should be conservative within demographic estimates. Researchers should be aware of the potential for mixed samples and carefully design gel analysis criteria and error checking protocols to detect mixed samples.
Maeda, Takuma; Hattori, Kohshi; Sumiyoshi, Miho; Kanazawa, Hiroko; Ohnishi, Yoshihiko
2018-06-01
The fourth-generation FloTrac/Vigileo™ improved its algorithm to follow changes in systemic vascular resistance index (SVRI). This revision may improve the accuracy and trending ability of cardiac index (CI) even in patients undergoing abdominal aortic aneurysm (AAA) surgery, in which aortic clamping causes drastic changes in SVRI. The purpose of this study is to elucidate the accuracy and trending ability of the fourth-generation FloTrac/Vigileo™ in patients undergoing AAA surgery by comparing the FloTrac/Vigileo™-derived CI (CI_FT) with that measured by three-dimensional echocardiography (CI_3D). Twenty-six patients undergoing elective AAA surgery were included in this study. CI_FT and CI_3D were determined simultaneously at eight points, including before and after aortic clamping. We used CI_3D as the reference method. In the Bland-Altman analysis, CI_FT had wide limits of agreement with CI_3D, showing a percentage error of 46.7%. Subgroup analysis showed that the percentage error between CI_3D and CI_FT was 56.3% in patients with cardiac index < 2.5 L/min/m² and 28.4% in patients with cardiac index ≥ 2.5 L/min/m². SVRI was significantly higher in patients with cardiac index < 2.5 L/min/m² (1703 ± 330 vs. 2757 ± 798; p < 0.001). The tracking ability of the fourth-generation FloTrac/Vigileo™ after aortic clamping was not clinically acceptable (26.9%). The degree of accuracy of the fourth-generation FloTrac/Vigileo™ in patients undergoing AAA surgery was not acceptable. The tracking ability of the fourth-generation FloTrac/Vigileo™ after aortic clamping was below the acceptable limit.
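For readers unfamiliar with the percentage-error convention used here (and in this document's opening abstract), a minimal Python sketch of a Bland-Altman comparison for paired cardiac index readings (illustrative; array names are hypothetical):

    import numpy as np

    def bland_altman(reference, test):
        # reference, test: paired CI readings of equal length, e.g. CI_3D and CI_FT.
        diff = test - reference
        bias = diff.mean()                                # mean difference
        sd = diff.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)        # 95% limits of agreement
        pct_error = 100.0 * 1.96 * sd / reference.mean()  # percentage error
        return bias, loa, pct_error

A percentage error above the conventional ±30% cutoff, as found in this study, is read as unacceptable agreement; note that this simple form pools all time points and ignores repeated measures within patients.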
Korte, Erik A; Pozzi, Nicole; Wardrip, Nina; Ayyoubi, M Tayyeb; Jortani, Saeed A
2018-07-01
There are 13 million blood transfusions each year in the US. Limitations in the donor pool, storage capabilities, mass casualties, access in remote locations and reactivity of donors all limit the availability of transfusable blood products to patients. HBOC-201 (Hemopure®) is a second-generation glutaraldehyde-polymer of bovine hemoglobin, which can serve as an "oxygen bridge" to maintain oxygen carrying capacity while transfusion products are unavailable. Hemopure presents the advantages of extended shelf life, ambient storage, and limited reactive potential, but its extracellular location can also cause significant interference in modern laboratory analyzers, similar to severe hemolysis. Observed error in 26 commonly measured analytes was determined on 4 different analytical platforms in plasma from a patient therapeutically transfused with Hemopure, as well as in donor blood spiked with Hemopure at a level equivalent to the therapeutic loading dose (10% v/v). Significant negative bias (error ratio beyond -0.5 of the total allowable error, tAE) was reported in 23/104 assays (22.1%), significant positive bias (error ratio >0.5 tAE) in 26/104 assays (25.0%), and acceptable bias (error ratio between -0.5 tAE and 0.5 tAE) in 44/104 (42.3%). Analysis failed in the presence of Hemopure in 11/104 (10.6%). Observed error is further subdivided by platform, wavelength, dilution and reaction method. Administration of Hemopure (or other hemoglobin-based oxygen carriers) presents a challenge to laboratorians tasked with analyzing patient specimens. We provide laboratorians with a reference to evaluate patient samples, select optimal analytical platforms for specific analytes, and predict possible bias beyond the 4 analytical platforms included in this study. Copyright © 2018 Elsevier B.V. All rights reserved.
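The acceptability criterion used above reduces to a small Python helper (a sketch under the stated ±0.5 tAE thresholds; names and values are illustrative):

    def classify_bias(observed, baseline, total_allowable_error):
        # Error ratio: observed interference bias as a fraction of tAE.
        error_ratio = (observed - baseline) / total_allowable_error
        if error_ratio > 0.5:
            return "significant positive bias (> 0.5 tAE)"
        if error_ratio < -0.5:
            return "significant negative bias (< -0.5 tAE)"
        return "acceptable bias (within +/- 0.5 tAE)"

    # Hypothetical assay: spiked result 142 vs baseline 135 with tAE = 10 units.
    print(classify_bias(142.0, 135.0, 10.0))   # significant positive bias (ratio 0.7)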
Operational Interventions to Maintenance Error
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki
1997-01-01
A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.
Reduction of Maintenance Error Through Focused Interventions
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)
1997-01-01
It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.
Cohen, Trevor; Blatter, Brett; Almeida, Carlos; Patel, Vimla L.
2007-01-01
Objective Contemporary error research suggests that the quest to eradicate error is misguided. Error commission, detection, and recovery are an integral part of cognitive work, even at the expert level. In collaborative workspaces, the perception of potential error is directly observable: workers discuss and respond to perceived violations of accepted practice norms. As perceived violations are captured and corrected preemptively, they do not fit Reason’s widely accepted definition of error as “failure to achieve an intended outcome.” However, perceived violations suggest the averting of potential error, and consequently have implications for error prevention. This research aims to identify and describe perceived violations of the boundaries of accepted procedure in a psychiatric emergency department (PED), and how they are resolved in practice. Design Clinical discourse from fourteen PED patient rounds was audio-recorded. Excerpts from recordings suggesting perceived violations or incidents of miscommunication were extracted and analyzed using qualitative coding methods. The results are interpreted in relation to prior research on vulnerabilities to error in the PED. Results Thirty incidents of perceived violations or miscommunication are identified and analyzed. Of these, only one medication error was formally reported. Other incidents would not have been detected by a retrospective analysis. Conclusions The analysis of perceived violations expands the data available for error analysis beyond occasional reported adverse events. These data are prospective: responses are captured in real time. This analysis supports a set of recommendations to improve the quality of care in the PED and other critical care contexts. PMID:17329728
Measuring human remains in the field: Grid technique, total station, or MicroScribe?
Sládek, Vladimír; Galeta, Patrik; Sosna, Daniel
2012-09-10
Although three-dimensional (3D) coordinates for human intra-skeletal landmarks are among the most important data that anthropologists have to record in the field, little is known about the reliability of the various measuring techniques. We compared the reliability of three techniques used for 3D measurement of human remains in the field: the grid technique (GT), total station (TS), and MicroScribe (MS). We measured 365 field osteometric points on 12 skeletal sequences excavated at the Late Medieval/Early Modern churchyard in Všeruby, Czech Republic. We compared intra-observer, inter-observer, and inter-technique variation using the mean difference (MD), mean absolute difference (MAD), standard deviation of difference (SDD), and limits of agreement (LA). All three measuring techniques can be used when the accepted error range is measured in centimeters. When a range of accepted error measurable in millimeters is needed, MS offers the best solution. TS can achieve the same reliability as MS, but only when the laser beam is accurately pointed into the center of the prism. When the prism is not accurately oriented, TS produces unreliable data. TS is more sensitive to initialization than is MS. GT measures the human skeleton with acceptable reliability for general purposes but insufficiently when highly accurate skeletal data are needed. We observed high inter-technique variation, indicating that just one technique should be used when spatial data from one individual are recorded. Subadults are measured with slightly lower error than are adults. The effect of maximum excavated skeletal length has little practical significance in field recording. When MS is not available, we offer practical suggestions that can help to increase reliability when measuring the human skeleton in the field. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
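The four agreement statistics named above are straightforward to compute; a minimal Python sketch (illustrative, with hypothetical paired coordinate measurements in mm):

    import numpy as np

    def agreement_stats(a, b):
        # a, b: paired measurements of the same landmarks by two
        # techniques or observers (1-D arrays of equal length).
        d = a - b
        md = d.mean()                            # mean difference (MD, bias)
        mad = np.abs(d).mean()                   # mean absolute difference (MAD)
        sdd = d.std(ddof=1)                      # SD of differences (SDD)
        la = (md - 1.96 * sdd, md + 1.96 * sdd)  # 95% limits of agreement (LA)
        return md, mad, sdd, la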
The Michelson Stellar Interferometer Error Budget for Triple Triple-Satellite Configuration
NASA Technical Reports Server (NTRS)
Marathay, Arvind S.; Shiefman, Joe
1996-01-01
This report presents the results of a study of the instrumentation tolerances for a conventional style Michelson stellar interferometer (MSI). The method used to determine the tolerances was to determine the change, due to the instrument errors, in the measured fringe visibility and phase relative to the ideal values. The ideal values are those values of fringe visibility and phase that would be measured by a perfect MSI and are attributable solely to the object being detected. Once the functional relationship for changes in visibility and phase as a function of various instrument errors is understood it is then possible to set limits on the instrument errors in order to ensure that the measured visibility and phase are different from the ideal values by no more than some specified amount. This was done as part of this study. The limits we obtained are based on a visibility error of no more than 1% and a phase error of no more than 0.063 radians (1% of 2π radians). The choice of these 1% limits is supported in the literature. The approach employed in the study involved the use of ASAP (Advanced System Analysis Program) software provided by Breault Research Organization, Inc., in conjunction with parallel analytical calculations. The interferometer accepts object radiation into two separate arms each consisting of an outer mirror, an inner mirror, a delay line (made up of two moveable mirrors and two static mirrors), and a 10:1 afocal reduction telescope. The radiation coming out of both arms is incident on a slit plane which is opaque with two openings (slits). One of the two slits is centered directly under one of the two arms of the interferometer and the other slit is centered directly under the other arm. The slit plane is followed immediately by an ideal combining lens which images the radiation in the fringe plane (also referred to subsequently as the detector plane).
Tucker, Neil; Reid, Duncan; McNair, Peter
2007-01-01
The slump test is a tool to assess the mechanosensitivity of the neuromeningeal structures within the vertebral canal. While some studies have investigated the reliability of aspects of this test within the same day, few have assessed the reliability across days. Therefore, the purpose of this pilot study was to investigate reliability when measuring active knee extension range of motion (AROM) in a modified slump test position within trials on a single day and across days. Ten male and ten female asymptomatic subjects, ages 20–49 (mean age 30.1, SD 6.4) participated in the study. Knee extension AROM in a modified slump position with the cervical spine in a flexed position and then in an extended position was measured via three trials on two separate days. Across three trials, knee extension AROM increased significantly with a mean magnitude of 2° within days for both cervical spine positions (P<0.05). The findings showed that there was no statistically significant difference in knee extension AROM measurements across days (P>0.05). The intraclass correlation coefficients for the mean of the three trials across days were 0.96 (lower limit 95% CI: 0.90) with the cervical spine flexed and 0.93 (lower limit 95% CI: 0.83) with cervical extension. Measurement error was calculated by way of the typical error and 95% limits of agreement, and visually represented in Bland and Altman plots. The typical error for the cervical flexed and extended positions averaged across trials was 2.6° and 3.3°, respectively. The limits of agreement were narrow, and the Bland and Altman plots also showed minimal bias in the joint angles across days with a random distribution of errors across the range of measured angles. This study demonstrated that knee extension AROM could be reliably measured across days in subjects without pathology and that the measurement error was acceptable. Implications of variability over multiple trials are discussed. The modified set-up for the test using the Kincom dynamometer and elevated thigh position may be useful to clinical researchers in determining the mechanosensitivity of the nervous system. PMID:19066666
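A minimal Python sketch of the reliability statistics reported above (illustrative; the typical error follows the common definition as the SD of the day-to-day differences divided by √2, which may differ in detail from the authors' computation):

    import numpy as np

    def day_to_day_reliability(day1, day2):
        # day1, day2: paired knee-extension AROM angles (degrees) across days.
        diff = day2 - day1
        bias = diff.mean()
        sd = diff.std(ddof=1)
        typical_error = sd / np.sqrt(2)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
        return typical_error, loa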
Fundamental Physical Limits for the Size of Future Planetary Surface Exploration Systems
NASA Astrophysics Data System (ADS)
Andrews, F.; Hobbs, S. E.; Honstvet, I.; Snelling, M.
2004-04-01
With the current interest in the potential use of Nanotechnology for spacecraft, it becomes increasingly likely that environmental sensor probes, such as the "lab-on-a-chip" concept, will take advantage of this technology and become orders of magnitude smaller than current sensor systems. This paper begins to investigate how small these systems could theoretically become, and what governing laws and limiting factors determine that minimum size. The investigation focuses on the three primary subsystems for a sensor network of this nature: Sensing, Information Processing, and Communication. In general, there are few fundamental physical laws that limit the size of the sensor system. Limits tend to be driven by factors other than the laws of physics. These include user requirements, such as the acceptable probability of error, and the potential external environment.
Test load verification through strain data analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.; Harrington, F.
1995-01-01
A traditional binding acceptance criterion for polycrystalline structures is the experimental verification of the ultimate factor of safety. At fracture, the induced strain is inelastic and about an order of magnitude greater than the design value for the maximum expected operational limit. In this extremely strained condition, the structure may rotate and displace under the applied verification load so as to unknowingly distort the load transfer into the static test article. Testing may then result in erroneously accepting a submarginal design or rejecting a reliable one. A technique was developed to identify, monitor, and assess the load transmission error using strain data measured on two back-to-back surfaces. The technique is programmed for expediency and convenience. Though the method was developed to support affordable aerostructures, it is also applicable to most high-performance air and surface transportation structural systems.
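One common way to exploit back-to-back surface strain pairs (offered as a hedged sketch of the general idea, not necessarily the paper's exact formulation) is to split them into membrane and bending components; unexpected growth of the bending component under a nominally axial verification load would flag distorted load transfer:

    def membrane_bending(strain_front, strain_back):
        # Back-to-back surface strains measured at the same station.
        membrane = 0.5 * (strain_front + strain_back)  # axial (mean) strain
        bending = 0.5 * (strain_front - strain_back)   # bending (difference) strain
        return membrane, bending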
Heterogenic Solid Biofuel Sampling Methodology and Uncertainty Associated with Prompt Analysis
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Patiño, David; Collazo, Joaquín
2010-01-01
Accurate determination of the properties of biomass is of particular interest in studies on biomass combustion or cofiring. The aim of this paper is to develop a methodology for prompt analysis of heterogeneous solid fuels with an acceptable degree of accuracy. Special care must be taken with the sampling procedure to achieve an acceptable degree of error and low statistical uncertainty. A sampling and error determination methodology for prompt analysis is presented and validated. Two approaches for the propagation of errors are also given, and some comparisons are made in order to determine which may be better in this context. Results show generally low, acceptable levels of uncertainty, demonstrating that the samples obtained in the process are representative of the overall fuel composition. PMID:20559506
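The two propagation approaches can be contrasted with a minimal Python sketch (illustrative; the derived fuel property and the numbers are hypothetical, not from the paper):

    import numpy as np

    # Hypothetical derived property q = 0.6*a + 0.4*b from two measured means.
    a_mean, a_sd = 18.2, 0.4
    b_mean, b_sd = 15.1, 0.7

    # 1) Analytic (first-order) propagation for a linear combination.
    q_sd_analytic = np.sqrt((0.6 * a_sd) ** 2 + (0.4 * b_sd) ** 2)

    # 2) Monte Carlo propagation: sample the inputs, inspect the output spread.
    rng = np.random.default_rng(1)
    q_samples = (0.6 * rng.normal(a_mean, a_sd, 100_000)
                 + 0.4 * rng.normal(b_mean, b_sd, 100_000))
    print(q_sd_analytic, q_samples.std(ddof=1))   # the two estimates agree closely

For a linear combination of independent errors the two approaches coincide; Monte Carlo becomes the safer choice once the property depends non-linearly on the measurements.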
Customization of user interfaces to reduce errors and enhance user acceptance.
Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram
2014-03-01
Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Cargo Movement Operations System (CMOS). Software Test Description
1990-10-28
Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David
2016-01-01
Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation, where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes as a result of individual human factors and within surgical teams should be better recognised and emphasised. Attitudes towards, and acceptance of, preoperative briefing have improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, highlight how they can lead to error, and show how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"? Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C
2018-06-01
Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
Rectifying calibration error of Goldmann applanation tonometer is easy!
Choudhari, Nikhil S; Moorthy, Krishna P; Tungikar, Vinod B; Kumar, Mohan; George, Ronnie; Rao, Harsha L; Senthil, Sirisha; Vijaya, Lingam; Garudadri, Chandra Sekhar
2014-11-01
Purpose: The Goldmann applanation tonometer (GAT) is the current gold standard tonometer. However, its calibration error is common and can go unnoticed in clinics. Repair by the manufacturer has limitations. The purpose of this report is to describe a self-taught technique for rectifying the calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique of rectification of calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of weights when lubrication alone did not suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60-mm Hg testing levels, respectively). Results: Twelve out of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20-mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counter-weight. Conclusions: Rectification of calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of this gold standard tonometer.
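The tolerance definition quoted above reduces to a small lookup; a minimal Python sketch (illustrative):

    def within_calibration_tolerance(level_mmHg, error_mmHg):
        # Acceptable GAT calibration error: +/-2, +/-3 and +/-4 mm Hg at the
        # 0, 20 and 60 mm Hg testing levels, respectively (per the abstract).
        tolerance = {0: 2.0, 20: 3.0, 60: 4.0}[level_mmHg]
        return abs(error_mmHg) <= tolerance

    print(within_calibration_tolerance(20, 2.5))    # True: within +/-3 mm Hg
    print(within_calibration_tolerance(20, 20.0))   # False: grossly out of calibration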
Ruangsetakit, Varee
2015-11-01
To re-examine the relative accuracy of intraocular lens (IOL) power calculation by immersion ultrasound biometry (IUB) and partial coherence interferometry (PCI), based on a new approach that limits its interest to the cases in which the IUB and PCI IOL assignments disagree. Prospective observational study of 108 eyes that underwent cataract surgery at Taksin Hospital. Two halves of the randomly chosen sample eyes were implanted with the IUB- and PCI-assigned lenses, respectively. Postoperative refractive errors were measured in the fifth week. The more accurate calculation was identified by significantly smaller mean absolute errors (MAEs) and root mean squared errors (RMSEs) away from emmetropia. The distributions of the errors were examined to ensure that the higher accuracy was significant clinically as well. The MAE and RMSE were smaller for PCI (0.5106 diopter (D) and 0.6037 D) than for IUB (0.7000 D and 0.8062 D). The higher accuracy was principally attributable to negative errors, i.e., myopia: the MAEs for the negative errors of IUB and PCI were 0.7955 D and 0.5185 D, and the corresponding RMSEs were 0.8562 D and 0.5853 D. These differences were significant. 72.34% of PCI errors fell within the clinically accepted range of ±0.50 D, whereas 50% of IUB errors did. PCI's higher accuracy was significant statistically and clinically, meaning that lens implantation based on PCI's assignments could improve postoperative outcomes over those based on IUB's assignments.
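The accuracy measures used above are easy to reproduce; a minimal Python sketch (illustrative; the input array of postoperative refractive errors is hypothetical):

    import numpy as np

    def refraction_accuracy(errors_diopters):
        # errors_diopters: postoperative refractive errors relative to emmetropia (D).
        e = np.asarray(errors_diopters, dtype=float)
        mae = np.abs(e).mean()                              # mean absolute error
        rmse = np.sqrt(np.mean(e ** 2))                     # root mean squared error
        within_half_d = 100.0 * np.mean(np.abs(e) <= 0.50)  # % within +/-0.50 D
        return mae, rmse, within_half_d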
Improving ROLO lunar albedo model using PLEIADES-HR satellites extra-terrestrial observations
NASA Astrophysics Data System (ADS)
Meygret, Aimé; Blanchet, Gwendoline; Colzy, Stéphane; Gross-Colzy, Lydwine
2017-09-01
The accurate on-orbit radiometric calibration of optical sensors has become a challenge for space agencies, which have developed different techniques involving on-board calibration systems, ground targets or extra-terrestrial targets. The combination of different approaches and targets is recommended whenever possible and necessary to reach or demonstrate a high accuracy. Among these calibration targets, the moon is widely used through the well-known ROLO (RObotic Lunar Observatory) model developed by USGS. A substantial and internationally recognized body of work was done to characterize the moon albedo, which is very stable. However, the increasingly demanding needs for calibration accuracy have reached the limitations of the model. This paper deals with two main limitations: the residual error when modelling the phase angle dependency, and the absolute accuracy of the model, which is no longer acceptable for the on-orbit calibration of radiometers. Thanks to the agility of the PLEIADES high-resolution satellites, a significant database of moon and star images was acquired, making it possible to show the limitations of the ROLO model and to characterize the errors. The residual phase angle dependency is modelled using PLEIADES 1B images acquired over several quasi-complete moon cycles with a phase angle varying by less than 1°. The residual absolute albedo error is modelled using PLEIADES 1A images taken over stars and the moon. The accurate knowledge of the stars' spectral irradiance is transferred to the moon's spectral albedo, using the satellite as a transfer radiometer. This paper describes the data set used, the ROLO model residual errors and their modelling, the quality of the proposed correction, and shows some calibration results using this improved model.
Gibson, Eli; Fenster, Aaron; Ward, Aaron D
2013-10-01
Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
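A simplified version of such a power calculation can be sketched in Python (an illustration only: registration error is folded in as an independent additive variance term, and the within-region signal correlations that the paper's formula accommodates are ignored here):

    from scipy.stats import norm

    def required_subjects(delta, sigma_signal, sigma_registration,
                          alpha=0.05, power=0.8):
        # delta: minimum detectable difference between normal and pathologic
        # regions; one paired region comparison per subject is assumed.
        sigma_total = (sigma_signal ** 2 + sigma_registration ** 2) ** 0.5
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return (z * sigma_total / delta) ** 2

    # Hypothetical values: halving the registration error directly reduces
    # the required sample size.
    print(required_subjects(delta=0.5, sigma_signal=0.8, sigma_registration=0.4))
    print(required_subjects(delta=0.5, sigma_signal=0.8, sigma_registration=0.2))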
Allegrini, Maria-Cristina; Canullo, Roberto; Campetella, Giandiego
2009-04-01
Knowledge of accuracy and precision rates is particularly important for long-term studies. Vegetation assessments include many sources of error related to overlooking and misidentification, which are usually influenced by factors such as cover estimate subjectivity, observer-biased species lists and the experience of the botanist. The vegetation assessment protocol adopted in the Italian forest monitoring programme (CONECOFOR) contains a Quality Assurance programme. The paper presents the different phases of QA and identifies the five main critical points of the whole protocol as sources of random or systematic errors. Examples of Measurement Quality Objectives (MQOs) expressed as Data Quality Limits (DQLs) are given for vascular plant cover estimates, in order to establish the reproducibility of the data. Quality control activities were used to determine the "distance" between the surveyor teams and the control team. Selected data were acquired during the training and inter-calibration courses. In particular, an index of average cover by species groups was used to evaluate the random error (CV 4%) as the dispersion around the "true values" of the control team. The systematic error in the evaluation of species composition, caused by overlooking or misidentification of species, was calculated as the pseudo-turnover rate; detailed species censuses on smaller sampling units were accepted, as the pseudo-turnover always fell below the established 25% threshold, and species density scores recorded at community level (100 m² surface) rarely exceeded that limit.
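The pseudo-turnover rate mentioned above compares two observers' species lists; a minimal Python sketch (illustrative, following the usual definition of pseudo-turnover as the percentage of species recorded by only one of the two observers):

    def pseudo_turnover(species_a, species_b):
        # species_a, species_b: species lists from two observers of the same plot.
        a, b = set(species_a), set(species_b)
        only_one = len(a ^ b)           # species recorded by only one observer
        return 100.0 * only_one / (len(a) + len(b))

    print(pseudo_turnover({"Quercus robur", "Poa annua", "Viola hirta"},
                          {"Quercus robur", "Poa annua"}))   # 20.0 (%), below 25%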
Problem of data quality and the limitations of the infrastructure approach
NASA Astrophysics Data System (ADS)
Behlen, Fred M.; Sayre, Richard E.; Rackus, Edward; Ye, Dingzhong
1998-07-01
The 'Infrastructure Approach' is a PACS implementation methodology wherein the archive, network and information systems interfaces are acquired first, and workstations are installed later. The approach allows building a history of archived image data, so that most prior examinations are available in digital form when workstations are deployed. A limitation of the Infrastructure Approach is that the deferred use of digital image data defeats many data quality management functions that are provided automatically by human mechanisms when data is immediately used for the completion of clinical tasks. If the digital data is used solely for archiving while reports are interpreted from film, the radiologist serves only as a check against lost films, and another person must be designated as responsible for the quality of the digital data. Data from the Radiology Information System and the PACS were analyzed to assess the nature and frequency of system and data quality errors. The error level was found to be acceptable if supported by auditing and error resolution procedures requiring additional staff time, and in any case was better than the loss rate of a hardcopy film archive. It is concluded that the problem of data quality compromises, but does not negate, the value of the Infrastructure Approach; that the approach is best employed only to a limited extent; and that any phased PACS implementation should include a substantial complement of workstations dedicated to softcopy interpretation for at least some applications, with full deployment following not long thereafter.
Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure Validation Simulation Study
NASA Technical Reports Server (NTRS)
Murdoch, Jennifer L.; Bussink, Frank J. L.; Chamberlain, James P.; Chartrand, Ryan C.; Palmer, Michael T.; Palmer, Susan O.
2008-01-01
The Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure (ITP) Validation Simulation Study investigated the viability of an ITP designed to enable oceanic flight level changes that would not otherwise be possible. Twelve commercial airline pilots with current oceanic experience flew a series of simulated scenarios involving either standard or ITP flight level change maneuvers and provided subjective workload ratings, assessments of ITP validity and acceptability, and objective performance measures associated with the appropriate selection, request, and execution of ITP flight level change maneuvers. In the majority of scenarios, subject pilots correctly assessed the traffic situation, selected an appropriate response (i.e., either a standard flight level change request, an ITP request, or no request), and executed their selected flight level change procedure, if any, without error. Workload ratings for ITP maneuvers were acceptable and not substantially higher than for standard flight level change maneuvers, and, for the majority of scenarios and subject pilots, subjective acceptability ratings and comments for ITP were generally high and positive. Qualitatively, the ITP was found to be valid and acceptable. However, the error rates for ITP maneuvers were higher than for standard flight level changes, and these errors may have design implications for both the ITP and the study's prototype traffic display. These errors and their implications are discussed.
Seidling, Hanna M; Phansalkar, Shobha; Seger, Diane L; Paterno, Marilyn D; Shaykevich, Shimon; Haefeli, Walter E
2011-01-01
Background Clinical decision support systems can prevent knowledge-based prescription errors and improve patient outcomes. The clinical effectiveness of these systems, however, is substantially limited by poor user acceptance of presented warnings. To enhance alert acceptance it may be useful to quantify the impact of potential modulators of acceptance. Methods We built a logistic regression model to predict alert acceptance of drug–drug interaction (DDI) alerts in three different settings. Ten variables from the clinical and human factors literature were evaluated as potential modulators of provider alert acceptance. ORs were calculated for the impact of knowledge quality, alert display, textual information, prioritization, setting, patient age, dose-dependent toxicity, alert frequency, alert level, and required acknowledgment on acceptance of the DDI alert. Results 50 788 DDI alerts were analyzed. Providers accepted only 1.4% of non-interruptive alerts. For interruptive alerts, user acceptance positively correlated with frequency of the alert (OR 1.30, 95% CI 1.23 to 1.38), quality of display (4.75, 3.87 to 5.84), and alert level (1.74, 1.63 to 1.86). Alert acceptance was higher in inpatients (2.63, 2.32 to 2.97) and for drugs with dose-dependent toxicity (1.13, 1.07 to 1.21). The textual information influenced the mode of reaction and providers were more likely to modify the prescription if the message contained detailed advice on how to manage the DDI. Conclusion We evaluated potential modulators of alert acceptance by assessing content and human factors issues, and quantified the impact of a number of specific factors which influence alert acceptance. This information may help improve clinical decision support systems design. PMID:21571746
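The modelling step described here is an ordinary logistic regression whose coefficients are exponentiated into odds ratios; a minimal Python sketch with synthetic data (illustrative; the predictors merely stand in for the modulators named above):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 500
    X = np.column_stack([
        rng.integers(0, 2, n),   # inpatient setting (0/1)
        rng.integers(1, 4, n),   # alert level (1-3)
        rng.random(n),           # normalized alert frequency
    ]).astype(float)
    logit_p = -2.0 + 0.9 * X[:, 0] + 0.55 * X[:, 1] + 0.3 * X[:, 2]
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    odds_ratios = np.exp(model.params)    # OR per predictor, as reported above
    ci95 = np.exp(model.conf_int())       # 95% CIs on the OR scale
    print(odds_ratios, ci95, sep="\n")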
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
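The recipe can be illustrated for the simplest Poisson-counting case (a sketch, not the paper's general algorithm): fix a detection threshold from the Type I error, then scan for the source intensity whose detection probability reaches the required power:

    from scipy.stats import poisson

    def detection_threshold(background, alpha=0.001):
        # Smallest count n_star with P(N >= n_star | background) <= alpha.
        n_star = int(poisson.ppf(1.0 - alpha, background)) + 1
        while poisson.sf(n_star - 1, background) > alpha:
            n_star += 1
        return n_star

    def upper_limit(background, alpha=0.001, beta=0.5):
        # Smallest intensity s detected with probability >= 1 - beta at the
        # alpha-level threshold: a property of the detection procedure,
        # not of any particular source.
        n_star = detection_threshold(background, alpha)
        s = 0.0
        while poisson.sf(n_star - 1, background + s) < 1.0 - beta:
            s += 0.01
        return s

    print(detection_threshold(3.0), upper_limit(3.0))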
Vajda, E G; Skedros, J G; Bloebaum, R D
1998-10-01
Backscattered electron (BSE) imaging has proven to be a useful method for analyzing the mineral distribution in microscopic regions of bone. However, an accepted method of standardization has not been developed, limiting the utility of BSE imaging for truly quantitative analysis. Previous work has suggested that BSE images can be standardized by energy-dispersive x-ray spectrometry (EDX). Unfortunately, EDX-standardized BSE images tend to underestimate the mineral content of bone when compared with traditional ash measurements. The goal of this study is to investigate the nature of the deficit between EDX-standardized BSE images and ash measurements. A series of analytical standards, ashed bone specimens, and unembedded bone specimens were investigated to determine the source of the deficit previously reported. The primary source of error was found to be inaccurate ZAF corrections to account for the organic phase of the bone matrix. Conductive coatings, methylmethacrylate embedding media, and minor elemental constituents in bone mineral introduced negligible errors. It is suggested that the errors would remain constant and an empirical correction could be used to account for the deficit. However, extensive preliminary testing of the analysis equipment is essential.
Sánchez-Durán, José A; Hidalgo-López, José A; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando
2015-08-19
Tactile sensors suffer from many types of interference and errors like crosstalk, non-linearity, drift or hysteresis; therefore, calibration should be carried out to compensate for these deviations. However, this procedure is difficult in sensors mounted on artificial hands for robots or prosthetics, for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. Furthermore, this intensive and complex calibration could be less determinant, or at least simpler. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters may be less affected by common errors and interferences, or at least their variations may be on the order of those caused by accepted limitations, such as reduced spatial resolution. This paper shows results from experiments that support this idea. The experiments are carried out with a high-performance commercial sensor as well as with a low-cost, error-prone sensor built with a common procedure in robotics.
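The image moments referred to above are simple sums over the taxel array; a minimal Python sketch (illustrative, with a hypothetical 3 x 3 pressure map):

    import numpy as np

    def image_moments(tactile_image):
        p = np.asarray(tactile_image, dtype=float)
        rows, cols = np.indices(p.shape)
        m00 = p.sum()                        # zeroth moment: total contact force
        m10 = (rows * p).sum()               # first moments
        m01 = (cols * p).sum()
        centroid = (m10 / m00, m01 / m00)    # contact centroid on the array
        return m00, centroid

    print(image_moments([[0, 1, 0], [0, 4, 1], [0, 1, 0]]))

Small calibration-like perturbations of individual taxels move these aggregate values far less than they move the raw readings, which is the intuition behind the paper's argument.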
Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie
2014-01-01
This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).
Automatically generated acceptance test: A software reliability experiment
NASA Technical Reports Server (NTRS)
Protzel, Peter W.
1988-01-01
This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.
Global and regional kinematics with GPS
NASA Technical Reports Server (NTRS)
King, Robert W.
1994-01-01
The inherent precision of the doubly differenced phase measurement and the low cost of instrumentation made GPS the space geodetic technique of choice for regional surveys as soon as the constellation reached acceptable geometry in the area of interest: 1985 in western North America, the early 1990's in most of the world. Instrument and site-related errors for horizontal positioning are usually less than 3 mm, so that the dominant source of error is uncertainty in the reference frame defined by the satellites' orbits and the tracking stations used to determine them. Prior to about 1992, when the tracking network for most experiments was globally sparse, the number of fiducial sites or the level at which they could be tied to an SLR or VLBI reference frame usually set the accuracy limit. Recently, with a global network of over 30 stations, the limit is set more often by deficiencies in models for non-gravitational forces acting on the satellites. For regional networks in the northern hemisphere, reference frame errors are currently about 3 parts per billion (ppb) in horizontal position, allowing centimeter-level accuracies over intercontinental distances and less than 1 mm for a 100 km baseline. The accuracy of GPS measurements for monitoring height variations is generally 2-3 times worse than for horizontal motions. As for VLBI, the primary source of error is unmodeled fluctuations in atmospheric water vapor, but both reference frame uncertainties and some instrument errors are more serious for vertical than for horizontal measurements. Under good conditions, daily repeatabilities at the level of 10 mm rms were achieved. This paper will summarize the current accuracy of GPS measurements and their implication for the use of SLR to study regional kinematics.
Characterisation of false-positive observations in botanical surveys
2017-01-01
Errors in botanical surveying are a common problem. The presence of a species is easily overlooked, leading to false absences, while misidentifications and other mistakes lead to false-positive observations. While it is common knowledge that these errors occur, there are few data that can be used to quantify and describe them. Here we characterise false-positive errors for a controlled set of surveys conducted as part of a field identification test of botanical skill. Surveys were conducted at sites with a verified list of vascular plant species. The candidates were asked to list all the species they could identify in a defined botanically rich area. They were told beforehand that their final score would be the sum of the correct species they listed, but that false-positive errors would count against their overall grade. The number of errors varied considerably between people: some people created a high proportion of false-positive errors, and these people were scattered across all skill levels. Therefore, a person's ability to correctly identify a large number of species is not a safeguard against the generation of false-positive errors. There was no phylogenetic pattern to falsely observed species; however, rare species are more likely to be false-positives, as are species from species-rich genera. Raising the threshold for the acceptance of an observation reduced false-positive observations dramatically, but at the expense of more false-negative errors. False-positive errors are more common in field surveys of plants than many people may appreciate. Greater stringency is required before accepting species as present at a site, particularly for rare species. Combining multiple surveys resolves the problem, but requires a considerable increase in effort to achieve the same sensitivity as a single survey. Therefore, other methods should be used to raise the threshold for the acceptance of a species. For example, digital data input systems that can verify, feedback and inform the user are likely to reduce false-positive errors significantly. PMID:28533972
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
Sampling command generator corrects for noise and dropouts in recorded data
NASA Technical Reports Server (NTRS)
Anderson, T. O.
1973-01-01
Generator measures period between zero crossings of reference signal and accepts as correct timing points only those zero crossings which occur acceptably close to nominal time predicted from last accepted command. Unidirectional crossover points are used exclusively so errors from analog nonsymmetry of crossover detector are avoided.
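The gating logic can be sketched in Python (an interpretation of the brief description above, not the original hardware implementation; the period and tolerance values are hypothetical):

    def accept_crossings(crossing_times, nominal_period, tolerance):
        # Accept only crossings close to the time predicted from the last
        # accepted one; this rejects noise and rides over dropouts.
        accepted = []
        expected = crossing_times[0]
        for t in crossing_times:
            while t > expected + tolerance:      # advance over dropped cycles
                expected += nominal_period
            if abs(t - expected) <= tolerance:
                accepted.append(t)
                expected = t + nominal_period    # re-anchor on accepted crossing
        return accepted

    # 10 ms period: the noisy crossing at 23.7 is rejected; the dropout is ridden over.
    print(accept_crossings([0.0, 10.1, 23.7, 30.2, 40.1], 10.0, 0.5))
    # -> [0.0, 10.1, 30.2, 40.1]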
Development and implementation of a human accuracy program in patient foodservice.
Eden, S H; Wood, S M; Ptak, K M
1987-04-01
For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent, quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
Perceptually tuned low-bit-rate video codec for ATM networks
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien
1996-02-01
In order to maintain high visual quality in transmitting low bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells, which may be lost due to channel error or network congestion. Simulation results show that visual quality of the reconstructed CIF sequence is acceptable when the capacity of the base-layer channel is allocated with 2 × 64 kbps and the cells of the second layer are all lost.
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
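One way to make "probabilistic confidence" concrete is a Monte Carlo over the uncertain estimate. A sketch assuming a lognormal estimation-error model (an illustrative choice on our part, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_confidence(pf_estimate, log_std, pf_target, n=100_000):
    """Probability that the 'true' failure probability does not exceed a
    target value, when estimation error makes the true value uncertain.
    A lognormal error model is assumed for illustration."""
    true_pf = pf_estimate * rng.lognormal(mean=0.0, sigma=log_std, size=n)
    return np.mean(true_pf <= pf_target)

# Estimated pf = 1e-4 with a factor-of-3 (one log-sigma) uncertainty;
# confidence that the true pf still meets a 1e-3 target:
print(probabilistic_confidence(1e-4, np.log(3.0), 1e-3))
```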
Automation bias in electronic prescribing.
Lyell, David; Magrabi, Farah; Raban, Magdalena Z; Pont, L G; Baysari, Melissa T; Day, Richard O; Coiera, Enrico
2017-03-16
Clinical decision support (CDS) in e-prescribing can improve safety by alerting clinicians to potential errors, but it introduces new sources of risk. Automation bias (AB) occurs when users over-rely on CDS, reducing vigilance in information seeking and processing. Evidence of AB has been found in other clinical tasks, but it had not yet been tested in e-prescribing. This study tests for the presence of AB in e-prescribing and the impact of task complexity and interruptions on AB. One hundred and twenty students in the final two years of a medical degree prescribed medicines for nine clinical scenarios using a simulated e-prescribing system. Quality of CDS (correct, incorrect and no CDS) and task complexity (low, low + interruption and high) were varied between conditions. Omission errors (failure to detect prescribing errors) and commission errors (acceptance of false positive alerts) were measured. Compared to scenarios with no CDS, correct CDS reduced omission errors by 38.3% (p < .0001, n = 120), 46.6% (p < .0001, n = 70), and 39.2% (p < .0001, n = 120) for low, low + interrupt and high complexity scenarios respectively. Incorrect CDS increased omission errors by 33.3% (p < .0001, n = 120), 24.5% (p < .009, n = 82), and 26.7% (p < .0001, n = 120). Participants also made commission errors, accepting false positive alerts in 65.8% (p < .0001, n = 120), 53.5% (p < .0001, n = 82), and 51.7% (p < .0001, n = 120) of cases. Task complexity and interruptions had no impact on AB. This study found evidence of AB omission and commission errors in e-prescribing. Verification of CDS alerts is key to avoiding AB errors; however, interventions focused on this have had limited success to date. Clinicians should remain vigilant to the risks of CDS failures and verify CDS.
Performance comparison for Barnes model 12-1000, Exotech model 100, and Ideas Inc. Biometer Mark 2
NASA Technical Reports Server (NTRS)
Robinson, B. (Principal Investigator)
1981-01-01
Results of tests show that all channels of all instruments, except channel 3 of the Biometer Mark 2, were stable, were linear in response to input signals, and were adequately stable in response to temperature changes. The Biometer Mark 2 is labelled with an inappropriate description of the units measured, and its dynamic range is inappropriate for field measurements, causing unnecessarily high fractional errors. This instrument is, therefore, quantization limited. The dynamic range and noise performance of the Model 12-1000 are appropriate for remote sensing field research. The field of view and performance of the Model 100A and the Model 12-1000 are satisfactory. The Biometer Mark 2 has not, as yet, been satisfactorily equipped with an acceptable field-of-view determining device. Neither the widely used aperture plate nor the 24 deg cone is acceptable.
Analysis of space telescope data collection system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Schoggen, W. O.
1982-01-01
An analysis of the expected performance for the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed. A mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.
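The probability of falsely accepting an inverted command word can be estimated by direct enumeration over a command dictionary. A sketch for the first two cases, using a hypothetical random 256-word dictionary (the real command set and framing are not given in the abstract; for random words these probabilities are vanishingly small, but a structured command set can be checked the same way):

```python
import random

random.seed(1)
WORD_BITS = 48
ALL_ONES = (1 << WORD_BITS) - 1

# Hypothetical command dictionary: 256 distinct random 48-bit words
# (matching the abstract's block size of up to 256 48-bit words).
commands = set()
while len(commands) < 256:
    commands.add(random.getrandbits(WORD_BITS))

def p_accept_inversion_only():
    # Case 1: pure data inversion -- falsely accepted only if the
    # complement of a valid word is itself a valid word.
    return sum((c ^ ALL_ONES) in commands for c in commands) / len(commands)

def p_accept_inversion_plus_bit_error():
    # Case 2: inversion plus one random bit error in the same word.
    hits = 0
    for c in commands:
        inv = c ^ ALL_ONES
        hits += sum((inv ^ (1 << b)) in commands for b in range(WORD_BITS))
    return hits / (len(commands) * WORD_BITS)

print(p_accept_inversion_only(), p_accept_inversion_plus_bit_error())
```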
Morris, Gail; Conner, L Mike
2017-01-01
Global positioning system (GPS) technologies have improved the ability of researchers to monitor wildlife; however, use of these technologies is often limited by monetary costs. Some researchers have begun to use commercially available GPS loggers as a less expensive means of tracking wildlife, but data regarding performance of these devices are limited. We tested a commercially available GPS logger (i-gotU GT-120) by placing loggers at ground control points with locations known to < 30 cm. In a preliminary investigation, we collected locations every 15 minutes for several days to estimate location error (LE) and circular error probable (CEP). Using similar methods, we then investigated the influence of cover on LE, CEP, and fix success rate (FSR) by constructing cover over ground control points. We found mean LE was < 10 m and mean 50% CEP was < 7 m. FSR was not significantly influenced by cover and in all treatments remained near 100%. Cover had a minor but significant effect on LE. Denser cover was associated with higher mean LE, but the difference in LE between the no cover and highest cover treatments was only 2.2 m. Finally, the most commonly used commercially available devices provide a measure of estimated horizontal position error (EHPE) which potentially may be used to filter inaccurate locations. Using data combined from the preliminary and cover investigations, we modeled LE as a function of EHPE and number of satellites. We found support for use of both EHPE and number of satellites in predicting LE; however, use of EHPE to filter inaccurate locations resulted in the loss of many locations with low error in return for only modest improvements in LE. Even without filtering, the accuracy of the logger was likely sufficient for studies which can accept average location errors of approximately 10 m.
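Location error (LE) and 50% circular error probable (CEP) are straightforward to compute from repeated fixes at a known point. A sketch using simulated fixes (taking 50% CEP as the median radial error, a common convention; the study's exact estimator is not stated in the abstract):

```python
import numpy as np

def location_stats(fixes, truth):
    """fixes: (n, 2) east/north coordinates in metres; truth: (2,) the
    surveyed ground control point. Returns mean location error and the
    50% circular error probable (median radial error)."""
    radial = np.hypot(*(np.asarray(fixes) - np.asarray(truth)).T)
    return radial.mean(), np.median(radial)

rng = np.random.default_rng(7)
fixes = rng.normal(loc=[0.0, 0.0], scale=4.0, size=(500, 2))  # fake logger fixes
le, cep50 = location_stats(fixes, truth=[0.0, 0.0])
print(f"mean LE = {le:.1f} m, 50% CEP = {cep50:.1f} m")
```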
Well-tempered metadynamics: a smoothly-converging and tunable free-energy method
NASA Astrophysics Data System (ADS)
Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele
2008-03-01
We present [1] a method for determining the free energy dependence on a selected number of order parameters using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of alanine dipeptide free energy landscape. [1] A. Barducci, G. Bussi and M. Parrinello, Phys. Rev. Lett., accepted (2007).
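For reference, the central relations of the well-tempered scheme as given in the published Letter (our transcription, not part of the abstract):

```latex
% Gaussian height decays where bias has already accumulated
% (deposition every \tau_G, initial rate \omega):
W = \omega \, \tau_G \, e^{-V(s,t)/(k_B \Delta T)}
% Long-time limit: the bias converges to a scaled free energy,
% interpolating between canonical sampling (\Delta T \to 0) and
% standard metadynamics (\Delta T \to \infty):
V(s, t \to \infty) = -\frac{\Delta T}{T + \Delta T}\, F(s) + \mathrm{const}
```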
NASA Astrophysics Data System (ADS)
Lichti, Derek D.; Chow, Jacky; Lahamy, Hervé
One of the important systematic error parameters identified in terrestrial laser scanners is the collimation axis error, which models the non-orthogonality between two instrumental axes. The quality of this parameter determined by self-calibration, as measured by its estimated precision and its correlation with the tertiary rotation angle κ of the scanner exterior orientation, is strongly dependent on instrument architecture. While the quality is generally very high for panoramic-type scanners, it is comparably poor for hybrid-style instruments. Two methods for improving the quality of the collimation axis error in hybrid instrument self-calibration are proposed herein: (1) the inclusion of independent observations of the tertiary rotation angle κ; and (2) the use of a new collimation axis error model. Five real datasets were captured with two different hybrid-style scanners to test each method's efficacy. While the first method achieves the desired outcome of complete decoupling of the collimation axis error from κ, it is shown that the high correlation is simply transferred to other model variables. The second method achieves partial parameter de-correlation to acceptable levels. Importantly, it does so without any adverse, secondary correlations and is therefore the method recommended for future use. Finally, systematic error model identification has been greatly aided in previous studies by graphical analyses of self-calibration residuals. This paper presents results showing the architecture dependence of this technique, revealing its limitations for hybrid scanners.
Development and validity of a method for the evaluation of printed education material
Castro, Mauro Silveira; Pilger, Diogo; Fuchs, Flávio Danni; Ferreira, Maria Beatriz Cardoso
Objectives To develop and study the validity of an instrument for the evaluation of Printed Education Materials (PEM); to evaluate the use of acceptability indices; to identify possible influences of professional aspects. Methods An instrument for PEM evaluation was developed in three steps: domain identification, item generation and instrument design. An easy-to-read PEM was developed for the education of patients with systemic hypertension and its treatment with hydrochlorothiazide. Construct validity was measured based on previously established errors purposively introduced into the PEM, which served as extreme groups. An acceptability index was applied taking into account the rate of professionals who should approve each item. Participants were 10 physicians (9 men) and 5 nurses (all women). Results Many professionals identified the crude intentional errors. Few participants identified errors that needed more careful evaluation, and no one detected the intentional error that required analysis of the literature. Physicians considered 95.8% of the items of the PEM acceptable; nurses, 29.2%. The differences between the scores were statistically significant for 27% of the items. In the overall evaluation, 66.6% of items were considered acceptable. The analysis of each item revealed a behavioral pattern for each professional group. Conclusions The use of instruments for the evaluation of printed education materials is required and may improve the quality of the PEM available to patients. Acceptability indices are not always entirely correct, nor do they necessarily represent high quality of information. The professional experience, the practice pattern, and perhaps the gender of the reviewers may influence their evaluation. An analysis of the PEM by professionals in communication and in drug information, and by patients, should be carried out to improve the quality of the proposed material. PMID:25214924
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size not larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard-error-of-the-mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
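The simulation logic can be reproduced in outline: draw exposed and control samples, test, and tally Type I/II errors. A minimal sketch for one case (normal data, two-sample t-test; the paper sweeps many more distributions, comparison methods and effect intensities):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def error_rates(n, effect, sims=5_000, alpha=0.05):
    """Estimate Type I and Type II error of a two-sample t-test at sample
    size n, for a shift of `effect` standard deviations (illustrative)."""
    null_p = np.empty(sims)
    alt_p = np.empty(sims)
    for i in range(sims):
        a = rng.normal(0.0, 1.0, n)
        null_p[i] = stats.ttest_ind(a, rng.normal(0.0, 1.0, n)).pvalue
        alt_p[i] = stats.ttest_ind(a, rng.normal(effect, 1.0, n)).pvalue
    type1 = np.mean(null_p < alpha)    # false positives under no effect
    type2 = np.mean(alt_p >= alpha)    # missed true effects
    return type1, type2

for n in (3, 6, 9):
    print(n, error_rates(n, effect=1.0))
```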
Xu, Wei; Zhou, Yuyang; Fu, Zhongfang; Rodriguez, Marcus
2017-12-01
Previous studies have shown that dispositional mindfulness is associated with fewer psychological symptoms in cancer patients. The present study investigated how dispositional mindfulness is related to psychological symptoms in advanced gastrointestinal cancer patients by considering the roles of self-acceptance and perceived stress. A total of 176 patients with advanced gastrointestinal cancer were recruited to complete a series of questionnaires including the Mindfulness Attention Awareness Scale, Self-acceptance Questionnaire, Chinese Perceived Stress Scale, and General Health Questionnaire. Results showed that the proposed model fitted the data very well (χ² = 7.564, df = 7, P = .364, χ²/df = 1.094, Goodness of Fit Index (GFI) = 0.986, Comparative Fit Index (CFI) = 0.998, Tucker Lewis Index (TLI) = 0.995, Root Mean Square Error of Approximation (RMSEA) = 0.023). Further analyses revealed that self-acceptance and perceived stress mediated the relation between dispositional mindfulness and psychological symptoms (indirect effect = -0.052, 95% confidence interval = -0.087 ~ -0.024), while self-acceptance also mediated the relation between dispositional mindfulness and perceived stress (indirect effect = -0.154, 95% confidence interval = -0.261 ~ -0.079). Self-acceptance and perceived stress played critical roles in the relation between dispositional mindfulness and psychological symptoms. Limitations, clinical implications, and directions for future research are discussed. Copyright © 2017 John Wiley & Sons, Ltd.
High-density force myography: A possible alternative for upper-limb prosthetic control.
Radmand, Ashkan; Scheme, Erik; Englehart, Kevin
2016-01-01
Several multiple degree-of-freedom upper-limb prostheses that have the promise of highly dexterous control have recently been developed. Inadequate controllability, however, has limited adoption of these devices. Introducing more robust control methods will likely result in higher acceptance rates. This work investigates the suitability of using high-density force myography (HD-FMG) for prosthetic control. HD-FMG uses a high-density array of pressure sensors to detect changes in the pressure patterns between the residual limb and socket caused by the contraction of the forearm muscles. In this work, HD-FMG outperforms the standard electromyography (EMG)-based system in detecting different wrist and hand gestures. With the arm in a fixed, static position, eight hand and wrist motions were classified with 0.33% error using the HD-FMG technique. Comparatively, classification errors in the range of 2.2%-11.3% have been reported in the literature for multichannel EMG-based approaches. As with EMG, position variation in HD-FMG can introduce classification error, but incorporating position variation into the training protocol reduces this effect. Channel reduction was also applied to the HD-FMG technique to decrease the dimensionality of the problem as well as the size of the sensorized area. We found that with informed, symmetric channel reduction, classification error could be decreased to 0.02%.
NASA Astrophysics Data System (ADS)
Gupta, Shaurya; Guha, Daipayan; Jakubovic, Raphael; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation is used by surgeons in spine procedures to guide pedicle screws, to improve placement accuracy and, in some cases, to better visualize the patient's underlying anatomy. Intraoperative registration is performed to establish a correlation between the patient's anatomy and the pre/intra-operative image. Current algorithms rely on seeding points obtained directly from the exposed spinal surface to achieve clinically acceptable registration accuracy. Registration of these three-dimensional surface point clouds is prone to various systematic errors. The goal of this study was to evaluate the robustness of surgical navigation systems by examining the relationship between the density of an acquired 3D point cloud and the corresponding surgical navigation error. A retrospective review of a total of 48 registrations performed using an experimental structured-light navigation system developed within our lab was conducted. For each registration, the number of points in the acquired point cloud was evaluated relative to whether the registration was acceptable, the corresponding system-reported error, and the target registration error. It was demonstrated that the number of points in the point cloud correlates with neither the acceptance/rejection of a registration nor the system-reported error. However, a negative correlation was observed between the number of points in the point cloud and the corresponding sagittal angular error. Thus, system-reported total registration points and accuracy are insufficient to gauge the accuracy of a navigation system, and the operating surgeon must verify and validate registration against anatomical landmarks before commencing surgery.
Applying the intention-to-treat principle in practice: Guidance on handling randomisation errors.
Yelland, Lisa N; Sullivan, Thomas R; Voysey, Merryn; Lee, Katherine J; Cook, Jonathan A; Forbes, Andrew B
2015-08-01
The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors when they are discovered that maintains the goals of the intention-to-treat principle. The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. Randomisation errors are almost inevitable and should be reported in trial publications. The intention-to-treat principle is useful for guiding responses to randomisation errors when they are discovered. © The Author(s) 2015.
Mills, Kathryn; Idris, Aula; Pham, Thu-An; Porte, John; Wiggins, Mark; Kavakli, Manolya
2017-12-18
To determine the validity and reliability of the peak frontal plane knee angle evaluated by a virtual reality (VR) netball game when landing from a drop vertical jump (DVJ). Design: Laboratory study. Methods: Forty participants performed 3 DVJs evaluated by 3-dimensional (3D) motion analysis and 3 DVJs evaluated by the VR game. Limits of agreement for the peak projected frontal plane knee angle and peak knee abduction were determined. Participants were given a consensus category of "Above threshold" or "Below threshold" based on a pre-specified threshold angle of 9˚ during landing. Classification agreement was determined using the kappa coefficient, and accuracy was determined using specificity and sensitivity. Ten participants returned 1 week later to determine intra-rater reliability, standard error of the measure and typical error. Results: The mean difference in detected frontal plane knee angle was 3.39˚ (1.03˚, 5.74˚). Limits of agreement were -10.27˚ (-14.36˚, -6.19˚) to 17.05˚ (12.97˚, 21.14˚). Substantial agreement, specificity and sensitivity were observed for the threshold classification (ĸ = 0.66 [0.42, 0.88], specificity = 0.96 [0.78, 1.0], sensitivity = 0.75 [0.43, 0.95]). The game exhibited acceptable reliability over time (ICC(3,1) = 0.844) and error was approximately 2˚. Conclusions: The VR game reliably evaluated a projected frontal plane knee angle. While the knee angle detected by the VR game is strongly related to peak knee abduction, the accuracy of detecting the exact angle was limited. A threshold approach may be a more accurate way for gaming technology to evaluate frontal plane knee angles when landing from a jump.
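Limits of agreement of the kind reported here come from the paired differences between the two systems. A minimal sketch (Python, made-up angles):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Mean difference (bias) and 95% limits of agreement between paired
    angle measurements from two systems (e.g. 3D motion capture vs. VR)."""
    d = np.asarray(a) - np.asarray(b)
    bias = d.mean()
    spread = 1.96 * d.std(ddof=1)
    return bias, bias - spread, bias + spread

mocap = np.array([8.1, 12.4, 5.0, 15.2, 9.8, 11.1])   # made-up angles (deg)
vr    = np.array([5.3, 10.0, 2.9, 10.8, 6.1, 8.4])
print(limits_of_agreement(mocap, vr))
```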
Scanner qualification with IntenCD based reticle error correction
NASA Astrophysics Data System (ADS)
Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan
2010-03-01
Scanner introduction into the fab production environment is a challenging task. An efficient evaluation of scanner performance metrics during the factory acceptance test (FAT) and later on during the site acceptance test (SAT) is crucial for minimizing the cycle time of pre- and post-production-start activities. If done effectively, the baseline performance metrics established during the SAT are used as a reference for scanner performance and fleet-matching monitoring and maintenance in the fab environment. Key elements which can influence the cycle time of the SAT, FAT and maintenance cycles are the imaging, process and mask characterizations involved in those cycles. Discrete mask measurement techniques are currently in use to create across-mask CDU maps. By subtracting these maps from their final wafer measurement CDU map counterparts, it is possible to assess the real scanner-induced printed errors within certain limitations. The current discrete measurement methods are time consuming, and some techniques also overlook mask-based effects other than line width variations, such as transmission and phase variations, all of which influence the final printed CD variability. The Applied Materials Aera2™ mask inspection tool with IntenCD™ technology can scan the mask at high speed, offer full mask coverage and accurately assess all mask-induced sources of error simultaneously, making it beneficial for scanner qualification and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map mask-induced CD non-uniformity. We will present the results of six scanners in production and discuss the benefits of the new method.
San José, Verónica; Bellot-Arcís, Carlos; Tarazona, Beatriz; Zamora, Natalia; O Lagravère, Manuel
2017-01-01
Background To compare the reliability and accuracy of direct and indirect dental measurements derived from two types of 3D virtual models: generated by intraoral laser scanning (ILS) and segmented cone beam computed tomography (CBCT), comparing these with a 2D digital model. Material and Methods One hundred patients were selected. All patients' records included initial plaster models, an intraoral scan and a CBCT. Patients' dental arches were scanned with the iTero® intraoral scanner while the CBCTs were segmented to create three-dimensional models. To obtain 2D digital models, plaster models were scanned using a conventional 2D scanner. When digital models had been obtained using these three methods, direct dental measurements were measured and indirect measurements were calculated. Differences between methods were assessed by means of paired t-tests and regression models. Intra- and inter-observer error were analyzed using Dahlberg's d and coefficients of variation. Results Intraobserver and interobserver error for the ILS model was less than 0.44 mm, while for segmented CBCT models the error was less than 0.97 mm. ILS models provided statistically and clinically acceptable accuracy for all dental measurements, while CBCT models showed a tendency to underestimate measurements in the lower arch, although within the limits of clinical acceptability. Conclusions ILS and CBCT segmented models are both reliable and accurate for dental measurements. Integration of ILS with CBCT scans would provide dental and skeletal information together. Key words: CBCT, intraoral laser scanner, 2D digital models, 3D models, dental measurements, reliability. PMID:29410764
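Dahlberg's d, used here for intra- and inter-observer error, is the root of the summed squared paired differences over twice the number of cases. A small sketch (made-up measurements):

```python
import numpy as np

def dahlberg(first, second):
    """Dahlberg's error: d = sqrt(sum((x1 - x2)^2) / (2n)) for duplicate
    measurements of the same cases by the same (intra-) or different
    (inter-) observers."""
    diff = np.asarray(first) - np.asarray(second)
    return np.sqrt(np.sum(diff ** 2) / (2 * len(diff)))

m1 = [31.2, 28.7, 30.1, 29.5]   # made-up arch widths, mm
m2 = [31.5, 28.4, 30.3, 29.9]
print(f"Dahlberg d = {dahlberg(m1, m2):.2f} mm")
```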
Absolute GPS Positioning Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Ramillien, G.
A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10⁻⁴ m², corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of important levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
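A toy version of the GA inversion is easy to set up: minimize the squared pseudo-range residuals over candidate XYZ positions with selection, blend crossover and decaying mutation. A sketch in which the satellite geometry, operators and schedules are illustrative stand-ins, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical geometry: four GPS satellite positions (m, geocentric) and a
# true receiver location; pseudo-ranges are taken noise-free for clarity.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
truth = np.array([1115e3, -4843e3, 3983e3])
ranges = np.linalg.norm(sats - truth, axis=1)

def cost(pop):
    # Mean squared pseudo-range residual for each candidate XYZ position.
    d = np.linalg.norm(sats[None, :, :] - pop[:, None, :], axis=2)
    return np.mean((d - ranges) ** 2, axis=1)

def ga(n=1000, gens=200, pc=0.65, pm=0.35, span=7000e3):
    pop = rng.uniform(-span, span, size=(n, 3))
    for _ in range(gens):
        order = np.argsort(cost(pop))
        elite = pop[order[: n // 2]]                 # keep the fitter half
        mates = elite[rng.permutation(len(elite))]
        alpha = rng.uniform(size=(len(elite), 1))
        children = np.where(rng.uniform(size=(len(elite), 1)) < pc,
                            alpha * elite + (1 - alpha) * mates,  # crossover
                            elite)
        jitter = rng.normal(scale=span / 100, size=children.shape)
        children = children + np.where(rng.uniform(size=(len(children), 1)) < pm,
                                       jitter, 0.0)               # mutation
        pop = np.vstack([elite, children])
        span *= 0.97                                 # anneal mutation scale
    return pop[np.argmin(cost(pop))]

est = ga()
print("position error (m):", np.linalg.norm(est - truth))
```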
[Application of root cause analysis in healthcare].
Hsu, Tsung-Fu
2007-12-01
The main purpose of this study was to explore various aspects of root cause analysis (RCA), including its definition, underlying rationale, main objective, implementation procedures, most common analysis methodology (fault tree analysis, FTA), and its advantages and methodologic limitations in healthcare. Several adverse events that occurred at a certain hospital were also analyzed by the author using FTA as part of this study. RCA is a process employed to identify basic and contributing causal factors underlying performance variations associated with adverse events. The rationale of RCA is a systemic approach to improving patient safety that does not assign blame or liability to individuals. The four-step process involved in conducting an RCA includes: RCA preparation, proximate cause identification, root cause identification, and recommendation generation and implementation. FTA is a logical, structured process that can help identify potential causes of system failure before actual failures occur. Some advantages and significant methodologic limitations of RCA were discussed. Finally, we emphasized that errors stem principally from faults in system design, practice guidelines, work conditions, and other human factors, which lead health professionals into negligence or mistakes in healthcare. We must explore the root causes of medical errors to eliminate potential system failure factors. Also, a systemic approach is needed to resolve medical errors and move beyond a current culture centered on assigning fault to individuals. In constructing a real environment of patient-centered, safe healthcare, we can help encourage clients to accept state-of-the-art healthcare services.
Ambridge, Ben; Bidgood, Amy; Twomey, Katherine E.; Pine, Julian M.; Rowland, Caroline F.; Freudenthal, Daniel
2015-01-01
Participants aged 5;2-6;8, 9;2-10;6 and 18;1-22;2 (72 at each age) rated verb argument structure overgeneralization errors (e.g., *Daddy giggled the baby) using a five-point scale. The study was designed to investigate the feasibility of two proposed construction-general solutions to the question of how children retreat from, or avoid, such errors. No support was found for the prediction of the preemption hypothesis that the greater the frequency of the verb in the single most nearly synonymous construction (for this example, the periphrastic causative; e.g., Daddy made the baby giggle), the lower the acceptability of the error. Support was found, however, for the prediction of the entrenchment hypothesis that the greater the overall frequency of the verb, regardless of construction, the lower the acceptability of the error, at least for the two older groups. Thus while entrenchment appears to be a robust solution to the problem of the retreat from error, and one that generalizes across different error types, we did not find evidence that this is the case for preemption. The implication is that the solution to the retreat from error lies not with specialized mechanisms, but rather in a probabilistic process of construction competition. PMID:25919003
Both, Stefan; Alecu, Ionut M; Stan, Andrada R; Alecu, Marius; Ciura, Andrei; Hansen, Jeremy M; Alecu, Rodica
2007-03-07
An effective patient quality assurance (QA) program for intensity-modulated radiation therapy (IMRT) requires accurate and realistic plan acceptance criteria--that is, action limits. Based on dose measurements performed with a commercially available two-dimensional (2D) diode array, we analyzed 747 fluence maps resulting from a routine patient QA program for IMRT plans. The fluence maps were calculated by three different commercially available (ADAC, CMS, Eclipse) treatment planning systems (TPSs) and were delivered using 6-MV X-ray beams produced by linear accelerators. To establish reasonably achievable and clinically acceptable limits for the dose deviations, the agreement between the measured and calculated fluence maps was evaluated in terms of percent dose error (PDE) for a few points and percent of passing points (PPP) for the isodose distribution. The analysis was conducted for each TPS used in the study (365 ADAC, 162 CMS, 220 Eclipse), for multiple treatment sites (prostate, pelvis, head and neck, spine, rectum, anus, lung, brain), at the normalization point, for 3% percentage difference (%Diff) and 3-mm distance to agreement (DTA) criteria. We investigated the treatment-site dependency of PPP and PDE. The results show that, at 3% and 3-mm criteria, a 95% PPP and 3% PDE can be achieved for prostate treatments and a 90% PPP and 5% PDE are attainable for any treatment site.
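PDE and a simplified PPP can be computed directly from measured and calculated maps. A sketch using a %Diff-only pass criterion (the study's 3%/3 mm test also lets points pass on distance-to-agreement, which is omitted here for brevity, so this version is conservative):

```python
import numpy as np

def pde(measured, calculated):
    """Percent dose error at a point (or array of points)."""
    return 100.0 * (measured - calculated) / calculated

def ppp(measured, calculated, pct=3.0):
    """Percent of passing points under a %Diff-only criterion
    (no distance-to-agreement component)."""
    ok = np.abs(pde(measured, calculated)) <= pct
    return 100.0 * np.mean(ok)

calc = np.full((10, 10), 200.0)   # toy calculated fluence map, cGy
meas = calc * (1 + np.random.default_rng(5).normal(0, 0.02, calc.shape))
print(f"PPP = {ppp(meas, calc):.1f}%")
```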
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Strategy optimization for mask rule check in wafer fab
NASA Astrophysics Data System (ADS)
Yang, Chuen Huei; Lin, Shaina; Lin, Roger; Wang, Alice; Lee, Rachel; Deng, Erwin
2015-07-01
The photolithography process is getting more and more sophisticated for wafer production following Moore's law. Therefore, for a wafer fab, consolidated and close cooperation with the mask house is a key to silicon wafer success. However, generally speaking, it is not easy to preserve such a partnership because many engineering efforts and frequent communication are indispensable. This loose coupling is obvious in mask rule check (MRC). Mask houses will do their own MRC at the job deck stage, but that checking only identifies mask process limitations, including writing, etching, inspection, metrology, etc. No further checking for wafer-process-related mask data errors is implemented after the data files of the whole mask are composed in the mask house. Many potential data errors remain even after post-OPC verification has been done for the main circuits. The errors discussed here are those which only occur when the main circuits are combined with frame and dummy patterns to form the whole reticle. Therefore, strategy optimization is ongoing in UMC to evaluate MRC, especially for wafer-fab-relevant errors. The prerequisite is no impact on mask delivery cycle time even with this extra checking added. A full-mask check based on the job deck in gds or oasis format is necessary in order to secure acceptable run time. The form of the summarized error report generated by this checking is also crucial, because a user-friendly interface will shorten engineers' judgment time to release the mask for writing. This paper will survey the key factors of MRC in a wafer fab.
NASA Technical Reports Server (NTRS)
Lemoine, Frank G.; Rowlands, David D.; Luthcke, Scott B.; Zelensky, Nikita P.; Chinn, Douglas S.; Pavlis, Despina E.; Marr, Gregory
2001-01-01
The US Navy's GEOSAT Follow-On Spacecraft was launched on February 10, 1998 with the primary objective of the mission to map the oceans using a radar altimeter. Following an extensive set of calibration campaigns in 1999 and 2000, the US Navy formally accepted delivery of the satellite on November 29, 2000. Satellite laser ranging (SLR) and Doppler (Tranet-style) beacons track the spacecraft. Although limited amounts of GPS data were obtained, the primary mode of tracking remains satellite laser ranging. The GFO altimeter measurements are highly precise, with orbit error the largest component in the error budget. We have tuned the non-conservative force model for GFO and the gravity model using SLR, Doppler and altimeter crossover data sampled over one year. Gravity covariance projections to 70x70 show the radial orbit error on GEOSAT was reduced from 2.6 cm in EGM96 to 1.3 cm with the addition of SLR, GFO/GFO and TOPEX/GFO crossover data. Evaluation of the gravity fields using SLR and crossover data support the covariance projections and also show a dramatic reduction in geographically-correlated error for the tuned fields. In this paper, we report on progress in orbit determination for GFO using GFO/GFO and TOPEX/GFO altimeter crossovers. We will discuss improvements in satellite force modeling and orbit determination strategy, which allows reduction in GFO radial orbit error from 10-15 cm to better than 5 cm.
Context affects nestmate recognition errors in honey bees and stingless bees.
Couvillon, Margaret J; Segers, Francisca H I D; Cooper-Bowman, Roseanne; Truslove, Gemma; Nascimento, Daniela L; Nascimento, Fabio S; Ratnieks, Francis L W
2013-08-15
Nestmate recognition studies, where a discriminator first recognises and then behaviourally discriminates (accepts/rejects) another individual, have used a variety of methodologies and contexts. This is potentially problematic because recognition errors in discrimination behaviour are predicted to be context-dependent. Here we compare the recognition decisions (accept/reject) of discriminators in two eusocial bees, Apis mellifera and Tetragonisca angustula, under different contexts. These contexts include natural guards at the hive entrance (control); natural guards held in plastic test arenas away from the hive entrance that vary either in the presence or absence of colony odour or the presence or absence of an additional nestmate discriminator; and, for the honey bee, the inside of the nest. For both honey bee and stingless bee guards, total recognition errors of behavioural discrimination made by guards (% nestmates rejected + % non-nestmates accepted) are much lower at the colony entrance (honey bee: 30.9%; stingless bee: 33.3%) than in the test arenas (honey bee: 60-86%; stingless bee: 61-81%; P<0.001 for both). Within the test arenas, the presence of colony odour specifically reduced the total recognition errors in honey bees, although this reduction still fell short of bringing error levels down to what was found at the colony entrance. Lastly, in honey bees, the data show that the in-nest collective behavioural discrimination by ca. 30 workers that contact an intruder is insufficient to achieve error-free recognition and is not as effective as the discrimination by guards at the entrance. Overall, these data demonstrate that context is a significant factor in a discriminators' ability to make appropriate recognition decisions, and should be considered when designing recognition study methodologies.
Quantitative measurement of hypertrophic scar: interrater reliability and concurrent validity.
Nedelec, Bernadette; Correa, José A; Rachelska, Grazyna; Armour, Alexis; LaSalle, Léo
2008-01-01
Research into the pathophysiology and treatment of hypertrophic scar (HSc) remains limited by the heterogeneity of scar and the imprecision with which its severity is measured. The objective of this study was to test the interrater reliability and concurrent validity of the Cutometer measurement of elasticity, the Mexameter measurement of erythema and pigmentation, and the total thickness measure of the DermaScan C relative to the modified Vancouver Scar Scale (mVSS) in patient-matched normal skin, normal scar, and HSc. Three independent investigators evaluated 128 sites (severe HSc, moderate or mild HSc, donor site, and normal skin) on 32 burn survivors using all of the above measurement tools. The intraclass correlation coefficient, which was used to measure interrater reliability, reflects the inherent amount of error in the measure and is considered acceptable when it is >0.75. Interrater reliability of the totals of the height, pliability, and vascularity subscales of the mVSS fell below the acceptable limit (≅0.50). The individual subscales of the mVSS fell well below the acceptable level (≤0.3). The Cutometer reading of elasticity provided acceptable reliability (>0.89) for each study site with the exception of severe scar. Mexameter and DermaScan C reliability measurements were acceptable for all sites (>0.82). Concurrent validity correlations with the mVSS were significant except for the comparison of the mVSS pliability subscale and the Cutometer maximum deformation measure in severe scar. In conclusion, the Mexameter and DermaScan C measurements of scar color and thickness at all sites, as well as the Cutometer measurement of elasticity in all but the most severe scars, show high interrater reliability. Their significant concurrent validity with the mVSS confirms that these tools measure the same traits as the mVSS, and in a more objective way.
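The >0.75 acceptability judgment refers to an intraclass correlation such as ICC(2,1) (two-way random effects, absolute agreement, single rater); that this exact form was used is our assumption. A self-contained sketch:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1), Shrout & Fleiss: two-way random effects, absolute
    agreement, single rater. x is an (n targets, k raters) score matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_r = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # targets
    ms_c = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    ss_e = np.sum((x - x.mean(axis=1, keepdims=True)
                     - x.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Three raters scoring the same eight sites (made-up elasticity values).
scores = np.array([[0.41, 0.44, 0.40], [0.62, 0.60, 0.65],
                   [0.55, 0.52, 0.58], [0.30, 0.33, 0.29],
                   [0.71, 0.69, 0.74], [0.48, 0.50, 0.47],
                   [0.66, 0.61, 0.64], [0.38, 0.40, 0.37]])
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")   # >0.75 is taken as acceptable
```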
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Y; National Cancer Center, Kashiwa, Chiba; Tachibana, H
Purpose: Total body irradiation (TBI) and total marrow irradiation (TMI) using Tomotherapy have been reported. A gantry-based linear accelerator uses one isocenter during one rotational irradiation. Thus, 3-5 isocenter points should be used for a whole VMAT-TBI plan while smoothing out the junctional dose distribution. IGRT provides accurate and precise patient setup for the multiple junctions; however, some setup errors will inevitably occur and affect the accuracy of the dose distribution in the area. In this study, we evaluated the robustness of VMAT-TBI against patient setup error. Methods: VMAT-TBI planning was performed on an adult whole-body human phantom using Eclipse. Eight full arcs with four isocenter points using 6MV-X were used to cover the entire whole body. Dose distribution was optimized using two structures, the patient's body as PTV and the lung. Two arcs shared one isocenter, and two arcs were overlapped 5 cm with the other two arcs. Point absolute dose measurements using an ionization chamber and planar relative dose distribution measurements using film were performed in the junctional regions using a water-equivalent slab phantom. In the measurements, several setup errors (+5 to -5 mm) were added. Results: The chamber measurements show the deviations were within ±3% when the setup errors were within ±3 mm. In the planar evaluation, the pass ratio of gamma evaluation (3%/2mm) was more than 90% when the errors were within ±3 mm. However, there were hot/cold areas at the edge of the junction even with an acceptable gamma pass ratio. A 5 mm setup error caused larger hot and cold areas, and the dosimetrically acceptable areas were decreased in the overlapped regions. Conclusion: VMAT-TBI can be clinically acceptable when patient setup error is within ±3 mm. Averaging effects from patient random error would help to blur the hot/cold areas in the junction.
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Error compensation for thermally induced errors on a machine tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krulewich, D.A.
1996-11-08
Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problems are where to locate the temperature sensors and how many sensors are required. This research develops a method to determine the number and location of temperature measurements.
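The selected model is an ordinary least-squares fit of deflection on discrete temperatures; sensor count and placement then become a subset-selection problem. A toy sketch (the greedy single-sensor ranking is a stand-in for illustration, not the report's method):

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy data: m temperature sensors, N observations of tool-point deflection.
N, m = 200, 10
T = rng.normal(size=(N, m)).cumsum(axis=0)           # drifting temperatures
true_w = np.zeros(m); true_w[[2, 7]] = [0.8, -0.5]   # only 2 sensors matter
deflect = T @ true_w + rng.normal(scale=0.05, size=N)

# Fit the linear model: deflection ~ T @ w + intercept, by least squares.
w, *_ = np.linalg.lstsq(np.column_stack([T, np.ones(N)]), deflect, rcond=None)

def rss(cols):
    # Residual sum of squares using only the selected sensor columns.
    A = np.column_stack([T[:, cols], np.ones(N)])
    r = deflect - A @ np.linalg.lstsq(A, deflect, rcond=None)[0]
    return r @ r

# Rank sensors by how much each one alone explains the deflection.
ranking = sorted(range(m), key=lambda j: rss([j]))
print("most informative sensors:", ranking[:3])
```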
NASA Astrophysics Data System (ADS)
Khamukhin, A. A.
2017-02-01
Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). These algorithms can be implemented in a small microprocessor with low power consumption. This will help to reduce the weight of the UAV's computing equipment and to increase the flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen as the observer moves from location 1 to location 2 toward the target. The distance estimate is given relative to the distance between locations 1 and 2. A simple scheme of an appositional compound eye is proposed to develop the calculation formula. The distance estimation error analysis shows that the error decreases with an increase in the total number of opaque channels, up to a certain limit. An acceptable error of about 2% is achieved with an angle of view from 3 to 10° when the total number of opaque channels is 21600.
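Under small-angle assumptions, the abstract's idea yields a closed-form estimate: if the target subtends n1 channels at location 1 and n2 channels at location 2 after moving a distance D toward it, then d2 = D·n1/(n2 - n1). This derivation is ours from the stated idea; the paper's exact formula may differ. A one-function sketch:

```python
def relative_distance(n1, n2):
    """Distance from location 2 to the target, in units of the travel
    distance D between locations 1 and 2. Small-angle geometry: the
    target's angular size is proportional to the channel count n, and
    size = angle * distance, so d2 = n1 / (n2 - n1) * D."""
    return n1 / (n2 - n1)

# Target seen through 20 channels at location 1 and 24 at location 2:
print(relative_distance(20, 24))   # 5.0 travel-distances away
```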
Non-linear dynamic compensation system
NASA Technical Reports Server (NTRS)
Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)
1992-01-01
A non-linear dynamic compensation subsystem is added in the feedback loop of a high precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth, optimized for speed of control system response, to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to the control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits, and smoothly varied therebetween as the error signal approaches the preselected limits.
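The limiter/compensator/adder arrangement can be sketched as a static signal-flow stand-in (a plain gain replaces the actual narrow-bandwidth dynamic compensator, and all values are illustrative):

```python
import numpy as np

def compensated_command(error, limit, narrow_gain=0.2):
    """Per the description above: the limited part of the error passes
    through the narrow-bandwidth path (here a simple gain as a stand-in),
    while the excess beyond the limits bypasses it, preserving the fast
    wide-bandwidth response for large errors."""
    limited = np.clip(error, -limit, limit)
    slow = narrow_gain * limited    # narrow-band path (accuracy)
    fast = error - limited          # unlimited excess (speed)
    return slow + fast              # the adder output

for e in (0.05, 0.5, 5.0):
    print(e, compensated_command(e, limit=0.5))
```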
SU-E-T-484: In Vivo Dosimetry Tolerances in External Beam Fast Neutron Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, L; Gopan, O
Purpose: Optically stimulated luminescence (OSL) dosimetry with Landauer Al2O3:C nanodots was developed at our institution as a passive in vivo dosimetry (IVD) system for patients treated with fast neutron therapy. The purpose of this study was to establish clinically relevant tolerance limits for detecting treatment errors requiring further investigation. Methods: Tolerance levels were estimated by conducting a series of IVD expected dose calculations for square field sizes ranging between 2.8 and 28.8 cm. For each field size evaluated, doses were calculated for open fields and internal wedged fields with angles of 30°, 45°, or 60°. Theoretical errors were computed for variations of incorrect beam configurations. Dose errors, defined as the percent difference from the expected dose calculation, were measured with groups of three nanodots placed in a 30 x 30 cm solid water phantom at beam isocenter (150 cm SAD, 1.7 cm Dmax). The tolerances were applied to IVD patient measurements. Results: The overall accuracy of the nanodot measurements is 2-3% for open fields. Measurement errors agreed with calculated errors to within 3%. Theoretical estimates of dosimetric errors showed that IVD measurements with OSL nanodots will detect the absence of an internal wedge or a wrong wedge angle. Incorrect nanodot placement on a wedged field is more likely to be caught if the offset is in the direction of the "toe" of the wedge, where the dose difference is about 12%. Errors caused by an incorrect flattening filter size produced a 2% measurement error that is not detectable by IVD measurement alone. Conclusion: IVD with nanodots will detect treatment errors associated with the incorrect implementation of the internal wedge. The results of this study will streamline the physicists' investigations in determining the root cause of an IVD reading that is outside normally accepted tolerances.
MacCourt, Duncan; Bernstein, Joseph
2009-01-01
The current medical malpractice system is broken. Many patients injured by malpractice are not compensated, whereas some patients who recover in tort have not suffered medical negligence; furthermore, the system's failures demoralize patients and physicians. But most importantly, the system perpetuates medical error because the adversarial nature of litigation induces a so-called "Culture of Silence" in physicians eager to shield themselves from liability. This silence leads to the pointless repetition of error, as the open discussion and analysis of the root causes of medical mistakes does not take place as fully as it should. In 1993, President Clinton's Task Force on National Health Care Reform considered a solution characterized by Enterprise Medical Liability (EML), Alternative Dispute Resolution (ADR), some limits on recovery for non-pecuniary damages (Caps), and offsets for collateral source recovery. Yet this list of ingredients did not include a strategy to surmount the difficulties associated with each element. Specifically, EML might be efficient, but none of the enterprises contemplated to assume responsibility, i.e., hospitals and payers, control physician behavior enough so that it would be fair to foist liability on them. Likewise, although ADR might be efficient, it will be resisted by individual litigants who perceive themselves as harmed by it. Finally, while limitations on collateral source recovery and damages might effectively reduce costs, patients and trial lawyers likely would not accept them without recompense. The task force also did not place error reduction at the center of malpractice tort reform--a logical and strategic error, in our view. In response, we propose a new system that employs the ingredients suggested by the task force but also addresses the problems with each. We also explicitly consider steps to rebuff the Culture of Silence and promote error reduction. We assert that patients would be better off with a system where physicians cede their implicit "right to remain silent", even if some injured patients will receive less than they do today. Likewise, physicians will be happier with a system that avoids blame--even if this system placed strict requirements for high quality care and disclosure of error. We therefore conceive of de facto trade between patients and physicians, a Pareto improvement, taking form via the establishment of "Societies of Quality Medicine." Physicians working within these societies would consent to onerous processes for disclosing, rectifying and preventing medical error. Patients would in turn contractually agree to assert their claims in arbitration and with limits on recovery. The role of plaintiffs' lawyers would be unchanged, but due to increased disclosure, discovery costs would diminish and the likelihood of prevailing will more than triple. This article examines the legal and policy issues surrounding the establishment of Societies of Quality Medicine, particularly the issues of contracting over liability, and outlines a means of overcoming the theoretical and practical difficulties with enterprise liability, alternative dispute resolution and the imposition of limits on recovery for non-pecuniary damages. We aim to build a welfare enhancing system that rebuffs the culture of silence and promotes error reduction, a system that is at the same time legally sound, fiscally prudent and politically possible.
22 CFR 34.18 - Waivers of indebtedness.
Code of Federal Regulations, 2011 CFR
2011-04-01
... known through the exercise of due diligence that an error existed but failed to take corrective action... elapsed between the erroneous payment and discovery of the error and notification of the employee; (D... to duty because of disability (supported by an acceptable medical certificate); and (D) Whether...
Home medication support for childhood cancer: family-centered design and testing.
Walsh, Kathleen E; Biggins, Colleen; Blasko, Deb; Christiansen, Steven M; Fischer, Shira H; Keuker, Christopher; Klugman, Robert; Mazor, Kathleen M
2014-11-01
Errors in the use of medications at home by children with cancer are common, and interventions to support correct use are needed. We sought to (1) engage stakeholders in the design and development of an intervention to prevent errors in home medication use, and (2) evaluate the acceptability and usefulness of the intervention. We convened a multidisciplinary team of parents, clinicians, technology experts, and researchers to develop an intervention using a two-step user-centered design process. First, parents and oncologists provided input on the design. Second, a parent panel and two oncology nurses refined draft materials. In a feasibility study, we used questionnaires to assess usefulness and acceptability. Medication error rates were assessed via monthly telephone interviews with parents. We successfully partnered with parents, clinicians, and IT experts to develop Home Medication Support (HoMeS), a family-centered Web-based intervention. HoMeS includes a medication calendar with decision support, a communication tool, adverse effect information, a metric conversion chart, and other information. The 15 families in the feasibility study gave HoMeS high ratings for acceptability and usefulness. Half recorded information on the calendar to indicate to other caregivers that doses were given; 34% brought it to the clinic to communicate with their clinician about home medication use. There was no change in the rate of medication errors in this feasibility study. We created and tested a stakeholder-designed, Web-based intervention to support home chemotherapy use, which parents rated highly. This tool may prevent serious medication errors in a larger study. Copyright © 2014 by American Society of Clinical Oncology.
Wells, Gary L
2008-02-01
The Illinois pilot program on lineup procedures has helped sharpen the focus on the types of controls that are needed in eyewitness field experiments and the limits that exist for interpreting outcome measures (rates of suspect and filler identifications). A widely-known limitation of field experiments is that, unlike simulated crime experiments, the guilt or innocence of the suspects is not easily known independently of the behavior of the eyewitnesses. Less well appreciated is that the rate of identification of lineup fillers, although clearly errors, can be a misleading measure if the filler identification rate is used to assess which of two or more lineup procedures is the better procedure. Several examples are used to illustrate that there are clearly improper procedures that would yield fewer identifications of fillers than would their proper counterparts. For example, biased lineup structure (e.g., using poorly matched fillers) as well as suggestive lineup procedures (that can result from non-blind administration of lineups) would reduce filler identification errors compared to unbiased and non-suggestive procedures. Hence, under many circumstances filler identification rates can be misleading indicators of preferred methods. Comparisons of lineup procedures in future field experiments will not be easily accepted in the absence of double-blind administration methods in all conditions plus true random assignment to conditions.
The values of the parameters of some multilayer distributed RC null networks
NASA Technical Reports Server (NTRS)
Huelsman, L. P.; Raghunath, S.
1974-01-01
In this correspondence, the values of the parameters of some multilayer distributed RC notch networks are determined, and the usually accepted values are shown to be in error. The magnitude of the error is illustrated by graphs of the frequency response of the networks.
Ning, Hsiao-Chen; Lin, Chia-Ni; Chiu, Daniel Tsun-Yee; Chang, Yung-Ta; Wen, Chiao-Ni; Peng, Shu-Yu; Chu, Tsung-Lan; Yu, Hsin-Ming; Wu, Tsu-Lan
2016-01-01
Background Accurate patient identification and specimen labeling at the time of collection are crucial steps in the prevention of medical errors, thereby improving patient safety. Methods All patient specimen identification errors that occurred in the outpatient department (OPD), emergency department (ED), and inpatient department (IPD) of a 3,800-bed academic medical center in Taiwan were documented and analyzed retrospectively from 2005 to 2014. To reduce such errors, the following series of strategies were implemented: a restrictive specimen acceptance policy for the ED and IPD in 2006; a computer-assisted barcode positive patient identification system for the ED and IPD in 2007 and 2010; and automated sample labeling combined with electronic identification systems introduced to the OPD in 2009. Results Of the 2,000,345 specimens collected in 2005, 1,023 (0.0511%) were identified as having patient identification errors, compared with 58 errors (0.0015%) among 3,761,238 specimens collected in 2014, after serial interventions; this represents a 97% relative reduction. The total numbers (rates) of institutional identification errors contributed by the ED, IPD, and OPD over the 10-year period were 423 (0.1058%), 556 (0.0587%), and 44 (0.0067%) before the interventions, and 3 (0.0007%), 52 (0.0045%), and 3 (0.0001%) after the interventions, representing relative reductions of 99%, 92%, and 98%, respectively. Conclusions Accurate patient identification is a challenge of patient safety in different health settings. The data collected in our study indicate that a restrictive specimen acceptance policy, computer-generated positive identification systems, and interdisciplinary cooperation can significantly reduce patient identification errors. PMID:27494020
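A short Python sketch reproducing the error-rate and relative-reduction arithmetic reported in this abstract, using the counts given in the text.

```python
# Sketch: identification-error rate and relative reduction, with the specimen
# and error counts taken directly from the abstract.
def error_rate_pct(errors: int, specimens: int) -> float:
    return 100.0 * errors / specimens

before = error_rate_pct(1023, 2_000_345)   # ~0.0511%
after = error_rate_pct(58, 3_761_238)      # ~0.0015%
relative_reduction = 100.0 * (1.0 - after / before)
print(f"{before:.4f}% -> {after:.4f}% ({relative_reduction:.0f}% relative reduction)")
```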
WHEN HAS A MODEL BEEN SUFFICIENTLY CALIBRATED AND TESTED TO BE PUT TO EFFICIENT USE?
The question of what degree of predictive error is acceptable for environmental models is explored. Two schools of thought are presented. The universalist school would argue that it is possible to agree on general acceptance criteria for specific categories of models, particula...
Errors in laboratory medicine: practical lessons to improve patient safety.
Howanitz, Peter J
2005-10-01
Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.
NASA Technical Reports Server (NTRS)
Olorenshaw, Lex; Trawick, David
1991-01-01
The purpose was to develop a speech recognition system able to detect speech that is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Better mechanisms are provided for using speech recognition in a literacy tutor application. Using a combination of scoring normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was provided. In continuous speech, the system was shown in testing to provide above 80% correct acceptance of words, while correctly rejecting over 80% of incorrectly pronounced words.
Map-based trigonometric parallaxes of open clusters - The Pleiades
NASA Technical Reports Server (NTRS)
Gatewood, George; Castelaz, Michael; Han, Inwoo; Persinger, Timothy; Stein, John
1990-01-01
The multichannel astrometric photometer and Thaw refractor of the University of Pittsburgh's Allegheny Observatory have been used to determine the trigonometric parallax of the Pleiades star cluster. The distance determined, 150 parsecs with a standard error of 18 parsecs, places the cluster slightly farther away than generally accepted. This suggests that the basis of many estimations of the cosmic distance scale is approximately 20 percent short. The accuracy of the determination is limited by the number and choice of reference stars. With careful attention to the selection of reference stars in several Pleiades regions, it should be possible to examine differences in the photometric and trigonometric modulus at a precision of 0.1 magnitudes.
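A minimal Python sketch of the standard parallax-to-distance relation, d [pc] = 1 / p [arcsec], with first-order error propagation; the numbers are chosen to echo the 150 +/- 18 pc result quoted above.

```python
# Sketch: trigonometric parallax to distance, with first-order uncertainty
# propagation (sigma_d / d = sigma_p / p). Values chosen to match 150 +/- 18 pc.
def distance_pc(parallax_arcsec: float) -> float:
    return 1.0 / parallax_arcsec

p = 1.0 / 150.0                # parallax implied by a 150 pc distance, arcsec
sigma_p = p * (18.0 / 150.0)   # fractional error matching +/-18 pc
d = distance_pc(p)
sigma_d = d * (sigma_p / p)    # first-order propagation
print(f"distance: {d:.0f} +/- {sigma_d:.0f} pc")
```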
77 FR 65506 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-29
...We propose to supersede an existing airworthiness directive (AD) that applies to certain The Boeing Company Model 757-200 and -200PF series airplanes. The existing AD currently requires modification of the nacelle strut and wing structure, and repair of any damage found during the modification. Since we issued that AD, a compliance time error involving the optional threshold formula was discovered, which could allow an airplane to exceed the acceptable compliance time for addressing the unsafe condition. This proposed AD would specify a maximum compliance time limit that overrides the optional threshold formula results. We are proposing this AD to prevent fatigue cracking in primary strut structure and consequent reduced structural integrity of the strut.
Optimal Inspection of Imports to Prevent Invasive Pest Introduction.
Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G
2018-03-01
The United States imports more than 1 billion live plants annually-an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
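A hedged Python sketch of expected slippage under one simplified acceptance-sampling model; this is my own stand-in, not necessarily the exact formula derived in the paper. Assumptions: units are infested independently with probability p, each sampled infested unit is detected with probability d, and the whole lot is rejected on any detection.

```python
# Sketch: expected slippage under a simplified acceptance-sampling model
# (assumptions mine: independent infestation, per-unit detection probability,
# lot rejected on any detection in the sample).
def expected_slippage(lot_size: int, sample_size: int, p: float, d: float) -> float:
    p_accept = (1.0 - p * d) ** sample_size          # no detection in the sample
    infested_unsampled = p * (lot_size - sample_size)
    return p_accept * infested_unsampled

# Larger, more infested lots contribute more slippage for the same sample size,
# consistent with the paper's finding that inspection should target such lots.
print(expected_slippage(lot_size=10_000, sample_size=100, p=0.02, d=0.8))
print(expected_slippage(lot_size=500, sample_size=100, p=0.02, d=0.8))
```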
Error, contradiction and reversal in science and medicine.
Coccheri, Sergio
2017-06-01
Errors and contradictions are not per se detrimental in science and medicine. Going back to the history of philosophy, Sir Francis Bacon stated that "truth emerges more readily from error than from confusion", and more recently Popper introduced the concept of an approximate temporary truth that constitutes the engine of scientific progress. In biomedical research and in clinical practice, we have witnessed many overturnings or reversals of concepts and practices during the last decades. This phenomenon may discourage patients from accepting ordinary medical care and may favour the choice of alternative medicine. The media often amplify the disappointment over these discrepancies. In this note I recommend conveying to patients the concept of a confirmed and dependable knowledge at the present time. However, physicians should tolerate uncertainty and accept the idea that medical concepts and applications are subject to continuous progression, change and displacement. Copyright © 2017 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Structure and dating errors in the geologic time scale and periodicity in mass extinctions
NASA Technical Reports Server (NTRS)
Stothers, Richard B.
1989-01-01
Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently-dated flood-basalt volcanic horizons tends to favor the Odin time scale.
Cost-effective surgical registration using consumer depth cameras
NASA Astrophysics Data System (ADS)
Potter, Michael; Yaniv, Ziv
2016-03-01
The high costs associated with technological innovation have been previously identified as both a major contributor to the rise of health care expenses, and as a limitation for widespread adoption of new technologies. In this work we evaluate the use of two consumer grade depth cameras, the Microsoft Kinect v1 and 3DSystems Sense, as a means for acquiring point clouds for registration. These devices have the potential to replace professional grade laser range scanning devices in medical interventions that do not require sub-millimetric registration accuracy, and may do so at a significantly reduced cost. To facilitate the use of these devices we have developed a near real-time (1-4 sec/frame) rigid registration framework combining several alignment heuristics with the Iterative Closest Point (ICP) algorithm. Using nearest neighbor registration error as our evaluation criterion we found the optimal scanning distances for the Sense and Kinect to be 50-60cm and 70-80cm respectively. When imaging a skull phantom at these distances, RMS error values of 1.35mm and 1.14mm were obtained. The registration framework was then evaluated using cranial MR scans of two subjects. For the first subject, the RMS error using the Sense was 1.28 +/- 0.01 mm. Using the Kinect this error was 1.24 +/- 0.03 mm. For the second subject, whose MR scan was significantly corrupted by metal implants, the errors increased to 1.44 +/- 0.03 mm and 1.74 +/- 0.06 mm but the system nonetheless performed within acceptable bounds.
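A minimal Python sketch of the nearest-neighbor RMS error used as the evaluation criterion above, computed with a KD-tree; the point clouds here are synthetic.

```python
# Sketch: nearest-neighbor RMS registration error between a depth-camera point
# cloud and a reference surface. Array shapes are (n_points, 3), units in mm.
import numpy as np
from scipy.spatial import cKDTree

def nn_rms_error(moving_mm: np.ndarray, reference_mm: np.ndarray) -> float:
    dists, _ = cKDTree(reference_mm).query(moving_mm)  # one NN distance per point
    return float(np.sqrt(np.mean(dists ** 2)))

rng = np.random.default_rng(0)
surface = rng.uniform(0, 100, size=(5000, 3))
noisy = surface + rng.normal(scale=1.0, size=surface.shape)  # ~1 mm sensor noise
print(f"RMS error: {nn_rms_error(noisy, surface):.2f} mm")
```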
Meyer-Massetti, Carla; Krummenacher, Evelyne; Hedinger-Grogg, Barbara; Luterbacher, Stephan; Hersberger, Kurt E
2016-09-01
Background: While drug-related problems are among the most frequent adverse events in health care, little is known about their type and prevalence in home care in the current literature. The use of a Critical Incident Reporting System (CIRS), known as an economic and efficient tool to record medication errors for subsequent analysis, is widely implemented in inpatient care, but less established in ambulatory care. Recommendations on a possible format are scarce. A manual CIRS was developed based on the literature and subsequently piloted and implemented in a Swiss home care organization. Aim: The aim of this work was to implement a critical incident reporting system specifically for medication safety in home care. Results: The final CIRS form was well accepted among staff. Requiring limited resources, it allowed preliminary identification and trending of medication errors in home care. The most frequent error reports addressed medication preparation at the patients’ home, encompassing the following errors: omission (30 %), wrong dose (17.5 %) and wrong time (15 %). The most frequent underlying causes were related to working conditions (37.9 %), lacking attention (68.2 %), time pressure (22.7 %) and interruptions by patients (9.1 %). Conclusions: A manual CIRS allowed efficient data collection and subsequent analysis of medication errors in order to plan future interventions for improvement of medication safety. The development of an electronic CIRS would allow a reduction of the expenditure of time regarding data collection and analysis. In addition, it would favour the development of a national CIRS network among home care institutions.
Alecu, Ionut M.; Stan, Andrada R.; Alecu, Marius; Ciura, Andrei; Hansen, Jeremy M.; Alecu, Rodica
2007-01-01
An effective patient quality assurance (QA) program for intensity‐modulated radiation therapy (IMRT) requires accurate and realistic plan acceptance criteria—that is, action limits. Based on dose measurements performed with a commercially available two‐dimensional (2D) diode array, we analyzed 747 fluence maps resulting from a routine patient QA program for IMRT plans. The fluence maps were calculated by three different commercially available (ADAC, CMS, Eclipse) treatment planning systems (TPSs) and were delivered using 6‐MV X‐ray beams produced by linear accelerators. To establish reasonably achievable and clinically acceptable limits for the dose deviations, the agreement between the measured and calculated fluence maps was evaluated in terms of percent dose error (PDE) for a few points and percent of passing points (PPP) for the isodose distribution. The analysis was conducted for each TPS used in the study (365 ADAC, 162 CMS, 220 Eclipse), for multiple treatment sites (prostate, pelvis, head and neck, spine, rectum, anus, lung, brain), at the normalization point for 3% percentage difference (%Diff) and 3‐mm distance to agreement (DTA) criteria. We investigated the treatment‐site dependency of PPP and PDE. The results show that, at 3% and 3‐mm criteria, a 95% PPP and 3% PDE can be achieved for prostate treatments and a 90% PPP and 5% PDE are attainable for any treatment site. PACS Numbers: 87.53Dq, 87.53Tf, 87.53Xd, 87.56Fc PMID:17592459
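A hedged Python sketch of the two metrics named above, percent dose error (PDE) and percent of passing points (PPP), evaluated with a %Diff-only criterion; it deliberately omits the 3 mm distance-to-agreement (DTA) component of the full composite test used in the study.

```python
# Sketch: PDE at a point and PPP over a fluence map under a %Diff-only
# criterion (no DTA search). Dose arrays are synthetic.
import numpy as np

def pde(measured: float, calculated: float) -> float:
    return 100.0 * (measured - calculated) / calculated

def ppp(measured: np.ndarray, calculated: np.ndarray, pct_diff: float = 3.0) -> float:
    passing = np.abs(measured - calculated) <= (pct_diff / 100.0) * calculated
    return 100.0 * passing.mean()

rng = np.random.default_rng(5)
calc = rng.uniform(50, 200, size=1000)                # fluence-map dose points
meas = calc * (1 + rng.normal(0, 0.015, size=1000))   # ~1.5% measurement noise
print(f"PDE at one point: {pde(meas[0], calc[0]):.2f}%")
print(f"PPP at 3% criterion: {ppp(meas, calc):.1f}%")
```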
Phantom feet on digital radionuclide images and other scary computer tales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freitas, J.E.; Dworkin, H.J.; Dees, S.M.
1989-09-01
Malfunction of a computer-assisted digital gamma camera is reported. Despite what appeared to be adequate acceptance testing, an error in the system gave rise to switching of images and identification text. A suggestion is made for using a hot marker, which would avoid the potential error of misinterpretation of patient images.
Two Cultures in Modern Science and Technology: For Safety and Validity Does Medicine Have to Update?
Becker, Robert E
2016-01-11
Two different scientific cultures go unreconciled in modern medicine. Each culture accepts that scientific knowledge and technologies are vulnerable to and easily invalidated by methods and conditions of acquisition, interpretation, and application. How these vulnerabilities are addressed separates the 2 cultures and potentially explains medicine's difficulties eradicating errors. A traditional culture, dominant in medicine, leaves error control in the hands of individual and group investigators and practitioners. A competing modern scientific culture accepts errors as inevitable, pernicious, and pervasive sources of adverse events throughout medical research and patient care, too malignant for individuals or groups to control. Error risks to the validity of scientific knowledge and safety in patient care require systemwide programming able to support a culture in medicine grounded in tested, continually updated, widely promulgated, and uniformly implemented standards of practice for research and patient care. Experiences from successes in other sciences and industries strongly support the need for leadership from the Institute of Medicine's recommended Center for Patient Safety within the Federal Executive branch of government.
Robust control of burst suppression for medical coma
NASA Astrophysics Data System (ADS)
Westover, M. Brandon; Kim, Seong-Eun; Ching, ShiNung; Purdon, Patrick L.; Brown, Emery N.
2015-08-01
Objective. Medical coma is an anesthetic-induced state of brain inactivation, manifest in the electroencephalogram by burst suppression. Feedback control can be used to regulate burst suppression, however, previous designs have not been robust. Robust control design is critical under real-world operating conditions, subject to substantial pharmacokinetic and pharmacodynamic parameter uncertainty and unpredictable external disturbances. We sought to develop a robust closed-loop anesthesia delivery (CLAD) system to control medical coma. Approach. We developed a robust CLAD system to control the burst suppression probability (BSP). We developed a novel BSP tracking algorithm based on realistic models of propofol pharmacokinetics and pharmacodynamics. We also developed a practical method for estimating patient-specific pharmacodynamics parameters. Finally, we synthesized a robust proportional integral controller. Using a factorial design spanning patient age, mass, height, and gender, we tested whether the system performed within clinically acceptable limits. Throughout all experiments we subjected the system to disturbances, simulating treatment of refractory status epilepticus in a real-world intensive care unit environment. Main results. In 5400 simulations, CLAD behavior remained within specifications. Transient behavior after a step in target BSP from 0.2 to 0.8 exhibited a rise time (the median (min, max)) of 1.4 [1.1, 1.9] min; settling time, 7.8 [4.2, 9.0] min; and percent overshoot of 9.6 [2.3, 10.8]%. Under steady state conditions the CLAD system exhibited a median error of 0.1 [-0.5, 0.9]%; inaccuracy of 1.8 [0.9, 3.4]%; oscillation index of 1.8 [0.9, 3.4]%; and maximum instantaneous propofol dose of 4.3 [2.1, 10.5] mg kg-1. The maximum hourly propofol dose was 4.3 [2.1, 10.3] mg kg-1 h-1. Performance fell within clinically acceptable limits for all measures. Significance. A CLAD system designed using robust control theory achieves clinically acceptable performance in the presence of realistic unmodeled disturbances and in spite of realistic model uncertainty, while maintaining infusion rates within acceptable safety limits.
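A hedged Python sketch of a discrete proportional-integral controller tracking a burst suppression probability (BSP) target, in the spirit of the CLAD design above. The one-pole "patient" response, gains, and time constants are illustrative assumptions, not the paper's pharmacokinetic/pharmacodynamic models.

```python
# Sketch: PI control of BSP against an assumed first-order response.
# All model parameters (tau, gain) and controller gains are illustrative.
def simulate_pi(target=0.8, kp=4.0, ki=1.0, dt=0.1, steps=600):
    bsp, integ = 0.2, 0.0          # initial BSP and integral state
    tau, gain = 2.0, 0.05          # assumed first-order "patient" dynamics
    for _ in range(steps):
        err = target - bsp
        integ += err * dt
        u = max(0.0, kp * err + ki * integ)   # infusion rate, non-negative
        bsp += dt * (-bsp / tau + gain * u)   # Euler step of the response
        bsp = min(max(bsp, 0.0), 1.0)         # BSP is a probability
    return bsp

print(f"final BSP: {simulate_pi():.3f}")  # settles near the 0.8 target
```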
Masterson, Julie J.; Preston, Jonathan L.
2015-01-01
Purpose This archival investigation examined the relationship between preliteracy speech sound production skill (SSPS) and spelling in Grade 3 using a dataset in which children's receptive vocabulary was generally within normal limits, speech therapy was not provided until Grade 2, and phonological awareness instruction was discouraged at the time data were collected. Method Participants (N = 250), selected from the Templin Archive (Templin, 2004), varied on prekindergarten SSPS. Participants' real word spellings in Grade 3 were evaluated using a metric of linguistic knowledge, the Computerized Spelling Sensitivity System (Masterson & Apel, 2013). Relationships between kindergarten speech error types and later spellings also were explored. Results Prekindergarten children in the lowest SSPS group (7th percentile) scored poorest among articulatory subgroups on both individual spelling elements (phonetic elements, junctures, and affixes) and acceptable spelling (using relatively more omissions and illegal spelling patterns). Within the 7th percentile subgroup, there were no statistical spelling differences between those with mostly atypical speech sound errors and those with mostly typical speech sound errors. Conclusions Findings were consistent with predictions from dual route models of spelling that SSPS is one of many variables associated with spelling skill and that children with impaired SSPS are at risk for spelling difficulty. PMID:26380965
Frankenfield, David; Roth-Yousey, Lori; Compher, Charlene
2005-05-01
An assessment of energy needs is a necessary component in the development and evaluation of a nutrition care plan. The metabolic rate can be measured or estimated by equations, but estimation is by far the more common method. However, predictive equations might generate errors large enough to impact outcome. Therefore, a systematic review of the literature was undertaken to document the accuracy of predictive equations preliminary to deciding on the imperative to measure metabolic rate. As part of a larger project to determine the role of indirect calorimetry in clinical practice, an evidence team identified published articles that examined the validity of various predictive equations for resting metabolic rate (RMR) in nonobese and obese people and also in individuals of various ethnic and age groups. Articles were accepted based on defined criteria and abstracted using evidence analysis tools developed by the American Dietetic Association. Because these equations are applied by dietetics practitioners to individuals, a key inclusion criterion was research reports of individual data. The evidence was systematically evaluated, and a conclusion statement and grade were developed. Four prediction equations were identified as the most commonly used in clinical practice (Harris-Benedict, Mifflin-St Jeor, Owen, and World Health Organization/Food and Agriculture Organization/United Nations University [WHO/FAO/UNU]). Of these equations, the Mifflin-St Jeor equation was the most reliable, predicting RMR within 10% of measured in more nonobese and obese individuals than any other equation, and it also had the narrowest error range. No validation work concentrating on individual errors was found for the WHO/FAO/UNU equation. Older adults and US-residing ethnic minorities were underrepresented both in the development of predictive equations and in validation studies. The Mifflin-St Jeor equation is more likely than the other equations tested to estimate RMR to within 10% of that measured, but noteworthy errors and limitations exist when it is applied to individuals and possibly when it is generalized to certain age and ethnic groups. RMR estimation errors would be eliminated by valid measurement of RMR with indirect calorimetry, using an evidence-based protocol to minimize measurement error. The Expert Panel advises clinical judgment regarding when to accept estimated RMR using predictive equations in any given individual. Indirect calorimetry may be an important tool when, in the judgment of the clinician, the predictive methods fail an individual in a clinically relevant way. For members of groups that are greatly underrepresented by existing validation studies of predictive equations, a high level of suspicion regarding the accuracy of the equations is warranted.
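A minimal Python sketch of the Mifflin-St Jeor equation as commonly published (kcal/day, with weight in kg, height in cm, age in years), together with the +/-10% accuracy band the review uses as its criterion.

```python
# Sketch: the Mifflin-St Jeor resting metabolic rate (RMR) equation,
# as commonly published: RMR = 10*W + 6.25*H - 5*A + 5 (men) or -161 (women).
def mifflin_st_jeor(weight_kg: float, height_cm: float, age_yr: float,
                    male: bool) -> float:
    s = 5.0 if male else -161.0
    return 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr + s

rmr = mifflin_st_jeor(70.0, 175.0, 40.0, male=True)
print(f"estimated RMR: {rmr:.0f} kcal/day")          # 1599 kcal/day
print(f"+/-10% band: {0.9*rmr:.0f}-{1.1*rmr:.0f}")   # the review's accuracy criterion
```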
Maintaining data integrity in a rural clinical trial.
Van den Broeck, Jan; Mackay, Melanie; Mpontshane, Nontobeko; Kany Kany Luabeya, Angelique; Chhagan, Meera; Bennish, Michael L
2007-01-01
Clinical trials conducted in rural resource-poor settings face special challenges in ensuring quality of data collection and handling. The variable nature of these challenges, ways to overcome them, and the resulting data quality are rarely reported in the literature. To provide a detailed example of establishing local data handling capacity for a clinical trial conducted in a rural area, highlight challenges and solutions in establishing such capacity, and to report the data quality obtained by the trial. We provide a descriptive case study of a data system for biological samples and questionnaire data, and the problems encountered during its implementation. To determine the quality of data we analyzed test-retest studies using Kappa statistics of inter- and intra-observer agreement on categorical data. We calculated Technical Errors of Measurement of anthropometric measurements, audit trail analysis was done to assess error correction rates, and residual error rates were calculated by database-to-source document comparison. Initial difficulties included the unavailability of experienced research nurses, programmers and data managers in this rural area and the difficulty of designing new software tools and a complex database while making them error-free. National and international collaboration and external monitoring helped ensure good data handling and implementation of good clinical practice. Data collection, fieldwork supervision and query handling depended on streamlined transport over large distances. The involvement of a community advisory board was helpful in addressing cultural issues and establishing community acceptability of data collection methods. Data accessibility for safety monitoring required special attention. Kappa values and Technical Errors of Measurement showed acceptable values. Residual error rates in key variables were low. The article describes the experience of a single-site trial and does not address challenges particular to multi-site trials. Obtaining and maintaining data integrity in rural clinical trials is feasible, can result in acceptable data quality and can be used to develop capacity in developing country sites. It does, however, involve special challenges and requirements.
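A short Python sketch of the Technical Error of Measurement (TEM) for duplicate measurements, one of the data-quality metrics named above, using the standard anthropometric formula TEM = sqrt(sum(d_i^2) / (2n)); the readings are made-up examples.

```python
# Sketch: TEM over n paired (test-retest) measurements.
import math

def tem(first: list[float], second: list[float]) -> float:
    n = len(first)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(first, second)) / (2 * n))

obs1 = [10.2, 11.5, 9.8, 10.9]   # e.g. repeated skinfold readings, mm
obs2 = [10.4, 11.3, 10.1, 10.8]
print(f"TEM: {tem(obs1, obs2):.3f} mm")
```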
Type I and Type II error concerns in fMRI research: re-balancing the scale
Cunningham, William A.
2009-01-01
Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
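A hedged Python sketch of the joint intensity + cluster-extent threshold the article recommends (voxelwise P < 0.005 with a 10-voxel extent), applied to a synthetic P map; the simulation setup is illustrative, not the authors' code.

```python
# Sketch: keep only suprathreshold voxels (P < 0.005) that belong to
# connected clusters of at least 10 voxels.
import numpy as np
from scipy import ndimage

def cluster_threshold(p_map: np.ndarray, p_thresh=0.005, min_voxels=10) -> np.ndarray:
    supra = p_map < p_thresh                 # voxelwise intensity threshold
    labels, n = ndimage.label(supra)         # connected suprathreshold clusters
    keep = np.zeros_like(supra)
    for i in range(1, n + 1):
        cluster = labels == i
        if cluster.sum() >= min_voxels:      # cluster-extent threshold
            keep |= cluster
    return keep

rng = np.random.default_rng(1)
p_map = rng.uniform(size=(32, 32, 32))
p_map[10:14, 10:14, 10:14] = 1e-4            # a 64-voxel "true" effect
print(cluster_threshold(p_map).sum())        # isolated false alarms are removed
```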
NASA Astrophysics Data System (ADS)
House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor
2017-03-01
PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras can capture 3D pose information at several magnitudes lower cost, size, and weight. We propose to use Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and limited range and tracking accuracy is acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparing to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align CT image with phantom using landmark registration and display the CT image overlaid on the optical image. RESULTS: Accuracy of the camera yielded a median position error of 3.3mm (95th percentile 6.7mm) and orientation error of 1.6° (95th percentile 4.3°) in a 20x16x10cm workspace, constantly maintaining proper marker orientation. The model and surface correctly aligned demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
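A minimal Python sketch of the accuracy summary quoted above, reporting median and 95th-percentile position errors as Euclidean distances between tracked and reference positions; the data here are synthetic.

```python
# Sketch: median and 95th-percentile tracking error (mm) between camera poses
# and a reference tracker. Positions are synthetic stand-ins.
import numpy as np

def error_summary(errors_mm: np.ndarray) -> tuple[float, float]:
    return float(np.median(errors_mm)), float(np.percentile(errors_mm, 95))

rng = np.random.default_rng(6)
ref = rng.uniform(0, 200, size=(500, 3))           # reference positions, mm
sr300 = ref + rng.normal(0, 2.0, size=(500, 3))    # camera with ~2 mm noise
errors = np.linalg.norm(sr300 - ref, axis=1)
med, p95 = error_summary(errors)
print(f"median error: {med:.1f} mm, 95th percentile: {p95:.1f} mm")
```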
Performance of two updated blood glucose monitoring systems: an evaluation following ISO 15197:2013.
Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Jendrike, Nina; Haug, Cornelia; Freckmann, Guido
2016-05-01
Objective For patients with diabetes, regular self-monitoring of blood glucose (SMBG) is essential to ensure adequate glycemic control. Therefore, accurate and reliable blood glucose measurements with SMBG systems are necessary. The international standard ISO 15197 describes requirements for SMBG systems, such as limits within which 95% of glucose results have to fall to reach acceptable system accuracy. The 2013 version of this standard sets higher demands, especially regarding system accuracy, than the currently still valid edition. ISO 15197 can be applied by manufacturers to receive a CE mark for their system. Research design and methods This study was an accuracy evaluation following ISO 15197:2013 section 6.3 of two recently updated SMBG systems (Contour and Contour TS; Bayer Consumer Care AG, Basel, Switzerland) with an improved algorithm to investigate whether the systems fulfill the requirements of the new standard. For this purpose, capillary blood samples of approximately 100 participants were measured with three test strip lots of both systems and deviations from glucose values obtained with a hexokinase-based comparison method (Cobas Integra 400 plus; Roche Instrument Center, Rotkreuz, Switzerland) were determined. Percentages of values within the acceptance criteria of ISO 15197:2013 were calculated. This study was registered at clinicaltrials.gov (NCT02358408). Main outcome Both updated systems fulfilled the system accuracy requirements of ISO 15197:2013, as 98.5% to 100% of the results were within the stipulated limits. Furthermore, all results were within the clinically non-critical zones A and B of the consensus error grid for type 1 diabetes. Conclusions The technical improvement of the systems ensured compliance with ISO 15197 in the hands of healthcare professionals, even in its more stringent 2013 version. Alternative presentation of system accuracy results in radar plots provides additional information with certain advantages. In addition, the surveillance error grid offers a modern tool to assess a system's clinical performance.
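A hedged Python sketch of the ISO 15197:2013 system-accuracy check, as the criterion is commonly summarized: at least 95% of meter results within +/-15 mg/dL of the reference below 100 mg/dL, or within +/-15% at or above it. The exact wording of the standard should be consulted before relying on this.

```python
# Sketch: ISO 15197:2013 accuracy criterion as commonly summarized
# (+/-15 mg/dL below 100 mg/dL, +/-15% at or above; 95% must pass).
def within_iso_limits(meter: float, reference: float) -> bool:
    if reference < 100.0:                    # mg/dL
        return abs(meter - reference) <= 15.0
    return abs(meter - reference) <= 0.15 * reference

def passes_iso(meter_vals, reference_vals) -> bool:
    hits = [within_iso_limits(m, r) for m, r in zip(meter_vals, reference_vals)]
    return sum(hits) / len(hits) >= 0.95

print(within_iso_limits(92.0, 85.0))    # True: within 15 mg/dL below 100 mg/dL
print(within_iso_limits(110.0, 130.0))  # False: off by 20 mg/dL where 15% = 19.5
```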
Mjøsund, Hanne Leirbekk; Boyle, Eleanor; Kjaer, Per; Mieritz, Rune Mygind; Skallgård, Tue; Kent, Peter
2017-03-21
Wireless, wearable, inertial motion sensor technology introduces new possibilities for monitoring spinal motion and pain in people during their daily activities of work, rest and play. There are many types of these wireless devices currently available, but the precision in measurement and the magnitude of measurement error from such devices is often unknown. This study investigated the concurrent validity of one inertial motion sensor system (ViMove) for its ability to measure lumbar inclination motion, compared with the Vicon motion capture system. To mimic the variability of movement patterns in a clinical population, a sample of 34 people was included: 18 with low back pain and 16 without low back pain. ViMove sensors were attached to each participant's skin at spinal levels T12 and S2, and Vicon surface markers were attached to the ViMove sensors. Three repetitions of end-range flexion inclination, extension inclination and lateral flexion inclination to both sides while standing were measured by both systems concurrently, with short rest periods in between. Measurement agreement through the whole movement range was analysed using a multilevel mixed-effects regression model to calculate the root mean squared errors, and the limits of agreement were calculated using the Bland-Altman method. We calculated root mean squared errors (standard deviation) of 1.82° (±1.00°) in flexion inclination, 0.71° (±0.34°) in extension inclination, 0.77° (±0.24°) in right lateral flexion inclination and 0.98° (±0.69°) in left lateral flexion inclination. 95% limits of agreement ranged between -3.86° and 4.69° in flexion inclination, -2.15° and 1.91° in extension inclination, -2.37° and 2.05° in right lateral flexion inclination and -3.11° and 2.96° in left lateral flexion inclination. We found a clinically acceptable level of agreement between these two methods for measuring standing lumbar inclination motion in these two cardinal movement planes. Further research should investigate the ViMove system's ability to measure lumbar motion in more complex 3D functional movements and to measure changes of movement patterns related to treatment effects.
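A minimal Python sketch of the two agreement statistics reported above, RMS error and Bland-Altman 95% limits of agreement, without the multilevel modelling the authors layered on top; the data are synthetic.

```python
# Sketch: RMSE and Bland-Altman 95% limits of agreement between two systems.
import numpy as np

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

def limits_of_agreement(a: np.ndarray, b: np.ndarray) -> tuple[float, float]:
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias - 1.96 * sd, bias + 1.96 * sd

rng = np.random.default_rng(2)
vicon = rng.uniform(0, 60, 200)               # reference inclination, degrees
vimove = vicon + rng.normal(0.2, 1.0, 200)    # sensor with small bias + noise
lo, hi = limits_of_agreement(vimove, vicon)
print(f"RMSE: {rmse(vimove, vicon):.2f} deg, LoA: {lo:.2f} to {hi:.2f} deg")
```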
Chemical and Thermodynamic Properties at High Temperatures: A Symposium
NASA Technical Reports Server (NTRS)
Walker, Raymond F.
1961-01-01
This book contains the program and all available abstracts of the 90 invited and contributed papers to be presented at the IUPAC Symposium on Chemical and Thermodynamic Properties at High Temperatures. The Symposium will be held in conjunction with the XVIIIth IUPAC Congress, Montreal, August 6-12, 1961. It has been organized by the Subcommissions on Condensed States and on Gaseous States of the Commission on High Temperatures and Refractories and by the Subcommission on Experimental Thermodynamics of the Commission on Chemical Thermodynamics, acting in conjunction with the Organizing Committee of the IUPAC Congress. All inquiries concerning participation in the Symposium should be directed to: Secretary, XVIIIth International Congress of Pure and Applied Chemistry, National Research Council, Ottawa, Canada. Owing to the limited time and facilities available for the preparation and printing of the book, it has not been possible to refer the proofs of the abstracts to the authors for checking. Furthermore, it has not been possible to subject the manuscripts to a very thorough editorial examination. Some obvious errors in the manuscripts have been corrected; other errors undoubtedly have been introduced. Figures have been redrawn only when such a step was essential for reproduction purposes. Sincere apologies are offered to authors and readers for any errors which remain; however, in the circumstances neither the IUPAC Commissions who organized the Symposium, nor the U.S. Government Agencies who assisted in the preparation of this book, can accept responsibility for the errors.
Elsäßer, Amelie; Regnstrom, Jan; Vetter, Thorsten; Koenig, Franz; Hemmings, Robert James; Greco, Martina; Papaluca-Amati, Marisa; Posch, Martin
2014-10-02
Since the first methodological publications on adaptive study design approaches in the 1990s, the application of these approaches in drug development has raised increasing interest among academia, industry and regulators. The European Medicines Agency (EMA) as well as the Food and Drug Administration (FDA) have published guidance documents addressing the potentials and limitations of adaptive designs in the regulatory context. Since there is limited experience in the implementation and interpretation of adaptive clinical trials, early interaction with regulators is recommended. The EMA offers such interactions through scientific advice and protocol assistance procedures. We performed a text search of scientific advice letters issued between 1 January 2007 and 8 May 2012 that contained relevant key terms. Letters containing questions related to adaptive clinical trials in phases II or III were selected for further analysis. From the selected letters, important characteristics of the proposed design and its context in the drug development program, as well as the responses of the Committee for Human Medicinal Products (CHMP)/Scientific Advice Working Party (SAWP), were extracted and categorized. For 41 more recent procedures (1 January 2009 to 8 May 2012), additional details of the trial design and the CHMP/SAWP responses were assessed. In addition, case studies are presented as examples. Over a range of 5½ years, 59 scientific advices were identified that address adaptive study designs in phase II and phase III clinical trials. Almost all were proposed as confirmatory phase III or phase II/III studies. The most frequently proposed adaptation was sample size reassessment, followed by dropping of treatment arms and population enrichment. While 12 (20%) of the 59 proposals for an adaptive clinical trial were not accepted, the great majority of proposals were accepted (15, 25%) or conditionally accepted (32, 54%). In the more recent 41 procedures, the most frequent concerns raised by CHMP/SAWP were insufficient justifications of the adaptation strategy, type I error rate control and bias. For the majority of proposed adaptive clinical trials, an overall positive opinion was given albeit with critical comments. Type I error rate control, bias and the justification of the design are common issues raised by the CHMP/SAWP.
A Cycle of Redemption in a Medical Error Disclosure and Apology Program.
Carmack, Heather J
2014-06-01
Physicians accept that they have an ethical responsibility to disclose and apologize for medical errors; however, when physicians make a medical error, they are often not given the opportunity to disclose and apologize for the mistake. In this article, I explore how one hospital negotiated the aftermath of medical mistakes through a disclosure and apology program. Specifically, I used Burke's cycle of redemption to position the hospital's disclosure and apology program as a redemption process and explore how the hospital physicians and administrators worked through the experiences of disclosing and apologizing for medical errors. © The Author(s) 2014.
Weighted linear regression using D²H and D² as the independent variables
Hans T. Schreuder; Michael S. Williams
1998-01-01
Several error structures for weighted regression equations used for predicting volume were examined for 2 large data sets of felled and standing loblolly pine trees (Pinus taeda L.). The generally accepted model with variance of error proportional to the value of the covariate squared (D²H = diameter squared times height or D...
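A hedged Python sketch of weighted least squares under the error structure named above: with Var(e_i) proportional to x_i², the WLS weights are w_i = 1/x_i². The no-intercept volume model and all numbers are illustrative assumptions, not the paper's fitted equations.

```python
# Sketch: WLS with variance proportional to the squared covariate.
# For a no-intercept model y = b*x, the weighted slope is
# b = sum(w*x*y) / sum(w*x^2) with w = 1/x^2. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
d2h = rng.uniform(0.5, 5.0, 100)                      # covariate, e.g. D^2 * H
volume = 0.4 * d2h + d2h * rng.normal(0, 0.05, 100)   # noise SD grows with x

w = 1.0 / d2h ** 2
b = np.sum(w * d2h * volume) / np.sum(w * d2h ** 2)
print(f"weighted slope estimate: {b:.3f}")            # close to the true 0.4
```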
Minimizing Experimental Error in Thinning Research
C. B. Briscoe
1964-01-01
Many diverse approaches have been made to prescribing and evaluating thinnings on an objective basis. None of the techniques proposed has been widely accepted. Indeed, none has been proven superior to the others, nor even widely applicable. There are at least two possible reasons for this: none of the techniques suggested is of any general utility and/or experimental error...
ERIC Educational Resources Information Center
Rice, Mabel L.; Wexler, Kenneth; Redmond, Sean M.
1999-01-01
This longitudinal study evaluated grammatical judgments of "well formedness" of children (N=21) with specific language impairment (SLI). Comparison with two control groups found that children with SLI rejected morphosyntactic errors they didn't commit but accepted errors they were likely to make. Findings support the extended optional infinitive…
Running Speed Can Be Predicted from Foot Contact Time during Outdoor over Ground Running.
de Ruiter, Cornelis J; van Oeveren, Ben; Francke, Agnieta; Zijlstra, Patrick; van Dieen, Jaap H
2016-01-01
The number of validation studies of commercially available foot pods that provide estimates of running speed is limited, and these studies have been conducted under laboratory conditions. Moreover, the internal data handling and algorithms used to derive speed from these pods are proprietary and thereby unclear. The present study investigates the use of foot contact time (CT) for running speed estimation, which potentially can be used in addition to the global positioning system (GPS) in situations where GPS performance is limited. CT was measured with tri-axial inertial sensors attached to the feet of 14 runners during natural over ground outdoor running, under optimized conditions for GPS. The individual relationships between running speed and CT were established during short runs at different speeds on two days. These relations were subsequently used to predict instantaneous speed during a straight-line 4 km run with a single turning point halfway. Stopwatch-derived speed, measured for each of 32 consecutive 125 m intervals during the 4 km runs, was used as reference. Individual speed-CT relations were strong (r² > 0.96 for all trials) and consistent between days. During the 4 km runs, the median error (range) in speed predicted from CT, 2.5% (5.2), was higher (P<0.05) than that for GPS, 1.6% (0.8). However, around the turning point and during the first and last 125 m intervals, the error for GPS speed increased to 5.0% (4.5) and became greater (P<0.05) than the error of speed predicted from CT: 2.7% (4.4). Small speed fluctuations during the 4 km runs were adequately monitored with both methods: CT and GPS respectively explained 85% and 73% of the total speed variance during the 4 km runs. In conclusion, running speed estimates based on speed-CT relations have acceptable accuracy and could serve as a backup or substitute for GPS during tarmac running on flat terrain whenever GPS performance is limited.
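A hedged Python sketch of the calibrate-then-predict workflow described above: fit an individual speed-CT relation on short calibration runs, then predict speed from a new contact time. The simple linear form and all numbers are my illustrative assumptions, not the study's data or model.

```python
# Sketch: individual speed-contact-time calibration and prediction.
# The linear form and the calibration values are assumed examples.
import numpy as np

calib_ct = np.array([0.32, 0.28, 0.25, 0.22, 0.20])   # contact time, s
calib_speed = np.array([2.8, 3.3, 3.9, 4.6, 5.2])     # stopwatch speed, m/s

slope, intercept = np.polyfit(calib_ct, calib_speed, 1)  # per-runner calibration

def predict_speed(ct_s: float) -> float:
    return slope * ct_s + intercept

print(f"predicted speed at CT=0.24 s: {predict_speed(0.24):.2f} m/s")
```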
Chen, Yi-Miau; Huang, Yi-Jing; Huang, Chien-Yu; Lin, Gong-Hong; Liaw, Lih-Jiun; Lee, Shih-Chieh; Hsieh, Ching-Lin
2017-10-01
The 3-point Berg Balance Scale (BBS-3P) and 3-point Postural Assessment Scale for Stroke Patients (PASS-3P) were simplified from the BBS and PASS to overcome the complex scoring systems. The BBS-3P and PASS-3P were more feasible in busy clinical practice and showed similarly sound validity and responsiveness to the original measures. However, the reliability of the BBS-3P and PASS-3P is unknown, limiting their utility and the interpretability of scores. We aimed to examine the test-retest reliability and minimal detectable change (MDC) of the BBS-3P and PASS-3P in patients with stroke. Cross-sectional study. The rehabilitation departments of a medical center and a community hospital. A total of 51 chronic stroke patients (64.7% male). Both balance measures were administered twice, 7 days apart. The test-retest reliability of both the BBS-3P and PASS-3P was examined by intraclass correlation coefficients (ICC). The MDC and its percentage over the total score (MDC%) of each measure were calculated to examine the random measurement errors. The ICC values of the BBS-3P and PASS-3P were 0.99 and 0.97, respectively. The MDC% (MDC) of the BBS-3P and PASS-3P were 9.1% (5.1 points) and 8.4% (3.0 points), respectively, indicating that both measures had small and acceptable random measurement errors. Our results showed that both the BBS-3P and the PASS-3P had good test-retest reliability, with small and acceptable random measurement error. These two simplified 3-level balance measures can provide reliable results over time. Our findings support the repeated administration of the BBS-3P and PASS-3P to monitor the balance of patients with stroke. The MDC values can help clinicians and researchers interpret the change scores more precisely.
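A minimal Python sketch of how an MDC follows from test-retest reliability, using the standard formulas SEM = SD x sqrt(1 - ICC) and MDC95 = 1.96 x sqrt(2) x SEM; the SD and scale length here are illustrative, not the study's raw data.

```python
# Sketch: minimal detectable change from ICC and baseline SD.
import math

def mdc95(sd_baseline: float, icc: float) -> float:
    sem = sd_baseline * math.sqrt(1.0 - icc)   # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem

# Illustrative numbers: SD of 10 points, ICC = 0.97, a 56-point scale
mdc = mdc95(sd_baseline=10.0, icc=0.97)
print(f"MDC95: {mdc:.1f} points; MDC% of a 56-point scale: {100*mdc/56:.1f}%")
```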
Moon, Jordan R; Eckerson, Joan M; Tobkin, Sarah E; Smith, Abbie E; Lockwood, Christopher M; Walter, Ashley A; Cramer, Joel T; Beck, Travis W; Stout, Jeffrey R
2009-01-01
The purpose of the present study was to determine the validity of various laboratory methods for estimating percent body fat (%fat) in NCAA Division I college female athletes (n = 29; 20 +/- 1 year). Body composition was assessed via hydrostatic weighing (HW), air displacement plethysmography (ADP), and dual-energy X-ray absorptiometry (DXA), and estimates of %fat derived using 4-compartment (4C), 3C, and 2C models were compared to a criterion 5C model that included bone mineral content, body volume (BV), total body water, and soft tissue mineral. The Wang-4C and the Siri-3C models produced nearly identical values compared to the 5C model (r > 0.99, total error (TE) < 0.40%fat). For the remaining laboratory methods, constant error values (CE) ranged from -0.04%fat (HW-Siri) to -3.71%fat (DXA); r values ranged from 0.89 (ADP-Siri, ADP-Brozek) to 0.93 (DXA); standard error of estimate values ranged from 1.78%fat (DXA) to 2.19%fat (ADP-Siri, ADP-Brozek); and TE values ranged from 2.22%fat (HW-Brozek) to 4.90%fat (DXA). The limits of agreement for DXA (-10.10 to 2.68%fat) were the largest, with a significant trend of -0.43 (P < 0.05). With the exception of DXA, all of the equations resulted in acceptable TE values (<3.08%fat). However, the results for individual estimates of %fat using the Brozek equation indicated that the 2C models that derived BV from ADP and HW overestimated (5.38%, 3.65%) and underestimated (5.19%, 4.88%) %fat, respectively. The acceptable TE values for both HW and ADP suggest that these methods are valid for estimating %fat in college female athletes; however, the Wang-4C and Siri-3C models should be used to identify individual estimates of %fat in this population.
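A minimal Python sketch of the Siri and Brozek two-compartment equations referenced above, which convert a measured body density (from HW or ADP) into percent body fat; the density value is an illustrative example.

```python
# Sketch: Siri and Brozek 2C conversions from body density (kg/L) to %fat.
def siri_pct_fat(density: float) -> float:
    return 495.0 / density - 450.0

def brozek_pct_fat(density: float) -> float:
    return 457.0 / density - 414.2

db = 1.055                                      # illustrative body density
print(f"Siri:   {siri_pct_fat(db):.1f} %fat")   # ~19.2
print(f"Brozek: {brozek_pct_fat(db):.1f} %fat") # ~19.0
```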
In Search of Grid Converged Solutions
NASA Technical Reports Server (NTRS)
Lockard, David P.
2010-01-01
Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
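A minimal Python sketch of the standard grid refinement check implicit above: estimate the observed order of accuracy from solutions on three systematically refined grids with refinement ratio r, via p = ln((f3 - f2)/(f2 - f1)) / ln(r); the data are synthetic.

```python
# Sketch: observed order of accuracy from a three-grid refinement study.
import math

def observed_order(f_fine: float, f_med: float, f_coarse: float, r: float) -> float:
    return math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)

# Synthetic second-order behavior: f(h) = f_exact + C*h^2 with h = 1, 2, 4
f_exact, c = 1.0, 0.01
f1, f2, f3 = (f_exact + c * h**2 for h in (1.0, 2.0, 4.0))
print(f"observed order: {observed_order(f1, f2, f3, r=2.0):.2f}")  # 2.00
```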
Plutonium Critical Mass Curve Comparison to Mass at Upper Subcritical Limit (USL) Using Whisper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alwin, Jennifer Louise; Zhang, Ning
Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies with the MCNP® Monte Carlo radiation transport package. Standard approaches to validation rely on the selection of benchmarks based upon expert judgment. Whisper uses sensitivity/uncertainty (S/U) methods to select benchmarks relevant to a particular application or set of applications being analyzed. Using these benchmarks, Whisper computes a calculational margin. Whisper attempts to quantify the margin of subcriticality (MOS) from errors in software and uncertainties in nuclear data. The combination of the Whisper-derived calculational margin and MOS comprises the baseline upper subcritical limit (USL), to which an additional margin may be applied by the nuclear criticality safety analyst as appropriate to ensure subcriticality. A series of critical mass curves for plutonium, similar to those found in Figure 31 of LA-10860-MS, have been generated using MCNP6.1.1 and the iterative parameter study software, WORM_Solver. The baseline USL for each of the data points of the curves was then computed using Whisper 1.1. The USL was then used to determine the equivalent mass for the plutonium metal-water system. ANSI/ANS-8.1 states that it is acceptable to use handbook data, such as the data directly from LA-10860-MS, as it is already considered validated (Section 4.3.4: "Use of subcritical limit data provided in ANSI/ANS standards or accepted reference publications does not require further validation."). This paper attempts to take a novel approach to visualize traditional critical mass curves and allows comparison with the amount of mass for which keff is equal to the USL (calculational margin + margin of subcriticality). However, the intent is to plot the critical mass data along with the USL, not to suggest that already accepted handbook data should have new and more rigorous requirements for validation.
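A hedged Python sketch of the comparison the paper visualizes: locating the mass at which a computed keff curve crosses the USL. The curve points and USL value here are made up for illustration; np.interp requires the interpolation abscissa (keff) to be increasing, which holds over this monotonic segment.

```python
# Sketch: interpolating the mass at which keff reaches the USL.
# All numbers are illustrative, not results from the paper.
import numpy as np

mass_kg = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])     # Pu mass, illustrative
keff = np.array([0.62, 0.78, 0.88, 0.95, 1.00, 1.04])  # increasing with mass
usl = 0.98                                              # e.g. from Whisper

mass_at_usl = np.interp(usl, keff, mass_kg)             # mass where keff = USL
print(f"mass at USL ({usl}): {mass_at_usl:.2f} kg")     # between 4 and 5 kg
```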
Biswas, Animesh; Rahman, Fazlur; Eriksson, Charli; Halim, Abdul; Dalal, Koustuv
2016-08-23
Social Autopsy (SA) is an innovative strategy in which a trained facilitator leads community groups through a structured, standardised analysis of the physical, environmental, cultural and social factors contributing to a serious non-fatal health event or death. The discussion stimulated by the formal process of SA determines the causes and suggests preventative measures that are appropriate and achievable in the community. Here we explored individual experiences of SA, including acceptance and participant learning, and its effect on rural communities in Bangladesh. The present study explored the experiences gained while undertaking SA of maternal and neonatal deaths and stillbirths in rural Bangladesh, using qualitative assessment of documents, observations, focus group discussions, group discussions and in-depth interviews by content and thematic analyses. Each community's maternal and neonatal death was a unique, sad story. SA undertaken by government field-level health workers was well accepted by rural communities. SA had the capability to explore the social reasons behind the medical cause of the death without apportioning blame to any individual or group. SA was a useful instrument to raise awareness and encourage community responses to errors within the society that contributed to the death. People participating in SA showed commitment to future preventative measures and devised their own solutions for the future prevention of maternal and neonatal deaths. SA highlights societal errors and promotes discussion around maternal or newborn death. SA is an effective means to deliver important preventative messages and to sensitise the community to death issues. Importantly, the community itself is enabled to devise future strategies to avert maternal and neonatal deaths in Bangladesh.
GPS FOM Chimney Analysis using Generalized Extreme Value Distribution
NASA Technical Reports Server (NTRS)
Ott, Rick; Frisbee, Joe; Saha, Kanan
2004-01-01
An objective of a statistical analysis is often to estimate a limit value, such as a 3-sigma 95% confidence upper limit, from a data sample. The generalized extreme value (GEV) distribution method can be profitably employed in many situations for such an estimate. It is well known that, according to the central limit theorem, the mean value of a large data set is normally distributed irrespective of the distribution of the data from which the mean value is derived. In a somewhat similar fashion, the extreme value of a data set often has a distribution that can be formulated with a generalized distribution. In space shuttle entry with 3-string GPS navigation, the figure of merit (FOM) value gives a measure of GPS navigated state accuracy. A GPS navigated state with a FOM of 6 or higher is deemed unacceptable and is said to form a FOM chimney: a period of time during which the FOM value stays higher than 5. A longer period with FOM of 6 or higher causes the navigated state to accumulate more error for lack of state updates. For an acceptable landing it is imperative that the state error remain low; hence, at low altitude during entry, GPS data with FOM greater than 5 must not last more than 138 seconds. To test GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential. The extreme value statistical technique is applied to analyze high-value FOM chimneys. The maximum likelihood method is used to determine parameters that characterize the GEV distribution, and then the limit value statistics are estimated.
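As an illustration of the approach described (not the authors' actual code), the sketch below fits a GEV distribution by maximum likelihood to synthetic per-run chimney maxima using SciPy, then reads off an upper limit value; the data are placeholders.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic stand-in for the longest FOM>5 chimney duration per simulated entry.
rng = np.random.default_rng(0)
chimney_maxima = rng.gumbel(loc=60.0, scale=15.0, size=200)  # seconds

# SciPy's maximum-likelihood fit returns shape c, location, and scale.
c, loc, scale = genextreme.fit(chimney_maxima)

# e.g. the 95th percentile of the fitted extreme-value distribution
limit_95 = genextreme.ppf(0.95, c, loc=loc, scale=scale)
print(f"shape={c:.3f} loc={loc:.1f} scale={scale:.1f} 95% limit={limit_95:.1f} s")
```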
Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco
2008-09-01
This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two teams, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each team performed the registration jointly and saved it. The two solutions were averaged to obtain the gold standard registration. A new set of estimators was defined to identify translation and rotation errors in the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI, and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and registration errors determined. The LC algorithm proved accurate in CT-MRI registrations in phantoms, but exceeded limiting values in 3 of 10 patients. The MI algorithm proved accurate in CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one case in CT-MRI and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.
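For context, the mutual information family of similarity measures evaluated above can be sketched in a few lines; this toy computes Studholme's normalized mutual information from a joint histogram, with synthetic arrays standing in for resampled image volumes (not the Syntegra implementation).

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    # Studholme's NMI: (H(A) + H(B)) / H(A, B)
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(1)
a = rng.normal(size=(64, 64))
b = 0.7 * a + 0.3 * rng.normal(size=(64, 64))   # partially correlated "MRI"
print(f"NMI = {normalized_mutual_information(a, b):.3f}")
```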
Likitlersuang, Jirapat; Leineweber, Matthew J; Andrysek, Jan
2017-10-01
Thin film force sensors are commonly used within biomechanical systems, and at the interface of the human body and medical and non-medical devices. However, limited information is available about their performance in such applications. The aims of this study were to evaluate and determine ways to improve the performance of thin film (FlexiForce) sensors at the body/device interface. Using a custom apparatus designed to load the sensors under simulated body/device conditions, two aspects were explored relating to sensor calibration and application. The findings revealed accuracy errors of 23.3±17.6% for force measurements at the body/device interface with conventional techniques of sensor calibration and application. Applying a thin rigid disc between the sensor and human body and calibrating the sensor using compliant surfaces was found to substantially reduce measurement errors to 2.9±2.0%. The use of alternative calibration and application procedures is recommended to gain acceptable measurement performance from thin film force sensors in body/device applications.
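A minimal sketch of the kind of calibration step discussed, assuming a simple linear sensor response; the voltage/force pairs are invented stand-ins for a run against a reference load cell, not the study's data.

```python
import numpy as np

volts = np.array([0.10, 0.55, 1.02, 1.48, 1.95, 2.41])   # sensor output (V)
force = np.array([0.0,  9.8, 19.6, 29.4, 39.2, 49.0])    # reference load (N)

coeffs = np.polyfit(volts, force, deg=1)                  # linear least squares
calibrate = np.poly1d(coeffs)

reading = 1.20
print(f"{reading} V -> {calibrate(reading):.1f} N")

# Percent accuracy error of the fit against the nonzero reference points:
errors = 100 * (calibrate(volts[1:]) - force[1:]) / force[1:]
print("percent errors:", np.round(errors, 2))
```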
Interest and limits of the six sigma methodology in medical laboratory.
Scherrer, Florian; Bouilloux, Jean-Pierre; Calendini, Ors'Anton; Chamard, Didier; Cornu, François
2017-02-01
The mandatory accreditation of clinical laboratories in France provides an incentive to develop real tools to measure the performance of management methods and to optimize the management of internal quality controls. Six sigma methodology is an approach commonly applied to software quality management and discussed in numerous publications. This paper discusses the primary factors that influence the sigma index (the choice of the total allowable error, the approach used to address bias) and compares the performance of different analyzers on the basis of the sigma index. The six sigma strategy can be applied to the policy management of internal quality control in a laboratory; a comparison of four analyzers demonstrates that there is no single superior analyzer in clinical chemistry. Similar sigma results are obtained using approaches toward bias based on the EQAS or the IQC. The main difficulty in using the six sigma methodology lies in the absence of official guidelines for the definition of the acceptable total error. Despite this drawback, our comparison study suggests that difficulties with defined analytes do not vary with the analyzer used.
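The sigma index discussed here is conventionally computed as sigma = (TEa - |bias|) / CV, with all terms in percent; a one-function sketch with illustrative QC numbers (not values from the paper):

```python
# Sigma metric for internal quality control: higher is better; a common
# interpretation treats sigma >= 6 as world-class and sigma < 3 as poor.
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. an analyte with a 10% total allowable error, 1.5% bias, 2.0% CV:
print(f"sigma = {sigma_metric(10.0, 1.5, 2.0):.2f}")   # -> 4.25
```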
NASA Astrophysics Data System (ADS)
Yazdani, Mohammad Reza; Setayeshi, Saeed; Arabalibeik, Hossein; Akbari, Mohammad Esmaeil
2017-05-01
Intraoperative electron radiation therapy (IOERT), which uses electron beams to irradiate the target directly during surgery, has the advantage of delivering a homogeneous dose to a controlled layer of tissue. Since the dose falls off quickly below the target thickness, the underlying normal tissues are spared. In selecting the appropriate electron energy, the accuracy of the target tissue thickness measurement is critical. In contrast to other procedures applied in IOERT, the routine measurement method is completely traditional and approximate. In this work, a novel mechanism is proposed for measuring the target tissue thickness with an acceptable level of accuracy. An electronic system has been designed and manufactured with the capability of measuring the tissue thickness based on the recorded electron density under the target. The results indicated the possibility of thickness measurement with a maximum error of 2 mm for 91.35% of the data. Aside from the system's limitation in estimating the thickness of the 5 mm phantom, the maximum error was 1 mm for 88.94% of the data.
iGen: An automated generator of simplified models with provable error bounds.
NASA Astrophysics Data System (ADS)
Tang, D.; Dobbie, S.
2009-04-01
Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results are presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led on to work, currently underway, to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
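iGen's bound derivation is symbolic, but the flavor of a provable error bound can be conveyed with a toy simplification: replace sin(x) by x and carry the Taylor remainder as a rigorous bound. This illustrates the concept only, not iGen's method.

```python
import math

# Simplified model: sin(x) ~= x. The Taylor remainder gives a provable bound
# |sin(x) - x| <= x**3 / 6 on [0, xmax].
def simplified(x: float) -> float:
    return x

def error_bound(xmax: float) -> float:
    return xmax**3 / 6.0

xmax = 0.1
worst = max(abs(math.sin(x) - simplified(x))
            for x in (i * xmax / 100 for i in range(101)))
print(f"provable bound: {error_bound(xmax):.2e}, observed worst: {worst:.2e}")
```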
Lee, Posen; Lu, Wen-Shian; Liu, Chin-Hsuan; Lin, Hung-Yu; Hsieh, Ching-Lin
2017-12-08
The d2 Test of Attention (D2) is a commonly used measure of selective attention for patients with schizophrenia. However, its test-retest reliability and minimal detectable change (MDC) are unknown in patients with schizophrenia, limiting its utility in both clinical and research settings. The aim of the present study was to examine the test-retest reliability and MDC of the D2 in patients with schizophrenia. A rater administered the D2 to 108 patients with schizophrenia twice at a 1-month interval. Test-retest reliability was determined through calculation of the intra-class correlation coefficient (ICC). We also carried out Bland-Altman analysis, which included a scatter plot of the differences between test and retest against their mean. Systematic biases were evaluated by use of a paired t-test. The ICCs for the D2 ranged from 0.78 to 0.94. The MDCs (MDC%) of the seven subscores were 102.3 (29.7), 19.4 (85.0), 7.2 (94.6), 21.0 (69.0), 104.0 (33.1), 105.0 (35.8), and 7.8 (47.8), which represented limited-to-acceptable random measurement error. Trends in the Bland-Altman plots of the omissions (E1), commissions (E2), and errors (E) were noted, indicating that the data were heteroscedastic. According to the results, the D2 had good test-retest reliability, especially in the TN, TN-E, and CP scores. For further research, finding a way to improve the administration procedure to reduce random measurement error would be important for the E1, E2, E, and FR subscores.
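MDC values of this kind are conventionally derived from the ICC and the sample standard deviation via the standard error of measurement: SEM = SD * sqrt(1 - ICC), MDC95 = 1.96 * sqrt(2) * SEM. A sketch with illustrative numbers, not the study's data:

```python
import math

def mdc95(sd: float, icc: float) -> float:
    sem = sd * math.sqrt(1.0 - icc)          # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem       # 95% minimal detectable change

print(f"MDC95 = {mdc95(sd=40.0, icc=0.90):.1f}")  # hypothetical subscore SD/ICC
```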
Gopal, S; Do, T; Pooni, J S; Martinelli, G
2014-03-01
The Mostcare monitor is a non-invasive cardiac output monitor. It has been well validated in cardiac surgical patients, but there is limited evidence on its use in patients with severe sepsis and septic shock. The study included the first 22 consecutive patients with severe sepsis and septic shock in whom flotation of a pulmonary artery catheter was deemed necessary to guide clinical management. Cardiac output measurements, including cardiac output, cardiac index and stroke volume, were simultaneously calculated and recorded from a thermodilution pulmonary artery catheter and from the Mostcare monitor. The two methods of measuring cardiac output were compared by Bland-Altman statistics and linear regression analysis. A percentage error of less than 30% was defined as acceptable for this study. Bland-Altman analysis for cardiac output showed a bias of 0.31 L.min-1, precision (=SD) of 1.97 L.min-1 and a percentage error of 62.54%. For cardiac index, the bias was 0.21 L.min-1.m-2, precision 1.10 L.min-1.m-2 and percentage error 64%. For stroke volume, the bias was 5 mL, precision 24.46 mL and percentage error 70.21%. Linear regression produced a correlation coefficient r2 for cardiac output, cardiac index, and stroke volume of 0.403, 0.306, and 0.3 respectively. Compared to thermodilution cardiac output, cardiac output measurements obtained from the Mostcare monitor have an unacceptably high error rate. The Mostcare monitor proved to be an unreliable device for measuring cardiac output in patients with severe sepsis and septic shock in an intensive care unit.
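The percentage error criterion applied here is conventionally computed from the Bland-Altman statistics as 1.96 times the SD of the paired differences divided by the mean cardiac output; a sketch with synthetic paired readings, not the study's measurements:

```python
import numpy as np

thermodilution = np.array([4.8, 5.6, 6.2, 7.1, 5.0, 6.8])   # L/min
test_monitor   = np.array([4.2, 6.3, 5.1, 8.0, 5.9, 6.0])   # L/min

diff = test_monitor - thermodilution
bias = diff.mean()
precision = diff.std(ddof=1)                     # SD of the differences
mean_co = np.concatenate([thermodilution, test_monitor]).mean()
pct_error = 1.96 * precision / mean_co * 100

print(f"bias={bias:.2f} L/min, precision={precision:.2f}, PE={pct_error:.1f}%")
print("acceptable" if pct_error < 30 else "unacceptable")
```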
Ultrasound transducer function: annual testing is not sufficient.
Mårtensson, Mattias; Olsson, Mats; Brodin, Lars-Åke
2010-10-01
The objective was to follow up the study 'High incidence of defective ultrasound transducers in use in routine clinical practice' and evaluate whether annual testing is sufficient to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level. A total of 299 transducers were tested in 13 clinics at five hospitals in the Stockholm area. Approximately 7000-15,000 ultrasound examinations are carried out at these clinics every year. The transducers tested in the study had been tested and classified as fully operational 1 year before and had since been in normal use in routine clinical practice. The transducers were tested with the Sonora FirstCall Test System. There were 81 (27.1%) defective transducers found, giving a 95% confidence interval ranging from 22.1 to 32.1%. The most common transducer errors were 'delamination' of the ultrasound lens and 'break in the cable', which together constituted 82.7% of all transducer errors found. The highest error rate was found at the radiological clinics, with a mean error rate of 36.0%. There was a significant difference in error rate between the two observed ways in which the clinics handled the transducers. There was no significant difference in the error rates of the transducer brands or transducer models. Annual testing is not sufficient to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level, and it is strongly advisable to create a user routine that minimizes the handling of the transducers.
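The quoted confidence interval can be reproduced with a normal approximation to the binomial proportion; a quick check:

```python
import math

defective, total = 81, 299
p = defective / total
half_width = 1.96 * math.sqrt(p * (1 - p) / total)
print(f"{100*p:.1f}% (95% CI {100*(p-half_width):.1f}-{100*(p+half_width):.1f}%)")
# -> 27.1% (95% CI 22.1-32.1%), matching the reported interval
```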
ERIC Educational Resources Information Center
Pankow, Lena; Kaiser, Gabriele; Busse, Andreas; König, Johannes; Blömeke, Sigrid; Hoth, Jessica; Döhrmann, Martina
2016-01-01
The paper presents results from a computer-based assessment in which 171 early career mathematics teachers from Germany were asked to anticipate typical student errors on a given mathematical topic and identify them under time constraints. Fast and accurate perception and knowledge-based judgments are widely accepted characteristics of teacher…
NASA Technical Reports Server (NTRS)
Genge, Gary G.
1991-01-01
The probabilistic design approach currently receiving attention for structural failure modes has been adapted for obtaining measured bearing wear limits in the Space Shuttle Main Engine high-pressure oxidizer turbopump. With the development of shaft microtravel measurements to determine bearing health, an acceptance limit was needed that protects against all known failure modes yet is not overly conservative. This acceptance limit has been successfully determined using probabilistic descriptions of preflight hardware geometry, empirical bearing wear data, mission requirements, and measurement tool precision as input to a Monte Carlo simulation. The result of the simulation is a frequency distribution of failures as a function of preflight acceptance limits. When the distribution is converted into a reliability curve, a conscious risk management decision is made concerning the acceptance limit.
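A toy Monte Carlo in the spirit of the described approach: sample uncertain inputs, simulate a mission, and tabulate failure frequency versus the preflight acceptance limit. All distributions and thresholds below are invented placeholders, not SSME data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
preflight_wear = rng.normal(20.0, 4.0, n)             # true preflight wear (um)
measurement_err = rng.normal(0.0, 2.0, n)             # tool precision
mission_wear = rng.lognormal(mean=2.0, sigma=0.4, size=n)
failure_threshold = 60.0                              # wear at which bearing fails

true_end_wear = preflight_wear + mission_wear
for limit in (25.0, 30.0, 35.0):
    accepted = (preflight_wear + measurement_err) <= limit   # passes acceptance
    p_fail = np.mean(true_end_wear[accepted] > failure_threshold)
    print(f"limit {limit:.0f} um: accept {accepted.mean():.1%}, "
          f"failure prob {p_fail:.2e}")
```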
Total energy based flight control system
NASA Technical Reports Server (NTRS)
Lambregts, Antonius A. (Inventor)
1985-01-01
An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error, and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode the outer-loop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
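The total-energy idea can be sketched compactly: the sum of flight path angle error and normalized acceleration error drives thrust, while their difference drives the elevator. Gains, signs, and names below are illustrative placeholders, not the patented control law.

```python
G = 9.81  # m/s^2

def tecs_commands(gamma_cmd, gamma, vdot_cmd, vdot,
                  k_thrust=1.0, k_elev=1.0):
    gamma_err = gamma_cmd - gamma                 # flight path angle error (rad)
    accel_err = (vdot_cmd - vdot) / G             # normalized acceleration error
    energy_rate_err = gamma_err + accel_err       # total energy rate error
    distribution_err = gamma_err - accel_err      # energy rate distribution error
    thrust_cmd = k_thrust * energy_rate_err       # throttle manages total energy
    elevator_cmd = k_elev * distribution_err      # elevator trades speed vs. path
    return thrust_cmd, elevator_cmd

print(tecs_commands(gamma_cmd=0.05, gamma=0.02, vdot_cmd=0.0, vdot=0.5))
```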
Keyworth, Chris; Hart, Jo; Thoong, Hong; Ferguson, Jane; Tully, Mary
2017-08-01
Although prescribing of medication in hospitals is rarely an error-free process, prescribers receive little feedback on their mistakes and ways to change future practices. Audit and feedback interventions may be an effective approach to modifying the clinical practice of health professionals, but these may pose logistical challenges when used in hospitals. Moreover, such interventions are often labor intensive. Consequently, there is a need to develop effective and innovative interventions to overcome these challenges and to improve the delivery of feedback on prescribing. Implementation intentions, which have been shown to be effective in changing behavior, link critical situations with an appropriate response; however, these have rarely been used in the context of improving prescribing practices. Semistructured qualitative interviews were conducted to evaluate the acceptability and feasibility of providing feedback on prescribing errors via MyPrescribe, a mobile-compatible website informed by implementation intentions. Data relating to 200 prescribing errors made by 52 junior doctors were collected by 11 hospital pharmacists. These errors were populated into MyPrescribe, where prescribers were able to construct their own personalized action plans. Qualitative interviews with a subsample of 15 junior doctors were used to explore issues regarding feasibility and acceptability of MyPrescribe and their experiences of using implementation intentions to construct prescribing action plans. Framework analysis was used to identify prominent themes, with findings mapped to the behavioral components of the COM-B model (capability, opportunity, motivation, and behavior) to inform the development of future interventions. MyPrescribe was perceived to be effective in providing opportunities for critical reflection on prescribing errors and to complement existing training (such as junior doctors' e-portfolio). The participants were able to provide examples of how they would use "If-Then" plans for patient management. Technology, as opposed to other methods of learning (eg, traditional "paper based" learning), was seen as a positive advancement for continued learning. MyPrescribe was perceived as an acceptable and feasible learning tool for changing prescribing practices, with participants suggesting that it would make an important addition to medical prescribers' training in reflective practice. MyPrescribe is a novel theory-based technological innovation that provides the platform for doctors to create personalized implementation intentions. Applying the COM-B model allows for a more detailed understanding of the perceived mechanisms behind prescribing practices and the ways in which interventions aimed at changing professional practice can be implemented.
Scheduling periodic jobs using imprecise results
NASA Technical Reports Server (NTRS)
Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay
1987-01-01
One approach to avoiding timing faults in hard real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria for type C jobs are discussed.
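A minimal sketch of the imprecise-computation idea for type N jobs: mandatory parts must complete, leftover budget is spread over optional parts, and unserved optional time counts as error. The greedy rule here is a toy, not one of the paper's three algorithms.

```python
def schedule(jobs, budget):
    # jobs: list of (name, mandatory_time, optional_time)
    mandatory_total = sum(m for _, m, _ in jobs)
    assert mandatory_total <= budget, "infeasible: mandatory work exceeds budget"
    slack = budget - mandatory_total
    allocation, errors = {}, {}
    # Toy heuristic: give leftover time to jobs with the most optional work.
    for name, m, o in sorted(jobs, key=lambda j: j[2], reverse=True):
        extra = min(o, slack)
        slack -= extra
        allocation[name] = m + extra
        errors[name] = o - extra          # unserved optional time = error
    return allocation, sum(errors.values()) / len(jobs)

jobs = [("A", 2, 4), ("B", 3, 2), ("C", 1, 5)]
alloc, avg_error = schedule(jobs, budget=10)
print(alloc, f"average error = {avg_error:.2f}")
```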
NASA Technical Reports Server (NTRS)
Hinrichs, C. A.
1974-01-01
A digital simulation is presented for a candidate modem in a modeled atmospheric scintillation environment with Doppler, Doppler rate, and signal attenuation typical of the radio link conditions for an outer planets atmospheric entry probe. The results indicate that the signal acquisition characteristics and the channel error rate are acceptable for the system requirements of the radio link. The simulation also outputs data for calculating other error statistics and a quantized symbol stream from which error correction decoding can be analyzed.
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances, where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL ∼ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
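Given estimates of PL at several distances, the decay rate α(p) can be recovered by a linear fit in log space; the PL values below are synthetic placeholders consistent with the quoted exponential form, not the paper's data.

```python
import numpy as np

d = np.array([6, 8, 10, 12, 14, 16, 18, 20])
noise = 1 + 0.05 * np.random.default_rng(3).normal(size=d.size)
p_logical = 0.5 * np.exp(-0.45 * d) * noise      # synthetic PL ~ exp(-alpha*d)

# Fit log PL = -alpha * d + const.
slope, intercept = np.polyfit(d, np.log(p_logical), deg=1)
print(f"alpha(p) ~ {-slope:.3f}")                # recovers ~0.45
```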
Evaluation of dual-tip micromanometers during 21-day implantation in goats
NASA Technical Reports Server (NTRS)
Reister, C. A.; Koenig, S. C.; Schaub, J. D.; Ewert, D. L.; Swope, R. D.; Latham, R. D.; Fanton, J. W.; Convertino, V. A. (Principal Investigator)
1998-01-01
Investigative research efforts using a cardiovascular model required the determination of central circulatory haemodynamic and arterial system parameters for the evaluation of cardiovascular performance. These calculations required continuous beat-to-beat measurement of pressure within the four chambers of the heart and great vessels. Sensitivity and offset drift, longevity, and sources of error for eight 3F dual-tipped micromanometers were determined during 21 days of implantation in goats. Subjects were instrumented with pairs of chronically implanted fluid-filled access catheters in the left and right ventricles, through which dual-tipped (test) micromanometers were chronically inserted and single-tip (standard) micromanometers were acutely inserted. Acutely inserted sensors were calibrated daily and measured pressures were compared in vivo to the chronically inserted sensors. Comparison of the pre- and post-gain calibration of the chronically inserted sensors showed a mean sensitivity drift of 1.0 +/- 0.4% (99% confidence, n = 9 sensors) and mean offset drift of 5.0 +/- 1.5 mmHg (99% confidence, n = 9 sensors). Potential sources of error for these drifts were identified, and included measurement system inaccuracies, temperature drift, hydrostatic column gradients, and dynamic pressure changes. Based upon these findings, we determined that these micromanometers may be chronically inserted in high-pressure chambers for up to 17 days with an acceptable error, but should be limited to acute (hours) insertions in low-pressure applications.
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemons, T. G.; Riddell, W. T.; Ingraffea, A. R.; Wawrzynek, P. A.
1994-01-01
The objective was to evaluate NASCRAC (trademark) version 2.0, a second generation fracture analysis code, for verification and validation. NASCRAC was evaluated using a combination of comparisons to the literature, closed-form solutions, numerical analyses, and tests. Several limitations and minor errors were detected. Additionally, a number of major flaws were discovered. These major flaws were generally due to application of a specific method or theory, not due to programming logic. Results are presented for the following program capabilities: K versus a, J versus a, crack opening area, life calculation due to fatigue crack growth, tolerable crack size, proof test logic, tearing instability, creep crack growth, crack transitioning, crack retardation due to overloads, and elastic-plastic stress redistribution. It is concluded that the code is an acceptable fracture tool for K solutions of simplified geometries, for a limited number of J and crack opening area solutions, and for fatigue crack propagation with the Paris equation and constant amplitude loads when the Paris equation is applicable.
Measuring systems of hard to get objects: problems with analysis of measurement results
NASA Astrophysics Data System (ADS)
Gilewska, Grazyna
2005-02-01
The problem of limited access to the metrological parameters of objects arises in many measurements, especially for biological objects, whose parameters are very often determined on the basis of indirect research. Random components predominate in the measurement results when access to the measured object is very limited. Every measuring process is subject to conditions that limit the ways it can be adjusted (e.g., increasing the number of measurement repetitions to decrease the random limiting error). These may be temporal or financial limitations or, in the case of a biological object, the small volume of the sample, the influence of the measuring tool and observer on the object, or fatigue effects, e.g., in a patient. Taking these difficulties into consideration, the author worked out and checked the practical application of methods for the rejection of outlying observations and, next, an innovative method for eliminating measured data with excess variance, in order to decrease the standard deviation of the mean of the measured data given a limited amount of data and an accepted level of confidence. The elaborated methods were verified on measurement results for knee-joint space width obtained from radiographs. Measurements were carried out indirectly on digital images of the radiographs. The results confirmed the legitimacy of the elaborated methodology and measurement procedures. Such a methodology is especially important when standard approaches do not bring the expected effects.
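As a loose illustration of outlier rejection of the kind alluded to above (not the author's exact procedure), a simple iterative k-sigma rule followed by the standard error of the mean; the threshold and data are chosen for illustration.

```python
import numpy as np

def reject_outliers(x, k=2.0):
    """Iteratively drop points more than k sample SDs from the mean."""
    x = np.asarray(x, dtype=float)
    while True:
        mean, sd = x.mean(), x.std(ddof=1)
        keep = np.abs(x - mean) <= k * sd
        if keep.all():
            return x
        x = x[keep]

data = np.array([4.1, 4.3, 4.2, 4.4, 4.2, 6.9, 4.3])   # mm, one gross error
clean = reject_outliers(data)
print(clean, f"SEM = {clean.std(ddof=1) / np.sqrt(clean.size):.3f} mm")
```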
5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.
Code of Federal Regulations, 2010 CFR
2010-01-01
§ 1605.22 Claims for correction of Board or TSP record keeper errors; time limitations. (a) Filing claims... after that time, the Board or TSP record keeper may use its sound discretion in deciding whether to...
NASA Astrophysics Data System (ADS)
Clarke, John R.; Southerland, David
1999-07-01
Semi-closed circuit underwater breathing apparatus (UBA) provide a constant flow of mixed gas containing oxygen and nitrogen or helium to a diver. However, as a diver's work rate and metabolic oxygen consumption vary, the oxygen percentages within the UBA can change dramatically. Hence, even a resting diver can become hyperoxic and be at risk of oxygen-induced seizures. Conversely, a hard-working diver can become hypoxic and lose consciousness. Unfortunately, current semi-closed UBA do not contain oxygen monitors. We describe a simple oxygen monitoring system designed and prototyped at the Navy Experimental Diving Unit. The main monitor components include a PIC microcontroller, analog-to-digital converter, bicolor LED, and oxygen sensor. The LED, affixed to the diver's mask, is steady green if the oxygen partial pressure is within predefined acceptable limits. A more advanced monitor with a depth sensor and additional computational circuitry could be used to estimate metabolic oxygen consumption. The computational algorithm uses the oxygen partial pressure and the diver's depth to compute oxygen consumption using the steady-state solution of the differential equation describing oxygen concentrations within the UBA. Consequently, dive transients induce errors in the estimate. To evaluate these errors, we used a computer simulation of semi-closed circuit UBA dives to generate transient-rich data as input to the estimation algorithm. A step change in simulated oxygen consumption elicits a monoexponential change in the estimate with a time constant of 5 to 10 minutes. Methods for predicting error and providing a probable error indication to the diver are presented.
Semantic Typicality Effects in Acquired Dyslexia: Evidence for Semantic Impairment in Deep Dyslexia.
Riley, Ellyn A; Thompson, Cynthia K
2010-06-01
BACKGROUND: Acquired deep dyslexia is characterized by impairment in grapheme-phoneme conversion and production of semantic errors in oral reading. Several theories have attempted to explain the production of semantic errors in deep dyslexia, some proposing that they arise from impairments in both grapheme-phoneme and lexical-semantic processing, and others proposing that such errors stem from a deficit in phonological production. Whereas both views have gained some acceptance, the limited evidence available does not clearly eliminate the possibility that semantic errors arise from a lexical-semantic input processing deficit. AIMS: To investigate semantic processing in deep dyslexia, this study examined the typicality effect in deep dyslexic individuals, phonological dyslexic individuals, and controls using an online category verification paradigm. This task requires explicit semantic access without speech production, focusing observation on semantic processing from written or spoken input. METHODS & PROCEDURES: To examine the locus of semantic impairment, the task was administered in visual and auditory modalities with reaction time as the primary dependent measure. Nine controls, six phonological dyslexic participants, and five deep dyslexic participants completed the study. OUTCOMES & RESULTS: Controls and phonological dyslexic participants demonstrated a typicality effect in both modalities, while deep dyslexic participants did not demonstrate a typicality effect in either modality. CONCLUSIONS: These findings suggest that deep dyslexia is associated with a semantic processing deficit. Although this does not rule out the possibility of concomitant deficits in other modules of lexical-semantic processing, this finding suggests a direction for treatment of deep dyslexia focused on semantic processing.
Older Adults' Acceptance of Activity Trackers
Preusse, Kimberly C.; Mitzner, Tracy L.; Fausset, Cara Bailey; Rogers, Wendy A.
2016-01-01
Objective: To assess the usability and acceptance of activity tracking technologies by older adults. Method: First in our multi-method approach, we conducted heuristic evaluations of two activity trackers that revealed potential usability barriers to acceptance. Next, questionnaires and interviews were administered to 16 older adults (M_age = 70, SD_age = 3.09, range 65-75) before and after a 28-day field study to understand facilitators and additional barriers to acceptance. These measurements were supplemented with diary and usage data and assessed if and why users overcame usability issues. Results: The heuristic evaluation revealed usability barriers in System Status Visibility; Error Prevention; and Consistency and Standards. The field study revealed additional barriers (e.g., accuracy, format) and acceptance facilitators (e.g., goal-tracking, usefulness, encouragement). Discussion: The acceptance of wellness management technologies, such as activity trackers, may be increased by addressing acceptance barriers during deployment (e.g., providing tutorials on features that were challenging, communicating usefulness).
Imbery, Terence A; Diaz, Nicholas; Greenfield, Kristy; Janus, Charles; Best, Al M
2016-10-01
Preclinical fixed prosthodontics is taught by Department of Prosthodontics faculty members at Virginia Commonwealth University School of Dentistry; however, 86% of all clinical cases in academic year 2012 were staffed by faculty members from the Department of General Practice. The aims of this retrospective study were to quantify the quality of impressions, accuracy of laboratory work authorizations, and most common errors and to determine if there were differences between the rate of errors in cases supervised by the prosthodontists and the general dentists. A total of 346 Fixed Prosthodontic Laboratory Tracking Sheets for the 2012 academic year were reviewed. The results showed that, overall, 73% of submitted impressions were acceptable at initial evaluation, 16% had to be poured first and re-evaluated for quality prior to pindexing, 7% had multiple impressions submitted for transfer dies, and 4% were rejected for poor quality. There were higher acceptance rates for impressions and work authorizations for cases staffed by prosthodontists than by general dentists, but the differences were not statistically significant (p=0.0584 and p=0.0666, respectively). Regarding the work authorizations, 43% overall did not provide sufficient information or had technical errors that delayed prosthesis fabrication. The most common errors were incorrect mountings, absence of solid casts, inadequate description of margins for porcelain fused to metal crowns, inaccurate die trimming, and margin marking. The percentages of errors in cases supervised by general dentists and prosthodontists were similar for 17 of the 18 types of errors identified; only for margin description was the percentage of errors statistically significantly higher for general dentist-supervised than prosthodontist-supervised cases. These results highlighted the ongoing need for faculty development and calibration to ensure students receive the highest quality education from all faculty members teaching fixed prosthodontics.
Langer, Thorsten; Martinez, William; Browning, David M; Varrin, Pamela; Sarnoff Lee, Barbara; Bell, Sigall K
2016-08-01
Despite growing interest in engaging patients and families (P/F) in patient safety education, little is known about how P/F can best contribute. We assessed the feasibility and acceptability of a patient-teacher medical error disclosure and prevention training model. We developed an educational intervention bringing together interprofessional clinicians with P/F from hospital advisory councils to discuss error disclosure and prevention. Patient focus groups and orientation sessions informed curriculum and assessment design. A pre-post survey with qualitative and quantitative questions was used to assess P/F and clinician experiences and attitudes about collaborative safety education, including participant hopes, fears, perceived value of the learning experience, and challenges. Responses to open-ended questions were coded according to principles of content analysis. P/F and clinicians hoped to learn about each other's perspectives, communication skills and patient empowerment strategies. Before the intervention, both groups worried about power dynamics dampening effective interaction. Clinicians worried that P/F would learn about their fallibility, while P/F were concerned about clinicians' jargon and defensive posturing. Following workshops, clinicians valued patients' direct feedback, communication strategies for error disclosure and a 'real' learning experience. P/F appreciated clinicians' accountability and insights into how medical errors affect clinicians. Half of the participants found nothing challenging; among the remainder, clinicians cited emotions and the enormity of 'culture change', while P/F commented on medical jargon and a desire for more time. Patients and clinicians found the experience valuable. Recommendations about how to develop a patient-teacher programme in patient safety are provided. An educational paradigm that includes patients as teachers and collaborative learners with clinicians in patient safety is feasible, valued by clinicians and P/F, and promising for P/F-centred medical error disclosure and prevention training.
A cloud medication safety support system using QR code and Web services for elderly outpatients.
Tseng, Ming-Hseng; Wu, Hui-Ching
2014-01-01
Drugs are an important part of disease treatment, but medication errors happen frequently and have significant clinical and financial consequences. The prevalence of prescription medication use among the ambulatory adult population increases with advancing age. Because of the global aging of society, outpatients need improved medication safety even more than inpatients. The elderly with multiple chronic conditions face the complex task of medication management. To reduce medication errors for elderly outpatients with chronic diseases, a cloud medication safety support system is designed, demonstrated and evaluated. The proposed system is composed of a three-tier architecture: the front-end tier, the mobile tier and the cloud tier. The mobile tier hosts the personalized medication safety support application on Android platforms, which provides primary functions including reminders for medication, assistance with pill-dispensing, recording of medications, position of medications and notices of forgotten medications for elderly outpatients. Finally, the hybrid technology acceptance model is employed to understand the intention and satisfaction level of potential users of this mobile medication safety support application. The result of the system acceptance testing indicates that this developed system, implementing patient-centered services, is highly accepted by the elderly. This proposed m-health system could assist elderly outpatients' homecare in preventing medication errors and improving their medication safety.
Development of the Computer-Adaptive Version of the Late-Life Function and Disability Instrument
Tian, Feng; Kopits, Ilona M.; Moed, Richard; Pardasaney, Poonam K.; Jette, Alan M.
2012-01-01
Background. Having psychometrically strong disability measures that minimize response burden is important in the assessment of older adults. Methods. Using the original 48 items from the Late-Life Function and Disability Instrument and newly developed items, a 158-item Activity Limitation and a 62-item Participation Restriction item pool were developed. The item pools were administered to a convenience sample of 520 community-dwelling adults 60 years or older. Confirmatory factor analysis and item response theory were employed to identify content structure, calibrate items, and build the computer-adaptive tests (CATs). We evaluated real-data simulations of 10-item CAT subscales. We collected data from 102 older adults to validate the 10-item CATs against the Veterans Short Form-36 and assessed test-retest reliability in a subsample of 57 subjects. Results. Confirmatory factor analysis revealed a bifactor structure, and multidimensional item response theory was used to calibrate an overall Activity Limitation Scale (141 items) and an overall Participation Restriction Scale (55 items). Fit statistics were acceptable (Activity Limitation: comparative fit index = 0.95, Tucker-Lewis index = 0.95, root mean square error of approximation = 0.03; Participation Restriction: comparative fit index = 0.95, Tucker-Lewis index = 0.95, root mean square error of approximation = 0.05). Correlations of the 10-item CATs with the full item banks were substantial (Activity Limitation: r = .90; Participation Restriction: r = .95). Test-retest reliability estimates were high (Activity Limitation: r = .85; Participation Restriction: r = .80). The strength and pattern of correlations with Veterans Short Form-36 subscales were as hypothesized. Each CAT, on average, took 3.56 minutes to administer. Conclusions. The Late-Life Function and Disability Instrument CATs demonstrated strong reliability, validity, accuracy, and precision. The Late-Life Function and Disability Instrument CAT can achieve psychometrically sound disability assessment in older persons while reducing respondent burden. Further research is needed to assess their ability to measure change in older adults.
Determination and evaluation of acceptable force limits in single-digit tasks.
Nussbaum, Maury A; Johnson, Hope
2002-01-01
Acceptable limits derived from psychophysical methodologies have been proposed, measured, and employed in a range of applications. There is little existing work, however, on such limits for single-digit exertions and relatively limited evidence on several fundamental issues related to data collection and processing of a sequence of self-regulated exertion levels. An experimental study was conducted using 14 male and 10 female participants (age range 18-31 years) from whom maximal voluntary exertions and maximal acceptable limits (MALs) were obtained using the index finger and thumb. Moderate to high levels of consistency were found for both measures between sessions separated by one day. Single MAL values, determined from a time series of exertions, were equivalent across three divergent processing methods and between values obtained from 5- and 25-min samples. A critical interpretation of these and earlier results supports continued use of acceptable limits but also suggests that they should be used with some caution and not equated with safe limits. This research can be applied toward future development of exertion limits based on perceived acceptability.
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.
2002-01-01
A process task analysis effort was undertaken by Dynacs Inc. commencing in June 2002 under contract from NASA YA-D6. Funding was provided through NASA's Ames Research Center (ARC), Code M/HQ, and Industrial Engineering and Safety (IES). The John F. Kennedy Space Center (KSC) Engineering Development Contract (EDC) Task Order was 5SMA768. The scope of the effort was to conduct a Human Factors Process Failure Modes and Effects Analysis (HF PFMEA) of a hazardous activity and provide recommendations to eliminate or reduce the effects of errors caused by human factors. The Liquid Oxygen (LOX) Pump Acceptance Test Procedure (ATP) was selected for this analysis. The HF PFMEA table (see appendix A) provides an analysis of six major categories evaluated for this study. These categories include Personnel Certification, Test Procedure Format, Test Procedure Safety Controls, Test Article Data, Instrumentation, and Voice Communication. For each specific requirement listed in appendix A, the following topics were addressed: Requirement, Potential Human Error, Performance-Shaping Factors, Potential Effects of the Error, Barriers and Controls, Risk Priority Numbers, and Recommended Actions. This report summarizes findings and gives recommendations as determined by the data contained in appendix A. It also includes a discussion of technology barriers and challenges to performing task analyses, as well as lessons learned. The HF PFMEA table in appendix A recommends the use of accepted and required safety criteria in order to reduce the risk of human error. The items with the highest risk priority numbers should receive the greatest amount of consideration. Implementation of the recommendations will result in a safer operation for all personnel.
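The Risk Priority Numbers mentioned are conventionally the product of severity, occurrence, and detection ratings, each scored 1-10; a sketch with invented failure modes, not rows from the report's appendix:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """FMEA Risk Priority Number: higher values deserve attention first."""
    return severity * occurrence * detection

failure_modes = [
    ("skips procedure step", 8, 3, 4),   # hypothetical ratings
    ("misreads gauge",       6, 4, 2),
    ("mishears radio call",  7, 2, 5),
]
for name, s, o, d in sorted(failure_modes, key=lambda f: -rpn(*f[1:])):
    print(f"{name:<22} RPN={rpn(s, o, d)}")
```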
Saatchi, Masoud; Mohammadi, Golshan; Vali Sichani, Armita; Moshkforoush, Saba
2018-01-01
The aim of the present study was to evaluate the radiographic quality of root canal treatments (RCTs) performed by undergraduate clinical students of the Dental School of Isfahan University of Medical Sciences. In this cross-sectional study, records and periapical radiographs of 1200 root-filled teeth were randomly selected from the records of patients who had received RCTs in the Dental School of Isfahan University of Medical Sciences from 2013 to 2015. After excluding 416 records, the final sample consisted of 784 root-treated teeth (1674 root canals). Two variables, the length and the density of the root fillings, were examined. Moreover, the presence of ledges, foramen perforations, root perforations and fractured instruments was evaluated as procedural errors. Descriptive statistics were used for expressing the frequencies of criteria, and the chi-square test was used for comparing tooth types, tooth locations and academic level of students (P < 0.05). The frequency of root canals with acceptable filling was 54.1%. Overfilling was found in 11% of root canals, underfilling in 8.3% and inadequate density in 34.6%. No significant difference was found between the frequency of acceptable root fillings in the maxilla and mandible (P = 0.072). More acceptable fillings were found in the root canals of premolars (61.3%) than molars (51.3%) (P = 0.001). The frequency of procedural errors was 18.6%. Ledges were found in 12.5% of root canals, foramen perforation in 2%, root perforation in 2.4% and fractured instruments in 2%. Procedural errors were more frequent in the root canals of molars (22.5%) than the anterior teeth (12.3%) (P = 0.003) and the premolars (9.5%) (P < 0.001). The technical quality of RCTs performed by clinical students was not satisfactory, and the incidence of procedural errors was considerable.
Shanks, Orin C; Kelty, Catherine A; Oshiro, Robin; Haugland, Richard A; Madi, Tania; Brooks, Lauren; Field, Katharine G; Sivaganesan, Mano
2016-05-01
There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria proposed in this study should help scientists, managers, reviewers, and the public evaluate the technical quality of future findings against an established benchmark.
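The calibration-model performance measures listed (correlation coefficient, amplification efficiency) are conventionally derived from a standard-curve fit of Cq against log10 copies, with efficiency = 10^(-1/slope) - 1; a sketch with a hypothetical dilution series and placeholder acceptance thresholds, not the study's criteria:

```python
import numpy as np

log10_copies = np.array([1, 2, 3, 4, 5, 6], dtype=float)
cq = np.array([36.1, 32.8, 29.4, 26.0, 22.7, 19.3])   # hypothetical Cq values

slope, intercept = np.polyfit(log10_copies, cq, deg=1)
r = np.corrcoef(log10_copies, cq)[0, 1]
efficiency = 10 ** (-1.0 / slope) - 1.0               # 1.0 == 100% efficient

print(f"slope={slope:.3f} R^2={r**2:.4f} efficiency={efficiency:.1%}")
print("acceptable" if (0.90 <= efficiency <= 1.10 and r**2 >= 0.98)
      else "check curve")
```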
On the Limitations of Variational Bias Correction
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Mccarty, Will; Gelaro, Ronald
2018-01-01
Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed up together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud
2013-09-01
The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)
Wildeman, Maarten A; Zandbergen, Jeroen; Vincent, Andrew; Herdini, Camelia; Middeldorp, Jaap M; Fles, Renske; Dalesio, Otilia; van der Donk, Emile; Tan, I Bing
2011-08-08
Data collection by electronic medical record (EMR) systems has been proven to be helpful in data collection for scientific research and in improving healthcare. For a multi-centre trial in Indonesia and the Netherlands, a web-based system was selected to enable all participating centres to access data easily. This study assesses whether the introduction of a clinical trial data management service (CTDMS) composed of electronic case report forms (eCRFs) can result in effective data collection and treatment monitoring. Data items entered were automatically checked for inconsistencies when submitted online. The data were divided into primary and secondary data items. We analysed both the total number of errors and the change in error rate, for both primary and secondary items, over the first five months of the trial. In the first five months, 51 patients were entered. The primary data error rate was 1.6%, whilst that for secondary data was 2.7%, against acceptable error rates for analysis of 1% and 2.5%, respectively. The presented analysis shows that, five months after the introduction of the CTDMS, the primary and secondary data error rates reflect acceptable levels of data quality. Furthermore, these error rates were decreasing over time. The digital nature of the CTDMS, as well as the online availability of its data, gives fast and easy insight into adherence to treatment protocols. As such, the CTDMS can serve as a tool to train and educate medical doctors and can improve treatment protocols.
Sarmast, Nima D; Angelov, Nikola; Ghinea, Razvan; Powers, John M; Paravina, Rade D
The CIELab and CIEDE2000 coverage errors (ΔE*COV and ΔE′COV, respectively) of basic shades of different gingival shade guides and gingiva-colored restorative dental materials (n = 5) were calculated against a previously compiled database on healthy human gingiva. Data were analyzed using analysis of variance with the Tukey-Kramer multiple-comparison test (P < .05). A 50:50% acceptability threshold of 4.6 for ΔE* and 4.1 for ΔE′ was used to interpret the results. ΔE*COV/ΔE′COV ranged from 4.4/3.5 to 8.6/6.9. The majority of gingival shade guides and gingiva-colored restorative materials exhibited statistically significant coverage errors above the 50:50% acceptability threshold and uneven shade distribution.
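Coverage error of the kind computed above can be sketched as follows, assuming the usual definition: for each gingiva colour, find the closest shade tab in colour-difference terms, then average those minima. The L*a*b* values are hypothetical, and only the simpler CIELab ΔE*ab formula is shown (CIEDE2000 is considerably more involved).

```python
import numpy as np

def cielab_de(lab1: np.ndarray, lab2: np.ndarray) -> np.ndarray:
    """CIELab colour difference Delta E*ab between sets of L*a*b* values."""
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

# Hypothetical L*a*b* data: rows are colours.
gingiva = np.array([[45.0, 18.0, 12.0], [50.0, 22.0, 14.0], [40.0, 25.0, 10.0]])
shade_tabs = np.array([[44.0, 19.0, 11.0], [52.0, 20.0, 15.0]])

# For each gingiva sample, the best-matching tab; coverage error is the
# mean of these minimum colour differences across the gingiva database.
min_de = np.array([cielab_de(g, shade_tabs).min() for g in gingiva])
coverage_error = min_de.mean()
print(f"Delta E*_COV = {coverage_error:.2f}")
```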
The NEEDS Data Base Management and Archival Mass Memory System
NASA Technical Reports Server (NTRS)
Bailey, G. A.; Bryant, S. B.; Thomas, D. T.; Wagnon, F. W.
1980-01-01
A Data Base Management System and an Archival Mass Memory System are being developed that will have a 10^12-bit on-line and a 10^13-bit off-line storage capacity. The integrated system will accept packetized data from the data staging area at 50 Mbps, create a comprehensive directory, provide for file management, record the data, perform error detection and correction, accept user requests, retrieve the requested data files, and provide the data to multiple users at a combined rate of 50 Mbps. Stored and replicated data files will have a bit error rate of less than 10^-9 even after ten years of storage. The integrated system will be demonstrated to prove the technology late in 1981.
Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi
2016-11-01
Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods. Copyright © 2016 Elsevier Inc. All rights reserved.
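A minimal sketch of the percentage-error statistic referred to here, assuming the usual formulation (1.96 times the SD of the paired differences divided by the mean reference cardiac output); the paired values are hypothetical.

```python
import numpy as np

def percentage_error(test_co: np.ndarray, reference_co: np.ndarray) -> float:
    """Percentage error: 1.96 * SD of the paired differences divided by
    the mean reference cardiac output, expressed in percent."""
    diffs = test_co - reference_co
    return 100.0 * 1.96 * diffs.std(ddof=1) / reference_co.mean()

# Hypothetical paired cardiac output values (L/min): estimated vs. invasive.
est = np.array([4.2, 5.1, 3.8, 6.0, 4.9])
inv = np.array([4.0, 5.4, 4.1, 5.7, 5.2])
print(f"percentage error = {percentage_error(est, inv):.1f}%")
```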
System for NIS Forecasting Based on Ensembles Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-01-02
BMA-NIS is a package/library designed to be called by a script (e.g., Perl or Python). The software itself is written in R. It assists electric power delivery systems in planning resource availability and demand, based on historical data and current data variables. Net Interchange Schedule (NIS) is the algebraic sum of all energy scheduled to flow into or out of a balancing area during any interval. Accurate forecasts of NIS are important so that the Area Control Error (ACE) stays within an acceptable limit. To date, there are many approaches to forecasting NIS, but all of these are based on single models that can be sensitive to time-of-day and day-of-week effects.
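The model-combination step can be sketched as a weighted average of component forecasts, which is the form a Bayesian Model Averaging combination takes once the weights are known; the forecasts and weights below are hypothetical, and the actual BMA-NIS package estimates its weights from each model's recent predictive skill.

```python
import numpy as np

# Hypothetical NIS forecasts (MW) from three component models for one
# scheduling interval, and model weights of the kind Bayesian Model
# Averaging would assign (the weights must sum to 1).
forecasts = np.array([112.0, 98.0, 105.0])
weights = np.array([0.5, 0.2, 0.3])

# The combined forecast is the weight-averaged ensemble prediction.
bma_forecast = float(np.dot(weights, forecasts))
print(f"combined NIS forecast: {bma_forecast:.1f} MW")
```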
75 FR 7934 - Airworthiness Directives; McCauley Propeller Systems 1A103/TCM Series Propellers
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-23
This airworthiness directive requires eddy current inspections of the propeller hub, removal from service of propellers with cracks that do not meet acceptable limits, and rework of propellers with cracks that meet acceptable limits.
Liquid Medication Dosing Errors by Hispanic Parents: Role of Health Literacy and English Proficiency
Harris, Leslie M.; Dreyer, Benard; Mendelsohn, Alan; Bailey, Stacy C.; Sanders, Lee M.; Wolf, Michael S.; Parker, Ruth M.; Patel, Deesha A.; Kim, Kwang Youn A.; Jimenez, Jessica J.; Jacobson, Kara; Smith, Michelle; Yin, H. Shonna
2016-01-01
Objective: Hispanic parents in the US are disproportionately affected by low health literacy and limited English proficiency (LEP). We examined associations between health literacy, LEP, and liquid medication dosing errors in Hispanic parents. Methods: Cross-sectional analysis of data from a multisite randomized controlled experiment to identify best practices for the labeling/dosing of pediatric liquid medications (SAFE Rx for Kids study) at 3 urban pediatric clinics. Analyses were limited to Hispanic parents of children <8 years with health literacy and LEP data (n = 1126). Parents were randomized to 5 groups that varied by the pairing of units of measurement on the label/dosing tool. Each parent measured 9 doses (3 amounts: 2.5, 5, and 7.5 mL; using 3 tools: two syringes with 0.2 and 0.5 mL increments, and 1 cup) in random order. The dependent variable was a dosing error, defined as >20% deviation from the target dose. Predictor variables were health literacy (Newest Vital Sign; limited = 0-3, adequate = 4-6) and LEP (speaks English less than "very well"). Results: 83.1% of parents made dosing errors (mean (SD) errors/parent = 2.2 (1.9)). Parents with limited health literacy and LEP had the greatest odds of making a dosing error compared to English-proficient parents with adequate health literacy (% of trials with errors/parent = 28.8% vs. 12.9%; AOR = 2.2 [1.7-2.8]). Parents with limited health literacy who were English proficient were also more likely to make errors (% of trials with errors/parent = 18.8%; AOR = 1.4 [1.1-1.9]). Conclusion: Dosing errors are common among Hispanic parents; those with both LEP and limited health literacy are at particular risk. Further study is needed to examine how the redesign of medication labels and dosing tools could reduce literacy- and language-associated disparities in dosing errors. PMID:28477800
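A minimal sketch of the study's dosing-error criterion (a measured dose deviating from the target by more than 20%); the doses are hypothetical.

```python
def is_dosing_error(measured_ml: float, target_ml: float, tol: float = 0.20) -> bool:
    """Flag a trial as a dosing error when the measured dose deviates
    from the target by more than the tolerance (>20% in the study)."""
    return abs(measured_ml - target_ml) / target_ml > tol

print(is_dosing_error(6.2, 5.0))  # True: 24% over the 5 mL target
print(is_dosing_error(5.4, 5.0))  # False: 8% deviation
```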
The relationship between hand hygiene and health care-associated infection: it’s complicated
McLaws, Mary-Louise
2015-01-01
The reasoning that improved hand hygiene compliance contributes to the prevention of health care-associated infections is widely accepted. It is also accepted that high hand hygiene compliance alone cannot counter formidable risk factors such as older age, immunosuppression, admission to the intensive care unit, longer length of stay, and indwelling devices. When hand hygiene interventions are undertaken concurrently with other routine or special preventive strategies, there is a potential for these concurrent strategies to confound the effect of the hand hygiene program. The result may be an overestimation of the hand hygiene intervention unless the design of the intervention or the analysis controls for the potential confounders. Other epidemiologic principles that may also affect the result of a hand hygiene program include failure to consider measurement error in the content of the hand hygiene program and measurement error in compliance. Some epidemiological errors in hand hygiene programs aimed at reducing health care-associated infections are inherent and not easily controlled. Nevertheless, the inadvertent omission by authors to report these common epidemiological errors, including concurrent infection prevention strategies, suggests to readers that the effect of hand hygiene is greater than the sum of all infection prevention strategies. Worse still, this omission does not assist evidence-based practice. PMID:25678805
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
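The comparison described here can be sketched numerically by building an ensemble of scrambled quasi-random point sets and comparing the spread of integration errors against classical Monte Carlo; this uses SciPy's Sobol generator as a stand-in for the paper's ensemble construction, and the test integrand is hypothetical (its exact integral over the unit square is 1).

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)  # exact integral = 1
d, n, ensembles = 2, 256, 200

mc_err, qmc_err = [], []
for _ in range(ensembles):
    # Classical Monte Carlo point set.
    x_mc = rng.random((n, d))
    mc_err.append(f(x_mc).mean() - 1.0)
    # One member of an ensemble of scrambled quasi-random point sets.
    x_qmc = qmc.Sobol(d=d, scramble=True, seed=rng).random(n)
    qmc_err.append(f(x_qmc).mean() - 1.0)

# The QMC error distribution is typically much narrower than the
# Gaussian spread predicted by the Central Limit Theorem for MC.
print(f"MC  error spread: {np.std(mc_err):.2e}")
print(f"QMC error spread: {np.std(qmc_err):.2e}")
```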
[Analysis of drug-related problems in a tertiary university hospital in Barcelona (Spain)].
Ferrández, Olivia; Casañ, Borja; Grau, Santiago; Louro, Javier; Salas, Esther; Castells, Xavier; Sala, Maria
2018-05-07
To describe drug-related problems identified in hospitalized patients and to assess physicians' acceptance rate of pharmacists' recommendations. Retrospective observational study that included all drug-related problems detected in hospitalized patients during 2014-2015. Statistical analysis included a descriptive analysis of the data and a multivariate logistic regression to evaluate the association between the pharmacists' recommendation acceptance rate and the variables of interest. During the study period, 4587 drug-related problems were identified in 44,870 hospitalized patients. The main drug-related problems were prescription errors due to incorrect use of the computerized physician order entry (18.1%), inappropriate drug-drug combinations (13.3%), and dose adjustment for renal and/or hepatic function (11.5%). The acceptance rate of pharmacist therapy advice in evaluable cases was 81.0%. A medical versus surgical admitting department, specific types of intervention (addition of a new drug, drug discontinuation, and correction of a prescription error), and oral communication of the recommendation were associated with a higher acceptance rate. The results of this study allow areas to be identified in which to implement optimization strategies. These include training courses for physicians on the computerized physician order entry, on drugs that need dose adjustment in renal impairment, and on relevant drug interactions. Copyright © 2018 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
Human Reliability and the Cost of Doing Business
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2014-01-01
Human error cannot be defined unambiguously in advance of it happening; an action often becomes an error only after the fact. The same action can result in a tragic accident in one situation or be a heroic act given a more favorable outcome. People often forget that we employ humans in business and industry for their flexibility and capability to change when needed. In complex systems, operations are driven by the system specification and the system structure; people provide the flexibility to make it all work. Human error has been reported as being responsible for 60%-80% of failures, accidents, and incidents in high-risk industries. We do not have to accept that all human errors are inevitable. Through the use of some basic techniques, many potential human error events can be addressed. There are actions that can be taken to reduce the risk of human error.
P-value interpretation and alpha allocation in clinical trials.
Moyé, L A
1998-08-01
Although much value has been placed on type I error event probabilities in clinical trials, interpretive difficulties often arise that are directly related to clinical trial complexity. Deviations of the trial execution from its protocol, the presence of multiple treatment arms, and the inclusion of multiple end points complicate the interpretation of an experiment's reported alpha level. The purpose of this manuscript is to formulate the discussion of P values (and power for studies showing no significant differences) on the basis of the event whose relative frequency they represent. Experimental discordance (discrepancies between the protocol's directives and the experiment's execution) is linked to difficulty in alpha and beta interpretation. Mild experimental discordance leads to an acceptable adjustment for alpha or beta, while severe discordance results in their corruption. Finally, guidelines are provided for allocating type I error among a collection of end points in a prospectively designed, randomized controlled clinical trial. When considering secondary end point inclusion in clinical trials, investigators should increase the sample size to preserve the type I error rates at acceptable levels.
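A minimal sketch of prospective type I error allocation across end points, assuming a simple Bonferroni-style proportional split; the end points and weights are hypothetical.

```python
def allocate_alpha(weights: dict, total_alpha: float = 0.05) -> dict:
    """Split the family-wise type I error across end points in proportion
    to prospectively chosen weights (a Bonferroni-style allocation)."""
    total_weight = sum(weights.values())
    return {ep: total_alpha * w / total_weight for ep, w in weights.items()}

# Hypothetical trial: most alpha is reserved for the primary end point.
print(allocate_alpha({"primary": 4, "secondary_1": 0.5, "secondary_2": 0.5}))
# -> {'primary': 0.04, 'secondary_1': 0.005, 'secondary_2': 0.005}
```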
7 CFR 42.143 - Operating Characteristic (OC) curves for on-line sampling and inspection.
Code of Federal Regulations, 2010 CFR
2010-01-01
...=Number of sample units in a subgroup. T=Subgroup tolerance.L=Acceptance limit.S=Starting value. EC02SE91... ng=Number of sample units in a subgroup. T=Subgroup tolerance. L=Acceptance limit. S=Starting value... of sample units in a subgroup. T=Subgroup tolerance. L=Acceptance limit. S=Starting value. EC02SE91...
Evaluation of an Airborne Spacing Concept, On-Board Spacing Tool, and Pilot Interface
NASA Technical Reports Server (NTRS)
Swieringa, Kurt; Murdoch, Jennifer L.; Baxley, Brian; Hubbs, Clay
2011-01-01
The number of commercial aircraft operations is predicted to increase in the next ten years, creating a need for improved operational efficiency. Two areas believed to offer significant increases in efficiency are optimized profile descents and dependent parallel runway operations. It is envisioned that during both of these types of operations, flight crews will precisely space their aircraft behind preceding aircraft at air traffic control assigned intervals to increase runway throughput and maximize the use of existing infrastructure. This paper describes a human-in-the-loop experiment designed to study the performance of an onboard spacing algorithm and pilots ratings of the usability and acceptability of an airborne spacing concept that supports dependent parallel arrivals. Pilot participants flew arrivals into the Dallas Fort-Worth terminal environment using one of three different simulators located at the National Aeronautics and Space Administration s (NASA) Langley Research Center. Scenarios were flown using Interval Management with Spacing (IM-S) and Required Time of Arrival (RTA) control methods during conditions of no error, error in the forecast wind, and offset (disturbance) to the arrival flow. Results indicate that pilots delivered their aircraft to the runway threshold within +/- 3.5 seconds of their assigned arrival time and reported that both the IM-S and RTA procedures were associated with low workload levels. In general, pilots found the IM-S concept, procedures, speeds, and interface acceptable; with 92% of pilots rating the procedures as complete and logical, 218 out of 240 responses agreeing that the IM-S speeds were acceptable, and 63% of pilots reporting that the displays were easy to understand and displayed in appropriate locations. The 22 (out of 240) responses, indicating that the commanded speeds were not acceptable and appropriate occurred during scenarios containing wind error and offset error. Concerns cited included the occurrence of multiple speed changes within a short time period, speed changes required within twenty miles of the runway, and an increase in airspeed followed shortly by a decrease in airspeed. Within this paper, appropriate design recommendations are provided, and the need for continued, iterative human-centered design is discussed.
[Clinical economics: a concept to optimize healthcare services].
Porzsolt, F; Bauer, K; Henne-Bruns, D
2012-03-01
Clinical economics strives to support healthcare decisions by economic considerations. Making economic decisions does not mean saving costs but rather comparing the gained added value with the burden which has to be accepted. The necessary rules are offered in various disciplines, such as economy, epidemiology and ethics. Medical doctors have recognized these rules but are not applying them in daily clinical practice. This lacking orientation leads to preventable errors. Examples of these errors are shown for diagnosis, screening, prognosis and therapy. As these errors can be prevented by application of clinical economic principles the possible consequences for optimization of healthcare are discussed.
Hozo, Iztok; Schell, Michael J; Djulbegovic, Benjamin
2008-07-01
The absolute truth in research is unobtainable, as no evidence or research hypothesis is ever 100% conclusive. Therefore, all data and inferences can in principle be considered as "inconclusive." Scientific inference and decision-making need to take into account errors, which are unavoidable in the research enterprise. The errors can occur at the level of conclusions that aim to discern the truthfulness of research hypothesis based on the accuracy of research evidence and hypothesis, and decisions, the goal of which is to enable optimal decision-making under present and specific circumstances. To optimize the chance of both correct conclusions and correct decisions, the synthesis of all major statistical approaches to clinical research is needed. The integration of these approaches (frequentist, Bayesian, and decision-analytic) can be accomplished through formal risk:benefit (R:B) analysis. This chapter illustrates the rational choice of a research hypothesis using R:B analysis based on decision-theoretic expected utility theory framework and the concept of "acceptable regret" to calculate the threshold probability of the "truth" above which the benefit of accepting a research hypothesis outweighs its risks.
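One common decision-theoretic form of the threshold probability discussed here can be sketched as follows: accepting the hypothesis has higher expected utility than rejecting it once the probability of its truth exceeds R/(R + B). The risk and benefit values are hypothetical, and this simplified form omits the acceptable-regret refinement described in the chapter.

```python
def threshold_probability(risk: float, benefit: float) -> float:
    """Probability of the hypothesis being true above which accepting it
    has higher expected utility than rejecting it: p* = R / (R + B).
    Derivation: accept iff p*benefit > (1 - p)*risk."""
    return risk / (risk + benefit)

# Hypothetical: the benefit of acting on a true hypothesis is 3x the harm
# of acting on a false one, so acting is justified once p > 0.25.
print(threshold_probability(risk=1.0, benefit=3.0))  # 0.25
```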
Three calculations of free cortisol versus measured values in the critically ill.
Molenaar, Nienke; Groeneveld, A B Johan; de Jong, Margriet F C
2015-11-01
To investigate the agreement between calculated free cortisol levels, according to the widely applied Coolens and adjusted Södergård equations, and measured levels in the critically ill. A prospective study in a mixed intensive care unit. We consecutively included 103 patients with treatment-insensitive hypotension in whom an adrenocorticotropic hormone (ACTH) test (250 µg) was performed. Serum total and free cortisol (equilibrium dialysis), corticosteroid-binding globulin, and albumin were assessed. Free cortisol was estimated by the Coolens method (C) and two adjusted Södergård (S1 and S2) equations. Bland-Altman plots were made. The bias for absolute (t = 0, 30, and 60 min after ACTH injection) cortisol levels was 38, -24, and 41 nmol/L when the C, S1, and S2 equations were used, with 95% limits of agreement of -65 to 142, -182 to 135, and -57 to 139 nmol/L and percentage errors of 66, 85, and 64%, respectively. The bias for delta (peak minus baseline) cortisol was 14, -31, and 16 nmol/L, with 95% limits of agreement of -80 to 108, -157 to 95, and -74 to 105 nmol/L, and percentage errors of 107, 114, and 100% for the C, S1, and S2 equations, respectively. Calculated free cortisol levels have too high a bias and imprecision to allow for acceptable use in the critically ill. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
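A minimal sketch of the Bland-Altman quantities reported above (bias, 95% limits of agreement, and percentage error); the paired cortisol values are hypothetical.

```python
import numpy as np

def bland_altman(calculated: np.ndarray, measured: np.ndarray):
    """Bias, 95% limits of agreement, and percentage error for a
    method-comparison analysis of the kind reported above."""
    diffs = calculated - measured
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    pct_error = 100.0 * 1.96 * sd / measured.mean()
    return bias, loa, pct_error

# Hypothetical free cortisol (nmol/L): equation vs. equilibrium dialysis.
calc = np.array([110.0, 85.0, 150.0, 60.0, 95.0])
meas = np.array([90.0, 80.0, 120.0, 55.0, 100.0])
bias, loa, pe = bland_altman(calc, meas)
print(f"bias = {bias:.1f} nmol/L, LoA = ({loa[0]:.1f}, {loa[1]:.1f}), PE = {pe:.0f}%")
```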
NASA Astrophysics Data System (ADS)
Lisson, Jerold B.; Mounts, Darryl I.; Fehniger, Michael J.
1992-08-01
Localized wavefront performance analysis (LWPA) is a system that allows full utilization of the system optical transfer function (OTF) for the specification and acceptance of hybrid imaging systems. We show that LWPA dictates the correction of the wavefront errors with the greatest impact on critical imaging spatial frequencies. This is accomplished by generating an imaging performance map, analogous to a map of the optic pupil error, using a local OTF. The resulting performance map, a function of transfer-function spatial frequency, is directly relatable to the primary viewing condition of the end user. In addition to optimizing quality for the viewer, the system has the potential for improved matching of the optical and electronic bandpasses of the imager and for the development of more realistic acceptance specifications. The LWPA system generates a local optical quality factor (LOQF) in the form of a map analogous to that used for the presentation and evaluation of wavefront errors. In conjunction with the local phase transfer function (LPTF), it can be used for maximally efficient specification and correction of imaging-system pupil errors. The LOQF and LPTF are respectively equivalent to the global modulation transfer function (MTF) and phase transfer function (PTF) parts of the OTF. The LPTF is related to the difference of the averages of the errors in separated regions of the pupil.
Internet-Based Group Intervention for Ovarian Cancer Survivors: Feasibility and Preliminary Results
Kinner, Ellen M; Armer, Jessica S; McGregor, Bonnie A; Duffecy, Jennifer; Leighton, Susan; Corden, Marya E; Gauthier Mullady, Janine; Penedo, Frank J
2018-01-01
Background Development of psychosocial group interventions for ovarian cancer survivors has been limited. Drawing from elements of cognitive-behavioral stress management (CBSM), mindfulness-based stress reduction (MBSR), and acceptance and commitment therapy (ACT), we developed and conducted preliminary testing of an Internet-based group intervention tailored specifically to meet the needs of ovarian cancer survivors. The Internet-based platform facilitated home delivery of the psychosocial intervention to a group of cancer survivors for whom attending face-to-face programs could be difficult given their physical limitations and the small number of ovarian cancer survivors at any one treatment site. Objective The aim of this study was to develop, optimize, and assess the usability, acceptability, feasibility, and preliminary intended effects of an Internet-based group stress management intervention for ovarian cancer survivors delivered via a tablet or laptop. Methods In total, 9 ovarian cancer survivors provided feedback during usability testing. Subsequently, 19 survivors participated in 5 waves of field testing of the 10-week group intervention led by 2 psychologists. The group met weekly for 2 hours via an Internet-based videoconference platform. Structured interviews and weekly evaluations were used to elicit feedback on the website and intervention content. Before and after the intervention, measures of mood, quality of life (QOL), perceived stress, sleep, and social support were administered. Paired t tests were used to examine changes in psychosocial measures over time. Results Usability results indicated that participants (n=9) performed basic tablet functions quickly with no errors and performed website functions easily with a low frequency of errors. In the field trial (n=19), across 5 groups, the 10-week intervention was well attended. Perceived stress (P=.03) and ovarian cancer-specific QOL (P=.01) both improved significantly during the course of the intervention. Trends toward decreased distress (P=.18) and greater physical (P=.05) and functional well-being (P=.06) were also observed. Qualitative interviews revealed that the most common obstacles participants experienced were technical issues and the time commitment for practicing the techniques taught in the program. Participants reported that the intervention helped them to overcome a sense of isolation and that they appreciated the ability to participate at home. Conclusions An Internet-based group intervention tailored specifically for ovarian cancer survivors is highly usable and acceptable with moderate levels of feasibility. Preliminary psychosocial outcomes indicate decreases in perceived stress and improvements in ovarian cancer-specific QOL following the intervention. A randomized clinical trial is needed to demonstrate the efficacy of this promising intervention for ovarian cancer survivors. PMID:29335233
Bedini, José Luis; Wallace, Jane F.; Petruschke, Thorsten; Pardo, Scott
2015-01-01
Background: Self-monitoring of blood glucose is crucial for the effective self-management of diabetes. The present study evaluated the accuracy of the Contour® XT blood glucose monitoring system (BGMS) compared to the reference method in a large multicenter study under routine lab conditions at each hospital site. Methods: This study was conducted at 21 leading hospitals in Spain using leftover whole blood samples (n = 2100). Samples were tested with the BGMS using 1 commercial strip lot and the local laboratory hexokinase method. BGMS accuracy was assessed and results were compared to ISO 15197:2013 accuracy limit criteria and by using mean absolute relative difference analysis (MARD), consensus (Parkes) error grid (CEG), and surveillance error grid analyses (SEG). Results: Pooled analysis of 2100 measurements from all sites showed that 99.43% of the BGMS results were within the ranges accepted by the accuracy limit criteria. The overall MARD was 3.85%. MARD was 4.47% for glucose concentrations < 70 mg/dL and 3.81% for concentrations of 70-300 mg/dL. In CEG, most results (99.8%) were within zone A (“no effect on clinical action”); the remaining ones (0.2%) were in zone B (“little to no effect on clinical action”). The SEG analysis showed that most of the results (98.4%) were in the “no risk” zone, with the remaining results in the “slight, lower” risk zone. Conclusions: This is the largest multicenter study of Contour XT BGMS to date, and shows that this BGMS meets the ISO 15197:2013 accuracy limit criteria under local routine conditions in 21 leading Spanish hospitals. PMID:26253142
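The accuracy statistics reported here can be sketched as follows, assuming the usual MARD definition and the ISO 15197:2013 limits of ±15 mg/dL below 100 mg/dL and ±15% at or above 100 mg/dL; the paired glucose values are hypothetical.

```python
import numpy as np

def mard(bgms: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute relative difference, in percent."""
    return 100.0 * np.mean(np.abs(bgms - reference) / reference)

def within_iso_15197_2013(bgms: np.ndarray, reference: np.ndarray) -> float:
    """Fraction of results inside the ISO 15197:2013 accuracy limits:
    +/-15 mg/dL below 100 mg/dL, +/-15% at or above 100 mg/dL."""
    low = reference < 100.0
    ok = np.where(low,
                  np.abs(bgms - reference) <= 15.0,
                  np.abs(bgms - reference) <= 0.15 * reference)
    return ok.mean()

# Hypothetical paired glucose values (mg/dL): meter vs. hexokinase reference.
meter = np.array([62.0, 95.0, 142.0, 250.0, 310.0])
ref = np.array([58.0, 101.0, 150.0, 240.0, 330.0])
print(f"MARD = {mard(meter, ref):.1f}%")
print(f"within ISO limits: {100 * within_iso_15197_2013(meter, ref):.0f}%")
```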
Ribeiro, D M; Réus, J C; Felippe, W T; Pacheco-Pereira, C; Dutra, K L; Santos, J N; Porporatti, A L; De Luca Canto, G
2018-03-01
The technical quality of root canal treatment (RCT) may impact its outcome. The quality of education received during undergraduate training may be linked to the quality of treatment provided in general dental practice. In this context, the aim of this systematic review was to answer the following focused questions: (i) What is the frequency of acceptable technical quality of root fillings, assessed radiographically, performed by undergraduate students? (ii) What are the most common errors assessed radiographically and reported in these treatments? For this purpose, articles that evaluated the quality of root fillings performed by undergraduate students were selected. Data were collected based on predetermined criteria, and the key features of the included studies were extracted. The GRADE tool was used to assess the quality of the evidence, MAStARI to evaluate methodological quality, and a meta-analysis of all studies was conducted. At the end of screening, 24 articles were identified. The overall frequency of acceptable technical quality of root fillings was 48%: 52% for anterior teeth, 49% for premolars, and 26% for molars. The main procedural errors reported were ledge formation, furcation perforation, apical transportation, and apical perforation. Heterogeneity amongst the studies was high (84-99%). Five studies had a high risk of bias, eight a moderate risk, and 11 a low risk. The overall quality of the evidence was very low. The conclusion was that the technical quality of root fillings performed by undergraduate students is low, which may indicate that endodontic education achieves limited success at the undergraduate level. A plan to improve the quality of root fillings, and by extrapolation the overall quality of root canal treatment, should be discussed by the staff responsible for endodontic education and training. © 2017 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Evaluation of mobile smartphones app as a screening tool for environmental noise monitoring.
Ibekwe, Titus S; Folorunsho, David O; Dahilo, Enoch A; Gbujie, Ibeneche O; Nwegbu, Maxwell M; Nwaorgu, Onyekwere G
2016-01-01
Noise is a global occupational and environmental health hazard with considerable social and physiological impact; there is therefore a need for regular measurements to support the monitoring and regulation of environmental noise levels in our communities. This necessitates a readily available, inexpensive, and easy-to-use noise-measuring device. We aimed to test the sensitivity and validity of mobile "smart" phones for this purpose. This was a comparative analysis within a cross-sectional study done between January 2014 and February 2015. Noise levels were measured simultaneously at different locations within Abuja, Nigeria, at day and night hours in real-time environments. A sound level meter (SLM) (Extech 407730 Digital Soundmeter, serial no.: 2310135, calibration no.: 91037) and three smartphones (Samsung Galaxy Note 3, Nokia S, and Techno Phantom Z, running the Android app Androidboy1) were used. Statistical calculations were done with the Pearson correlation and t-test, and consistency was assessed against American National Standards Institute acceptable standard errors. Noise level readings for both daytime and night with the SLM and the mobile phones showed equivalent values. All measured noise levels were <100 dB. The daytime readings were nearly identical in six locations, and the maximum difference between the SLM and smartphone values was 3 dB, noted in two locations. Readings in dBA showed strong correlation (r = 0.9) within acceptable error limits for Type 2 SLM devices and no significant difference in the values (p = 0.12 and 0.58) for day and night, respectively. The sensitivity of the instrument was 92.9%. The Androidboy1 app's performance in this study showed good correlation and comparatively high sensitivity relative to the standard SLM (Type 2 SLM device). However, there is a need for further studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, Raef S.; Shen, Sui; Ove, Roger
We describe a technique for the implementation of intensity-modulated radiotherapy (IMRT) with a real-time position monitor (RPM) respiratory gating system for the treatment of the pleural space with intact lung. The technique is illustrated by a case of pediatric osteosarcoma metastatic to the pleura of the right lung. The patient was simulated in the supine position, where a breathing trace and computed tomography (CT) scans synchronized at end expiration were acquired using the RPM system. The gated CT images were used to define target volumes and critical structures. Right pleural gated IMRT delivered at end expiration was prescribed to a dose of 44 Gy, with 55 Gy delivered to areas of higher risk via a simultaneous integrated boost (SIB) technique. IMRT was necessary to avoid exceeding the tolerance of the intact lung. Although very good coverage of the target volume was achieved with a shell-shaped dose distribution, the dose over the targets was relatively inhomogeneous. Portions of the target volumes necessarily intruded into the right lung, the liver, and the right kidney, limiting the degree of normal tissue sparing that could be achieved. The radiation doses to critical structures were acceptable and well tolerated. With intact lung, delivering a relatively high dose to the pleura with acceptable doses to surrounding normal tissues using respiratory-gated pleural IMRT is feasible. Treatment delivery during a limited part of the respiratory cycle reduces CT target volume motion errors, allowing a reduction in the portion of the planning margin that accounts for respiratory motion and a subsequent increase in the therapeutic ratio.
Kouassi, Kafui; Fétéké, Lochina; Assignon, Selom; Dorkenoo, Ameyo; Napo-Koura, Gado
2015-01-01
This study aims to evaluate the performance of a number of common clinical biochemistry analyses and to make recommendations to stakeholders. It is a cross-sectional study, conducted between October 1, 2012 and July 31, 2013, of the results of 5 common clinical biochemistry tests provided by 11 volunteer laboratories operating in the public and private sectors. Over 3 cycles, these laboratories analysed 2 levels (medium and high) of serum concentrations of urea, glucose, creatinine, and aminotransferases. Laboratory performance was determined from acceptable limits corresponding to the total error limits defined by the French Society of Clinical Biology (SFBC). A system of internal quality control is implemented by all laboratories, and 45% of them participated in international programs of external quality assessment (EQA). The rate of acceptable results for the entire study was 69%. There was a significant difference (p < 0.002) between the performance of the group of laboratories engaged in a quality approach and the group with default implementation of the quality approach. A significant difference was also observed between laboratories at the central level and those at the peripheral level of our health system (p < 0.047). The performance of the results provided by the laboratories remains relatively unsatisfactory. It is important that the Ministry of Health put in place a national EQA program with mandatory participation.
Effect of wafer geometry on lithography chucking processes
NASA Astrophysics Data System (ADS)
Turner, Kevin T.; Sinha, Jaydeep K.
2015-03-01
Wafer flatness during exposure in lithography tools is critical and is becoming more important as feature sizes in devices shrink. While chucks are used to support and flatten the wafer during exposure, it is essential that wafer geometry be controlled as well. Thickness variations of the wafer and high-frequency wafer shape components can lead to poor flatness of the chucked wafer and ultimately patterning problems, such as defocus errors. The objective of this work is to understand how process-induced wafer geometry, resulting from deposited films with non-uniform stress, can lead to high-frequency wafer shape variations that prevent complete chucking in lithography scanners. In this paper, we discuss both the acceptable limits of wafer shape that permit complete chucking to be achieved, and how non-uniform residual stresses in films, either due to patterning or process non-uniformity, can induce high spatial frequency wafer shape components that prevent chucking. This paper describes mechanics models that relate non-uniform film stress to wafer shape and presents results for two example cases. The models and results can be used as a basis for establishing control strategies for managing process-induced wafer geometry in order to avoid wafer flatness-induced errors in lithography processes.
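As a baseline for the film-stress-to-shape relationship discussed here, the classical Stoney approximation for a uniform film stress can be sketched as follows; the paper's models for non-uniform stress and high-frequency shape components generalize beyond this, and the example values are hypothetical.

```python
def stoney_curvature(stress_pa: float, t_film_m: float,
                     t_wafer_m: float, e_wafer_pa: float, nu_wafer: float) -> float:
    """Wafer curvature (1/m) induced by a uniform film stress, via the
    classical Stoney approximation:
        kappa = 6 * sigma_f * t_f * (1 - nu_s) / (E_s * t_s**2)."""
    return 6.0 * stress_pa * t_film_m * (1.0 - nu_wafer) / (e_wafer_pa * t_wafer_m ** 2)

# 100 MPa tensile film, 1 um thick, on a 775 um silicon wafer.
kappa = stoney_curvature(100e6, 1e-6, 775e-6, 130e9, 0.28)
print(f"curvature = {kappa:.3e} 1/m (radius = {1/kappa:.1f} m)")
```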
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorpe, J. I.; Livas, J.; Maghami, P.
Arm locking is a proposed laser frequency stabilization technique for the Laser Interferometer Space Antenna (LISA), a gravitational-wave observatory sensitive in the milliHertz frequency band. Arm locking takes advantage of the geometric stability of the triangular constellation of three spacecraft that compose LISA to provide a frequency reference with a stability in the LISA measurement band that exceeds that available from a standard reference such as an optical cavity or molecular absorption line. We have implemented a time-domain simulation of a Kalman-filter-based arm-locking system that includes the expected limiting noise sources as well as the effects of imperfect a priori knowledge of the constellation geometry on which the design is based. We use the simulation to study aspects of the system performance that are difficult to capture in a steady-state frequency-domain analysis, such as frequency pulling of the master laser due to errors in estimates of the heterodyne frequency. We find that our implementation meets requirements on both the noise and dynamic range of the laser frequency with acceptable tolerances and that the design is sufficiently insensitive to errors in the estimated constellation geometry that the required performance can be maintained for the longest continuous measurement intervals expected for the LISA mission.
Quantification of plume opacity by digital photography.
Du, Ke; Rood, Mark J; Kim, Byung J; Kemme, Michael R; Franek, Bill; Mattison, Kevin
2007-02-01
The United States Environmental Protection Agency (USEPA) developed Method 9 to describe how plume opacity can be quantified by humans. However, use of observations by humans introduces subjectivity, and is expensive due to semiannual certification requirements of the observers. The Digital Opacity Method (DOM) was developed to quantify plume opacity at lower cost, with improved objectivity, and to provide a digital record. Photographs of plumes were taken with a calibrated digital camera under specified conditions. Pixel values from those photographs were then interpreted to quantify the plume's opacity using a contrast model and a transmission model. The contrast model determines plume opacity based on pixel values that are related to the change in contrast between two backgrounds that are located behind and next to the plume. The transmission model determines the plume's opacity based on pixel values that are related to radiances from the plume and its background. DOM was field tested with a smoke generator. The individual and average opacity errors of DOM were within the USEPA Method 9 acceptable error limits for both field campaigns. Such results are encouraging and support the use of DOM as an alternative to Method 9.
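One plausible form of the contrast-model calculation is sketched below, assuming opacity is the fractional loss of contrast between two backgrounds viewed through the plume versus beside it; the published DOM models may differ in detail, and the contrast values are hypothetical.

```python
def contrast_model_opacity(c_through: float, c_beside: float) -> float:
    """Plume opacity from the contrast between two backgrounds, viewed
    through the plume and beside it (assumed form):
        opacity = 1 - C_through / C_beside."""
    return 1.0 - c_through / c_beside

# Hypothetical pixel-derived contrasts between a dark and a bright background.
print(f"opacity = {100 * contrast_model_opacity(0.12, 0.30):.0f}%")  # 60%
```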
Potential and Limitations of an Improved Method to Produce Dynamometric Wheels
García de Jalón, Javier
2018-01-01
A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method based on harmonic elimination techniques, developed with the aim of producing low-cost dynamometric wheels. While the original method required stress measurement along many rim radial lines and the fulfilment of some rigid conditions of symmetry, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes the symmetry constraints. This can be done without compromising the estimation error level. The reduction in the number of measuring radial lines increases the ripple of the demodulated signals due to non-eliminated higher-order harmonics. Therefore, it is necessary to adapt the calibration procedure to this new scenario. A new calibration procedure that takes into account the angular position of the wheel is described in full. The new methodology is tested on a standard commercial five-spoke car wheel. The results obtained are qualitatively compared to those derived from the application of the former methodology, leading to the conclusion that the new method is both simpler and more robust due to the reduction in the number of measuring points, while the contact-force estimation error remains at an acceptable level. PMID:29439427
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2006-01-01
Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r ≈ 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5°-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.
Ozone Profile Retrievals from the OMPS on Suomi NPP
NASA Astrophysics Data System (ADS)
Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.
2017-12-01
We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals using an optimal estimation based inversion method in the 302.5-340 nm fitting window. The evaluation of available slit functions demonstrates that preflight-measured slit functions represent OMPS measurements better than derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength- and cross-track-dependent biases, and thereby serious cross-track striping errors are found in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply a "soft calibration" to the OMPS radiances. With the soft calibration, the amplitude of the fitting residuals decreases from 1% to 0.2% over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common-mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes, where the amplitude of the fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from the standard deviations of the fitting residuals. The derived error in the Huggins band (~0.1%) is 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals, maximizing measurement information and stabilizing the fitting residuals. The final precision of the fitting residuals is less than 0.1% in the low/mid latitudes, with ~1 degree of freedom for signal for tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals. To assess whether the quality of OMPS ozone retrievals is acceptable for scientific use, we characterize the retrievals, present an error analysis, and validate the retrievals against a reference dataset. The useful information on the vertical distribution of ozone from OMPS NM measurements alone is limited to below 40 km due to the absence of the Hartley-band ozone wavelengths. This shortcoming will be addressed with a joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250-310 nm range.
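The optimal estimation update underlying such retrievals can be sketched in its standard (Rodgers) linear form; the state dimensions, Jacobian, and covariances below are hypothetical stand-ins, not the OMPS configuration.

```python
import numpy as np

def oe_update(x_a, y, fwd_xa, K, S_a, S_e):
    """One Gauss-Newton step of an optimal estimation retrieval (Rodgers):
        x = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(x_a))."""
    S_e_inv = np.linalg.inv(S_e)
    gain = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a)) @ K.T @ S_e_inv
    return x_a + gain @ (y - fwd_xa)

# Hypothetical 3-layer ozone state and 5 radiance channels.
rng = np.random.default_rng(0)
x_a = np.array([300.0, 30.0, 10.0])      # a priori state (DU per layer)
K = rng.normal(size=(5, 3)) * 0.01       # Jacobian of the forward model
S_a = np.diag([50.0, 10.0, 5.0]) ** 2    # a priori covariance
S_e = np.eye(5) * 0.001 ** 2             # measurement-noise covariance
y = K @ np.array([310.0, 28.0, 12.0]) + rng.normal(scale=0.001, size=5)
print(np.round(oe_update(x_a, y, K @ x_a, K, S_a, S_e), 1))
```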
Utilizing knowledge from prior plans in the evaluation of quality assurance
NASA Astrophysics Data System (ADS)
Stanhope, Carl; Wu, Q. Jackie; Yuan, Lulin; Liu, Jianfei; Hood, Rodney; Yin, Fang-Fang; Adamson, Justus
2015-06-01
Increased interest regarding the sensitivity of pre-treatment intensity-modulated radiotherapy and volumetric modulated arc radiotherapy (VMAT) quality assurance (QA) to delivery errors has led to the development of dose-volume histogram (DVH) based analysis. This paradigm shift necessitates a change in the acceptance criteria and action tolerances for QA. Here we present a knowledge-based technique to objectively quantify degradations in the DVH for prostate radiotherapy. Using machine learning, organ-at-risk (OAR) DVHs from a population of 198 prior patients' plans were adapted to a test patient's anatomy to establish patient-specific DVH ranges. This technique was applied to single-arc prostate VMAT plans to evaluate various simulated delivery errors: systematic single-leaf offsets, systematic leaf-bank offsets, random normally distributed leaf fluctuations, systematic lag in gantry angle of the multi-leaf collimators (MLCs), fluctuations in dose rate, and delivery of each VMAT arc with a constant rather than variable dose rate. The Quantitative Analyses of Normal Tissue Effects in the Clinic (QUANTEC) reviews suggest V75Gy dose limits of 15% for the rectum and 25% for the bladder; however, the knowledge-based constraints were more stringent: 8.48 ± 2.65% for the rectum and 4.90 ± 1.98% for the bladder. 19 ± 10 mm single-leaf and 1.9 ± 0.7 mm single-bank offsets resulted in rectum DVHs worse than 97.7% (2σ) of clinically accepted plans. PTV degradations fell outside of the acceptable range for 0.6 ± 0.3 mm leaf offsets, 0.11 ± 0.06 mm bank offsets, 0.6 ± 1.3 mm of random noise, and 1.0 ± 0.7° of gantry-MLC lag. Utilizing a training set comprised of prior treatment plans, machine learning is used to predict a range of achievable DVHs for the test patient's anatomy; consequently, degradations leading to statistical outliers may be identified. A knowledge-based QA evaluation enables customized QA criteria per treatment site, institution, and/or physician and can often be more sensitive to errors than criteria based on organ complication rates.
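A minimal sketch of how a knowledge-based DVH range can serve as a QA acceptance check, assuming a simple mean-plus-2σ outlier criterion on the rectum V75Gy; the predicted distribution is simulated here and is not the authors' machine-learning model.

```python
import numpy as np

def flag_outlier_dvh(v75_observed: float, v75_predicted: np.ndarray,
                     n_sigma: float = 2.0) -> bool:
    """Flag a plan/delivery whose rectum V75Gy falls above the range
    predicted from prior patients' plans (mean + n_sigma * SD)."""
    limit = v75_predicted.mean() + n_sigma * v75_predicted.std(ddof=1)
    return v75_observed > limit

# Hypothetical knowledge-based predictions for this anatomy (% volume).
predicted = np.random.default_rng(2).normal(loc=8.5, scale=2.6, size=198)
print(flag_outlier_dvh(16.0, predicted))  # likely True: beyond ~2 sigma
print(flag_outlier_dvh(9.0, predicted))   # likely False: within range
```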
Validity of the Brunel Mood Scale for use With Malaysian Athletes.
Lan, Mohamad Faizal; Lane, Andrew M; Roy, Jolly; Hanin, Nik Azma
2012-01-01
The aim of the present study was to investigate the factorial validity of the Brunel Mood Scale for use with Malaysian athletes. Athletes (N = 1485) competing at the Malaysian Games completed the Brunel Mood Scale (BRUMS). Confirmatory Factor Analysis (CFA) results indicated a Comparative Fit Index (CFI) of .90, and the Root Mean Squared Error of Approximation (RMSEA) was 0.05. The CFI was below the 0.95 criterion for acceptability, whereas the RMSEA value was within the limits for acceptability suggested by Hu and Bentler (1999). We suggest that the results provide some support for the validity of the BRUMS for use with Malaysian athletes. Given the large sample size used in the present study, the descriptive statistics could be used as normative data for Malaysian athletes. Key points: Findings from the present study lend support to the validity of the BRUMS for use with Malaysian athletes. Given the size of the sample used in the present study, we suggest the descriptive data be used as normative data for researchers using the scale with Malaysian athletes. It is suggested that future research investigate the effects of cultural differences on emotional states experienced by athletes before, during, and post-competition.
Clinical consequences and economic costs of untreated obstructive sleep apnea syndrome.
Knauert, Melissa; Naik, Sreelatha; Gillespie, M Boyd; Kryger, Meir
2015-09-01
To provide an overview of the healthcare and societal consequences and costs of untreated obstructive sleep apnea syndrome. PubMed database for English-language studies with no start date restrictions and with an end date of September 2014. A comprehensive literature review was performed to identify all studies that discussed the physiologic, clinical and societal consequences of obstructive sleep apnea syndrome as well as the costs associated with these consequences. There were 106 studies that formed the basis of this analysis. Undiagnosed and untreated obstructive sleep apnea syndrome can lead to abnormal physiology that can have serious implications including increased cardiovascular disease, stroke, metabolic disease, excessive daytime sleepiness, work-place errors, traffic accidents and death. These consequences result in significant economic burden. Both, the health and societal consequences and their costs can be decreased with identification and treatment of sleep apnea. Treatment of obstructive sleep apnea syndrome, despite its consequences, is limited by lack of diagnosis, poor patient acceptance, lack of access to effective therapies, and lack of a variety of effective therapies. Newer modes of therapy that are effective, cost efficient and more accepted by patients need to be developed.
Three Dimensional Visualization of GOES Cloud Data Using Octrees
1993-06-01
... structure for CAD of integrated circuits that can subdivide the cubes into more complex polyhedrons. Medical imaging is also taking advantage of the ...
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and their potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities of newer-generation analyzers as compared with older analyzers, and the impact on error reduction. Three generations of immunochemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent fluidics-check process, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles; this is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. An error was defined as a reported result more than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte on the individual analyzer; three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also assessed to associate a cost factor and the potential impact of error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erred results; the newest analyzer, with bubble detection, reported only one specimen incorrectly. The precision and accuracy of the non-vortexed specimens were excellent and acceptable for all three analyzers, and no errors were found in the non-vortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to deal appropriately with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities can reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including a further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
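A minimal sketch of the study's error criterion (a reported result more than three standard deviations below the pooled-serum mean); the values are hypothetical.

```python
def is_reporting_error(result: float, pool_mean: float, pool_sd: float) -> bool:
    """Flag a result more than 3 SD below the pooled-serum mean, the
    study's criterion for an incomplete-aspiration error."""
    return result < pool_mean - 3.0 * pool_sd

print(is_reporting_error(4.2, pool_mean=10.0, pool_sd=1.5))  # True (below 5.5)
```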
Rational desires and the limitation of life-sustaining treatment.
Savulescu, Julian
1994-07-01
It is accepted that treatment of previously competent, now incompetent patients can be limited if that is what the patient would desire, if she were now competent. Expressed past preferences or an advance directive are often taken to constitute sufficient evidence of what a patient would now desire. I distinguish between desires and rational desires. I argue that for a desire to be an expression of a person's autonomy, it must be or satisfy that person's rational desires. A person rationally desires a course of action if that person desires it while being in possession of all available relevant facts, without committing relevant error of logic, and "vividly imagining" what its consequences would be like for her. I argue that some competent, expressed desires obstruct autonomy. I show that several psychological mechanisms operate to prevent a person rationally evaluating what future life in a disabled state would be like. Rational evaluation is difficult. However, treatment limitation, if it is to respect autonomy, must be in accord with a patient's rational desires, and not merely her expressed desires. I illustrate the implications of these arguments for the use of advance directives and for the treatment of competent patients.
Thoomes-de Graaf, M; Scholten-Peeters, G G M; Schellingerhout, J M; Bourne, A M; Buchbinder, R; Koehorst, M; Terwee, C B; Verhagen, A P
2016-09-01
To critically appraise and compare the measurement properties of self-administered patient-reported outcome measures (PROMs) focussing on the shoulder, assessing "activity limitations." Systematic review. The study population had to consist of patients with shoulder pain. We excluded postoperative patients or patients with generic diseases. The methodological quality of the selected studies and the results of the measurement properties were critically appraised and rated using the COSMIN checklist. Out of a total of 3427 unique hits, 31 articles, evaluating 7 different questionnaires, were included. The SPADI is the most frequently evaluated PROM and its measurement properties seem adequate apart from a lack of information regarding its measurement error and content validity. For English, Norwegian and Turkish users, we recommend the SPADI. Dutch users could use either the SDQ or the SST. In German, we recommend the DASH. In Tamil, Slovene, Spanish and Danish, the evaluated PROMs were not yet of acceptable validity. None of these PROMs showed strong positive evidence for all measurement properties. We propose to develop a new shoulder PROM focused on activity limitations, taking new knowledge and techniques into account.
Software reliability experiments data analysis and investigation
NASA Technical Reports Server (NTRS)
Walker, J. Leslie; Caglayan, Alper K.
1991-01-01
The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
Mueller, Silke C; Drewelow, Bernd
2013-05-01
The area under the concentration-time curve (AUC) after oral midazolam administration is commonly used for cytochrome P450 (CYP) 3A phenotyping studies. The aim of this investigation was to evaluate a limited sampling strategy for the prediction of AUC with oral midazolam. A total of 288 concentration-time profiles from 123 healthy volunteers who participated in four previously performed drug interaction studies with intense sampling after a single oral dose of 7.5 mg midazolam were available for evaluation. Of these, 45 profiles served for model building, which was performed by stepwise multiple linear regression, and the remaining 243 datasets served for validation. Mean prediction error (MPE), mean absolute error (MAE) and root mean squared error (RMSE) were calculated to determine bias and precision. The one- to four-sampling point models with the best coefficient of correlation were the one-sampling point model (8 h; r² = 0.84), the two-sampling point model (0.5 and 8 h; r² = 0.93), the three-sampling point model (0.5, 2, and 8 h; r² = 0.96), and the four-sampling point model (0.5, 1, 2, and 8 h; r² = 0.97). However, the one- and two-sampling point models were unable to predict the midazolam AUC due to unacceptable bias and precision. Only the four-sampling point model predicted the very low and very high midazolam AUC of the validation dataset with acceptable precision and bias. The four-sampling point model was also able to predict the geometric mean ratio of the treatment phase over the baseline (with 90% confidence interval) results of three drug interaction studies in the categories of strong, moderate, and mild induction, as well as no interaction. A four-sampling point limited sampling strategy to predict the oral midazolam AUC for CYP3A phenotyping is proposed. The one-, two- and three-sampling point models were not able to predict midazolam AUC accurately.
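The bias and precision statistics used to judge these limited sampling models are standard and easy to reproduce. Below is a minimal sketch of how MPE, MAE and RMSE can be computed from observed and model-predicted AUCs; the function name and the illustrative values are ours, not from the paper, and the paper's regression coefficients are not reproduced here.

```python
import numpy as np

def limited_sampling_metrics(auc_obs, auc_pred):
    """Bias and precision of a limited sampling model, expressed as
    percentages of the observed AUC: mean prediction error (MPE, bias),
    mean absolute error (MAE) and root mean squared error (RMSE)."""
    auc_obs = np.asarray(auc_obs, dtype=float)
    auc_pred = np.asarray(auc_pred, dtype=float)
    pe = 100.0 * (auc_pred - auc_obs) / auc_obs   # percentage prediction errors
    return pe.mean(), np.abs(pe).mean(), np.sqrt((pe ** 2).mean())

# Illustrative use with made-up AUCs; a four-point model of the form
# AUC = b0 + b1*C(0.5h) + b2*C(1h) + b3*C(2h) + b4*C(8h) would supply auc_pred.
mpe, mae, rmse = limited_sampling_metrics([120.0, 95.0, 210.0], [113.0, 102.0, 205.0])
```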
How accurate are quotations and references in medical journals?
de Lacey, G; Record, C; Wade, J
1985-09-28
The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in citation of references occurred in 24%, of which 8% were major errors--that is, they prevented immediate identification of the source of the reference. Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become "accepted fact." Two suggestions for reducing these high levels of inaccuracy are that papers scheduled for publication with errors of citation should be returned to the author to be checked completely, and that a permanent column specifically for misquotations could be inserted into the journal.
Eliminative Argumentation: A Basis for Arguing Confidence in System Properties
2015-02-01
errors to acceptable system reliability is unsound. But this is not an acceptable undercutting defeater; it does not put the conclusion about system... first to note sources of unsoundness in arguments, namely, questionable inference rules and weaknesses in proffered evidence. However, the notions of... This material is based upon work funded and supported by the Department of Defense under Contract No. FA8721-05-C-0003 with Carnegie Mellon University.
Authorities to Use US Military Force Since the Passage of the 1973 War Powers Resolution
2016-05-26
Cambridge University Press, 2013), 55. 59 Third, the legislative and executive branches' acceptance of and reliance on the "all volunteer... Andrew Bacevich recently explained the disadvantages of the All-Volunteer Force: Today, the people have by-and-large tuned out war or accept it as... than replicating the errors of Vietnam, the All-Volunteer Force has fostered new ones, chief among them a collective abrogation of civic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magome, T; University of Tokyo Hospital, Tokyo; University of Minnesota, Minneapolis, MN
Purpose: Megavoltage computed tomography (MVCT) imaging has been widely used for daily patient setup with helical tomotherapy (HT). One drawback of MVCT is its very long imaging time, owing to slow couch speed. The purpose of this study was to develop an MVCT imaging method allowing faster couch speeds, and to assess its accuracy for image guidance for HT. Methods: Three cadavers (mimicking the closest physiological and physical system to patients) were scanned four times with couch speeds of 1, 2, 3, and 4 mm/s. The resulting MVCT images were reconstructed using an iterative reconstruction (IR) algorithm. The MVCT images were registered with kilovoltage CT images, and the registration errors were compared with the errors with the conventional filtered back projection (FBP) algorithm. Moreover, the fast MVCT imaging was tested in three cases of total marrow irradiation as a clinical trial. Results: Three-dimensional registration errors of the MVCT images reconstructed with the IR algorithm were significantly smaller (p < 0.05) than the errors of images reconstructed with the FBP algorithm at fast couch speeds (3, 4 mm/s). The scan time and imaging dose at a speed of 4 mm/s were reduced to 30% of those from a conventional coarse mode scan. For patient imaging, a limited number of conventional MVCT (1.2 mm/s) and fast MVCT (3 mm/s) scans showed an acceptably reduced imaging time and dose while remaining usable for anatomical registration. Conclusion: Fast MVCT with the IR algorithm may be a clinically feasible alternative for rapid 3D patient localization. This technique may also be useful for calculating daily dose distributions or organ motion analyses in HT treatment over a wide area.
NASA Astrophysics Data System (ADS)
Zait, Eitan; Ben-Zvi, Guy; Dmitriev, Vladimir; Oshemkov, Sergey; Pforr, Rainer; Hennig, Mario
2006-05-01
Intra-field CD variation is, besides OPC errors, a main contributor to the total CD variation budget in IC manufacturing. It is caused mainly by mask CD errors. In advanced memory device manufacturing the minimum features are close to the resolution limit, resulting in large mask error enhancement factors and hence large intra-field CD variations. Consequently, tight CD control (CDC) of the mask features is required, which significantly increases the cost of the mask and hence the litho process costs. Alternatively, techniques are sought (1) that allow improving the intra-field CD control for a given moderate mask and scanner imaging performance. Recently a new technique (2) has been proposed which is based on correcting the printed CD by applying shading elements generated in the substrate bulk of the mask by ultrashort pulsed laser exposure. The blank transmittance across a feature is controlled by changing the density of light scattering pixels. The technique has been demonstrated to be very successful in correcting intra-field CD variations caused by the mask and the projection system (2). A key application criterion of this technique in device manufacturing is the stability of the absorbing pixels against the DUV light irradiation applied during mask projection in scanners. This paper describes the procedures and results of such an investigation. To do this with acceptable effort, a special experimental setup was chosen, allowing an evaluation within a reasonable time. A 193 nm excimer laser with a pulse duration of 25 ns was used for blank irradiation. An accumulated dose equivalent to 100,000 300 mm wafer exposures was applied to half-tone PSM mask areas with and without CDC shadowing elements. This allows the discrimination of effects appearing in treated and untreated glass regions. Several intensities were investigated to define an acceptable threshold intensity that avoids glass compaction or generation of color centers in the glass. The impact of the irradiation on the mask transmittance of both areas was studied by measurements of the printed CD on wafer using a wafer scanner before and after DUV irradiation.
Radar sensitivity and antenna scan pattern study for a satellite-based Radar Wind Sounder (RAWS)
NASA Technical Reports Server (NTRS)
Stuart, Michael A.
1992-01-01
Modeling global atmospheric circulations and forecasting the weather would improve greatly if worldwide information on winds aloft were available. Recognition of this led to the inclusion of the LAser Wind Sounder (LAWS) system, to measure Doppler shifts from aerosols, in the planned Earth Observing System (EOS). However, gaps will exist in LAWS coverage where heavy clouds are present. The RAdar Wind Sounder (RAWS) is an instrument that could fill these gaps by measuring Doppler shifts from clouds and rain. Previous studies conducted at the University of Kansas show RAWS to be a feasible instrument. This thesis pertains to the signal-to-noise ratio (SNR) sensitivity, transmit waveform, and limitations to the antenna scan pattern of the RAWS system. A drop-size distribution model is selected and applied to the radar range equation for the sensitivity analysis. Six frequencies are used in computing the SNR for several cloud types to determine the optimal transmit frequency. The results show that the use of two frequencies, a higher one (94 GHz) to obtain sensitivity for thinner clouds and a lower one (24 GHz) for better penetration in rain, provides ample SNR. The waveform design supports covariance estimation processing. This estimator eliminates the Doppler ambiguities compounded by the selection of such high transmit frequencies, while providing an estimate of the mean frequency. The unambiguous range and velocity computation shows them to be within acceptable limits. The design goal for the RAWS system is to limit the wind-speed error to less than 1 ms(exp -1). Due to linear dependence between vectors for a three-vector scan pattern, a reasonable wind-speed error is unattainable. Only the two-vector scan pattern falls within the wind-error limits, for azimuth angles between 16 deg and 70 deg. However, this scan only allows two components of the wind to be determined. As a result, a technique is then shown, based on the Z-R-V relationships, that permits the vertical component (i.e., rain) to be computed. Thus the horizontal wind components may be obtained from the covariance estimator and the vertical component from the reflectivity factor. Finally, a new candidate system is introduced which summarizes the parameters taken from previous RAWS studies, or those modified in this thesis.
Boxwala, A A; Chaney, E L; Fritsch, D S; Friedman, C P; Rosenman, J G
1998-09-01
The purpose of this investigation was to design and implement a prototype physician workstation, called PortFolio, as a platform for developing and evaluating, by means of controlled observer studies, user interfaces and interactive tools for analyzing and managing digital portal images. The first observer study, conducted in a controlled experimental setting, was designed to measure physician acceptance of workstation technology, as an alternative to a view box, for inspection and analysis of portal images for detection of treatment setup errors. PortFolio incorporates a windows user interface, a compact kit of carefully selected image analysis tools, and an object-oriented database infrastructure. The kit evaluated in the observer study included tools for contrast enhancement, registration, and multimodal image visualization. Acceptance was measured in the context of performing portal image analysis in a structured protocol designed to simulate clinical practice. Acceptability and usage patterns were measured from semistructured questionnaires and logs of user interactions. Radiation oncologists, the subjects for this study, perceived the tools in PortFolio to be acceptable clinical aids. Concerns were expressed regarding user efficiency, particularly with respect to the image registration tools. The results of our observer study indicate that workstation technology is acceptable to radiation oncologists as an alternative to a view box for clinical detection of setup errors from digital portal images. Improvements in implementation, including more tools and a greater degree of automation in the image analysis tasks, are needed to make PortFolio more clinically practical.
Effect of time delay on surgical performance during telesurgical manipulation.
Fabrizio, M D; Lee, B R; Chan, D Y; Stoianovici, D; Jarrett, T W; Yang, C; Kavoussi, L R
2000-03-01
Telementoring allows a less experienced surgeon to benefit from an expert surgical consultation, reducing cost, travel, and the learning curve associated with new procedures. However, there are several technical limitations that affect practical applications. One potentially serious problem is the time delay that occurs any time data are transferred across long distances. To date, the effect of time delay on surgical performance has not been studied. A two-phase trial was designed to examine this effect. In the first phase, a series of tasks was performed, and the number of robotic movements required for completion was counted. Programmed incremental time delays were made in audiovisual acquisition and robotic controls. The number of errors made while performing each task at various time delay intervals was noted. In the second phase, a remote surgeon in Baltimore performed the tasks 9000 miles away in Singapore. The number of errors made was recorded. As the time delay increased, the number of operator errors increased. The accuracy needed to perform remote robotic procedures was diminished as the time delay increased. A learning curve did exist for each task, but as the time delay interval increased, it took longer to complete the task. Time delay does affect surgical performance. There is an acceptable delay of <700 msec for which surgeons can compensate. Clinical studies will be needed to evaluate the true impact of time delay.
Measurement of latent cognitive abilities involved in concept identification learning.
Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B
2015-01-01
We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities: abstraction and set shifting. Latent abstraction ability was positively correlated with number of concepts learned, and latent set-shifting ability was negatively correlated with number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.
Kierepka, E M; Latch, E K
2016-01-01
Landscape genetics is a powerful tool for conservation because it identifies landscape features that are important for maintaining genetic connectivity between populations within heterogeneous landscapes. However, using landscape genetics in poorly understood species presents a number of challenges, namely, limited life history information for the focal population and spatially biased sampling. Both obstacles can reduce statistical power, particularly in individual-based studies. In this study, we genotyped 233 American badgers in Wisconsin at 12 microsatellite loci to identify alternative statistical approaches that can be applied to poorly understood species in an individual-based framework. Badgers are protected in Wisconsin owing to an overall lack of life history information, so our study utilized partial redundancy analysis (RDA) and spatially lagged regressions to quantify how three landscape factors (Wisconsin River, Ecoregions and land cover) impacted gene flow. We also performed simulations to quantify errors created by spatially biased sampling. Statistical analyses first found that geographic distance was an important influence on gene flow, mainly driven by fine-scale positive spatial autocorrelation. After controlling for geographic distance, both RDA and regressions found that the Wisconsin River and Agriculture were correlated with genetic differentiation. However, only Agriculture had an acceptable type I error rate (3-5%) to be considered biologically relevant. Collectively, this study highlights the benefits of combining robust statistics and error assessment via simulations and provides a method for hypothesis testing in individual-based landscape genetics. PMID:26243136
Y-balance test: a reliability study involving multiple raters.
Shaffer, Scott W; Teyhen, Deydre S; Lorenson, Chelsea L; Warren, Rick L; Koreerat, Christina M; Straseske, Crystal A; Childs, John D
2013-11-01
The Y-balance test (YBT) is one of the few field expedient tests that have shown predictive validity for injury risk in an athletic population. However, analysis of the YBT in a heterogeneous population of active adults (e.g., military, specific occupations) involving multiple raters with limited experience in a mass screening setting is lacking. The primary purpose of this study was to determine interrater test-retest reliability of the YBT in a military setting using multiple raters. Sixty-four service members (53 males, 11 females) actively conducting military training volunteered to participate. Interrater test-retest reliability of the maximal reach had intraclass correlation coefficients, ICC(2,1), of 0.80 to 0.85 with a standard error of measurement ranging from 3.1 to 4.2 cm for the 3 reach directions (anterior, posteromedial, and posterolateral). Interrater test-retest reliability of the average reach of 3 trials had an ICC(2,3) range of 0.85 to 0.93 with an associated standard error of measurement ranging from 2.0 to 3.5 cm. The YBT showed good interrater test-retest reliability with an acceptable level of measurement error among multiple raters screening active duty service members. In addition, 31.3% (n = 20 of 64) of participants exhibited an anterior reach asymmetry of >4 cm, suggesting impaired balance symmetry and potentially increased risk for injury. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
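The standard error of measurement reported here follows directly from the between-subject variability and the reliability coefficient. A minimal sketch of that calculation, with made-up reach scores rather than the study's raw data:

```python
import numpy as np

def sem(scores, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC), where SD
    is the between-subject standard deviation of the reach scores."""
    sd = np.std(scores, ddof=1)
    return sd * np.sqrt(1.0 - icc)

# e.g., anterior-reach scores (cm) with ICC(2,1) = 0.80; values are illustrative
print(round(sem([62.0, 71.5, 58.3, 66.8, 74.1, 60.9], 0.80), 2))
```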
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method for single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely bit rate control (BRC) and error control (EC), were set to select the optimal principal components, eigenvectors and their quantization levels to achieve a desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 records of MIT-BIH Arrhythmia data and 60 normal and 30 diagnostic ECG records from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
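The rate and distortion measures used above (CR, PRDN, MAE) are standard and simple to compute. A minimal sketch, assuming the common convention that PRDN is normalized after mean removal; the function name and interface are ours, not the paper's:

```python
import numpy as np

def ecg_quality(x, x_rec, bits_in, bits_out):
    """Rate/distortion measures for ECG compression: compression ratio (CR),
    percentage RMS difference normalized after mean removal (PRDN), and
    maximum absolute error (MAE, same units as the signal)."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    cr = bits_in / bits_out
    prdn = 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum((x - x.mean()) ** 2))
    mae = np.max(np.abs(x - x_rec))
    return cr, prdn, mae
```

In an EC-style loop, one would keep adding principal components (or refining quantization) until PRDN and MAE fall below the chosen thresholds, which is how a quality target translates into a variable bit rate.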
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
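To make the autoregressive error treatment concrete, here is a minimal sketch of the likelihood under a first-order AR model of correlated residuals. This is only an illustration of the idea under simplifying assumptions (conditioning on the first residual, known order); in the paper the AR coefficient and noise level are themselves hierarchical unknowns sampled along with the seabed model:

```python
import numpy as np

def ar1_loglikelihood(residuals, a, sigma):
    """Gaussian log-likelihood of data residuals under an AR(1) error
    model: innovations e_t = r_t - a*r_{t-1} are treated as white
    N(0, sigma^2). Conditional on the first residual for simplicity."""
    r = np.asarray(residuals, float)
    e = r[1:] - a * r[:-1]                         # whitened innovations
    n = e.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma ** 2)
            - 0.5 * np.sum(e ** 2) / sigma ** 2)
```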
Epistemic assessment of radon level of offices in Hong Kong
NASA Astrophysics Data System (ADS)
Wong, L. T.; Mui, K. W.; Law, K. Y.; Hui, P. S.
People spend most of their lives indoors, and human exposure to various air pollutants has accordingly shifted in importance from outdoor to indoor sources. As some pollutant sources originate from the building envelope and cannot be removed, or are costly to mitigate, the remaining questions are how indoor air quality (IAQ) is monitored and how the information can be used by the environmental control system to achieve the best air quality delivery. Indoor radon level can be measured with a number of sampling approaches and used to determine the acceptance of an IAQ with respect to certain exposure limits. In determining the acceptable IAQ of a space, this study proposes that the measured indoor radon level must be accompanied by the confidence level of the assessment. Radon levels in Hong Kong offices were studied by a cross-sectional measurement in 216 typical offices and a year-round longitudinal measurement in one office. The results showed that 96.5% (94.0-99.0% at 95% confidence interval) and 98.6% (97.0% to >99.9% at 95% confidence interval) of the sampled offices would satisfy action radon levels of 150 and 200 Bq m-3, respectively. The same results were then used to quantify the prior knowledge of the radon level distribution of an office and the probable errors of the adopted sampling schemes. This study proposes an epistemic approach, with the prior knowledge and a sample test result, to assess acceptance against an action radon level for an office in Hong Kong. With the certainty of the test results determined for judgmental purposes, it is possible to apply the method to an office for follow-up tests of acceptance.
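The paper does not specify how its confidence intervals were derived, but the Wilson score interval is one standard way to attach a 95% interval to an observed compliance proportion like those quoted above. A minimal sketch using the abstract's numbers as illustration:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score 95% interval for a proportion: here, the fraction of
    sampled offices whose radon level satisfies an action level."""
    p = k / n
    denom = 1.0 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# e.g., roughly 96.5% of the 216 offices below the 150 Bq m-3 action level:
print(wilson_interval(round(0.965 * 216), 216))
```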
Zacharis, Constantinos K; Vastardi, Elli
2018-02-20
In the research presented we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles, a graphical decision-making tool, were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will be included in the defined bias limits. The relative bias ranged between -1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg g-1 in sample) for both methyl and isopropyl p-toluenesulfonate. As proof-of-concept, the validated method was successfully applied in the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
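For readers unfamiliar with accuracy profiles, the core computation at each concentration level is a β-expectation tolerance interval on the relative bias. The sketch below is a deliberate single-series simplification (one normal sample of per-replicate biases); published "total error" validations such as this one pool repeatability and between-series variance (intermediate precision), so the exact interval width differs:

```python
import numpy as np
from scipy import stats

def beta_expectation_interval(relative_bias, beta=0.95):
    """Beta-expectation tolerance interval at one concentration level:
    on average, a fraction beta of future results is expected inside.
    relative_bias holds per-replicate biases in percent."""
    x = np.asarray(relative_bias, float)
    n = x.size
    t = stats.t.ppf((1.0 + beta) / 2.0, n - 1)
    half = t * x.std(ddof=1) * np.sqrt(1.0 + 1.0 / n)
    return x.mean() - half, x.mean() + half

# the level passes if the interval lies within the +/-10% acceptance limits
lo, hi = beta_expectation_interval([1.8, -0.4, 2.6, 0.9, -1.1, 1.3])
```

Plotting these intervals across the 10-200% range against the ±10% acceptance limits gives the accuracy profile used as the decision tool.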
HS-GC-MS method for the analysis of fragrance allergens in complex cosmetic matrices.
Desmedt, B; Canfyn, M; Pype, M; Baudewyns, S; Hanot, V; Courselle, P; De Beer, J O; Rogiers, V; De Paepe, K; Deconinck, E
2015-01-01
Potential allergenic fragrances are part of the Cosmetic Regulation with labelling and concentration restrictions. This means that they have to be declared on the ingredients list when their concentration exceeds the labelling limit of 10 ppm or 100 ppm for leave-on or rinse-off cosmetics, respectively. Labelling is important for consumer safety. In this way, people sensitised to fragrances can select their products based on the ingredients list to prevent elicitation of an allergic reaction. It is therefore important to quantify potential allergenic ingredients in cosmetic products. An easy-to-perform liquid extraction was developed, combined with a new headspace GC-MS method. The latter was capable of analysing 24 volatile allergenic fragrances in complex cosmetic formulations, such as hydrophilic (O/W) and lipophilic (W/O) creams, lotions and gels. This method was successfully validated using the total error approach. The trueness deviations for all components were smaller than 8%, and the expectation tolerance limits did not exceed the acceptance limits of ±20% at the labelling limit. The current methodology was used to analyse 18 cosmetic samples that had already been identified as illegal on the EU market for containing forbidden skin whitening substances. Our results showed that these cosmetic products also contained undeclared fragrances above the limit value for labelling, which imposes an additional health risk for the consumer. Copyright © 2014 Elsevier B.V. All rights reserved.
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of these methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or the β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
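To illustrate the generalized-pivotal-quantity idea (not the paper's exact test statistic or acceptance rule), the Monte Carlo sketch below builds GPQs for the mean bias and standard deviation of a normal error model and turns them into a lower confidence bound on the proportion of future results within ±λ of the true value. All names and the acceptance threshold in the comment are ours:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def gpq_lower_bound(mean_bias, sd, n, lam, n_mc=100_000, conf=0.95):
    """Lower confidence bound on the proportion of future results within
    +/-lam of the true value, via generalized pivotal quantities for the
    mean bias mu and SD sigma of a normal error model (n observations)."""
    z = rng.standard_normal(n_mc)
    chi2 = rng.chisquare(n - 1, n_mc)
    sigma_g = sd * np.sqrt((n - 1) / chi2)        # GPQ for sigma
    mu_g = mean_bias - z * sigma_g / np.sqrt(n)   # GPQ for mu
    pi_g = norm.cdf((lam - mu_g) / sigma_g) - norm.cdf((-lam - mu_g) / sigma_g)
    return np.percentile(pi_g, 100 * (1 - conf))

# hypothetical use: accept if gpq_lower_bound(1.2, 3.5, 12, lam=15) >= 0.8
```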
Replacing the CCSDS Telecommand Protocol with Next Generation Uplink
NASA Technical Reports Server (NTRS)
Kazz, Greg; Burleigh, Scott; Greenberg, Ed
2012-01-01
Better performing Forward Error Correction on the forward link, along with adequate power in the data, opens an uplink operations trade space that enables missions to: command to greater distances in deep space (increased uplink margin); increase the size of the payload data (latency may be a factor); and provide space for the security header/trailer of the CCSDS Space Data Link Security Protocol. Note: these higher rates could be used for relief of emergency communication margins/rates and are not limited to improving top-end rate performance. A higher performance uplink could also reduce the requirements on flight emergency antenna size and/or the performance required from ground stations. Use of a selective repeat ARQ protocol may increase the uplink design requirements, but the resultant development is deemed acceptable due to the potential factor of 4 to 8 increase in uplink data rate.
NASA Astrophysics Data System (ADS)
Delogu, A.; Furini, F.
1991-09-01
Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computation, and graphic techniques for calculating scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes, are becoming of paramount importance in stealth design. A computer code, evaluating the RCS of arbitrary shaped metallic objects that are computer aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.
NAND Flash Qualification Guideline
NASA Technical Reports Server (NTRS)
Heidecker, Jason
2012-01-01
Measurements of Gluconeogenesis and Glycogenolysis: A Methodological Review
Chung, Stephanie T.; Chacko, Shaji K.; Sunehag, Agneta L.
2015-01-01
Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to determine gluconeogenesis is to measure the incorporation of deuterium from the body water pool into newly formed glucose. However, several techniques using radioactive and stable-labeled isotopes have been used to quantitate the contribution and regulation of gluconeogenesis in humans. Each method has its advantages, methodological assumptions, and set of propagated errors. In this review, we examine the strengths and weaknesses of the most commonly used stable isotope methods to measure gluconeogenesis in vivo. We discuss the advantages and limitations of each method and summarize the applicability of these measurements in understanding normal and pathophysiological conditions. PMID:26604176
Language-Based Inequity in Health Care: Who Is the "Poor Historian"?
Green, Alexander R; Nze, Chijioke
2017-03-01
Patients with limited English proficiency (LEP) are among the most vulnerable populations. They experience high rates of medical errors with worse clinical outcomes than English-proficient patients and receive lower quality of care by other metrics. However, we have yet to take the issue of linguistic inequities seriously in the medical system and in medical education, tacitly accepting that substandard care is either unavoidable or not worth the cost to address. We argue that we have a moral imperative to provide high-quality care to patients with LEP and to teach our medical trainees that such care is both expected and feasible. Ultimately, to achieve linguistic equity will require creating effective systems for medical interpretation and a major culture shift not unlike what has happened in patient safety. © 2017 American Medical Association. All Rights Reserved.
Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions
Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.
2010-01-01
Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. ["Attenuation correction in PET using consistency information," IEEE Trans. Nucl. Sci. 45, 3134-3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256
Miller, Daniel F; Fortier, Christopher R; Garrison, Kelli L
2011-02-01
Bar code medication administration (BCMA) technology is gaining acceptance for its ability to prevent medication administration errors. However, studies suggest that improper use of BCMA technology can yield unsatisfactory error prevention and introduce new potential medication errors. To evaluate the incidence of high-alert medication BCMA triggers and alert types, and to discuss the types of nursing and pharmacy workarounds occurring with the use of BCMA technology and the electronic medication administration record (eMAR). Medication scanning and override reports from January 1, 2008, through November 30, 2008, for all adult medical/surgical units were retrospectively evaluated for high-alert medication system triggers, alert types, and override reason documentation. An observational study of nursing workarounds on an adult medicine step-down unit was performed, and an analysis of potential pharmacy workarounds affecting BCMA and the eMAR was also conducted. Seventeen percent of scanned medications triggered an error alert, of which 55% were for high-alert medications. Insulin aspart, NPH insulin, hydromorphone, potassium chloride, and morphine were the top 5 high-alert medications that generated alert messages. Clinician override reasons for alerts were documented in only 23% of administrations. Observational studies assessing nursing workarounds revealed a median of 3 clinician workarounds per administration. Specific nursing workarounds included a failure to scan medications/patient armbands and scanning the bar code once the dose had been removed from the unit-dose packaging. Analysis of pharmacy order entry process workarounds revealed the potential for missed doses, duplicate doses, and doses being scheduled at the wrong time. BCMA has the potential to prevent high-alert medication errors by alerting clinicians through alert messages. Nursing and pharmacy workarounds can limit the recognition of optimal safety outcomes, and therefore workflow processes must be continually analyzed and restructured to yield the intended full benefits of BCMA technology. © 2011 SAGE Publications.
NASA Astrophysics Data System (ADS)
Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken
2011-04-01
A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the number of errors or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate (BER) before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital still camera and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without cost penalty. Compared with the conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte or 2 KByte is used, and 98% lower power consumption is realized. At the life-end of the SSD, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance is also discussed. The random read performance is estimated by the latency, which is below 1.5 ms for ECC codewords up to 32 KByte. This is below the 2 ms average latency of a 15,000 rpm HDD.
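The link between codeword length, correction strength and acceptable raw BER can be reproduced with a simple binomial model, assuming independent bit errors; a codeword fails only when more than t of its n bits flip. The correction strengths of the longer codewords are not given in the abstract, so they are inputs here, and parity bits are ignored for simplicity:

```python
from scipy.stats import binom

def post_ecc_failure(raw_ber, codeword_bits, t_correctable):
    """Probability a codeword fails after ECC: more than t of n bits in
    error, assuming independent bit errors at the given raw BER."""
    return binom.sf(t_correctable, codeword_bits, raw_ber)

def acceptable_raw_ber(codeword_bits, t_correctable, target_fail):
    """Largest raw BER keeping the post-ECC failure probability below
    target_fail, found by bisection on a log scale."""
    lo, hi = 1e-9, 0.5
    for _ in range(100):
        mid = (lo * hi) ** 0.5
        if post_ecc_failure(mid, codeword_bits, t_correctable) < target_fail:
            lo = mid
        else:
            hi = mid
    return lo

# e.g., 8-bit correction over a 512-byte codeword, hypothetical 1e-12 target:
print(acceptable_raw_ber(512 * 8, 8, target_fail=1e-12))
```

Because a longer codeword with proportionally more correctable bits averages out error clustering, the same target failure rate tolerates a higher raw BER, which is the effect the dynamic transition exploits.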
1985-04-24
reliability/ downtime/ communication lines/ man-machine interface/ other. 2. A noticeable (to the user) failure happens about ___, and that number has been improving/ steady/ getting worse. 3. The number of failures/errors for NOHIMS is acceptable/ somewhat acceptable/ somewhat unacceptable/ unacceptable... somewhat fast/ somewhat slow/ slow. 7. When a NOHIMS failure occurs, it affects the day-to-day provision of medical care because work procedures must
Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R
2018-05-21
Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
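The repeatability statistics quoted here are straightforward to compute from paired repeat scans. A minimal sketch of the technical error of measurement and its percentage form, with made-up volumes rather than the study's data:

```python
import numpy as np

def tem(trial1, trial2):
    """Technical error of measurement for paired repeat measurements:
    TEM = sqrt(sum(d^2) / (2n)); %TEM expresses it relative to the
    grand mean of both trials."""
    x1, x2 = np.asarray(trial1, float), np.asarray(trial2, float)
    d = x1 - x2
    tem_abs = np.sqrt(np.sum(d ** 2) / (2.0 * d.size))
    tem_pct = 100.0 * tem_abs / np.mean(np.concatenate([x1, x2]))
    return tem_abs, tem_pct

# illustrative mid-trunk volumes (L) from two repeat scans
print(tem([28.4, 31.0, 25.7, 29.9], [28.6, 30.7, 25.9, 30.3]))
```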
Use of error grid analysis to evaluate acceptability of a point of care prothrombin time meter.
Petersen, John R; Vonmarensdorf, Hans M; Weiss, Heidi L; Elghetany, M Tarek
2010-02-01
Statistical methods (linear regression, correlation analysis, etc.) are frequently employed in comparing methods in the central laboratory (CL). Assessing acceptability of point of care testing (POCT) equipment, however, is more difficult because statistically significant biases may not have an impact on clinical care. We show how error grid (EG) analysis can be used to compare POCT PT INR with the CL. We compared results from 103 patients seen in an anti-coagulation clinic who were on Coumadin maintenance therapy, using fingerstick samples for POCT (Roche CoaguChek XS and S) and citrated venous blood samples for the CL (Stago STAR). To compare clinical acceptability of results we developed an EG with zones A, B, C and D. Using 2nd order polynomial equation analysis, POCT results correlate highly with the CL for the CoaguChek XS (R² = 0.955) and CoaguChek S (R² = 0.93), respectively, but this does not indicate whether POCT results are clinically interchangeable with the CL. Using the EG, it is readily apparent which levels can be considered clinically identical to the CL despite analytical bias. We have demonstrated the usefulness of EG analysis in determining acceptability of POCT PT INR testing and how it can be used to determine cut-offs where differences in POCT results may impact clinical care. Copyright 2009 Elsevier B.V. All rights reserved.
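The mechanics of an error grid are simple: each paired result is assigned to a zone by its clinical, not statistical, disagreement. The toy classifier below illustrates the idea only; the zone boundaries are placeholder numbers, whereas the paper's zones A-D were defined from clinical impact on anticoagulation decisions rather than a single fixed difference:

```python
def inr_error_grid_zone(cl_inr, poc_inr, bounds=(0.4, 0.8, 1.5)):
    """Toy error-grid classifier for a central-lab / point-of-care INR
    pair; bounds are illustrative placeholders, not the paper's zones."""
    diff = abs(poc_inr - cl_inr)
    a, b, c = bounds
    if diff <= a:
        return "A"   # clinically identical result
    if diff <= b:
        return "B"   # unlikely to change management
    if diff <= c:
        return "C"   # could change a dosing decision
    return "D"       # potentially dangerous discrepancy

print(inr_error_grid_zone(2.5, 2.8))  # -> "A"
```

Counting the fraction of pairs per zone then gives a clinically interpretable acceptability summary that linear regression or correlation alone cannot provide.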
Tan, Aimin; Saffaj, Taoufiq; Musuku, Adrien; Awaiye, Kayode; Ihssane, Bouchaib; Jhilal, Fayçal; Sosse, Saad Alaoui; Trabelsi, Fethi
2015-03-01
The current approach in regulated LC-MS bioanalysis, which evaluates the precision and trueness of an assay separately, has long been criticized for inadequate balancing of lab-customer risks. Accordingly, different total error approaches have been proposed. The aims of this research were to evaluate the aforementioned risks in reality and the differences among four common total error approaches (β-expectation, β-content, uncertainty, and risk profile) through retrospective analysis of regulated LC-MS projects. Twenty-eight projects (14 validations and 14 productions) were randomly selected from two GLP bioanalytical laboratories, representing a wide variety of assays. The results show that the risk of accepting unacceptable batches did exist with the current approach (9% and 4% of the evaluated QC levels failed for validation and production, respectively). The fact that the risk was not widespread was only because the precision and bias of modern LC-MS assays are usually much better than the minimum regulatory requirements. Despite minor differences in magnitude, very similar accuracy profiles and/or conclusions were obtained from the four different total error approaches. High correlation was even observed in the widths of the bias intervals. For example, the mean width of SFSTP's β-expectation interval is 1.10-fold (CV=7.6%) that of Saffaj-Ihssane's uncertainty approach, while the latter is 1.13-fold (CV=6.0%) that of Hoffman-Kringle's β-content approach. To conclude, the risk of accepting unacceptable batches was real with the current approach, suggesting that total error approaches should be used instead. Moreover, any of the four total error approaches may be used because of their overall similarity. Lastly, the difficulties/obstacles associated with the application of total error approaches in routine analysis and their desirable future improvements are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Online Deviation Detection for Medical Processes
Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.
2014-01-01
Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343
Minimizing the Disruptive Effects of Prospective Memory in Simulated Air Traffic Control
Loft, Shayne; Smith, Rebekah E.; Remington, Roger
2015-01-01
Prospective memory refers to remembering to perform an intended action in the future. Failures of prospective memory can occur in air traffic control. In two experiments, we examined the utility of external aids for facilitating air traffic management in a simulated air traffic control task with prospective memory requirements. Participants accepted and handed off aircraft and detected aircraft conflicts. The prospective memory task involved remembering to deviate from a routine operating procedure when accepting target aircraft. External aids that contained details of the prospective memory task appeared and flashed when target aircraft needed acceptance. In Experiment 1, external aids presented either adjacent or non-adjacent to each of the 20 target aircraft presented over the 40 min test phase reduced prospective memory error by 11% compared to a condition without external aids. In Experiment 2, only a single target aircraft was presented a significant time (39-42 min) after presentation of the prospective memory instruction, and the external aids reduced prospective memory error by 34%. In both experiments, costs to the efficiency of non-prospective memory air traffic management (non-target aircraft acceptance response time, conflict detection response time) were reduced by non-adjacent aids compared to no aids or adjacent aids. In contrast, in both experiments, the efficiency of prospective memory air traffic management (target aircraft acceptance response time) was facilitated by adjacent aids compared to non-adjacent aids. Together, these findings have potential implications for the design of automated alerting systems to maximize multi-task performance in work settings where operators monitor and control demanding perceptual displays. PMID:24059825
WE-B-304-03: Biological Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orton, C.
The ultimate goal of radiotherapy treatment planning is to find a treatment that will yield a high tumor control probability (TCP) with an acceptable normal tissue complication probability (NTCP). Yet most treatment planning today is not based upon optimization of TCPs and NTCPs, but rather upon meeting physical dose and volume constraints defined by the planner. It has been suggested that treatment planning evaluation and optimization would be more effective if they were biologically and not dose/volume based, and this is the claim debated in this month's Point/Counterpoint. After a brief overview of biologically and DVH based treatment planning by the Moderator Colin Orton, Joseph Deasy (for biological planning) and Charles Mayo (against biological planning) will begin the debate. Some of the arguments in support of biological planning include: (1) this will result in more effective dose distributions for many patients; (2) DVH-based measures of plan quality are known to have little predictive value; (3) there is little evidence that either D95 or D98 of the PTV is a good predictor of tumor control; (4) sufficient validated outcome prediction models are now becoming available and should be used to drive planning and optimization. Some of the arguments against biological planning include: (1) several decades of experience with DVH-based planning should not be discarded; (2) we do not know enough about the reliability and errors associated with biological models; (3) the radiotherapy community in general has little direct experience with side-by-side comparisons of DVH vs biological metrics and outcomes; (4) it is unlikely that a clinician would accept extremely cold regions in a CTV or hot regions in a PTV, despite having acceptable TCP values. Learning Objectives: To understand dose/volume based treatment planning and its potential limitations. To understand biological metrics such as EUD, TCP, and NTCP. To understand biologically based treatment planning and its potential limitations.
Code of Federal Regulations, 2014 CFR
2014-04-01
... should include full customer restitution where customer harm is demonstrated, except where the amount of... or external audit findings, self-reported errors, or through validated complaints. (C) Requirements...
van Karnebeek, Clara D M; Stockler, Sylvia
2012-03-01
Intellectual disability ('developmental delay' at age <5 years) affects 2.5% of the population worldwide. Recommendations to investigate genetic causes of intellectual disability are based on the frequencies of single conditions and on the yield of diagnostic methods, rather than on the availability of causal therapy. Inborn errors of metabolism constitute a subgroup of rare genetic conditions for which an increasing number of treatments have become available. To identify all currently treatable inborn errors of metabolism presenting with predominantly intellectual disability, we performed a systematic literature review. We applied Cochrane Collaboration guidelines in the formulation of PICO and definitions, and searched Pubmed (1960-2011) and relevant (online) textbooks to identify 'all inborn errors of metabolism presenting with intellectual disability as a major feature'. We assessed the levels of evidence of treatments and characterised the effect of treatments on IQ/development and related outcomes. We identified a total of 81 'treatable inborn errors of metabolism' presenting with intellectual disability as a major feature, including disorders of amino acids (n=12), cholesterol and bile acid (n=2), creatine (n=3), fatty aldehydes (n=1); glucose homeostasis and transport (n=2); hyperhomocysteinemia (n=7); lysosomes (n=12), metals (n=3), mitochondria (n=2), neurotransmission (n=7); organic acids (n=19), peroxisomes (n=1), pyrimidines (n=2), urea cycle (n=7), and vitamins/co-factors (n=8). 62% (n=50) of all disorders are identified by metabolic screening tests in blood (plasma amino acids, homocysteine) and urine (creatine metabolites, glycosaminoglycans, oligosaccharides, organic acids, pyrimidines). For the remaining disorders (n=31) a 'single test per single disease' approach, including primary molecular analysis, is required. Therapeutic modalities include: sick-day management, diet, co-factor/vitamin supplements, substrate inhibition, stem cell transplant, and gene therapy. Therapeutic effects include improvement and/or stabilisation of psychomotor/cognitive development, behaviour/psychiatric disturbances, seizures, and neurologic and systemic manifestations. The levels of available evidence for the various treatments range from Level 1b,c (n=5); Level 2a,b,c (n=14); Level 4 (n=45), to Level 4-5 (n=27). In clinical practice, more than 60% of treatments with evidence level 4-5 are internationally accepted as 'standard of care'. This literature review generated the evidence to prioritise treatability in the diagnostic evaluation of intellectual disability. Our results were translated into digital information tools for the clinician (www.treatable-id.org), which are part of a diagnostic protocol currently implemented for evaluation of effectiveness in our institution. Treatments for these disorders are relatively accessible and affordable, and have acceptable side-effects. Evidence for the majority of the therapies is limited, however; international collaborations, patient registries, and novel trial methodologies are key to turning the tide for rare diseases such as these. Copyright © 2011 Elsevier Inc. All rights reserved.
Analysis of case-only studies accounting for genotyping error.
Cheng, K F
2007-03-01
The case-only design provides one approach to assess possible interactions between genetic and environmental factors. It has been shown that if these factors are conditionally independent, then a case-only analysis is not only valid but also very efficient. However, a drawback of the case-only approach is that its conclusions may be biased by genotyping errors. In this paper, our main aim is to propose a method for analysis of case-only studies when these errors occur. We show that the bias can be adjusted through the use of internal validation data, which are obtained by genotyping some sampled individuals twice. Our analysis is based on a simple and yet highly efficient conditional likelihood approach. Simulation studies considered in this paper confirm that the new method has acceptable performance under genotyping errors.
Microscopic saw mark analysis: an empirical approach.
Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles
2015-01-01
Microscopic saw mark analysis is a well-published and generally accepted qualitative analytical method. However, little research has focused on identifying and mitigating potential sources of error associated with the method. The presented study proposes the use of classification trees and random forest classifiers as an optimal, statistically sound approach to mitigating the potential for variability and outcome error in microscopic saw mark analysis. The statistical model was applied to 58 experimental saw marks created with four types of saws. The saw marks were made in fresh human femurs obtained through anatomical gift and were analyzed using a Keyence digital microscope. The statistical approach weighed the variables based on discriminatory value and produced decision trees with an associated outcome error rate of 8.62-17.82%. © 2014 American Academy of Forensic Sciences.
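As a concrete illustration of the statistical approach named above, the hedged sketch below fits a random forest to placeholder saw-mark measurements and reports an out-of-bag error rate in the spirit of the study's 8.62-17.82% outcome error rates. The feature values, class labels, and parameter choices are invented for illustration; only the classifier family comes from the abstract.

```python
# Minimal sketch (not the authors' code): discriminating saw classes from
# microscopic mark features with a random forest, using stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 58                                   # number of saw marks, as in the study
X = rng.normal(size=(n, 4))              # stand-ins for mark measurements
y = rng.integers(0, 4, size=n)           # four saw types

clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X, y)
# The out-of-bag error plays the role of the reported outcome error rate.
print("OOB error rate:", 1.0 - clf.oob_score_)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
print("feature importances (discriminatory value):", clf.feature_importances_)
```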
Pesticides, Neurodevelopmental Disagreement, and Bradford Hill's Guidelines.
Shrader-Frechette, Kristin; ChoGlueck, Christopher
2016-06-27
Neurodevelopmental disorders such as autism affect one-eighth of all U.S. newborns. Yet scientists, accessing the same data and using Bradford-Hill guidelines, draw different conclusions about the causes of these disorders. They disagree about the pesticide-harm hypothesis, that typical United States prenatal pesticide exposure can cause neurodevelopmental damage. This article aims to discover whether apparent scientific disagreement about this hypothesis might be partly attributable to questionable interpretations of the Bradford-Hill causal guidelines. Key scientists, who claim to employ Bradford-Hill causal guidelines, yet fail to accept the pesticide-harm hypothesis, fall into errors of trimming the guidelines, requiring statistically-significant data, and ignoring semi-experimental evidence. However, the main scientists who accept the hypothesis appear to commit none of these errors. Although settling disagreement over the pesticide-harm hypothesis requires extensive analysis, this article suggests that at least some conflicts may arise because of questionable interpretations of the guidelines.
Thin film concentrator panel development
NASA Technical Reports Server (NTRS)
Zimmerman, D. K.
1982-01-01
The development and testing of a rigid panel concept that utilizes a thin film reflective surface for application to a low-cost point-focusing solar concentrator is discussed. It is shown that a thin film reflective surface is acceptable for use on solar concentrators, including 1500°F applications. Additionally, it is shown that a formed steel sheet substrate is a good choice for concentrator panels. The panel has good optical properties, acceptable forming tolerances, an environmentally resistant substrate and stiffeners, and adaptability to production rates ranging from low-volume to mass production. Computer simulations of the concentrator optics were run using the selected reflector panel design. Experimentally determined values for reflector surface specularity and reflectivity, along with dimensional data, were used in the analysis. The simulations provided intercept factor and net energy into the aperture as a function of aperture size for different surface errors and pointing errors. Point-source and Sun-source optical tests were also performed.
Exploiting data representation for fault tolerance
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
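The premise of a single bit flip in an IEEE 754 floating-point number is easy to reproduce. The sketch below (our illustration, not the authors' analytic model) flips one bit of a float64 operand and compares the induced dot-product error with and without normalizing the input vectors; after normalization, errors are either small or conspicuously large, consistent with the detection argument above.

```python
# Flip one bit of an IEEE 754 double and observe the dot-product error.
import struct
import numpy as np

def flip_bit(x: float, k: int) -> float:
    """Flip bit k (0 = least significant) of a float64."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (out,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << k)))
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = rng.normal(size=100)
exact = float(x @ y)

for k in (2, 30, 52, 62):          # low/high mantissa, low/high exponent bits
    xc = x.copy()
    xc[0] = flip_bit(xc[0], k)
    raw_err = abs(float(xc @ y) - exact)
    # Normalized inputs: every entry has magnitude <= 1 before the flip.
    xn, yn = x / np.linalg.norm(x), y / np.linalg.norm(y)
    xnc = xn.copy()
    xnc[0] = flip_bit(xnc[0], k)
    norm_err = abs(float(xnc @ yn) - float(xn @ yn))
    print(f"bit {k:2d}: raw error {raw_err:.3e}, normalized error {norm_err:.3e}")
```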
Re-estimating sample size in cluster randomised trials with active recruitment within clusters.
van Schie, S; Moerbeek, M
2014-08-30
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
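A minimal sketch of the idea follows, under assumptions the abstract does not state (normally distributed outcomes, equal allocation, and a standard design-effect sample-size formula): after the internal pilot re-estimates the intracluster correlation coefficient, the required cluster size is recomputed while the number of clusters stays fixed.

```python
# Illustrative sketch (not the authors' algorithm): re-compute the required
# participants per cluster after an internal pilot re-estimates the ICC.
from scipy import stats

def participants_per_cluster(delta, sigma2, icc, n_clusters_per_arm,
                             alpha=0.05, power=0.80):
    """Smallest cluster size m meeting the target power, or None if unattainable."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    for m in range(2, 10_000):
        deff = 1 + (m - 1) * icc                     # design effect
        n_per_arm = 2 * (z / delta) ** 2 * sigma2 * deff
        if n_clusters_per_arm * m >= n_per_arm:
            return m
    return None  # not attainable with this many clusters

# Design stage assumed ICC = 0.01; the internal pilot re-estimates ICC = 0.05.
print(participants_per_cluster(delta=0.3, sigma2=1.0, icc=0.01, n_clusters_per_arm=10))
print(participants_per_cluster(delta=0.3, sigma2=1.0, icc=0.05, n_clusters_per_arm=10))
```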
New Age Teaching: Beyond Didactics
Vlaovic, Peter D.; McDougall, Elspeth M.
2006-01-01
Widespread acceptance of laparoscopic urology techniques has posed many challenges to training urology residents and allowing postgraduate urologists to acquire often difficult new surgical skills. Several factors in surgical training programs are limiting the ability to train residents in the operating room, including limited-hours work weeks, increasing demand for operating room productivity, and general public awareness of medical errors. As such, surgical simulation may provide an opportunity to enhance residency experience and training, and to optimize post-graduate acquisition of new skills and maintenance of competency. This review article explains and defines the various levels of validity as they pertain to surgical simulators. The most recently and comprehensively validity-tested simulators are outlined and summarized. The potential role of surgical simulation in the formative and summative assessment of surgical trainees, as well as in the certification and recertification process of postgraduate surgeons, will be delineated. Surgical simulation will be an important adjunct to the traditional methods of surgical skills training and will allow surgeons to maintain their proficiency in the technically challenging aspects of minimally invasive urologic surgery. PMID:17619704
NASA Technical Reports Server (NTRS)
Antonille, Scott; Content, David; Rabin, Douglas; Wallace, Thomas; Wake, Shane
2007-01-01
The SHARPI (Solar High Angular Resolution Photometric Imager) primary mirror is a 5 kg, 0.5 m paraboloid, diffraction limited at FUV wavelengths when placed in a 0-G environment. The ULE sandwich honeycomb mirror and the attached mount pads were delivered by ITT (then Kodak) in 2003 to NASA's Goddard Space Flight Center (GSFC). At GSFC, we accepted, coated, mounted, and vibration tested this mirror in preparation for flight on the PICTURES (Planet Imaging Concept Testbed Using a Rocket Experiment) mission. At each step, the integrated analysis of interferometer data and FEA models was essential to quantify the 0-G mirror figure. This task required separating nanometer-sized variations from hundreds of nanometers of gravity-induced distortion. The ability to isolate such features allowed in-situ monitoring of the mirror figure, diagnosis of perturbations, and remediation of process errors. In this paper, we describe the technical approach used to achieve these measurements and to overcome the various difficulties in maintaining UV diffraction-limited performance with this aggressively lightweighted mirror.
10 CFR 2.643 - Acceptance and docketing of application for limited work authorization.
Code of Federal Regulations, 2013 CFR
2013-01-01
... acceptable for processing, the Director of New Reactors or the Director of Nuclear Reactor Regulation will... 10 Energy 1 2013-01-01 2013-01-01 false Acceptance and docketing of application for limited work authorization. 2.643 Section 2.643 Energy NUCLEAR REGULATORY COMMISSION AGENCY RULES OF PRACTICE AND PROCEDURE...
User type certification for advanced flight control systems
NASA Technical Reports Server (NTRS)
Gilson, Richard D.; Abbott, David W.
1994-01-01
Advanced avionics through flight management systems (FMS) coupled with autopilots can now precisely control aircraft from takeoff to landing. Clearly, this has been the most important improvement in aircraft since the jet engine. Regardless of the eventual capabilities of this technology, it is doubtful that society will soon accept pilotless airliners with the same aplomb they accept driverless passenger trains. Flight crews are still needed to deal with inputting clearances, taxiing, in-flight rerouting, unexpected weather decisions, and emergencies; yet it is well known that the contribution of human error far exceeds that of current hardware or software systems. Thus human error remains, and is even increasing in percentage, as the largest contributor to total system error. Currently, the flight crew is regulated by a layered system of certification: by operation, e.g., airline transport pilot versus private pilot; by category, e.g., airplane versus helicopter; by class, e.g., single engine land versus multi-engine land; and by type (for larger aircraft and jet powered aircraft), e.g., Boeing 767 or Airbus A320. Nothing in the certification process now requires in-depth proficiency with specific types of avionics systems despite their prominent role in aircraft control and guidance.
Flynn, Fran; Evanish, Julie Q; Fernald, Josephine M; Hutchinson, Dawn E; Lefaiver, Cheryl
2016-08-01
Because of the high frequency of interruptions during medication administration, the effectiveness of strategies to limit interruptions during medication administration has been evaluated in numerous quality improvement initiatives in an effort to reduce medication administration errors. To evaluate the effectiveness of evidence-based strategies to limit interruptions during scheduled, peak medication administration times in 3 progressive cardiac care units (PCCUs). A secondary aim of the project was to evaluate the impact of limiting interruptions on medication errors. The percentages of interruptions and medication errors before and after implementation of evidence-based strategies to limit interruptions were measured by using direct observations of nurses on 2 PCCUs. Nurses in a third PCCU served as a comparison group. Interruptions (P < .001) and medication errors (P = .02) decreased significantly in 1 PCCU after implementation of evidence-based strategies to limit interruptions. Avoidable interruptions decreased 83% in PCCU1 and 53% in PCCU2 after implementation of the evidence-based strategies. Implementation of evidence-based strategies to limit interruptions in PCCUs decreases avoidable interruptions and promotes patient safety. ©2016 American Association of Critical-Care Nurses.
NASA Astrophysics Data System (ADS)
Kauweloa, Kevin Ikaika
The approximate BED (BEDA) is calculated for multi-phase cases because current treatment planning systems (TPSs) are incapable of performing BED calculations. There has been no study of the mathematical accuracy and precision of BEDA relative to the true BED (BEDT), or of how that might negatively impact patient care. The purpose of the first aim was to study the mathematical accuracy and precision in both hypothetical and clinical situations, while the next two aims were to create multi-phase BED optimization ideas for multi-target liver stereotactic body radiation therapy (SBRT) cases and for gynecological cases in which patients are treated with high-dose-rate (HDR) brachytherapy along with external beam radiotherapy (EBRT). MATLAB algorithms created for this work were used to mathematically analyze the accuracy and precision of BEDA relative to BEDT in both hypothetical and clinical situations on a 3D basis. The organs-at-risk (OARs) of ten head & neck and ten prostate cancer patients were studied for the clinical situations. The accuracy of BEDA was shown to vary between OARs as well as between patients. The percentages of patients with an overall BEDA percent error less than 1% were 50% for the optic chiasm and brainstem, 70% for the left and right optic nerves as well as the rectum and bladder, and 80% for the normal brain and spinal cord. For each OAR there were always patients for whom the percent error was greater than 1%. This is a cause for concern, since the goal of radiation therapy is to reduce the overall uncertainty of treatment, and calculating BEDA distributions increases the treatment uncertainty with percent errors greater than 1%. The revealed inaccuracy and imprecision of BEDA support the argument to use BEDT. The multi-target liver study applied BEDT to reduce the number of dose limits to one, rather than one for each fractionation scheme in multi-target liver SBRT treatments. A BEDT limit was found using the current, clinically accepted dose limits, allowing the BEDT distributions to be calculated and used to determine whether at least 700 cc of the healthy liver stayed below the BEDT limit. Three previously treated multi-target liver cancer patients were studied. For each case, it was shown that the conventional treatment plans were relatively conservative and that more than 700 cc of the healthy liver received less than the BEDT limit. These results show that greater doses can be delivered to the targets without exceeding the BEDT limit for the healthy tissue, which typically causes radiation toxicity. When applying BEDT to gynecological cases, BEDT can reveal the relative effect each treatment would have individually; hence the cumulative BEDT would better inform the physician of the potential results of the patient's treatment. The problem presented by these cases, however, is how to sum dose distributions when there is significant motion between treatments and the applicators are present during the HDR phase. One way to calculate the cumulative BEDT is to use structure-guided deformable image registration (SG-DIR), which focuses only on the anatomical contours, to avoid errors introduced by the applicators. Eighteen gynecological patients were studied and VelocityAI was used to perform this SG-DIR. In addition, a formalism was developed to assess and characterize the remnant dose-mapping error of this approach from the shortest distance between contour points (SDBP). The results revealed that warping errors rendered relatively large normal tissue complication probability (NTCP) values, which are certainly non-negligible and render this method not clinically viable. However, a more accurate SG-DIR algorithm could improve the accuracy of BEDT distributions in these multi-phase cases.
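For readers unfamiliar with the quantity being approximated, a worked sketch follows. It uses the standard linear-quadratic BED formula, BED = n·d·(1 + d/(α/β)). The specific BEDA approximation used clinically is not spelled out in the abstract, so the crude single-phase lumping below is our own stand-in, shown only to illustrate how percent errors between an approximate and a per-phase BED can arise.

```python
# Voxel-wise BED under the linear-quadratic model (standard formula); the
# "approximate" variant here is an assumed stand-in, not the abstract's BED_A.
import numpy as np

def bed(total_dose, n_fractions, alpha_beta):
    d = total_dose / n_fractions              # dose per fraction, voxel-wise
    return total_dose * (1.0 + d / alpha_beta)

rng = np.random.default_rng(0)
phase1 = rng.uniform(40, 50, size=1000)       # EBRT phase dose grid (Gy)
phase2 = rng.uniform(15, 25, size=1000)       # boost phase dose grid (Gy)

bed_true = bed(phase1, 25, 3.0) + bed(phase2, 5, 3.0)   # per-phase BED, summed
# Crude approximation: treat the summed physical dose as a single phase.
bed_approx = bed(phase1 + phase2, 30, 3.0)
pct_err = 100 * (bed_approx - bed_true) / bed_true
print(f"median |percent error|: {np.median(np.abs(pct_err)):.2f}%")
```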
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2004-01-01
Rainfall rate estimates from space-borne instruments are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over forerunning algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with: (a) additional contextual information brought to the estimation problem, and/or (b) physically-consistent and representative databases supporting the algorithm. A model of the random error in instantaneous, 0.5°-resolution rain rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar. Error model modifications for non-raining situations will be required, however. Sampling error appears to represent only a fraction of the total error in monthly, 2.5°-resolution TMI estimates; the remaining error is attributed to physical inconsistency or non-representativeness of cloud-resolving model simulated profiles supporting the algorithm.
Skill assessment of Korea operational oceanographic system (KOOS)
NASA Astrophysics Data System (ADS)
Kim, J.; Park, K.
2016-02-01
For ocean forecasting in Korea, the Korea operational oceanographic system (KOOS) has been developed and pre-operated since 2009 by the Korea institute of ocean science and technology (KIOST), funded by the Korean government. KOOS provides real-time information and forecasts of marine environmental conditions in order to support all kinds of activities at sea. A further significant purpose of KOOS is to respond to and support maritime problems and accidents such as oil spills, red tides, shipwrecks, extraordinary waves, coastal inundation, and so on. Accordingly, it is essential to evaluate prediction accuracy and to work to improve it. The forecast accuracy should meet or exceed target benchmarks before its products are approved for release to the public. In this paper, we conduct error quantification of the forecasts using skill assessment techniques to judge the performance of KOOS. Skill assessment statistics include measures of errors and correlations such as root-mean-square error (RMSE), mean bias (MB), correlation coefficient (R), and index of agreement (IOA), along with the frequency with which errors lie within specified limits, termed the central frequency (CF). KOOS provides 72-hour daily forecast data such as air pressure, wind, water elevation, currents, waves, water temperature, and salinity, produced by the meteorological and hydrodynamic numerical models WRF, ROMS, MOM5, WAM, WW3, and MOHID. The skill assessment was performed by comparing model results with in-situ observation data (Figure 1) for the period from 1 July 2010 to 31 March 2015 (Table 1), and model errors were quantified with skill scores and CF determined by acceptable criteria depending on the predicted variables (Table 2). Moreover, we conducted a quantitative evaluation of spatio-temporal pattern correlation between numerical models and observation data such as sea surface temperature (SST) and sea surface currents, produced by satellite ocean sensors and high frequency (HF) radar, respectively. These quantified errors allow objective assessment of the KOOS performance and can reveal different aspects of model inefficiency. Based on these results, various model components are tested and developed in order to improve forecast accuracy.
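The skill statistics listed above have standard definitions and are easy to compute. The sketch below shows one plausible implementation on synthetic forecast/observation pairs; the acceptable-error limit used for CF is an invented example, not a KOOS criterion.

```python
# Skill-assessment statistics named in the abstract: RMSE, mean bias (MB),
# correlation (R), Willmott's index of agreement (IOA), central frequency (CF).
import numpy as np

def skill(pred, obs, error_limit):
    e = pred - obs
    rmse = np.sqrt(np.mean(e ** 2))
    mb = np.mean(e)
    r = np.corrcoef(pred, obs)[0, 1]
    ioa = 1 - np.sum(e ** 2) / np.sum(
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    cf = np.mean(np.abs(e) <= error_limit)   # fraction of errors within limits
    return dict(RMSE=rmse, MB=mb, R=r, IOA=ioa, CF=cf)

rng = np.random.default_rng(2)
obs = 15 + 2 * np.sin(np.linspace(0, 6, 200))     # e.g. SST in degrees C
pred = obs + rng.normal(0.1, 0.5, size=200)       # forecast with bias + noise
print(skill(pred, obs, error_limit=1.0))          # CF criterion: |error| <= 1 degC
```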
Charpiat, B; Goutelle, S; Schoeffler, M; Aubrun, F; Viale, J-P; Ducerf, C; Leboucher, G; Allenet, B
2012-09-01
Clinical pharmacists can help prevent medication errors. However, data are scarce on their role in preventing medication prescription errors in the post-operative period, a high-risk period, as at least two prescribers can intervene: the surgeon and the anesthetist. We aimed to describe and quantify clinical pharmacists' interventions (PIs) during validation of drug prescriptions on a computerized physician order entry system in a post-surgical and post-transplantation ward. We illustrate these interventions by focusing on one clearly identified recurrent problem. In a prospective study lasting 4 years, we recorded drug-related problems (DRPs) detected by pharmacists and whether the physician accepted the PI when a prescription modification was suggested. Among 7005 orders, 1975 DRPs were detected. The frequency of PIs remained constant throughout the study period, with 921 PIs (47%) accepted, 383 (19%) refused and 671 (34%) not assessable. The most frequent DRPs concerned improper administration mode (26%), drug interactions (21%) and overdosage (20%). These resulted in a change in the method of administration (25%), dose adjustment (24%) and drug discontinuation (23%), with 307 drugs concerned by at least one PI. Paracetamol was involved in 26% of overdosage PIs. Erythromycin, as a prokinetic agent, presented a recurrent risk of potentially severe drug-drug interactions, especially with other QT interval-prolonging drugs. Following an educational seminar targeting this problem, the rate of acceptance of PIs concerning this DRP increased. Pharmacists detected many prescription errors that may have clinical implications and could be the basis for educational measures. © 2012 The Authors. Acta Anaesthesiologica Scandinavica © 2012 The Acta Anaesthesiologica Scandinavica Foundation.
A high speed sequential decoder
NASA Technical Reports Server (NTRS)
Lum, H., Jr.
1972-01-01
The performance and theory of operation of the High Speed Hard Decision Sequential Decoder are delineated. The decoder is a forward error correction system which is capable of accepting data from binary-phase-shift-keyed and quadriphase-shift-keyed modems at input data rates up to 30 megabits per second. Test results show that the decoder is capable of maintaining a composite error rate of 0.00001 at an input Eb/N0 of 5.6 dB. This performance has been obtained with minimum circuit complexity.
Matsushima, Ken; Komune, Noritaka; Matsuo, Satoshi; Kohno, Michihiro
2017-07-01
The use of the retrosigmoid approach has recently been expanded by several modifications, including the suprameatal, transmeatal, suprajugular, and inframeatal extensions. Intradural temporal bone drilling without damaging vital structures inside or beside the bone, such as the internal carotid artery and jugular bulb, is a key step for these extensions. This study aimed to examine the microsurgical and endoscopic anatomy of the extensions of the retrosigmoid approach and to evaluate the clinical feasibility of an electromagnetic navigation system during intradural temporal bone drilling. Five temporal bones and 8 cadaveric cerebellopontine angles were examined to clarify the anatomy of retrosigmoid intradural temporal bone drilling. Twenty additional cerebellopontine angles were dissected in a clinical setting with an electromagnetic navigation system while measuring the target registration errors at 8 surgical landmarks on and inside the temporal bone. Retrosigmoid intradural temporal bone drilling expanded the surgical exposure to allow access to the petroclival and parasellar regions (suprameatal), internal acoustic meatus (transmeatal), upper jugular foramen (suprajugular), and petrous apex (inframeatal). The electromagnetic navigation continuously guided the drilling without line of sight limitation, and its small devices were easily manipulated in the deep and narrow surgical field in the posterior fossa. Mean target registration error was less than 0.50 mm during these procedures. The combination of endoscopic and microsurgical techniques aids in achieving optimal exposure for retrosigmoid intradural temporal bone drilling. The electromagnetic navigation system had clear advantages with acceptable accuracy including the usability of small devices without line of sight limitation. Copyright © 2017 Elsevier Inc. All rights reserved.
Optimization of traffic data collection for specific pavement design applications.
DOT National Transportation Integrated Search
2006-05-01
The objective of this study is to establish the minimum traffic data collection effort required for pavement design applications satisfying a maximum acceptable error under a prescribed confidence level. The approach consists of simulating the traffi...
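The abstract does not give its formulas (and its text is truncated in this listing), but the generic minimum-sample-size calculation it implies can be sketched as follows, assuming a normal approximation and a relative-error criterion on a mean traffic statistic; the numbers are illustrative only.

```python
# Hedged sketch of a minimum-sample-size computation: smallest n so that the
# estimate stays within a maximum acceptable relative error at a prescribed
# confidence level (normal approximation; not the study's actual procedure).
from math import ceil
from scipy import stats

def min_samples(cv, rel_error, confidence):
    """n >= (z * cv / e)^2 for a relative error bound on a sample mean."""
    z = stats.norm.ppf(0.5 + confidence / 2)
    return ceil((z * cv / rel_error) ** 2)

# e.g. an axle-load CV of 40%, +/-10% acceptable error, 95% confidence
print(min_samples(cv=0.40, rel_error=0.10, confidence=0.95))   # -> 62
```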
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
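One plausible reading of the snapshot-selection idea is sketched below: a snapshot is kept only when the current basis reproduces it worse than a tolerance, and the basis is grown from the projection residual. This is a simplified stand-in for the paper's error estimator and single-pass incremental SVD, not the authors' algorithm.

```python
# Error-controlled snapshot selection with a lazily grown orthonormal basis
# (Gram-Schmidt growth as a stand-in for a single-pass incremental SVD).
import numpy as np

def select_snapshots(snapshots, tol):
    basis = None
    kept = []
    for i, s in enumerate(snapshots):
        if basis is None:
            basis = s[:, None] / np.linalg.norm(s)
            kept.append(i)
            continue
        resid = s - basis @ (basis.T @ s)      # projection error onto basis
        if np.linalg.norm(resid) / np.linalg.norm(s) > tol:
            basis = np.hstack([basis, resid[:, None] / np.linalg.norm(resid)])
            kept.append(i)                     # snapshot carried new information
    return kept, basis

t = np.linspace(0, 1, 200)
snaps = [np.sin(2 * np.pi * f * t) for f in np.linspace(1.0, 2.0, 50)]
kept, basis = select_snapshots(snaps, tol=1e-3)
print(f"kept {len(kept)} of {len(snaps)} snapshots; basis rank {basis.shape[1]}")
```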
Measurement error is often neglected in medical literature: a systematic review.
Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten
2018-06-01
In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
Netchacovitch, L; Thiry, J; De Bleye, C; Dumont, E; Cailletaud, J; Sacré, P-Y; Evrard, B; Hubert, Ph; Ziemons, E
2017-08-15
Since the Food and Drug Administration (FDA) published a guidance based on the Process Analytical Technology (PAT) approach, real-time analyses during manufacturing processes have been expanding rapidly. In this study, in-line Raman spectroscopic analyses were performed during a Hot-Melt Extrusion (HME) process to determine the Active Pharmaceutical Ingredient (API) content in real time. The method was validated based on a univariate and a multivariate approach, and the analytical performances of the obtained models were compared. Moreover, on one hand, in-line data were correlated with the real API concentration present in the sample, quantified by a previously validated off-line confocal Raman microspectroscopic method. On the other hand, in-line data were also treated as a function of the concentration based on the weighing of the components in the prepared mixture. The importance of developing quantitative methods based on the use of a reference method was thus highlighted. The method was validated according to the total error approach, fixing the acceptance limits at ±15% and the α risk at 5%. This method meets the requirements of the European Pharmacopeia norms for the uniformity of content of single-dose preparations. The validation proves that future results will be within the acceptance limits with a previously defined probability. Finally, the in-line validated method was compared with the off-line one to demonstrate its ability to be used in routine analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
Acceptance criteria for urban dispersion model evaluation
NASA Astrophysics Data System (ADS)
Hanna, Steven; Chang, Joseph
2012-05-01
The authors suggested acceptance criteria for rural dispersion models' performance measures in this journal in 2004. The current paper suggests modified values of acceptance criteria for urban applications and tests them with tracer data from four urban field experiments. For the arc-maximum concentrations, the fractional bias should have a magnitude <0.67 (i.e., the relative mean bias is less than a factor of 2); the normalized mean-square error should be <6 (i.e., the random scatter is less than about 2.4 times the mean); and the fraction of predictions that are within a factor of two of the observations (FAC2) should be >0.3. For all data paired in space, for which a threshold concentration must always be defined, the normalized absolute difference should be <0.50, when the threshold is three times the instrument's limit of quantification (LOQ). An overall criterion is then applied that the total set of acceptance criteria should be satisfied in at least half of the field experiments. These acceptance criteria are applied to evaluations of the US Department of Defense's Joint Effects Model (JEM) with tracer data from US urban field experiments in Salt Lake City (U2000), Oklahoma City (JU2003), and Manhattan (MSG05 and MID05). JEM includes the SCIPUFF dispersion model with the urban canopy option and the urban dispersion model (UDM) option. In each set of evaluations, three or four likely options are tested for meteorological inputs (e.g., a local building top wind speed, the closest National Weather Service airport observations, or outputs from numerical weather prediction models). It is found that, due to large natural variability in the urban data, there is not a large difference between the performance measures for the two model options and the three or four meteorological input options. The more detailed UDM and the state-of-the-art numerical weather models do provide a slight improvement over the other options. The proposed urban dispersion model acceptance criteria are satisfied at over half of the field experiments.
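These performance measures have standard definitions in air-quality model evaluation, and the acceptance test can be coded directly. The sketch below uses synthetic concentrations and applies the urban thresholds quoted above; the data generation is invented for illustration.

```python
# Acceptance-criteria metrics from the abstract: fractional bias (FB),
# normalized mean-square error (NMSE), FAC2, normalized absolute difference
# (NAD), with the urban thresholds applied as a pass/fail check.
import numpy as np

def acceptance(obs, pred):
    co, cp = obs.mean(), pred.mean()
    fb = (co - cp) / (0.5 * (co + cp))
    nmse = np.mean((obs - pred) ** 2) / (co * cp)
    fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))
    nad = np.mean(np.abs(obs - pred)) / (co + cp)
    return dict(FB=fb, NMSE=nmse, FAC2=fac2, NAD=nad,
                passes=(abs(fb) < 0.67) and (nmse < 6)
                       and (fac2 > 0.3) and (nad < 0.5))

rng = np.random.default_rng(4)
obs = rng.lognormal(0, 1, 500)               # arc-max tracer concentrations
pred = obs * rng.lognormal(0, 0.5, 500)      # model with multiplicative scatter
print(acceptance(obs, pred))
```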
Performance Evaluation of Five Turbidity Sensors in Three Primary Standards
Snazelle, Teri T.
2015-10-28
Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey, Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO–AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forrest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine if turbidity measurements in the three primary standards are comparable to each other, and to ascertain if the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased, and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh the day of testing. StablCal and AMCO Clear (for Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated at turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab. The DTS-12 also demonstrated good accuracy, with an average percent error of 2.02 percent and a maximum relative standard deviation of 0.51 percent over its operating range, which was limited to 0.01–1600 NTU at the time of this report. Test results indicated an average percent error of 19.81 percent in the three standards for the EXO turbidity sensor and 9.66 percent for the YSI 6136. The significant variability in sensor performance in the three primary standards suggests that although all three types are accepted as primary calibration standards, they are not interchangeable, and sensor results in the three types of standards are not directly comparable.
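The report's percent-error convention, as stated above, is a one-line formula; a short sketch for clarity (example values invented):

```python
# Signed (true, not absolute) percent error relative to the standard value.
def percent_error(measured, standard):
    return 100.0 * (measured - standard) / standard

print(percent_error(41.5, 40.0))   #  3.75% (sensor reads high)
print(percent_error(38.2, 40.0))   # -4.50% (sensor reads low)
```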
Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels
Laenen, Antonius; Curtis, R. E.
1989-01-01
Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the relation between mean velocity and acoustic-path velocity. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of 1 meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and the density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error into the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions about equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
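The quoted sensitivity to path-angle error follows from the 1/cos(θ) dependence of the path-velocity computation: a small angle error dθ gives a relative velocity error of roughly tan(θ)·dθ. The short check below (our derivation, assuming a typical 45° path angle, which the abstract does not state) reproduces the "about 2% per degree" figure.

```python
# First-order sensitivity of an AVM velocity to an acoustic-path angle error.
from math import cos, tan, radians

theta = radians(45.0)      # assumed nominal path angle (not from the abstract)
d_theta = radians(1.0)     # one degree of angle error
exact = cos(theta) / cos(theta + d_theta) - 1.0   # exact relative error
approx = tan(theta) * d_theta                     # tan(theta)*d_theta estimate
print(f"1 deg angle error -> {100*exact:.2f}% (exact), {100*approx:.2f}% (approx)")
```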
Managing human fallibility in critical aerospace situations
NASA Astrophysics Data System (ADS)
Tew, Larry
2014-11-01
Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis, and invariably schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries like medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporating the steps needed to manage and minimize error.
Graeser, Karin; Zemtsovski, Mikhail; Kofoed, Klaus F; Winther-Jensen, Matilde; Nilsson, Jens C; Kjaergaard, Jesper; Møller-Sørensen, Hasse
2018-01-09
Estimation of cardiac output (CO) is essential in the treatment of circulatory unstable patients. CO measured by pulmonary artery catheter thermodilution is considered the gold standard but carries a small risk of severe complications. Stroke volume and CO can be measured by transesophageal echocardiography (TEE), which is widely used during cardiac surgery. We hypothesized that Doppler-derived CO by 3-dimensional (3D) TEE would agree well with CO measured with pulmonary artery catheter thermodilution as a reference method based on accurate measurements of the cross-sectional area of the left ventricular outflow tract. The primary aim was a systematic comparison of CO with Doppler-derived 3D TEE and CO by thermodilution in a broad population of patients undergoing cardiac surgery. A subanalysis was performed comparing cross-sectional area by TEE with cardiac computed tomography (CT) angiography. Sixty-two patients, scheduled for elective heart surgery, were included; 1 was subsequently excluded for logistic reasons. Inclusion criteria were coronary artery bypass surgery (N = 42) and aortic valve replacement (N = 19). Exclusion criteria were chronic atrial fibrillation, left ventricular ejection fraction below 0.40 and intracardiac shunts. Nineteen randomly selected patients had a cardiac CT the day before surgery. All images were stored for blinded post hoc analyses, and Bland-Altman plots were used to assess agreement between measurement methods, defined as the bias (mean difference between methods), limits of agreement (equal to bias ± 2 standard deviations of the bias), and percentage error (limits of agreement divided by the mean of the 2 methods). Precision was determined for the individual methods (equal to 2 standard deviations of the bias between replicate measurements) to determine the acceptable limits of agreement. We found a good precision for Doppler-derived CO measured by 3D TEE, but although the bias for Doppler-derived CO by 3D compared to thermodilution was only 0.3 L/min (confidence interval, 0.04-0.58), there were wide limits of agreement (-1.8 to 2.5 L/min) with a percentage error of 55%. Measurements of cross-sectional area by 3D TEE had low bias of -0.27 cm (confidence interval, -0.45 to -0.08) and a percentage error of 18% compared to cardiac CT angiography. Despite low bias, the wide limits of agreement of Doppler-derived CO by 3D TEE compared to CO by thermodilution will limit clinical application and can therefore not be considered interchangeable with CO obtained by thermodilution. The lack of agreement is not explained by lack of agreement of the 3D technique.
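The agreement quantities used in this study (bias, limits of agreement as bias ± 2 SD of the differences, and percentage error) can be computed as sketched below. The synthetic numbers only echo the magnitudes reported; they do not reproduce the study data.

```python
# Bland-Altman bias, limits of agreement, and percentage error for paired
# cardiac output measurements (illustrative synthetic data).
import numpy as np

def bland_altman(co_test, co_ref):
    diff = co_test - co_ref
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 2 * sd, bias + 2 * sd)
    # Percentage error: LoA half-width over the mean CO of both methods.
    pct_error = 100 * 2 * sd / np.mean((co_test + co_ref) / 2)
    return bias, loa, pct_error

rng = np.random.default_rng(5)
co_ref = rng.normal(5.0, 1.0, 60)                # thermodilution CO (L/min)
co_test = co_ref + rng.normal(0.3, 1.05, 60)     # Doppler-derived 3D TEE CO
bias, loa, pe = bland_altman(co_test, co_ref)
print(f"bias {bias:.2f} L/min, LoA ({loa[0]:.1f}, {loa[1]:.1f}), "
      f"percentage error {pe:.0f}%")
```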
SU-F-T-294: The Analysis of Gamma Criteria for Delta4 Dosimetry Using Statistical Process Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, S; Ahn, S; Kim, J
Purpose: To evaluate the sensitivity of gamma criteria for patient-specific volumetric modulated arc therapy (VMAT) quality assurance of the Delta4 dosimetry program using the statistical process control (SPC) methodology. Methods: The authors selected 20 patient-specific VMAT QA cases which had undergone MapCHECK and ArcCHECK QA with gamma pass rates better than 97%. The QA data were collected with a Delta4 Phantom+ on an Elekta Agility at six megavolts, without using an angle incrementer. The gamma index (GI) was calculated in 2D planes, normalizing deviation to local dose (local gamma). The sensitivity of the GI methodology using criteria of 3%/3mm, 3%/2mm and 2%/3mm was analyzed using process acceptability indices. We used the local confidence (LC) level and the upper and lower control limits (UCL, LCL) of the I-MR chart for the process capability index (Cp) and the process acceptability index (Cpk). Results: The lower local confidence levels of 3%/3mm, 3%/2mm and 2%/3mm were 92.0%, 83.6% and 78.8%, respectively. All of the calculated Cp and Cpk values that used the LC level were under 1.0 in this study. The calculated LCLs of the I-MR charts were 89.5%, 79.0% and 70.5%, respectively; the corresponding index values were higher than 1.0, which indicates good QA quality. For the generally used lower limit of 90%, the Cp value exceeded 1.3 for the 3%/3mm gamma index and was lower than 1.0 for the remaining criteria. Conclusion: We applied SPC methodology to evaluate the sensitivity of gamma criteria, established the lower control limits of VMAT QA for Delta4 dosimetry, and observed that Delta4 Phantom+ dosimetry is more affected by position error and that I-MR chart-derived values are more suitable for establishing lower limits. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2015R1D1A1A01060463).
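For reference, the I-MR control limits and a one-sided capability index against a lower specification limit can be computed as in the hedged sketch below; the pass rates are synthetic, and the 90% lower limit mirrors the one discussed above.

```python
# I-MR chart limits and one-sided Cpk for gamma pass rates (synthetic data).
import numpy as np

def imr_limits(x):
    mr = np.abs(np.diff(x))                 # moving ranges of consecutive QAs
    sigma_hat = mr.mean() / 1.128           # d2 constant for subgroup size 2
    return x.mean() - 3 * sigma_hat, x.mean() + 3 * sigma_hat

def cpk_lower(x, lsl):
    """One-sided Cpk against a lower spec limit (e.g. a 90% pass rate)."""
    return (x.mean() - lsl) / (3 * x.std(ddof=1))

rng = np.random.default_rng(6)
pass_rates = np.clip(rng.normal(98.0, 1.0, 20), 0, 100)   # 20 VMAT QA cases
lcl, ucl = imr_limits(pass_rates)
print(f"I-MR limits: ({lcl:.1f}, {ucl:.1f}); "
      f"Cpk vs LSL=90%: {cpk_lower(pass_rates, 90):.2f}")
```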
10 CFR 2.643 - Acceptance and docketing of application for limited work authorization.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Acceptance and docketing of application for limited work authorization. 2.643 Section 2.643 Energy NUCLEAR REGULATORY COMMISSION RULES OF PRACTICE FOR DOMESTIC LICENSING... Construct Certain Utilization Facilities; and Advance Issuance of Limited Work Authorizations Phased...
Error Mitigation for Short-Depth Quantum Circuits
NASA Astrophysics Data System (ADS)
Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.
2017-11-01
Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero-noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
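The first scheme can be demonstrated with a toy noise model: evaluate the expectation value at several amplified noise rates and extrapolate polynomially back to zero noise. The quadratic noise model below is an invented stand-in, not the paper's error model; it shows how Richardson extrapolation cancels the leading powers of the noise.

```python
# Zero-noise extrapolation with a toy noise model (illustrative only).
import numpy as np

def noisy_expectation(scale, e_exact=1.0, a=-0.3, b=0.05, noise_rate=0.1):
    """Toy model: E(lambda) = E_exact + a*lambda + b*lambda^2."""
    lam = noise_rate * scale
    return e_exact + a * lam + b * lam ** 2

scales = np.array([1.0, 2.0, 3.0])            # noise amplification factors
e = np.array([noisy_expectation(c) for c in scales])
# Richardson extrapolation: value at scale 0 of the interpolating polynomial.
e_zero = np.polyval(np.polyfit(scales, e, deg=len(scales) - 1), 0.0)
print(f"raw estimate {e[0]:.4f}, extrapolated {e_zero:.4f} (exact 1.0)")
```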
NASA Technical Reports Server (NTRS)
Remer, L. A.; Wald, A. E.; Kaufman, Y. J.
1999-01-01
We obtain valuable information on the angular and seasonal variability of surface reflectance using a hand-held spectrometer from a light aircraft. The data are used to test a procedure that allows us to estimate visible surface reflectance from the longer-wavelength 2.1 micrometer channel (mid-IR). Estimating or avoiding surface reflectance in the visible is a vital first step in most algorithms that retrieve aerosol optical thickness over land targets. The data indicate that specular reflection found when viewing targets from the forward direction can severely corrupt the relationships between the visible and 2.1 micrometer reflectance that were derived from nadir data. There is a month-by-month variation in the ratios between the visible and the mid-IR, weakly correlated to the Normalized Difference Vegetation Index (NDVI). If specular reflection is not avoided, the errors resulting from estimating surface reflectance from the mid-IR exceed the acceptable limit of Δρ ≈ 0.01 in roughly 40% of the cases, using the current algorithm. This is reduced to 25% of the cases if specular reflection is avoided. An alternative method that uses path radiance rather than explicitly estimating visible surface reflectance results in similar errors. The two methods have different strengths and weaknesses that require further study.
Electronic device for endosurgical skills training (EDEST): study of reliability.
Pagador, J B; Uson, J; Sánchez, M A; Moyano, J L; Moreno, J; Bustos, P; Mateos, J; Sánchez-Margallo, F M
2011-05-01
Minimally invasive surgery procedures are commonly used in many surgical practices, but surgeons need specific training models and devices due to their difficulty and complexity. In this paper, an innovative electronic device for endosurgical skills training (EDEST) is presented, and a study of its reliability was performed. Different electronic components were used to compose this new training device. The EDEST focuses on two basic laparoscopic tasks: triangulation and coordination manoeuvres. Configuration and statistical software was developed to complement the functionality of the device. A calibration method was used to assure the proper working of the device. A total of 35 subjects (8 experts and 27 novices) were used to check the reliability of the system using MTBF analysis. Configuration values for the triangulation and coordination exercises were calculated as a 0.5 s limit threshold and an 800-11,000 lux range of light intensity, respectively. Zero errors in 1,050 executions (0%) for triangulation and 21 errors in 5,670 executions (0.37%) for coordination were obtained. An MTBF of 2.97 h was obtained. The results show that the reliability of the EDEST device is acceptable when used under previously defined light conditions. These results, along with previous work, could demonstrate that the EDEST device can help surgeons during the first stages of training.
Friendship, cliquishness, and the emergence of cooperation.
Hruschka, Daniel J; Henrich, Joseph
2006-03-07
The evolution of cooperation is a central problem in biology and the social sciences. While theoretical work using the iterated prisoner's dilemma (IPD) has shown that cooperation among non-kin can be sustained among reciprocal strategies (i.e. tit-for-tat), these results are sensitive to errors in strategy execution, cyclical invasions by free riders, and the specific ecology of strategies. Moreover, the IPD assumes that a strategy's probability of playing the PD game with other individuals is independent of the decisions made by others. Here, we remove the assumption of independent pairing by studying a more plausible cooperative dilemma in which players can preferentially interact with a limited set of known partners and also deploy longer-term accounting strategies that can counteract the effects of random errors. We show that cooperative strategies readily emerge and persist in a range of noisy environments, with successful cooperative strategies (henceforth, cliquers) maintaining medium-term memories for partners and low thresholds for acceptable cooperation (i.e. forgiveness). The success of these strategies relies on their cliquishness-a propensity to defect with strangers if they already have an adequate number of partners. Notably, this combination of medium-term accounting, forgiveness, and cliquishness fits with empirical studies of friendship and other long-term relationships among humans.
NASA Astrophysics Data System (ADS)
Ferrari, Luca; Rovati, Luigi; Fabbri, Paola; Pilati, Francesco
2013-02-01
During extracorporeal circulation (ECC), blood is periodically sampled and analyzed to maintain the blood-gas status of the patient within acceptable limits. This protocol has well-known drawbacks that may be overcome by continuous monitoring. We present the characterization of a new pH sensor for continuous monitoring in ECC. This monitoring device includes a disposable fluorescence-sensing element directly in contact with the blood, whose fluorescence intensity is strictly related to the pH of the blood. In vitro experiments show no significant difference between the blood gas analyzer values and the sensor readings; after proper calibration, it gives a correlation of R>0.9887, and measuring errors were lower than 3% of the pH range of interest (RoI) with respect to a commercial blood gas analyzer. This performance has also been confirmed by simulating a moderate hypothermia condition, i.e., blood temperature 32°C, frequently used in cardiac surgery. In ex vivo experiments, performed with animal models, the sensor is continuously operated in an extracorporeal undiluted blood stream for a maximum of 11 h. It gives a correlation of R>0.9431, and a measuring error lower than 3% of the pH RoI with respect to laboratory techniques.
NASA Astrophysics Data System (ADS)
Choudhury, Pallab K.
2018-05-01
Spectrally shaped orthogonal frequency division multiplexing (OFDM) signal for symmetric 10 Gb/s cross-wavelength reuse reflective semiconductor optical amplifier (RSOA) based colorless wavelength division multiplexed passive optical network (WDM-PON) is proposed and further analyzed to support broadband services of next generation high speed optical access networks. The generated OFDM signal has subcarriers in separate frequency ranges for downstream and upstream, such that the re-modulation noise can be effectively minimized in the upstream data receiver. Moreover, the cross wavelength reuse approach improves the tolerance against Rayleigh backscattering noise due to the propagation of different wavelengths in the same feeder fiber. The proposed WDM-PON is successfully demonstrated for 25 km fiber with a 16-QAM (quadrature amplitude modulation) OFDM signal having a bandwidth of 2.5 GHz for 10 Gb/s operation and subcarrier frequencies in 3-5.5 GHz and DC-2.5 GHz for downstream (DS) and upstream (US) transmission, respectively. The result shows that the proposed scheme maintains a good bit error rate (BER) performance below the forward error correction (FEC) limit of 3.8 × 10⁻³ at acceptable receiver sensitivity and provides a high resilience against re-modulation and Rayleigh backscattering noises as well as chromatic dispersion.
Assessing agreement between malaria slide density readings.
Alexander, Neal; Schellenberg, David; Ngasala, Billy; Petzold, Max; Drakeley, Chris; Sutherland, Colin
2010-01-04
Several criteria have been used to assess agreement between replicate slide readings of malaria parasite density. Such criteria may be based on percent difference, or absolute difference, or a combination. Neither the rationale for choosing between these types of criteria, nor that for choosing the magnitude of difference which defines acceptable agreement, are clear. The current paper seeks a procedure which avoids the disadvantages of these current options and whose parameter values are more clearly justified. Variation of parasite density within a slide is expected, even when it has been prepared from a homogeneous sample. This places lower limits on sensitivity and observer agreement, quantified by the Poisson distribution. This means that, if a fixed percent difference criterion is used for satisfactory agreement, the number of discrepant readings is over-estimated at low parasite densities. With a criterion of fixed absolute difference, the same happens at high parasite densities. For an ideal slide, following the Poisson distribution, a criterion based on a constant difference in square root counts would apply for all densities. This can be back-transformed to a difference in absolute counts, which, as expected, gives a wider range of acceptable agreement at higher average densities. In an example dataset from Tanzania, observed differences in square root counts correspond to 95% limits of agreement of -2,800 and +2,500 parasites/µl at an average density of 2,000 parasites/µl, and -6,200 and +5,700 parasites/µl at 10,000 parasites/µl. However, there were more outliers beyond those ranges at higher densities, meaning that actual coverage of these ranges was not a constant 95%, but decreased with density. In a second study, a trial of microscopist training, the corresponding ranges of agreement are wider and asymmetrical: -8,600 to +5,200/µl, and -19,200 to +11,700/µl, respectively. By comparison, the optimal limits of agreement, corresponding to Poisson variation, are +/- 780 and +/- 1,800 parasites/µl, respectively. The focus of this approach on the volume of blood read leads to other conclusions. For example, no matter how large a volume of blood is read, some densities are too low to be reliably detected, which in turn means that disagreements on slide positivity may simply result from within-slide variation, rather than reading errors. The proposed method defines limits of acceptable agreement in a way which allows for the natural increase in variability with parasite density. This includes defining the levels of between-reader variability, which are consistent with random variation: disagreements within these limits should not trigger additional readings. This approach merits investigation in other settings, in order to determine both the extent of its applicability, and appropriate numerical values for limits of agreement.
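A minimal sketch (Python; the paired readings are hypothetical) of the square-root approach described above: compute 95% limits of agreement on square-root counts, then back-transform them to absolute differences around a chosen average density. As the abstract notes, the back-transformed range widens with average density:

```python
import numpy as np

def sqrt_limits_of_agreement(counts_a, counts_b):
    """95% limits of agreement on the square-root scale, where Poisson
    variation has roughly constant variance, so one pair of limits
    applies at all densities."""
    d = np.sqrt(np.asarray(counts_a)) - np.sqrt(np.asarray(counts_b))
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

def back_transform(limit, density):
    """Convert a sqrt-scale limit into an absolute difference in
    parasites/microlitre around a given average density."""
    return (np.sqrt(density) + limit) ** 2 - density

# Hypothetical paired readings (parasites/microlitre) from two readers
a = [1800, 2400, 9500, 10500, 2100, 400]
b = [2200, 2000, 10800, 9200, 1900, 600]
lo, hi = sqrt_limits_of_agreement(a, b)
print(back_transform(lo, 2000), back_transform(hi, 2000))
```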
Improved Quality in Aerospace Testing Through the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.
2000-01-01
This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
Correction to: Attrition after Acceptance onto a Publicly Funded Bariatric Surgery Program.
Taylor, Tamasin; Wang, Yijiao; Rogerson, William; Bavin, Lynda; Sharon, Cindy; Beban, Grant; Evennett, Nicholas; Gamble, Greg; Cundy, Timothy
2018-03-20
Unfortunately, the original version of this article contained an error. The Methods section's first sentence and Table 1 both mistakenly contained the letters XXXX in place of the district health board and hospital city names.
Giduthuri, Joseph G.; Maire, Nicolas; Joseph, Saju; Kudale, Abhay; Schaetti, Christian; Sundaram, Neisha; Schindler, Christian; Weiss, Mitchell G.
2014-01-01
Background: Mobile electronic devices are replacing paper-based instruments and questionnaires for epidemiological and public health research. The elimination of a data-entry step after an interview is a notable advantage over paper, saving investigator time, decreasing the time lags in managing and analyzing data, and potentially improving the data quality by removing the error-prone data-entry step. Research has not yet provided adequate evidence, however, to substantiate the claim of fewer errors for computerized interviews. Methodology: We developed an Android-based illness explanatory interview for influenza vaccine acceptance and tested the instrument in a field study in Pune, India, for feasibility and acceptability. Error rates for tablet and paper were compared with reference to the voice recording of the interview as gold standard to assess discrepancies. We also examined the preference of interviewers for the classical paper-based or the electronic version of the interview and compared the costs of research with both data collection devices. Results: In 95 interviews with household respondents, total error rates with paper and tablet devices were nearly the same (2.01% and 1.99%, respectively). Most interviewers indicated no preference for a particular device; but those with a preference opted for tablets. The initial investment in tablet-based interviews was higher compared to paper, while the recurring costs per interview were lower with the use of tablets. Conclusion: An Android-based tablet version of a complex interview was developed and successfully validated. Advantages were not compromised by increased errors, and field research assistants with a preference preferred the Android device. Use of tablets may be more costly than paper for small samples and less costly for large studies. PMID:25233212
Figueira, Bruno; Gonçalves, Bruno; Folgado, Hugo; Masiulis, Nerijus; Calleja-González, Julio; Sampaio, Jaime
2018-06-14
The present study aims to assess the accuracy of the NBN23® system, an indoor tracking system based on radio-frequency and standard Bluetooth Low Energy channels. Twelve capture tags were attached to a custom cart with fixed distances of 0.5, 1.0, 1.5, and 1.8 m. The cart was pushed along a predetermined course following the lines of a standard-dimensions basketball court. The course was performed at low speed (<10.0 km/h), medium speed (>10.0 km/h and <20.0 km/h) and high speed (>20.0 km/h). Root mean square error (RMSE) and percentage of variance accounted for (%VAF) were used as accuracy measures. The obtained data showed acceptable accuracy results for both RMSE and %VAF, despite the expected degree of error in position measurement at higher speeds. The RMSE for all the distances and velocities presented an average absolute error of 0.30 ± 0.13 cm with a %VAF of 90.61 ± 8.34, in line with most available systems, and considered acceptable for indoor sports. The processing of data with filter correction seemed to reduce the noise and promote a lower relative error, increasing the %VAF for each measured distance. Research using position-derived variables in basketball is still very scarce; thus, this independent test of the NBN23® tracking system provides accuracy details and opens up opportunities to develop new performance indicators that help to optimize training adaptations and performance.
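For reference, a minimal sketch (Python; the positions are hypothetical, and %VAF is taken in its common definition as one minus the residual-to-reference variance ratio) of the two accuracy measures used here:

```python
import numpy as np

def rmse(measured, reference):
    """Root mean square error between measured and reference positions."""
    measured, reference = np.asarray(measured), np.asarray(reference)
    return np.sqrt(np.mean((measured - reference) ** 2))

def vaf(measured, reference):
    """Percentage of variance accounted for:
    100 * (1 - var(residual) / var(reference))."""
    measured, reference = np.asarray(measured), np.asarray(reference)
    return 100.0 * (1.0 - (reference - measured).var() / reference.var())

# Hypothetical positions (m) along one axis of the course
ref = [0.0, 0.5, 1.0, 1.5, 1.8, 2.3]
mea = [0.02, 0.47, 1.04, 1.49, 1.77, 2.33]
print(rmse(mea, ref), vaf(mea, ref))
```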
NASA Technical Reports Server (NTRS)
Goodman, Jerry R.; Grosveld, Ferdinand
2007-01-01
The acoustic environment in space operations is important to maintain at manageable levels so that the crewperson can remain safe, functional, effective, and reasonably comfortable. High acoustic levels can produce temporary or permanent hearing loss, or cause other physiological symptoms such as auditory pain, headaches, discomfort, strain in the vocal cords, or fatigue. Noise is defined as undesirable sound. Excessive noise may result in psychological effects such as irritability, inability to concentrate, decrease in productivity, annoyance, errors in judgment, and distraction. A noisy environment can also result in the inability to sleep, or sleep well. Elevated noise levels can affect the ability to communicate, understand what is being said, hear what is going on in the environment, degrade crew performance and operations, and create habitability concerns. Superfluous noise emissions can also create the inability to hear alarms or other important auditory cues such as an equipment malfunction. Recent space flight experience, evaluations of the requirements in crew habitable areas, and lessons learned (Goodman 2003; Allen and Goodman 2003; Pilkinton 2003; Grosveld et al. 2003) show the importance of maintaining an acceptable acoustic environment. This is best accomplished by having a high-quality set of limits/requirements early in the program, the "designing in" of acoustics in the development of hardware and systems, and by monitoring, testing and verifying the levels to ensure that they are acceptable.
Combined dry plasma etching and online metrology for manufacturing highly focusing x-ray mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berujon, S., E-mail: berujon@esrf.eu; Ziegler, E., E-mail: ziegler@esrf.eu; Cunha, S. da
A new figuring station was designed and installed at the ESRF beamline BM05. It allows the figuring of mirrors within an iterative process combining the advantage of online metrology with dry etching. The complete process takes place under a vacuum environment to minimize surface contamination, while non-contact surfacing tools open up the possibility of performing at-wavelength metrology and eliminating placement errors. The aim is to produce mirrors whose slopes do not deviate from the stigmatic profile by more than 0.1 µrad rms while keeping surface roughness within the acceptable limit of 0.1-0.2 nm rms. The desired elliptical mirror surface shape can be achieved in a few iterations over about one day. This paper describes some of the important aspects of the process regarding both the online metrology and the etching process.
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1993-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.
Mechanical evaluation of a ruptured Swedish adjustable gastric band.
Reijnen, Michael M P J; Naus, J H; Janssen, Ignace M C
2004-02-01
Leakage of a laparoscopically placed Swedish adjustable gastric band (SAGB) was observed 2 1/2 years after placement. The band was evaluated for mechanical inaccuracies by a laboratory. The ruptured SAGB was investigated microscopically and wall thicknesses were measured. An unused SAGB was tested, both empty and filled, for mechanical deformity after exposure to saline solution. A permanent transformation of the silicone rubber was found, caused by bowing of the device. Two tears were present at the end of a kink. The mean wall thickness was within acceptable limits. Exposure of the gastric band to saline solution did not cause any sign of permanent deformity of the silicone rubber. The rupture of the gastric band did not seem to be caused by a production error. Long-term deformity, in combination with a continuous dynamic load, may increase the risk of tearing. Long-term follow-up is recommended for patients treated with this device.
NASA Astrophysics Data System (ADS)
Yu, Miao; Li, Yan; Shu, Tong; Zhang, Yifan; Hong, Xiaobin; Qiu, Jifang; Zuo, Yong; Guo, Hongxiang; Li, Wei; Wu, Jian
2018-02-01
A method of recognizing 16QAM signals based on the k-means clustering algorithm is proposed to mitigate the impact of finite transmitter extinction ratio (ER). Pilot symbols with 0.39% overhead are used as the initial centroids of the k-means clustering algorithm. Simulation results in a 10 GBaud 16QAM system show that the proposed method achieves higher identification precision than the traditional decision method under finite ER and IQ mismatch. Specifically, the proposed method improves the required OSNR at the FEC limit by 5.5 dB, 4.5 dB, 4 dB and 3 dB with ER = 12 dB, 16 dB, 20 dB and 24 dB, respectively, and the acceptable bias error and IQ mismatch ranges are widened by 767% and 360% with ER = 16 dB, respectively.
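A minimal sketch (Python with scikit-learn; the constellation, noise level and function names are illustrative, not the paper's implementation) of pilot-seeded k-means clustering of received 16QAM symbols:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_demod_16qam(rx_symbols, pilot_centroids):
    """Cluster received 16QAM symbols with k-means, seeding the 16
    centroids from pilot symbols so that cluster labels map to known
    constellation points even under finite-ER and IQ-mismatch
    distortion."""
    X = np.column_stack([rx_symbols.real, rx_symbols.imag])
    init = np.column_stack([pilot_centroids.real, pilot_centroids.imag])
    km = KMeans(n_clusters=16, init=init, n_init=1).fit(X)
    return km.labels_, km.cluster_centers_

# Hypothetical usage: pilots provide one rough centroid per point;
# payload symbols are then classified by the learned centroids.
ideal = np.array([x + 1j * y for x in (-3, -1, 1, 3) for y in (-3, -1, 1, 3)])
rx = ideal.repeat(100) + 0.2 * (np.random.randn(1600) + 1j * np.random.randn(1600))
labels, centers = kmeans_demod_16qam(rx, ideal)
```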
Uncertainty Modeling and Evaluation of CMM Task Oriented Measurement Based on SVCMM
NASA Astrophysics Data System (ADS)
Li, Hongli; Chen, Xiaohuai; Cheng, Yinbao; Liu, Houde; Wang, Hanbin; Cheng, Zhenying; Wang, Hongtao
2017-10-01
Due to the variety of measurement tasks and the complexity of the errors of a coordinate measuring machine (CMM), it is very difficult to reasonably evaluate the uncertainty of CMM measurement results, which has limited the application of CMMs. Task-oriented uncertainty evaluation has become a difficult problem to solve. Taking dimension measurement as an example, this paper puts forward a practical method of uncertainty modeling and evaluation for CMM task-oriented measurement (called the SVCMM method). The method makes full use of the CMM acceptance or reinspection report and the Monte Carlo computer simulation method (MCM). An evaluation example is presented, and its results are evaluated by both the traditional method given in the GUM and the proposed method. The SVCMM method is verified to be feasible and practical, and it can help CMM users conveniently complete measurement uncertainty evaluation through a single measurement cycle.
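A deliberately simplified sketch (Python; a single normally distributed error term stands in for the many CMM error sources, with parameters assumed to come from the acceptance/reinspection report) of the Monte Carlo style of evaluation used by such methods:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcm_uncertainty(nominal, err_mean, err_std, trials=100_000):
    """Monte Carlo (MCM) evaluation of a task-oriented measurement:
    draw an error contribution per trial from a distribution
    parameterized by the CMM report, then summarize the results."""
    results = nominal + rng.normal(err_mean, err_std, trials)
    lo, hi = np.percentile(results, [2.5, 97.5])   # 95% coverage interval
    return results.mean(), results.std(ddof=1), (lo, hi)

# Hypothetical 100 mm length with a 0.8 um (0.0008 mm) error spread
print(mcm_uncertainty(100.0005, 0.0, 0.0008))
```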
[Detection and classification of medication errors at Joan XXIII University Hospital].
Jornet Montaña, S; Canadell Vilarrasa, L; Calabuig Muñoz, M; Riera Sendra, G; Vuelta Arce, M; Bardají Ruiz, A; Gallart Mora, M J
2004-01-01
Medication errors are multifactorial and multidisciplinary, and may originate in processes such as drug prescription, transcription, dispensation, preparation and administration. The goal of this work was to measure the incidence of detectable medication errors that arise within a unit dose drug distribution and control system, from drug prescription to drug administration, by means of an observational method confined to the Pharmacy Department, as well as a voluntary, anonymous report system. The acceptance of this voluntary report system's implementation was also assessed. A prospective descriptive study was conducted. Data collection was performed at the Pharmacy Department from a review of prescribed medical orders, a review of pharmaceutical transcriptions, a review of dispensed medication and a review of medication returned in unit dose medication carts. A voluntary, anonymous report system centralized in the Pharmacy Department was also set up to detect medication errors. Prescription errors were the most frequent (1.12%), closely followed by dispensation errors (1.04%). Transcription errors (0.42%) and administration errors (0.69%) had the lowest overall incidence. Voluntary report involved only 4.25% of all detected errors, whereas unit dose medication cart review contributed the most to error detection. Recognizing the incidence and types of medication errors that occur in a health-care setting allows us to analyze their causes and effect changes in different stages of the process in order to ensure maximal patient safety.
NASA Astrophysics Data System (ADS)
Lu, Aiming; Atkinson, Ian C.; Vaughn, J. Thomas; Thulborn, Keith R.
2011-12-01
The rapid biexponential transverse relaxation of the sodium MR signal from brain tissue requires efficient k-space sampling for quantitative imaging in a time that is acceptable for human subjects. The flexible twisted projection imaging (flexTPI) sequence has been shown to be suitable for quantitative sodium imaging with an ultra-short echo time to minimize signal loss. The fidelity of the k-space center location is affected by the readout gradient timing errors on the three physical axes, which is known to cause image distortion for projection-based acquisitions. This study investigated the impact of these timing errors on the voxel-wise accuracy of the tissue sodium concentration (TSC) bioscale measured with the flexTPI sequence. Our simulations show greater than 20% spatially varying quantification errors when the gradient timing errors are larger than 10 μs on all three axes. The quantification is more tolerant of gradient timing errors on the Z-axis. An existing method was used to measure the gradient timing errors with <1 μs error. The gradient timing error measurement is shown to be RF coil dependent, and timing error differences of up to ~16 μs have been observed between different RF coils used on the same scanner. The measured timing errors can be corrected prospectively or retrospectively to obtain accurate TSC values.
Improving end of life care: an information systems approach to reducing medical errors.
Tamang, S; Kopec, D; Shagas, G; Levy, K
2005-01-01
Chronic and terminally ill patients are disproportionately affected by medical errors. In addition, the elderly suffer more preventable adverse events than younger patients. Targeting system-wide 'error-reducing' reforms to vulnerable populations can significantly reduce the incidence and prevalence of human error in medical practice. Recent developments in health informatics, particularly the application of artificial intelligence (AI) techniques such as data mining, neural networks, and case-based reasoning (CBR), present tremendous opportunities for mitigating error in disease diagnosis and patient management. Additionally, the ubiquity of the Internet creates the possibility of an almost ideal network for the dissemination of medical information. We explore the capacity and limitations of web-based palliative information systems (IS) to transform the delivery of care, streamline processes and improve the efficiency and appropriateness of medical treatment. As a result, medical errors that occur with patients dealing with severe, chronic illness and the frail elderly can be reduced. The palliative model grew out of the need for pain relief and comfort measures for patients diagnosed with cancer. Applied definitions of palliative care extend this convention, but there is no widely accepted definition. This research discusses the development life cycle of two palliative information systems: the CONFER QOLP management information system (MIS), currently used by a community-based palliative care program in Brooklyn, New York, and the CAREN case-based reasoning prototype. CONFER is a web platform based on the idea of "eCare". CONFER uses XML (Extensible Markup Language), a W3C-endorsed standard markup language, to define system data. The second system, CAREN, is a CBR prototype designed for palliative care patients in the cancer trajectory. CBR is a technique that tries to exploit the similarities of two situations and match decision-making to the best-known precedent cases. The prototype uses the open-source CASPIAN shell developed at the University of Aberystwyth, Wales, which is available by anonymous FTP. We discuss and analyze the preliminary results we have obtained using this CBR tool. Our research suggests that automated information systems can be used to improve the quality of care at the end of life and disseminate expert-level 'know-how' to palliative care clinicians. We present how our CBR prototype can be successfully deployed, securely transferring information using the Secure File Transfer Protocol (SFTP) and a Java CBR engine.
Smith, Philip; Wallace, Melissa; Bekker, Linda-Gail
2016-01-01
Introduction: Since HIV testing in South African adolescents and young adults is sub-optimal, the objective of the current study was to investigate the feasibility and acceptability of an HIV rapid self-testing device in adolescents and young people at the Desmond Tutu HIV Foundation Youth Centre and Mobile Clinic. Methods: Self-presenting adolescents and young adults were invited to participate in a study investigating the fidelity, usability and acceptability of the AtomoRapid HIV Rapid self-testing device. Trained healthcare workers instructed participants in the use of the device before each participant conducted the HIV self-test using the device usage instructions. The healthcare worker then conducted a questionnaire-based survey to assess outcomes. Results: Of the 224 enrolled participants between 16 and 24 years of age, 155 (69.2%) were female. Overall, fidelity was high; 216 (96.4%) participants correctly completed the test and correctly read and interpreted the HIV test result. There were eight (3.6%) user errors overall; six participants failed to prick their finger even though the lancet fired correctly. There were two user errors where participants failed to use the capillary tube correctly. Participants rated acceptability and usability highly, with debut testers giving significantly higher ratings for both. Younger participants gave significantly higher ratings of acceptability. Conclusions: Adolescents and young adults found HIV self-testing with the AtomoRapid highly acceptable, and they used the device accurately. Further research should investigate how, where and when to deploy HIV self-testing as a means to accompany existing strategies in reaching the UNAIDS goal to test 90% of all individuals worldwide. PMID:28406597
Timing analysis by model checking
NASA Technical Reports Server (NTRS)
Naydich, Dimitri; Guaspari, David
2000-01-01
The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.
Jolley, Suzanne; Thompson, Claire; Hurley, James; Medin, Evelina; Butler, Lucy; Bebbington, Paul; Dunn, Graham; Freeman, Daniel; Fowler, David; Kuipers, Elizabeth; Garety, Philippa
2014-01-01
Understanding how people with delusions arrive at false conclusions is central to the refinement of cognitive behavioural interventions. Making hasty decisions based on limited data (‘jumping to conclusions’, JTC) is one potential causal mechanism, but reasoning errors may also result from other processes. In this study, we investigated the correlates of reasoning errors under differing task conditions in 204 participants with schizophrenia spectrum psychosis who completed three probabilistic reasoning tasks. Psychotic symptoms, affect, and IQ were also evaluated. We found that hasty decision makers were more likely to draw false conclusions, but only 37% of their reasoning errors were consistent with the limited data they had gathered. The remainder directly contradicted all the presented evidence. Reasoning errors showed task-dependent associations with IQ, affect, and psychotic symptoms. We conclude that limited data-gathering contributes to false conclusions but is not the only mechanism involved. Delusions may also be maintained by a tendency to disregard evidence. Low IQ and emotional biases may contribute to reasoning errors in more complex situations. Cognitive strategies to reduce reasoning errors should therefore extend beyond encouragement to gather more data, and incorporate interventions focused directly on these difficulties. PMID:24958065
Influence of ECG measurement accuracy on ECG diagnostic statements.
Zywietz, C; Celikag, D; Joseph, G
1996-01-01
Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, neither error limits for ECG measurements have been specified nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually result in an increase of false positive or false negative statements since they cause a shift of the working point on the receiver operating characteristics curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and, therefore, usually increases the overlap between discriminative parameters. This results in a flattening of the receiver operating characteristics curve and an increase of false positive and false negative classifications. The method developed has been applied to ECG conduction defect diagnoses by using the proposed International Electrotechnical Commission interval measurement tolerance limits. These limits appear too large because more than 30% of false positive atrial conduction defect statements and 10-18% of false intraventricular conduction defect statements could be expected due to tolerated measurement errors. To assure long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits obtained on a defined test set.
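A minimal sketch (Python with SciPy; the QRS-duration populations and cutoff are hypothetical) of how a systematic offset and added error dispersion translate into false positive and false negative rates for a threshold-based statement:

```python
import numpy as np
from scipy.stats import norm

def fp_fn_rates(mu_neg, mu_pos, sd, cutoff, offset=0.0, extra_sd=0.0):
    """False positive/negative rates for a threshold-based statement on
    one interval parameter, with a systematic measurement offset and
    added error dispersion broadening both populations."""
    s = np.hypot(sd, extra_sd)                       # combined dispersion
    fp = 1.0 - norm.cdf(cutoff, mu_neg + offset, s)  # normals above cutoff
    fn = norm.cdf(cutoff, mu_pos + offset, s)        # abnormals below cutoff
    return fp, fn

# Hypothetical QRS-duration populations (ms) and a 120 ms cutoff
print(fp_fn_rates(mu_neg=95, mu_pos=130, sd=10, cutoff=120))
print(fp_fn_rates(mu_neg=95, mu_pos=130, sd=10, cutoff=120,
                  offset=5, extra_sd=8))
```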
Campbell, Jeffrey I; Aturinda, Isaac; Mwesigwa, Evans; Burns, Bridget; Santorino, Data; Haberer, Jessica E; Bangsberg, David R; Holden, Richard J; Ware, Norma C; Siedner, Mark J
2017-11-01
Although mobile health (mHealth) technologies have shown promise in improving clinical care in resource-limited settings (RLS), they are infrequently brought to scale. One limitation to the success of many mHealth interventions is inattention to end-user acceptability, which is an important predictor of technology adoption. We conducted in-depth interviews with 43 people living with HIV in rural Uganda who had participated in a clinical trial of a short messaging system (SMS)-based intervention designed to prompt return to clinic after an abnormal laboratory test. Interviews focused on established features of technology acceptance models, including perceived ease of use and perceived usefulness, and included open-ended questions to gain insight into unexplored issues related to the intervention's acceptability. We used conventional (inductive) and direct content analysis to derive categories describing use behaviors and acceptability. Interviews guided development of a proposed conceptual framework, the technology acceptance model for resource-limited settings (TAM-RLS). This framework incorporates both classic technology acceptance model categories as well as novel factors affecting use in this setting. Participants described how SMS message language, phone characteristics, and experience with similar technologies contributed to the system's ease of use. Perceived usefulness was shaped by the perception that the system led to augmented HIV care services and improved access to social support from family and colleagues. Emergent themes specifically related to mHealth acceptance among PLWH in Uganda included (1) the importance of confidentiality, disclosure, and stigma, and (2) the barriers and facilitators downstream from the intervention that impacted achievement of the system's target outcome. The TAM-RLS is a proposed model of mHealth technology acceptance based upon end-user experiences in rural Uganda. Although the proposed model requires validation, the TAM-RLS may serve as a useful tool to guide design and implementation of mHealth interventions.
The Error in Total Error Reduction
Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.
2013-01-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
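A minimal sketch (Python) contrasting the two update rules: TER with one error term shared across the stimulus compound, as in the Rescorla-Wagner model, and LER with a per-cue error term:

```python
import numpy as np

def train(cues, outcomes, alpha=0.1, rule="TER"):
    """Single-layer associative learning.

    TER (Rescorla-Wagner style): one error term shared by all present
    cues, lam minus the summed prediction. LER: each cue updates on
    its own error term, lam minus its own prediction."""
    w = np.zeros(cues.shape[1])
    for x, lam in zip(cues, outcomes):
        if rule == "TER":
            w += alpha * (lam - w @ x) * x   # total error across the compound
        else:
            w += alpha * (lam - w) * x       # local, per-cue error
    return w

# Two cues always reinforced together: TER weights split the outcome
# (about 0.5 each), while LER weights each approach 1.
X = np.tile([1, 1], (100, 1))
print(train(X, np.ones(100), rule="TER"), train(X, np.ones(100), rule="LER"))
```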
Clinical Implications of TiGRT Algorithm for External Audit in Radiation Oncology.
Shahbazi-Gahrouei, Daryoush; Saeb, Mohsen; Monadi, Shahram; Jabbari, Iraj
2017-01-01
Performing audits plays an important role in quality assurance programs in radiation oncology. Among the available algorithms, TiGRT is one of the commonly used software applications for dose calculation. This study aimed to evaluate the clinical implications of the TiGRT algorithm by measuring dose and comparing it to the calculated dose delivered to patients for a variety of cases, with and without the presence of inhomogeneities and beam modifiers. A nonhomogeneous phantom as quality dose verification phantom, Farmer ionization chambers, and a PC-electrometer (Sun Nuclear, USA) as a reference-class electrometer were employed throughout the audit on linear accelerators at 6 and 18 MV energies (Siemens ONCOR Impression Plus, Germany). Seven test cases were performed using a semi CIRS phantom. In homogeneous regions and simple plans for both energies, there was a good agreement between measured and treatment planning system calculated dose. Their relative error was found to be between 0.8% and 3%, which is acceptable for an audit, but in nonhomogeneous organs, such as lung, a few errors were observed. In complex treatment plans, when a wedge or shield was placed in the beam path, the error was within the accepted criteria. In complex beam plans, the difference between measured and calculated dose was found to be 2%-3%. All differences were obtained between 0.4% and 1%. A good consistency was observed for the same type of energy in the homogeneous and nonhomogeneous phantom for three-dimensional conformal fields with a wedge, shield, or asymmetry using the TiGRT treatment planning software in the studied center. The results revealed that the national status of TPS calculations and dose delivery for 3D conformal radiotherapy was globally within acceptable standards with no major causes for concern.
Colour compatibility between teeth and dental shade guides in Quinquagenarians and Septuagenarians.
Cocking, C; Cevirgen, E; Helling, S; Oswald, M; Corcodel, N; Rammelsberg, P; Reinelt, G; Hassel, A J
2009-11-01
The aim of this investigation was to determine colour compatibility between dental shade guides, namely VITA Classical (VC) and VITA 3D-Master (3D), and human teeth in quinquagenarians and septuagenarians. Tooth colour, described in terms of L*a*b* values of the middle third of the facial tooth surface of 1391 teeth, was measured using VITA Easyshade in 195 subjects (48% female). These were compared with the colours (L*a*b* values) of the shade tabs of VC and 3D. The mean coverage error and the percentage of tooth colours within a given colour difference (ΔEab) from the tabs of VC and 3D were calculated. For comparison, hypothetical, optimized, population-specific shade guides were additionally calculated based on discrete optimization techniques for optimizing coverage. Mean coverage error was ΔEab = 3.51 for VC and ΔEab = 2.96 for 3D. Coverage of tooth colours by the tabs of VC and 3D within ΔEab = 2 was 23% and 24%, respectively (ΔEab = 2 taken as a clinically acceptable match). The hypothetical guides performed better and would only need seven to eight tabs to reach the same results as VC and 3D. Both guides had a mean coverage error that was too high and coverage that was too low according to an acceptable colour difference of tooth colour for these subjects. The optimized hypothetical, population-specific guides performed better, indicating the possibility of improving the colour compatibility of the guides with tooth colour in future shade-guide development, allowing acceptable shade matching for most patients in clinical routine.
Clinical Implications of TiGRT Algorithm for External Audit in Radiation Oncology
Shahbazi-Gahrouei, Daryoush; Saeb, Mohsen; Monadi, Shahram; Jabbari, Iraj
2017-01-01
Background: Performing audits plays an important role in quality assurance programs in radiation oncology. Among the available algorithms, TiGRT is one of the commonly used software applications for dose calculation. This study aimed to evaluate the clinical implications of the TiGRT algorithm by measuring dose and comparing it to the calculated dose delivered to patients for a variety of cases, with and without the presence of inhomogeneities and beam modifiers. Materials and Methods: A nonhomogeneous phantom as quality dose verification phantom, Farmer ionization chambers, and a PC-electrometer (Sun Nuclear, USA) as a reference-class electrometer were employed throughout the audit on linear accelerators at 6 and 18 MV energies (Siemens ONCOR Impression Plus, Germany). Seven test cases were performed using a semi CIRS phantom. Results: In homogeneous regions and simple plans for both energies, there was a good agreement between measured and treatment planning system calculated dose. Their relative error was found to be between 0.8% and 3%, which is acceptable for an audit, but in nonhomogeneous organs, such as lung, a few errors were observed. In complex treatment plans, when a wedge or shield was placed in the beam path, the error was within the accepted criteria. In complex beam plans, the difference between measured and calculated dose was found to be 2%–3%. All differences were obtained between 0.4% and 1%. Conclusions: A good consistency was observed for the same type of energy in the homogeneous and nonhomogeneous phantom for three-dimensional conformal fields with a wedge, shield, or asymmetry using the TiGRT treatment planning software in the studied center. The results revealed that the national status of TPS calculations and dose delivery for 3D conformal radiotherapy was globally within acceptable standards with no major causes for concern. PMID:28989910
The nearest neighbor and the bayes error rates.
Loizou, G; Maybank, S J
1987-02-01
The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E(k,l+1) ≤ E*(λ) ≤ E(k,l) ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E(k,l) and dE*(λ) are equal.
An investigation of error correcting techniques for OMV data
NASA Technical Reports Server (NTRS)
Ingels, Frank; Fryer, John
1992-01-01
Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).
Applying an overstress principle in accelerated testing of absorbing mechanisms
NASA Astrophysics Data System (ADS)
Tsyss, V. G.; Sergaeva, M. Yu; Sergaev, A. A.
2018-04-01
The relevance of using overstress testing as a form of accelerated testing to determine pneumatic absorber lifespan was studied. The results demonstrated that at low load overstress the relative error of the absorber lifespan evaluation is no more than 3%. This means that the spread in test results has almost no effect on the lifespan evaluation, and this effect is several times smaller than in high load overstress tests. Accelerated testing of absorbers with low load overstress is therefore preferable, since the relative error of the lifespan evaluation is negligible.
Safe and effective error rate monitors for SS7 signaling links
NASA Astrophysics Data System (ADS)
Schmidt, Douglas C.
1994-04-01
This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. A SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models are in the form of recursive digital filters. Time is divided into sequential intervals. The filter's input is the number of errors which have occurred in each interval. The output is the corresponding change in transmit queue length. Engineered EIM's are constructed by comparing an estimated changeover transient with a threshold T using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover will be initiated and the link will be removed from service. EIM's can be differentiated from SUERM by the fact that EIM's monitor errors over an interval while SUERM's count errored messages. EIM's offer several advantages over SUERM's, including the fact that they are safe and effective, impose uniform standards in link quality, are easily implemented, and make minimal use of real-time resources.
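A toy sketch (Python; the decay constant and threshold are hypothetical, and the transient model is reduced to first order) of the interval-based monitoring idea:

```python
def eim_monitor(errors_per_interval, decay=0.9, threshold=5.0):
    """Error Interval Monitor sketch: a first-order recursive filter
    estimates the changeover transient from per-interval error counts;
    crossing the threshold initiates changeover."""
    estimate = 0.0
    for t, e in enumerate(errors_per_interval):
        estimate = decay * estimate + e   # recursive digital filter
        if estimate > threshold:
            return t                      # interval at which link is removed
    return None                           # link stays in service

print(eim_monitor([0, 1, 0, 4, 3, 2, 0, 0]))
```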
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload is quantitatively evaluated and its effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
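As a small illustration of the rate-compatible part of such a scheme, a sketch (Python; the bit values and pattern are illustrative) of puncturing a rate-1/2 mother code to rate 2/3:

```python
def puncture(coded_bits, pattern):
    """Drop mother-code output bits where the puncturing pattern has a
    0; puncturing a rate-1/2 mother code with [1, 1, 1, 0] keeps 3 of
    every 4 bits, giving rate 2/3."""
    reps = len(coded_bits) // len(pattern) + 1
    return [b for b, keep in zip(coded_bits, pattern * reps) if keep]

# Hypothetical rate-1/2 encoder output for 4 information bits
print(puncture([1, 0, 1, 1, 0, 0, 1, 0], [1, 1, 1, 0]))  # 6 bits remain
```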
Altimeter error sources at the 10-cm performance level
NASA Technical Reports Server (NTRS)
Martin, C. F.
1977-01-01
Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation over a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.
Nascimento, D L; Nascimento, F S
2012-11-01
The ability to discriminate nestmates from non-nestmates in insect societies is essential to protect colonies from conspecific invaders. The acceptance threshold hypothesis predicts that organisms whose recognition systems classify recipients without errors should optimize the balance between acceptance and rejection. In this process, cuticular hydrocarbons play an important role as cues of recognition in social insects. The aims of this study were to determine whether guards exhibit a restrictive level of rejection towards chemically distinct individuals, becoming more permissive during the encounters with either nestmate or non-nestmate individuals bearing chemically similar profiles. The study demonstrates that Melipona asilvai (Hymenoptera: Apidae: Meliponini) guards exhibit a flexible system of nestmate recognition according to the degree of chemical similarity between the incoming forager and its own cuticular hydrocarbons profile. Guards became less restrictive in their acceptance rates when they encounter non-nestmates with highly similar chemical profiles, which they probably mistake for nestmates, hence broadening their acceptance level.
Medication errors in anesthesia: unacceptable or unavoidable?
Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra
Medication errors are common causes of patient morbidity and mortality, and they also add a financial burden to the institution. Though the impact varies from no harm to serious adverse effects including death, the issue needs attention on a priority basis since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone may not be successful unless a change in the existing protocols and system is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication errors that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Similar developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow and where peak-flow timing at sub-daily time scales is of high importance. The results suggest that the calibration method can be useful when observation time periods for discharge and model input data do not overlap. The method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
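A minimal sketch (Python; function names, the EP count and the bound arrays are illustrative) of volume-based evaluation-point selection on an FDC and the limits-of-acceptability check:

```python
import numpy as np

def fdc(q):
    """Flow-duration curve: flows sorted descending against exceedance
    probability (Weibull plotting positions)."""
    q = np.sort(np.asarray(q, dtype=float))[::-1]
    p = np.arange(1, len(q) + 1) / (len(q) + 1.0)
    return p, q

def volume_eps(q, n_eps=10):
    """Select evaluation points so that each EP bounds an equal share
    of the total volume of water (the selection that performed best)."""
    p, q_sorted = fdc(q)
    cum_vol = np.cumsum(q_sorted) / q_sorted.sum()
    targets = np.linspace(0.0, 1.0, n_eps + 2)[1:-1]
    idx = np.searchsorted(cum_vol, targets)
    return p[idx], q_sorted[idx]

def within_limits(sim_q, ep_probs, lower, upper):
    """GLUE limits-of-acceptability check: the simulated FDC must fall
    inside the discharge-uncertainty bounds at every EP."""
    p, q_sorted = fdc(sim_q)
    sim_at_eps = np.interp(ep_probs, p, q_sorted)
    return bool(np.all((sim_at_eps >= lower) & (sim_at_eps <= upper)))
```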
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2010-12-01
The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
Vinney, Lisa A; Grade, John D; Connor, Nadine P
2012-01-01
The manner in which a communication disorder affects health-related quality of life (QOL) in children is not known. Unfortunately, collection of quality of life data via traditional paper measures is labor intensive and has several other limitations, which hinder the investigation of quality of life in children. Currently, there is not sufficient research regarding the use of electronic devices to collect pediatric patient-reported outcomes in order to address such limitations. Thus, we used a cross-over design to compare responses to a pediatric health quality of life instrument (PedsQL 4.0) delivered using a handheld electronic device to those from a traditional paper form. Respondents were children with (n=9) and without (n=10) a speech or voice disorder. For the paper versus the electronic format, we examined time to completion, number of incomplete or inaccurate question responses, intra-rater reliability, ease of use, and child and parent preference. There were no significant differences between children's scores, time to complete the measure, or ratings related to ease of answering questions. The percentage of children who made answering errors or omissions with paper and pencil was significantly greater than the percentage of children who made such errors using the device. This preliminary study demonstrated that use of an electronic device to collect QOL or patient-reported outcome (PRO) data from children is more efficient than and just as feasible, reliable, and acceptable as using paper forms. The development of hardware and software applications for the collection of QOL and/or PRO data in children with speech disorders is likely warranted. The reader will be able to understand: (1) The potential benefits of using electronic data capture via handheld devices for collecting pediatric patient-reported outcomes; (2) The Pediatric Quality of Life Inventory 4.0 is a measure of the perception of general health quality that has distinguished between healthy children and those with chronic health conditions; (3) Past research in communication disorders indicates that voice and speech disorders may impact quality of life in children; (4) Based on preliminary data, electronic collection of patient-reported outcomes in children with and without speech/voice disorders is more efficient and equally feasible, reliable, and acceptable when compared to paper forms. Copyright © 2011 Elsevier Inc. All rights reserved.
Truncation of CPC solar collectors and its effect on energy collection
NASA Astrophysics Data System (ADS)
Carvalho, M. J.; Collares-Pereira, M.; Gordon, J. M.; Rabl, A.
1985-01-01
Analytic expressions are derived for the angular acceptance function of two-dimensional compound parabolic concentrator solar collectors (CPC's) of arbitrary degree of truncation. Taking into account the effect of truncation on both optical and thermal losses in real collectors, the increase in monthly and yearly collectible energy is also evaluated. Prior analyses that have ignored the correct behavior of the angular acceptance function at large angles for truncated collectors are shown to be in error by 0-2 percent in calculations of yearly collectible energy for stationary collectors.
Guidelines for the assessment and acceptance of potential brain-dead organ donors
Westphal, Glauco Adrieno; Garcia, Valter Duro; de Souza, Rafael Lisboa; Franke, Cristiano Augusto; Vieira, Kalinca Daberkow; Birckholz, Viviane Renata Zaclikevis; Machado, Miriam Cristine; de Almeida, Eliana Régia Barbosa; Machado, Fernando Osni; Sardinha, Luiz Antônio da Costa; Wanzuita, Raquel; Silvado, Carlos Eduardo Soares; Costa, Gerson; Braatz, Vera; Caldeira Filho, Milton; Furtado, Rodrigo; Tannous, Luana Alves; de Albuquerque, André Gustavo Neves; Abdala, Edson; Gonçalves, Anderson Ricardo Roman; Pacheco-Moreira, Lúcio Filgueiras; Dias, Fernando Suparregui; Fernandes, Rogério; Giovanni, Frederico Di; de Carvalho, Frederico Bruzzi; Fiorelli, Alfredo; Teixeira, Cassiano; Feijó, Cristiano; Camargo, Spencer Marcantonio; de Oliveira, Neymar Elias; David, André Ibrahim; Prinz, Rafael Augusto Dantas; Herranz, Laura Brasil; de Andrade, Joel
2016-01-01
Organ transplantation is the only alternative for many patients with terminal diseases. The increasing disproportion between the high demand for organ transplants and the low rate of transplants actually performed is worrisome. Some of the causes of this disproportion are errors in the identification of potential organ donors and in the determination of contraindications by the attending staff. Therefore, the aim of the present document is to provide guidelines for intensive care multi-professional staffs for the recognition, assessment and acceptance of potential organ donors. PMID:27737418
Extinction measurements with low-power HSRL systems—error limits
NASA Astrophysics Data System (ADS)
Eloranta, Ed
2018-04-01
HSRL measurements of extinction are more difficult than backscatter measurements. This is particularly true for low-power, eye-safe systems. This paper looks at error sources that currently provide an error limit of 10^-5 m^-1 for boundary layer extinction measurements made with University of Wisconsin HSRL systems. These eye-safe systems typically use 300 mW transmitters and 40 cm diameter receivers with a 10^-4 radian field-of-view.
A Posteriori Correction of Forecast and Observation Error Variances
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid
2005-01-01
The proposed method of total observation and forecast error variance correction is based on the assumption that "observed-minus-forecast" residuals (O-F) are normally distributed, where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a_3 = mu_3/sigma^3 and the kurtosis a_4 = mu_4/sigma^4 - 3, where mu_i is the i-th order central moment and sigma is the standard deviation. It is well known that for a normal distribution a_3 = a_4 = 0.
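As a concrete illustration, the two shape statistics can be computed directly; this sketch uses scipy's conventions (which match the formulas above) on synthetic residuals, since the paper's data are not shown here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    omf = rng.normal(0.0, 1.5, size=10_000)  # synthetic O-F residuals

    a3 = stats.skew(omf)                   # mu_3 / sigma^3
    a4 = stats.kurtosis(omf, fisher=True)  # mu_4 / sigma^4 - 3 (excess kurtosis)

    # For normally distributed residuals both statistics should be near zero.
    print(f"a3 = {a3:.3f}, a4 = {a4:.3f}")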
NASA Technical Reports Server (NTRS)
Page, J.
1981-01-01
The effects of an independent verification and integration (V and I) methodology on one class of application are described. Resource profiles are discussed. The development environment is reviewed. Seven measures are presented to test the hypothesis that V and I improve the development process and the product. The V and I methodology provided: (1) a decrease in requirements ambiguities and misinterpretation; (2) no decrease in design errors; (3) no decrease in the cost of correcting errors; (4) a decrease in the cost of system and acceptance testing; (5) an increase in early discovery of errors; (6) no improvement in the quality of software put into operation; and (7) a decrease in productivity and an increase in cost.
Correcting for particle counting bias error in turbulent flow
NASA Technical Reports Server (NTRS)
Edwards, R. V.; Baratuci, W.
1985-01-01
Even an ideal seeding device generating particles that exactly follow the flow would still leave a major source of error: particle counting bias, wherein the probability of measuring a velocity is a function of the velocity itself. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation was constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
Accounting for measurement error in log regression models with applications to accelerated testing.
Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M
2018-01-01
In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
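The abstract names Iteratively Re-weighted Least Squares as the estimation routine. The loop below is a generic IRLS sketch with a caller-supplied weight function; the paper's specific variance-based weights for the measurement-error and additive-error model are not reproduced here.

    import numpy as np

    def irls(X, y, weight_fn, n_iter=25, tol=1e-8):
        """Generic iteratively re-weighted least squares.

        weight_fn maps the current residuals to per-observation weights;
        the variance model used in the paper is not reproduced here.
        """
        beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
        for _ in range(n_iter):
            r = y - X @ beta
            w = weight_fn(r)
            Xw = X * w[:, None]                      # row-weighted design
            beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)
            if np.max(np.abs(beta_new - beta)) < tol:
                return beta_new
            beta = beta_new
        return beta

    # Example weight choice (illustrative only): downweight large residuals.
    # beta = irls(X, y, weight_fn=lambda r: 1.0 / np.maximum(1.0, np.abs(r)))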
Improved Calibration through SMAP RFI Change Detection
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey; De Amici, Giovanni; Mohammed, Priscilla; Peng, Jinzheng
2017-01-01
Anthropogenic Radio-Frequency Interference (RFI) drove both the SMAP (Soil Moisture Active Passive) microwave radiometer hardware and Level 1 science algorithm designs to use new technology and techniques for the first time on a spaceflight project. Care was taken to provide special features allowing the detection and removal of harmful interference in order to meet the error budget. Nonetheless, the project accepted a risk that RFI and its mitigation would exceed the 1.3-K error budget. Thus, RFI will likely remain a challenge afterwards due to its changing and uncertain nature. To address the challenge, we seek to answer the following questions: How does RFI evolve over the SMAP lifetime? What calibration error does the changing RFI environment cause? Can time series information be exploited to reduce these errors and improve calibration for all science products reliant upon SMAP radiometer data? In this talk, we address the first question.
Code of Federal Regulations, 2010 CFR
2010-04-01
... accepting proof of support or application for a lump-sum death payment. (a) When evidence of good cause is... death payment. You may be asked for evidence of good cause for these delays if— (1) You are the insured... limits on accepting proof of support or application for a lump-sum death payment. 404.780 Section 404.780...
St-Pierre, Corinne; Desmeules, François; Dionne, Clermont E; Frémont, Pierre; MacDermid, Joy C; Roy, Jean-Sébastien
2016-01-01
To conduct a systematic review of the psychometric properties (reliability, validity and responsiveness) of self-report questionnaires used to assess symptoms and functional limitations of individuals with rotator cuff (RC) disorders. A systematic search in three databases (Cinahl, Medline and Embase) was conducted. Data extraction and critical methodological appraisal were performed independently by three raters using structured tools, and agreement was achieved by consensus. A descriptive synthesis was performed. One hundred and twenty articles reporting on 11 questionnaires were included. All questionnaires were highly reliable and responsive to change, and showed construct validity; seven questionnaires also showed known-group validity. The minimal detectable change ranged from 6.4% to 20.8% of total score; only two questionnaires (American Shoulder and Elbow Surgeon questionnaire [ASES] and Upper Limb Functional Index [ULFI]) had a measurement error below 10% of global score. Minimal clinically important differences were established for eight questionnaires, and ranged from 8% to 20% of total score. Overall, the included questionnaires showed acceptable psychometric properties for individuals with RC disorders. The ASES and ULFI have the smallest absolute error of measurement, while the Western Ontario RC Index (WORC) is one of the most responsive questionnaires for individuals suffering from RC disorders. All included questionnaires are reliable, valid and responsive for the evaluation of individuals with RC disorders. As all included questionnaires showed good psychometric properties for the targeted population, the choice should be made according to the purpose of the evaluation and to the construct being evaluated by the questionnaire. The WORC, an RC-specific questionnaire, appeared to be more responsive. It should therefore be used to evaluate change over time. If the evaluation is time-limited, shorter questionnaires or short versions should be considered (such as the Quick DASH or SST).
NASA Technical Reports Server (NTRS)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.
Detection, prevention, and rehabilitation of amblyopia.
Spiritus, M
1997-10-01
The necessity of visual preschool screening for reducing the prevalence of amblyopia is widely accepted. The beneficial results of large-scale screening programs conducted in Scandinavia are reported. Screening monocular visual acuity at 3.5 to 4 years of age appears to be an excellent basis for detecting and treating amblyopia and an acceptable compromise between the pitfalls encountered in screening younger children and the cost-to-benefit ratio. In this respect, several preschoolers' visual acuity charts have been evaluated. Small-target random stereotests and binocular suppression tests have recently been developed with the aim of correcting the many false negatives (anisometropic amblyopia or bilateral high ametropia) induced by the usual stereotests. Longitudinal studies demonstrate that correction of high refractive errors decreases the risk of amblyopia and does not impede emmetropization. The validity of various photoscreening and videoscreening procedures for detecting refractive errors in infants prior to the onset of strabismus or amblyopia, as well as alternatives to conventional occlusion therapy, is discussed.
Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S
2009-11-01
Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the type of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem-space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. We analyzed data mined from 44 pathology residents using SlideTutor, a medical intelligent tutoring system in dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. Frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.
Optimal allocation of conservation resources to species that may be extinct.
Rout, Tracy M; Heinze, Dean; McCarthy, Michael A
2010-08-01
Statements of extinction will always be uncertain because of imperfect detection of species in the wild. Two errors can be made when declaring a species extinct. Extinction can be declared prematurely, with a resulting loss of protection and management intervention. Alternatively, limited conservation resources can be wasted attempting to protect a species that no longer exists. Rather than setting an arbitrary level of certainty at which to declare extinction, we argue that the decision must trade off the expected costs of both errors. Optimal decisions depend on the cost of continued intervention, the probability the species is extant, and the estimated value of management (the benefit of management times the value of the species). We illustrated our approach with three examples: the Dodo (Raphus cucullatus), the Ivory-billed Woodpecker (U.S. subspecies Campephilus principalis principalis), and the mountain pygmy-possum (Burramys parvus). The Dodo was extremely unlikely to be extant, so managing and monitoring for it today would not be cost-effective unless the value of management was extremely high. The probability the Ivory-billed Woodpecker is extant depended on whether recent controversial sightings were accepted. Without the recent controversial sightings, it was optimal to declare extinction of the species in 1965 at the latest. Accepting the recent controversial sightings, it was optimal to continue monitoring and managing until 2032 at the latest. The mountain pygmy-possum is currently extant, with a rapidly declining sighting rate. It was optimal to conduct as many as 66 surveys without sighting before declaring the species extinct. The probability of persistence remained high even after many surveys without sighting because it was difficult to determine whether the species was extinct or undetected. If the value of management is high enough, continued intervention can be cost-effective even if the species is likely to be extinct.
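The decision rule sketched by the abstract is an expected-value comparison. The one-step reading below is mine (names are illustrative); the paper's full analysis is sequential, with the probability of persistence revised downward after each survey without a sighting.

    def keep_managing(p_extant, value_of_management, cost_of_intervention):
        """Continue intervention while its expected benefit exceeds its cost.

        p_extant            probability the species is still extant
        value_of_management benefit of management times the species' value
        cost_of_intervention cost of continued management and monitoring
        """
        return p_extant * value_of_management > cost_of_intervention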
Cembrowski, G S; Hackney, J R; Carey, N
1993-04-01
The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices, having mandated (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron, was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs, i.e. graphs of the probability of error detection versus the magnitude of error. Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
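As an illustration, the recommended rules translate directly into a small routine. The sketch below encodes the thresholds as stated in the abstract; function and flag names, and the use of absolute values for "exceeding" a limit, are my own conventions.

    def screen(sdi):
        """Screening rule: two or more results beyond the same +/-1 SDI limit."""
        return sum(x > 1 for x in sdi) >= 2 or sum(x < -1 for x in sdi) >= 2

    def pt_flags(sdi, sg_over_si):
        """Flags suggested by the abstract for one PT cycle (n = 5 results,
        each expressed in SDI units)."""
        if not screen(sdi):
            return []
        flags = []
        mean = sum(sdi) / len(sdi)
        spread = max(sdi) - min(sdi)
        if sg_over_si <= 1.5:  # the abstract covers ratios from 1.0 up
            if abs(mean) > 1.0:
                flags.append("systematic error")
            if any(abs(x) > 3 for x in sdi) or spread > 4:
                flags.append("random error")
        else:
            flags.append("systematic or random error")  # screening rule itself
            if any(abs(x) > 1.5 for x in sdi) or spread > 3:
                flags.append("random error")
        return flags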
Optimization of multimagnetometer systems on a spacecraft
NASA Technical Reports Server (NTRS)
Neubauer, F. M.
1975-01-01
The problem of optimizing the position of magnetometers along a boom of given length to yield a minimized total error is investigated. The discussion is limited to at most four magnetometers, which seems to be a practical limit due to weight, power, and financial considerations. The outlined error analysis is applied to some illustrative cases. The optimal magnetometer locations, for which the total error is minimum, are computed for given boom length, instrument errors, and very conservative magnetic field models characteristic for spacecraft with only a restricted or ineffective magnetic cleanliness program. It is shown that the error contribution by the magnetometer inaccuracy is increased as the number of magnetometers is increased, whereas the spacecraft field uncertainty is diminished by an appreciably larger amount.
Jolley, Suzanne; Thompson, Claire; Hurley, James; Medin, Evelina; Butler, Lucy; Bebbington, Paul; Dunn, Graham; Freeman, Daniel; Fowler, David; Kuipers, Elizabeth; Garety, Philippa
2014-10-30
Understanding how people with delusions arrive at false conclusions is central to the refinement of cognitive behavioural interventions. Making hasty decisions based on limited data ('jumping to conclusions', JTC) is one potential causal mechanism, but reasoning errors may also result from other processes. In this study, we investigated the correlates of reasoning errors under differing task conditions in 204 participants with schizophrenia spectrum psychosis who completed three probabilistic reasoning tasks. Psychotic symptoms, affect, and IQ were also evaluated. We found that hasty decision makers were more likely to draw false conclusions, but only 37% of their reasoning errors were consistent with the limited data they had gathered. The remainder directly contradicted all the presented evidence. Reasoning errors showed task-dependent associations with IQ, affect, and psychotic symptoms. We conclude that limited data-gathering contributes to false conclusions but is not the only mechanism involved. Delusions may also be maintained by a tendency to disregard evidence. Low IQ and emotional biases may contribute to reasoning errors in more complex situations. Cognitive strategies to reduce reasoning errors should therefore extend beyond encouragement to gather more data, and incorporate interventions focused directly on these difficulties. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Wee, Leonard; Hackett, Sara Lyons; Jones, Andrew; Lim, Tee Sin; Harper, Christopher Stirling
2013-01-01
This study evaluated the agreement of fiducial marker localization between two modalities — an electronic portal imaging device (EPID) and cone‐beam computed tomography (CBCT) — using a low‐dose, half‐rotation scanning protocol. Twenty‐five prostate cancer patients with implanted fiducial markers were enrolled. Before each daily treatment, EPID and half‐rotation CBCT images were acquired. Translational shifts were computed for each modality and two marker‐matching algorithms, seed‐chamfer and grey‐value, were performed for each set of CBCT images. The localization offsets, and systematic and random errors from both modalities were computed. Localization performances for both modalities were compared using Bland‐Altman limits of agreement (LoA) analysis, Deming regression analysis, and Cohen's kappa inter‐rater analysis. The differences in the systematic and random errors between the modalities were within 0.2 mm in all directions. The LoA analysis revealed a 95% agreement limit of the modalities of 2 to 3.5 mm in any given translational direction. Deming regression analysis demonstrated that constant biases existed in the shifts computed by the modalities in the superior–inferior (SI) direction, but no significant proportional biases were identified in any direction. Cohen's kappa analysis showed good agreement between the modalities in prescribing translational corrections of the couch at 3 and 5 mm action levels. Images obtained from EPID and half‐rotation CBCT showed acceptable agreement for registration of fiducial markers. The seed‐chamfer algorithm for tracking of fiducial markers in CBCT datasets yielded better agreement than the grey‐value matching algorithm with EPID‐based registration. PACS numbers: 87.55.km, 87.55.Qr PMID:23835391
Software thresholds alter the bias of actigraphy for monitoring sleep in team-sport athletes.
Fuller, Kate L; Juliff, Laura; Gore, Christopher J; Peiffer, Jeremiah J; Halson, Shona L
2017-08-01
Actical® actigraphy is commonly used to monitor athlete sleep. The proprietary software, called Actiware®, processes data with three different sleep-wake thresholds (Low, Medium or High), but there is no standardisation regarding their use. The purpose of this study was to examine the validity and bias of the sleep-wake thresholds for processing Actical® sleep data in team sport athletes. Validation study comparing the actigraph against the accepted gold standard, polysomnography (PSG). Sixty-seven nights of sleep were recorded simultaneously with polysomnography and Actical® devices. Individual night data were compared across five sleep measures for each sleep-wake threshold using Actiware® software. Accuracy of each sleep-wake threshold compared with PSG was evaluated from the mean bias with 95% confidence limits, Pearson product-moment correlation and associated standard error of estimate. The Medium threshold generated the smallest mean bias compared with polysomnography for total sleep time (8.5 min), sleep efficiency (1.8%) and wake after sleep onset (-4.1 min); whereas the Low threshold had the smallest bias (7.5 min) for wake bouts. Bias in sleep onset latency was the same across thresholds (-9.5 min). The standard error of the estimate was similar across all thresholds; total sleep time ~25 min, sleep efficiency ~4.5%, wake after sleep onset ~21 min, and wake bouts ~8 counts. Sleep parameters measured by the Actical® device are greatly influenced by the sleep-wake threshold applied. In the present study the Medium threshold produced the smallest bias for most parameters compared with PSG. Given the magnitude of measurement variability, confidence limits should be employed when interpreting changes in sleep parameters. Copyright © 2017 Sports Medicine Australia. All rights reserved.
Validity and reliability of the Fitbit Zip as a measure of preschool children’s step count
Sharp, Catherine A; Mackintosh, Kelly A; Erjavec, Mihela; Pascoe, Duncan M; Horne, Pauline J
2017-01-01
Objectives Validation of physical activity measurement tools is essential to determine the relationship between physical activity and health in preschool children, but research to date has not focused on this priority. The aims of this study were to ascertain inter-rater reliability of observer step count, and interdevice reliability and validity of Fitbit Zip accelerometer step counts in preschool children. Methods Fifty-six children aged 3–4 years (29 girls) recruited from 10 nurseries in North Wales, UK, wore two Fitbit Zip accelerometers while performing a timed walking task in their childcare settings. Accelerometers were worn in secure pockets inside a custom-made tabard. Video recordings enabled two observers to independently code the number of steps performed in 3 min by each child during the walking task. Intraclass correlations (ICCs), concordance correlation coefficients, Bland-Altman plots and absolute per cent error were calculated to assess the reliability and validity of the consumer-grade device. Results An excellent ICC was found between the two observer codings (ICC=1.00) and the two Fitbit Zips (ICC=0.91). Concordance between the Fitbit Zips and observer counts was also high (r=0.77), with an acceptable absolute per cent error (6%–7%). Bland-Altman analyses identified a bias for Fitbit 1 of 22.8±19.1 steps with limits of agreement between −14.7 and 60.2 steps, and a bias for Fitbit 2 of 25.2±23.2 steps with limits of agreement between −20.2 and 70.5 steps. Conclusions Fitbit Zip accelerometers are a reliable and valid method of recording preschool children’s step count in a childcare setting. PMID:29081984
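For reference, the bias and limits of agreement reported above follow the standard Bland-Altman computation, sketched here with numpy; the summary line simply reuses the paper's Fitbit 1 statistics rather than the raw data, which are not shown.

    import numpy as np

    def bland_altman(device, criterion):
        """Mean bias and 95% limits of agreement (bias +/- 1.96 SD)."""
        d = np.asarray(device, dtype=float) - np.asarray(criterion, dtype=float)
        bias = d.mean()
        sd = d.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Reusing the reported Fitbit 1 summary (bias 22.8 steps, SD 19.1 steps)
    # reproduces the published limits of agreement to rounding:
    lo, hi = 22.8 - 1.96 * 19.1, 22.8 + 1.96 * 19.1
    print(round(lo, 1), round(hi, 1))  # -14.6 60.2 (paper: -14.7, 60.2)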
ERIC Educational Resources Information Center
Kearsley, Greg P.
This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…
Effects of Error Experience When Learning to Simulate Hypernasality
ERIC Educational Resources Information Center
Wong, Andus W.-K.; Tse, Andy C.-Y.; Ma, Estella P.-M.; Whitehill, Tara L.; Masters, Rich S. W.
2013-01-01
Purpose: The purpose of this study was to evaluate the effects of error experience on the acquisition of hypernasal speech. Method: Twenty-eight healthy participants were asked to simulate hypernasality in either an "errorless learning" condition (in which the possibility for errors was limited) or an "errorful learning"…
Mazor, Kathleen; Roblin, Douglas W; Greene, Sarah M; Fouayzi, Hassan; Gallagher, Thomas H
2016-10-01
Full disclosure of harmful errors to patients, including a statement of regret, an explanation, acceptance of responsibility and commitment to prevent recurrences, is the current standard for physicians in the USA. To examine the extent to which primary care physicians' perceptions of event-level, physician-level and organisation-level factors influence intent to disclose a medical error in challenging situations. Cross-sectional survey containing two hypothetical vignettes: (1) delayed diagnosis of breast cancer, and (2) care coordination breakdown causing a delayed response to patient symptoms. In both cases, multiple physicians shared responsibility for the error, and both involved oncology diagnoses. The study was conducted in the context of the HMO Cancer Research Network Cancer Communication Research Center. Primary care physicians from three integrated healthcare delivery systems located in Washington, Massachusetts and Georgia; responses from 297 participants were included in these analyses. The dependent variable intent to disclose included intent to provide an apology, an explanation, information about the cause and plans for preventing recurrences. Independent variables included event-level factors (responsibility for the event, perceived seriousness of the event, predictions about a lawsuit); physician-level factors (value of patient-centred communication, communication self-efficacy and feelings about practice); organisation-level factors included perceived support for communication and time constraints. A majority of respondents would not fully disclose in either situation. The strongest predictors of disclosure were perceived personal responsibility, perceived seriousness of the event and perceived value of patient-centred communication. These variables were consistently associated with intent to disclose. To make meaningful progress towards improving disclosure, physicians, risk managers, organisational leaders, professional organisations and accreditation bodies need to understand the factors which influence disclosure. Such an understanding is required to inform institutional policies and provider training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Mukasa, Oscar; Mushi, Hildegalda P; Maire, Nicolas; Ross, Amanda; de Savigny, Don
2017-01-01
Data entry at the point of collection using mobile electronic devices may make data-handling processes more efficient and cost-effective, but there is little literature to document and quantify gains, especially for longitudinal surveillance systems. To examine the potential of mobile electronic devices compared with paper-based tools in health data collection. Using data from 961 households from the Rufiji Household and Demographic Survey in Tanzania, the quality and costs of data collected on paper forms and electronic devices were compared. We also documented, using qualitative approaches, the views of field workers, whom we called 'enumerators', and household members on the use of both methods. Existing administrative records were combined with logistics expenditure measured directly from comparison households to approximate annual costs per 1,000 households surveyed. Errors were detected in 17% (166) of households for the paper records and 2% (15) for the electronic records (p < 0.001). There were differences in the types of errors (p = 0.03). Of the errors occurring, a higher proportion were related to accuracy in paper surveys (79%, 95% CI: 72%, 86%) compared with electronic surveys (58%, 95% CI: 29%, 87%). Errors in electronic surveys were more likely to be related to completeness (32%, 95% CI: 12%, 56%) than in paper surveys (11%, 95% CI: 7%, 17%). The median duration of the interviews ('enumeration') per household was 9.4 minutes (90% central range 6.4, 12.2) for paper and 8.3 (6.1, 12.0) for electronic surveys (p = 0.001). Surveys using electronic tools, compared with paper-based tools, were less costly by 28% for recurrent and 19% for total costs. Although there were technical problems with electronic devices, there was good acceptance of both methods by enumerators and members of the community. Our findings support the use of mobile electronic devices for large-scale longitudinal surveys in resource-limited settings.
NASA Astrophysics Data System (ADS)
Bravar, Alessandro
2010-03-01
As the intensity of neutrino beams produced at accelerators increases, the systematic errors due to the poor characterization of the neutrino flux become a limiting factor for high precision neutrino oscillation experiments like T2K. This limitation comes mainly from the poor knowledge of production cross sections for pions and kaons at the same energy and over the same phase-space yielding these neutrino beams. Therefore new hadro-production measurements are mandatory. The NA61/SHINE is a large acceptance hadron spectrometer at the CERN-SPS designed for the study of the hadronic final states produced in interactions of various beam particles (protons, π's, and heavy ions) with a variety of fixed targets at the SPS energies. Ongoing measurements with the NA61 detector for characterizing the neutrino beam of the T2K experiment at J-PARC are introduced. These measurements are performed using a 30 GeV proton beam impinging on carbon targets of different lengths, including a replica of the T2K target. The performance of the NA61 detector and preliminary NA61 measurements from the 2007 run are presented.
Prospective memory in an air traffic control simulation: external aids that signal when to act.
Loft, Shayne; Smith, Rebekah E; Bhaskara, Adella
2011-03-01
At work and in our personal life we often need to remember to perform intended actions at some point in the future, referred to as Prospective Memory. Individuals sometimes forget to perform intentions in safety-critical work contexts. Holding intentions can also interfere with ongoing tasks. We applied theories and methods from the experimental literature to test the effectiveness of external aids in reducing prospective memory error and costs to ongoing tasks in an air traffic control simulation. Participants were trained to accept and hand-off aircraft and to detect aircraft conflicts. For the prospective memory task, participants were required to substitute alternative actions for routine actions when accepting target aircraft. Across two experiments, external display aids were provided that presented the details of target aircraft and associated intended actions. We predicted that aids would only be effective if they provided information that was diagnostic of target occurrence, and in this study, we examined the utility of aids that directly cued participants when to allocate attention to the prospective memory task. When aids were set to flash when the prospective memory target aircraft needed to be accepted, prospective memory error and costs to ongoing tasks of aircraft acceptance and conflict detection were reduced. In contrast, aids that did not alert participants specifically when the target aircraft were present provided no advantage compared to when no aids were used. These findings have practical implications for the potential relative utility of automated external aids for occupations where individuals monitor multi-item dynamic displays.
Test-retest reliability of 3D ultrasound measurements of the thoracic spine.
Fölsch, Christian; Schlögel, Stefanie; Lakemeier, Stefan; Wolf, Udo; Timmesfeld, Nina; Skwara, Adrian
2012-05-01
To explore the reliability of the Zebris CMS 20 ultrasound analysis system with pointer application for measuring end-range flexion, end-range extension, and neutral kyphosis angle of the thoracic spine. The study was performed within the School of Physiotherapy in cooperation with the Orthopedic Department at a University Hospital. The thoracic spines of 28 healthy subjects were measured. Measurements for neutral kyphosis angle, end-range flexion, and end-range extension were taken once at each time point. The bone landmarks were palpated by one examiner and marked with a pointer containing 2 transmitters using a frequency of 40 kHz. A third transmitter was fixed to the pelvis, and 3 microphones were used as receiver. The real angle was calculated by the software. Bland-Altman plots with 95% limits of agreement, intraclass correlations (ICC), standard deviations of mean measurements, and standard error of measurements were used for statistical analyses. The test-retest reliability in this study was measured within a 24-hour interval. Statistical parameters were used to judge reliability. The mean kyphosis angle was 44.8° with a standard deviation of 17.3° at the first measurement and a mean of 45.8° with a standard deviation of 16.2° the following day. The ICC was high at 0.95 for the neutral kyphosis angle, and the Bland-Altman 95% limits of agreement were within clinical acceptable margins. The ICC was 0.71 for end-range flexion and 0.34 for end-range extension, whereas the Bland-Altman 95% limits of agreement were wider than with the static measurement of kyphosis. Compared with static measurements, the analysis of motion with 3-dimensional ultrasound showed an increased standard deviation for test-retest measurements. The test-retest reliability of ultrasound measuring of the neutral kyphosis angle of the thoracic spine was demonstrated within 24 hours. Bland-Altman 95% limits of agreement and the standard deviation of differences did not appear to be clinically acceptable for measuring flexion and extension. Copyright © 2012 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Metrics for Business Process Models
NASA Astrophysics Data System (ADS)
Mendling, Jan
Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.
Geometric Quality Assessment of LIDAR Data Based on Swath Overlap
NASA Astrophysics Data System (ADS)
Sampath, A.; Heidemann, H. K.; Stensaas, G. L.
2016-06-01
This paper provides guidelines on quantifying the relative horizontal and vertical errors observed between conjugate features in the overlapping regions of lidar data. The quantification of these errors is important because their presence quantifies the geometric quality of the data. A data set can be said to have good geometric quality if measurements of identical features, regardless of their position or orientation, yield identical results. Good geometric quality indicates that the data are produced using sensor models that are working as they are mathematically designed, and data acquisition processes are not introducing any unforeseen distortion in the data. High geometric quality also leads to high geolocation accuracy of the data when the data acquisition process includes coupling the sensor with geopositioning systems. Current specifications (e.g. Heidemann 2014) do not provide adequate means to quantitatively measure these errors, even though they are required to be reported. Current accuracy measurement and reporting practices followed in the industry and as recommended by data specification documents also potentially underestimate the inter-swath errors, including the presence of systematic errors in lidar data. Hence they pose a risk to the user in terms of data acceptance (i.e. a higher potential for Type II error indicating risk of accepting potentially unsuitable data). For example, if the overlap area is too small or if the sampled locations are close to the center of overlap, or if the errors are sampled in flat regions when there are residual pitch errors in the data, the resultant Root Mean Square Differences (RMSD) can still be small. To avoid this, the following are suggested as criteria for defining the inter-swath quality of data: a) Median Discrepancy Angle, b) Mean and RMSD of horizontal errors using the data quality measure (DQM) on sloping surfaces, and c) RMSD for sampled locations from flat areas (defined as areas with less than 5 degrees of slope). It is suggested that 4000-5000 points be uniformly sampled in the overlapping regions of the point cloud, depending on the surface roughness, to measure the discrepancy between swaths. Care must be taken to sample only areas of single-return points. Point-to-plane distance based data quality measures are determined for each sample point. These measurements are used to determine the above-mentioned parameters. This paper details the measurements and analysis of measurements required to determine these metrics, i.e. discrepancy angle, mean and RMSD of errors in flat regions, and horizontal errors obtained using measurements extracted from sloping regions (slope greater than 10 degrees). The research is a result of an ad-hoc joint working group of the US Geological Survey and the American Society for Photogrammetry and Remote Sensing (ASPRS) Airborne Lidar Committee.
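The point-to-plane primitive underlying the DQM can be sketched compactly. The following is an illustrative implementation (plane fitted by SVD to neighbouring points from one swath, distance taken from a conjugate point in the other), not the working group's code:

    import numpy as np

    def fit_plane(points):
        """Least-squares plane through an (n, 3) array of points.
        Returns (centroid, unit normal)."""
        centroid = points.mean(axis=0)
        # The right singular vector with the smallest singular value of the
        # centred cloud is the plane normal.
        _, _, vt = np.linalg.svd(points - centroid)
        return centroid, vt[-1]

    def point_to_plane(p, centroid, normal):
        """Signed distance from point p to the fitted plane."""
        return float(np.dot(p - centroid, normal))

    def rmsd(distances):
        """RMSD of sampled discrepancies, as used for the flat-area metric."""
        d = np.asarray(distances, dtype=float)
        return float(np.sqrt(np.mean(d ** 2)))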
Intertester agreement in refractive error measurements.
Huang, Jiayan; Maguire, Maureen G; Ciner, Elise; Kulp, Marjean T; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Ying, Gui-Shuang
2013-10-01
To determine the intertester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor and the SureSight Vision Screener. Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3 to 5 years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Intertester agreement between lay and nurse screeners was assessed for sphere, cylinder, and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean intertester difference (lay minus nurse) was compared between groups defined based on the child's age, cycloplegic refractive error, and the reading's confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Intereye correlation was accounted for in all analyses. The mean intertester differences (95% limits of agreement) were -0.04 (-1.63, 1.54) diopter (D) sphere, 0.00 (-0.52, 0.51) D cylinder, and -0.04 (-1.65, 1.56) D SE for the Retinomax and 0.05 (-1.48, 1.58) D sphere, 0.01 (-0.58, 0.60) D cylinder, and 0.06 (-1.45, 1.57) D SE for the SureSight. For either instrument, the mean intertester differences in sphere and SE did not differ by the child's age, cycloplegic refractive error, or the reading's confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading's confidence number was below the manufacturer's recommended value. Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar intertester agreement in refractive error measurements independent of the child's age. Significant refractive error and a reading with low confidence number were associated with worse intertester agreement.
Kwon, Heon-Ju; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu
2018-01-01
Background/Aims Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Methods Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Errors in VP and VR were evaluated as percentages of the intraoperatively measured weight (W). Plane-dependent error in VP was defined as the absolute difference between VP and VR. The % plane-dependent error was defined as |VP-VR|/W x 100. Results Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean error and % error in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. Conclusions There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane. PMID:28759989
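The error definitions above are simple ratios; a worked check follows (function names are mine, and volumes are compared to weight in grams directly, as the abstract's ratios imply):

    def pct_error(volume_ml, weight_g):
        """% error of a CT volume estimate against measured graft weight."""
        return abs(volume_ml - weight_g) / weight_g * 100

    def pct_plane_dependent_error(vp_ml, vr_ml, weight_g):
        """% plane-dependent error: |VP - VR| / W x 100."""
        return abs(vp_ml - vr_ml) / weight_g * 100

    # Applying the formula to the reported means (VP 761.9, VR 755.0, W 696.9)
    # gives ~1%, not the 4.7% mean reported, because the paper averages the
    # per-donor errors rather than taking the error of the means.
    print(round(pct_plane_dependent_error(761.9, 755.0, 696.9), 2))  # 0.99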
NASA Astrophysics Data System (ADS)
Wang, Guochao; Xie, Xuedong; Yan, Shuhua
2010-10-01
The principle of a dual-wavelength single-grating nanometer displacement measuring system with long range, high precision and good stability is presented. Because the displacement measurement is at nano-level precision, errors caused by a variety of adverse factors must be taken into account. This paper mainly discusses and analyzes errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the non-linear error caused by elliptic polarization of the laser. On the basis of theoretical modeling, the corresponding error formulas are derived as well. Simulation shows that the limit value of the linear error caused by wavelength instability is 2 nm and, assuming Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the non-linear error caused by elliptic polarization are 1.49 nm, 2.99 nm and 4.49 nm for non-orthogonality angles of 1°, 2° and 3°, respectively. The law of the error variation is analyzed for different values of Tx and Ty.
Lee, Yueh-Chang; Wang, Jen-Hung; Chiu, Cheng-Jen
2017-12-08
Several studies have reported the efficacy of orthokeratology for myopia control; however, few published studies have follow-up longer than 3 years. This study examines whether overnight orthokeratology influences the progression rate of the manifest refractive error of myopic children over a longer follow-up period (up to 12 years) and, where changes in progression rate are found, investigates the relationship between refractive changes and different baseline factors, including refractive error, wearing age and lens replacement frequency. In addition, this study documents the long-term safety profile of overnight orthokeratology. This is a retrospective study of sixty-six school-age children who received overnight orthokeratology correction between January 1998 and December 2013. Thirty-six subjects whose baseline age and refractive error matched those in the orthokeratology group were selected to form the control group. These subjects were followed up for at least 12 months. Manifest refractions, cycloplegic refractions, uncorrected and best-corrected visual acuities, power vector of astigmatism, corneal curvature, and lens replacement frequency were obtained for analysis. Data from 203 eyes were derived from 66 orthokeratology subjects (31 males and 35 females) and 36 control subjects (22 males and 14 females) enrolled in this study. Their wearing ages ranged from 7 years to 16 years (mean ± SE, 11.72 ± 0.18 years). The follow-up time ranged from 1 year to 13 years (mean ± SE, 6.32 ± 0.15 years). At baseline, their myopia ranged from -0.5 D to -8.0 D (mean ± SE, -3.70 ± 0.12 D), and astigmatism ranged from 0 D to -3.0 D (mean ± SE, -0.55 ± 0.05 D). Compared with the control group, the orthokeratology group had a significantly (p < 0.001) lower rate of refractive error change during the follow-up period. According to the analysis results of the GEE model, greater astigmatism power was associated with increased change of refractive error during the follow-up years. Overnight orthokeratology was effective in slowing myopia progression over a twelve-year follow-up period and demonstrated a clinically acceptable safety profile.
Thorup, Charlotte Brun; Grønkjær, Mette; Dinesen, Birthe Irene
2017-01-01
Background Step counters have been used to observe activity and support physical activity, but there is limited evidence on their accuracy. Objective The purpose was to investigate the step accuracy of the Fitbit Zip (Zip) in healthy adults during treadmill walking and in patients with cardiac disease while hospitalised and at home. Methods Twenty healthy adults aged 39±13.79 years (mean±SD) wore four Zips while walking on a treadmill at different speeds (1.7–6.1 km/hour), and 24 patients with cardiac disease (age 67±10.03 years) wore a Zip for 24 hours during hospitalisation and for 4 weeks thereafter at home. A Shimmer3 device was used as the criterion standard. Results At a treadmill speed of 3.6 km/hour, the relative error (±SD) for the Zips on the upper body was −0.02±0.67 on the right side and −0.09±0.67 on the left side. For the Zips on the waist, this was 0.08±0.71 on the right side and −0.08±0.47 on the left side. At treadmill speeds of 3.6 km/hour and higher, the average per cent relative error was <3%. The 24-hour test for the hospitalised patients showed a relative error of −47.15±24.11 (intraclass correlation coefficient (ICC): 0.60), and for the 24-hour test at home, the relative error was −27.51±28.78 (ICC: 0.87). Thus, none of the 24-hour tests had an error below the expected 20% limit. In periods of evident walking during the 24-hour tests, the Zip had an average per cent relative error of <3% at 3.6 km/hour and higher speeds. Conclusions A speed of 3.6 km/hour or higher is required to expect acceptable accuracy in step measurement using a Zip, on a treadmill and in real life. Inaccuracies are directly related to slow speeds, which might be a problem for patients with cardiac disease who walk at a slow pace. PMID:28363918
Cost-effectiveness of an electronic medication ordering system (CPOE/CDSS) in hospitalized patients.
Vermeulen, K M; van Doormaal, J E; Zaal, R J; Mol, P G M; Lenderink, A W; Haaijer-Ruskamp, F M; Kosterink, J G W; van den Bemt, P M L A
2014-08-01
Prescribing medication is an important aspect of almost all in-hospital treatment regimes. Besides their obviously beneficial effects, medicines can also cause adverse drug events (ADE), which increase morbidity, mortality and health care costs. Some of these ADEs arise from medication errors, e.g. at the prescribing stage; ADEs caused by medication errors are preventable ADEs. Until now, medication ordering was primarily a paper-based process and consequently error prone. Computerized Physician Order Entry combined with a basic Clinical Decision Support System (CPOE/CDSS) is considered to enhance patient safety. Limited information is available on the balance between the health gains and the costs that need to be invested in order to achieve these positive effects. The aim of this study was to assess the balance between the effects and costs of CPOE/CDSS compared to traditional paper-based medication ordering. The economic evaluation was performed alongside a clinical study (interrupted time series design) on the effectiveness of CPOE/CDSS, including a cost-minimization and a cost-effectiveness analysis. Data collection took place between 2005 and 2008. Analyses were performed from a hospital perspective. The study was performed in a general teaching hospital and a university medical centre on general internal medicine, gastroenterology and geriatric wards. CPOE/CDSS was compared to a traditional paper-based system. All costs of both medication ordering systems are based on resources used and time invested. Prices were expressed in euros (price level 2009). Effectiveness outcomes were medication errors and preventable adverse drug events (pADEs). During the paper-based prescribing period 592 patients were included, and during the CPOE/CDSS period 603. Total costs of the paper-based system and CPOE/CDSS amounted to €12.37 and €14.91 per patient/day, respectively. The incremental cost-effectiveness ratio (ICER) was 3.54 for medication errors and 322.70 for preventable adverse drug events, indicating the extra amount (€) that has to be invested in order to prevent one medication error or one pADE. CPOE with basic CDSS contributes to a decreased risk of preventable harm. Overall, the extra costs of CPOE/CDSS needed to prevent one ME or one pADE seem acceptable. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
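An ICER is the incremental cost divided by the incremental effect. The sketch below shows the mechanics only: the per-patient/day costs are taken from the abstract, but the error rates are placeholders, since the abstract reports only the resulting ratios.

    def icer(cost_new, cost_old, effect_new, effect_old):
        """Incremental cost-effectiveness ratio: extra cost per extra unit
        of effect. Effects here are error counts, so improvement is
        effect_old - effect_new."""
        return (cost_new - cost_old) / (effect_old - effect_new)

    # Illustrative: costs per patient/day from the abstract, error rates
    # per patient/day invented for the example (not the study's counts).
    print(round(icer(14.91, 12.37, effect_new=0.3, effect_old=1.0), 2))
    # 2.54 / 0.7 = 3.63 euro per medication error prevented (illustrative)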
Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Wahi, A. K.
2003-12-01
Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increases linearly with the increase in measurement error, in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone, as defined by triangle area, is not a valid discriminator of whether or not the estimator provides accurate estimates of the gradient magnitude and orientation. This research was funded by WIPP programs administered by the U.S. Department of Energy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
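As a concrete illustration of what a three-point estimator computes, the sketch below fits a plane through three head measurements and reports the gradient magnitude and orientation. The well coordinates and heads are invented for illustration; they are not WIPP data.

```python
import numpy as np

# Three-point hydraulic gradient estimator: fit the plane h = a + b*x + c*y
# exactly through three head measurements; (b, c) is the gradient of head.
pts = np.array([[0.0, 0.0], [100.0, 10.0], [30.0, 90.0]])  # well locations (m), hypothetical
heads = np.array([10.00, 9.85, 9.90])                      # measured heads (m), hypothetical

A = np.column_stack([np.ones(3), pts])     # design matrix [1, x, y]
a, b, c = np.linalg.solve(A, heads)        # exact fit through the three points

magnitude = np.hypot(b, c)
orientation = np.degrees(np.arctan2(c, b)) # direction of increasing head
print(f"|grad h| = {magnitude:.2e}, orientation = {orientation:.1f} deg")
```

Measurement error in any of the three heads propagates directly into (b, c), which is why the paper's base-to-height criterion matters: thin, elongated triangles make the solve ill-conditioned.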
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 1 2011-01-01 2011-01-01 false Acceptances. 7.1007 Section 7.1007 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY BANK ACTIVITIES AND OPERATIONS Bank Powers § 7.1007 Acceptances. A national bank is not limited in the character of acceptances it may make in...
Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement
Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian
2013-01-01
Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom, and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super-soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
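The "orthogonal error components" assumption above amounts to a root-sum-square decomposition; a minimal sketch using the reported averages:

```python
from math import sqrt

# Root-sum-square decomposition under the orthogonality assumption:
# overall^2 = before_insertion^2 + due_to_insertion^2 (all in mm).
overall, before_insertion = 2.5, 1.3                   # reported averages
due_to_insertion = sqrt(overall**2 - before_insertion**2)
print(f"due-to-insertion ~ {due_to_insertion:.2f} mm")  # ~2.14 mm; the paper's
# 2.13 mm presumably reflects unrounded inputs
```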
Accuracy study of a robotic system for MRI-guided prostate needle placement.
Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian
2013-09-01
Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). The before-insertion error was measured directly in a soft phantom, and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.
SURBAL: computerized metes and bounds surveying
Roger N. Baughman; James H. Patric
1970-01-01
A computer program has been developed at West Virginia University for use in metes and bounds surveying. Stations, slope distances, slope angles, and bearings are the primary information needed for this program. Other information needed may include magnetic deviation, acceptable closure error, desired map scale, and title designation. SURBAL prints out latitudes and...
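The latitude/departure bookkeeping a program of this kind performs can be sketched briefly. The sketch below checks traverse closure for a hypothetical four-course boundary; the course list is invented, and the slope-to-horizontal reduction SURBAL would do from slope distances and angles is omitted.

```python
from math import sin, cos, radians, hypot

# Each course: (bearing in degrees clockwise from north, horizontal distance).
# Values are hypothetical; a real traverse would first reduce slope distances.
courses = [(45.0, 120.0), (135.0, 80.0), (225.0, 118.0), (315.0, 82.0)]

lat = sum(d * cos(radians(b)) for b, d in courses)  # north-south components
dep = sum(d * sin(radians(b)) for b, d in courses)  # east-west components

closure = hypot(lat, dep)                           # linear misclosure
perimeter = sum(d for _, d in courses)
print(f"closure error = {closure:.3f}, precision = 1:{perimeter / closure:.0f}")
```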
Resolving Ethical Disputes Through Arbitration: An Alternative to Code Penalties.
ERIC Educational Resources Information Center
Barwis, Gail Lund
Arbitration cases involving journalism ethics can be grouped into three major categories: outside activities that lead to conflicts of interest, acceptance of gifts that compromise journalistic objectivity, and writing false or misleading information or failing to check facts or correct errors. In most instances, failure to adhere to ethical…
Junior High Student Responsibilities for Basic Skills.
ERIC Educational Resources Information Center
Parker, Charles C.
This paper advances the thesis that students should be trained to recognize acceptable and unacceptable performances in basic skill areas and should assume responsibility for attaining proficiency in these areas. Among the topics discussed are the value of having junior high school students check their own assignments, discover their errors, and…
49 CFR 236.1023 - Errors and malfunctions.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., the railroad shall maintain a database of all safety-relevant hazards as set forth in the PTCSP and... next business day; (2) Be transmitted in a manner and form acceptable to the Associate Administrator... information shall be forwarded to the Associate Administrator as soon as practicable in supplemental reports...
49 CFR 236.1023 - Errors and malfunctions.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., the railroad shall maintain a database of all safety-relevant hazards as set forth in the PTCSP and... next business day; (2) Be transmitted in a manner and form acceptable to the Associate Administrator... information shall be forwarded to the Associate Administrator as soon as practicable in supplemental reports...
49 CFR 236.1023 - Errors and malfunctions.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., the railroad shall maintain a database of all safety-relevant hazards as set forth in the PTCSP and... next business day; (2) Be transmitted in a manner and form acceptable to the Associate Administrator... information shall be forwarded to the Associate Administrator as soon as practicable in supplemental reports...
49 CFR 236.1023 - Errors and malfunctions.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., the railroad shall maintain a database of all safety-relevant hazards as set forth in the PTCSP and... next business day; (2) Be transmitted in a manner and form acceptable to the Associate Administrator... information shall be forwarded to the Associate Administrator as soon as practicable in supplemental reports...
49 CFR 236.1023 - Errors and malfunctions.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., the railroad shall maintain a database of all safety-relevant hazards as set forth in the PTCSP and... next business day; (2) Be transmitted in a manner and form acceptable to the Associate Administrator... information shall be forwarded to the Associate Administrator as soon as practicable in supplemental reports...
A Productivity Analysis of Nonprocedural Languages.
1982-12-01
abstracts. The tools they work with are up-to-date, well documented, and from acceptable/reliable sources. ... File inversion is possible at any level. Additionally, any field can be indexed at any level. b. Online operation with interactive error correction...
The Prevalence and Special Educational Requirements of Dyscompetent Physicians
ERIC Educational Resources Information Center
Williams, Betsy W.
2006-01-01
Underperformance among physicians is not well studied or defined; yet, the identification and remediation of physicians who are not performing up to acceptable standards is central to quality care and patient safety. Methods for estimating the prevalence of dyscompetence include evaluating available data on medical errors, malpractice claims,…
NASA Technical Reports Server (NTRS)
Mercer, Joey S.; Bienert, Nancy; Gomez, Ashley; Hunt, Sarah; Kraut, Joshua; Martin, Lynne; Morey, Susan; Green, Steven M.; Prevot, Thomas; Wu, Minghong G.
2013-01-01
A Human-In-The-Loop air traffic control simulation investigated the impact of uncertainties in trajectory predictions on NextGen Trajectory-Based Operations concepts, seeking to understand when the automation would become unacceptable to controllers or when performance targets could no longer be met. Retired air traffic controllers staffed two en route transition sectors, delivering arrival traffic to the northwest corner-post of Atlanta approach control under time-based metering operations. Using trajectory-based decision-support tools, the participants worked the traffic under varying levels of wind forecast error and aircraft performance model error, impacting the ground automation's ability to make accurate predictions. Results suggest that the controllers were able to maintain high levels of performance, despite even the highest levels of trajectory prediction errors.
Progressive statistics for studies in sports medicine and exercise science.
Hopkins, William G; Marshall, Stephen W; Batterham, Alan M; Hanin, Juri
2009-01-01
Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.
MacIntyre, Hugh L; Cullen, John J
2016-08-01
Regulations for ballast water treatment specify limits on the concentrations of living cells in discharge water. The vital stains fluorescein diacetate (FDA) and 5-chloromethylfluorescein diacetate (CMFDA) in combination have been recommended for use in verification of ballast water treatment technology. We tested the effectiveness of FDA and CMFDA, singly and in combination, in discriminating between living and heat-killed populations of 24 species of phytoplankton from seven divisions, verifying with quantitative growth assays that uniformly live and dead populations were compared. The diagnostic signal, per-cell fluorescence intensity, was measured by flow cytometry and alternate discriminatory thresholds were defined statistically from the frequency distributions of the dead or living cells. Species were clustered by staining patterns: for four species, the staining of live versus dead cells was distinct, and live-dead classification was essentially error free. But overlap between the frequency distributions of living and heat-killed cells in the other taxa led to unavoidable errors, well in excess of 20% in many. In 4 very weakly staining taxa, the mean fluorescence intensity in the heat-killed cells was higher than that of the living cells, which is inconsistent with the assumptions of the method. Applying the criteria of ≤5% false negative plus ≤5% false positive errors, and no significant loss of cells due to staining, FDA and FDA+CMFDA gave acceptably accurate results for only 8-10 of 24 species (i.e., 33%-42%). CMFDA was the least effective stain and its addition to FDA did not improve the performance of FDA alone. © 2016 The Authors. Journal of Phycology published by Wiley Periodicals, Inc. on behalf of Phycological Society of America.
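The threshold logic described above can be sketched compactly: pick a discriminatory threshold from one population's frequency distribution and count misclassifications on both sides. Everything below, including the distributions, sample sizes, and the percentile rule, is simulated for illustration and is not the study's data.

```python
import numpy as np

# Live/dead discrimination from per-cell fluorescence intensity distributions.
rng = np.random.default_rng(0)
live = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)  # brightly staining live cells
dead = rng.lognormal(mean=1.5, sigma=0.5, size=10_000)  # weakly staining heat-killed cells

threshold = np.percentile(dead, 95)      # e.g., 95th percentile of the dead population
false_neg = np.mean(live <= threshold)   # live cells misclassified as dead
false_pos = np.mean(dead > threshold)    # dead cells misclassified as live
acceptable = false_neg <= 0.05 and false_pos <= 0.05   # the <=5% + <=5% criterion
print(f"FN = {false_neg:.1%}, FP = {false_pos:.1%}, acceptable: {acceptable}")
```

With the overlap simulated here the criterion fails, which is exactly the failure mode the paper reports for most weakly staining taxa.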
de Mesquita, Gabriel Nunes; de Oliveira, Marcela Nicácio Medeiros; Matoso, Amanda Ellen Rodrigues; Filho, Alberto Galvão de Moura; de Oliveira, Rodrigo Ribeiro
2018-04-24
Study Design Clinical measurement study. Background Achilles tendon disorders are very common among athletes, and it is important to objectively measure symptoms and functional limitations related to Achilles tendinopathy using outcome measures that have been validated in the language of the target population. Objectives To perform a cross-cultural adaptation and to evaluate the measurement properties of the Brazilian version of the Victorian Institute of Sport Assessment-Achilles (VISA-A) questionnaire. Methods We adapted the VISA-A questionnaire to Brazilian Portuguese (VISA-A-Br). The questionnaire was applied on 2 occasions with an interval of 5 to 14 days. We evaluated the following measurement properties: internal consistency, test-retest reliability, measurement error, construct validity, and ceiling and floor effects. Results The VISA-A-Br showed good internal consistency (Cronbach's α = 0.79; after excluding 1 item at a time, Cronbach's α = 0.73 to 0.84), good test-retest reliability (ICC agreement (2,1) = 0.84, 95% confidence interval = 0.71-0.91), an acceptable measurement error (standard error of measurement = 3.25 points and smallest detectable change = 9.02 points), good construct validity (Spearman's coefficient = 0.73 with the LEFS, and with the FAOS in its 5 subscales: Pain = 0.66, other Symptoms = 0.48, Function in daily living (ADL) = 0.59, Function in sport and recreation = 0.67, and foot and ankle-related Quality of Life = 0.70), and no ceiling and floor effects. Conclusion The VISA-A-Br is equivalent to the original version; it has been validated and confirmed as reliable to measure pain and function among the Brazilian population with Achilles tendinopathy, and it can be used in clinical and scientific settings. J Orthop Sports Phys Ther, Epub 24 Apr 2018. doi:10.2519/jospt.2018.7897.
An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.
Liu, Zhiyuan; Wang, Changhui
2015-10-23
In this paper, a new method is developed for mass air flow (MAF) sensor error compensation and online updating of the error map (or lookup table), addressing errors due to installation and aging in a diesel engine. Since the MAF sensor error depends on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. The 2D map representing the MAF sensor error is described by a piecewise bilinear interpolation model, which can be written as a dot product between a regression vector and a parameter vector using membership functions. Combining the 2D map regression model with the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating-point-dependent error of the MAF sensor can be approximated acceptably by the 2D map obtained with the proposed method.
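To make the "map as dot product" idea concrete, here is a minimal sketch of a 2D lookup table evaluated as phi(u)·theta, where phi holds bilinear membership weights and theta the node values an observer could adapt. The grid breakpoints, operating point, and the toy update gain are all assumptions for illustration, not the paper's LPV observer.

```python
import numpy as np

# 2D map value = dot(regressor, theta): the regressor carries bilinear
# membership weights, nonzero only at the 4 grid nodes surrounding the
# operating point. Grids and numbers are hypothetical.
speed_grid = np.array([800.0, 1600.0, 2400.0, 3200.0])  # engine speed (rpm)
fuel_grid = np.array([10.0, 30.0, 50.0])                # fuel mass (mg/stroke)
theta = np.zeros(speed_grid.size * fuel_grid.size)      # map node values

def regressor(speed, fuel):
    i = int(np.clip(np.searchsorted(speed_grid, speed) - 1, 0, speed_grid.size - 2))
    j = int(np.clip(np.searchsorted(fuel_grid, fuel) - 1, 0, fuel_grid.size - 2))
    a = (speed - speed_grid[i]) / (speed_grid[i + 1] - speed_grid[i])
    b = (fuel - fuel_grid[j]) / (fuel_grid[j + 1] - fuel_grid[j])
    phi = np.zeros_like(theta)
    for di, dj, w in ((0, 0, (1 - a) * (1 - b)), (1, 0, a * (1 - b)),
                      (0, 1, (1 - a) * b), (1, 1, a * b)):
        phi[(i + di) * fuel_grid.size + (j + dj)] = w
    return phi

phi = regressor(2000.0, 25.0)
map_error = phi @ theta                 # interpolated MAF error at this point
theta += 0.1 * phi * (0.5 - map_error)  # toy gradient step toward a residual
```

The paper couples such a regressor with an LPV adaptive observer and proves convergence; the single gradient step above is only a stand-in for that machinery.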
Goede, Simon L; Leow, Melvin Khee-Shing
2013-01-01
This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in the ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (rate-independent hysteresis). When the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.
The error in total error reduction.
Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R
2014-02-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
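A compact way to see the TER/LER distinction is in the weight-update rules themselves. The sketch below contrasts a Rescorla-Wagner-style TER update with a per-cue LER update during compound conditioning; the learning rate, outcome value, and trial count are arbitrary illustrative choices, not fits from the paper.

```python
import numpy as np

# TER: one shared error term, the outcome minus the summed prediction of all
# present cues. LER: each cue learns from its own prediction error.
alpha, lam, trials = 0.2, 1.0, 50
present = np.array([0, 1])          # a two-cue compound, both cues present

w_ter, w_ler = np.zeros(2), np.zeros(2)
for _ in range(trials):
    w_ter[present] += alpha * (lam - w_ter[present].sum())  # total error
    w_ler[present] += alpha * (lam - w_ler[present])        # local errors

print(w_ter)  # TER: the cues share the outcome, each weight -> ~0.5
print(w_ler)  # LER: each cue independently -> ~1.0
```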
Geometrical correction factors for heat flux meters
NASA Technical Reports Server (NTRS)
Baumeister, K. J.; Papell, S. S.
1974-01-01
General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. The local averaging error e(x) is defined as the difference between the measured value of the heat flux and the local value which occurs at the center of the gage. In terms of e(x), a correction procedure is presented which allows a better estimate for the true value of the local heat flux. For many practical problems, it is possible to use relatively large gages to obtain acceptable heat flux measurements.
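The local averaging error defined above can be sketched numerically: a strip gage reads the mean of q(x) over its width rather than the value at its center. The flux profile, gage width, and units below are hypothetical, not the paper's derived formulas.

```python
import numpy as np

# e(x) = (mean of q over the gage width centered at x) - q(x).
def q(x):
    return 100.0 * np.exp(-(x / 0.05) ** 2)  # hypothetical peaked flux, W/cm^2 vs cm

def averaging_error(x_center, width, n=2001):
    xs = np.linspace(x_center - width / 2, x_center + width / 2, n)
    measured = q(xs).mean()             # what the strip gage reports
    return measured - q(x_center)       # local averaging error e(x)

e = averaging_error(x_center=0.0, width=0.04)
print(f"e(0) = {e:.2f} W/cm^2; subtracting e from the reading improves the local estimate")
```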
SABRINA - an interactive geometry modeler for MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.; Murphy, J.
One of the most difficult tasks when analyzing a complex three-dimensional system with Monte Carlo is geometry model development. SABRINA attempts to make the modeling process more user-friendly and less of an obstacle. It accepts both combinatorial solid bodies and MCNP surfaces and produces MCNP cells. The model development process in SABRINA is highly interactive and gives the user immediate feedback on errors. Users can view their geometry from arbitrary perspectives while the model is under development and interactively find and correct modeling errors. An example of a SABRINA display is shown. It represents a complex three-dimensional shape.
Inter-satellite links for satellite autonomous integrity monitoring
NASA Astrophysics Data System (ADS)
Rodríguez-Pérez, Irma; García-Serrano, Cristina; Catalán Catalán, Carlos; García, Alvaro Mozo; Tavella, Patrizia; Galleani, Lorenzo; Amarillo, Francisco
2011-01-01
A new integrity monitoring mechanism, to be implemented on board a GNSS satellite taking advantage of inter-satellite links, has been introduced. It is based on accurate range and Doppler measurements affected neither by atmospheric delays nor by local ground degradation (multipath and interference). By a linear combination of the inter-satellite link observables, appropriate observables for both satellite orbit and clock monitoring are obtained, and with the proposed algorithms it is possible to reduce the time-to-alarm and the probability of undetected satellite anomalies. Several test cases have been run to assess the performance of the new orbit and clock monitoring algorithms in a complete scenario (satellite-to-satellite and satellite-to-ground links) and in a satellite-only scenario. The results of this experimentation campaign demonstrate that the orbit monitoring algorithm is able to detect orbital feared events while the position error at the worst user location is still within acceptable limits. For instance, an unplanned manoeuvre in the along-track direction is detected (with a probability of false alarm equal to 5 × 10⁻⁹) when the position error at the worst user location is 18 cm. The experimentation also reveals that the clock monitoring algorithm is able to detect phase jumps, frequency jumps and instability degradation in the clocks, but the latency and performance of detection strongly depend on the noise added by the clock measurement system.
The next generation in optical transport semiconductors: IC solutions at the system level
NASA Astrophysics Data System (ADS)
Gomatam, Badri N.
2005-02-01
In this tutorial overview, we survey some of the challenging problems facing optical transport and their solutions using new semiconductor-based technologies. Advances in 0.13 µm CMOS, SiGe/HBT and InP/HBT IC process technologies and mixed-signal design strategies are the fundamental breakthroughs that have made these solutions possible. In combination with innovative packaging and transponder/transceiver architectures, IC approaches have clearly demonstrated enhanced optical link budgets with simultaneously lower (perhaps the lowest to date) cost and manufacturability tradeoffs. This paper will describe: (1) electronic dispersion compensation, broadly viewed as the overcoming of dispersion-based limits to OC-192 links and the extension of link budgets; (2) error control/coding, also known as Forward Error Correction (FEC); and (3) adaptive receivers for signal quality monitoring, i.e., real-time estimation of Q/OSNR, eye pattern, signal BER and related temporal statistics (such as jitter). We will discuss the theoretical underpinnings of these receiver and transmitter architectures, provide examples of system performance and conclude with general market trends. These physical-layer IC solutions represent a fundamentally new toolbox of options for equipment designers addressing system-level problems. With unmatched cost and yield/performance tradeoffs, it is expected that IC approaches will in turn provide significant flexibility for carriers and service providers, who must ultimately manage the network and assure acceptable quality of service under stringent cost constraints.
Navigation Accuracy Guidelines for Orbital Formation Flying
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Alfriend, Kyle T.
2004-01-01
Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
Navigation Accuracy Guidelines for Orbital Formation Flying Missions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Alfriend, Kyle T.
2003-01-01
Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
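For near-circular orbits, the standard two-body relation behind guidelines of this kind is that a semi-major axis difference da produces along-track drift of roughly 3*pi*da per orbit. The sketch below, with an invented da and drift budget, runs the calculation in both directions; it is an illustration of the general relation, not the paper's exact formulation for arbitrary eccentricity.

```python
from math import pi

# Two-body, near-circular approximation: along-track drift per orbit ~ 3*pi*da.
da = 10.0                                           # semi-major axis difference (m), hypothetical
print(f"drift ~ {3 * pi * da:.0f} m per orbit")      # ~94 m/orbit

budget = 100.0                                      # allowed drift (m/orbit), hypothetical
print(f"required |da| < {budget / (3 * pi):.1f} m")  # navigation accuracy needed
```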
Temporal consistent depth map upscaling for 3DTV
NASA Astrophysics Data System (ADS)
Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger
2014-03-01
The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibilities of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time of-Flight camera (ToF), can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.
Human machine interface by using stereo-based depth extraction
NASA Astrophysics Data System (ADS)
Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan
2014-03-01
The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibilities of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time of-Flight camera (ToF), can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.
Kane, J.S.; Evans, J.R.; Jackson, J.C.
1989-01-01
Accurate and precise determinations of tin in geological materials are needed for fundamental studies of tin geochemistry, and for tin prospecting purposes. Achieving the required accuracy is difficult because of the different matrices in which Sn can occur (i.e. sulfides, silicates and cassiterite), and because of the variability of literature values for Sn concentrations in geochemical reference materials. We have evaluated three methods for the analysis of samples for Sn concentration: graphite furnace atomic absorption spectrometry (HGA-AAS) following iodide extraction, inductively coupled plasma atomic emission spectrometry (ICP-OES), and energy-dispersive X-ray fluorescence (EDXRF) spectrometry. Two of these methods (HGA-AAS and ICP-OES) required sample decomposition either by acid digestion or fusion, while the third (EDXRF) was performed directly on the powdered sample. Analytical details of all three methods, their potential errors, and the steps necessary to correct these errors were investigated. Results showed that similar accuracy was achieved from all methods for unmineralized samples, which contain no known Sn-bearing phase. For mineralized samples, which contain Sn-bearing minerals, either cassiterite or stannous sulfides, only the EDXRF and fusion ICP-OES methods provided acceptable accuracy. This summary of our study provides information which helps to assure correct interpretation of databases for underlying geochemical processes, regardless of the method of data collection and its inherent limitations. © 1989.
Evaluation of Trajectory Errors in an Automated Terminal-Area Environment
NASA Technical Reports Server (NTRS)
Oseguera-Lohr, Rosa M.; Williams, David H.
2003-01-01
A piloted simulation experiment was conducted to document the trajectory errors associated with use of an airplane's Flight Management System (FMS) in conjunction with a ground-based ATC automation system, the Center-TRACON Automation System (CTAS), in the terminal area. Three different arrival procedures were compared: current-day (vectors from ATC), modified (current-day with minor updates), and data link with FMS lateral navigation. Six active airline pilots flew simulated arrivals in a fixed-base simulator. The FMS-datalink procedure resulted in the smallest time and path distance errors, indicating that use of this procedure could reduce the CTAS arrival-time prediction error by about half over the current-day procedure. Significant sources contributing to the arrival-time error were crosstrack errors and early speed reduction in the last 2-4 miles before the final approach fix. Pilot comments were all very positive, indicating the FMS-datalink procedure was easy to understand and use, and the increased head-down time and workload did not detract from the benefit. Issues that need to be resolved before this method of operation would be ready for commercial use include development of procedures acceptable to controllers, better speed conformance monitoring, and FMS database procedures to support the approach transitions.
Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O
2016-11-01
Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. The aim was to evaluate the frequency and nature of non-clinical transcription errors using VR dictation software. A retrospective audit of 378 finalised radiology reports was performed. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72%) reports contained ≥1 error, with 7 (1.85%) containing 'significant' and 9 (2.38%) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22%) classified as 'insignificant', 7 (7.78%) as 'significant' and 9 (10%) as 'very significant'. 68 (75.56%) errors were 'spelling and grammar', 20 (22.22%) 'missense' and 2 (2.22%) 'nonsense'. 'Punctuation' was the most common sub-type, accounting for 27 errors (30%). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence compared with 0.030 for plain film. Longer reports had a higher error rate, with reports of >25 sentences containing an average of 1.23 errors per report compared with 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences of errors with the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.
The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2016-01-01
Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.
Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.
Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B
2017-01-01
In a medical environment such as an Intensive Care Unit, there are many possible causes of error, and one important cause is the effect of human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic human intellectual tasks, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole medical CPHSystem design model. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by human intellectual tasks.
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is one important cause of the domino effect in accidents involving chemical process equipment. Models considering propagation probability and threshold values of the domino effect caused by overpressure have been proposed in a previous study. In order to test the rationality and validity of the models reported in the reference, the two boundary values separating the three reported damage degrees were treated as random variables in the interval [0, 100%]. Based on the overpressure data for damage to the equipment and the damage state, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values. A relationship between the mean square error and the two boundary values was thereby obtained, and its minimum located; compared with the result of the present work, the mean square error decreases by about 3%. This error is within the acceptable range for engineering applications, so the models reported can be considered reasonable and valid.
A new accuracy measure based on bounded relative error for time series forecasting
Twycross, Jamie; Garibaldi, Jonathan M.
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
A new accuracy measure based on bounded relative error for time series forecasting.
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
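As a sketch of how such a bounded-relative measure can be computed, following the construction the abstract describes (bound each error relative to a benchmark's error, average, then unscale), the function below is an illustration under that reading, and all data are invented.

```python
import numpy as np

# UMBRAE-style computation: bounded relative absolute error against a
# user-selected benchmark, averaged, then unscaled.
def umbrae(actual, forecast, benchmark):
    e = np.abs(np.asarray(actual) - np.asarray(forecast))
    e_star = np.abs(np.asarray(actual) - np.asarray(benchmark))
    brae = e / (e + e_star)          # bounded relative absolute error in [0, 1]
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)     # <1: better than benchmark; >1: worse

actual = [10.0, 12.0, 11.0, 13.0]
forecast = [10.5, 11.5, 11.2, 12.6]
naive = [9.0, 10.0, 12.0, 11.0]      # e.g., a previous-value (naive) benchmark
print(f"UMBRAE = {umbrae(actual, forecast, naive):.3f}")
```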
Evaluation of a Teleform-based data collection system: a multi-center obesity research case study.
Jenkins, Todd M; Wilson Boyce, Tawny; Akers, Rachel; Andringa, Jennifer; Liu, Yanhong; Miller, Rosemary; Powers, Carolyn; Ralph Buncher, C
2014-06-01
Utilizing electronic data capture (EDC) systems in data collection and management allows automated validation programs to preemptively identify and correct data errors. For our multi-center, prospective study we chose to use TeleForm, a paper-based data capture package that uses recognition technology to create case report forms (CRFs) with functionality similar to EDC, including custom scripts to identify entry errors. We quantified the accuracy of the optimized system through a data audit of CRFs and the study database, examining selected critical variables for all subjects in the study, as well as an audit of all variables for 25 randomly selected subjects. Overall we found 6.7 errors per 10,000 fields, with similar estimates for critical (6.9/10,000) and non-critical (6.5/10,000) variables; these values fall below the acceptable quality threshold of 50 errors per 10,000 established by the Society for Clinical Data Management. However, error rates varied widely by type of data field, with the highest rate observed for open text fields. Copyright © 2014 Elsevier Ltd. All rights reserved.
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits, and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture or industrial use.
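For readers unfamiliar with ARIMA-based prediction of this kind, a minimal sketch follows, using statsmodels on a synthetic monthly series; the series, the (p, d, q) order, and the variable name are placeholders, not the paper's data or fitted model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly dissolved-oxygen series with a seasonal cycle plus noise.
rng = np.random.default_rng(1)
months = pd.date_range("2005-01", periods=96, freq="MS")
do_mgL = pd.Series(7 + np.sin(np.arange(96) * 2 * np.pi / 12) + rng.normal(0, 0.3, 96),
                   index=months)

fit = ARIMA(do_mgL, order=(1, 1, 1)).fit()     # placeholder order, not the paper's
forecast = fit.get_forecast(steps=12)
print(forecast.predicted_mean.head())           # predicted values
print(forecast.conf_int(alpha=0.05).head())     # 95% confidence limits
```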
Omar, Hazim; Ahmad, Alwani Liyan; Hayashi, Noburo; Idris, Zamzuri; Abdullah, Jafri Malin
2015-12-01
Magnetoencephalography (MEG) has been extensively used to measure small-scale neuronal brain activity. Although it is widely acknowledged as a sensitive tool for deciphering brain activity and source localisation, the accuracy of the MEG system must be critically evaluated. Typically, on-site calibration with the provided phantom (LocalPhantom) is used. However, this method is still questionable due to uncertainty that may originate from the phantom itself. Ideally, the validation of MEG measurements would require cross-site comparability. A simple phantom test was therefore performed twice, in addition to a measurement taken with a calibrated reference phantom (RefPhantom) obtained from Elekta Oy of Helsinki, Finland. Comparisons were made on two main aspects, the dipole moment (Qpp) and the difference in dipole distance from the origin (d), after tests confirmed statistically equal means and variances. The Qpp measurements for the LocalPhantom and RefPhantom were 978 (SD 24) nAm and 988 (SD 32) nAm, respectively, both well within the accepted range of 900 to 1100 nAm. Moreover, the shifts in d for the LocalPhantom and RefPhantom were 1.84 mm (SD 0.53) and 2.14 mm (SD 0.78), respectively, below the maximum acceptance limit of 5.0 mm from the nominal dipole location. The LocalPhantom seems to outperform the reference phantom, as indicated by its smaller standard error (SE 0.094) compared with the latter (SE 0.138). The results indicate that the HUSM MEG system was in excellent working condition in terms of dipole magnitude and localisation measurements, as these values passed the acceptance limit criteria of the phantom test.
Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.
Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D
2018-04-07
We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. That requires both accurate delineation of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within a 10 mm diameter change and 9.1% within a 5 mm shift, respectively. Modest errors in the assumed transducer separation produced the maximum SOS error from miscalibrations (57.3% within a 5 mm shift); still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in the specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
Is the perception of clean, humid air indeed affected by cooling the respiratory tract?
NASA Astrophysics Data System (ADS)
Burek, Rudolf; Polednik, Bernard; Guz, Łukasz
2017-07-01
The study aims to determine exposure-response relationships after short exposure to clean air and long exposure to air polluted by people. The impact of the water vapor content of indoor air on its acceptability (ACC) was assessed by occupants after a short exposure to clean air and an hour-long exposure to increasingly polluted air. The study presents a critical analysis of the stimulation of olfactory sensations by air enthalpy suggested in previous models and proposes a new model based on the Weber-Fechner law. Our assumption was that water vapor is the stimulus of olfactory sensations. The model was calibrated and verified in field conditions, in a mechanically ventilated and air-conditioned auditorium. Measurements of the air temperature, relative humidity, velocity and CO2 content were carried out; the acceptability of air quality was assessed by 162 untrained students. The subjective assessments and the measurements of environmental quality allowed the Weber coefficients and the threshold concentrations of water vapor to be determined, and established the limitations of the model at short and long exposure to polluted indoor air. The results are in agreement with previous studies. The standard error equals 0.07 for immediate assessments and 0.17 for assessments after adaptation. Based on the model, one can predict the ACC assessments of trained and untrained participants.
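A minimal sketch of a Weber-Fechner-type acceptability model with water vapor as the stimulus is given below. The logarithmic form follows the law; the coefficient, threshold concentration, and response scale are invented placeholders, not the paper's calibrated values.

```python
import numpy as np

# Weber-Fechner form: perceived response changes with the log of the stimulus.
def acceptability(c_vapor, c_threshold=6.0, k=0.5):
    """ACC on a -1 (clearly unacceptable) .. +1 (clearly acceptable) scale.
    c_threshold and k are hypothetical calibration constants."""
    acc = 1.0 - k * np.log(c_vapor / c_threshold)
    return np.clip(acc, -1.0, 1.0)

for c in (6.0, 9.0, 12.0):   # water vapor content, g/kg dry air (hypothetical)
    print(f"c = {c:4.1f} g/kg -> ACC = {acceptability(c):+.2f}")
```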
Experimental comparison of icing cloud instruments
NASA Technical Reports Server (NTRS)
Olsen, W.; Takeuchi, D. M.; Adams, K.
1983-01-01
Icing cloud instruments were tested in the spray cloud of the Icing Research Tunnel (IRT) in order to determine their relative accuracy and their limitations over a broad range of conditions. It was found that the average of the readings from each of the liquid water content (LWC) instruments tested agreed closely with the others and with the IRT calibration, but all have a data scatter (± one standard deviation) of about ±20 percent. The effect of this ±20 percent uncertainty is probably acceptable in aero-penalty and deicer experiments. Existing laser spectrometers proved to be too inaccurate for LWC measurements. The error due to water runoff was the same for all ice-accretion LWC instruments. Any given laser spectrometer proved to be highly repeatable in its indications of volume median drop size (DVM), LWC and drop size distribution. However, there was significant disagreement between different spectrometers of the same model, even after careful standard calibration and data analysis. The scatter about the mean of the DVM data from five Axial Scattering Spectrometer Probes was ±20 percent (± one standard deviation), and the average was 20 percent higher than the old IRT calibration. The ±20 percent uncertainty in DVM can cause an unacceptable variation in the drag coefficient of an airfoil with ice; however, the variation in a deicer performance test may be acceptable.
Sunstein, Cass R
2014-10-01
Choice can be an extraordinary benefit or an immense burden. In some contexts, people choose not to choose, or would do so if they were asked. In part because of limitations of "bandwidth," and in part because of awareness of their own lack of information and potential biases, people sometimes want other people to choose for them. For example, many people prefer not to make choices about their health or retirement plans; they want to delegate those choices to a private or public institution that they trust (and may well be willing to pay a considerable amount to those who are willing to accept such delegations). This point suggests that however well accepted, the line between active choosing and paternalism is often illusory. When private or public institutions override people's desire not to choose and insist on active choosing, they may well be behaving paternalistically, through a form of choice-requiring paternalism. Active choosing can be seen as a form of libertarian paternalism, and a frequently attractive one, if people are permitted to opt out of choosing in favor of a default (and in that sense permitted not to choose); it is a form of nonlibertarian paternalism insofar as people are required to choose. For both ordinary people and private or public institutions, the ultimate judgment in favor of active choosing, or in favor of choosing not to choose, depends largely on the costs of decisions and the costs of errors.
Song, Lunar; Park, Byeonghwa; Oh, Kyeung Mi
2015-04-01
Serious medication errors continue to exist in hospitals, even though there is technology that could potentially eliminate them such as bar code medication administration. Little is known about the degree to which the culture of patient safety is associated with behavioral intention to use bar code medication administration. Based on the Technology Acceptance Model, this study evaluated the relationships among patient safety culture and perceived usefulness and perceived ease of use, and behavioral intention to use bar code medication administration technology among nurses in hospitals. Cross-sectional surveys with a convenience sample of 163 nurses using bar code medication administration were conducted. Feedback and communication about errors had a positive impact in predicting perceived usefulness (β=.26, P<.01) and perceived ease of use (β=.22, P<.05). In a multiple regression model predicting for behavioral intention, age had a negative impact (β=-.17, P<.05); however, teamwork within hospital units (β=.20, P<.05) and perceived usefulness (β=.35, P<.01) both had a positive impact on behavioral intention. The overall bar code medication administration behavioral intention model explained 24% (P<.001) of the variance. Identified factors influencing bar code medication administration behavioral intention can help inform hospitals to develop tailored interventions for RNs to reduce medication administration errors and increase patient safety by using this technology.
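The β values above are standardized coefficients from a multiple regression of behavioral intention on age, teamwork, and perceived usefulness. As a minimal sketch of how such coefficients and the reported R² arise, assuming ordinary least squares on simulated stand-in data (not the study's):

```python
import numpy as np

# Hypothetical predictor matrix: columns stand in for age, teamwork, and
# perceived usefulness; y stands in for behavioral intention.
rng = np.random.default_rng(0)
X = rng.normal(size=(163, 3))
y = -0.17 * X[:, 0] + 0.20 * X[:, 1] + 0.35 * X[:, 2] + rng.normal(size=163)

# Standardizing all variables first makes the fitted slopes standardized betas.
Z = (X - X.mean(0)) / X.std(0)
z_y = (y - y.mean()) / y.std()
design = np.c_[np.ones(len(Z)), Z]
betas, *_ = np.linalg.lstsq(design, z_y, rcond=None)
print("standardized betas:", betas[1:])        # intercept dropped

# R^2, the share of variance explained (the paper reports 24%)
resid = z_y - design @ betas
print("R^2:", 1 - resid.var() / z_y.var())
```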
Beyond wilderness: Broadening the applicability of limits of acceptable change
Mark W. Brunson
1997-01-01
The Limits of Acceptable Change (LAC) process helps managers preserve wilderness attributes along with recreation opportunities. Ecosystem management likewise requires managers to balance societal and ecosystem needs. Both are more likely to succeed through collaborative planning. Consequently, LAC can offer a conceptual framework for achieving sustainable solutions...
Neurochemical enhancement of conscious error awareness.
Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A
2012-02-22
How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.
A systematic comparison of error correction enzymes by next-generation sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.
2017-08-01
Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.
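The pipeline's core bookkeeping is a per-base error rate before and after enzymatic correction, plus a count of perfect assemblies. A toy sketch under the assumption that reads are already aligned to the reference design and contain substitutions only (real data would also need indel handling):

```python
# Toy error-rate bookkeeping for synthetic DNA assemblies; reads and
# reference are hypothetical, not the study's data.
def error_rate(reads, reference):
    errors = sum(a != b for read in reads for a, b in zip(read, reference))
    return errors / (len(reads) * len(reference))

ref = "ATGCATGCAT"
before = ["ATGCATGCAT", "ATGAATGCAT", "TTGCATGCAT"]   # input pool
after = ["ATGCATGCAT", "ATGCATGCAT", "ATGAATGCAT"]    # after correction

r0, r1 = error_rate(before, ref), error_rate(after, ref)
print(f"input: {r0:.3f}, corrected: {r1:.3f}, fold change: {r0 / r1:.1f}")
print("perfect assemblies:", sum(r == ref for r in after))
```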
Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs
NASA Astrophysics Data System (ADS)
Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken
2015-09-01
To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.
Goulet, Eric D B; Baker, Lindsay B
2017-12-01
The B-722 Laqua Twin is a low cost, portable, and battery operated sodium analyzer, which can be used for the assessment of sweat sodium concentration. The Laqua Twin is reliable and provides a degree of accuracy similar to more expensive analyzers; however, its interunit measurement error remains unknown. The purpose of this study was to compare the sodium concentration values of 70 sweat samples measured using three different Laqua Twin units. Mean absolute errors, random errors and constant errors among the different Laqua Twins ranged between 1.7 and 3.5 mmol/L, 2.5 and 3.7 mmol/L, and -0.6 and 3.9 mmol/L, respectively. Proportional errors among Laqua Twins were all < 2%. Based on a within-subject biological variability in sweat sodium concentration of ±12%, the maximal allowable imprecision among instruments was considered to be ≤ 6%. In that respect, the within (2.9%), between (4.5%), and total (5.4%) measurement error coefficients of variation were all < 6%. For a given sweat sodium concentration value, the largest observed differences in mean and lower and upper bound error of measurements among instruments were, respectively, 4.7 mmol/L, 2.3 mmol/L, and 7.0 mmol/L. In conclusion, our findings show that the interunit measurement error of the B-722 Laqua Twin is low and methodologically acceptable.
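The ≤ 6% ceiling follows the standard analytical-performance rule that allowable imprecision be at most half the within-subject biological variation; stated as an equation (an inference from the numbers quoted, not spelled out in the abstract):

```latex
\[
  \mathrm{CV}_{A} \;\le\; 0.5 \times \mathrm{CV}_{I}
  \;=\; 0.5 \times 12\% \;=\; 6\% .
\]
```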
[CIRRNET® - learning from errors, a success story].
Frank, O; Hochreutener, M; Wiederkehr, P; Staender, S
2012-06-01
CIRRNET® is the network of local error-reporting systems of the Swiss Patient Safety Foundation. The network has been running since 2006 together with the Swiss Society for Anaesthesiology and Resuscitation (SGAR), and network participants currently include 39 healthcare institutions from all four language regions of Switzerland. Further institutions can join at any time. Local error reports in CIRRNET® are bundled at a supraregional level, categorised in accordance with the WHO classification, and analysed by medical experts. The CIRRNET® database offers a solid pool of data with error reports from a wide range of medical specialty areas and provides the basis for identifying relevant problem areas in patient safety. These problem areas are then processed in cooperation with specialists from widely varied areas of expertise, and recommendations for avoiding these errors are developed by changing care processes (Quick-Alerts®). Having been approved by medical associations and professional medical societies, Quick-Alerts® are widely supported and well accepted in professional circles. The CIRRNET® database also enables any affiliated CIRRNET® participant to access all error reports in the 'closed user area' of the CIRRNET® homepage and to use these error reports for in-house training. A healthcare institution does not have to make every mistake itself - it can learn from the errors of others, compare notes with other healthcare institutions, and use existing knowledge to advance its own patient safety.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.
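Schematically, the two error contributions described above can be written as a triangle-inequality split; the notation below is assumed for illustration and is not taken from the presentation:

```latex
% q is an output functional of the exact realization, q_h its
% finite-dimensional approximation; E_h denotes the numerical
% approximation of the statistics integral.
\[
  \bigl|\mathbb{E}[q] - \mathbb{E}_h[q_h]\bigr|
  \;\le\;
  \underbrace{\mathbb{E}\bigl[|q - q_h|\bigr]}_{\text{realization error}}
  \;+\;
  \underbrace{\bigl|\mathbb{E}[q_h] - \mathbb{E}_h[q_h]\bigr|}_{\text{statistics-integral error}} .
\]
```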
Autonomous Control Modes and Optimized Path Guidance for Shipboard Landing in High Sea States
2015-11-16
In a degraded visual environment, workload during the landing task begins to approach the limits of a human pilot's capability. [The remainder of this excerpt is figure residue; recoverable captions: Figure 2, "Approach Trajectory," showing the flight path against ±4 ft, ±8 ft, and ±12 ft landing-error bounds; Figure 5, "Open loop system generation," with the same landing-error bounds; a fragment references the heave and yaw axes.]
Cao, Hui; Stetson, Peter; Hripcsak, George
2003-01-01
In this study, we assessed the explicit reporting of medical errors in the electronic record. We looked for cases in which the provider explicitly stated that he or she or another provider had committed an error. The advantage of the technique is that it is not limited to a specific type of error. Our goals were to 1) measure the rate at which medical errors were documented in medical records, and 2) characterize the types of errors that were reported.
Adaptive control system for pulsed megawatt klystrons
Bolie, Victor W.
1992-01-01
The invention provides an arrangement for reducing waveform errors, such as errors in phase or amplitude, in output pulses produced by pulsed power output devices such as klystrons. An error voltage representing the extent of error still present in the trailing edge of the previous output pulse is generated and used to provide a stored control voltage, which is applied to the pulsed power output device to limit the extent of error in the leading edge of the next output pulse.
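The control law amounts to a pulse-to-pulse integrator: the error measured on the trailing edge of pulse n updates a stored control voltage applied to pulse n+1. A minimal sketch, with gain and disturbance values chosen purely for illustration:

```python
# Pulse-to-pulse correction sketch: the trailing-edge error of each pulse
# updates the stored control voltage used for the next pulse's leading
# edge. Gain and disturbance are illustrative, not from the patent.
gain = 0.5
control = 0.0          # stored control voltage
disturbance = 1.0      # slowly varying phase/amplitude error source

for pulse in range(8):
    trailing_edge_error = disturbance - control   # residual error this pulse
    control += gain * trailing_edge_error         # update stored voltage
    print(f"pulse {pulse}: error {trailing_edge_error:+.4f}")
```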
Oliva, Alexis; Fariña, José B; Llabrés, Matías
2013-10-15
A simple and reproducible UPLC method was developed and validated for the quantitative analysis of finasteride in low-dose drug products. Method validation demonstrated the reliability and consistency of analytical results. Due to the regulatory requirements of pharmaceutical analysis in particular, evaluation of robustness is vital to predict how small variations in operating conditions affect the responses. Response surface methodology was used as an optimization technique to evaluate robustness. For this, a central composite design was implemented around the nominal conditions. Statistical treatment of the responses (retention factor and drug concentrations expressed as percentage of label claim) showed that methanol content in the mobile phase and flow rate were the most influential factors. In the optimization process, the compromise decision support problem (cDSP) strategy was used. Construction of the robust domain from response surfaces provided tolerance windows for the factors affecting the effectiveness of the method. The limits specified for the USP uniformity of dosage units assay (98.5-101.5%) and the purely experimental variation based on the repeatability test for center points (repetitions at nominal conditions) were used as criteria to establish the tolerance windows, which allowed definition of the design space (DS) of the analytical method. Thus, the acceptance value (AV) proposed by the USP uniformity assay depends only on the sampling error. If the variation in the responses corresponds to approximately twice the repeatability standard deviation, individual values for the percentage label claim (%LC) response may lie outside the specified limits; this implies the data are not centered between the specified limits, and this term plus the sampling error affects the AV value. To avoid this, the limits specified by the uniformity of dosage units assay (i.e., 98.5-101.5%) must be taken into consideration when fixing the tolerance windows for each factor. All these results were verified by Monte Carlo simulation. In conclusion, the level of variability for the different factors must be calculated for each case rather than set in an arbitrary way, provided a variation higher than the repeatability for center points is found; secondly, the %LC response must lie inside the specified limits, i.e., 98.5-101.5%. If not, the UPLC method must be re-developed.
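For reference, a two-factor central composite design of the kind described (factorial, axial, and replicated center points around nominal conditions) can be generated as below; the factor names, nominal values, and step sizes are illustrative assumptions, not the paper's:

```python
from itertools import product

# Sketch of a two-factor central composite design around nominal UPLC
# conditions: methanol fraction of the mobile phase and flow rate.
nominal = {"methanol_pct": 50.0, "flow_mL_min": 0.40}
step = {"methanol_pct": 2.0, "flow_mL_min": 0.02}
alpha = 2 ** 0.5                       # axial distance for rotatability

factorial = list(product((-1, 1), repeat=2))
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
center = [(0, 0)] * 3                  # replicated center points

for coded in factorial + axial + center:
    run = {k: nominal[k] + c * step[k] for k, c in zip(nominal, coded)}
    print(coded, run)
```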
Combustion Device Failures During Space Shuttle Main Engine Development
NASA Technical Reports Server (NTRS)
Goetz, Otto K.; Monk, Jan C.
2005-01-01
Major Causes: Limited Initial Materials Properties. Limited Structural Models - especially fatigue. Limited Thermal Models. Limited Aerodynamic Models. Human Errors. Limited Component Test. High Pressure. Complicated Control.
Analysis of error-correction constraints in an optical disk.
Roberts, J D; Ryley, A; Jones, D M; Burke, D
1996-07-10
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
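The role of the final CRC as an arbiter of miscorrection can be illustrated with a toy sector check; CD-ROM uses its own 32-bit EDC polynomial, for which zlib's CRC-32 stands in here:

```python
import zlib

# Toy illustration of a CRC flagging a burst error in a "sector": flip a
# contiguous run of bits and check that the CRC residue changes.
sector = bytearray(b"\x00" * 2048)
crc_clean = zlib.crc32(bytes(sector))

burst_start, burst_len = 700, 17          # burst position and length, in bits
for bit in range(burst_start, burst_start + burst_len):
    sector[bit // 8] ^= 1 << (bit % 8)

print("CRC matches after burst:", zlib.crc32(bytes(sector)) == crc_clean)
```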
Being a Victim of Medical Error in Brazil: An (Un)Real Dilemma
Mendonça, Vitor Silva; Custódio, Eda Marconi
2016-01-01
Medical error stems from inadequate professional conduct that is capable of producing harm to life or exacerbating the health of another, whether through act or omission. This situation has become increasingly common in Brazil and worldwide. In this study, the aim was to understand what being the victim of medical error is like and to investigate the circumstances imposed on this condition of victims in Brazil. A semi-structured interview was conducted with twelve people who had gone through situations of medical error in their lives, creating a space for narratives of their experiences and deep reflection on the phenomenon. The concept of medical error has a negative connotation, often being associated with the incompetence of a medical professional. Medical error in Brazil is demonstrated by low-quality professional performance and represents the current reality of the country because of the common lack of respect and consideration for patients. Victims often remark on their loss of identity, as their social functions have been interrupted and they do not expect to regain them. Little assumption of error was found, however, in the discourses and attitudes of the doctors involved; victims felt a need to judge the medical conduct in an attempt to assert their rights. Medical error in Brazil presents a punitive character and is little discussed in medical and scientific circles. The stigma of medical error is closely connected to the value and cultural judgments of the country, making it difficult to accept, both by victims and professionals. PMID:27403461
Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid
NASA Technical Reports Server (NTRS)
VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)
1997-01-01
The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).
Cabilan, C J; Kynoch, Kathryn
2017-09-01
Second victims are clinicians who have made adverse errors and feel traumatized by the experience. The current published literature on second victims is mainly representative of doctors, hence nurses' experiences are not fully depicted. This systematic review was necessary to understand the second victim experience for nurses, explore the support provided, and recommend appropriate support systems for nurses. Its objectives were to synthesize the best available evidence on nurses' experiences as second victims, and to explore their experiences of the support they receive and the support they need. Participants were registered nurses who had made adverse errors. The review included studies that described nurses' experiences as second victims and/or the support they received after making adverse errors, conducted in any healthcare setting worldwide. The qualitative studies included were grounded theory, discourse analysis and phenomenology. A structured search strategy was used to locate all unpublished and published qualitative studies, limited to the English language and published between 1980 and February 2017. The references of studies selected for eligibility screening were hand-searched for additional literature. Eligible studies were assessed by two independent reviewers for methodological quality using a standardized critical appraisal instrument from the Joanna Briggs Institute Qualitative Assessment and Review Instrument (JBI QARI). Themes and narrative statements were extracted from papers included in the review using the standardized data extraction tool from JBI QARI. Data synthesis was conducted using the Joanna Briggs Institute meta-aggregation approach. There were nine qualitative studies included in the review. The narratives of 284 nurses generated a total of 43 findings, which formed 15 categories based on similarity of meaning. Four synthesized findings were generated from the categories: (i) the error brings a considerable emotional burden to the nurse that can last for a long time; in some cases, the error can alter nurses' perspectives and disrupt workplace relations; (ii) the type of support received influences how the nurse will feel about the error; often nurses choose to speak with colleagues who have had similar experiences, and strategies need to focus on helping them to overcome the negative emotions associated with being a second victim; (iii) after the error, nurses are confronted with the dilemma of disclosure, which is determined by the following factors: how nurses feel about the error, harm to the patient, the support available to the nurse, and how errors were dealt with in the past; and (iv) reconciliation is every nurse's endeavor; predominantly, this is achieved by accepting fallibility, followed by acts of restitution, such as making positive changes in practice and disclosure to attain closure (see "Summary of findings"). Adverse errors were distressing for nurses, but they did not always receive the support they needed from colleagues. The lack of support had a significant impact on nurses' decisions on whether to disclose the error and on their recovery process. Therefore, a good support system is imperative in alleviating the emotional burden, promoting the disclosure process, and assisting nurses with reconciliation. This review also highlighted research gaps encompassing the characteristics of the support system preferred by nurses, and the scarcity of studies worldwide.
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electrical agents, providing a picture of future electricity demand. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The result shows that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
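GM(1,1) fits an exponential trend to the accumulated series and differences it back, which is why it copes with short records. A minimal sketch, with illustrative demand figures rather than the paper's data:

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Grey model GM(1,1): fit on series x0, return fitted values plus
    `horizon` forecast steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

demand = [120.0, 130.2, 141.1, 151.9, 164.0]        # illustrative series
fit = gm11_forecast(demand, horizon=2)
mape = np.mean(np.abs((np.asarray(demand) - fit[:5]) / demand)) * 100
print("fitted + 2-step forecast:", fit.round(1), f"MAPE {mape:.2f}%")
```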
Quantitative evaluation of patient-specific quality assurance using online dosimetry system
NASA Astrophysics Data System (ADS)
Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk
2018-01-01
In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error; Type 2: gantry angle-dependent MLC error; and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement, within 1%, between the TPS calculation and the MFX measurement. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
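The gamma passing rates quoted above come from the standard gamma index (Low et al.); a minimal 1-D version of the computation is sketched below, with illustrative dose-difference and distance-to-agreement criteria (the study's acceptance threshold is a 90% passing rate):

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dd=0.03, dta=2.0):
    """1-D global gamma analysis: dd is the dose criterion as a fraction
    of the reference maximum, dta the distance-to-agreement in the same
    units as `spacing`."""
    x = np.arange(len(ref)) * spacing
    dose_norm = dd * ref.max()
    gammas = []
    for xi, di in zip(x, ref):
        g2 = ((x - xi) / dta) ** 2 + ((ev - di) / dose_norm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

ref = np.exp(-((np.arange(100) - 50) / 12.0) ** 2)   # toy dose profile
ev = np.roll(ref, 1) * 1.01                          # 1 mm shift, 1% scaling
print(f"passing rate: {gamma_pass_rate(ref, ev, spacing=1.0):.1f}%")
```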
Balancing the books - a statistical theory of prospective budgets in Earth System science
NASA Astrophysics Data System (ADS)
O'Kane, J. Philip
An honest declaration of the error in a mass, momentum or energy balance, ɛ, simply raises the question of its acceptability: "At what value of ɛ is the attempted balance to be rejected?" Answering this question requires a reference quantity against which to compare ɛ. This quantity must be a mathematical function of all the data used in making the balance. To deliver this function, a theory grounded in a workable definition of acceptability is essential. A distinction must be drawn between a retrospective balance and a prospective budget in relation to any natural space-filling body. Balances look to the past; budgets look to the future. The theory is built on the application of classical sampling theory to the measurement and closure of a prospective budget. It satisfies R.A. Fisher's "vital requirement that the actual and physical conduct of experiments should govern the statistical procedure of their interpretation". It provides a test, which rejects, or fails to reject, the hypothesis that the closing error on the budget, when realised, was due to sampling error only. By increasing the number of measurements, the discrimination of the test can be improved, controlling both the precision and accuracy of the budget and its components. The cost-effective design of such measurement campaigns is discussed briefly. This analysis may also show when campaigns to close a budget on a particular space-filling body are not worth the effort for either scientific or economic reasons. Other approaches, such as those based on stochastic processes, lack this finality, because they fail to distinguish between different types of error in the mismatch between a set of realisations of the process and the measured data.
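One concrete form the test can take, under the assumption that the budget components are independently measured with known sampling variances (notation assumed here, not the paper's):

```latex
% Closing error eps on a budget of K measured components with sampling
% variances sigma_k^2; reject "sampling error only" at level alpha when
\[
  |z| \;=\; \frac{|\varepsilon|}{\sqrt{\sum_{k=1}^{K}\sigma_k^{2}}}
  \;>\; z_{1-\alpha/2} .
\]
```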
[Errors in laboratory daily practice].
Larrose, C; Le Carrer, D
2007-01-01
Legislation set by the GBEA (Guide de bonne exécution des analyses) requires that, before performing an analysis, laboratory directors check both the nature of the samples and the patient's identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and in 2002 by the specialized biochemistry laboratory, with the contribution of the reception centre for biological samples. The laboratory follows strict acceptability criteria as a starting point at reception for checking requisition forms and biological samples. All errors are logged into the laboratory database, and analysis reports are sent to the care unit specifying the problems and the consequences they have for the analysis. The data are then assessed by the laboratory directors to produce monthly or annual statistical reports. These indicate the number of errors, indexed to patient files to reveal specific problem areas, thereby allowing the laboratory directors to educate the nurses and enable corrective action.
Sonority contours in word recognition
NASA Astrophysics Data System (ADS)
McLennan, Sean
2003-04-01
Contrary to the Generativist distinction between competence and performance, which asserts that speech or perception errors are due to random, nonlinguistic factors, it seems likely that errors are principled and possibly governed by some of the same constraints as language. A preliminary investigation of errors modeled after the child's "Chain Whisper" game (a degraded stimulus task) suggests that a significant number of recognition errors can be characterized as an improvement in syllable sonority contour towards the linguistically least-marked, voiceless-stop-plus-vowel syllable. An independent study of sonority contours showed that approximately half of the English lexicon can be uniquely identified by their contour alone. Additionally, "sororities" (groups of words that share a single sonority contour) surprisingly show no correlation to familiarity or frequency in either size or membership. Together these results imply that sonority contours may be an important factor in word recognition and in defining word "neighborhoods." Moreover, they suggest that linguistic markedness constraints may be more prevalent in performance-related phenomena than previously accepted.
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush
1999-01-01
The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and the last phase of the software development, testing, must be decreased. Two questions are frequently asked: "To what extent is the software error-free?" and "How much time and effort is required to detect and remove the remaining errors?" Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Secondly, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what is sufficient testing and when is the most cost-effective time to stop testing.
NASA Astrophysics Data System (ADS)
Judt, Falko
2017-04-01
A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not very well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day long "nature run" and a simulation that was perturbed with small-amplitude noise, but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and begin a phase of exponential growth after 2-3 days when contaminating the baroclinic zones. After 16 days, the globally averaged error saturates—suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, which is in line with earlier estimates. However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex problem. The comparatively slower error growth in the tropics and in the stratosphere indicates that certain weather phenomena could potentially have longer predictability than currently thought.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, CM; Baydush, AH; Nguyen, C
Purpose: To determine the effectiveness of SPC analysis for a model predictive maintenance process that uses accelerator generated parameter and performance data contained in trajectory log files. Methods: Each trajectory file is decoded and a total of 131 axes positions are recorded (collimator jaw position, gantry angle, each MLC, etc.). This raw data is processed and either axis positions are extracted at critical points during the delivery or positional change over time is used to determine axis velocity. The focus of our analysis is the accuracy, reproducibility and fidelity of each axis. A reference positional trace of the gantry and each MLC is used as a motion baseline for cross correlation (CC) analysis. A total of 494 parameters (482 MLC related) were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and parameter/system specifications. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on: TG-142 and published analysis of VMAT delivery accuracy. Results: All errors introduced were detected. Synthetic positional errors of 2mm for collimator jaw and MLC carriage exceeded the chart limits. Gantry speed and each MLC speed are analyzed at two different points in the delivery. Simulated Gantry speed error (0.2 deg/sec) and MLC speed error (0.1 cm/sec) exceeded the speed chart limits. Gantry position error of 0.2 deg was detected by the CC maximum value charts. The MLC position error of 0.1 cm was detected by the CC maximum value location charts for every MLC. Conclusion: SPC I/MR evaluation of trajectory log file parameters may be effective in providing an early warning of performance degradation or component failure for medical accelerator systems.
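The I/MR limits referred to above follow standard control chart constants; a minimal sketch of the plain (non-hybrid) calculation on a logged axis position, with illustrative data and specification limits:

```python
import numpy as np

def imr_limits(x):
    """Individual/Moving-Range chart limits from the usual constants
    (d2 = 1.128 for a moving range of 2); a sketch of the standard
    calculation, not the hybrid spec-informed limits used in the study."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))
    mr_bar, x_bar = mr.mean(), x.mean()
    return {"I": (x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar),
            "MR": (0.0, 3.267 * mr_bar),
            "sigma_hat": mr_bar / 1.128}

# e.g. a logged jaw position at a checkpoint over successive deliveries (cm)
jaw = np.array([5.00, 5.01, 4.99, 5.02, 5.00, 4.98, 5.01, 5.00])
lims = imr_limits(jaw)
print(lims)
usl, lsl = 5.2, 4.8                     # illustrative specification limits
print("Cp:", (usl - lsl) / (6 * lims["sigma_hat"]))
```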
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
Electronic acquisition of OSCE performance using tablets.
Hochlehnert, Achim; Schultz, Jobst-Hendrik; Möltner, Andreas; Tımbıl, Sevgi; Brass, Konstantin; Jünger, Jana
2015-01-01
Objective Structured Clinical Examinations (OSCEs) often involve a considerable amount of resources in terms of materials and organization since the scores are often recorded on paper. Computer-assisted administration is an alternative with which the need for material resources can be reduced. In particular, the use of tablets seems sensible because these are easy to transport and flexible to use. User acceptance concerning the use of tablets during OSCEs has not yet been extensively investigated. The aim of this study was to evaluate tablet-based OSCEs from the perspective of the user (examiner) and the student examinee. For two OSCEs in Internal Medicine at the University of Heidelberg, user acceptance was analyzed regarding tablet-based administration (satisfaction with functionality) and the subjective amount of effort as perceived by the examiners. Standardized questionnaires and semi-standardized interviews were conducted (complete survey of all participating examiners). In addition, for one OSCE, the subjective evaluation of this mode of assessment was gathered from a random sample of participating students in semi-standardized interviews. Overall, the examiners were very satisfied with using tablets during the assessment. The subjective amount of effort to use the tablet was found on average to be "hardly difficult". The examiners identified the advantages of this mode of administration as being in particular the ease of use and low rate of error. During the interviews of the examinees, acceptance for the use of tablets during the assessment was also detected. Overall, it was found that the use of tablets during OSCEs was well accepted by both examiners and examinees. We expect that this mode of assessment also offers advantages regarding assessment documentation, use of resources, and rate of error in comparison with paper-based assessments; all of these aspects should be followed up on in further studies.
Ten Ways to Cope with Foreign Language Anxiety.
ERIC Educational Resources Information Center
Donley, Philip
1997-01-01
Proposes strategies for reducing foreign language anxiety in the classroom: (1) discuss feelings with instructor and other students; (2) relax, exercise, and eat well; (3) prepare for and attend every class; (4) keep foreign language class in perspective; (5) seek opportunities to practice the language and accept that errors are a part of the learning…
Adaptive Methods for Compressible Flow
1994-03-01
The purpose of this work is to demonstrate the advantages of integrating CAD/CAM surface descriptions, easing the labor-intensive task of generating acceptable surface triangulations. [The remainder of this excerpt is extraction residue; fragments reference boundary error for a MUSCL scheme and flow-variable definitions.]
40 CFR 227.27 - Limiting permissible concentration (LPC).
Code of Federal Regulations, 2012 CFR
2012-07-01
40 Protection of Environment: Limiting permissible concentration... scientific literature or accepted by EPA as being reliable test organisms to determine the anticipated impact... for each type they represent, and that are documented in the scientific literature and accepted by EPA...
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
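Of the listed corrections, SIMEX is the most mechanical to sketch: extra error is deliberately added at increasing inflation factors λ and the coefficient trend is extrapolated back to the no-error point λ = -1. A simulation-based sketch, assuming a known measurement-error SD (all data simulated, not from the reviewed studies):

```python
import numpy as np

# Minimal SIMEX sketch for a single error-prone exposure.
rng = np.random.default_rng(1)
n, sigma_u, beta = 2000, 0.8, 0.5
x_true = rng.normal(size=n)
w = x_true + sigma_u * rng.normal(size=n)       # error-prone exposure
y = beta * x_true + rng.normal(size=n)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    # add extra noise with variance lam * sigma_u^2, average over replicates
    sims = [np.polyfit(w + np.sqrt(lam) * sigma_u * rng.normal(size=n), y, 1)[0]
            for _ in range(50)]
    slopes.append(np.mean(sims))

coef = np.polyfit(lambdas, slopes, 2)           # quadratic trend in lambda
print("naive slope:", slopes[0], "SIMEX slope:", np.polyval(coef, -1.0))
```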
Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Ghrist, Richard W.; Plakalovic, Dragan
2012-01-01
An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to the probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc ≥ 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
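The Monte Carlo counting step behind a Pc estimate is simple; the sketch below samples a Cartesian relative position at TCA and counts hard-body penetrations (the study samples in equinoctial elements, which this illustration omits; all numbers here are made up):

```python
import numpy as np

# Monte Carlo probability of collision at TCA: sample the relative
# position from the combined covariance and count samples inside the
# combined hard-body radius. Covariance and radius are illustrative.
rng = np.random.default_rng(42)
mean_rel = np.array([20.0, 5.0, -10.0])      # m, relative position at TCA
cov = np.diag([150.0, 400.0, 90.0])          # m^2, combined covariance
radius = 20.0                                # m, combined hard-body radius

n = 1_000_000
samples = rng.multivariate_normal(mean_rel, cov, size=n)
pc = (np.linalg.norm(samples, axis=1) < radius).mean()
print(f"Pc ~ {pc:.2e} +/- {np.sqrt(pc * (1 - pc) / n):.1e}")
```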
Large Sample Confidence Limits for Goodman and Kruskal's Proportional Prediction Measure TAU-b
ERIC Educational Resources Information Center
Berry, Kenneth J.; Mielke, Paul W.
1976-01-01
A Fortran Extended program which computes Goodman and Kruskal's Tau-b, its asymmetrical counterpart, Tau-a, and three sets of confidence limits for each coefficient under full multinomial and proportional stratified sampling is presented. A correction of an error in the calculation of the large sample standard error of Tau-b is discussed.
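For readers without Fortran Extended at hand, the point estimate of Goodman and Kruskal's tau (proportional reduction in prediction error, predicting columns from rows) can be sketched as below; the asymptotic confidence limits the program also computes are omitted:

```python
import numpy as np

def goodman_kruskal_tau(table):
    """Goodman-Kruskal tau point estimate from a contingency table,
    predicting the column variable from the row variable (a sketch;
    assumes all row totals are nonzero)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    col = t.sum(axis=0)
    e1 = n - (col ** 2).sum() / n                  # errors from marginals
    row = t.sum(axis=1)
    e2 = (row - (t ** 2).sum(axis=1) / row).sum()  # errors within rows
    return (e1 - e2) / e1

print(goodman_kruskal_tau([[30, 10], [10, 30]]))   # prints 0.25
```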
Why You Should Believe Cold Fusion is Real
NASA Astrophysics Data System (ADS)
Storms, Edmund K.
2005-03-01
Nuclear reactions are now claimed to be initiated in certain solid materials at an energy too low to overcome the Coulomb barrier. These reactions include fusion, accelerated radioactive decay, and transmutation involving heavy elements. Evidence is based on hundreds of measurements of anomalous energy using a variety of calorimeters at levels far in excess of error, measurement of nuclear products using many normally accepted techniques, observations of many patterns of behavior common to all studies, measurement of anomalous energetic emissions using accepted techniques, and an understanding of most variables that have hindered reproducibility in the past. This evidence can be found at www.LENR-CANR.org. Except for an accepted theory, the claims have met all requirements normally imposed before a new idea is accepted by conventional science, yet rejection continues. How long can the US afford to reject a clean and potentially cheap source of energy, especially when other nations are attempting to develop this energy and the need for such an energy source is so great?
NASA Technical Reports Server (NTRS)
Montez, M. N.
1980-01-01
The results of a six degree of freedom (6-DOF) nonlinear Monte Carlo dispersion analysis for the latest glide return to landing site (GRTLS) abort trajectory for the Space Transportation System 1 Flight are presented. For this GRTLS, the number two main engine fails at 262.5 seconds ground elapsed time. Fifty randomly selected simulations, initialized at external tank separation, are analyzed. The initial covariance matrix is a 20 x 20 matrix and includes navigation errors and dispersions in position and velocity, time, accelerometer bias, and inertial platform misalignments. In all 50 samples, speedbrake, rudder, elevon, and body flap hinge moments are acceptable. Transitions to autoland begin before 9,000 feet and there are no tailscrapes. Navigation-derived dynamic pressure accuracies exceed the flight control system constraints above Mach 2.5. Three out of 50 landings exceeded the tire specification limit speed of 222 knots. Pilot manual landings are expected to reduce landing speed by landing farther downrange.
NASA Astrophysics Data System (ADS)
Vâjâiac, Sorin Nicolae; Filip, Valeriu; Štefan, Sabina; Boscornea, Andreea
2014-03-01
The paper describes a method of assessing the size distribution of fog droplets in a cloud chamber, based on measuring the time variation of the transmission of a light beam during the gravitational settling of droplets. Using a model of light extinction by floating spherical particles, the size distribution of droplets is retrieved, along with characteristic structural parameters of the fog (total droplet concentration, liquid water content and effective radius). Moreover, the time variation of the effective radius can be readily extracted from the model. The errors of the method are also estimated and fall within acceptable limits. The method proves sensitive enough to resolve various modes in the droplet distribution and to point out changes in the distribution due to diverse types of aerosol present in the chamber or to the thermal condition of the fog. It is speculated that the method can be further simplified to reach an in-situ version for real-time field measurements.
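The retrieval inverts a Beer-Lambert-type extinction relation for polydisperse spheres; in schematic form (notation assumed for illustration, not taken from the paper):

```latex
% Transmitted fraction T through path length L for a time-varying droplet
% number distribution n(r, t), with Mie extinction efficiency Q_ext(r).
\[
  T(t) \;=\; \exp\!\left(-\,L \int_{0}^{\infty} Q_{\mathrm{ext}}(r)\,
  \pi r^{2}\, n(r,t)\,\mathrm{d}r \right) .
\]
% Gravitational settling removes the largest droplets first, so the decay
% of T(t) constrains the form of n(r).
```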
Moses, Gerald R; Doarn, Charles R
2008-01-01
A portable robotic telesurgery network could remove the geographic disparity of surgical care and provide expert surgical support for first responders to traumatic injury. This is particularly relevant to battlefield medicine, where surgical intervention is currently not available in the most perilous fighting circumstances. Similar utility applies to the peacetime healthcare mission. The authors identify the potential advantage to healthcare from a mobile robotic telesurgery system and specify barriers to the employability and acceptance of such a system. The proposed research roadmap will describe a portable telesurgery system that [...] government/industrial recognition and reward for excellent care provided by quality surgeons. The provision of expert surgical care improves the outcomes of surgical intervention by reducing errors. For example, during the common procedure of laparoscopic cholecystectomy, distributed telesurgical care could normalize surgical performance and limit the major variance of surgeon outliers, such as by reducing common bile duct injury to the very low rate seen with operation by proficient surgeons.
Measurements of Gluconeogenesis and Glycogenolysis: A Methodological Review.
Chung, Stephanie T; Chacko, Shaji K; Sunehag, Agneta L; Haymond, Morey W
2015-12-01
Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to determine gluconeogenesis is by measuring the incorporation of deuterium from the body water pool into newly formed glucose. However, several techniques using radioactive and stable-labeled isotopes have been used to quantitate the contribution and regulation of gluconeogenesis in humans. Each method has its advantages, methodological assumptions, and set of propagated errors. In this review, we examine the strengths and weaknesses of the most commonly used stable isotope methods to measure gluconeogenesis in vivo. We discuss the advantages and limitations of each method and summarize the applicability of these measurements in understanding normal and pathophysiological conditions.
Functional Allocation for Ground-Based Automated Separation Assurance in NextGen
NASA Technical Reports Server (NTRS)
Prevot, Thomas; Mercer, Joey; Martin, Lynne; Homola, Jeffrey; Cabrall, Christopher; Brasil, Connie
2010-01-01
As part of an ongoing research effort into functional allocation in a NextGen environment, a controller-in-the-loop study on ground-based automated separation assurance was conducted at NASA Ames' Airspace Operations Laboratory in February 2010. Participants included six FAA front line managers, who are currently certified professional controllers and four recently retired controllers. Traffic scenarios were 15 and 30 minutes long where controllers interacted with advanced technologies for ground-based separation assurance, weather avoidance, and arrival metering. The automation managed the separation by resolving conflicts automatically and involved controllers only by exception, e.g., when the automated resolution would have been outside preset limits. Results from data analyses show that workload was low despite high levels of traffic, Operational Errors did occur but were closely tied to local complexity, and safety acceptability ratings varied with traffic levels. Positive feedback was elicited for the overall concept with discussion on the proper allocation of functions and trust in automation.
Assessment of Three “WHO” Patient Safety Solutions: Where Do We Stand and What Can We Do?
Banihashemi, Sheida; Hatam, Nahid; Zand, Farid; Kharazmi, Erfan; Nasimi, Soheila; Askarian, Mehrdad
2015-01-01
Background: Most medical errors are preventable. The aim of this study was to compare the current execution of the 3 patient safety solutions with WHO suggested actions and standards. Methods: Data collection forms and direct observation were used to determine the status of implementation of existing protocols, resources, and tools. Results: In the field of patient hand-over, there was no standardized approach. In the field of the performance of correct procedure at the correct body site, there were no safety checklists, guideline, and educational content for informing the patients and their families about the procedure. In the field of hand hygiene (HH), although availability of necessary resources was acceptable, availability of promotional HH posters and reminders was substandard. Conclusions: There are some limitations of resources, protocols, and standard checklists in all three areas. We designed some tools that will help both wards to improve patient safety by the implementation of adapted WHO suggested actions. PMID:26900434
Page, Mark; Taylor, Jane; Blenkin, Matt
2011-07-01
Many studies regarding the legal status of forensic science have relied on the U.S. Supreme Court's mandate in Daubert v. Merrell Dow Pharmaceuticals Inc., and its progeny, in order to make subsequent recommendations or rebuttals. This paper focuses on a more pragmatic approach to analyzing forensic science's immediate deficiencies by considering a qualitative analysis of actual judicial reasoning where forensic identification evidence has been excluded on reliability grounds since the Daubert precedent. Reliance on general acceptance is becoming insufficient as proof of the admissibility of forensic evidence. The citation of unfounded statistics, error rates and certainties, a failure to document the analytical process or follow standardized procedures, and the existence of observer bias represent some of the concerns that have led to the exclusion or limitation of forensic identification evidence. Analysis of these reasons may serve to refocus forensic practitioners' testimony, resources, and research toward rectifying shortfalls in these areas. © 2011 American Academy of Forensic Sciences.
VIS-IR transmitting BGG glass windows
NASA Astrophysics Data System (ADS)
Bayya, Shyam S.; Chin, Geoff D.; Sanghera, Jasbinder S.; Aggarwal, Ishwar D.
2003-09-01
BaO-Ga2O3-GeO2 (BGG) glasses have the desired properties for various window applications in the 0.5-5 μm wavelength region. These glasses are low-cost alternatives to the currently used window materials. Fabrication of a high-optical-quality 18" diameter BGG glass window has been demonstrated with a transmitted wavefront error of λ/10 at 632 nm. BGG substrates have also been successfully tested for environmental weatherability (MIL-F-48616) and rain erosion durability up to 300 mph. Preliminary EMI grids have been successfully applied on BGG glasses, demonstrating attenuation of 20 dB in the X and Ku bands. Although the mechanical properties of BGG glasses are acceptable for various window applications, it is demonstrated here that the properties can be further improved significantly by the glass-ceramization process. The ceramization process does not add any significant cost to the final window material. The crystallite size in the present glass-ceramic limits its transmission to the 2-5 μm region.
LiDAR error estimation with WAsP engineering
NASA Astrophysics Data System (ADS)
Bingöl, F.; Mann, J.; Foussekis, D.
2008-05-01
LiDAR measurements of the vertical wind profile at any height between 10 and 150 m are based on the assumption that the measured wind is homogeneous. In reality, many factors affect the wind at each measurement point, with the terrain playing the main role. To model LiDAR measurements and predict the possible error for different wind directions over a given terrain, we analyzed two experimental data sets from Greece. At both sites, LiDAR and met-mast data were collected, and the same conditions were simulated with the Risø/DTU software WAsP Engineering 2.0. Finally, the measurement data were compared with the model results. The model results are acceptable and very close for one site, while the more complex site returns higher errors at higher positions and in some wind directions.
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting its estimate for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0 and 0.5, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT; Wald, 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the combined probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
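For concreteness, here is a minimal Python sketch of the classical Wald SPRT for a shift in the mean of Gaussian samples. Note that the paper's 'inverted' variant calculates the error probabilities rather than prescribing alpha and beta as this sketch does, so the thresholds and parameters below are purely illustrative:

```python
import numpy as np

def sprt_gaussian_mean(x, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald SPRT for H0: mean=mu0 vs H1: mean=mu1 with known sigma."""
    lower = np.log(beta / (1 - alpha))   # accept H0 at or below this boundary
    upper = np.log((1 - beta) / alpha)   # accept H1 at or above this boundary
    # Gaussian log-likelihood ratio, accumulated sample by sample
    llr = np.cumsum((mu1 - mu0) / sigma**2 * (np.asarray(x) - (mu0 + mu1) / 2))
    for n, value in enumerate(llr, start=1):
        if value <= lower:
            return "accept H0", n
        if value >= upper:
            return "accept H1", n
    return "continue sampling", len(llr)

rng = np.random.default_rng(1)
print(sprt_gaussian_mean(rng.normal(0.5, 1.0, 200), mu0=0.0, mu1=0.5, sigma=1.0))
```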
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín
2010-01-01
The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a proximate analysis, and a correlation between the mean values and maximum sampling errors of the two methods was not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass with precise confidence intervals is of particular interest in energetic biomass applications. PMID:20717532
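The confidence-interval arithmetic behind such a maximum sampling error is standard; a minimal sketch, assuming approximately normal replicate determinations (the replicate values below are hypothetical, not the study's data):

```python
import numpy as np
from scipy import stats

def max_sampling_error(replicates, confidence=0.95):
    """Mean, CI half-width ('maximum sampling error'), and confidence interval
    for replicate TG determinations, assuming approximate normality."""
    x = np.asarray(replicates, dtype=float)
    sem = x.std(ddof=1) / np.sqrt(x.size)
    half = stats.t.ppf((1 + confidence) / 2, df=x.size - 1) * sem
    return x.mean(), half, (x.mean() - half, x.mean() + half)

# Hypothetical replicate ash-content determinations (wt%)
mean, err, ci = max_sampling_error([4.8, 5.1, 4.9, 5.0, 5.2])
print(f"{mean:.2f} +/- {err:.2f} wt% (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```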
Error and its meaning in forensic science.
Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M
2014-01-01
The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.
Achieving the Heisenberg limit in quantum metrology using quantum error correction.
Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang
2018-01-08
Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.
Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R
2003-09-10
We present an investigation into the phase errors that occur in fringe pattern analysis as a result of quantization effects. When acquisition devices with a limited camera bit depth are used, only a limited number of quantization levels are available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique; however, the principles apply equally well to other phase-measuring techniques, yielding a phase error distribution that is caused by the camera bit depth.
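A quick numerical illustration of the effect, under much-simplified assumptions: an ideal cosine fringe with an integer number of carrier cycles, so the carrier phase can be read directly off one FFT bin (this is a crude stand-in for full Fourier fringe analysis, not the authors' pipeline):

```python
import numpy as np

def carrier_phase_error(bits, n=1024, cycles=32, phi=0.7):
    """Phase error at the carrier FFT bin after quantizing an ideal fringe
    to 2**bits gray levels (a crude stand-in for a camera ADC)."""
    x = np.arange(n)
    fringe = 0.5 + 0.5 * np.cos(2 * np.pi * cycles * x / n + phi)  # in [0, 1]
    levels = 2**bits - 1
    quantized = np.round(fringe * levels) / levels
    recovered = np.angle(np.fft.fft(quantized)[cycles])  # exact bin: integer cycles
    return recovered - phi

for b in (4, 6, 8, 10, 12):
    print(f"{b:>2}-bit camera -> phase error {abs(carrier_phase_error(b)):.2e} rad")
```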
Reducing Error Rates for Iris Image using higher Contrast in Normalization process
NASA Astrophysics Data System (ADS)
Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa
2017-08-01
Iris recognition is among the most secure and fastest means of identification and authentication. However, an iris recognition system suffers from blurring, low contrast and poor illumination in low-quality images, which compromises the accuracy of the system. The acceptance or rejection rate of a verified user depends solely on the quality of the image. In many cases, an iris recognition system with low image contrast could falsely accept or reject a user. Therefore, this paper adopts a histogram equalization technique to address the problem of the False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. The histogram equalization technique enhances the image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that the histogram equalization technique reduced the FRR and FAR compared to existing techniques.
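Histogram equalization itself is a textbook operation; a minimal numpy sketch of the gray-level remapping described above (the 64x64 "iris patch" is synthetic, not real iris data):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization of an 8-bit grayscale image (numpy uint8 array)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Remap gray levels so the output histogram is approximately uniform.
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)  # synthetic patch
enhanced = equalize_histogram(low_contrast)
print(low_contrast.min(), low_contrast.max(), "->", enhanced.min(), enhanced.max())
```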
Selection of noisy measurement locations for error reduction in static parameter identification
NASA Astrophysics Data System (ADS)
Sanayei, Masoud; Onipede, Oladipo; Babu, Suresh R.
1992-09-01
An incomplete set of noisy static force and displacement measurements is used for parameter identification of structures at the element level. Measurement location and the level of accuracy in the measured data can drastically affect the accuracy of the identified parameters. A heuristic method is presented to select a limited number of degrees of freedom (DOF) to perform a successful parameter identification and to reduce the impact of measurement errors on the identified parameters. This pretest simulation uses an error sensitivity analysis to determine the effect of measurement errors on the parameter estimates. The selected DOF can be used for nondestructive testing and health monitoring of structures. Two numerical examples, one for a truss and one for a frame, are presented to demonstrate that using the measurements at the selected subset of DOF can limit the error in the parameter estimates.
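One simple way to realize such a pretest selection is to pick measurement DOFs that keep the identification problem well conditioned, since the condition number of the sensitivity matrix bounds how strongly measurement errors are amplified into parameter errors. A greedy sketch under that assumption (the sensitivity matrix J is random stand-in data, not the paper's truss or frame examples, and this is not the authors' exact heuristic):

```python
import numpy as np

def select_dofs(J, k):
    """Greedily pick k measurement DOFs (rows of the sensitivity matrix J) that
    keep the identification well conditioned, limiting how strongly measurement
    errors are amplified into the identified element parameters."""
    chosen, remaining = [], list(range(J.shape[0]))
    for _ in range(k):
        best = min(remaining, key=lambda r: np.linalg.cond(J[chosen + [r], :]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(3)
J = rng.normal(size=(20, 4))   # hypothetical: 20 candidate DOF, 4 element parameters
dofs = select_dofs(J, k=6)
print("measure at DOFs:", dofs, "condition number:",
      round(float(np.linalg.cond(J[dofs, :])), 2))
```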
Knols, Ruud H; Aufdemkampe, Geert; de Bruin, Eling D; Uebelhart, Daniel; Aaronson, Neil K
2009-01-01
Background Hand-held dynamometry is a portable and inexpensive method to quantify muscle strength. To determine if muscle strength has changed, an examiner must know what part of the difference between a patient's pre-treatment and post-treatment measurements is attributable to real change, and what part is due to measurement error. This study aimed to determine the relative and absolute reliability of intra- and inter-observer strength measurements with a hand-held dynamometer (HHD). Methods Two observers performed maximum voluntary peak torque measurements (MVPT) for isometric knee extension in 24 patients with haematological malignancies. For each patient, the measurements were carried out on the same day. The main outcome measures were the intraclass correlation coefficient (ICC ± 95%CI), the standard error of measurement (SEM), the smallest detectable difference (SDD), the relative values as % of the grand mean of the SEM and SDD, and the limits of agreement for the intra- and inter-observer '3 repetition average' and the 'highest value of 3 MVPT' knee extension strength measures. Results The intra-observer ICCs were 0.94 for the average of 3 MVPT (95%CI: 0.86–0.97) and 0.86 for the highest value of 3 MVPT (95%CI: 0.71–0.94). The ICCs for the inter-observer measurements were 0.89 for the average of 3 MVPT (95%CI: 0.75–0.95) and 0.77 for the highest value of 3 MVPT (95%CI: 0.54–0.90). The SEMs for the intra-observer measurements were 6.22 Nm (3.98% of the grand mean (GM)) and 9.83 Nm (5.88% of GM). For the inter-observer measurements, the SEMs were 9.65 Nm (6.65% of GM) and 11.41 Nm (6.73% of GM). The SDDs for the generated parameters varied from 17.23 Nm (11.04% of GM) to 27.26 Nm (17.09% of GM) for intra-observer measurements, and 26.76 Nm (16.77% of GM) to 31.62 Nm (18.66% of GM) for inter-observer measurements, with similar results for the limits of agreement. Conclusion The results indicate that there is acceptable relative reliability for evaluating knee strength with a HHD, while the measurement error observed was modest. The HHD may be useful in detecting changes in knee extension strength at the individual patient level. PMID:19272149
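The SEM and SDD reported above follow the usual reliability formulas SEM = SD·sqrt(1 − ICC) and SDD = 1.96·sqrt(2)·SEM. A small sketch with a hypothetical between-subject SD chosen so the output approximates the reported intra-observer values:

```python
import numpy as np

def sem_sdd(sd, icc):
    """SEM = SD * sqrt(1 - ICC); SDD = 1.96 * sqrt(2) * SEM (95% level)."""
    sem = sd * np.sqrt(1.0 - icc)
    sdd = 1.96 * np.sqrt(2.0) * sem
    return sem, sdd

# Hypothetical inputs: between-subject SD of 25.4 Nm and an ICC of 0.94
sem, sdd = sem_sdd(sd=25.4, icc=0.94)
print(f"SEM = {sem:.2f} Nm, SDD = {sdd:.2f} Nm")   # ~6.2 Nm and ~17.2 Nm
```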
Ballistic projectile trajectory determining system
Karr, Thomas J.
1997-01-01
A computer controlled system determines the three-dimensional trajectory of a ballistic projectile. To initialize the system, predictions of state parameters for a ballistic projectile are received at an estimator. The estimator uses the predictions of the state parameters to estimate first trajectory characteristics of the ballistic projectile. A single stationary monocular sensor then observes the actual first trajectory characteristics of the ballistic projectile. A comparator generates an error value related to the predicted state parameters by comparing the estimated first trajectory characteristics of the ballistic projectile with the observed first trajectory characteristics of the ballistic projectile. If the error value is equal to or greater than a selected limit, the predictions of the state parameters are adjusted. New estimates for the trajectory characteristics of the ballistic projectile are made and are then compared with actual observed trajectory characteristics. This process is repeated until the error value is less than the selected limit. Once the error value is less than the selected limit, a calculator calculates trajectory characteristics such as the origin and destination of the ballistic projectile.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with a cost per error avoided of £79 (US$131). We aimed to estimate the cost-effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling willingness-to-pay of £20,000/QALY, PINCER reaches a 59% probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
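The ICER and dominance logic used above reduce to a few lines; a sketch with hypothetical per-practice differences (deliberately not the trial's figures, whose published ICER reflects unrounded inputs):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio, with a simple dominance check."""
    if delta_cost <= 0 and delta_qaly > 0:
        status = "dominant: cheaper and more effective"
    elif delta_cost > 0 and delta_qaly <= 0:
        status = "dominated: dearer and no more effective"
    else:
        status = "trade-off: compare against willingness-to-pay"
    return delta_cost / delta_qaly, status

# Hypothetical per-practice differences vs. simple feedback (GBP, QALYs)
print(icer(delta_cost=-2500.0, delta_qaly=0.8))  # negative ICER: savings plus gain
```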
Studies of atmospheric refraction effects on laser data
NASA Technical Reports Server (NTRS)
Dunn, P. J.; Pearce, W. A.; Johnson, T. S.
1982-01-01
The refraction effect was considered from three perspectives. An analysis of the axioms on which the accepted correction algorithms are based was the first priority. The integrity of the meteorological measurements on which the correction model is based was also considered, and a large quantity of laser observations was processed in an effort to detect any serious anomalies in them. The effect of refraction errors on geodetic parameters estimated from laser data using the most recent analysis procedures was the focus of the third element of the study. The results concentrate on refraction errors, which were found to be critical in the eventual use of the data for measurements of crustal dynamics.
Synopsis of timing measurement techniques used in telecommunications
NASA Technical Reports Server (NTRS)
Zampetti, George
1993-01-01
Historically, Maximum Time Interval Error (MTIE) and Maximum Relative Time Interval Error (MRTIE) have been the main measurement techniques used to characterize timing performance in telecommunications networks. Recently, a new measurement technique, Time Variance (TVAR) has gained acceptance in the North American (ANSI) standards body. TVAR was developed in concurrence with NIST to address certain inadequacies in the MTIE approach. The advantages and disadvantages of each of these approaches are described. Real measurement examples are presented to illustrate the critical issues in actual telecommunication applications. Finally, a new MTIE measurement is proposed (ZTIE) that complements TVAR. Together, TVAR and ZTIE provide a very good characterization of network timing.
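For reference, MTIE has a simple operational definition: the largest peak-to-peak excursion of the time-error signal within any observation window of a given length. A brute-force sketch on synthetic data (ZTIE and TVAR have their own definitions and are not reproduced here):

```python
import numpy as np

def mtie(time_error, window):
    """Maximum Time Interval Error: largest peak-to-peak excursion of the
    time-error sequence within any observation window of `window` samples."""
    x = np.asarray(time_error, dtype=float)
    return max(x[i:i + window].max() - x[i:i + window].min()
               for i in range(len(x) - window + 1))

rng = np.random.default_rng(0)
te = np.cumsum(rng.normal(0.0, 1e-9, 10_000))  # synthetic random-walk time error (s)
for w in (10, 100, 1000):
    print(f"MTIE over {w:>4}-sample windows: {mtie(te, w):.3e} s")
```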
Submillimeter, millimeter, and microwave spectral line catalogue
NASA Technical Reports Server (NTRS)
Poynter, R. L.; Pickett, H. M.
1980-01-01
A computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 μm) is discussed. The catalogue was used as a planning guide and as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances.
Fischer, Melissa A; Mazor, Kathleen M; Baril, Joann; Alper, Eric; DeMarco, Deborah; Pugnaire, Michele
2006-01-01
CONTEXT Trainees are exposed to medical errors throughout medical school and residency. Little is known about what facilitates and limits learning from these experiences. OBJECTIVE To identify major factors and areas of tension in trainees' learning from medical errors. DESIGN, SETTING, AND PARTICIPANTS Structured telephone interviews with 59 trainees (medical students and residents) from 1 academic medical center. Five authors reviewed transcripts of audiotaped interviews using content analysis. RESULTS Trainees were aware that medical errors occur from early in medical school. Many had an intense emotional response to the idea of committing errors in patient care. Students and residents noted variation and conflict in institutional recommendations and individual actions. Many expressed role confusion regarding whether and how to initiate discussion after errors occurred. Some noted the conflict inherent in reporting errors to seniors who were responsible for their evaluation. Learners requested more open discussion of actual errors and faculty disclosure. No students or residents felt that they learned better from near misses than from actual errors, and many believed that they learned the most when harm was caused. CONCLUSIONS Trainees are aware of medical errors, but remaining tensions may limit learning. Institutions can immediately address variability in faculty response and local culture by disseminating clear, accessible algorithms to guide behavior when errors occur. Educators should develop longitudinal curricula that integrate actual cases and faculty disclosure. Future multi-institutional work should focus on identified themes such as teaching and learning in emotionally charged situations, learning from errors and near misses, and the balance between individual and systems responsibility. PMID:16704381
Survey and Method for Determination of Trajectory Predictor Requirements
NASA Technical Reports Server (NTRS)
Rentas, Tamika L.; Green, Steven M.; Cate, Karen Tung
2009-01-01
A survey of air-traffic-management researchers, representing a broad range of automation applications, was conducted to document trajectory-predictor requirements for future decision-support systems. Results indicated that the researchers were unable to articulate a basic set of trajectory-prediction requirements for their automation concepts. Survey responses showed the need to establish a process to help developers determine the trajectory-predictor-performance requirements for their concepts. Two methods for determining trajectory-predictor requirements are introduced. A fast-time simulation method is discussed that captures the sensitivity of a concept to the performance of its trajectory-prediction capability. A characterization method is proposed to provide quicker, yet less precise, results, based on analysis and simulation to characterize the trajectory-prediction errors associated with key modeling options for a specific concept. Concept developers can then identify the relative sizes of errors associated with key modeling options and qualitatively determine which options lead to significant errors. The characterization method is demonstrated for a case study involving future airport surface traffic management automation. Of the top four sources of error, results indicated that the error associated with accelerations to and from turn speeds was unacceptable, the error associated with the turn path model was acceptable, and the error associated with taxi-speed estimation was of concern and needed a higher-fidelity concept simulation to obtain a more precise result.
Stephen F. McCool; David N. Cole
1997-01-01
Experience with Limits of Acceptable Change (LAC) and related planning processes has accumulated since the mid-1980s. These processes were developed as a means of dealing with recreation carrying capacity issues in wilderness and National Parks. These processes clearly also have application outside of protected areas and to issues other than recreation...
12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”
Code of Federal Regulations, 2011 CFR
2011-01-01
... acceptances is an essential part of banking authorized by 12 U.S.C. 24.” Comptroller's manual 7.7420. Therefore, national banks are authorized by the Comptroller to make acceptances under 12 U.S.C. 24, although the acceptances are not the type described in section 13 of the Federal Reserve Act. (c) A review of...
12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”
Code of Federal Regulations, 2013 CFR
2013-01-01
..., since the making of acceptances is an essential part of banking authorized by 12 U.S.C. 24.” Comptroller... under 12 U.S.C. 24, although the acceptances are not the type described in section 13 of the Federal Reserve Act. (c) A review of the legislative history surrounding the enactment of the acceptance...
12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”
Code of Federal Regulations, 2010 CFR
2010-01-01
... acceptances is an essential part of banking authorized by 12 U.S.C. 24.” Comptroller's manual 7.7420. Therefore, national banks are authorized by the Comptroller to make acceptances under 12 U.S.C. 24, although the acceptances are not the type described in section 13 of the Federal Reserve Act. (c) A review of...
12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”
Code of Federal Regulations, 2012 CFR
2012-01-01
..., since the making of acceptances is an essential part of banking authorized by 12 U.S.C. 24.” Comptroller... under 12 U.S.C. 24, although the acceptances are not the type described in section 13 of the Federal Reserve Act. (c) A review of the legislative history surrounding the enactment of the acceptance...
12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”
Code of Federal Regulations, 2014 CFR
2014-01-01
..., since the making of acceptances is an essential part of banking authorized by 12 U.S.C. 24.” Comptroller... under 12 U.S.C. 24, although the acceptances are not the type described in section 13 of the Federal Reserve Act. (c) A review of the legislative history surrounding the enactment of the acceptance...
An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine
Liu, Zhiyuan; Wang, Changhui
2015-01-01
In this paper, a new method is developed for mass air flow (MAF) sensor error compensation and online updating of the error map (or lookup table), accounting for installation and aging effects in a diesel engine. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. The 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and the parameter vector using a membership function. Combining the 2D map regression model with the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating-point-dependent error of the MAF sensor can be approximated acceptably by the 2D map obtained from the proposed method. PMID:26512675
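The dot-product form of such a bilinear map is easy to sketch: build a sparse membership (regression) vector from the four surrounding grid nodes, so the map output is phi·theta and theta can be adapted online. The grid axes and operating-point values below are assumptions for illustration, not the paper's calibration:

```python
import numpy as np

def membership_vector(q, n, q_grid, n_grid):
    """Bilinear membership weights over a 2D grid, flattened to a regression
    vector phi; the map output is then simply phi @ theta."""
    i = int(np.clip(np.searchsorted(q_grid, q) - 1, 0, len(q_grid) - 2))
    j = int(np.clip(np.searchsorted(n_grid, n) - 1, 0, len(n_grid) - 2))
    a = (q - q_grid[i]) / (q_grid[i + 1] - q_grid[i])
    b = (n - n_grid[j]) / (n_grid[j + 1] - n_grid[j])
    phi = np.zeros(len(q_grid) * len(n_grid))
    for di, dj, w in ((0, 0, (1 - a) * (1 - b)), (1, 0, a * (1 - b)),
                      (0, 1, (1 - a) * b), (1, 1, a * b)):
        phi[(i + di) * len(n_grid) + (j + dj)] = w
    return phi

q_grid = np.linspace(5, 60, 6)       # fuel injection quantity (mg/stroke), assumed axis
n_grid = np.linspace(800, 4000, 9)   # engine speed (rpm), assumed axis
theta = np.zeros(q_grid.size * n_grid.size)  # map parameters, adapted online in the paper
phi = membership_vector(q=22.0, n=1750.0, q_grid=q_grid, n_grid=n_grid)
print(phi @ theta, phi.sum())        # interpolated error; weights sum to 1
```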
Shawahna, Ramzi; Masri, Dina; Al-Gharabeh, Rawan; Deek, Rawan; Al-Thayba, Lama; Halaweh, Masa
2016-02-01
To develop and achieve formal consensus on a definition of medication administration errors and scenarios that should or should not be considered as medication administration errors in hospitalised patient settings. Medication administration errors occur frequently in hospitalised patient settings. Currently, there is no formal consensus on a definition of medication administration errors or scenarios that should or should not be considered as medication administration errors. This was a descriptive study using the Delphi technique. A panel of experts (n = 50) recruited from major hospitals, nursing schools and universities in Palestine took part in the study. Three Delphi rounds were followed to achieve consensus on a proposed definition of medication administration errors and a series of 61 scenarios representing potential medication administration error situations formulated into a questionnaire. In the first Delphi round, key contact nurses' views on medication administration errors were explored. In the second Delphi round, consensus was achieved to accept the proposed definition of medication administration errors and to include 36 (59%) scenarios and exclude 1 (1.6%) as medication administration errors. In the third Delphi round, consensus was achieved to consider a further 14 (23%) scenarios and exclude 2 (3.3%) as medication administration errors, while the remaining 8 (13.1%) were considered equivocal. Of the 61 scenarios included in the Delphi process, experts decided to include 50 scenarios as medication administration errors, exclude three scenarios and include or exclude eight scenarios depending on the individual clinical situation. Consensus on a definition and scenarios representing medication administration errors can be achieved using formal consensus techniques. Researchers should be aware that using different definitions of medication administration errors, and including or excluding medication administration error situations, could significantly affect the rate of medication administration errors reported in their studies. Consensual definitions and medication administration error situations can be used in future epidemiology studies investigating medication administration errors in hospitalised patient settings, which may permit and promote direct comparisons of different studies. © 2015 John Wiley & Sons Ltd.
VizieR Online Data Catalog: 2014-2017 photometry for ASASSN-13db (Sicilia-Aguilar+, 2017)
NASA Astrophysics Data System (ADS)
Sicilia-Aguilar, A.; Oprandi, A.; Froebrich, D.; Fang, M.; Prieto, J. L.; Stanek, K.; Scholz, A.; Kochanek, C. S.; Henning, T.; Gredel, R.; Holoien, T. S. W.; Rabus, M.; Shappee, B. J.; Billington, S. J.; Campbell-White, J.; Zegmott, T. J.
2017-08-01
Table 1 contains the full photometry from the All Sky Automated Survey for Supernovae (ASAS-SN) for the variable star ASASSN-13db. Detections with their errors and 5-sigma upper limits are given. Upper limits are marked by the "<" sign and have the error column set to 99.99. (1 data file).
Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.
2018-01-01
Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
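The version-comparison step described above reduces to flagging cells whose projections disagree by more than the ±5% materiality threshold; a minimal sketch with hypothetical projections standing in for the HIV care-continuum outputs:

```python
import numpy as np

def material_errors(output_a, output_b, threshold=0.05):
    """Flag outputs where two parallel versions of the same model disagree by
    more than the +/-5% materiality threshold."""
    a = np.asarray(output_a, dtype=float)
    b = np.asarray(output_b, dtype=float)
    pct_diff = (a - b) / b
    return np.abs(pct_diff) > threshold, pct_diff * 100

# Hypothetical care-continuum projections from two cell-referencing variants
named_cells = [1200.0, 540.0, 310.0]
column_row  = [1185.0, 705.0, 310.0]
flags, diffs = material_errors(named_cells, column_row)
print(flags, np.round(diffs, 1))   # the 540 vs 705 cell is a material error
```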
NASA Astrophysics Data System (ADS)
Xiong, B.; Oude Elberink, S.; Vosselman, G.
2014-07-01
In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
Kwon, Heon-Ju; Kim, Kyoung Won; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu
2018-03-01
Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP − VR|/W × 100. Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean error and % error in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.
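The error definitions above translate directly into code. Note that applying them to the cohort means, as below, understates the reported per-donor means (10.7%, 9.3%, 4.7%), because the mean of absolute per-donor differences exceeds the absolute difference of the means:

```python
def volumetry_errors(v_p, v_r, w):
    """Percentage errors of prospective (VP) and retrospective (VR) volumetry
    against intraoperative graft weight W, per the definitions above."""
    return {
        "%error_VP": abs(v_p - w) / w * 100,
        "%error_VR": abs(v_r - w) / w * 100,
        "%plane_dependent": abs(v_p - v_r) / w * 100,
    }

# Cohort means reported above (mL vs g treated as directly comparable)
print(volumetry_errors(v_p=761.9, v_r=755.0, w=696.9))
```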
Basaki, Kinga; Alkumru, Hasan; De Souza, Grace; Finer, Yoav
To assess the three-dimensional (3D) accuracy and clinical acceptability of implant definitive casts fabricated using a digital impression approach and to compare the results with those of a conventional impression method in a partially edentulous condition. A mandibular reference model was fabricated with implants in the first premolar and molar positions to simulate a patient with bilateral posterior edentulism. Ten implant-level impressions per method were made using either an intraoral scanner with scanning abutments for the digital approach or an open-tray technique and polyvinylsiloxane material for the conventional approach. 3D analysis and comparison of implant location on resultant definitive casts were performed using laser scanner and quality control software. The inter-implant distances and interimplant angulations for each implant pair were measured for the reference model and for each definitive cast (n = 20 per group); these measurements were compared to calculate the magnitude of error in 3D for each definitive cast. The influence of implant angulation on definitive cast accuracy was evaluated for both digital and conventional approaches. Statistical analysis was performed using t test (α = .05) for implant position and angulation. Clinical qualitative assessment of accuracy was done via the assessment of the passivity of a master verification stent for each implant pair, and significance was analyzed using chi-square test (α = .05). A 3D error of implant positioning was observed for the two impression techniques vs the reference model, with mean ± standard deviation (SD) error of 116 ± 94 μm and 56 ± 29 μm for the digital and conventional approaches, respectively (P = .01). In contrast, the inter-implant angulation errors were not significantly different between the two techniques (P = .83). Implant angulation did not have a significant influence on definitive cast accuracy within either technique (P = .64). The verification stent demonstrated acceptable passive fit for 11 out of 20 casts and 18 out of 20 casts for the digital and conventional methods, respectively (P = .01). Definitive casts fabricated using the digital impression approach were less accurate than those fabricated from the conventional impression approach for this simulated clinical scenario. A significant number of definitive casts generated by the digital technique did not meet clinically acceptable accuracy for the fabrication of a multiple implant-supported restoration.
Myopia, contact lens use and self-esteem
Dias, Lynette; Manny, Ruth E; Weissberg, Erik; Fern, Karen D
2013-01-01
Purpose To evaluate whether contact lens (CL) use was associated with self-esteem in myopic children originally enrolled in the Correction of Myopia Evaluation Trial (COMET), that after five years continued as an observational study of myopia progression with CL use permitted. Methods Usable data at the six-year visit, one year after CL use was allowed (n = 423/469, age 12-17 years), included questions on CL use, refractive error measurements and self-reported self-esteem in several areas (scholastic/athletic competence, physical appearance, social acceptance, behavioural conduct and global self-worth). Self-esteem, scored from 1 (low) to 4 (high), was measured by the Self-Perception Profile for Children in participants under 14 years or the Self-Perception Profile for Adolescents, in those 14 years and older. Multiple regression analyses were used to evaluate associations between self-esteem and relevant factors identified by univariate analyses (e.g., CL use, gender, ethnicity), while adjusting for baseline self-esteem prior to CL use. Results Mean (±SD) self-esteem scores at the six-year visit (mean age=15.3±1.3 years; mean refractive error= −4.6 ±1.5D) ranged from 2.74 (± 0.76) on athletic competence to 3.33 (± 0.53) on global self-worth. CL wearers (n=224) compared to eyeglass wearers (n=199) were more likely to be female (p<0.0001). Those who chose to wear CLs had higher social acceptance, athletic competence and behavioural conduct scores (p < 0.05) at baseline compared to eyeglass users. CL users continued to report higher social acceptance scores at the six-year visit (p=0.03), after adjusting for baseline scores and other covariates. Ethnicity was also independently associated with social acceptance in the multivariable analyses (p=0.011); African-Americans had higher scores than Asians, Whites and Hispanics. Age and refractive error were not associated with self-esteem or CL use. Conclusions COMET participants who chose to wear CLs after five years of eyeglass use had higher self-esteem compared to those who remained in glasses both preceding and following CL use. This suggests that self-esteem may influence the decision to wear CLs and that CLs in turn are associated with higher self-esteem in individuals most likely to wear them. PMID:23763482
Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten
2015-10-07
Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs from reference values than the other 2 BGMS. Insulin dosing errors were also lower for the Contour Next USB than for the other systems. All BGMS fulfilled ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range compared with the 2 other BGMS. CEG and SEG analyses, as well as calculation of the hypothetical bolus insulin dosing error, suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.
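For orientation, MARD and the ISO 15197:2013 accuracy check reduce to a few lines. The readings below are hypothetical, and the ±15 mg/dL / ±15% limits are quoted from the 2013 standard as commonly summarized:

```python
import numpy as np

def mard(bg_meter, bg_reference):
    """Mean absolute relative difference (%) between meter and reference glucose."""
    m = np.asarray(bg_meter, dtype=float)
    r = np.asarray(bg_reference, dtype=float)
    return np.mean(np.abs(m - r) / r) * 100

def within_iso_15197_2013(bg_meter, bg_reference):
    """Fraction of readings within +/-15 mg/dL (ref < 100) or +/-15% (ref >= 100)."""
    m = np.asarray(bg_meter, dtype=float)
    r = np.asarray(bg_reference, dtype=float)
    ok = np.where(r < 100, np.abs(m - r) <= 15, np.abs(m - r) <= 0.15 * r)
    return ok.mean()

ref = np.array([68.0, 95.0, 142.0, 210.0, 305.0])   # hypothetical reference, mg/dL
meter = np.array([74.0, 90.0, 150.0, 200.0, 330.0])  # hypothetical meter readings
print(round(mard(meter, ref), 1), within_iso_15197_2013(meter, ref))
```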
Resnick, Daniel K
2003-06-01
Fluoroscopy-based frameless stereotactic systems provide feedback to the surgeon using virtual fluoroscopic images. The real-life accuracy of these virtual images has not been compared with traditional fluoroscopy in a clinical setting. We prospectively studied 23 consecutive cases. In two cases, registration errors precluded the use of virtual fluoroscopy. Pedicle probes placed with virtual fluoroscopic imaging were imaged with traditional fluoroscopy in the remaining 21 cases. The position of the probes was judged to be ideal, acceptable but not ideal, or not acceptable based on the traditional fluoroscopic images. Virtual fluoroscopy was used to place probes in 97 pedicles from L1 to the sacrum. Eighty-eight probes were judged to be in ideal position, eight were judged to be acceptable but not ideal, and one probe was judged to be in an unacceptable position. This probe was angled toward an adjacent disc space. Therefore, 96 of 97 probes placed using virtual fluoroscopy were found to be in an acceptable position. The positive predictive value for acceptable screw placement with virtual fluoroscopy compared with traditional fluoroscopy was 99%. A probe placed with virtual fluoroscopic guidance will be judged to be in an acceptable position when imaged with traditional fluoroscopy 99% of the time.
Extending Moore's Law via Computationally Error Tolerant Computing.
Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...
2018-03-01
Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused due to lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between overhead incurred in achieving reliability and energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of an overhead compared to conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS centered algorithms. Finally, from the simulation results, this RRNS system can reduce the energy-delay-product by about 3× for multiplication intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
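A toy sketch of the RRNS idea under textbook assumptions: residue arithmetic is carry-free and closed under addition and multiplication, and redundant moduli let a corrupted residue be detected because the decoded value falls outside the legitimate range. The tiny moduli and detection-only scheme below are illustrative and do not reflect the paper's core design:

```python
from math import prod

MODULI = (3, 5, 7)      # information moduli: legitimate values lie in [0, 105)
REDUNDANT = (11, 13)    # redundant moduli enable error detection
ALL = MODULI + REDUNDANT

def encode(x):
    return tuple(x % m for m in ALL)

def add(a, b):
    # Closed under addition: combine residues channel-wise, no carries
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, ALL))

def mul(a, b):
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, ALL))

def decode_and_check(r):
    """CRT decode over all moduli; a value >= prod(MODULI) flags an error."""
    m_total = prod(ALL)
    x = 0
    for ri, mi in zip(r, ALL):
        ni = m_total // mi
        x = (x + ri * ni * pow(ni, -1, mi)) % m_total
    return x, x < prod(MODULI)

a, b = encode(17), encode(4)
print(decode_and_check(mul(add(a, b), b)))            # (84, True): (17+4)*4 in range
corrupted = list(mul(add(a, b), b))
corrupted[0] = (corrupted[0] + 1) % 3                 # flip one residue channel
print(decode_and_check(tuple(corrupted)))             # detected: value >= 105
```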
Kaneko, Takaaki; Tomonaga, Masaki
2014-06-01
Humans are often unaware of how they control their limb motor movements. People pay attention to their own motor movements only when their usual motor routines encounter errors. Yet little is known about the extent to which voluntary actions rely on automatic control and when automatic control shifts to deliberate control in nonhuman primates. In this study, we demonstrate that chimpanzees and humans showed similar limb motor adjustment in response to feedback error during reaching actions, whereas attentional allocation inferred from gaze behavior differed. We found that humans shifted attention to their own motor kinematics as errors were induced in motor trajectory feedback regardless of whether the errors actually disrupted their reaching their action goals. In contrast, chimpanzees shifted attention to motor execution only when errors actually interfered with their achieving a planned action goal. These results indicate that the species differed in their criteria for shifting from automatic to deliberate control of motor actions. It is widely accepted that sophisticated motor repertoires have evolved in humans. Our results suggest that the deliberate monitoring of one's own motor kinematics may have evolved in the human lineage. Copyright © 2014 Elsevier B.V. All rights reserved.
Error-rate prediction for programmable circuits: methodology, tools and studied cases
NASA Astrophysics Data System (ADS)
Velazco, Raoul
2013-05-01
This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing with the results of fault injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program issued from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a scientific satellite of NASA. The accuracy of predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measures issued from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) at Louvain-la-Neuve (Belgium).
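The prediction combines a beam-measured upset cross-section with a fault-injection-derived probability that an upset actually corrupts the application; a sketch of that combination with entirely hypothetical numbers:

```python
def predicted_application_error_rate(sigma_seu_cm2_per_bit, n_bits,
                                     flux_particles_cm2_s,
                                     injected_faults, observed_app_errors):
    """Application error rate = raw upset rate x P(upset corrupts the application).

    The cross-section comes from accelerator tests; the conditional probability
    comes from off-beam fault-injection campaigns. All values are illustrative.
    """
    upsets_per_second = sigma_seu_cm2_per_bit * n_bits * flux_particles_cm2_s
    p_error_given_upset = observed_app_errors / injected_faults
    return upsets_per_second * p_error_given_upset

rate = predicted_application_error_rate(
    sigma_seu_cm2_per_bit=1e-14,        # hypothetical per-bit cross-section (cm^2)
    n_bits=32 * 2**20,                  # hypothetical memory size under test
    flux_particles_cm2_s=1e-3,          # hypothetical orbital particle flux
    injected_faults=100_000, observed_app_errors=1_800)
print(f"{rate:.3e} application errors per second")
```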
NASA Astrophysics Data System (ADS)
Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora
2014-03-01
Providing high quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult and therefore the most educationally useful cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features that are automatically extracted from images using computer vision algorithms. To predict error, we used a logistic regression which accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems since computer-extracted features will allow for faster and more extensive search of imaging databases in order to identify the most educationally beneficial cases.
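A minimal sketch of such a trainee model with scikit-learn, using synthetic features and labels in place of the human reader data (feature semantics and thresholds are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
# Hypothetical computer-extracted features per case (e.g., density, texture measures)
X = rng.normal(size=(300, 5))
# Synthetic labels: 1 = trainee misinterpreted the case, tied to two features
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 1, 300) > 1.0).astype(int)

model = LogisticRegression().fit(X[:200], y[:200])   # fit on one trainee's reads
p_error = model.predict_proba(X[200:])[:, 1]         # predicted difficulty of new cases
print("AUC:", round(roc_auc_score(y[200:], p_error), 2))
# Cases with the highest predicted error probability are queued for teaching
print("hardest cases:", np.argsort(p_error)[-5:] + 200)
```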
Utility of a Newly Designed Film Holder for Premolar Bitewing Radiography.
Safi, Yaser; Esmaeelinejad, Mohammad; Vasegh, Zahra; Valizadeh, Solmaz; Aghdasi, Mohammad Mehdi; Sarani, Omid; Afsahi, Mahmoud
2015-11-01
Bitewing radiography is a valuable technique for assessment of proximal caries, the alveolar crest and periodontal status. Technical errors during radiography result in erroneous radiographic interpretation, misdiagnosis, possible mistreatment or unnecessary exposure of the patient to a repeat radiograph. In this study, we aimed to evaluate the efficacy of a film holder modified from the conventional one and compare it with that of the conventional film holder. Our study population comprised 70 patients who were referred to the Radiology Department for bilateral premolar bitewing radiographs as requested by their attending clinician. Bitewing radiographs in each patient were taken using the newly designed holder on one side and the conventional holder on the other side. The acceptability of the two holders from the perspectives of the technician and patients was determined using a 0-20 point scale. The frequency of overlap and film positioning errors was calculated for each method. The conventional holder had greater acceptability among patients compared to the newly designed holder (mean score of 16.59 versus 13.37). From the technicians' point of view, the newly designed holder was superior to the conventional holder (mean score of 17.33 versus 16.44). The frequency of overlap was lower using the newly designed holder (p<0.001) and it allowed more accurate film positioning (p=0.005). The newly designed holder may facilitate the process of radiography for technicians and may be associated with a lower frequency of radiographic errors compared to the conventional holder.
Ethnic diversity deflates price bubbles
Levine, Sheen S.; Apfelbaum, Evan P.; Bernard, Mark; Bartelt, Valerie L.; Zajac, Edward J.; Stark, David
2014-01-01
Markets are central to modern society, so their failures can be devastating. Here, we examine a prominent failure: price bubbles. Bubbles emerge when traders err collectively in pricing, causing misfit between market prices and the true values of assets. The causes of such collective errors remain elusive. We propose that bubbles are affected by ethnic homogeneity in the market and can be thwarted by diversity. In homogenous markets, traders place undue confidence in the decisions of others. Less likely to scrutinize others’ decisions, traders are more likely to accept prices that deviate from true values. To test this, we constructed experimental markets in Southeast Asia and North America, where participants traded stocks to earn money. We randomly assigned participants to ethnically homogeneous or diverse markets. We find a marked difference: Across markets and locations, market prices fit true values 58% better in diverse markets. The effect is similar across sites, despite sizeable differences in culture and ethnic composition. Specifically, in homogenous markets, overpricing is higher as traders are more likely to accept speculative prices. Their pricing errors are more correlated than in diverse markets. In addition, when bubbles burst, homogenous markets crash more severely. The findings suggest that price bubbles arise not only from individual errors or financial conditions, but also from the social context of decision making. The evidence may inform public discussion on ethnic diversity: it may be beneficial not only for providing variety in perspectives and skills, but also because diversity facilitates friction that enhances deliberation and upends conformity. PMID:25404313
Scheduling real-time, periodic jobs using imprecise results
NASA Technical Reports Server (NTRS)
Liu, Jane W. S.; Lin, Kwei-Jay; Natarajan, Swaminathan
1987-01-01
A process is called a monotone process if the accuracy of its intermediate results is non-decreasing as more time is spent to obtain the result. The result produced by a monotone process upon its normal termination is the desired result; the error in this result is zero. External events such as timeouts or crashes may cause the process to terminate prematurely. If the intermediate result produced by the process upon its premature termination is saved and made available, the application may still find the result usable and, hence, acceptable; such a result is said to be an imprecise one. The error in an imprecise result is nonzero. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. This problem differs from traditional scheduling problems since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result. Consequently, the amounts of processor time assigned to tasks in a valid schedule can be less than the amounts of time required to complete the tasks. A meaningful formulation of this problem taking into account the quality of the overall result is discussed. Three algorithms for scheduling jobs for which the effects of errors in results produced in different periods are not cumulative are described, and their relative merits are evaluated.
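A toy monotone process in Python: a series approximation whose intermediate result only improves with time, so terminating at the deadline yields an imprecise but usable result (the series and time budgets are illustrative, not from the paper):

```python
import time

def monotone_pi(budget_s):
    """A monotone process: the Leibniz series for pi improves as long as time
    allows; on 'timeout' the best-so-far (imprecise) result is returned."""
    deadline = time.monotonic() + budget_s
    total, k = 0.0, 0
    while time.monotonic() < deadline:
        total += (-1) ** k / (2 * k + 1)
        k += 1
    return 4 * total, k          # imprecise result plus the work completed

for budget in (0.001, 0.01, 0.1):
    value, terms = monotone_pi(budget)
    print(f"{budget:>6}s -> pi ~ {value:.6f} after {terms} terms")
```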