Psychrometric Measurement of Leaf Water Potential: Lack of Error Attributable to Leaf Permeability.
Barrs, H D
1965-07-02
A report that low permeability could cause gross errors in psychrometric determinations of water potential in leaves has not been confirmed. No measurable error from this source could be detected for either of two types of thermocouple psychrometer tested on four species, each at four levels of water potential. No source of error other than tissue respiration could be demonstrated.
Theoretical and experimental errors for in situ measurements of plant water potential.
Shackel, K A
1984-07-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagayama, T.; Bailey, J. E.; Loisel, G. P.; ...
2017-06-26
Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10²² cm⁻³, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.
Accuracy of measurement in electrically evoked compound action potentials.
Hey, Matthias; Müller-Deile, Joachim
2015-01-15
Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
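Illustration (not from the cited study): the 1/√N behaviour of the error discussed above can be reproduced with a small Monte Carlo sketch in which a hypothetical N1P1-like template is averaged over N noisy sweeps; the template shape and noise level below are assumed values, not parameters from the paper.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2e-3, 200)                            # 2 ms recording window (arbitrary)
template = 50 * np.exp(-((t - 0.4e-3) / 0.1e-3) ** 2)    # hypothetical N1P1-like deflection (uV)
noise_sd = 30.0                                          # hypothetical single-sweep noise SD (uV)

for n_sweeps in (50, 100, 200, 400):
    peaks = []
    for _ in range(500):
        sweeps = template + rng.normal(0.0, noise_sd, size=(n_sweeps, t.size))
        avg = sweeps.mean(axis=0)
        peaks.append(avg.max() - avg.min())              # crude N1P1-style amplitude estimate
    # the spread of the amplitude estimate shrinks roughly as 1/sqrt(n_sweeps)
    print(n_sweeps, round(float(np.std(peaks)), 2))

Running this shows the spread of the amplitude estimate roughly halving when the number of averaged sweeps quadruples, consistent with the 1/√N function described above.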
Bennett, Jerry M.; Cortes, Peter M.
1985-01-01
The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follows the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
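Illustration (assumed implementation details, not the author's code): the simulation described above can be sketched by drawing binomially distributed choice errors around the probabilities implied by a set of true scale values and recovering scale values from z-scores of the observed proportions; the stimulus values, number of judgments, and Thurstonian scaling model below are assumptions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_scale = np.array([0.0, 0.3, 0.6, 1.0, 1.5])     # hypothetical stimulus scale values

def simulate_once(n_judgments):
    # choice probabilities implied by the true scale values (illustrative model)
    p_true = norm.cdf(true_scale[:, None] - true_scale[None, :])
    wins = rng.binomial(n_judgments, p_true)          # binomially distributed choice errors
    p_hat = np.clip(wins / n_judgments, 0.01, 0.99)   # avoid infinite z-scores
    z = norm.ppf(p_hat)
    scale = z.mean(axis=1)                            # scale values from row means of z
    return scale - scale[0]                           # anchor the first stimulus at zero

true_anchored = true_scale - true_scale[0]
errors = np.array([simulate_once(30) - true_anchored for _ in range(1000)])
print("average SD of scaled values:", errors.std(axis=0).mean())

Repeating this for different numbers of stimuli and judgments reproduces the qualitative finding above: the spread of the recovered scale values grows as the sampling size shrinks.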
Systematic Error in Leaf Water Potential Measurements with a Thermocouple Psychrometer.
Rawlins, S L
1964-10-30
To allow for the error in measurement of water potentials in leaves, introduced by the presence of a water droplet in the chamber of the psychrometer, a correction must be made for the permeability of the leaf.
BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Background • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation...
Medication administration errors in nursing homes using an automated medication dispensing system.
van den Bemt, Patricia M L A; Idzinga, Jetske C; Robertz, Hans; Kormelink, Dennis Groot; Pels, Neske
2009-01-01
OBJECTIVE To identify the frequency of medication administration errors as well as their potential risk factors in nursing homes using a distribution robot. DESIGN The study was a prospective, observational study conducted within three nursing homes in the Netherlands caring for 180 individuals. MEASUREMENTS Medication errors were measured using the disguised observation technique. Types of medication errors were described. The correlation between several potential risk factors and the occurrence of medication errors was studied to identify potential causes for the errors. RESULTS In total 2,025 medication administrations to 127 clients were observed. In these administrations 428 errors were observed (21.2%). The most frequently occurring types of errors were use of wrong administration techniques (especially incorrect crushing of medication and not supervising the intake of medication) and wrong time errors (administering the medication at least 1 h early or late).The potential risk factors female gender (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.05-1.83), ATC medication class antibiotics (OR 11.11; 95% CI 2.66-46.50), medication crushed (OR 7.83; 95% CI 5.40-11.36), number of dosages/day/client (OR 1.03; 95% CI 1.01-1.05), nursing home 2 (OR 3.97; 95% CI 2.86-5.50), medication not supplied by distribution robot (OR 2.92; 95% CI 2.04-4.18), time classes "7-10 am" (OR 2.28; 95% CI 1.50-3.47) and "10 am-2 pm" (OR 1.96; 1.18-3.27) and day of the week "Wednesday" (OR 1.46; 95% CI 1.03-2.07) are associated with a higher risk of administration errors. CONCLUSIONS Medication administration in nursing homes is prone to many errors. This study indicates that the handling of the medication after removing it from the robot packaging may contribute to this high error frequency, which may be reduced by training of nurse attendants, by automated clinical decision support and by measures to reduce workload.
Errors in retarding potential analyzers caused by nonuniformity of the grid-plane potential.
NASA Technical Reports Server (NTRS)
Hanson, W. B.; Frame, D. R.; Midgley, J. E.
1972-01-01
One aspect of the degradation in performance of retarding potential analyzers caused by potential depressions in the retarding grid is quantitatively estimated from laboratory measurements and theoretical calculations. A simple expression is obtained that permits the use of laboratory measurements of grid properties to make first-order corrections to flight data. Systematic positive errors in ion temperature of approximately 16% for the Ogo 4 instrument and 3% for the Ogo 6 instrument are deduced. The effects of the transverse electric fields arising from the grid potential depressions are not treated.
NASA Astrophysics Data System (ADS)
Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang
2018-01-01
The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which offers the potential to reduce the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and the coupling error increases with the correlation value between them. All the simulation results coincide with the theoretical analysis.
Calibration and temperature correction of heat dissipation matric potential sensors
Flint, A.L.; Campbell, G.S.; Ellett, K.M.; Calissendorff, C.
2002-01-01
This paper describes how heat dissipation sensors, used to measure soil water matric potential, were analyzed to develop a normalized calibration equation and a temperature correction method. Inference of soil matric potential depends on a correlation between the variable thermal conductance of the sensor's porous ceramic and matric potential. Although this correlation varies among sensors, we demonstrate a normalizing procedure that produces a single calibration relationship. Using sensors from three sources and different calibration methods, the normalized calibration resulted in a mean absolute error of 23% over a matric potential range of -0.01 to -35 MPa. Because the thermal conductivity of variably saturated porous media is temperature dependent, a temperature correction is required for application of heat dissipation sensors in field soils. A temperature correction procedure is outlined that reduces temperature-dependent errors by 10 times, which reduces the matric potential measurement errors by more than 30%. The temperature dependence is well described by a thermal conductivity model that allows for the correction of measurements at any temperature to measurements at the calibration temperature.
Blood transfusion sampling and a greater role for error recovery.
Oldham, Jane
Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.
Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources
NASA Astrophysics Data System (ADS)
Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.
2011-05-01
The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.
Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials
Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels
2013-01-01
This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants’ math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN. PMID:24236212
The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control
ERIC Educational Resources Information Center
Page, A.; Moreno, R.; Candelas, P.; Belmar, F.
2008-01-01
In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…
Sobel, Michael E; Lindquist, Martin A
2014-07-01
Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates", fitting a linear mixed model to estimate the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.
Measurement Error and Environmental Epidemiology: A Policy Perspective
Edwards, Jessie K.; Keil, Alexander P.
2017-01-01
Purpose of review Measurement error threatens public health by producing bias in estimates of the population impact of environmental exposures. Quantitative methods to account for measurement bias can improve public health decision making. Recent findings We summarize traditional and emerging methods to improve inference under a standard perspective, in which the investigator estimates an exposure response function, and a policy perspective, in which the investigator directly estimates population impact of a proposed intervention. Summary Under a policy perspective, the analysis must be sensitive to errors in measurement of factors that modify the effect of exposure on outcome, must consider whether policies operate on the true or measured exposures, and may increasingly need to account for potentially dependent measurement error of two or more exposures affected by the same policy or intervention. Incorporating approaches to account for measurement error into such a policy perspective will increase the impact of environmental epidemiology. PMID:28138941
Optimal joint measurements of complementary observables by a single trapped ion
NASA Astrophysics Data System (ADS)
Xiong, T. P.; Yan, L. L.; Ma, Z. H.; Zhou, F.; Chen, L.; Yang, W. L.; Feng, M.; Busch, P.
2017-06-01
The uncertainty relations, pioneered by Werner Heisenberg nearly 90 years ago, set a fundamental limitation on the joint measurability of complementary observables. This limitation has long been a subject of debate, which has been reignited recently due to new proposed forms of measurement uncertainty relations. The present work is associated with a new error trade-off relation for compatible observables approximating two incompatible observables, in keeping with the spirit of Heisenberg’s original ideas of 1927. We report the first direct test and confirmation of the tight bounds prescribed by such an error trade-off relation, based on an experimental realisation of optimal joint measurements of complementary observables using a single ultracold ⁴⁰Ca⁺ ion trapped in a harmonic potential. Our work provides a prototypical determination of ultimate joint measurement error bounds with potential applications in quantum information science for high-precision measurement and information security.
Palmer, Katherine A; Shane, Rita; Wu, Cindy N; Bell, Douglas S; Diaz, Frank; Cook-Wiens, Galen; Jackevicius, Cynthia A
2016-01-01
Objective We sought to assess the potential of a widely available source of electronic medication data to prevent medication history errors and resultant inpatient order errors. Methods We used admission medication history (AMH) data from a recent clinical trial that identified 1017 AMH errors and 419 resultant inpatient order errors among 194 hospital admissions of predominantly older adult patients on complex medication regimens. Among the subset of patients for whom we could access current Surescripts electronic pharmacy claims data (SEPCD), two pharmacists independently assessed error severity and our main outcome, which was whether SEPCD (1) was unrelated to the medication error; (2) probably would not have prevented the error; (3) might have prevented the error; or (4) probably would have prevented the error. Results Seventy patients had both AMH errors and current, accessible SEPCD. SEPCD probably would have prevented 110 (35%) of 315 AMH errors and 46 (31%) of 147 resultant inpatient order errors. When we excluded the least severe medication errors, SEPCD probably would have prevented 99 (47%) of 209 AMH errors and 37 (61%) of 61 resultant inpatient order errors. SEPCD probably would have prevented at least one AMH error in 42 (60%) of 70 patients. Conclusion When current SEPCD was available for older adult patients on complex medication regimens, it had substantial potential to prevent AMH errors and resultant inpatient order errors, with greater potential to prevent more severe errors. Further study is needed to measure the benefit of SEPCD in actual use at hospital admission. PMID:26911817
Mathematical Models for Doppler Measurements
NASA Technical Reports Server (NTRS)
Lear, William M.
1987-01-01
Error analysis increases precision of navigation. Report presents improved mathematical models of analysis of Doppler measurements and measurement errors of spacecraft navigation. To take advantage of potential navigational accuracy of Doppler measurements, precise equations relate measured cycle count to position and velocity. Drifts and random variations in transmitter and receiver oscillator frequencies taken into account. Mathematical models also adapted to aircraft navigation, radar, sonar, lidar, and interferometry.
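A common textbook relation of the kind this report refines (shown here for one-way Doppler with an ideal, drift-free reference oscillator; the report's actual models add oscillator drift and random frequency-variation terms not shown) is, in LaTeX notation:

N = \int_{t_1}^{t_2}\left(f_{\mathrm{ref}} - f_R(t)\right)\,dt \approx \left(f_{\mathrm{ref}} - f_T\right)(t_2 - t_1) + \frac{f_T}{c}\left[\rho(t_2) - \rho(t_1)\right],

where N is the accumulated cycle count, f_T and f_R are the transmitted and received frequencies, \rho is the range, and c is the speed of light.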
Geometric Quality Assessment of LIDAR Data Based on Swath Overlap
NASA Astrophysics Data System (ADS)
Sampath, A.; Heidemann, H. K.; Stensaas, G. L.
2016-06-01
This paper provides guidelines on quantifying the relative horizontal and vertical errors observed between conjugate features in the overlapping regions of lidar data. The quantification of these errors is important because their presence quantifies the geometric quality of the data. A data set can be said to have good geometric quality if measurements of identical features, regardless of their position or orientation, yield identical results. Good geometric quality indicates that the data are produced using sensor models that are working as they are mathematically designed, and data acquisition processes are not introducing any unforeseen distortion in the data. High geometric quality also leads to high geolocation accuracy of the data when the data acquisition process includes coupling the sensor with geopositioning systems. Current specifications (e.g. Heidemann 2014) do not provide adequate means to quantitatively measure these errors, even though they are required to be reported. Current accuracy measurement and reporting practices followed in the industry and as recommended by data specification documents also potentially underestimate the inter-swath errors, including the presence of systematic errors in lidar data. Hence they pose a risk to the user in terms of data acceptance (i.e. a higher potential for Type II error, indicating a risk of accepting potentially unsuitable data). For example, if the overlap area is too small, if the sampled locations are close to the center of overlap, or if the errors are sampled in flat regions when there are residual pitch errors in the data, the resultant Root Mean Square Differences (RMSD) can still be small. To avoid this, the following are suggested as criteria for defining the inter-swath quality of data: a) Median Discrepancy Angle; b) Mean and RMSD of Horizontal Errors using DQM measured on sloping surfaces; c) RMSD for sampled locations from flat areas (defined as areas with less than 5 degrees of slope). It is suggested that 4000-5000 points be uniformly sampled in the overlapping regions of the point cloud, depending on the surface roughness, to measure the discrepancy between swaths. Care must be taken to sample only areas of single-return points. Point-to-Plane distance based data quality measures are determined for each sample point. These measurements are used to determine the above-mentioned parameters. This paper details the measurements, and the analysis of those measurements, required to determine these metrics, i.e. discrepancy angle, mean and RMSD of errors in flat regions, and horizontal errors obtained using measurements extracted from sloping regions (slope greater than 10 degrees). The research is a result of an ad-hoc joint working group of the US Geological Survey and the American Society for Photogrammetry and Remote Sensing (ASPRS) Airborne Lidar Committee.
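Illustration (assumed implementation detail, not the working group's code): the point-to-plane data quality measure mentioned above can be sketched by fitting a local plane to neighboring points in one swath and taking the signed distance of the conjugate point from the other swath; the point counts, neighborhood size, and synthetic surface below are assumptions.

import numpy as np

def fit_plane(points):
    # least-squares plane through an (N x 3) point set; returns unit normal and centroid
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid                        # smallest singular vector = plane normal

def point_to_plane_discrepancy(swath_a, swath_b, sample_idx, k=20):
    dists = []
    for i in sample_idx:
        p = swath_b[i]
        # k nearest swath A points (brute force; a KD-tree would be used in practice)
        nbrs = swath_a[np.argsort(np.linalg.norm(swath_a - p, axis=1))[:k]]
        normal, centroid = fit_plane(nbrs)
        dists.append(np.dot(p - centroid, normal))  # signed point-to-plane distance
    d = np.asarray(dists)
    return d.mean(), np.sqrt(np.mean(d ** 2))       # mean discrepancy and RMSD

# usage with synthetic data: swath B offset 5 cm vertically from swath A
rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(5000, 2))
swath_a = np.column_stack([xy, 0.01 * xy[:, 0]])    # gently sloping surface
swath_b = swath_a + np.array([0.0, 0.0, 0.05])
print(point_to_plane_discrepancy(swath_a, swath_b, rng.choice(5000, 200, replace=False)))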
10 CFR 74.59 - Quality assurance and accounting requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... occurs which has the potential to affect a measurement result or when program data, generated by tests.../receiver differences, inventory differences, and process differences. (4) Utilize the data generated during... difference (SEID) and the standard error of the process differences. Calibration and measurement error data...
10 CFR 74.59 - Quality assurance and accounting requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... occurs which has the potential to affect a measurement result or when program data, generated by tests.../receiver differences, inventory differences, and process differences. (4) Utilize the data generated during... difference (SEID) and the standard error of the process differences. Calibration and measurement error data...
Effect of electrical coupling on ionic current and synaptic potential measurements.
Rabbah, Pascale; Golowasch, Jorge; Nadim, Farzan
2005-07-01
Recent studies have found electrical coupling to be more ubiquitous than previously thought, and coupling through gap junctions is known to play a crucial role in neuronal function and network output. In particular, current spread through gap junctions may affect the activation of voltage-dependent conductances as well as chemical synaptic release. Using voltage-clamp recordings of two strongly electrically coupled neurons of the lobster stomatogastric ganglion and conductance-based models of these neurons, we identified effects of electrical coupling on the measurement of leak and voltage-gated outward currents, as well as synaptic potentials. Experimental measurements showed that both leak and voltage-gated outward currents are recruited by gap junctions from neurons coupled to the clamped cell. Nevertheless, in spite of the strong coupling between these neurons, the errors made in estimating voltage-gated conductance parameters were relatively minor (<10%). Thus in many cases isolation of coupled neurons may not be required if a small degree of measurement error of the voltage-gated currents or the synaptic potentials is acceptable. Modeling results show, however, that such errors may be as high as 20% if the gap-junction position is near the recording site or as high as 90% when measuring smaller voltage-gated ionic currents. Paradoxically, improved space clamp increases the errors arising from electrical coupling because voltage control across gap junctions is poor for even the highest realistic coupling conductances. Furthermore, the common procedure of leak subtraction can add an extra error to the conductance measurement, the sign of which depends on the maximal conductance.
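Illustration (textbook two-compartment steady-state result, not the authors' conductance-based model): with one cell voltage clamped, the neighbor's leak conductance appears in series with the gap-junction conductance and inflates the apparent leak, which is one way to see how currents are "recruited" through coupling. The conductance values below are arbitrary.

# sketch with assumed values; all conductances in nS, reversal potentials taken equal
g_leak1 = 10.0   # clamped cell leak (hypothetical)
g_leak2 = 10.0   # coupled neighbor leak (hypothetical)
g_gap = 20.0     # gap-junction (coupling) conductance (hypothetical)

# neighbor leak in series with the coupling conductance adds to the clamped cell's leak
g_apparent = g_leak1 + (g_gap * g_leak2) / (g_gap + g_leak2)
error_pct = 100.0 * (g_apparent - g_leak1) / g_leak1
print(f"apparent leak = {g_apparent:.1f} nS, overestimate = {error_pct:.0f}%")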
Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data
NASA Technical Reports Server (NTRS)
Voorhies, C. V.; Santana, J.; Sabaka, T.
1999-01-01
Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate the Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rms) of added noise into a 60 nT error (rms); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rms through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rms) and several thousand nT (maximum).
Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.
Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean
2017-06-01
Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were due to lack of supervision or of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.
Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R
2003-09-10
We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.
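Illustration (assumed signal model, not the authors' experiment): the effect of camera bit depth can be sketched by quantizing a synthetic one-dimensional fringe pattern to B bits, recovering the phase with a simple Fourier fringe analysis, and comparing against the unquantized result; the carrier frequency, phase modulation, and filter width below are assumptions.

import numpy as np

def fourier_phase(signal, carrier_bin):
    # isolate the +carrier sideband and return the unwrapped phase of the filtered signal
    spec = np.fft.fft(signal)
    filt = np.zeros_like(spec)
    filt[carrier_bin - 5: carrier_bin + 6] = spec[carrier_bin - 5: carrier_bin + 6]
    return np.unwrap(np.angle(np.fft.ifft(filt)))

n = 1024
x = np.arange(n)
carrier = 64                                      # fringes across the field (assumed)
true_phase = 2.0 * np.sin(2 * np.pi * x / n)      # hypothetical measurand phase (rad)
fringes = 0.5 + 0.4 * np.cos(2 * np.pi * carrier * x / n + true_phase)

for bits in (6, 8, 10, 12):
    levels = 2 ** bits - 1
    quantized = np.round(fringes * levels) / levels           # limited-bit-depth recording
    phase_error = fourier_phase(quantized, carrier) - fourier_phase(fringes, carrier)
    print(bits, "bit  rms phase error (rad):", float(np.std(phase_error)))

The printed rms phase error shrinks as the bit depth increases, which is the qualitative dependence the paper quantifies.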
Quality Control of an OSCE Using Generalizability Theory and Many-Faceted Rasch Measurement
ERIC Educational Resources Information Center
Iramaneerat, Cherdsak; Yudkowsky, Rachel; Myford, Carol M.; Downing, Steven M.
2008-01-01
An Objective Structured Clinical Examination (OSCE) is an effective method for evaluating competencies. However, scores obtained from an OSCE are vulnerable to many potential measurement errors that cases, items, or standardized patients (SPs) can introduce. Monitoring these sources of errors is an important quality control mechanism to ensure…
Assessment of Spectral Doppler in Preclinical Ultrasound Using a Small-Size Rotating Phantom
Yang, Xin; Sun, Chao; Anderson, Tom; Moran, Carmel M.; Hadoke, Patrick W.F.; Gray, Gillian A.; Hoskins, Peter R.
2013-01-01
Preclinical ultrasound scanners are used to measure blood flow in small animals, but the potential errors in blood velocity measurements have not been quantified. This investigation rectifies this omission through the design and use of phantoms and evaluation of measurement errors for a preclinical ultrasound system (Vevo 770, Visualsonics, Toronto, ON, Canada). A ray model of geometric spectral broadening was used to predict velocity errors. A small-scale rotating phantom, made from tissue-mimicking material, was developed. True and Doppler-measured maximum velocities of the moving targets were compared over a range of angles from 10° to 80°. Results indicate that the maximum velocity was overestimated by up to 158% by spectral Doppler. There was good agreement (<10%) between theoretical velocity errors and measured errors for beam-target angles of 50°–80°. However, for angles of 10°–40°, the agreement was not as good (>50%). The phantom is capable of validating the performance of blood velocity measurement in preclinical ultrasound. PMID:23711503
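Illustration (simple ray model with an assumed aperture half-angle, not the Vevo 770 geometry): if the maximum Doppler shift originates from the aperture edge while the angle correction uses the beam axis only, the maximum velocity is overestimated by roughly cos(θ − φ)/cos(θ) − 1, which grows rapidly at large beam-target angles.

import numpy as np

half_aperture_deg = 10.0                          # hypothetical aperture half-angle at the focus
for angle_deg in (10, 20, 40, 60, 80):
    theta = np.radians(angle_deg)
    phi = np.radians(half_aperture_deg)
    overestimate = np.cos(theta - phi) / np.cos(theta) - 1.0
    print(f"beam-target angle {angle_deg:2d} deg: +{100 * overestimate:.0f}%")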
A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes
Andrew D. Richardson; David Y. Hollinger; George G. Burba; Kenneth J. Davis; Lawrence B. Flanagan; Gabriel G. Katul; J. William Munger; Daniel M. Ricciuto; Paul C. Stoy; Andrew E. Suyker; Shashi B. Verma; Steven C. Wofsy; Steven C. Wofsy
2006-01-01
Measured surface-atmosphere fluxes of energy (sensible heat, H, and latent heat, LE) and CO2 (FCO2) represent the "true" flux plus or minus potential random and systematic measurement errors. Here, we use data from seven sites in the AmeriFlux network, including five forested sites (two of which include "tall tower" instrumentation), one grassland site, and one...
Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K
2016-11-25
Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx, or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
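Illustration (assumed parameter values, not the study's data): the simulation design described above can be sketched with two correlated exposures measured with correlated errors, Poisson daily counts generated from the true main-pollutant exposure at RR = 1.05 per interquartile range and a null copollutant, and a two-pollutant Poisson regression fitted to the error-contaminated exposures.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_days = 1460                                        # roughly four years of daily data
true = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n_days)
err_cov = [[0.5, 0.3], [0.3, 0.5]]                   # correlated measurement errors (assumed)
observed = true + rng.multivariate_normal([0, 0], err_cov, size=n_days)

iqr = np.subtract(*np.percentile(true[:, 0], [75, 25]))
log_rr_main = np.log(1.05) / iqr                     # RR = 1.05 per IQR for the main pollutant
lam = np.exp(np.log(50) + log_rr_main * true[:, 0])  # copollutant has a null effect
y = rng.poisson(lam)

X = sm.add_constant(observed)                        # fit to the mismeasured exposures
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print("estimated log RRs per unit:", fit.params[1:], " true:", [log_rr_main, 0.0])

Repeated over many Monte Carlo draws, the main-pollutant coefficient is typically attenuated toward zero and the null copollutant can pick up a spurious positive association, mirroring the attenuation and false-positive findings reported above.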
Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure
ERIC Educational Resources Information Center
Padilla, Miguel A.; Veprinsky, Anna
2012-01-01
Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not account for such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
ERIC Educational Resources Information Center
Christ, Theodore J.
2006-01-01
Curriculum-based measurement of oral reading fluency (CBM-R) is an established procedure used to index the level and trend of student growth. A substantial literature base exists regarding best practices in the administration and interpretation of CBM-R; however, research has yet to adequately address the potential influence of measurement error.…
An automated microphysiological assay for toxicity evaluation.
Eggert, S; Alexander, F A; Wiest, J
2015-08-01
Screening a newly developed drug, food additive or cosmetic ingredient for toxicity is a critical preliminary step before it can move forward in the development pipeline. Due to the sometimes dire consequences when a harmful agent is overlooked, toxicologists work under strict guidelines to effectively catalogue and classify new chemical agents. Conventional assays involve long experimental hours and many manual steps that increase the probability of user error; errors that can potentially manifest as inaccurate toxicology results. Automated assays can overcome many potential mistakes that arise due to human error. In the presented work, we created and validated a novel, automated platform for a microphysiological assay that can examine cellular attributes with sensors measuring changes in cellular metabolic rate, oxygen consumption, and vitality mediated by exposure to a potentially toxic agent. The system was validated with low buffer culture medium with varied conductivities that caused changes in the measured impedance on integrated impedance electrodes.
COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.
Hromadka, T.V.; Yen, C.C.; Guymon, G.L.
1985-01-01
The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
The cost of adherence mismeasurement in serious mental illness: a claims-based analysis.
Shafrin, Jason; Forma, Felicia; Scherer, Ethan; Hatch, Ainslie; Vytlacil, Edward; Lakdawalla, Darius
2017-05-01
To quantify how adherence mismeasurement affects the estimated impact of adherence on inpatient costs among patients with serious mental illness (SMI). Proportion of days covered (PDC) is a common claims-based measure of medication adherence. Because PDC does not measure medication ingestion, however, it may inaccurately measure adherence. We derived a formula to correct the bias that occurs in adherence-utilization studies resulting from errors in claims-based measures of adherence. We conducted a literature review to identify the correlation between gold-standard and claims-based adherence measures. We derived a bias-correction methodology to address claims-based medication adherence measurement error. We then applied this methodology to a case study of patients with SMI who initiated atypical antipsychotics in 2 large claims databases. Our literature review identified 6 studies of interest. The 4 most relevant ones measured correlations between 0.38 and 0.91. Our preferred estimate implies that the effect of adherence on inpatient spending estimated from claims data would understate the true effect by a factor of 5.3, if there were no other sources of bias. Although our procedure corrects for measurement error, such error also may amplify or mitigate other potential biases. For instance, if adherent patients are healthier than nonadherent ones, measurement error makes the resulting bias worse. On the other hand, if adherent patients are sicker, measurement error mitigates the other bias. Measurement error due to claims-based adherence measures is worth addressing, alongside other more widely emphasized sources of bias in inference.
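Illustration (my assumption that the correction behaves like the classical errors-in-variables attenuation result; the paper's derivation may differ): if the claims-based adherence measure correlates with gold-standard adherence at ρ, a naive regression coefficient is attenuated by roughly the reliability ρ², so the correction factor is about 1/ρ². A correlation near 0.43, within the 0.38-0.91 range cited above, gives a factor of about 5.4, close to the reported 5.3; 0.43 is an illustrative value, not a figure stated in the abstract.

# sketch of the classical attenuation calculation (assumed to approximate the paper's correction)
for rho in (0.38, 0.43, 0.91):        # correlations spanning the range cited in the review
    reliability = rho ** 2            # share of variance in the claims measure that is signal
    correction = 1.0 / reliability    # factor by which the naive estimate is scaled up
    print(f"rho = {rho:.2f}: naive effect understated by a factor of {correction:.1f}")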
Addressing Systematic Errors in Correlation Tracking on HMI Magnetograms
NASA Astrophysics Data System (ADS)
Mahajan, Sushant S.; Hathaway, David H.; Munoz-Jaramillo, Andres; Martens, Petrus C.
2017-08-01
Correlation tracking in solar magnetograms is an effective method to measure the differential rotation and meridional flow on the solar surface. However, since the tracking accuracy required to successfully measure meridional flow is very high, small systematic errors have a noticeable impact on measured meridional flow profiles. Additionally, the uncertainties of this kind of measurements have been historically underestimated, leading to controversy regarding flow profiles at high latitudes extracted from measurements which are unreliable near the solar limb.Here we present a set of systematic errors we have identified (and potential solutions), including bias caused by physical pixel sizes, center-to-limb systematics, and discrepancies between measurements performed using different time intervals. We have developed numerical techniques to get rid of these systematic errors and in the process improve the accuracy of the measurements by an order of magnitude.We also present a detailed analysis of uncertainties in these measurements using synthetic magnetograms and the quantification of an upper limit below which meridional flow measurements cannot be trusted as a function of latitude.
van de Plas, Afke; Slikkerveer, Mariëlle; Hoen, Saskia; Schrijnemakers, Rick; Driessen, Johanna; de Vries, Frank; van den Bemt, Patricia
2017-01-01
In this controlled before-after study the effect of improvements, derived from Lean Six Sigma strategy, on parenteral medication administration errors and the potential risk of harm was determined. During baseline measurement, on control versus intervention ward, at least one administration error occurred in 14 (74%) and 6 (46%) administrations with potential risk of harm in 6 (32%) and 1 (8%) administrations. Most administration errors with high potential risk of harm occurred in bolus injections: 8 (57%) versus 2 (67%) bolus injections were injected too fast with a potential risk of harm in 6 (43%) and 1 (33%) bolus injections on control and intervention ward. Implemented improvement strategies, based on major causes of too fast administration of bolus injections, were: Substitution of bolus injections by infusions, education, availability of administration information and drug round tabards. Post intervention, on the control ward in 76 (76%) administrations at least one error was made (RR 1.03; CI95:0.77-1.38), with a potential risk of harm in 14 (14%) administrations (RR 0.45; CI95:0.20-1.02). In 40 (68%) administrations on the intervention ward at least one error occurred (RR 1.47; CI95:0.80-2.71) but no administrations were associated with a potential risk of harm. A shift in wrong duration administration errors from bolus injections to infusions, with a reduction of potential risk of harm, seems to have occurred on the intervention ward. Although data are insufficient to prove an effect, Lean Six Sigma was experienced as a suitable strategy to select tailored improvements. Further studies are required to prove the effect of the strategy on parenteral medication administration errors.
Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric
2010-04-01
The purpose of this study is to determine the effect of scanning on either side of Gafchromic EBT and EBT2 film on the measured optical density, using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flatbed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate that inverting the film scanning side can introduce errors of 17.8% in the gamma index (3%/3 mm criteria) for a head and neck intensity-modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.
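The scanning-side effect acts through the measured optical density. A minimal sketch of the underlying conversion, assuming 16-bit red-channel pixel values and a hypothetical polynomial calibration curve (the coefficients are placeholders, not the calibration from this study):

```python
import numpy as np

def optical_density(pixel_value, unexposed_value=65535):
    """Net optical density from a 16-bit scanner pixel value."""
    return -np.log10(pixel_value / unexposed_value)

def dose_from_od(od, coeffs=(0.0, 8.0, 15.0, 30.0)):
    """Map optical density to dose (Gy) with a polynomial calibration curve.
    The coefficients are placeholders; in practice the curve is fitted to films
    exposed to known doses and scanned with a fixed protocol."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * od + a2 * od**2 + a3 * od**3

# Effect of a small OD shift, e.g. from scanning face up versus face down
# in reflection mode (the shift below is hypothetical).
od_face_up = optical_density(30000)
od_face_down = optical_density(28500)
d1, d2 = dose_from_od(od_face_up), dose_from_od(od_face_down)
print(f"dose difference: {100 * (d2 - d1) / d1:.1f} %")
```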
Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.
Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M
2018-01-01
Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate ( p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.
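The comparison of error rates across prescriber types above rests on Pearson's chi-square test applied to a contingency table. A minimal sketch with illustrative counts (the per-group counts are not given in the abstract):

```python
from scipy.stats import chi2_contingency

# Rows: prescriber groups (e.g. physicians in training, attending physicians,
# advanced practice providers); columns: [erroneous orders, non-erroneous orders].
# Counts are illustrative, not the study's data.
table = [
    [80, 60],
    [40, 70],
    [27, 28],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```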
Muthu, Satish; Childress, Amy; Brant, Jonathan
2014-08-15
Membrane fouling was assessed from a fundamental standpoint within the context of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model. The DLVO model requires that the properties of the membrane and foulant(s) be quantified. Membrane surface charge (zeta potential) and free energy values are characterized using streaming potential and contact angle measurements, respectively. Comparing theoretical assessments for membrane-colloid interactions between research groups requires that the variability of the measured inputs be established. The impact of such variability in input values on the outcome of interfacial models must be quantified to determine an acceptable variance in inputs. An interlaboratory study was conducted to quantify the variability in streaming potential and contact angle measurements when using standard protocols. The propagation of uncertainty from these errors was evaluated in terms of their impact on the quantitative and qualitative conclusions on extended DLVO (XDLVO) calculated interaction terms. The error introduced into XDLVO calculated values was of the same magnitude as the calculated free energy values at contact and at any given separation distance. For two independent laboratories to draw similar quantitative conclusions regarding membrane-foulant interfacial interactions, the standard error in contact angle values must be ≤2.5°, while that for the zeta potential values must be ≤7 mV. Copyright © 2014 Elsevier Inc. All rights reserved.
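The sensitivity of calculated surface energy terms to contact angle error, which motivates the ≤2.5° requirement above, can be illustrated by Monte Carlo propagation through the simplest (Lifshitz-van der Waals only) Young-Dupré relation; the liquid properties and nominal angle below are assumptions, not values from the interlaboratory study:

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_s_lw(theta_deg, gamma_l_lw=50.8):
    """Solid LW surface energy (mJ/m^2) from the contact angle of an apolar
    probe liquid (diiodomethane, gamma_L = gamma_L^LW ~ 50.8 mJ/m^2), using
    gamma_L (1 + cos theta) = 2 sqrt(gamma_S^LW * gamma_L^LW)."""
    theta = np.deg2rad(theta_deg)
    return gamma_l_lw * (1.0 + np.cos(theta))**2 / 4.0

theta_nominal = 45.0  # assumed nominal contact angle (degrees)
for se_theta in (1.0, 2.5, 5.0):
    samples = gamma_s_lw(rng.normal(theta_nominal, se_theta, 100_000))
    print(f"SE(theta) = {se_theta} deg -> "
          f"gamma_S^LW = {samples.mean():.1f} +/- {samples.std():.1f} mJ/m^2")
```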
The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.
2015-01-01
Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…
Fault tolerance with noisy and slow measurements and preparation.
Paz-Silva, Gerardo A; Brennen, Gavin K; Twamley, Jason
2010-09-03
It is not so well known that measurement-free quantum error correction protocols can be designed to achieve fault-tolerant quantum computing. Despite their potential advantages in terms of the relaxation of accuracy, speed, and addressing requirements, they have usually been overlooked since they are expected to yield a very bad threshold. We show that this is not the case. We design fault-tolerant circuits for the 9-qubit Bacon-Shor code and find an error threshold for unitary gates and preparation of p_thresh^(p,g) = 3.76 × 10^-5 (30% of the best known result for the same code using measurement) while admitting up to 1/3 error rates for measurements and allocating no constraints on measurement speed. We further show that demanding gate error rates sufficiently below the threshold pushes the preparation threshold up to p_thresh^(p) = 1/3.
Prescribing Errors Involving Medication Dosage Forms
Lesar, Timothy S
2002-01-01
CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138
The impact of estimation errors on evaluations of timber production opportunities.
Dennis L. Schweitzer
1970-01-01
Errors in estimating costs and returns, the timing of harvests, and the cost of using funds can greatly affect the apparent desirability of investments in timber production. Partial derivatives are used to measure the impact of these errors on the predicted present net worth of potential investments in timber production. Graphs that illustrate the impact of each type...
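A sketch of the partial-derivative sensitivity analysis described above, applied to a simple one-rotation present-net-worth model; the revenue, cost, discount rate, and rotation length are hypothetical:

```python
import sympy as sp

R, C, r, T = sp.symbols('R C r T', positive=True)

# Present net worth of a single rotation: revenue R at harvest year T,
# establishment cost C at year 0, discount rate r.
pnw = R / (1 + r)**T - C

# Partial derivatives measure the impact of estimation errors in each input.
sensitivities = {name: sp.diff(pnw, var) for name, var in
                 [('revenue', R), ('cost', C), ('rate', r), ('timing', T)]}

values = {R: 3000, C: 400, r: 0.05, T: 40}   # hypothetical stand
print(f"PNW = {float(pnw.subs(values)):.2f}")
for name, expr in sensitivities.items():
    print(f"d(PNW)/d({name}) = {sp.simplify(expr)} = {float(expr.subs(values)):.2f}")
```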
Thyroid cancer following scalp irradiation: a reanalysis accounting for uncertainty in dosimetry.
Schafer, D W; Lubin, J H; Ron, E; Stovall, M; Carroll, R J
2001-09-01
In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.
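The distinction between classical and Berkson multiplicative dose errors discussed above can be illustrated by simulating their effect on a fitted linear dose-response slope; the dose distribution, error size, and true slope are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_slope = 5000, 2.0

def fitted_slope(dose_driving_outcome, dose_used_in_regression):
    """Outcomes are generated from the first dose; the regression uses the second."""
    y = true_slope * dose_driving_outcome + rng.normal(0, 1.0, n)
    return np.polyfit(dose_used_in_regression, y, 1)[0]

true_dose = rng.lognormal(mean=0.0, sigma=0.5, size=n)
# Multiplicative error with mean 1 (mean of log chosen as -sigma^2/2).
mult_err = rng.lognormal(mean=-0.08, sigma=0.4, size=n)

# Classical multiplicative error: the recorded dose scatters around the true dose.
recorded_dose = true_dose * mult_err
print("classical (attenuated):", fitted_slope(true_dose, recorded_dose))

# Berkson multiplicative error: the assigned dose is recorded exactly, but each
# person's true dose scatters around it; the slope stays roughly unbiased.
assigned_dose = rng.lognormal(mean=0.0, sigma=0.5, size=n)
print("Berkson (near true):   ", fitted_slope(assigned_dose * mult_err, assigned_dose))
```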
Meurier, C E
2000-07-01
Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.
A simulation study to quantify the impacts of exposure ...
Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. Methods: ZIP-code-level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Results: Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3–85% for population error, and 31–85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copoll
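A stripped-down version of the simulation design described above can be written as a Poisson time-series with correlated classical exposure errors; the baseline rate, error variances, and correlation below are illustrative, not the Atlanta values:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_days = 1460  # four years of daily data

# Standardized "true" exposures for a main pollutant and a correlated copollutant.
z_main, z_co = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n_days).T

# Health model: RR = 1.05 per unit of the standardized main exposure
# (standing in for "per IQR"), null association for the copollutant.
log_rr_main, log_rr_co = np.log(1.05), 0.0
counts = rng.poisson(np.exp(np.log(50) + log_rr_main * z_main + log_rr_co * z_co))

# Observed exposures carry correlated classical measurement error.
e_main, e_co = rng.multivariate_normal([0, 0], [[0.8, 0.5], [0.5, 0.8]], n_days).T
X = sm.add_constant(np.column_stack([z_main + e_main, z_co + e_co]))

fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print("true log RRs:     ", log_rr_main, log_rr_co)
print("estimated log RRs:", fit.params[1:])  # attenuation / spurious copollutant effect
```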
Awareness of deficits and error processing after traumatic brain injury.
Larson, Michael J; Perlstein, William M
2009-10-28
Severe traumatic brain injury is frequently associated with alterations in performance monitoring, including reduced awareness of physical and cognitive deficits. We examined the relationship between awareness of deficits and electrophysiological indices of performance monitoring, including the error-related negativity and posterror positivity (Pe) components of the scalp-recorded event-related potential, in 16 traumatic brain injury survivors who completed a Stroop color-naming task while event-related potential measurements were recorded. Awareness of deficits was measured as the discrepancy between patient and significant-other ratings on the Frontal Systems Behavior Scale. The amplitude of the Pe, but not error-related negativity, was reliably associated with decreased awareness of deficits. Results indicate that Pe amplitude may serve as an electrophysiological indicator of awareness of abilities and deficits.
Human factors process failure modes and effects analysis (HF PFMEA) software tool
NASA Technical Reports Server (NTRS)
Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)
2011-01-01
Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. A likelihood of occurrence, detection, and correction of the human error is identified. The severity of the effect of the human error is identified. From the likelihood of occurrence and the severity of the effect, the risk of potential harm is identified. The risk of potential harm is compared with a risk threshold to identify the appropriateness of corrective measures.
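A minimal sketch of the risk-scoring step described above, using the conventional severity-occurrence-detection product compared against a threshold; the scales, threshold, and example failure mode are generic FMEA conventions, not necessarily those of the patented tool:

```python
from dataclasses import dataclass

@dataclass
class HumanErrorMode:
    description: str
    severity: int     # 1 (negligible effect) .. 10 (catastrophic effect)
    occurrence: int   # 1 (rare) .. 10 (almost certain)
    detection: int    # 1 (almost certainly detected/corrected) .. 10 (undetectable)

    @property
    def risk_priority_number(self) -> int:
        """Conventional FMEA risk priority number."""
        return self.severity * self.occurrence * self.detection

def needs_corrective_measures(mode: HumanErrorMode, threshold: int = 100) -> bool:
    """Compare the risk of potential harm against a risk threshold."""
    return mode.risk_priority_number >= threshold

mode = HumanErrorMode("technician omits torque check on fastener", 8, 3, 6)
print(mode.risk_priority_number, needs_corrective_measures(mode))
```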
Lane, Sandi J; Troyer, Jennifer L; Dienemann, Jacqueline A; Laditka, Sarah B; Blanchette, Christopher M
2014-01-01
Older adults are at greatest risk of medication errors during the transition period of the first 7 days after admission and readmission to a skilled nursing facility (SNF). The aim of this study was to evaluate structure- and process-related factors that contribute to medication errors and harm during transition periods at an SNF. Data for medication errors and potential medication errors during the 7-day transition period for residents entering North Carolina SNFs were from the Medication Error Quality Initiative-Individual Error database from October 2006 to September 2007. The impact of SNF structure and process measures on the number of reported medication errors and harm from errors was examined using bivariate and multivariate model methods. A total of 138 SNFs reported 581 transition period medication errors; 73 (12.6%) caused harm. Chain affiliation was associated with a reduction in the volume of errors during the transition period. One third of all reported transition errors occurred during the medication administration phase of the medication use process, where dose omissions were the most common type of error; however, dose omissions caused harm less often than wrong-dose errors did. Prescribing errors were much less common than administration errors but were much more likely to cause harm. Both structure and process measures of quality were related to the volume of medication errors. However, process quality measures may play a more important role in predicting harm from errors during the transition of a resident into an SNF. Medication errors during transition could be reduced by improving both prescribing processes and transcription and documentation of orders.
Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data
Zhao, Shanshan
2014-01-01
Summary Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
Recommendations to Improve the Accuracy of Estimates of Physical Activity Derived from Self Report
Ainsworth, Barbara E; Caspersen, Carl J; Matthews, Charles E; Mâsse, Louise C; Baranowski, Tom; Zhu, Weimo
2013-01-01
Context Assessment of physical activity using self-report has the potential for measurement error that can lead to incorrect inferences about physical activity behaviors and bias study results. Objective To provide recommendations to improve the accuracy of physical activity derived from self report. Process We provide an overview of presentations and a compilation of perspectives shared by the authors of this paper and workgroup members. Findings We identified a conceptual framework for reducing errors using physical activity self-report questionnaires. The framework identifies six steps to reduce error: (1) identifying the need to measure physical activity, (2) selecting an instrument, (3) collecting data, (4) analyzing data, (5) developing a summary score, and (6) interpreting data. Underlying the first four steps are behavioral parameters of type, intensity, frequency, and duration of physical activities performed, activity domains, and the location where activities are performed. We identified ways to reduce measurement error at each step and made recommendations for practitioners, researchers, and organizational units to reduce error in questionnaire assessment of physical activity. Conclusions Self-report measures of physical activity have a prominent role in research and practice settings. Measurement error can be reduced by applying the framework discussed in this paper. PMID:22287451
Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta
2017-09-19
Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
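Regression calibration, identified above as the most common correction approach, replaces the error-prone exposure with its expected true value estimated from a calibration substudy. A minimal sketch with simulated data standing in for a food-frequency questionnaire (FFQ), a reference instrument, and a continuous outcome:

```python
import numpy as np

rng = np.random.default_rng(42)
n_main, n_cal = 5000, 500

true_intake = rng.normal(0, 1, n_main)
ffq = true_intake + rng.normal(0, 1.2, n_main)           # error-prone instrument
outcome = 0.5 * true_intake + rng.normal(0, 1, n_main)   # true effect = 0.5

# Calibration substudy: FFQ and a (near) unbiased reference measure.
cal_idx = rng.choice(n_main, n_cal, replace=False)
reference = true_intake[cal_idx] + rng.normal(0, 0.3, n_cal)

# Naive estimate: regress the outcome on the FFQ directly (attenuated).
naive = np.polyfit(ffq, outcome, 1)[0]

# Regression calibration: estimate E[true | FFQ] in the substudy, then
# correct the naive coefficient by the calibration slope.
lam = np.polyfit(ffq[cal_idx], reference, 1)[0]
corrected = naive / lam

print(f"naive = {naive:.3f}, calibration slope = {lam:.3f}, corrected = {corrected:.3f}")
```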
Marquardt, Lynn; Eichele, Heike; Lundervold, Astri J.; Haavik, Jan; Eichele, Tom
2018-01-01
Introduction: Attention-deficit hyperactivity disorder (ADHD) is one of the most frequent neurodevelopmental disorders in children and tends to persist into adulthood. Evidence from neuropsychological, neuroimaging, and electrophysiological studies indicates that alterations of error processing are core symptoms in children and adolescents with ADHD. To test whether adults with ADHD show persisting deficits and compensatory processes, we investigated performance monitoring during stimulus-evaluation and response-selection, with a focus on errors, as well as within-group correlations with symptom scores. Methods: Fifty-five participants (27 ADHD and 28 controls) aged 19–55 years performed a modified flanker task during EEG recording with 64 electrodes, and the ADHD and control groups were compared on measures of behavioral task performance, event-related potentials of performance monitoring (N2, P3), and error processing (ERN, Pe). Adult ADHD Self-Report Scale (ASRS) was used to assess ADHD symptom load. Results: Adults with ADHD showed higher error rates in incompatible trials, and these error rates correlated positively with the ASRS scores. Also, we observed lower P3 amplitudes in incompatible trials, which were inversely correlated with symptom load in the ADHD group. Adults with ADHD also displayed reduced error-related ERN and Pe amplitudes. There were no significant differences in reaction time (RT) and RT variability between the two groups. Conclusion: Our findings show deviations of electrophysiological measures, suggesting reduced effortful engagement of attentional and error-monitoring processes in adults with ADHD. Associations between ADHD symptom scores, event-related potential amplitudes, and poorer task performance in the ADHD group further support this notion. PMID:29706908
Arndt, Stefan K; Irawan, Andi; Sanders, Gregor J
2015-12-01
Relative water content (RWC) and the osmotic potential (π) of plant leaves are important plant traits that can be used to assess drought tolerance or adaptation of plants. We estimated the magnitude of errors that are introduced by dilution of π from apoplastic water in osmometry methods and the errors that occur during rehydration of leaves for RWC and π in 14 different plant species from trees, grasses and herbs. Our data indicate that rehydration technique and length of rehydration can introduce significant errors in both RWC and π. Leaves from all species were fully turgid after 1-3 h of rehydration and increasing the rehydration time resulted in a significant underprediction of RWC. Standing rehydration via the petiole introduced the least errors while rehydration via floating disks and submerging leaves for rehydration led to a greater underprediction of RWC. The same effect was also observed for π. The π values following standing rehydration could be corrected by applying a dilution factor from apoplastic water dilution using an osmometric method but not by using apoplastic water fraction (AWF) from pressure volume (PV) curves. The apoplastic water dilution error was between 5 and 18%, while the two other rehydration methods introduced much greater errors. We recommend the use of the standing rehydration method because (1) the correct rehydration time can be evaluated by measuring water potential, (2) overhydration effects were smallest, and (3) π can be accurately corrected by using osmometric methods to estimate apoplastic water dilution. © 2015 Scandinavian Plant Physiology Society.
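Both quantities at issue above are computed from simple weight and osmometer readings. A minimal sketch, where the apoplastic-dilution correction is written in the commonly used form pi_symplast = pi_measured / (1 - apoplastic fraction); the numbers are hypothetical and the exact correction applied by the authors may differ:

```python
def relative_water_content(fresh_w, turgid_w, dry_w):
    """RWC (%) from fresh, fully rehydrated (turgid), and oven-dry weights."""
    return 100.0 * (fresh_w - dry_w) / (turgid_w - dry_w)

def corrected_osmotic_potential(pi_measured, apoplastic_fraction):
    """Correct an osmometer-based osmotic potential (MPa) for dilution of the
    symplastic sap by apoplastic water released on freeze-thaw.
    Assumed form: pi_symplast = pi_measured / (1 - apoplastic fraction)."""
    return pi_measured / (1.0 - apoplastic_fraction)

# Hypothetical leaf: 0.42 g fresh, 0.50 g after standing rehydration, 0.12 g dry.
print(f"RWC = {relative_water_content(0.42, 0.50, 0.12):.1f} %")

# Measured pi of -1.50 MPa diluted by ~10% apoplastic water.
print(f"corrected pi = {corrected_osmotic_potential(-1.50, 0.10):.2f} MPa")
```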
2016-01-01
Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations and showed that there was an increase in the standard deviation with each of the errors that became exponentially greater with the magnitude of the error. The potential magnitude of the resulting errors in the reported prevalence of malnutrition was compared with published international data and found to be sufficient to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys and the data upon which diagnostic cut-off points are derived to define “health” has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training & supervision, routine estimation of the likely magnitude of errors using standardization tests, use of statistical likelihood of error to exclude data from analysis and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
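The core of the simulation described above can be reproduced in a few lines: impose random error of increasing magnitude on a z-score population and count how often values fall below the -2 cut-off; the population parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Weight-for-height z-scores of a moderately malnourished population (assumed).
true_z = rng.normal(-0.8, 1.0, n)
print(f"true prevalence below -2 z: {100 * np.mean(true_z < -2.0):.1f} %")

# Random measurement error widens the distribution, inflating the tail
# below the diagnostic cut-off and hence the reported prevalence.
for error_sd in (0.2, 0.4, 0.6):
    observed_z = true_z + rng.normal(0.0, error_sd, n)
    print(f"error SD = {error_sd}: observed prevalence = "
          f"{100 * np.mean(observed_z < -2.0):.1f} %")
```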
Systematic study of error sources in supersonic skin-friction balance measurements
NASA Technical Reports Server (NTRS)
Allen, J. M.
1976-01-01
An experimental study was performed to investigate potential error sources in data obtained with a self-nulling, moment-measuring, skin-friction balance. The balance was installed in the sidewall of a supersonic wind tunnel, and independent measurements of the three forces contributing to the balance output (skin friction, lip force, and off-center normal force) were made for a range of gap size and element protrusion. The relatively good agreement between the balance data and the sum of these three independently measured forces validated the three-term model used. No advantage to a small gap size was found; in fact, the larger gaps were preferable. Perfect element alignment with the surrounding test surface resulted in very small balance errors. However, if small protrusion errors are unavoidable, no advantage was found in having the element slightly below the surrounding test surface rather than above it.
Utilizing measure-based feedback in control-mastery theory: A clinical error.
Snyder, John; Aafjes-van Doorn, Katie
2016-09-01
Clinical errors and ruptures are an inevitable part of clinical practice. Oftentimes, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy, such as the Patient's Experience of Attunement and Responsiveness scale (PEAR), can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Trattner, Sigal; Prinsen, Peter; Wiegert, Jens; Gerland, Elazar-Lars; Shefer, Efrat; Morton, Tom; Thompson, Carla M; Yagil, Yoad; Cheng, Bin; Jambawalikar, Sachin; Al-Senan, Rani; Amurao, Maxwell; Halliburton, Sandra S; Einstein, Andrew J
2017-12-01
Metal-oxide-semiconductor field-effect transistors (MOSFETs) serve as a helpful tool for organ radiation dosimetry and their use has grown in computed tomography (CT). While different approaches have been used for MOSFET calibration, those using the commonly available 100 mm pencil ionization chamber have not incorporated measurements performed throughout its length, and moreover, no previous work has rigorously evaluated the multiple sources of error involved in MOSFET calibration. In this paper, we propose a new MOSFET calibration approach to translate MOSFET voltage measurements into absorbed dose from CT, based on serial measurements performed throughout the length of a 100-mm ionization chamber, and perform an analysis of the errors of MOSFET voltage measurements and four sources of error in calibration. MOSFET calibration was performed at two sites, to determine single calibration factors for tube potentials of 80, 100, and 120 kVp, using a 100-mm-long pencil ion chamber and a cylindrical computed tomography dose index (CTDI) phantom of 32 cm diameter. The dose profile along the 100-mm ion chamber axis was sampled in 5 mm intervals by nine MOSFETs in the nine holes of the CTDI phantom. Variance of the absorbed dose was modeled as a sum of the MOSFET voltage measurement variance and the calibration factor variance, the latter comprising three main subcomponents: ionization chamber reading variance, MOSFET-to-MOSFET variation, and a contribution related to the fact that the average calibration factor of a few MOSFETs was used as an estimate for the average value of all MOSFETs. MOSFET voltage measurement error was estimated based on sets of repeated measurements. The overall error in the calibration factor was calculated from the above analysis. Calibration factors determined were close to those reported in the literature and by the manufacturer (~3 mV/mGy), ranging from 2.87 to 3.13 mV/mGy. The error σV of a MOSFET voltage measurement was shown to be proportional to the square root of the voltage V: σV = c√V, where c = 0.11 mV. A main contributor to the error in the calibration factor was the ionization chamber reading error, with a 5% error. The use of a single calibration factor for all MOSFETs introduced an additional error of about 5-7%, depending on the number of MOSFETs that were used to determine the single calibration factor. The expected overall error in a high-dose region (~30 mGy) was estimated to be about 8%, compared to 6% when an individual MOSFET calibration was performed. For a low-dose region (~3 mGy), these values were 13% and 12%. A MOSFET calibration method was developed using a 100-mm pencil ion chamber and a CTDI phantom, accompanied by an absorbed dose error analysis reflecting multiple sources of measurement error. When using a single calibration factor, per tube potential, for different MOSFETs, only a small error was introduced into absorbed dose determinations, thus supporting the use of a single calibration factor for experiments involving many MOSFETs, such as those required to accurately estimate radiation effective dose. © 2017 American Association of Physicists in Medicine.
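The dose conversion and the reported voltage-error model combine into a small propagation sketch; the calibration factor and its relative uncertainty are taken loosely from the values quoted above, and combining only these two error sources in quadrature (ignoring the other contributors) is a simplifying assumption:

```python
import math

def mosfet_dose(voltage_mv, calib_factor=3.0, calib_rel_err=0.07, c=0.11):
    """Absorbed dose (mGy) from a MOSFET voltage reading (mV) with a single
    per-tube-potential calibration factor (mV/mGy).

    Voltage error model from the abstract: sigma_V = c * sqrt(V).
    The calibration-factor relative error (~5-7%) and the voltage error are
    combined in quadrature, assuming they are independent."""
    dose = voltage_mv / calib_factor
    rel_err_voltage = c * math.sqrt(voltage_mv) / voltage_mv
    rel_err_dose = math.sqrt(rel_err_voltage**2 + calib_rel_err**2)
    return dose, 100.0 * rel_err_dose

for v in (9.0, 90.0):  # roughly 3 mGy and 30 mGy readings
    dose, pct = mosfet_dose(v)
    print(f"V = {v:5.1f} mV -> dose = {dose:5.1f} mGy +/- {pct:.1f} %")
```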
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Eas M.
2003-01-01
The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.
Matsumoto, Shokei; Jung, Kyoungwon; Smith, Alan; Coimbra, Raul
2018-06-23
To establish the preventable and potentially preventable death rates in a mature trauma center and to identify the causes of death and highlight the lessons learned from these cases. We analyzed data from a Level-1 Trauma Center Registry, collected over a 15-year period. Data on demographics, timing of death, and potential errors were collected. Deaths were judged as preventable (PD), potentially preventable (PPD), or non-preventable (NPD), following a strict external peer-review process. During the 15-year period, there were 874 deaths, 15 (1.7%) and 6 (0.7%) of which were considered PPDs and PDs, respectively. Patients in the PD and PPD groups were not sicker and had less severe head injury than those in the NPD group. The time-death distribution differed according to preventability. We identified 21 errors in the PD and PPD groups, but only 61 (7.3%) errors in the NPD group (n = 853). Errors in judgment accounted for the majority of errors overall and for 90.5% of the errors in the PD and PPD groups. Although the numbers of PDs and PPDs were low, denoting maturity of our trauma center, there are important lessons to be learned about how errors in judgment led to deaths that could have been prevented.
Water displacement leg volumetry in clinical studies - A discussion of error sources
2010-01-01
Background Water displacement leg volumetry is a highly reproducible method, allowing the confirmation of efficacy of vasoactive substances. Nevertheless, errors in its execution and the selection of unsuitable patients are likely to negatively affect the outcome of clinical studies in chronic venous insufficiency (CVI). Discussion Placebo-controlled, double-blind drug studies in CVI were searched (Cochrane Review 2005, MedLine Search until December 2007) and assessed with regard to efficacy (volume reduction of the leg), patient characteristics, and potential methodological error sources. Almost every second study reported only small drug effects (≤ 30 mL volume reduction). The conduct of volumetry was identified as the most relevant error source. Because the practical use of available equipment varies, volume differences of more than 300 mL - which is several times a potential treatment effect - have been reported between consecutive measurements. Other potential error sources were insufficient patient guidance or difficulties with the transition from the Widmer CVI classification to the CEAP (Clinical Etiological Anatomical Pathophysiological) grading. Summary Patients should be properly diagnosed with CVI and selected for stable oedema and further clinical symptoms relevant for the specific study. Centres require thorough training on the use of the volumeter and on patient guidance. Volumetry should be performed under constant conditions. The reproducibility of short-term repeat measurements has to be ensured. PMID:20070899
Development and Assessment of a Medication Safety Measurement Program in a Long-Term Care Pharmacy.
Hertig, John B; Hultgren, Kyle E; Parks, Scott; Rondinelli, Rick
2016-02-01
Medication errors continue to be a major issue in the health care system, including in long-term care facilities. While many hospitals and health systems have developed methods to identify, track, and prevent these errors, long-term care facilities historically have not invested in these error-prevention strategies. The objective of this study was two-fold: 1) to develop a set of medication-safety process measures for dispensing in a long-term care pharmacy, and 2) to analyze the data from those measures to determine the relative safety of the process. The study was conducted at In Touch Pharmaceuticals in Valparaiso, Indiana. To assess the safety of the medication-use system, each step was documented using a comprehensive flowchart (process flow map) tool. Once completed and validated, the flowchart was used to complete a "failure modes and effects analysis" (FMEA) identifying ways a process may fail. Operational gaps found during FMEA were used to identify points of measurement. The research identified a set of eight measures as potential areas of failure; data were then collected on each one of these. More than 133,000 medication doses (opportunities for errors) were included in the study during the research time frame (April 1, 2014, to June 4, 2014). Overall, there was an approximate order-entry error rate of 15.26%, with intravenous errors at 0.37%. A total of 21 errors migrated through the entire medication-use system. These 21 errors in 133,000 opportunities resulted in a final check error rate of 0.015%. A comprehensive medication-safety measurement program was designed and assessed. This study demonstrated the ability to detect medication errors in a long-term pharmacy setting, thereby making process improvements measurable. Future, larger, multi-site studies should be completed to test this measurement program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew
Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence using these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, in which the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST for a 1.5 MW turbine. The impact of lidar turbulence error on the predicted power from these different models is examined to determine the degree of turbulence measurement accuracy needed for accurate power prediction.
Measurement error in environmental epidemiology and the shape of exposure-response curves.
Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E
2011-09-01
Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health.
Preparations for Global Precipitation Measurement(GPM)Ground Validation
NASA Technical Reports Server (NTRS)
Bidwell, S. W.; Bibyk, I. K.; Duming, J. F.; Everett, D. F.; Smith, E. A.; Wolff, D. B.
2004-01-01
The Global Precipitation Measurement (GPM) program is an international partnership led by the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM will improve climate, weather, and hydro-meteorological forecasts through more frequent and more accurate measurement of precipitation across the globe. This paper describes the concept and the preparations for Ground Validation within the GPM program. Ground Validation (GV) plays a critical role in the program by investigating and quantitatively assessing the errors within the satellite retrievals. These quantitative estimates of retrieval errors will assist the scientific community by bounding the errors within their research products. The two fundamental requirements of the GPM Ground Validation program are: (1) error characterization of the precipitation retrievals and (2) continual improvement of the satellite retrieval algorithms. These two driving requirements determine the measurements, instrumentation, and location for ground observations. This paper describes GV plans for estimating the systematic and random components of retrieval error and for characterizing the spatial and temporal structure of the error. This paper describes the GPM program for algorithm improvement in which error models are developed and experimentally explored to uncover the physical causes of errors within the retrievals. GPM will ensure that information gained through Ground Validation is applied to future improvements in the spaceborne retrieval algorithms. This paper discusses the potential locations for validation measurement and research, the anticipated contributions of GPM's international partners, and the interaction of Ground Validation with other GPM program elements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbee, D; McCarthy, A; Galavis, P
Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The number of potential failure modes the PlanCheck script is currently capable of checking for is as follows: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in a reduction in error rates and improved efficiency. Future work includes: initiating full FMEA for the planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
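The data-driven matching step at the heart of the algorithm described above can be sketched as follows: compare the measured dose-rate trace against the calculation for each candidate dosimeter position, keep the best-fitting candidate, and flag a true error only if even that candidate disagrees beyond tolerance. This is an illustrative reconstruction with hypothetical numbers, not the published implementation:

```python
import numpy as np

def best_candidate(measured, calculated_by_position, tolerance=0.10):
    """measured: dose rates per dwell position (1D array).
    calculated_by_position: dict mapping candidate dosimeter positions to the
    dose rates expected at that position for the planned dwell sequence."""
    scores = {
        pos: float(np.mean(np.abs(measured - calc) / np.maximum(calc, 1e-9)))
        for pos, calc in calculated_by_position.items()
    }
    pos, score = min(scores.items(), key=lambda kv: kv[1])
    verdict = ("no error (possibly misreconstructed dosimeter)"
               if score <= tolerance
               else "true error suspected (applicator/guide-tube deviation)")
    return pos, score, verdict

# Hypothetical dwell-by-dwell dose rates (Gy/h) for three candidate positions.
calc = {
    "original reconstruction": np.array([2.0, 3.5, 5.0, 3.0]),
    "shifted 3 mm":            np.array([2.4, 3.8, 4.4, 2.6]),
    "shifted 6 mm":            np.array([2.9, 4.1, 3.9, 2.2]),
}
measured = np.array([2.5, 3.9, 4.3, 2.5])  # consistent with the 3 mm shift, not an error
print(best_candidate(measured, calc))
```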
Measuring the Lense-Thirring precession using a second Lageos satellite
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Ciufolini, I.
1989-01-01
A complete numerical simulation and error analysis were performed for the proposed experiment with the objective of establishing an accurate assessment of the feasibility and the potential accuracy of the measurement of the Lense-Thirring precession. Consideration was given to identifying the error sources which limit the accuracy of the experiment and to proposing procedures for eliminating or reducing the effect of these errors. Analytic investigations were conducted to study the effects of major error sources with the objective of providing error bounds on the experiment. The analysis of realistic simulated data is used to demonstrate that satellite laser ranging of two Lageos satellites, orbiting with supplemental inclinations, collected for a period of 3 years or more, can be used to verify the Lense-Thirring precession. A comprehensive covariance analysis for the solution was also developed.
Masked and unmasked error-related potentials during continuous control and feedback
NASA Astrophysics Data System (ADS)
Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.
2018-06-01
The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor’s position by means of a joystick. The cursor’s position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor’s trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
NASA Astrophysics Data System (ADS)
Yang, Pan; Ng, Tze Ling
2017-11-01
Accurate rainfall measurement at high spatial and temporal resolutions is critical for the modeling and management of urban storm water. In this study, we conduct computer simulation experiments to test the potential of a crowd-sourcing approach, where smartphones, surveillance cameras, and other devices act as precipitation sensors, as an alternative to the traditional approach of using rain gauges to monitor urban rainfall. The crowd-sourcing approach is promising as it has the potential to provide high-density measurements, albeit with relatively large individual errors. We explore the potential of this approach for urban rainfall monitoring and the subsequent implications for storm water modeling through a series of simulation experiments involving synthetically generated crowd-sourced rainfall data and a storm water model. The results show that even under conservative assumptions, crowd-sourced rainfall data lead to more accurate modeling of storm water flows as compared to rain gauge data. We observe the relative superiority of the crowd-sourcing approach to vary depending on crowd participation rate, measurement accuracy, drainage area, choice of performance statistic, and crowd-sourced observation type. A possible reason for our findings is the differences between the error structures of crowd-sourced and rain gauge rainfall fields resulting from the differences between the errors and densities of the raw measurement data underlying the two field types.
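The trade-off examined above, many noisy observations versus a few accurate gauges, can be illustrated with a one-step simulation of areal rainfall estimation; the field statistics, sensor densities, and error levels are arbitrary assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(11)
n_cells = 400  # grid cells covering an urban catchment

# A spatially variable "true" rainfall field (mm/h) for one time step.
true_field = rng.gamma(shape=2.0, scale=5.0, size=n_cells)
true_mean = true_field.mean()

def areal_error(n_sensors, rel_error_sd):
    """Relative error of the catchment-average estimate from n randomly placed
    sensors, each with multiplicative observation error of the given relative SD."""
    cells = rng.choice(n_cells, n_sensors, replace=False)
    obs = true_field[cells] * (1 + rng.normal(0, rel_error_sd, n_sensors))
    return abs(obs.mean() - true_mean) / true_mean

trials = 2000
gauge = np.mean([areal_error(3, 0.05) for _ in range(trials)])   # few, accurate sensors
crowd = np.mean([areal_error(80, 0.50) for _ in range(trials)])  # many, noisy sensors
print(f"gauges: {100 * gauge:.1f} %   crowd-sourced: {100 * crowd:.1f} %")
```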
Hsieh, Shulan; Li, Tzu-Hsien; Tsai, Ling-Ling
2010-04-01
To examine whether monetary incentives attenuate the negative effects of sleep deprivation on cognitive performance in a flanker task that requires higher-level cognitive-control processes, including error monitoring. Twenty-four healthy adults aged 18 to 23 years were randomly divided into 2 subject groups: one received and the other did not receive monetary incentives for performance accuracy. Both subject groups performed a flanker task and underwent electroencephalographic recordings for event-related brain potentials after normal sleep and after 1 night of total sleep deprivation in a within-subject, counterbalanced, repeated-measures study design. Monetary incentives significantly enhanced the response accuracy and reaction time variability under both normal sleep and sleep-deprived conditions, and they reduced the effects of sleep deprivation on the subjective effort level, the amplitude of the error-related negativity (an error-related event-related potential component), and the latency of the P300 (an event-related potential variable related to attention processes). However, monetary incentives could not attenuate the effects of sleep deprivation on any measures of behavior performance, such as the response accuracy, reaction time variability, or posterror accuracy adjustments; nor could they reduce the effects of sleep deprivation on the amplitude of the Pe, another error-related event-related potential component. This study shows that motivation incentives selectively reduce the effects of total sleep deprivation on some brain activities, but they cannot attenuate the effects of sleep deprivation on performance decrements in tasks that require high-level cognitive-control processes. Thus, monetary incentives and sleep deprivation may act through both common and different mechanisms to affect cognitive performance.
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2011 CFR
2011-01-01
... to calculate bias corrections and measurement limits of error. (3) Ensure that potential sources of... determine significant contributors to the measurement uncertainties associated with inventory differences and shipper-receiver differences, so that if SEID exceeds the limits established in paragraph (c)(4...
The incidence and severity of errors in pharmacist-written discharge medication orders.
Onatade, Raliat; Sawieres, Sara; Veck, Alexandra; Smith, Lindsay; Gore, Shivani; Al-Azeib, Sumiah
2017-08-01
Background Errors in discharge prescriptions are problematic. When hospital pharmacists write discharge prescriptions, improvements are seen in the quality and efficiency of discharge. There is limited information on the incidence of errors in pharmacists' medication orders. Objective To investigate the extent and clinical significance of errors in pharmacist-written discharge medication orders. Setting 1000-bed teaching hospital in London, UK. Method Pharmacists in this London hospital routinely write discharge medication orders as part of the clinical pharmacy service. Convenient days, based on researcher availability, between October 2013 and January 2014 were selected. Pre-registration pharmacists reviewed all discharge medication orders written by pharmacists on these days and identified discrepancies between the medication history, inpatient chart, patient records and discharge summary. A senior clinical pharmacist confirmed the presence of an error. Each error was assigned a potential clinical significance rating (based on the NCCMERP scale) by a physician and an independent senior clinical pharmacist, working separately. Main outcome measure Incidence of errors in pharmacist-written discharge medication orders. Results 509 prescriptions, written by 51 pharmacists, containing 4258 discharge medication orders were assessed (8.4 orders per prescription). Ten prescriptions (2%) contained a total of ten erroneous orders (order error rate 0.2%). The pharmacist considered that one error had the potential to cause temporary harm (0.02% of all orders). The physician did not rate any of the errors as having the potential to cause harm. Conclusion The incidence of errors in pharmacists' discharge medication orders was low. The quality, safety and policy implications of pharmacists routinely writing discharge medication orders should be further explored.
Outliers: A Potential Data Problem.
ERIC Educational Resources Information Center
Douzenis, Cordelia; Rakow, Ernest A.
Outliers, extreme data values relative to others in a sample, may distort statistics that assume interval levels of measurement and normal distributions. The outlier may be a valid value or an error. Several procedures are available for identifying outliers, and each may be applied to errors of prediction from the regression lines for utility in a…
Performance-Based Measurement: Action for Organizations and HPT Accountability
ERIC Educational Resources Information Center
Larbi-Apau, Josephine A.; Moseley, James L.
2010-01-01
Basic measurements and applications of six selected general but critical operational performance-based indicators--effectiveness, efficiency, productivity, profitability, return on investment, and benefit-cost ratio--are presented. With each measurement, goals and potential impact are explored. Errors, risks, limitations to measurements, and a…
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
Validity of body composition assessment methods for older men with cardiac disease.
Young, H; Porcari, J; Terry, L; Brice, G
1998-01-01
This study was designed to determine which of several body composition assessment methods was most accurate for patients with cardiac disease for the purpose of outcome measurement. Six body composition assessment methods were administered to each of 24 men with cardiac disease. Methods included circumference measurement, skinfold measurement, near-infrared interactance via the Futrex-5000, bioelectrical impedance via the BioAnalogics ElectroLipoGraph and Tanita TBF-150, and hydrostatic weighing, the criterion measure. A repeated measures analysis of variance indicated no significant (P > .05) difference between circumference and skinfold measurements compared to hydrostatic weighing. Near-infrared interactance presented the best standard error of estimates (3.5%) and the best correlation (r = .84) with hydrostatic weighing; however, the constant error was 3.76%. Bioelectrical impedance measured by the ElectroLipoGraph and TBF-150 instruments significantly underestimated percent body fat by 8.81% and 4.8%, respectively. In this study of middle-aged to older men with cardiac disease, the best method for determining body fat was circumferences. This technique was accurate, easy to administer, inexpensive, and had a lower error potential than the other techniques. Skinfold measurements were also closely related to hydrostatic weighing, but should be performed only by experienced practitioners because there is a greater potential for tester error in certain patients. In the future, near-infrared interactance measurements may be a viable technique for body composition assessment in patients with cardiac disease. However, algorithms specific to the population of patients with cardiac disease being tested must be developed before this technique can be routinely recommended for body composition assessment. Bioelectrical impedance assessment by either method is not recommended for patients with cardiac disease, as it consistently underestimated percent body fat when compared to hydrostatic weighing in this population.
Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities
NASA Astrophysics Data System (ADS)
Eyuboglu, B. Murat; Pilkington, Theo C.
1993-08-01
In order to determine in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two-dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung) was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
Chang, Wen-Pin; Davies, Patricia L; Gavin, William J
2010-10-01
Recent studies have investigated the relationship between psychological symptoms and personality traits and error monitoring measured by error-related negativity (ERN) and error positivity (Pe) event-related potential (ERP) components, yet there remains a paucity of studies examining the collective simultaneous effects of psychological symptoms and personality traits on error monitoring. This present study, therefore, examined whether measures of hyperactivity-impulsivity, depression, anxiety and antisocial personality characteristics could collectively account for significant interindividual variability of both ERN and Pe amplitudes, in 29 healthy adults with no known disorders, ages 18-30 years. The bivariate zero-order correlation analyses found that only the anxiety measure was significantly related to both ERN and Pe amplitudes. However, multiple regression analyses that included all four characteristic measures while controlling for number of segments in the ERP average revealed that both depression and antisocial personality characteristics were significant predictors for the ERN amplitudes whereas antisocial personality was the only significant predictor for the Pe amplitude. These findings suggest that psychological symptoms and personality traits are associated with individual variations in error monitoring in healthy adults, and future studies should consider these variables when comparing group difference in error monitoring between adults with and without disabilities. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality
Gaeuman, David; Jacobson, Robert B.
2005-01-01
When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by mis-alignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass mis-alignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that makes use of ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
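A rough way to see the compass-misalignment scaling is purely geometric: rotating the velocity component that must be removed using the compass heading by a small angle delta leaves a residual of magnitude 2·v·sin(delta/2). The sketch below evaluates that expression for a few platform speeds and misalignment angles; it illustrates the scaling only and is not the authors' full numerical analysis.

```python
import numpy as np

# Illustrative geometry only: the residual error left after removing a
# velocity component of magnitude v_instr with a compass mis-aligned by
# delta degrees has magnitude 2 * v_instr * sin(delta / 2).
def misalignment_error(v_instr_ms, delta_deg):
    delta = np.radians(delta_deg)
    return 2.0 * v_instr_ms * np.sin(delta / 2.0)

for v in (0.5, 1.0, 2.0):        # platform (instrument) speeds, m/s
    for d in (1.0, 2.0, 5.0):    # compass mis-alignment, degrees
        err = misalignment_error(v, d)
        print(f"v_instr={v:.1f} m/s, delta={d:.0f} deg -> error ~ {err*100:.1f} cm/s")
# Even a 2-degree mis-alignment at 1 m/s platform speed leaves a ~3.5 cm/s
# error, which can rival target velocities in low-energy habitats.
```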
Refining Field Measurements of Methane Flux Rates from Abandoned Oil and Gas Wells
NASA Astrophysics Data System (ADS)
Lagron, C. S.; Kang, M.; Riqueros, N. S.; Jackson, R. B.
2015-12-01
Recent studies in Pennsylvania demonstrate the potential for significant methane emissions from abandoned oil and gas wells. A subset of tested wells was high emitting, with methane flux rates up to seven orders of magnitude greater than natural fluxes (up to 10^5 mg CH4/hour, or about 2.5 LPM). These wells contribute disproportionately to the total methane emissions from abandoned oil and gas wells. The principles guiding the chamber design have been developed for lower flux rates, typically found in natural environments, and chamber design modifications may reduce uncertainty in flux rates associated with high-emitting wells. Kang et al. estimate errors of a factor of two in measured values based on previous studies. We conduct controlled releases of methane to refine error estimates and improve chamber design with a focus on high-emitters. Controlled releases of methane are conducted at 0.05 LPM, 0.50 LPM, 1.0 LPM, 2.0 LPM, 3.0 LPM, and 5.0 LPM, and at two chamber dimensions typically used in field measurement studies of abandoned wells. As most sources of error tabulated by Kang et al. tend to bias the results toward underreporting of methane emissions, a flux-targeted chamber design modification can reduce error margins and/or provide grounds for a potential upward revision of emission estimates.
NASA Astrophysics Data System (ADS)
Wetterling, F.; Liehr, M.; Schimpf, P.; Liu, H.; Haueisen, J.
2009-09-01
The non-invasive localization of focal heart activity via body surface potential measurements (BSPM) could greatly benefit the understanding and treatment of arrhythmic heart diseases. However, the in vivo validation of source localization algorithms is rather difficult with currently available measurement techniques. In this study, we used a physical torso phantom composed of different conductive compartments and seven dipoles, which were placed in the anatomical position of the human heart in order to assess the performance of the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. Electric potentials were measured on the torso surface for single dipoles with and without further uncorrelated or correlated dipole activity. The localization error averaged 11 ± 5 mm over 22 dipoles, which shows the ability of RAP-MUSIC to distinguish an uncorrelated dipole from surrounding source activity. For the first time, real computational modelling errors could be included within the validation procedure due to the physically modelled heterogeneities. In conclusion, the introduced heterogeneous torso phantom can be used to validate state-of-the-art algorithms under nearly realistic measurement conditions.
NASA Astrophysics Data System (ADS)
Fernandez, Alvaro; Müller, Inigo A.; Rodríguez-Sanz, Laura; van Dijk, Joep; Looser, Nathan; Bernasconi, Stefano M.
2017-12-01
Carbonate clumped isotopes offer a potentially transformational tool to interpret Earth's history, but the proxy is still limited by poor interlaboratory reproducibility. Here, we focus on the uncertainties that result from the analysis of only a few replicate measurements to understand the extent to which unconstrained errors affect calibration relationships and paleoclimate reconstructions. We find that highly precise data can be routinely obtained with multiple replicate analyses, but this is not always done in many laboratories. For instance, using published estimates of external reproducibilities we find that typical clumped isotope measurements (three replicate analyses) have margins of error at the 95% confidence level (CL) that are too large for many applications. These errors, however, can be systematically reduced with more replicate measurements. Second, using a Monte Carlo-type simulation we demonstrate that the degree of disagreement on published calibration slopes is about what we should expect considering the precision of Δ47 data, the number of samples and replicate analyses, and the temperature range covered in published calibrations. Finally, we show that the way errors are typically reported in clumped isotope data can be problematic and lead to the impression that data are more precise than warranted. We recommend that uncertainties in Δ47 data should no longer be reported as the standard error of a few replicate measurements. Instead, uncertainties should be reported as margins of error at a specified confidence level (e.g., 68% or 95% CL). These error bars are a more realistic indication of the reliability of a measurement.
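The reporting recommendation can be made concrete with a few lines of code: compute the replicate mean, the standard error, and the t-based margin of error at a stated confidence level. The replicate values below are hypothetical, not data from the paper.

```python
import numpy as np
from scipy import stats

# Sketch of the recommendation: quote a margin of error at a stated
# confidence level rather than the standard error of a few replicates.
def margin_of_error(replicates, confidence=0.95):
    reps = np.asarray(replicates, dtype=float)
    n = reps.size
    sem = reps.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
    return reps.mean(), sem, t_crit * sem

d47 = [0.712, 0.698, 0.705]     # three hypothetical D47 replicates (per mil)
mean, sem, moe95 = margin_of_error(d47)
print(f"mean = {mean:.4f}, SE = {sem:.4f}, 95% CL margin = {moe95:.4f} per mil")
# With n = 3 the t critical value is ~4.3, so the 95% margin of error is far
# larger than the bare standard error; it shrinks as replicates are added.
```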
Determinants of Wealth Fluctuation: Changes in Hard-To-Measure Economic Variables in a Panel Study
Pfeffer, Fabian T.; Griffin, Jamie
2017-01-01
Measuring fluctuation in families’ economic conditions is the raison d’être of household panel studies. Accordingly, a particularly challenging critique is that extreme fluctuation in measured economic characteristics might indicate compounding measurement error rather than actual changes in families’ economic wellbeing. In this article, we address this claim by moving beyond the assumption that particularly large fluctuation in economic conditions might be too large to be realistic. Instead, we examine predictors of large fluctuation, capturing sources related to actual socio-economic changes as well as potential sources of measurement error. Using the Panel Study of Income Dynamics, we study between-wave changes in a dimension of economic wellbeing that is especially hard to measure, namely, net worth as an indicator of total family wealth. Our results demonstrate that even very large between-wave changes in net worth can be attributed to actual socio-economic and demographic processes. We do, however, also identify a potential source of measurement error that contributes to large wealth fluctuation, namely, the treatment of incomplete information, presenting a pervasive challenge for any longitudinal survey that includes questions on economic assets. Our results point to ways for improving wealth variables both in the data collection process (e.g., by measuring active savings) and in data processing (e.g., by improving imputation algorithms). PMID:28316752
The potential for geostationary remote sensing of NO2 to improve weather prediction
NASA Astrophysics Data System (ADS)
Liu, X.; Mizzi, A. P.; Anderson, J. L.; Fung, I. Y.; Cohen, R. C.
2017-12-01
Observations of surface winds remain sparse, making it challenging to simulate and predict the weather in circumstances of light winds that are most important for poor air quality. Direct measurements of short-lived chemicals from space might be a solution to this challenge. Here we investigate the application of data assimilation of NO2 columns as will be observed from geostationary orbit to improve predictions and retrospective analysis of surface wind fields. Specifically, synthetic NO2 observations are sampled from a "nature run (NR)" regarded as the true atmosphere. Then NO2 observations are assimilated using EAKF methods into a "control run (CR)" which differs from the NR in the wind field. Wind errors are generated by (1) introducing errors in the initial conditions, (2) creating a model error by using two different formulations for the planetary boundary layer, and (3) combining both of these effects. Assimilation of NO2 column observations succeeds in reducing wind errors, indicating the prospects for future geostationary atmospheric composition measurements to improve weather forecasting are substantial. We find that due to the temporal heterogeneity of wind errors, the success of this application favors chemical observations of high frequency, such as those from a geostationary platform. We also show the potential to improve the soil moisture field by assimilating NO2 columns.
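For readers unfamiliar with the assimilation step, the sketch below is a minimal scalar EAKF-style update with invented numbers: the observed variable (an NO2 column) is shifted and contracted toward the observation, and the observation increments are regressed onto an unobserved state variable (a wind component). The study itself assimilates NO2 columns into a full chemistry-transport model, so this is only the core arithmetic of the update.

```python
import numpy as np

# Minimal scalar sketch of an ensemble adjustment Kalman filter (EAKF) update,
# with made-up numbers; not the configuration used in the study.
rng = np.random.default_rng(1)
n_ens = 40
wind = rng.normal(5.0, 1.5, n_ens)                       # prior ensemble of a wind component (m/s)
no2_col = 2.0e15 + 0.4e15 * (wind - 5.0) + rng.normal(0, 0.1e15, n_ens)  # simulated NO2 columns

obs, obs_var = 2.6e15, (0.2e15) ** 2                     # assumed observation and its error variance

y_mean, y_var = no2_col.mean(), no2_col.var(ddof=1)
post_var = 1.0 / (1.0 / y_var + 1.0 / obs_var)
post_mean = post_var * (y_mean / y_var + obs / obs_var)

# Shift and contract the observed-variable ensemble, then regress the
# observation increments onto the (unobserved) wind variable.
y_post = post_mean + np.sqrt(post_var / y_var) * (no2_col - y_mean)
dy = y_post - no2_col
wind_post = wind + np.cov(wind, no2_col, ddof=1)[0, 1] / y_var * dy

print(f"prior wind mean {wind.mean():.2f} m/s -> posterior {wind_post.mean():.2f} m/s")
```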
Evaluation of the 3dMDface system as a tool for soft tissue analysis.
Hong, C; Choi, K; Kachroo, Y; Kwon, T; Nguyen, A; McComb, R; Moon, W
2017-06-01
To evaluate the accuracy of three-dimensional stereophotogrammetry by comparing values obtained from direct anthropometry and the 3dMDface system. To achieve a more comprehensive evaluation of the reliability of 3dMD, both linear and surface measurements were examined. UCLA Section of Orthodontics. Mannequin head as model for anthropometric measurements. Image acquisition and analysis were carried out on a mannequin head using 16 anthropometric landmarks and 21 measured parameters for linear and surface distances. 3D images using 3dMDface system were made at 0, 1 and 24 hours; 1, 2, 3 and 4 weeks. Error magnitude statistics used include mean absolute difference, standard deviation of error, relative error magnitude and root mean square error. Intra-observer agreement for all measurements was attained. Overall mean errors were lower than 1.00 mm for both linear and surface parameter measurements, except in 5 of the 21 measurements. The three longest parameter distances showed increased variation compared to shorter distances. No systematic errors were observed for all performed paired t tests (P<.05). Agreement values between two observers ranged from 0.91 to 0.99. Measurements on a mannequin confirmed the accuracy of all landmarks and parameters analysed in this study using the 3dMDface system. Results indicated that 3dMDface system is an accurate tool for linear and surface measurements, with potentially broad-reaching applications in orthodontics, surgical treatment planning and treatment evaluation. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Dysfunctional error-related processing in female psychopathy
Steele, Vaughn R.; Edwards, Bethany G.; Bernat, Edward M.; Calhoun, Vince D.; Kiehl, Kent A.
2016-01-01
Neurocognitive studies of psychopathy have predominantly focused on male samples. Studies have shown that female psychopaths exhibit similar affective deficits as their male counterparts, but results are less consistent across cognitive domains including response modulation. As such, there may be potential gender differences in error-related processing in psychopathic personality. Here we investigate response-locked event-related potential (ERP) components [the error-related negativity (ERN/Ne) related to early error-detection processes and the error-related positivity (Pe) involved in later post-error processing] in a sample of incarcerated adult female offenders (n = 121) who performed a response inhibition Go/NoGo task. Psychopathy was assessed using the Hare Psychopathy Checklist-Revised (PCL-R). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Consistent with previous research performed in psychopathic males, female psychopaths exhibited specific deficiencies in the neural correlates of post-error processing (as indexed by reduced Pe amplitude) but not in error monitoring (as indexed by intact ERN/Ne amplitude). Specifically, psychopathic traits reflecting interpersonal and affective dysfunction remained significant predictors of both time-domain and PCA measures reflecting reduced Pe mean amplitude. This is the first evidence to suggest that incarcerated female psychopaths exhibit similar dysfunctional post-error processing as male psychopaths. PMID:26060326
Linear error analysis of slope-area discharge determinations
Kirby, W.H.
1987-01-01
The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
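The propagation rule the abstract refers to can be written as var(Q) ≈ w^T C w, where w holds the partial derivatives of the discharge formula and C is the covariance matrix of the observational errors. The sketch below evaluates this with placeholder weights and covariances, not values from the paper.

```python
import numpy as np

# Sketch of first-order (Taylor-series) error propagation: if Q = f(x1..xn),
# then var(Q) ~ w^T C w, with w the vector of sensitivities dQ/dx_i and C the
# covariance matrix of the observational errors. All numbers are placeholders.
def propagated_variance(weights, cov):
    w = np.asarray(weights, dtype=float)
    return float(w @ np.asarray(cov, dtype=float) @ w)

w = [1.0, -0.5, 0.8]                      # hypothetical relative sensitivities
C = [[0.04, 0.01, 0.00],                  # hypothetical error covariance matrix
     [0.01, 0.09, 0.02],
     [0.00, 0.02, 0.16]]
print(f"relative error variance of computed discharge ~ {propagated_variance(w, C):.3f}")
# The off-diagonal covariance terms are what make correlated observational
# errors either inflate or partly cancel the total variance.
```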
On the application of photogrammetry to the fitting of jawbone-anchored bridges.
Strid, K G
1985-01-01
Misfit between a jawbone-anchored bridge and the abutments in the patient's jaw may result in, for example, fixture fracture. To achieve improved alignment, the bridge base could be prepared in a numerically-controlled tooling machine using measured abutment coordinates as primary data. For each abutment, the measured values must comprise the coordinates of a reference surface as well as the spatial orientation of the fixture/abutment longitudinal axis. Stereophotogrammetry was assumed to be the measuring method of choice. To assess its potential, a lower-jaw model with accurately positioned signals was stereophotographed and the films were measured in a stereocomparator. Model-space coordinates, computed from the image coordinates, were compared to the known signal coordinates. The root-mean-square error in position was determined to be 0.03-0.08 mm, the maximum individual error amounting to 0.12 mm, whereas the r.m.s. error in axis direction was found to be 0.5-1.5 degrees with a maximum individual error of 1.8 degrees. These errors are of the same order as can be achieved by careful impression techniques. The method could be useful, but because of its complexity, stereophotogrammetry is not recommended as a standard procedure.
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
Image-based overlay measurement using subsurface ultrasonic resonance force microscopy
NASA Astrophysics Data System (ADS)
Tamer, M. S.; van der Lans, M. J.; Sadeghian, H.
2018-03-01
Image Based Overlay (IBO) measurement is one of the most common techniques used in Integrated Circuit (IC) manufacturing to extract the overlay error values. The overlay error is measured using dedicated overlay targets which are optimized to increase the accuracy and the resolution, but these features are much larger than the IC feature size. IBO measurements are realized on the dedicated targets instead of product features, because the current overlay metrology solutions, mainly based on optics, cannot provide sufficient resolution on product features. However, considering the fact that the overlay error tolerance is approaching 2 nm, the overlay error measurement on product features becomes a need for the industry. For sub-nanometer resolution metrology, Scanning Probe Microscopy (SPM) is widely used, though at the cost of very low throughput. The semiconductor industry is interested in non-destructive imaging of buried structures under one or more layers for the application of overlay and wafer alignment, specifically through optically opaque media. Recently an SPM technique has been developed for imaging subsurface features which can be potentially considered as a solution for overlay metrology. In this paper we present the use of Subsurface Ultrasonic Resonance Force Microscopy (SSURFM) used for IBO measurement. We used SSURFM for imaging the most commonly used overlay targets on a silicon substrate and photoresist. As a proof of concept we have imaged surface and subsurface structures simultaneously. The surface and subsurface features of the overlay targets are fabricated with programmed overlay errors of +/-40 nm, +/-20 nm, and 0 nm. The top layer thickness changes between 30 nm and 80 nm. Using SSURFM the surface and subsurface features were successfully imaged and the overlay errors were extracted, via a rudimentary image processing algorithm. The measurement results are in agreement with the nominal values of the programmed overlay errors.
Gillis, A; Miller, D R
2000-10-09
A series of controlled environment experiments were conducted to examine the use of a dynamic flux chamber to measure soil emission and absorption of total gaseous mercury (TGM). Uncertainty about the appropriate airflow rates through the chamber and chamber exposure to ambient wind are shown to be major sources of potential error. Soil surface mercury flux measurements over a range of chamber airflow rates showed a positive linear relationship between flux rates and airflow rate through the chamber. Mercury flux measurements using the chamber in an environmental wind tunnel showed that exposure of the system to ambient winds decreased the measured flux rates by 40% at a wind speed of 1.0 m s(-1) and 90% at a wind speed of 2 m s(-1). Wind tunnel measurements also showed that the chamber footprint was limited to the area of soil inside the chamber and there is little uncertainty of the footprint size in dry soil.
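For context, a dynamic flux chamber derives the flux from a simple mass balance, flux = Q(C_out - C_in)/A, so the result scales directly with the assumed chamber flow rate Q. The sketch below evaluates this with invented concentrations and chamber dimensions, not the authors' data.

```python
# Sketch of the standard dynamic flux chamber mass balance (illustrative
# numbers only): flux = Q * (C_out - C_in) / A.
def chamber_flux(c_out_ng_m3, c_in_ng_m3, flow_lpm, footprint_m2):
    q_m3_s = flow_lpm / 1000.0 / 60.0                              # L/min -> m^3/s
    flux_ng_m2_s = q_m3_s * (c_out_ng_m3 - c_in_ng_m3) / footprint_m2
    return flux_ng_m2_s * 3600.0                                   # ng m^-2 h^-1

c_in, c_out = 2.0, 6.5        # hypothetical TGM concentrations, ng/m^3
area = 0.03                   # hypothetical chamber footprint, m^2
for q in (1.0, 2.0, 5.0):     # chamber flow rates, L/min
    print(f"Q = {q:.0f} L/min -> flux = {chamber_flux(c_out, c_in, q, area):.1f} ng m^-2 h^-1")
# Because the derived flux is proportional to Q, an arbitrary choice of flow
# rate directly shifts the measured emission, as the experiments above report.
```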
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
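One simple way to picture the combination of multiple decomposition estimates is a consensus rule: keep a firing instance only when a majority of independent estimates place a firing within a small temporal tolerance. The sketch below implements that simplification with hypothetical firing times; it is not the published error-reduction algorithm.

```python
import numpy as np

# Hedged sketch of combining several decomposition estimates by majority vote
# within a temporal tolerance. This is a simplification for illustration only.
def consensus_firings(estimates_ms, tol_ms=2.0, min_votes=2):
    candidates = np.sort(np.concatenate(estimates_ms))
    kept, used = [], np.zeros(candidates.size, dtype=bool)
    for i, t in enumerate(candidates):
        if used[i]:
            continue
        group = np.abs(candidates - t) <= tol_ms
        votes = sum(np.any(np.abs(np.asarray(e) - t) <= tol_ms) for e in estimates_ms)
        if votes >= min_votes:
            kept.append(float(np.mean(candidates[group])))   # consensus firing time
        used |= group
    return kept

est_a = [10.0, 55.2, 101.0, 148.9]     # hypothetical firing times (ms) from three
est_b = [10.6, 54.8, 149.5, 180.0]     # independent decomposition runs
est_c = [9.8, 101.4, 149.1]
print(consensus_firings([est_a, est_b, est_c]))
# The isolated firing at 180 ms is detected by only one run and is discarded.
```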
Impact and quantification of the sources of error in DNA pooling designs.
Jawaid, A; Sham, P
2009-01-01
The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
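The differential-amplification adjustment mentioned above is commonly done by estimating a correction factor k from heterozygous individuals (whose true allele ratio is 1:1) and rescaling the pooled signal ratio. The sketch below shows that arithmetic with invented peak heights; the variable names are ours, not the authors'.

```python
# Sketch of correcting pooled allele frequency estimates for differential
# allelic amplification (illustrative numbers).
def corrected_pool_frequency(signal_a, signal_b, k):
    """Estimated frequency of allele A in the pool after correcting signal bias."""
    return signal_a / (signal_a + k * signal_b)

# k = mean allele-A to allele-B signal ratio in known heterozygotes,
# where the true copy-number ratio is 1:1.
het_ratios = [1.32, 1.25, 1.41, 1.28]
k = sum(het_ratios) / len(het_ratios)

pool_a, pool_b = 6200.0, 3800.0        # hypothetical peak heights from the pooled sample
naive = pool_a / (pool_a + pool_b)
corrected = corrected_pool_frequency(pool_a, pool_b, k)
print(f"k = {k:.2f}; naive frequency = {naive:.3f}; corrected frequency = {corrected:.3f}")
```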
Fellin, Francesco; Righetto, Roberto; Fava, Giovanni; Trevisan, Diego; Amelio, Dante; Farace, Paolo
2017-03-01
To investigate the range errors made in treatment planning due to the presence of the immobilization devices along the proton beam path. The water equivalent thickness (WET) of selected devices was measured with a high-energy spot and a multi-layer ionization chamber and compared with that predicted by the treatment planning system (TPS). Two treatment couches, two thermoplastic masks (both un-stretched and stretched) and one headrest were selected. In the TPS, every immobilization device was modelled as being part of the patient. The following parameters were assessed: CT acquisition protocol, dose-calculation grid-sizes (1.5 and 3.0 mm) and beam-entrance with respect to the devices (coplanar and non-coplanar). Finally, the potential errors produced by a wrong manual separation between the treatment couch and the CT table (not present during treatment) were investigated. In the thermoplastic mask, there was a clear effect due to beam entrance, a moderate effect due to the CT protocols and almost no effect due to TPS grid-size, with 1 mm errors observed only when thick un-stretched portions were crossed by non-coplanar beams. In the treatment couches the WET errors were negligible (<0.3 mm) regardless of the grid-size and CT protocol. The potential range errors produced in the manual separation between treatment couch and CT table were small with a 1.5 mm grid-size, but could be >0.5 mm with a 3.0 mm grid-size. In the headrest, WET errors were negligible (0.2 mm). With only one exception (un-stretched mask, non-coplanar beams), the WET of all the immobilization devices was properly modelled by the TPS. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kaczmarczyk, Michał
2017-11-01
The basic sources of information for determining the temperature distribution in the rock mass, and thus the potential for converting the thermal energy contained in geothermal water to electricity, are: temperature measurements under stable geothermic conditions, temperature measurements under unstable conditions, and measurements of maximum temperatures at the bottom of the well. Incorrect temperature estimation can lead to errors in the calculation of thermodynamic parameters and, consequently, in the economic viability of the project. The analysis was performed for a geothermal water temperature range of 86-100°C, for the dry working fluid R245fa. The calculations indicate an increase in geothermal power as the geothermal water temperature increases. At 86°C the potential power is 817.48 kW, increasing to 912.20 kW at 88°C and to 1493.34 kW at 100°C. These results are not surprising, but they show the scale of error in assessing the potential that can result from improper interpretation of rock mass and geothermal water temperatures.
USDA-ARS?s Scientific Manuscript database
The importance of measurement uncertainty in terms of calculation of model evaluation error statistics has been recently stated in the literature. The impact of measurement uncertainty on calibration results indicates the potential vague zone in the field of watershed modeling where the assumption ...
NASA Technical Reports Server (NTRS)
Diak, George R.; Stewart, Tod R.
1989-01-01
A method is presented for evaluating the fluxes of sensible and latent heating at the land surface, using satellite-measured surface temperature changes in a composite surface layer-mixed layer representation of the planetary boundary layer. The basic prognostic model is tested by comparison with synoptic station information at sites where surface evaporation climatology is well known. The remote sensing version of the model, using satellite-measured surface temperature changes, is then used to quantify the sharp spatial gradient in surface heating/evaporation across the central United States. An error analysis indicates that perhaps five levels of evaporation are recognizable by these methods and that the chief cause of error is the interaction of errors in the measurement of surface temperature change with errors in the assignment of surface roughness character. Finally, two new potential methods for remote sensing of the land-surface energy balance are suggested which will rely on space-borne instrumentation planned for the 1990s.
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
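The sketch below is a loose illustration of the patient-based QC idea described above, not the published model: predict each result from the rest of the panel, convert the discrepancy into an error score with a logistic function, and accumulate the scores in a one-sided CUSUM. All numbers, scales and thresholds are invented.

```python
import numpy as np

# Loose illustrative sketch of patient-based QC with a CUSUM of logistic
# error scores; not the published CSLR model.
rng = np.random.default_rng(2)

def error_score(measured, predicted, scale):
    z = abs(measured - predicted) / scale
    return 1.0 / (1.0 + np.exp(-(z - 2.0)))    # near 0 for small residuals, near 1 for large

def cusum(scores, drift=0.35):
    s, path = 0.0, []
    for p in scores:
        s = max(0.0, s + p - drift)             # reset-at-zero one-sided CUSUM
        path.append(s)
    return path

predicted = rng.normal(140.0, 2.0, 60)          # e.g. sodium predicted from the other panel tests
measured = predicted + rng.normal(0.0, 1.5, 60)
measured[40:] += 6.0                            # simulate a calibration shift after sample 40

scores = [error_score(m, p, scale=1.5) for m, p in zip(measured, predicted)]
path = cusum(scores)
alarm = next((i for i, s in enumerate(path) if s > 3.0), None)
print(f"shift introduced at sample 40; CUSUM crosses the alarm limit at sample {alarm}")
```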
Measurement and validation of benchmark-quality thick-target tungsten X-ray spectra below 150 kVp.
Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M
2000-11-01
Pulse-height distributions of two constant potential X-ray tubes with fixed anode tungsten targets were measured and unfolded. The measurements employed quantitative alignment of the beam, the use of two different semiconductor detectors (high-purity germanium and cadmium-zinc-telluride), two different ion chamber systems with beam-specific calibration factors, and various filter and tube potential combinations. Monte Carlo response matrices were generated for each detector for unfolding the pulse-height distributions into spectra incident on the detectors. These response matrices were validated for the low error bars assigned to the data. A significant aspect of the validation of spectra, and a detailed characterization of the X-ray tubes, involved measuring filtered and unfiltered beams at multiple tube potentials (30-150 kVp). Full corrections to ion chamber readings were employed to convert normalized fluence spectra into absolute fluence spectra. The characterization of fixed anode pitting and its dominance over exit window plating and/or detector dead layer was determined. An Appendix of tabulated benchmark spectra with assigned error ranges was developed for future reference.
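As a toy illustration of unfolding a pulse-height distribution with a detector response matrix, the sketch below uses a 4-bin response matrix and non-negative least squares; the real work relies on Monte Carlo response matrices with many channels, so all numbers here are invented.

```python
import numpy as np
from scipy.optimize import nnls

# Toy unfolding example: column j of R is the (normalized) detector response
# to photons in true-energy bin j; the measured pulse-height counts are
# R @ true_spectrum plus Poisson noise.
R = np.array([[0.80, 0.15, 0.05, 0.02],
              [0.15, 0.70, 0.20, 0.08],
              [0.04, 0.12, 0.65, 0.25],
              [0.01, 0.03, 0.10, 0.65]])

true_spectrum = np.array([100.0, 300.0, 250.0, 80.0])    # assumed incident fluence per bin
measured = R @ true_spectrum
measured_noisy = np.random.default_rng(3).poisson(measured).astype(float)

unfolded, residual = nnls(R, measured_noisy)             # non-negative least-squares unfolding
print("unfolded spectrum:", np.round(unfolded, 1))
print("true spectrum    :", true_spectrum)
```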
Phase measurement error in summation of electron holography series.
McLeod, Robert A; Bergen, Michael; Malac, Marek
2014-06-01
Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random-walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs with the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N
2015-07-01
The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this hypothesis.
Acute Respiratory Distress Syndrome Measurement Error. Potential Effect on Clinical Study Results
Cooke, Colin R.; Iwashyna, Theodore J.; Hofer, Timothy P.
2016-01-01
Rationale: Identifying patients with acute respiratory distress syndrome (ARDS) is a recognized challenge. Experts often have only moderate agreement when applying the clinical definition of ARDS to patients. However, no study has fully examined the implications of low reliability measurement of ARDS on clinical studies. Objectives: To investigate how the degree of variability in ARDS measurement commonly reported in clinical studies affects study power, the accuracy of treatment effect estimates, and the measured strength of risk factor associations. Methods: We examined the effect of ARDS measurement error in randomized clinical trials (RCTs) of ARDS-specific treatments and cohort studies using simulations. We varied the reliability of ARDS diagnosis, quantified as the interobserver reliability (κ-statistic) between two reviewers. In RCT simulations, patients identified as having ARDS were enrolled, and when measurement error was present, patients without ARDS could be enrolled. In cohort studies, risk factors as potential predictors were analyzed using reviewer-identified ARDS as the outcome variable. Measurements and Main Results: Lower reliability measurement of ARDS during patient enrollment in RCTs seriously degraded study power. Holding effect size constant, the sample size necessary to attain adequate statistical power increased by more than 50% as reliability declined, although the result was sensitive to ARDS prevalence. In a 1,400-patient clinical trial, the sample size necessary to maintain similar statistical power increased to over 1,900 when reliability declined from perfect to substantial (κ = 0.72). Lower reliability measurement diminished the apparent effectiveness of an ARDS-specific treatment from a 15.2% (95% confidence interval, 9.4–20.9%) absolute risk reduction in mortality to 10.9% (95% confidence interval, 4.7–16.2%) when reliability declined to moderate (κ = 0.51). In cohort studies, the effect on risk factor associations was similar. Conclusions: ARDS measurement error can seriously degrade statistical power and effect size estimates of clinical studies. The reliability of ARDS measurement warrants careful attention in future ARDS clinical studies. PMID:27159648
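To make the dilution mechanism concrete, the following Monte Carlo sketch (our simplification, not the authors' simulation) enrolls a mix of true-ARDS and non-ARDS patients and shows how the apparent absolute risk reduction of an ARDS-specific treatment shrinks as enrollment reliability falls. All event rates and sample sizes are assumed.

```python
import numpy as np

# Simplified Monte Carlo sketch: non-ARDS enrollees do not benefit from an
# ARDS-specific treatment, so the apparent effect is diluted.
rng = np.random.default_rng(4)

def simulated_arr(frac_true_ards, n_per_arm=700, p_control=0.40, true_arr=0.15, p_non_ards=0.25):
    has_ards = rng.random((2, n_per_arm)) < frac_true_ards
    p_treat = np.where(has_ards[0], p_control - true_arr, p_non_ards)   # treated arm mortality risk
    p_ctrl = np.where(has_ards[1], p_control, p_non_ards)               # control arm mortality risk
    deaths_t = rng.random(n_per_arm) < p_treat
    deaths_c = rng.random(n_per_arm) < p_ctrl
    return deaths_c.mean() - deaths_t.mean()                            # apparent absolute risk reduction

for frac in (1.0, 0.8, 0.6):
    arr = np.mean([simulated_arr(frac) for _ in range(500)])
    print(f"fraction of enrollees with true ARDS = {frac:.1f} -> apparent ARR ~ {arr:.3f}")
# The apparent absolute risk reduction shrinks roughly in proportion to the
# fraction of enrolled patients who truly have ARDS.
```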
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, T.L.; Gee, G.W.; Heller, P.R.
1990-08-01
A commercially available thermocouple psychrometer sample changer (Decagon SC-10A) was used to measure the water potential of field soils ranging in texture from sand to silty clay loam over a range of −0.5 to −20.0 MPa. The standard error of prediction based on regression statistics was generally between 0.04 and 0.14 MPa at −5 MPa. Replacing the measuring junction of the unit changed the calibration slightly; however, it did not significantly alter measurement accuracy. Calibration curves measured throughout a year of testing are consistent and indicate no systematic drift in calibration. Most measurement uncertainty is produced by shifts in the intercept in the calibration equation rather than the slope. Both the variability in intercept and the regression error seem to be random. Measurements taken with the SC-10A show that water potential in both sand and silt loam samples removed from 1.5-MPa pressure plates was often 0.5 to 1.0 MPa greater than the 1.5-MPa applied pressure. Limited data from 0.5-MPa pressure plates show close agreement between SC-10A measurements and pressure applied to these more permeable plates.
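As an illustration of the calibration statistics involved, the sketch below fits a linear calibration of water potential against psychrometer output for invented standard-solution data and reports the regression's standard error of prediction; the numbers are not from the report.

```python
import numpy as np

# Sketch of a psychrometer calibration: regress water potential of standard
# solutions against microvolt output, then invert for unknown samples.
psi_std = np.array([-0.5, -2.0, -5.0, -10.0, -20.0])     # MPa, hypothetical standards
uv_out = np.array([2.1, 8.3, 20.6, 41.5, 82.9])          # hypothetical microvolt readings

slope, intercept = np.polyfit(uv_out, psi_std, 1)          # psi = slope*uV + intercept
resid = psi_std - (slope * uv_out + intercept)
se_pred = np.sqrt(np.sum(resid**2) / (len(psi_std) - 2))   # standard error of prediction

uv_sample = 19.8
print(f"calibration: psi = {slope:.4f}*uV + {intercept:.3f} MPa")
print(f"sample water potential ~ {slope*uv_sample + intercept:.2f} +/- {se_pred:.2f} MPa")
# Shifts in the fitted intercept between calibration runs map directly onto
# offsets in the reported water potential, as noted in the abstract above.
```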
Huo, Ju; Zhang, Guiyang; Yang, Ming
2018-04-20
This paper is concerned with the anisotropic and non-identical gray-level distributions of feature points lying on a curved surface, for which a high-precision, uncertainty-resistant algorithm for pose estimation is proposed. The weighted contribution of uncertainty to the feature-point measurement-error objective function is analyzed. A novel error objective function based on the spatial collinearity error is then constructed by transforming the uncertainty into a covariance-weighted matrix, which is suitable for practical applications. Further, an optimized generalized orthogonal iterative (GOI) algorithm is used for the iterative solution, avoiding poor convergence and significantly resisting the uncertainty. The optimized GOI algorithm thus extends the range of usable fields of view and improves the accuracy and robustness of the measurement results through redundant information. Finally, simulation and practical experiments show that the maximum re-projection error of the target's image coordinates is less than 0.110 pixels. Within a 3000 mm×3000 mm×4000 mm volume, the maximum estimation errors of static and dynamic measurements of rocket nozzle motion are better than 0.065° and 0.128°, respectively. The results verify the high accuracy and uncertainty attenuation of the proposed approach, which therefore has potential for engineering applications.
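A minimal sketch of the covariance-weighted residual idea is shown below: each feature point's reprojection residual is weighted by the inverse of its measurement covariance (a squared Mahalanobis distance), so anisotropic, uncertain points contribute less. This shows only the form of such an objective, with assumed point data, and is not the authors' full GOI solver.

```python
import numpy as np

# Hedged sketch of a covariance-weighted image-residual objective.
def weighted_objective(observed_px, projected_px, covariances):
    total = 0.0
    for z, z_hat, cov in zip(observed_px, projected_px, covariances):
        r = np.asarray(z, float) - np.asarray(z_hat, float)
        total += float(r @ np.linalg.inv(cov) @ r)     # squared Mahalanobis distance
    return total

observed = [(120.3, 85.1), (240.9, 90.4)]              # hypothetical measured image points (px)
projected = [(120.0, 85.6), (241.5, 90.0)]             # hypothetical reprojections for a candidate pose
covs = [np.diag([0.25, 0.25]),                         # well-localized point
        np.array([[2.0, 0.6], [0.6, 1.2]])]            # anisotropic, uncertain point
print(f"weighted residual = {weighted_objective(observed, projected, covs):.3f}")
# A pose solver would minimize this objective, so noisy points with large
# covariances are automatically down-weighted.
```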
Crawford, Charles H; Glassman, Steven D; Gum, Jeffrey L; Carreon, Leah Y
2017-01-01
Advancements in the understanding of adult spinal deformity have led to a greater awareness of the role of the pelvis in maintaining sagittal balance and alignment. Pelvic incidence has emerged as a key radiographic measure and should closely match lumbar lordosis. As proper measurement of the pelvic incidence requires accurate identification of the S-1 endplate, lumbosacral transitional anatomy may lead to errors. The purpose of this study is to demonstrate how lumbosacral transitional anatomy may lead to errors in the measurement of pelvic parameters. The current case highlights one of the potential complications that can be avoided with awareness. The authors report the case of a 61-year-old man who had undergone prior lumbar surgeries and then presented with symptomatic lumbar stenosis and sagittal malalignment. Radiographs showed a lumbarized S-1. Prior numbering of the segments in previous surgical and radiology reports led to a pelvic incidence calculation of 61°. Corrected numbering of the segments using the lumbarized S-1 endplate led to a pelvic incidence calculation of 48°. Without recognition of the lumbosacral anatomy, overcorrection of the lumbar lordosis might have led to negative sagittal balance and the propensity to develop proximal junction failure. This case illustrates that improper identification of lumbosacral transitional anatomy may lead to errors that could affect clinical outcome. Awareness of this potential error may help improve patient outcomes.
[Failure modes and effects analysis in the prescription, validation and dispensing process].
Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T
2012-01-01
To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages included in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes that could produce an error. The possible causes, their potential effects, and the existing control systems were analysed to try to prevent them from developing. The Hazard Score was calculated, and failure modes with a score ≥ 8 were chosen; those with a Severity Index of 4 were selected regardless of their Hazard Score value. Corrective measures and an implementation plan were proposed. A flow diagram that describes the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventive measure, and the strategy for implementing it. Failure modes chosen: Prescription on the nurse's form; progress or treatment order (paper); Prescription to incorrect patient; Transcription error by nursing staff and pharmacist; Error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur and the causes. It has allowed us to analyse the effects on the safety of the process, and establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier España. All rights reserved.
Economic impact of medication error: a systematic review.
Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P
2017-05-01
Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ressel, Simon; Bill, Florian; Holtz, Lucas; Janshen, Niklas; Chica, Antonio; Flower, Thomas; Weidlich, Claudia; Struckmann, Thorsten
2018-02-01
The operation of vanadium redox flow batteries requires reliable in situ state of charge (SOC) monitoring. In this study, two SOC estimation approaches for the negative half cell are investigated. First, in situ open circuit potential measurements are combined with Coulomb counting in a one-step calibration of SOC and Nernst potential which doesn't need additional reference SOCs. In-sample and out-of-sample SOCs are estimated and analyzed, estimation errors ≤ 0.04 are obtained. In the second approach, temperature corrected in situ electrolyte density measurements are used for the first time in vanadium redox flow batteries for SOC estimation. In-sample and out-of-sample SOC estimation errors ≤ 0.04 demonstrate the feasibility of this approach. Both methods allow recalibration during battery operation. The actual capacity obtained from SOC calibration can be used in a state of health model.
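The one-step calibration described above couples Coulomb counting with a Nernst-type relation for the negative half cell. Below is a minimal sketch of that idea, assuming the open circuit potential follows E = E0' − (RT/F)·ln(SOC/(1−SOC)) and that the SOC evolves from an unknown initial value via the integrated current; the function names, starting values, and least-squares formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

F, R, T = 96485.0, 8.314, 298.15          # Faraday constant, gas constant, temperature (K)

def ocp_model(soc, e0):
    """Nernst-type open circuit potential of the negative half cell (V vs. reference)."""
    soc = np.clip(soc, 1e-6, 1 - 1e-6)
    return e0 - (R * T / F) * np.log(soc / (1.0 - soc))

def soc_from_coulomb_counting(t_s, current_a, soc0, capacity_ah):
    """Relative SOC change from the integrated current, added to an unknown initial SOC."""
    dq_ah = np.cumsum(current_a * np.gradient(t_s)) / 3600.0
    return soc0 + dq_ah / capacity_ah

def calibrate(t_s, current_a, ocp_v, capacity_ah):
    """Jointly fit the initial SOC and formal potential E0' to measured OCP (one-step calibration)."""
    def residuals(p):
        soc0, e0 = p
        soc = soc_from_coulomb_counting(t_s, current_a, soc0, capacity_ah)
        return ocp_model(soc, e0) - ocp_v
    fit = least_squares(residuals, x0=[0.5, -0.26], bounds=([0.0, -1.0], [1.0, 0.5]))
    return fit.x  # (estimated initial SOC, estimated E0')
```

Because both E0' and the initial SOC are fitted simultaneously against the measured OCP curve, such a procedure needs no external reference SOC and can in principle be re-run during operation to recalibrate, which is the property highlighted in the abstract.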
Minimizing Artifacts and Biases in Chamber-Based Measurements of Soil Respiration
NASA Astrophysics Data System (ADS)
Davidson, E. A.; Savage, K.
2001-05-01
Soil respiration is one of the largest and most important fluxes of carbon in terrestrial ecosystems. The objectives of this paper are to review concerns about uncertainties of chamber-based measurements of CO2 emissions from soils, to evaluate the direction and magnitude of these potential errors, and to explain procedures that minimize these errors and biases. Disturbance of diffusion gradients cause underestimate of fluxes by less than 15% in most cases, and can be partially corrected for with curve fitting and/or can be minimized by using brief measurement periods. Under-pressurization or over-pressurization of the chamber caused by flow restrictions in air circulating designs can cause large errors, but can also be avoided with properly sized chamber vents and unrestricted flows. Somewhat larger pressure differentials are observed under windy conditions, and the accuracy of measurements made under such conditions needs more research. Spatial and temporal heterogeneity can be addressed with appropriate chamber sizes and numbers and frequency of sampling. For example, means of 8 randomly chosen flux measurements from a population of 36 measurements made with 300 cm2 chambers in tropical forests and pastures were within 25% of the full population mean 98% of the time and were within 10% of the full population mean 70% of the time. Comparisons of chamber-based measurements with tower-based measurements of total ecosystem respiration require analysis of the scale of variation within the purported tower footprint. In a forest at Howland, Maine, the differences in soil respiration rates among very poorly drained and well drained soils were large, but they mostly were fortuitously cancelled when evaluated for purported tower footprints of 600-2100 m length. While all of these potential sources of measurement error and sampling biases must be carefully considered, properly designed and deployed chambers provide a reliable means of accurately measuring soil respiration in terrestrial ecosystems.
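Because chamber deployment suppresses the diffusion gradient, the flux is usually estimated from the early, near-linear part of the chamber CO2 rise, or from a curve fit that recovers the initial slope. The sketch below illustrates the standard closed-chamber flux calculation under assumed chamber geometry and ideal-gas conversion; the saturating-exponential fit is one common correction for the diminishing gradient and is not specific to the study summarized above.

```python
import numpy as np
from scipy.optimize import curve_fit

def chamber_flux(time_s, co2_ppm, volume_m3, area_m2, pressure_pa=101325.0, temp_k=298.15):
    """Soil CO2 efflux (micromol m-2 s-1) from a closed-chamber concentration time series."""
    R = 8.314                                        # J mol-1 K-1
    air_molar_density = pressure_pa / (R * temp_k)   # mol m-3

    # Linear estimate: slope of the concentration rise (ppm s-1), known to underestimate.
    slope_linear = np.polyfit(time_s, co2_ppm, 1)[0]

    # Exponential estimate: C(t) = Cs - (Cs - C0) * exp(-k t); the initial slope
    # k * (Cs - C0) approximates the undisturbed flux before the gradient collapses.
    def sat_model(t, c0, cs, k):
        return cs - (cs - c0) * np.exp(-k * t)
    p0 = (co2_ppm[0], co2_ppm[-1] + 50.0, 1e-3)
    (c0, cs, k), _ = curve_fit(sat_model, time_s, co2_ppm, p0=p0, maxfev=10000)
    slope_exp = k * (cs - c0)

    to_flux = volume_m3 / area_m2 * air_molar_density  # converts ppm s-1 to micromol m-2 s-1
    return slope_linear * to_flux, slope_exp * to_flux

# Hypothetical 5-minute deployment with a synthetic saturating CO2 rise:
t = np.linspace(0, 300, 61)
c = 400 + 120 * (1 - np.exp(-0.004 * t))
print(chamber_flux(t, c, volume_m3=0.01, area_m2=0.03))  # linear estimate < exponential estimate
```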
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work that are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviews with the personnel and study of the plant's procedures, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, the PTW was treated as the overall goal, and the detailed tasks required to achieve it were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied to estimate the human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and identify the measures required to reduce the error probabilities in the PTW system. Some suggestions for reducing the likelihood of errors, especially by modifying the performance shaping factors and the dependencies among tasks, are provided. PMID:27014485
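In SPAR-H, a nominal human error probability (roughly 1E-2 for diagnosis tasks and 1E-3 for action tasks) is multiplied by performance shaping factor (PSF) multipliers, with a correction applied when three or more PSFs are rated negatively so that the result stays below 1. The snippet below is a generic illustration of that worksheet logic; the PSF values in the example are placeholders, not those elicited in the plant study.

```python
def spar_h_hep(task_type, psf_multipliers):
    """Human error probability following the SPAR-H worksheet logic (sketch).

    task_type: "diagnosis" (nominal HEP 1e-2) or "action" (nominal HEP 1e-3).
    psf_multipliers: multipliers for the eight PSFs (available time, stress,
    complexity, experience/training, procedures, ergonomics/HMI,
    fitness for duty, work processes).
    """
    nominal = {"diagnosis": 1e-2, "action": 1e-3}[task_type]
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1.0)
    if negative >= 3:  # adjustment keeps the probability bounded below 1
        return nominal * composite / (nominal * (composite - 1.0) + 1.0)
    return min(nominal * composite, 1.0)

# Hypothetical action step under high stress (x2) and poor ergonomics (x10):
print(spar_h_hep("action", [1, 2, 1, 1, 1, 10, 1, 1]))
```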
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
NASA Astrophysics Data System (ADS)
Saad, Katherine M.; Wunch, Debra; Deutscher, Nicholas M.; Griffith, David W. T.; Hase, Frank; De Mazière, Martine; Notholt, Justus; Pollard, David F.; Roehl, Coleen M.; Schneider, Matthias; Sussmann, Ralf; Warneke, Thorsten; Wennberg, Paul O.
2016-11-01
Global and regional methane budgets are markedly uncertain. Conventionally, estimates of methane sources are derived by bridging emissions inventories with atmospheric observations employing chemical transport models. The accuracy of this approach requires correctly simulating advection and chemical loss such that modeled methane concentrations scale with surface fluxes. When total column measurements are assimilated into this framework, modeled stratospheric methane introduces additional potential for error. To evaluate the impact of such errors, we compare Total Carbon Column Observing Network (TCCON) and GEOS-Chem total and tropospheric column-averaged dry-air mole fractions of methane. We find that the model's stratospheric contribution to the total column is insensitive to perturbations to the seasonality or distribution of tropospheric emissions or loss. In the Northern Hemisphere, we identify disagreement between the measured and modeled stratospheric contribution, which increases as the tropopause altitude decreases, and a temporal phase lag in the model's tropospheric seasonality driven by transport errors. Within the context of GEOS-Chem, we find that the errors in tropospheric advection partially compensate for the stratospheric methane errors, masking inconsistencies between the modeled and measured tropospheric methane. These seasonally varying errors alias into source attributions resulting from model inversions. In particular, we suggest that the tropospheric phase lag error leads to large misdiagnoses of wetland emissions in the high latitudes of the Northern Hemisphere.
Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey
2015-01-01
Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot of terminal restriction fragments (TRF) method of TL measurement, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, with age, and between mother and offspring is examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single-copy gene control amplicons and between qPCR and TRF measures. We find that, unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error as compared to the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well-position effects. Since additional information can be gleaned from well-position corrections, rerunning analyses of previous results with well-position correction could serve as an independent test of the validity of these results. © 2015 Wiley Periodicals, Inc.
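Well-position effects of the kind described above can be removed by modeling the measured T/S ratio as a function of plate row and column and working with the residuals. The sketch below shows one simple way to do this with a fixed-effects regression; the column names, the ordinary-least-squares model, and the percent-scale error summary are assumptions for illustration, not the authors' exact procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

def correct_well_position(df):
    """Return T/S ratios adjusted for systematic plate row/column effects.

    df is a pandas DataFrame with columns 'ts_ratio', 'row', 'col'
    (row letter and column number of the thermocycler well), both treated
    as categorical factors.
    """
    model = smf.ols("ts_ratio ~ C(row) + C(col)", data=df).fit()
    out = df.copy()
    # Residual + grand mean keeps the corrected values on the original T/S scale.
    out["ts_ratio_corrected"] = model.resid + df["ts_ratio"].mean()
    return out

def standard_error_in_percent(replicates):
    """Percent-scale measurement error from replicate runs of the same samples.

    replicates: DataFrame with one row per sample and one column per replicate run.
    """
    mean = replicates.mean(axis=1)
    sd = replicates.std(axis=1, ddof=1)
    return (100.0 * sd / mean).mean()

# Hypothetical six-well example:
df = pd.DataFrame({"ts_ratio": [1.02, 0.95, 1.10, 0.99, 1.05, 0.91],
                   "row": list("AABBCC"), "col": [1, 2, 1, 2, 1, 2]})
print(correct_well_position(df)["ts_ratio_corrected"])
```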
Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter
2012-01-01
The Sentinel-1 will carry onboard a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days, at a spatial resolution as fine as 5 × 20 m. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; these are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate, and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson's correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed a very high level of agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted to within 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded to within 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and the derived ASAR GM error estimates. The ASAR GM and Sentinel-1 have the same basic physical measurement characteristics, and therefore a very similar retrieval error estimation method can be applied. Because of the expected improvements in radiometric resolution of the Sentinel-1 backscatter measurements, soil moisture estimation errors can be expected to be an order of magnitude less than those for ASAR GM. This opens the possibility for operationally available medium resolution soil moisture estimates with very well-specified errors that can be assimilated into hydrological or crop yield models, with potentially large benefits for land-atmosphere fluxes, crop growth, and water balance monitoring and modelling. PMID:23483015
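The comparison described above rests on the fact that, if the satellite and model errors are independent, the expected mismatch between the two datasets can be predicted from the individual error estimates and the signal variance. A small sketch of those relations is given below under the standard assumptions of independent, zero-mean errors; it is a generic error-propagation illustration rather than the authors' exact formulation, and the numeric values are placeholders.

```python
import numpy as np

def predicted_rmse(err_sat, err_model):
    """Expected RMSD between two datasets whose independent error std devs are known."""
    return np.sqrt(err_sat**2 + err_model**2)

def predicted_correlation(signal_var, err_sat, err_model):
    """Expected Pearson R between two noisy observations of the same underlying signal."""
    return signal_var / np.sqrt((signal_var + err_sat**2) * (signal_var + err_model**2))

# Hypothetical numbers in units of % saturation:
print(predicted_rmse(err_sat=4.0, err_model=6.0))                          # ~7.2
print(predicted_correlation(signal_var=64.0, err_sat=4.0, err_model=6.0))  # ~0.72
```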
Acetaminophen attenuates error evaluation in cortex
Kam, Julia W.Y.; Heine, Steven J.; Inzlicht, Michael; Handy, Todd C.
2016-01-01
Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double blind placebo controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance and to both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants’ ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and the magnitude of an individual’s Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing work suggesting that the Pe is more strongly associated with conscious awareness of errors relative to the ERN. PMID:26892161
ERIC Educational Resources Information Center
Demir, Ergul
2018-01-01
Purpose: The answer-copying tendency has the potential to detect suspicious answer patterns for prior distributions of statistical detection techniques. The aim of this study is to develop a valid and reliable measurement tool as a scale in order to observe the tendency of university students' copying of answers. Also, it is aimed to provide…
The potential for geostationary remote sensing of NO2 to improve weather prediction
NASA Astrophysics Data System (ADS)
Liu, X.; Mizzi, A. P.; Anderson, J. L.; Fung, I. Y.; Cohen, R. C.
2016-12-01
Observations of surface winds remain sparse, making it challenging to simulate and predict the weather under the light-wind conditions that matter most for poor air quality. Direct measurements of short-lived chemicals from space might be a solution to this challenge. Here we investigate the assimilation of NO2 columns, as will be observed from geostationary orbit, to improve predictions and retrospective analyses of surface wind fields. Specifically, synthetic NO2 observations are sampled from a "nature run" (NR) regarded as the true atmosphere. These NO2 observations are then assimilated using ensemble adjustment Kalman filter (EAKF) methods into a "control run" (CR) that differs from the NR in the wind field. Wind errors are generated by introducing (1) errors in the initial conditions, (2) a model error from using two different formulations of the planetary boundary layer, and (3) a combination of both effects. The assimilation reduces wind errors by up to 50%, indicating that the prospects for future geostationary atmospheric composition measurements to improve weather forecasting are substantial. We also examine the sensitivity to the data assimilation window length. We find that, due to the temporal heterogeneity of wind errors, the success of this application favors chemical observations of high frequency, such as those from a geostationary platform. We also show the potential to improve the soil moisture field by assimilating NO2 columns.
#2 - An Empirical Assessment of Exposure Measurement Error ...
Background:
• Differing degrees of exposure error across pollutants
• Previous focus on quantifying and accounting for exposure error in single-pollutant models
• Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models
The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
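Classical exposure measurement error attenuates health-effect estimates toward the null, and with two correlated pollutants in a bi-pollutant model the bias can also transfer between coefficients. The short simulation below illustrates both effects under assumed error variances; it is a textbook-style demonstration, not a reproduction of the assessment summarized above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
true_beta1, true_beta2 = 1.0, 0.0

# Two correlated "true" exposures; only pollutant 1 affects the outcome.
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)
y = true_beta1 * x1 + true_beta2 * x2 + rng.normal(size=n)

# Measured exposures with classical (independent, additive) error.
z1 = x1 + rng.normal(scale=1.0, size=n)        # poorly measured pollutant 1
z2 = x2 + rng.normal(scale=0.3, size=n)        # better-measured pollutant 2

def ols(y, cols):
    """Slope estimates (intercept dropped) from ordinary least squares."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print("single-pollutant:", ols(y, [z1]))        # beta1 attenuated toward 0
print("bi-pollutant:    ", ols(y, [z1, z2]))    # beta1 attenuated further; beta2 picks up a spurious effect
```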
Surprise beyond prediction error
Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst
2014-01-01
Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400
Absolute vs. relative error characterization of electromagnetic tracking accuracy
NASA Astrophysics Data System (ADS)
Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet
2010-02-01
Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the operator, the spatial distribution of localization errors are clustered and dynamically displayed as separate confidence zones within the operating region of the EM tracker space.
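The phantom-based error characterization described above amounts to rigidly registering the measured sensor positions to the known ("clean") phantom geometry and reporting the residuals. A minimal sketch of that computation is shown below using standard SVD-based (Kabsch) rigid registration; the function names, the four-point phantom, and the error summary are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def rigid_register(measured, reference):
    """Best-fit rotation R and translation t mapping measured points onto reference (Kabsch)."""
    mc, rc = measured.mean(axis=0), reference.mean(axis=0)
    H = (measured - mc).T @ (reference - rc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = rc - R @ mc
    return R, t

def localization_errors(measured, reference):
    """Per-sensor displacement error (same units as the input) after rigid registration."""
    R, t = rigid_register(measured, reference)
    aligned = measured @ R.T + t
    return np.linalg.norm(aligned - reference, axis=1)

# Hypothetical phantom with four known sensor locations (mm) and distorted measurements:
ref = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
meas = ref + np.random.default_rng(1).normal(scale=0.4, size=ref.shape)
print(localization_errors(meas, ref))
```

Thresholding these residuals, as the abstract describes, then yields confidence zones within the tracker's operating volume.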
Online beam energy measurement of Beijing electron positron collider II linear accelerator
NASA Astrophysics Data System (ADS)
Wang, S.; Iqbal, M.; Liu, R.; Chi, Y.
2016-02-01
This paper gives a full description of the online beam energy measurement of the upgraded Beijing Electron Positron Collider (BEPCII) linear accelerator (linac). It presents the calculation formula, gives a detailed error analysis, discusses the practical realization, and provides some verification. The method measures the beam energy by acquiring the horizontal beam position with three beam position monitors (BPMs), which eliminates the effect of orbit fluctuation and performs much better than using a single BPM. The error analysis indicates that this online measurement has further potential uses, such as serving as part of a beam energy feedback system. The reliability of the method is also discussed and demonstrated in this paper.
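The three-BPM idea can be pictured as follows: two BPMs in a low-dispersion region define the incoming trajectory, and the deviation of the beam at a third, dispersive BPM from the position predicted by that trajectory is attributed to an energy offset through the dispersion function. The sketch below encodes this picture; the drift length, transfer-matrix elements, and dispersion value are placeholders, and the actual BEPCII formula may differ in detail.

```python
def relative_energy_offset(x1, x2, x3, l12, m11, m12, d3):
    """Relative energy deviation dE/E from horizontal readings of three BPMs (sketch).

    x1, x2 : readings (m) at two upstream BPMs in a low-dispersion region,
             separated by a drift of length l12 (m).
    x3     : reading (m) at a downstream BPM where the horizontal dispersion is d3 (m).
    m11,m12: horizontal transfer-matrix elements from BPM2 to BPM3 (assumed known).
    """
    xp2 = (x2 - x1) / l12                 # incoming angle reconstructed from BPM1/BPM2
    x3_betatron = m11 * x2 + m12 * xp2    # position BPM3 would see for an on-energy beam
    return (x3 - x3_betatron) / d3        # remaining offset attributed to the energy error

# Hypothetical example with placeholder optics values:
print(relative_energy_offset(x1=1.0e-3, x2=1.2e-3, x3=4.0e-3,
                             l12=2.0, m11=1.0, m12=3.0, d3=0.5))
```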
Online beam energy measurement of Beijing electron positron collider II linear accelerator.
Wang, S; Iqbal, M; Liu, R; Chi, Y
2016-02-01
This paper gives a full description of the online beam energy measurement of the upgraded Beijing Electron Positron Collider (BEPCII) linear accelerator (linac). It presents the calculation formula, gives a detailed error analysis, discusses the practical realization, and provides some verification. The method measures the beam energy by acquiring the horizontal beam position with three beam position monitors (BPMs), which eliminates the effect of orbit fluctuation and performs much better than using a single BPM. The error analysis indicates that this online measurement has further potential uses, such as serving as part of a beam energy feedback system. The reliability of the method is also discussed and demonstrated in this paper.
NASA Technical Reports Server (NTRS)
Jekeli, C.
1980-01-01
Errors in the outer zone contribution to oceanic undulation differences computed from a finite set of potential coefficients based on satellite measurements of gravity anomalies and gravity disturbances are analyzed. Equations are derived for the truncation errors resulting from the lack of high-degree coefficients and the commission errors arising from errors in the available lower-degree coefficients, and it is assumed that the inner zone (spherical cap) is sufficiently covered by surface gravity measurements in conjunction with altimetry or by gravity anomaly data. Numerical computations of error for various observational conditions reveal undulation difference errors ranging from 13 to 15 cm and from 6 to 36 cm in the cases of gravity anomaly and gravity disturbance data, respectively for a cap radius of 10 deg and mean anomalies accurate to 10 mgal, with a reduction of errors in both cases to less than 10 cm as mean anomaly accuracy is increased to 1 mgal. In the absence of a spherical cap, both cases yield error estimates of 68 cm for an accuracy of 1 mgal and between 93 and 160 cm for the lesser accuracy, which can be reduced to about 110 cm by the introduction of a perfect 30-deg reference field.
NASA Technical Reports Server (NTRS)
Crowell, Sean M. R.; Kawa, S. Randolph; Browell, Edward V.; Hammerling, Dorit M.; Moore, Berrien; Schaefer, Kevin; Doney, Scott C.
2018-01-01
Space-borne observations of CO2 are vital to gaining understanding of the carbon cycle in regions of the world that are difficult to measure directly, such as the tropical terrestrial biosphere, the high northern and southern latitudes, and in developing nations such as China. Measurements from passive instruments such as GOSAT (Greenhouse Gases Observing Satellite) and OCO-2 (Orbiting Carbon Observatory 2), however, are constrained by solar zenith angle limitations as well as sensitivity to the presence of clouds and aerosols. Active measurements such as those in development for the Active Sensing of CO2 Emissions over Nights, Days and Seasons (ASCENDS) mission show strong potential for making measurements in the high-latitude winter and in cloudy regions. In this work we examine the enhanced flux constraint provided by the improved coverage from an active measurement such as ASCENDS. The simulation studies presented here show that with sufficient precision, ASCENDS will detect permafrost thaw and fossil fuel emissions shifts at annual and seasonal time scales, even in the presence of transport errors, representativeness errors, and biogenic flux errors. While OCO-2 can detect some of these perturbations at the annual scale, the seasonal sampling provided by ASCENDS provides the stronger constraint. Plain Language Summary: Active and passive remote sensors show the potential to provide unprecedented information on the carbon cycle. With the all-season sampling, active remote sensors are more capable of constraining high-latitude emissions. The reduced sensitivity to cloud and aerosol also makes active sensors more capable of providing information in cloudy and polluted scenes with sufficient accuracy. These experiments account for errors that are fundamental to the top-down approach for constraining emissions, and even including these sources of error, we show that satellite remote sensors are critical for understanding the carbon cycle.
Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias
Chambers, David A.; Glasgow, Russell E.
2014-01-01
A number of commentaries have suggested that large studies are more reliable than smaller studies and there is a growing interest in the analysis of “big data” that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. Clin Trans Sci 2014; Volume #: 1–5 PMID:25043853
Optics measurement algorithms and error analysis for the proton energy frontier
NASA Astrophysics Data System (ADS)
Langner, A.; Tomás, R.
2015-03-01
Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision achieved in 2012 (4 TeV) was insufficient to understand beam size measurements and to determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. It makes it possible to combine more beam position monitor measurements when deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run were reanalyzed; owing to the improved algorithms, the derived optical parameters reach a significantly higher precision, with average error bars reduced by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.
NASA Technical Reports Server (NTRS)
Starks, Patrick J.; Norman, John M.; Blad, Blaine L.; Walter-Shea, Elizabeth A.; Walthall, Charles L.
1991-01-01
An equation for estimating albedo from bidirectional reflectance data is proposed. The estimates of albedo are found to be greater than values obtained with simultaneous pyranometer measurements. Particular attention is given to potential sources of systematic errors including extrapolation of bidirectional reflectance data out to a view zenith angle of 90 deg, the use of inappropriate weighting coefficients in the numerator of the albedo equation, surface shadowing caused by the A-frame instrumentation used to measure the incoming and outgoing radiation fluxes, errors in estimates of the denominator of the proposed albedo equation, and a 'hot spot' contribution in bidirectional data measured by a modular multiband radiometer.
Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria
2017-08-01
Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.
Frequency-domain optical absorption spectroscopy of finite tissue volumes using diffusion theory.
Pogue, B W; Patterson, M S
1994-07-01
The goal of frequency-domain optical absorption spectroscopy is the non-invasive determination of the absorption coefficient of a specific tissue volume. Since this allows the concentration of endogenous and exogenous chromophores to be calculated, there is considerable potential for clinical application. The technique relies on the measurement of the phase and modulation of light, which is diffusely reflected or transmitted by the tissue when it is illuminated by an intensity-modulated source. A model of light propagation must then be used to deduce the absorption coefficient. For simplicity, it is usual to assume the tissue is either infinite in extent (for transmission measurements) or semi-infinite (for reflectance measurements). The goal of this paper is to examine the errors introduced by these assumptions when measurements are actually performed on finite volumes. Diffusion-theory calculations and experimental measurements were performed for slabs, cylinders and spheres with optical properties characteristic of soft tissues in the near infrared. The error in absorption coefficient is presented as a function of object size as a guideline to when the simple models may be used. For transmission measurements, the error is almost independent of the true absorption coefficient, which allows absolute changes in absorption to be measured accurately. The implications of these errors in absorption coefficient for two clinical problems--quantitation of an exogenous photosensitizer and measurement of haemoglobin oxygenation--are presented and discussed.
Rabinak, Christine A; Holman, Alexis; Angstadt, Mike; Kennedy, Amy E; Hajcak, Greg; Phan, Kinh Luan
2013-07-30
Post-traumatic stress disorder (PTSD) is characterized by sustained anxiety, hypervigilance for potential threat, and hyperarousal. These symptoms may enhance self-perception of one's actions, particularly the detection of errors, which may threaten safety. The error-related negativity (ERN) is an electrocortical response to the commission of errors, and previous studies have shown that other anxiety disorders associated with exaggerated anxiety and enhanced action monitoring exhibit an enhanced ERN. However, little is known about how traumatic experience and PTSD would affect the ERN. To address this gap, we measured the ERN in returning Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) veterans with combat-related PTSD (PTSD group), combat-exposed OEF/OIF veterans without PTSD [combat-exposed control (CEC) group], and non-traumatized healthy participants [healthy control (HC) group]. Event-related potential and behavioral measures were recorded while 16 PTSD patients, 18 CEC, and 16 HC participants completed an arrow version of the flanker task. No difference in the magnitude of the ERN was observed between the PTSD and HC groups; however, in comparison with the PTSD and HC groups, the CEC group displayed a blunted ERN response. These findings suggest that (1) combat trauma itself does not affect the ERN response; (2) PTSD is not associated with an abnormal ERN response; and (3) an attenuated ERN in those previously exposed to combat trauma but who have not developed PTSD may reflect resilience to the disorder, less motivation to do the task, or a decrease in the significance or meaningfulness of 'errors,' which could be related to combat experience. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Coil motion effects in watt balances: a theoretical check
NASA Astrophysics Data System (ADS)
Li, Shisong; Schlamminger, Stephan; Haddad, Darine; Seifert, Frank; Chao, Leon; Pratt, Jon R.
2016-04-01
A watt balance is a precision apparatus for the measurement of the Planck constant that has been proposed as a primary method for realizing the unit of mass in a revised International System of Units. In contrast to an ampere balance, which was historically used to realize the unit of current in terms of the kilogram, the watt balance relates electrical and mechanical units through a virtual power measurement and has far greater precision. However, because the virtual power measurement requires the execution of a prescribed motion of a coil in a fixed magnetic field, systematic errors introduced by horizontal and rotational deviations of the coil from its prescribed path will compromise the accuracy. We model these potential errors using an analysis that accounts for the fringing field in the magnet, creating a framework for assessing the impact of this class of errors on the uncertainty of watt balance results.
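The virtual power comparison referred to above rests on combining the two measurement modes so that the geometric factor Bl cancels. In idealized form (perfectly vertical coil motion with no horizontal or rotational excursions), the balance condition reads as follows; it is exactly this relation that the coil-motion errors analyzed in the paper perturb.

```latex
\begin{aligned}
mg &= B\,l\,I  &&\text{(weighing mode: electromagnetic force balances the weight)}\\
U  &= B\,l\,v  &&\text{(moving mode: voltage induced in the coil moving at velocity } v\text{)}\\
\Rightarrow\quad m\,g\,v &= U\,I &&\text{(virtual power equality, independent of } Bl\text{)}
\end{aligned}
```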
Forster, Sarah E; Zirnheld, Patrick; Shekhar, Anantha; Steinhauer, Stuart R; O'Donnell, Brian F; Hetrick, William P
2017-09-01
Signals carried by the mesencephalic dopamine system and conveyed to anterior cingulate cortex are critically implicated in probabilistic reward learning and performance monitoring. A common evaluative mechanism purportedly subserves both functions, giving rise to homologous medial frontal negativities in feedback- and response-locked event-related brain potentials (the feedback-related negativity (FRN) and the error-related negativity (ERN), respectively), reflecting dopamine-dependent prediction error signals to unexpectedly negative events. Consistent with this model, the dopamine receptor antagonist, haloperidol, attenuates the ERN, but effects on FRN have not yet been evaluated. ERN and FRN were recorded during a temporal interval learning task (TILT) following randomized, double-blind administration of haloperidol (3 mg; n = 18), diphenhydramine (an active control for haloperidol; 25 mg; n = 20), or placebo (n = 21) to healthy controls. Centroparietal positivities, the Pe and feedback-locked P300, were also measured and correlations between ERP measures and behavioral indices of learning, overall accuracy, and post-error compensatory behavior were evaluated. We hypothesized that haloperidol would reduce ERN and FRN, but that ERN would uniquely track automatic, error-related performance adjustments, while FRN would be associated with learning and overall accuracy. As predicted, ERN was reduced by haloperidol and in those exhibiting less adaptive post-error performance; however, these effects were limited to ERNs following fast timing errors. In contrast, the FRN was not affected by drug condition, although increased FRN amplitude was associated with improved accuracy. Significant drug effects on centroparietal positivities were also absent. Our results support a functional and neurobiological dissociation between the ERN and FRN.
Reichert, Christof; Kästner, Sabine B R; Hopster, Klaus; Rohn, Karl; Rötting, Anna K
2014-11-01
To evaluate the use of a micro-lightguide tissue spectrophotometer for measurement of tissue oxygenation and blood flow in the small and large intestines of horses under anesthesia. 13 adult horses without gastrointestinal disease. Horses were anesthetized and placed in dorsal recumbency. Ventral midline laparotomy was performed. Intestinal segments were exteriorized to obtain measurements. Spectrophotometric measurements of tissue oxygenation and regional blood flow of the jejunum and pelvic flexure were obtained under various conditions that were considered to have a potential effect on measurement accuracy. In addition, arterial oxygen saturation at the measuring sites was determined by use of pulse oximetry. 12,791 single measurements of oxygen saturation, relative amount of hemoglobin, and blood flow were obtained. Errors occurred in 381 of 12,791 (2.98%) measurements. Most measurement errors occurred when surgical lights were directed at the measuring site; covering the probe with the surgeon's hand did not eliminate this error source. No measurement errors were observed when the probe was positioned on the intestinal wall with room light, at the mesenteric side, or between the mesenteric and antimesenteric side. Values for blood flow had higher variability, and this was most likely caused by motion artifacts of the intestines. The micro-lightguide spectrophotometry system was easy to use on the small and large intestines of horses and provided rapid evaluation of the microcirculation. Results indicated that measurements should be performed with room light only and intestinal motion should be minimized.
A Framework for Reconsidering the Lake Wobegon Effect
ERIC Educational Resources Information Center
Haley, M. Ryan; Johnson, Marianne F.; McGee, M. Kevin
2010-01-01
The "Lake Wobegon Effect" (LWE) describes the potential measurement-error bias introduced into survey-based analyses of education issues. Although this effect potentially applies to any student-report variable, the systematic overreporting of academic achievements such as grade point average is often of preeminent concern. This concern can be…
Qi, Yulin; Geib, Timon; Schorr, Pascal; Meier, Florian; Volmer, Dietrich A
2015-01-15
Isobaric interferences in human serum can potentially influence the measured concentration levels of 25-hydroxyvitamin D [25(OH)D], when low resolving power liquid chromatography/tandem mass spectrometry (LC/MS/MS) instruments and non-specific MS/MS product ions are employed for analysis. In this study, we provide a detailed characterization of these interferences and a technical solution to reduce the associated systematic errors. Detailed electrospray ionization Fourier transform ion cyclotron resonance (FTICR) high-resolution mass spectrometry (HRMS) experiments were used to characterize co-extracted isobaric components of 25(OH)D from human serum. Differential ion mobility spectrometry (DMS), as a gas-phase ion filter, was implemented on a triple quadrupole mass spectrometer for separation of the isobars. HRMS revealed the presence of multiple isobaric compounds in extracts of human serum for different sample preparation methods. Several of these isobars had the potential to increase the peak areas measured for 25(OH)D on low-resolution MS instruments. A major isobaric component was identified as pentaerythritol oleate, a technical lubricant, which was probably an artifact from the analytical instrumentation. DMS was able to remove several of these isobars prior to MS/MS, when implemented on the low-resolution triple quadrupole mass spectrometer. It was shown in this proof-of-concept study that DMS-MS has the potential to significantly decrease systematic errors, and thus improve accuracy of vitamin D measurements using LC/MS/MS. Copyright © 2014 John Wiley & Sons, Ltd.
Heterodyne range imaging as an alternative to photogrammetry
NASA Astrophysics Data System (ADS)
Dorrington, Adrian; Cree, Michael; Carnegie, Dale; Payne, Andrew; Conroy, Richard
2007-01-01
Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.
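For context, the distance in such amplitude-modulated, full-field range imaging follows from the measured phase shift of the modulation envelope. A short sketch of the standard conversion is given below, with the ambiguity interval noted; the modulation frequency used is a placeholder, not the value used in the experiment described.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_rad, f_mod_hz):
    """Distance (m) from the phase shift of an amplitude-modulated ranging signal.

    The light travels to the target and back, so distance = c * phase / (4 * pi * f_mod);
    measurements are ambiguous modulo the unambiguous range c / (2 * f_mod).
    """
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

f_mod = 30e6                                    # placeholder modulation frequency (30 MHz)
print(phase_to_distance(math.pi / 2, f_mod))    # quarter-cycle phase shift -> ~1.25 m
print("unambiguous range:", C / (2 * f_mod), "m")
```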
NASA Astrophysics Data System (ADS)
Breitkopf, Sven; Lilienfein, Nikolai; Achtnich, Timon; Zwyssig, Christof; Tünnermann, Andreas; Pupeza, Ioachim; Limpert, Jens
2018-06-01
Compact, ultra-high-speed self-bearing permanent-magnet motors enable a wide scope of applications, including an increasing number of optical ones. For implementation in an optical setup, the rotors have to satisfy high demands on their velocity and pointing errors. Only a restricted number of measurements of these parameters exist, and only at relatively low velocities. This manuscript presents measurements of the velocity and pointing errors at rotation frequencies up to 5 kHz. The acquired data allow us to identify the rotor drive as the main source of velocity variations, with fast fluctuations of up to 3.4 ns (RMS) and slow drifts of 23 ns (RMS) over ~120 revolutions at 5 kHz in vacuum. At the same rotation frequency, the pointing fluctuated by 12 μrad (RMS) and 33 μrad (peak-to-peak) over ~10 000 round trips. To the best of our knowledge, this constitutes the first measurement of velocity and pointing errors at multi-kHz rotation frequencies and will allow potential adopters to evaluate the feasibility of such rotor drives for their applications.
The relationship between hand hygiene and health care-associated infection: it’s complicated
McLaws, Mary-Louise
2015-01-01
The reasoning that improved hand hygiene compliance contributes to the prevention of health care-associated infections is widely accepted. It is also accepted that high hand hygiene alone cannot impact formidable risk factors, such as older age, immunosuppression, admission to the intensive care unit, longer length of stay, and indwelling devices. When hand hygiene interventions are concurrently undertaken with other routine or special preventive strategies, there is a potential for these concurrent strategies to confound the effect of the hand hygiene program. The result may be an overestimation of the hand hygiene intervention unless the design of the intervention or analysis controls the effect of the potential confounders. Other epidemiologic principles that may also impact the result of a hand hygiene program include failure to consider measurement error of the content of the hand hygiene program and the measurement error of compliance. Some epidemiological errors in hand hygiene programs aimed at reducing health care-associated infections are inherent and not easily controlled. Nevertheless, the inadvertent omission by authors to report these common epidemiological errors, including concurrent infection prevention strategies, suggests to readers that the effect of hand hygiene is greater than the sum of all infection prevention strategies. Worse still, this omission does not assist evidence-based practice. PMID:25678805
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gladstone, D. J.; Li, S.; Jarvis, L. A.
2011-07-15
Purpose: The authors hereby notify the Radiation Oncology community of a potentially lethal error due to improper implementation of linear units of measure in a treatment planning system. The authors report an incident in which a patient was nearly mistreated during a stereotactic radiotherapy procedure due to inappropriate reporting of stereotactic coordinates by the radiation therapy treatment planning system in units of centimeter rather than in millimeter. The authors suggest a method to detect such errors during treatment planning so they are caught and corrected prior to the patient positioning for treatment on the treatment machine. Methods: Using pretreatment imaging, the authors found that stereotactic coordinates are reported with improper linear units by a treatment planning system. The authors have implemented a redundant, independent method of stereotactic coordinate calculation. Results: Implementation of a double check of stereotactic coordinates via redundant, independent calculation is simple and accurate. Use of this technique will avoid any future error in stereotactic treatment coordinates due to improper linear units, transcription, or other similar errors. Conclusions: The authors recommend an independent double check of stereotactic treatment coordinates during the treatment planning process in order to avoid potential mistreatment of patients.
Measurement error: Implications for diagnosis and discrepancy models of developmental dyslexia.
Cotton, Sue M; Crewther, David P; Crewther, Sheila G
2005-08-01
The diagnosis of developmental dyslexia (DD) is reliant on a discrepancy between intellectual functioning and reading achievement. Discrepancy-based formulae have frequently been employed to establish the significance of the difference between 'intelligence' and 'actual' reading achievement. These formulae, however, often fail to take into consideration test reliability and the error associated with a single test score. This paper provides an illustration of the potential effects that test reliability and measurement error can have on the diagnosis of dyslexia, with particular reference to discrepancy models. The roles of reliability and standard error of measurement (SEM) in classical test theory are also briefly reviewed. This is followed by illustrations of how SEM and test reliability can aid with the interpretation of a simple discrepancy-based formula of DD. It is proposed that a lack of consideration of test theory in the use of discrepancy-based models of DD can lead to misdiagnosis (both false positives and false negatives). Further, misdiagnosis in research samples affects reproducibility and generalizability of findings. This, in turn, may explain current inconsistencies in research on the perceptual, sensory, and motor correlates of dyslexia.
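The point about test theory can be made concrete: the standard error of measurement follows from the test's standard deviation and reliability, and the reliability of a simple difference (discrepancy) score is typically lower than that of either test alone. The snippet below illustrates these classical-test-theory quantities; the numeric values are hypothetical and the equal-variance formulas are a simplifying assumption.

```python
import math

def sem(sd, reliability):
    """Standard error of measurement for a single test score."""
    return sd * math.sqrt(1.0 - reliability)

def difference_score_reliability(r_xx, r_yy, r_xy):
    """Reliability of a simple difference score X - Y (equal score variances assumed)."""
    return (0.5 * (r_xx + r_yy) - r_xy) / (1.0 - r_xy)

def se_of_difference(sd, r_xx, r_yy):
    """Standard error of the difference between two obtained scores on the same scale."""
    return sd * math.sqrt(2.0 - r_xx - r_yy)

# Hypothetical IQ-scaled tests (SD = 15) with reliabilities 0.95 and 0.90, correlated 0.60:
print(sem(15, 0.90))                                   # ~4.7 points
print(difference_score_reliability(0.95, 0.90, 0.60))  # ~0.81
print(se_of_difference(15, 0.95, 0.90))                # ~5.8 points
```

A discrepancy that looks diagnostic on the raw scores can therefore fall well within the error band of the difference score, which is the misclassification risk the abstract highlights.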
Effects of uncertainty and variability on population declines and IUCN Red List classifications.
Rueda-Cediel, Pamela; Anderson, Kurt E; Regan, Tracey J; Regan, Helen M
2018-01-22
The International Union for Conservation of Nature (IUCN) Red List Categories and Criteria is a quantitative framework for classifying species according to extinction risk. Population models may be used to estimate extinction risk or population declines. Uncertainty and variability arise in threat classifications through measurement and process error in empirical data and uncertainty in the models used to estimate extinction risk and population declines. Furthermore, species traits are known to affect extinction risk. We investigated the effects of measurement and process error, model type, population growth rate, and age at first reproduction on the reliability of risk classifications based on projected population declines on IUCN Red List classifications. We used an age-structured population model to simulate true population trajectories with different growth rates, reproductive ages and levels of variation, and subjected them to measurement error. We evaluated the ability of scalar and matrix models parameterized with these simulated time series to accurately capture the IUCN Red List classification generated with true population declines. Under all levels of measurement error tested and low process error, classifications were reasonably accurate; scalar and matrix models yielded roughly the same rate of misclassifications, but the distribution of errors differed; matrix models led to greater overestimation of extinction risk than underestimations; process error tended to contribute to misclassifications to a greater extent than measurement error; and more misclassifications occurred for fast, rather than slow, life histories. These results indicate that classifications of highly threatened taxa (i.e., taxa with low growth rates) under criterion A are more likely to be reliable than for less threatened taxa when assessed with population models. Greater scrutiny needs to be placed on data used to parameterize population models for species with high growth rates, particularly when available evidence indicates a potential transition to higher risk categories. © 2018 Society for Conservation Biology.
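When a population model yields an estimated decline over three generations (or ten years), criterion A of the Red List maps that decline onto a threat category through fixed thresholds. The sketch below applies the commonly used A2-A4 thresholds (30%, 50%, 80%); it is a simplified illustration that ignores the other criteria and subcriteria an actual assessment would consider.

```python
def criterion_a_category(decline_fraction):
    """Red List category from an estimated population decline (criteria A2-A4 thresholds)."""
    if decline_fraction >= 0.80:
        return "Critically Endangered"
    if decline_fraction >= 0.50:
        return "Endangered"
    if decline_fraction >= 0.30:
        return "Vulnerable"
    return "Least Concern / Near Threatened"

def decline_from_trajectory(n_start, n_end):
    """Proportional decline implied by population sizes at the start and end of the window."""
    return max(0.0, 1.0 - n_end / n_start)

# Hypothetical census of 1000 falling to 430 over three generations:
print(criterion_a_category(decline_from_trajectory(1000, 430)))   # Endangered
```

Because the categories are defined by hard thresholds, even modest measurement or process error in the estimated decline can push a species across a boundary, which is why the misclassification rates above matter.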
Clinical Research Methodology 2: Observational Clinical Research.
Sessler, Daniel I; Imrey, Peter B
2015-10-01
Case-control and cohort studies are invaluable research tools and provide the strongest feasible research designs for addressing some questions. Case-control studies usually involve retrospective data collection. Cohort studies can involve retrospective, ambidirectional, or prospective data collection. Observational studies are subject to errors attributable to selection bias, confounding, measurement bias, and reverse causation-in addition to errors of chance. Confounding can be statistically controlled to the extent that potential factors are known and accurately measured, but, in practice, bias and unknown confounders usually remain additional potential sources of error, often of unknown magnitude and clinical impact. Causality-the most clinically useful relation between exposure and outcome-can rarely be definitively determined from observational studies because intentional, controlled manipulations of exposures are not involved. In this article, we review several types of observational clinical research: case series, comparative case-control and cohort studies, and hybrid designs in which case-control analyses are performed on selected members of cohorts. We also discuss the analytic issues that arise when groups to be compared in an observational study, such as patients receiving different therapies, are not comparable in other respects.
Intra-rater reliability of hallux flexor strength measures using the Nintendo Wii Balance Board.
Quek, June; Treleaven, Julia; Brauer, Sandra G; O'Leary, Shaun; Clark, Ross A
2015-01-01
The purpose of this study was to investigate the intra-rater reliability of a new method, used in combination with the Nintendo Wii Balance Board (NWBB), to measure hallux flexor muscle strength. Thirty healthy individuals (age: 34.9 ± 12.9 years, height: 170.4 ± 10.5 cm, weight: 69.3 ± 15.3 kg, female = 15) participated. Repeated testing was completed within 7 days. Participants performed strength testing in sitting using a wooden platform in combination with the NWBB. This new method was set up to selectively recruit an intrinsic muscle of the foot, specifically the flexor hallucis brevis muscle. Statistical analysis was performed using intraclass correlation coefficients and ordinary least product analysis. To estimate measurement error, the standard error of measurement (SEM), minimal detectable change (MDC) and percentage error were calculated. Results indicate excellent intra-rater reliability (ICC = 0.982, CI = 0.96-0.99) with an absence of systematic bias. SEM, MDC and percentage error values were 0.5, 1.4 and 12%, respectively. This study demonstrates that the new method, in combination with the NWBB, is reliable for measuring hallux flexor strength and has potential to be used in future research and clinical application.
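For readers unfamiliar with how these reliability indices relate, the short sketch below (not the study's code) shows the usual derivation of SEM, MDC95, and percentage error from an ICC, a between-subject SD, and a mean score; the SD and mean are placeholders chosen only so the outputs land near the values reported above.

```python
# Minimal sketch: SEM, MDC95, and percentage error from standard formulas.
# The SD and mean below are placeholder values, not the study data.
import math

icc = 0.982          # test-retest intraclass correlation coefficient
sd_between = 3.7     # between-subject SD of the strength measure (placeholder)
mean_score = 11.7    # mean strength score (placeholder)

sem = sd_between * math.sqrt(1.0 - icc)       # standard error of measurement
mdc95 = 1.96 * math.sqrt(2.0) * sem           # minimal detectable change (95%)
pct_error = 100.0 * mdc95 / mean_score        # MDC expressed relative to the mean

print(f"SEM = {sem:.2f}, MDC95 = {mdc95:.2f}, %error = {pct_error:.0f}%")
```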
Measurement of tokamak error fields using plasma response and its applicability to ITER
Strait, Edward J.; Buttery, Richard J.; Casper, T. A.; ...
2014-04-17
The nonlinear response of a low-beta tokamak plasma to non-axisymmetric fields offers an alternative to direct measurement of the non-axisymmetric part of the vacuum magnetic fields, often termed “error fields”. Possible approaches are discussed for determination of error fields and the required current in non-axisymmetric correction coils, with an emphasis on two relatively new methods: measurement of the torque balance on a saturated magnetic island, and measurement of the braking of plasma rotation in the absence of an island. The former is well suited to ohmically heated discharges, while the latter is more appropriate for discharges with a modest amount of neutral beam heating to drive rotation. Both can potentially provide continuous measurements during a discharge, subject to the limitation of a minimum averaging time. The applicability of these methods to ITER is discussed, and an estimate is made of their uncertainties in light of the specifications of ITER’s diagnostic systems. Furthermore, the use of plasma response-based techniques in normal ITER operational scenarios may allow identification of the error field contributions by individual central solenoid coils, but identification of the individual contributions by the outer poloidal field coils or other sources is less likely to be feasible.
Acetaminophen attenuates error evaluation in cortex.
Randles, Daniel; Kam, Julia W Y; Heine, Steven J; Inzlicht, Michael; Handy, Todd C
2016-06-01
Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double-blind, placebo-controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance, and both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants' ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and the magnitude of an individual's Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing work suggesting that the Pe is more strongly associated with conscious awareness of errors relative to the ERN. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Production and detection of atomic hexadecapole at Earth's magnetic field.
Acosta, V M; Auzinsh, M; Gawlik, W; Grisins, P; Higbie, J M; Jackson Kimball, D F; Krzemien, L; Ledbetter, M P; Pustelny, S; Rochester, S M; Yashchuk, V V; Budker, D
2008-07-21
Optical magnetometers measure magnetic fields with extremely high precision and without cryogenics. However, at geomagnetic fields, important for applications from landmine removal to archaeology, they suffer from nonlinear Zeeman splitting, leading to systematic dependence on sensor orientation. We present experimental results on a method of eliminating this systematic error, using the hexadecapole atomic polarization moment. In particular, we demonstrate selective production of the atomic hexadecapole moment at Earth's magnetic field and verify its immunity to nonlinear Zeeman splitting. This technique promises to eliminate directional errors in all-optical atomic magnetometers, potentially improving their measurement accuracy by several orders of magnitude.
NASA Astrophysics Data System (ADS)
Davis, K. J.; Bakwin, P. S.; Yi, C.; Cook, B. D.; Wang, W.; Denning, A. S.; Teclaw, R.; Isebrands, J. G.
2001-05-01
Long-term, tower-based measurements using the eddy-covariance method have revealed a wealth of detail about the temporal dynamics of net ecosystem-atmosphere exchange (NEE) of CO2. The data also provide a measure of the annual net CO2 exchange. The area represented by these flux measurements, however, is limited, and doubts remain about possible systematic errors that may bias the annual net exchange measurements. Flux and mixing ratio measurements conducted at the WLEF tall tower as part of the Chequamegon Ecosystem-Atmosphere Study (ChEAS) allow for unique assessment of the uncertainties in NEE of CO2. The synergy between flux and mixing ratio observations shows the potential for comparing inverse and eddy-covariance methods of estimating NEE of CO2. Such comparisons may strengthen confidence in both results and begin to bridge the huge gap in spatial scales (at least 3 orders of magnitude) between continental or hemispheric scale inverse studies and kilometer-scale eddy covariance flux measurements. Data from WLEF and Willow Creek, another ChEAS tower, are used to estimate random and systematic errors in NEE of CO2. Random uncertainty in seasonal exchange rates and the annual integrated NEE, including both turbulent sampling errors and variability in environmental conditions, is small. Systematic errors are identified by examining changes in flux as a function of atmospheric stability and wind direction, and by comparing the multiple level flux measurements on the WLEF tower. Nighttime drainage is modest but evident. Systematic horizontal advection occurs during the morning turbulence transition. The potential total systematic error appears to be larger than random uncertainty, but still modest. The total systematic error, however, is difficult to assess. It appears that the WLEF region ecosystems were a small net sink of CO2 in 1997. It is clear that the summer uptake rate at WLEF is much smaller than that at most deciduous forest sites, including the nearby Willow Creek site. The WLEF tower also allows us to study the potential for monitoring continental CO2 mixing ratios from tower sites. Despite concerns about the proximity to ecosystem sources and sinks, it is clear that boundary layer CO2 mixing ratios can be monitored using typical surface layer towers. Seasonal and annual land-ocean mixing ratio gradients are readily detectable, providing the motivation for a flux-tower based mixing ratio observation network that could greatly improve the accuracy of inversion-based estimates of NEE of CO2, and enable inversions to be applied on smaller temporal and spatial scales. Results from the WLEF tower illustrate the degree to which local flux measurements represent interannual, seasonal and synoptic CO2 mixing ratio trends. This coherence between fluxes and mixing ratios serves to "regionalize" the eddy-covariance based local NEE observations.
Can eye-tracking technology improve situational awareness in paramedic clinical education?
Williams, Brett; Quested, Andrew; Cooper, Simon
2013-01-01
Human factors play a significant part in clinical error. Situational awareness (SA) means being aware of one's surroundings, comprehending the present situation, and being able to predict outcomes. It is a key human skill that, when properly applied, is associated with reducing medical error. Eye-tracking technology can be used to provide an objective and qualitative measure of the initial perception component of SA. Feedback from eye-tracking technology can be used to improve the understanding and teaching of SA in clinical contexts and, consequently, has potential for reducing clinician error and the concomitant adverse events.
Metrics for Business Process Models
NASA Astrophysics Data System (ADS)
Mendling, Jan
Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores the extent to which certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.
NASA Technical Reports Server (NTRS)
Antonille, Scott
2004-01-01
For potential use on the SHARPI mission, Eastman Kodak has delivered a 50.8 cm CA f/1.25 ultra-lightweight UV parabolic mirror with a surface figure error requirement of 6 nm RMS. We address the challenges involved in verifying and mapping the surface error of this large lightweight mirror to ±3 nm using a diffractive CGH null lens. Of main concern is removal of large systematic errors resulting from surface deflections of the mirror due to gravity as well as smaller contributions from system misalignment and reference optic errors. We present our efforts to characterize these errors and remove their wavefront error contribution in post-processing as well as minimizing the uncertainty these calculations introduce. Data from Kodak and preliminary measurements from NASA Goddard will be included.
Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J.; Kujawa, Autumn J.; Laptook, Rebecca S.; Torpey, Dana C.; Klein, Daniel N.
2017-01-01
The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission—although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children’s ERN approximately three years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately three years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children’s error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this hypothesis. PMID:25092483
Papadelis, Christos; Chen, Zhe; Kourtidou-Papadeli, Chrysoula; Bamidis, Panagiotis D; Chouvarda, Ioanna; Bekiaris, Evangelos; Maglaveras, Nikos
2007-09-01
The objective of this study is the development and evaluation of efficient neurophysiological signal statistics, which may assess the driver's alertness level and serve as potential indicators of sleepiness in the design of an on-board countermeasure system. Multichannel EEG, EOG, EMG, and ECG were recorded from sleep-deprived subjects exposed to real field driving conditions. A number of severe driving errors occurred during the experiments. The analysis was performed in two main dimensions: the macroscopic analysis that estimates the on-going temporal evolution of physiological measurements during the driving task, and the microscopic event analysis that focuses on the physiological measurements' alterations just before, during, and after the driving errors. Two independent neurophysiologists visually interpreted the measurements. The EEG data were analyzed by using both linear and non-linear analysis tools. We observed the occurrence of brief paroxysmal bursts of alpha activity and an increased synchrony among EEG channels before the driving errors. The alpha relative band ratio (RBR) significantly increased, and the Cross-Approximate Entropy that quantifies the synchrony among channels also significantly decreased before the driving errors. Quantitative EEG analysis revealed significant variations of RBR with driving time in the delta, alpha, beta, and gamma frequency bands. Most of the estimated EEG statistics, such as the Shannon Entropy, Kullback-Leibler Entropy, Coherence, and Cross-Approximate Entropy, were significantly affected by driving time. We also observed a change in eye-blink duration with increased driving time and a significant increase in the number and duration of eye blinks before driving errors. EEG and EOG are promising neurophysiological indicators of driver sleepiness and have the potential of monitoring sleepiness in occupational settings when incorporated in a sleepiness countermeasure device. The occurrence of brief paroxysmal bursts of alpha activity before severe driving errors is described in detail for the first time. Clear evidence is presented that eye-blinking statistics are sensitive to the driver's sleepiness and should be considered in the design of an efficient and driver-friendly sleepiness detection countermeasure device.
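To make the relative band ratio concrete, here is a minimal sketch (not the study's pipeline) that estimates an alpha RBR for one EEG channel with Welch's method; the sampling rate, band edges, and the synthetic signal are assumptions.

```python
# Illustrative sketch: alpha relative band ratio (RBR) for one EEG channel.
# Sampling rate, band edges, and the random stand-in signal are assumptions.
import numpy as np
from scipy.signal import welch

fs = 256.0                            # assumed sampling rate in Hz
eeg = np.random.randn(int(60 * fs))   # stand-in for 60 s of one EEG channel

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

total = band_power(0.5, 45.0)
alpha_rbr = band_power(8.0, 13.0) / total   # alpha power relative to broadband power

print(f"Alpha RBR = {alpha_rbr:.3f}")
```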
Improving Localization Accuracy: Successive Measurements Error Modeling
Abu Ali, Najah; Abu-Elkheir, Mervat
2015-01-01
Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
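The sketch below illustrates the general idea rather than the paper's implementation: Yule-Walker estimates of AR(p) coefficients for a correlated positioning-error series, followed by a one-step-ahead prediction. The synthetic series and model order are assumptions.

```python
# Hedged sketch: fit a p-order autoregressive (Gauss-Markov) model via the
# Yule-Walker equations and predict the next sample from the past p samples.
import numpy as np
from scipy.linalg import toeplitz

def yule_walker_fit(x, p):
    """Return AR(p) coefficients estimated from the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocovariance estimates r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = toeplitz(r[:p])               # p x p autocovariance matrix
    return np.linalg.solve(R, r[1:p + 1])

def predict_next(x, coeffs):
    """One-step-ahead prediction from the last p (mean-removed) samples."""
    p = len(coeffs)
    mu = np.mean(x)
    recent = np.asarray(x[-p:], dtype=float)[::-1] - mu   # most recent sample first
    return mu + float(np.dot(coeffs, recent))

# Synthetic correlated positioning-error series (metres), AR(1)-like by construction.
rng = np.random.default_rng(1)
err = np.zeros(500)
for i in range(1, err.size):
    err[i] = 0.8 * err[i - 1] + rng.normal(0, 0.2)

coeffs = yule_walker_fit(err, p=3)
print("AR(3) coefficients:", coeffs)
print("Predicted next error:", predict_next(err, coeffs))
```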
Virtual tape measure for the operating microscope: system specifications and performance evaluation.
Kim, M Y; Drake, J M; Milgram, P
2000-01-01
The Virtual Tape Measure for the Operating Microscope (VTMOM) was created to assist surgeons in making accurate 3D measurements of anatomical structures seen in the surgical field under the operating microscope. The VTMOM employs augmented reality techniques by combining stereoscopic video images with stereoscopic computer graphics, and functions by relying on an operator's ability to align a 3D graphic pointer, which serves as the end-point of the virtual tape measure, with designated locations on the anatomical structure being measured. The VTMOM was evaluated for its baseline and application performances as well as its application efficacy. Baseline performance was determined by measuring the mean error (bias) and standard deviation of error (imprecision) in measurements of non-anatomical objects. Application performance was determined by comparing the error in measuring the dimensions of aneurysm models with and without the VTMOM. Application efficacy was determined by comparing the error in selecting the appropriate aneurysm clip size with and without the VTMOM. Baseline performance indicated a bias of 0.3 mm and an imprecision of 0.6 mm. Application bias was 3.8 mm and imprecision was 2.8 mm for aneurysm diameter. The VTMOM did not improve aneurysm clip size selection accuracy. The VTMOM is a potentially accurate tool for use under the operating microscope. However, its performance when measuring anatomical objects is highly dependent on complex visual features of the object surfaces. Copyright 2000 Wiley-Liss, Inc.
Detecting drift bias and exposure errors in solar and photosynthetically active radiation data
USDA-ARS's Scientific Manuscript database
All-black thermopile pyranometers are commonly used to measure solar radiation. Ensuring that the sensors are stable and free of drift is critical to accurately measure small variations in global solar irradiance (K'), which is a potential driver of changes in surface temperature. We demonstrate tha...
Pruitt, Sandi L; Jeffe, Donna B; Yan, Yan; Schootman, Mario
2012-04-01
Limited psychometric research has examined the reliability of self-reported measures of neighbourhood conditions, the effect of measurement error on associations between neighbourhood conditions and health, and potential differences in the reliabilities between neighbourhood strata (urban vs rural and low vs high poverty). We assessed overall and stratified reliability of self-reported perceived neighbourhood conditions using five scales (social and physical disorder, social control, social cohesion, fear) and four single items (multidimensional neighbouring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Using random-digit dialling, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2-3 weeks apart. Test-retest (intraclass correlation coefficients (ICC)/weighted κ) and internal consistency reliability (Cronbach's α) were assessed. Differences in reliability across neighbourhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. All measures demonstrated satisfactory internal consistency (α ≥ 0.70) and either moderate (ICC/κ=0.41-0.60) or substantial (ICC/κ=0.61-0.80) test-retest reliability in the full sample. Internal consistency did not differ by neighbourhood strata. Test-retest reliability was significantly lower among rural (vs urban) residents for two scales (social control, physical disorder) and two multidimensional neighbouring items; test-retest reliability was higher for physical disorder and lower for one multidimensional neighbouring item among the high (vs low) poverty strata. After measurement error correction, the magnitudes of associations between neighbourhood conditions and self-rated health were larger, particularly in the rural population. Research is needed to develop and test reliable measures of perceived neighbourhood conditions relevant to the health of rural populations.
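As a hedged illustration of the regression calibration idea mentioned above, the snippet below shows the simplest attenuation correction: dividing a naive association by the reliability of the error-prone measure. The numbers are placeholders, not results from the study.

```python
# Minimal sketch of regression calibration as an attenuation correction.
# A naive regression coefficient relating a neighbourhood score to self-rated
# health is divided by the score's reliability (e.g., its test-retest ICC).
naive_beta = 0.20     # association estimated with the error-prone score (placeholder)
reliability = 0.65    # test-retest ICC of the neighbourhood measure (placeholder)

corrected_beta = naive_beta / reliability
print(f"Measurement-error-corrected association: {corrected_beta:.2f}")
```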
MRI of bone marrow in the distal radius: in vivo precision of effective transverse relaxation times
NASA Technical Reports Server (NTRS)
Grampp, S.; Majumdar, S.; Jergas, M.; Lang, P.; Gies, A.; Genant, H. K.
1995-01-01
The effective transverse relaxation time T2* is influenced by the presence of trabecular bone, and can potentially provide a measure of bone density as well as bone structure. We determined the in vivo precision of T2* in repeated bone marrow measurements. The T2* measurements of the bone marrow of the distal radius were performed twice within 2 weeks in six healthy young volunteers using a modified water-presaturated 3D Gradient-Recalled Acquisition at Steady State (GRASS) sequence with TE 7, 10, 12, 20, and 30; TR 67; flip angle (FA) 90 degrees. An axial volume covering a length of 5.6 cm in the distal radius was measured. Regions of interest (ROIs) were determined manually and consisted of the entire trabecular bone cross-section extending proximally from the radial subchondral endplate. Reproducibility of T2* and area measurements was expressed as the absolute precision error (standard deviation [SD] in ms or mm2) or as the relative precision error (SD/mean x 100, or coefficient of variation [CV] in %) between the two-point measurements. Short-term precision of T2* and area measurements varied depending on section thickness and location of the ROI in the distal radius. Absolute precision errors for T2* times were between 1.3 and 2.9 ms (relative precision errors 3.8-9.5 %) and for area measurements between 20 and 55 mm2 (relative precision errors 5.1-16.4%). This MR technique for quantitative assessment of trabecular bone density showed reasonable reproducibility in vivo and is a promising future tool for the assessment of osteoporosis.
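The short sketch below (invented data, not the study's) shows how short-term precision is typically computed from paired two-point measurements: the RMS standard deviation over duplicate pairs gives the absolute precision error, and dividing by the group mean gives the CV%.

```python
# Illustrative sketch of short-term precision from duplicate (two-point) scans.
import numpy as np

visit1 = np.array([22.1, 18.5, 25.3, 20.0, 23.7, 19.9])   # T2* (ms), first scan (made up)
visit2 = np.array([23.0, 17.9, 26.8, 21.2, 22.5, 20.6])   # T2* (ms), repeat scan (made up)

d = visit1 - visit2
sd_abs = np.sqrt(np.sum(d**2) / (2 * len(d)))     # RMS SD over duplicate pairs
cv_pct = 100.0 * sd_abs / np.mean(np.concatenate([visit1, visit2]))

print(f"Absolute precision error = {sd_abs:.2f} ms, CV = {cv_pct:.1f}%")
```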
Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.
Masalski, Marcin; Kręcicki, Tomasz
2013-04-12
Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on a group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), calibration error (6.19 dB), and additionally, at the frequency of 250 Hz, by frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to a decrease in measurement error can broaden the scope of Web-based pure-tone audiometry application.
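The sketch below (with invented thresholds) shows the two kinds of calculation reported above: test-retest agreement between a reference audiogram and a self-test (mean difference, SD of differences, Pearson correlation), and combining independent error components in quadrature using the component SDs quoted in the abstract.

```python
# Sketch of test-retest agreement statistics and quadrature error combination.
import numpy as np

reference = np.array([20., 25., 35., 45., 30., 15., 40., 55.])   # dB HL, audiometer (made up)
self_test = np.array([22., 23., 38., 44., 33., 18., 37., 57.])   # dB HL, web test (made up)

diff = self_test - reference
mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
r = np.corrcoef(reference, self_test)[0, 1]
print(f"mean diff = {mean_diff:.2f} dB, SD = {sd_diff:.2f} dB, r = {r:.2f}")

# Independent error components combined in quadrature (SDs quoted in the abstract).
procedure, calibration = 6.64, 6.19
total_sd = np.hypot(procedure, calibration)
print(f"combined SD = {total_sd:.2f} dB")
```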
A design approach for improving the performance of single-grid planar retarding potential analyzers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, R. L.; Earle, G. D.
2011-01-15
Planar retarding potential analyzers (RPAs) have a long flight history and have been included on numerous spaceflight missions including Dynamics Explorer, the Defense Meteorological Satellite Program, and the Communications/Navigation Outage Forecast System. RPAs allow for simultaneous measurement of plasma composition, density, temperature, and the component of the velocity vector normal to the aperture plane. Internal conductive grids are used to approximate ideal potential planes within the instrument, but these grids introduce perturbations to the potential map inside the RPA and cause errors in the measurement of the parameters listed above. A numerical technique is presented herein for minimizing these grid errors for a specific mission by varying the depth and spacing of the grid wires. The example mission selected concentrates on plasma dynamics near the sunset terminator in the equatorial region. The international reference ionosphere model is used to discern the average conditions expected for this mission, and a numerical model of the grid-particle interaction is used to choose a grid design that will best fulfill the mission goals.
Results of the first complete static calibration of the RSRA rotor-load-measurement system
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
1984-01-01
The compound Rotor System Research Aircraft (RSRA) is designed to make high-accuracy, simultaneous measurements of all rotor forces and moments in flight. Physical calibration of the rotor force- and moment-measurement system when installed in the aircraft is required to account for known errors and to ensure that measurement-system accuracy is traceable to the National Bureau of Standards. The first static calibration and associated analysis have been completed with good results. Hysteresis was a potential cause of static calibration errors, but was found to be negligible in flight compared to full-scale loads, and analytical methods have been devised to eliminate hysteresis effects on calibration data. Flight tests confirmed that the calibrated rotor-load-measurement system performs as expected in flight and that it can dependably make direct measurements of fuselage vertical drag in hover.
Dual-wavelengths photoacoustic temperature measurement
NASA Astrophysics Data System (ADS)
Liao, Yu; Jian, Xiaohua; Dong, Fenglin; Cui, Yaoyao
2017-02-01
Thermal therapy is an approach to cancer treatment in which local tissue is heated to kill tumor cells; it requires highly sensitive temperature monitoring during therapy. Current clinical methods for temperature measurement, such as fMRI, near infrared, or ultrasound, still have limitations in penetration depth or sensitivity. Photoacoustic temperature sensing is a newly developed method with potential for application in thermal therapy; it usually employs a single-wavelength laser for signal generation and temperature detection. Because of system disturbances, including laser intensity fluctuations, ambient temperature, and the complexity of the target, random measurement errors are unavoidable. To address these problems, we propose in this paper a new method of photoacoustic temperature sensing that uses two wavelengths to reduce random error and increase measurement accuracy. A brief theoretical analysis is first presented. In the experiment, a temperature measurement resolution of about 1° over the range of 23-48° in ex vivo pig blood was achieved, and a clear decrease in absolute error was observed, averaging 1.7° in the single-wavelength mode and nearly 1° in the dual-wavelength mode. The results indicate that dual-wavelength photoacoustic temperature sensing can reduce random error and improve measurement accuracy, and could be a more efficient method for photoacoustic temperature sensing in the thermal therapy of tumors.
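A toy simulation (not from the paper) of why averaging two wavelengths reduces random error: two independent estimates with the same standard deviation combine to give a random error smaller by roughly 1/sqrt(2). The noise level and independence assumption are illustrative.

```python
# Toy sketch: averaging two independent temperature estimates shrinks random error.
import numpy as np

rng = np.random.default_rng(0)
true_temp = 37.0
sigma = 1.7                                   # assumed single-wavelength error (deg)

est_w1 = true_temp + rng.normal(0, sigma, 10000)
est_w2 = true_temp + rng.normal(0, sigma, 10000)
est_dual = 0.5 * (est_w1 + est_w2)            # dual-wavelength estimate as a simple average

print(f"single-wavelength SD: {est_w1.std():.2f}")
print(f"dual-wavelength SD:   {est_dual.std():.2f}")   # ~ sigma / sqrt(2)
```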
McKendrick, Ryan; Parasuraman, Raja; Murtza, Rabia; Formwalt, Alice; Baccus, Wendy; Paczynski, Martin; Ayaz, Hasan
2016-01-01
Highly mobile computing devices promise to improve quality of life, productivity, and performance. Increased situation awareness and reduced mental workload are two potential means by which this can be accomplished. However, it is difficult to measure these concepts in the "wild". We employed ultra-portable battery operated and wireless functional near infrared spectroscopy (fNIRS) to non-invasively measure hemodynamic changes in the brain's Prefrontal cortex (PFC). Measurements were taken during navigation of a college campus with either a hand-held display, or an Augmented reality wearable display (ARWD). Hemodynamic measures were also paired with secondary tasks of visual perception and auditory working memory to provide behavioral assessment of situation awareness and mental workload. Navigating with an augmented reality wearable display produced the least workload during the auditory working memory task, and a trend for improved situation awareness in our measures of prefrontal hemodynamics. The hemodynamics associated with errors were also different between the two devices. Errors with an augmented reality wearable display were associated with increased prefrontal activity and the opposite was observed for the hand-held display. This suggests that the cognitive mechanisms underlying errors between the two devices differ. These findings show fNIRS is a valuable tool for assessing new technology in ecologically valid settings and that ARWDs offer benefits with regards to mental workload while navigating, and potentially superior situation awareness with improved display design.
McKendrick, Ryan; Parasuraman, Raja; Murtza, Rabia; Formwalt, Alice; Baccus, Wendy; Paczynski, Martin; Ayaz, Hasan
2016-01-01
Highly mobile computing devices promise to improve quality of life, productivity, and performance. Increased situation awareness and reduced mental workload are two potential means by which this can be accomplished. However, it is difficult to measure these concepts in the “wild”. We employed ultra-portable battery operated and wireless functional near infrared spectroscopy (fNIRS) to non-invasively measure hemodynamic changes in the brain’s Prefrontal cortex (PFC). Measurements were taken during navigation of a college campus with either a hand-held display, or an Augmented reality wearable display (ARWD). Hemodynamic measures were also paired with secondary tasks of visual perception and auditory working memory to provide behavioral assessment of situation awareness and mental workload. Navigating with an augmented reality wearable display produced the least workload during the auditory working memory task, and a trend for improved situation awareness in our measures of prefrontal hemodynamics. The hemodynamics associated with errors were also different between the two devices. Errors with an augmented reality wearable display were associated with increased prefrontal activity and the opposite was observed for the hand-held display. This suggests that the cognitive mechanisms underlying errors between the two devices differ. These findings show fNIRS is a valuable tool for assessing new technology in ecologically valid settings and that ARWDs offer benefits with regards to mental workload while navigating, and potentially superior situation awareness with improved display design. PMID:27242480
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming
2016-12-01
An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers as it provides accurate and autonomous navigational information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter. The difference between the computed star vector and the measured star vector is used as the measurement. With the aid of a star sensor and the two positions, the attitudinal and positional errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. The semi-physical simulation results show that the positional and attitudinal errors converge within 0.07″ and 0.1 m, respectively, when the given initial positional error is 1 km and the attitudinal error is 10°. These good results show that the proposed method can accomplish alignment, positioning and calibration functions simultaneously. Thus the proposed two-position initialization method has the potential for application in lunar rover navigation.
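Since the method builds on a standard Kalman filter, a generic predict/update step is sketched below for orientation. The state vector and star-vector measurement model of the actual alignment scheme are not reproduced here; this is a self-contained, hedged illustration with a toy scalar example.

```python
# Hedged illustration of a standard linear Kalman filter step (not the paper's model).
import numpy as np

def kf_step(x, P, F, Q, H, R, z):
    """One predict + update cycle of a linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z (in the scheme above, z would be built from the
    # computed-minus-measured star vector difference)
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 1-state example: estimate a constant bias from noisy scalar measurements.
x, P = np.array([0.0]), np.eye(1)
F, Q = np.eye(1), 1e-6 * np.eye(1)
H, R = np.eye(1), np.array([[0.01]])
for z in [0.12, 0.09, 0.11, 0.10]:
    x, P = kf_step(x, P, F, Q, H, R, np.array([z]))
print("estimated state:", x)
```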
Error Modeling of Multibaseline Optical Truss: Part 1: Modeling of System Level Performance
NASA Technical Reports Server (NTRS)
Milman, Mark H.; Korechoff, R. E.; Zhang, L. D.
2004-01-01
Global astrometry is the measurement of stellar positions and motions. These are typically characterized by five parameters, including two position parameters, two proper motion parameters, and parallax. The Space Interferometry Mission (SIM) will derive these parameters for a grid of approximately 1300 stars covering the celestial sphere to an accuracy of approximately 4 μas, representing a two orders of magnitude improvement over the most precise current star catalogues. Narrow-angle astrometry will be performed to a 1 μas accuracy. A wealth of scientific information will be obtained from these accurate measurements, encompassing many aspects of both galactic and extragalactic science. SIM will be subject to a number of instrument errors that can potentially degrade performance. Many of these errors are systematic in that they are relatively static and repeatable with respect to the time frame and direction of the observation. This paper and its companion define the modeling of the contributing factors to these errors and the analysis of how they impact SIM's ability to perform astrometric science.
Introduction to total- and partial-pressure measurements in vacuum systems
NASA Technical Reports Server (NTRS)
Outlaw, R. A.; Kern, F. A.
1989-01-01
An introduction to the fundamentals of total- and partial-pressure measurement in the vacuum regime (760 to 10^-16 Torr) is presented. The instruments most often used in scientific fields requiring vacuum measurement are discussed, with special emphasis on ionization-type gauges and quadrupole mass spectrometers. Some attention is also given to potential errors in measurement as well as calibration techniques.
Understanding diagnostic errors in medicine: a lesson from aviation
Singh, H; Petersen, L A; Thomas, E J
2006-01-01
The impact of diagnostic errors on patient safety in medicine is increasingly being recognized. Despite the current progress in patient safety research, the understanding of such errors and how to prevent them is inadequate. Preliminary research suggests that diagnostic errors have both cognitive and systems origins. Situational awareness is a model that is primarily used in aviation human factors research that can encompass both the cognitive and the systems roots of such errors. This conceptual model offers a unique perspective in the study of diagnostic errors. The applicability of this model is illustrated by the analysis of a patient whose diagnosis of spinal cord compression was substantially delayed. We suggest how the application of this framework could lead to potential areas of intervention and outline some areas of future research. It is possible that the use of such a model in medicine could help reduce errors in diagnosis and lead to significant improvements in patient care. Further research is needed, including the measurement of situational awareness and correlation with health outcomes. PMID:16751463
Brain Potentials Measured During a Go/NoGo Task Predict Completion of Substance Abuse Treatment
Steele, Vaughn R.; Fink, Brandi C.; Maurer, J. Michael; Arbabshirani, Mohammad R.; Wilber, Charles H.; Jaffe, Adam J.; Sidz, Anna; Pearlson, Godfrey D.; Calhoun, Vince D.; Clark, Vincent P.; Kiehl, Kent A.
2014-01-01
Background: US nationwide estimates indicate 50–80% of prisoners have a history of substance abuse or dependence. Tailoring substance abuse treatment to specific needs of incarcerated individuals could improve effectiveness of treating substance dependence and preventing drug abuse relapse. The purpose of the present study was to test the hypothesis that pre-treatment neural measures of a Go/NoGo task would predict which individuals would or would not complete a 12-week cognitive behavioral substance abuse treatment program. Methods: Adult incarcerated participants (N=89; Females=55) who volunteered for substance abuse treatment performed a response inhibition (Go/NoGo) task while event-related potentials (ERP) were recorded. Stimulus- and response-locked ERPs were compared between individuals who completed (N=68; Females=45) and discontinued (N=21; Females=10) treatment. Results: As predicted, stimulus-locked P2, response-locked error-related negativity (ERN/Ne), and response-locked error positivity (Pe), measured with windowed time-domain and principal component analysis, differed between groups. Using logistic regression and support-vector machine (i.e., pattern classifiers) models, P2 and Pe predicted treatment completion above and beyond other measures (i.e., N2, P300, ERN/Ne, age, sex, IQ, impulsivity, and self-reported depression, anxiety, motivation for change, and years of drug abuse). Conclusions: We conclude individuals who discontinue treatment exhibited deficiencies in sensory gating, as indexed by smaller P2, error-monitoring, as indexed by smaller ERN/Ne, and adjusting response strategy post-error, as indexed by larger Pe. However, the combination of P2 and Pe reliably predicted 83.33% of individuals who discontinued treatment. These results may help in the development of individualized therapies, which could lead to more favorable, long-term outcomes. PMID:24238783
A new model of Ishikawa diagram for quality assessment
NASA Astrophysics Data System (ADS)
Liliana, Luca
2016-11-01
The paper presents the results of a study concerning the use of the Ishikawa diagram in analyzing the causes that determine errors in the evaluation of parts precision in the machine construction field. The studied problem was "errors in the evaluation of parts precision", and this constitutes the head of the Ishikawa diagram skeleton. All the possible main and secondary causes that could generate the studied problem were identified. The best-known Ishikawa models are 4M, 5M, and 6M, the initials standing, in order, for materials, methods, man, machines, mother nature, and measurement. The paper shows the potential causes of the studied problem, which were first grouped in three categories, as follows: causes that lead to errors in assessing dimensional accuracy, causes that determine errors in the evaluation of shape and position deviations, and causes of errors in roughness evaluation. We took into account the main components of parts precision in the machine construction field. For each of the three categories of causes, potential secondary causes were distributed among the M groups (man, methods, machines, materials, environment/medio ambiente). We opted for a new model of Ishikawa diagram, resulting from the composition of three fish skeletons corresponding to the main categories of parts accuracy.
This project summary highlights recent findings from research undertaken to develop improved methods to assess potential human health risks related to drinking water disinfection byproduct (DBP) exposures.
Determination of Earth orientation using the Global Positioning System
NASA Technical Reports Server (NTRS)
Freedman, A. P.
1989-01-01
Modern spacecraft tracking and navigation require highly accurate Earth-orientation parameters. For near-real-time applications, errors in these quantities and their extrapolated values are a significant error source. A globally distributed network of high-precision receivers observing the full Global Positioning System (GPS) configuration of 18 or more satellites may be an efficient and economical method for the rapid determination of short-term variations in Earth orientation. A covariance analysis using the JPL Orbit Analysis and Simulation Software (OASIS) was performed to evaluate the errors associated with GPS measurements of Earth orientation. These GPS measurements appear to be highly competitive with those from other techniques and can potentially yield frequent and reliable centimeter-level Earth-orientation information while simultaneously allowing the oversubscribed Deep Space Network (DSN) antennas to be used more for direct project support.
Oldland, Alan R.; May, Sondra K.; Barber, Gerard R.; Stolpman, Nancy M.
2015-01-01
Purpose: To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. Methods: During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Results: Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Conclusions: Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training. PMID:25684799
Oldland, Alan R; Golightly, Larry K; May, Sondra K; Barber, Gerard R; Stolpman, Nancy M
2015-01-01
To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training.
Electromagnetic Flow Meter Having a Driver Circuit Including a Current Transducer
NASA Technical Reports Server (NTRS)
Patel, Sandeep K. (Inventor); Karon, David M. (Inventor); Cushing, Vincent (Inventor)
2014-01-01
An electromagnetic flow meter (EMFM) accurately measures both the complete flow rate and the dynamically fluctuating flow rate of a fluid by applying a unipolar DC voltage to excitation coils for a predetermined period of time, measuring the electric potential at a pair of electrodes, determining a complete flow rate and independently measuring the dynamic flow rate during the "on" cycle of the DC excitation, and correcting the measurements for errors resulting from galvanic drift and other effects on the electric potential. The EMFM can also correct for effects from the excitation circuit induced during operation of the EMFM.
Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten
2015-10-07
Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to reference values compared to the other 2 BGMS. Insulin dosing errors were lower for the Contour Next USB than for the other systems. All BGMS fulfilled the ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.
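The sketch below (invented paired values, not study data) shows the two basic calculations referred to above: MARD against the reference method and an ISO 15197:2013-style accuracy-limit check; the ±15 mg/dL / ±15% limits reflect the 2013 criteria as commonly summarized.

```python
# Hedged sketch: MARD and an ISO 15197:2013-style accuracy check for paired values.
import numpy as np

reference = np.array([65., 90., 110., 150., 200., 280., 320.])   # mg/dL (hexokinase, made up)
meter     = np.array([70., 86., 118., 141., 210., 265., 336.])   # mg/dL (BGMS, made up)

abs_rel_diff = np.abs(meter - reference) / reference
mard = 100.0 * abs_rel_diff.mean()

within = np.where(reference < 100,
                  np.abs(meter - reference) <= 15,   # +/-15 mg/dL below 100 mg/dL
                  abs_rel_diff <= 0.15)               # +/-15% at or above 100 mg/dL
pct_within = 100.0 * within.mean()

print(f"MARD = {mard:.1f}%, {pct_within:.0f}% of results within the accuracy limits")
```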
A dual-phantom system for validation of velocity measurements in stenosis models under steady flow.
Blake, James R; Easson, William J; Hoskins, Peter R
2009-09-01
A dual-phantom system is developed for validation of velocity measurements in stenosis models. Pairs of phantoms with identical geometry and flow conditions are manufactured, one for ultrasound and one for particle image velocimetry (PIV). The PIV model is made from silicone rubber, and a new PIV fluid is made that matches the refractive index of 1.41 of silicone. Dynamic scaling was performed to correct for the increased viscosity of the PIV fluid compared with that of the ultrasound blood mimic. The degree of stenosis in the model pairs agreed to less than 1%. The velocities in the laminar flow region up to the peak velocity location agreed to within 15%, and the difference could be explained by errors in ultrasound velocity estimation. At low flow rates and in mild stenoses, good agreement was observed in the distal flow fields, excepting the maximum velocities. At high flow rates, there was considerable difference in velocities in the poststenosis flow field (maximum centreline differences of 30%), which would seem to represent real differences in hydrodynamic behavior between the two models. Sources of error included: variation of viscosity because of temperature (random error, which could account for differences of up to 7%); ultrasound velocity estimation errors (systematic errors); and geometry effects in each model, particularly because of imperfect connectors and corners (systematic errors, potentially affecting the inlet length and flow stability). The current system is best placed to investigate measurement errors in the laminar flow region rather than the poststenosis turbulent flow region.
The power of vertical geolocation of atmospheric profiles from GNSS radio occultation.
Scherllin-Pirscher, Barbara; Steiner, Andrea K; Kirchengast, Gottfried; Schwärz, Marc; Leroy, Stephen S
2017-02-16
High-resolution measurements from Global Navigation Satellite System (GNSS) radio occultation (RO) provide atmospheric profiles with independent information on altitude and pressure. This unique property is of crucial advantage when analyzing atmospheric characteristics that require joint knowledge of altitude and pressure or other thermodynamic atmospheric variables. Here we introduce and demonstrate the utility of this independent information from RO and discuss the computation, uncertainty, and use of RO atmospheric profiles on isohypsic coordinates (mean sea level altitude and geopotential height) as well as on thermodynamic coordinates (pressure and potential temperature). Using geopotential height as vertical grid, we give information on errors of RO-derived temperature, pressure, and potential temperature profiles and provide an empirical error model which accounts for seasonal and latitudinal variations. The observational uncertainty of individual temperature/pressure/potential temperature profiles is about 0.7 K/0.15%/1.4 K in the tropopause region. It gradually increases into the stratosphere and decreases toward the lower troposphere. This decrease is due to the increasing influence of background information. The total climatological error of mean atmospheric fields is, in general, dominated by the systematic error component. We use sampling error-corrected climatological fields to demonstrate the power of having different and accurate vertical coordinates available. As examples we analyze characteristics of the location of the tropopause for geopotential height, pressure, and potential temperature coordinates as well as seasonal variations of the midlatitude jet stream core. This highlights the broad applicability of RO and the utility of its versatile vertical geolocation for investigating the vertical structure of the troposphere and stratosphere.
IMRT QA: Selecting gamma criteria based on error detection sensitivity.
Steers, Jennifer M; Fraass, Benedick A
2016-04-01
The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
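To illustrate the metric that the passing rates above are based on, here is a hedged sketch of a simple 1-D global gamma comparison with an induced dose/position error; clinical tools operate on 2-D/3-D dose grids with more efficient searches, and the profile and criteria values here are arbitrary.

```python
# Hedged sketch of a 1-D global gamma comparison (illustrative only).
import numpy as np

def gamma_pass_rate(meas_pos, meas_dose, calc_pos, calc_dose,
                    dose_crit=0.03, dta_mm=3.0, threshold=0.10):
    """Global gamma: dose criterion as a fraction of the maximum measured dose."""
    d_ref = dose_crit * meas_dose.max()
    passed, evaluated = 0, 0
    for xm, dm in zip(meas_pos, meas_dose):
        if dm < threshold * meas_dose.max():
            continue                              # below the low-dose threshold
        evaluated += 1
        gamma_sq = ((calc_dose - dm) / d_ref) ** 2 + ((calc_pos - xm) / dta_mm) ** 2
        if np.sqrt(gamma_sq.min()) <= 1.0:
            passed += 1
    return 100.0 * passed / max(evaluated, 1)

x = np.linspace(-50, 50, 201)                               # mm
measured = np.exp(-x**2 / (2 * 20.0**2))                    # synthetic profile
calculated = 1.02 * np.exp(-(x - 1.0)**2 / (2 * 20.0**2))   # induced ~2% / 1 mm error

print(f"Pass rate (3%/3 mm, 10% threshold): "
      f"{gamma_pass_rate(x, measured, x, calculated):.1f}%")
```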
NASA Astrophysics Data System (ADS)
Zeng, Y. Y.; Guo, J. Y.; Shang, K.; Shum, C. K.; Yu, J. H.
2015-09-01
Two methods for computing gravitational potential difference (GPD) between the GRACE satellites using orbit data have been formulated based on the energy integral: one in the geocentric inertial frame (GIF) and another in the Earth-fixed frame (EFF). Here we present a rigorous theoretical formulation in EFF with particular emphasis on necessary approximations, provide a computational approach to mitigate the approximations to a negligible level, and verify our approach using simulations. We conclude that a term neglected or ignored in all former work without verification should be retained. In our simulations, 2 cycle-per-revolution (CPR) errors are present in the GPD computed using our formulation, and empirical removal of the 2 CPR and lower frequency errors can improve the precisions of Stokes coefficients (SCs) of degree 3 and above by 1-2 orders of magnitude. This is despite the fact that the result without removing these errors is already accurate enough. Furthermore, the relation between data errors and their influences on GPD is analysed, and a formal examination is made on the possible precision that real GRACE data may attain. The result of removing 2 CPR errors may imply that, if not taken care of properly, the values of SCs computed by means of the energy integral method using real GRACE data may be seriously corrupted by aliasing errors from possibly very large 2 CPR errors, based on two facts: (1) errors of $\bar{C}_{2,0}$ manifest as 2 CPR errors in GPD, and (2) errors of $\bar{C}_{2,0}$ in GRACE data (the differences between the CSR monthly values of $\bar{C}_{2,0}$ independently determined using GRACE and SLR are a reasonable measure of their magnitude) are very large. Our simulations show that, if 2 CPR errors in GPD vary from day to day as much as those corresponding to errors of $\bar{C}_{2,0}$ from month to month, the aliasing errors of degree 15 and above SCs computed using a month's GPD data may attain a level comparable to the magnitude of the gravitational potential variation signal that GRACE was designed to recover. Consequently, we conclude that aliasing errors from 2 CPR errors in real GRACE data may be very large if not properly handled; and therefore, we propose an approach to reduce aliasing errors from 2 CPR and lower frequency errors for computing SCs above degree 2.
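As a hedged illustration of the empirical removal step described above, the sketch below fits and subtracts a bias, a linear trend, and 1 CPR and 2 CPR sinusoids from a GPD-like series over one revolution by least squares. The orbital period, sampling, and synthetic signal/error amplitudes are assumptions; the actual GRACE processing is considerably more involved.

```python
# Hedged sketch: least-squares removal of bias, trend, 1 CPR and 2 CPR terms.
import numpy as np

def remove_low_freq_and_2cpr(t, y, t_rev):
    """Fit and subtract a0 + a1*t plus 1 CPR and 2 CPR sinusoids."""
    w = 2.0 * np.pi / t_rev                      # 1 cycle-per-revolution angular frequency
    A = np.column_stack([np.ones_like(t), t,
                         np.cos(w * t), np.sin(w * t),
                         np.cos(2 * w * t), np.sin(2 * w * t)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coeffs

t = np.arange(0.0, 5400.0, 5.0)                  # one ~90-minute revolution, 5 s sampling
signal = 1e-3 * np.sin(2 * np.pi * t / 600.0)    # stand-in higher-frequency GPD signal
error = 0.05 + 2e-2 * np.cos(2 * np.pi * 2 * t / 5400.0)   # bias plus a 2 CPR error
cleaned = remove_low_freq_and_2cpr(t, signal + error, t_rev=5400.0)

print("residual RMS relative to the stand-in signal:", np.std(cleaned - signal))
```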
Simulating and assessing boson sampling experiments with phase-space representations
NASA Astrophysics Data System (ADS)
Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.
2018-04-01
The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.
Outpatient Prescribing Errors and the Impact of Computerized Prescribing
Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W
2005-01-01
Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with several error models within a Bayesian joint probability framework to investigate the seasonal dependence of the prediction error structure. A seasonal invariant error model, analogous to traditional time series analysis, uses constant parameters for model error and accounts for no seasonal variation. In contrast, a seasonal variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connection amongst model parameters from similar months is not considered within the seasonal variant model and could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model shows better reliability than the seasonal invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonal variant error model are very sensitive to each cross-validation fold, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, except for the error bias. The seasonal variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The flexibility and robustness of the hierarchical error model indicate that it has great potential for future streamflow predictions.
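The two headline skill scores can be computed directly from forecasts and observations; below is a small sketch (standard formulas, hypothetical data, not the authors' code) of the Nash-Sutcliffe efficiency for the mean prediction and the sample-based CRPS estimator for an ensemble forecast.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def crps_ensemble(obs, ens):
    """Sample CRPS for one observation and an ensemble forecast:
    CRPS = E|X - obs| - 0.5 * E|X - X'|."""
    ens = np.asarray(ens, float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

# Hypothetical monthly flows (mm) and an ensemble forecast for one month
obs = [12.0, 30.5, 8.2, 55.1]
sim = [10.3, 28.0, 11.0, 60.2]
print("NSE:", round(nash_sutcliffe(obs, sim), 3))
print("CRPS:", round(crps_ensemble(30.5, np.random.default_rng(0).gamma(3.0, 10.0, 500)), 3))
```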
Lystrom, David J.
1972-01-01
Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
Video camera system for locating bullet holes in targets at a ballistics tunnel
NASA Technical Reports Server (NTRS)
Burner, A. W.; Rummler, D. R.; Goad, W. K.
1990-01-01
A system consisting of a single charge coupled device (CCD) video camera, computer controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
Intraobserver reliability of contact pachymetry in children.
Weise, Katherine K; Kaminski, Brett; Melia, Michele; Repka, Michael X; Bradfield, Yasmin S; Davitt, Bradley V; Johnson, David A; Kraker, Raymond T; Manny, Ruth E; Matta, Noelle S; Schloff, Susan
2013-04-01
Central corneal thickness (CCT) is an important measurement in the treatment and management of pediatric glaucoma and potentially of refractive error, but data regarding reliability of CCT measurement in children are limited. The purpose of this study was to evaluate the reliability of CCT measurement with the use of handheld contact pachymetry in children. We conducted a multicenter intraobserver test-retest reliability study of more than 3,400 healthy eyes in children aged from newborn to 17 years by using a handheld contact pachymeter (Pachmate DGH55; DGH Technology Inc, Exton, PA) in 2 clinical settings--with the use of topical anesthesia in the office and with the patient under general anesthesia in a surgical facility. The overall standard error of measurement, including only measurements with standard deviation ≤5 μm, was 8 μm; the corresponding coefficient of repeatability, or limits within which 95% of test-retest differences fell, was ±22.3 μm. However, standard error of measurement increased as CCT increased, from 6.8 μm for CCT less than 525 μm, to 12.9 μm for CCT 625 μm and greater. The standard error of measurement including measurements with standard deviation >5 μm was 10.5 μm. Age, sex, race/ethnicity group, and examination setting did not influence the magnitude of test-retest differences. CCT measurement reliability in children via the Pachmate DGH55 handheld contact pachymeter is similar to that reported for adults. Because thicker CCT measurements are less reliable than thinner measurements, a second measure may be helpful when the first exceeds 575 μm. Reliability is also improved by disregarding measurements with instrument-reported standard deviations >5 μm. Copyright © 2013 American Association for Pediatric Ophthalmology and Strabismus. Published by Mosby, Inc. All rights reserved.
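A short sketch, using simulated test-retest pairs rather than the study data, of how the quoted quantities relate: the standard error of measurement (SEM) estimated from paired differences, and the coefficient of repeatability, 1.96·√2·SEM, so an SEM of 8 μm corresponds to roughly ±22 μm.

```python
import numpy as np

def sem_and_repeatability(test, retest):
    """SEM from test-retest pairs: sd(differences)/sqrt(2);
    coefficient of repeatability: 1.96 * sqrt(2) * SEM."""
    d = np.asarray(test, float) - np.asarray(retest, float)
    sem = d.std(ddof=1) / np.sqrt(2.0)
    cor = 1.96 * np.sqrt(2.0) * sem
    return sem, cor

# Hypothetical paired CCT readings (micrometres)
rng = np.random.default_rng(1)
true_cct = rng.normal(550.0, 30.0, 200)
test = true_cct + rng.normal(0.0, 8.0, 200)    # assumed measurement SEM ~8 um
retest = true_cct + rng.normal(0.0, 8.0, 200)
sem, cor = sem_and_repeatability(test, retest)
print(f"SEM = {sem:.1f} um, coefficient of repeatability = +/-{cor:.1f} um")
```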
Shack-Hartmann Phasing of Segmented Telescopes: Systematic Effects from Lenslet Arrays
NASA Technical Reports Server (NTRS)
Troy, Mitchell; Chanan, Gary; Roberts, Jennifer
2010-01-01
The segments in the Keck telescopes are routinely phased using a Shack-Hartmann wavefront sensor with sub-apertures that span adjacent segments. However, one potential limitation to the absolute accuracy of this technique is that it relies on a lenslet array (or a single lens plus a prism array) to form the subimages. These optics have the potential to introduce wavefront errors and stray reflections at the subaperture level that will bias the phasing measurement. We present laboratory data to quantify this effect, using measured errors from Keck and two other lenslet arrays. In addition, as part of the design of the Thirty Meter Telescope Alignment and Phasing System we present a preliminary investigation of a lenslet-free approach that relies on Fresnel diffraction to form the subimages at the CCD. Such a technique has several advantages, including the elimination of lenslet aberrations.
Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.
Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T
2016-03-01
High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion-the intraclonal diversity index-which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology.
Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting
Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.
2016-01-01
High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518
NASA Astrophysics Data System (ADS)
Möhler, Christian; Russ, Tom; Wohlfahrt, Patrick; Elter, Alina; Runz, Armin; Richter, Christian; Greilich, Steffen
2018-01-01
An experimental setup for consecutive measurement of ion and x-ray absorption in tissue or other materials is introduced. With this setup using a 3D-printed sample container, the reference stopping-power ratio (SPR) of materials can be measured with an uncertainty of below 0.1%. A total of 65 porcine and bovine tissue samples were prepared for measurement, comprising five samples each of 13 tissue types representing about 80% of the total body mass (three different muscle and fatty tissues, liver, kidney, brain, heart, blood, lung and bone). Using a standard stoichiometric calibration for single-energy CT (SECT) as well as a state-of-the-art dual-energy CT (DECT) approach, SPR was predicted for all tissues and then compared to the measured reference. With the SECT approach, the SPRs of all tissues were predicted with a mean error of (-0.84 ± 0.12)% and a mean absolute error of (1.27 ± 0.12)%. In contrast, the DECT-based SPR predictions were overall consistent with the measured reference with a mean error of (-0.02 ± 0.15)% and a mean absolute error of (0.10 ± 0.15)%. Thus, in this study, the potential of DECT to decrease range uncertainty could be confirmed in biological tissue.
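A minimal sketch, with hypothetical SPR values, of the two summary statistics used above: the mean (signed) relative error and the mean absolute relative error of predicted versus measured stopping-power ratios.

```python
import numpy as np

def spr_prediction_errors(predicted, reference):
    """Relative prediction errors in percent, plus mean error (bias)
    and mean absolute error, as used to compare SECT and DECT above."""
    predicted = np.asarray(predicted, float)
    reference = np.asarray(reference, float)
    rel_err = 100.0 * (predicted - reference) / reference
    return rel_err.mean(), np.abs(rel_err).mean()

# Hypothetical SPR values for a few tissue samples (reference vs. CT-predicted)
reference = [1.055, 0.975, 1.040, 0.920]
predicted = [1.046, 0.970, 1.031, 0.912]
me, mae = spr_prediction_errors(predicted, reference)
print(f"mean error = {me:.2f}%, mean absolute error = {mae:.2f}%")
```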
Logan, Dustin M.; Hill, Kyle R.; Larson, Michael J.
2015-01-01
Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212
NASA Technical Reports Server (NTRS)
Colombo, Oscar L. (Editor)
1992-01-01
This symposium on space and airborne techniques for measuring gravity fields, and related theory, contains papers on gravity modeling of Mars and Venus at NASA/GSFC, an integrated laser Doppler method for measuring planetary gravity fields, observed temporal variations in the earth's gravity field from 16-year Starlette orbit analysis, high-resolution gravity models combining terrestrial and satellite data, the effect of water vapor corrections for satellite altimeter measurements of the geoid, and laboratory demonstrations of superconducting gravity and inertial sensors for space and airborne gravity measurements. Other papers are on airborne gravity measurements over the Kelvin Seamount; the accuracy of GPS-derived acceleration from moving platform tests; airborne gravimetry, altimetry, and GPS navigation errors; controlling common mode stabilization errors in airborne gravity gradiometry, GPS/INS gravity measurements in space and on a balloon, and Walsh-Fourier series expansion of the earth's gravitational potential.
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute in propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy, incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth, electron energy, ~0.5cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria. 
PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
Gerencser, Akos A; Chinopoulos, Christos; Birket, Matthew J; Jastroch, Martin; Vitelli, Cathy; Nicholls, David G; Brand, Martin D
2012-01-01
Mitochondrial membrane potential (ΔΨM) is a central intermediate in oxidative energy metabolism. Although ΔΨM is routinely measured qualitatively or semi-quantitatively using fluorescent probes, its quantitative assay in intact cells has been limited mostly to slow, bulk-scale radioisotope distribution methods. Here we derive and verify a biophysical model of fluorescent potentiometric probe compartmentation and dynamics using a bis-oxonol-type indicator of plasma membrane potential (ΔΨP) and the ΔΨM probe tetramethylrhodamine methyl ester (TMRM) using fluorescence imaging and voltage clamp. Using this model we introduce a purely fluorescence-based quantitative assay to measure absolute values of ΔΨM in millivolts as they vary in time in individual cells in monolayer culture. The ΔΨP-dependent distribution of the probes is modelled by Eyring rate theory. Solutions of the model are used to deconvolute ΔΨP and ΔΨM in time from the probe fluorescence intensities, taking into account their slow, ΔΨP-dependent redistribution and Nernstian behaviour. The calibration accounts for matrix:cell volume ratio, high- and low-affinity binding, activity coefficients, background fluorescence and optical dilution, allowing comparisons of potentials in cells or cell types differing in these properties. In cultured rat cortical neurons, ΔΨM is −139 mV at rest, and is regulated between −108 mV and −158 mV by concerted increases in ATP demand and Ca2+-dependent metabolic activation. Sensitivity analysis showed that the standard error of the mean in the absolute calibrated values of resting ΔΨM including all biological and systematic measurement errors introduced by the calibration parameters is less than 11 mV. Between samples treated in different ways, the typical equivalent error is ∼5 mV. PMID:22495585
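The Nernstian core of such a calibration can be written in a few lines; the sketch below (assumed constants and ratios, not the authors' full model, which additionally corrects for binding, volume ratio, activity coefficients, background fluorescence and redistribution kinetics) converts a probe accumulation ratio into a membrane potential in millivolts.

```python
import numpy as np

# For a monovalent cationic probe (e.g., TMRM) at Nernstian equilibrium,
#   c_in / c_out = exp(-F * dPsi / (R * T)),  so  dPsi = -(R*T/F) * ln(c_in/c_out).

R = 8.314      # J mol-1 K-1
F = 96485.0    # C mol-1
T = 310.15     # K (37 degC)

def delta_psi_mV(ratio_in_out):
    """Membrane potential (mV) implied by a Nernstian accumulation ratio."""
    return -(R * T / F) * np.log(ratio_in_out) * 1000.0

# Hypothetical fluorescence-derived accumulation ratios
for ratio in (50.0, 100.0, 200.0):
    print(f"c_in/c_out = {ratio:5.0f}  ->  dPsi = {delta_psi_mV(ratio):6.1f} mV")
```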
Application of Monte-Carlo Analyses for the Microwave Anisotropy Probe (MAP) Mission
NASA Technical Reports Server (NTRS)
Mesarch, Michael A.; Rohrbaugh, David; Schiff, Conrad; Bauer, Frank H. (Technical Monitor)
2001-01-01
The Microwave Anisotropy Probe (MAP) is the third launch in the National Aeronautics and Space Administration's (NASA's) Medium Class Explorers (MIDEX) program. MAP will measure, in greater detail, the cosmic microwave background radiation from an orbit about the Sun-Earth-Moon L2 Lagrangian point. Maneuvers will be required to transition MAP from its initial highly elliptical orbit to a lunar encounter which will provide the remaining energy to send MAP out to a Lissajous orbit about L2. Monte-Carlo analysis methods were used to evaluate the potential maneuver error sources and determine their effect on the fixed MAP propellant budget. This paper will discuss the results of the analyses on three separate phases of the MAP mission: recovering from launch vehicle errors, responding to phasing loop maneuver errors, and evaluating the effect of maneuver execution errors and orbit determination errors on stationkeeping maneuvers at L2.
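A toy sketch of the kind of Monte-Carlo propagation described: sample assumed launch-vehicle, execution, and orbit-determination errors, accumulate the resulting delta-V, and read off a sizing percentile. All distributions and numbers are hypothetical, not values from the MAP analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10000

# Hypothetical nominal delta-V (m/s) for each mission phase
nominal = {"phasing_loops": 60.0, "lunar_targeting": 15.0, "L2_stationkeeping": 20.0}

# Assumed error models: launch-vehicle dispersion adds correction delta-V,
# each maneuver has a proportional execution error, plus orbit-determination noise
launch_correction = np.abs(rng.normal(0.0, 10.0, n_trials))          # m/s
exec_error = 1.0 + rng.normal(0.0, 0.02, (n_trials, len(nominal)))   # 2% 1-sigma
od_noise = np.abs(rng.normal(0.0, 1.0, n_trials))                    # m/s

total_dv = launch_correction + od_noise + (exec_error * np.array(list(nominal.values()))).sum(axis=1)

print(f"mean total delta-V: {total_dv.mean():.1f} m/s")
print(f"99th-percentile delta-V (sizing case): {np.percentile(total_dv, 99):.1f} m/s")
```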
Hincapie, Ana L; Slack, Marion; Malone, Daniel C; MacKinnon, Neil J; Warholak, Terri L
2016-01-01
Patients may be the most reliable reporters of some aspects of the health care process; their perspectives should be considered when pursuing changes to improve patient safety. The authors evaluated the association between patients' perceived health care quality and self-reported medical, medication, and laboratory errors in a multinational sample. The analysis was conducted using the 2010 Commonwealth Fund International Health Policy Survey, a multinational consumer survey conducted in 11 countries. Quality of care was measured by a multifaceted construct developed using Rasch techniques. After adjusting for potentially important confounding variables, an increase in respondents' perceptions of care coordination decreased the odds of self-reporting medical errors, medication errors, and laboratory errors (P < .001). As health care stakeholders continue to search for initiatives that improve care experiences and outcomes, this study's results emphasize the importance of guaranteeing integrated care.
NASA Astrophysics Data System (ADS)
Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric
2013-04-01
Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and the future evolution of CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry,...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly converts into flux changes when assuming perfect transport in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated into a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in TRANSCOM-CH4 experiment (Patra el al, 2011) provide synthetic measurements data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive them) used to create the pseudo-observations vary among inversions. Consequently, the comparisons of the nine inverted methane fluxes obtained for 2005 give a good order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at global scale, representing 5% of the total methane emissions for 2005. At continental scale, transport and modelling errors have bigger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasian, with a percentage range from 23% to 48%. Thus, contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of statistics errors included in our inversions have been computed. It shows that errors contained in measurement errors covariance matrix are under-estimated in current inversions, suggesting to include more properly transport and modelling errors in future inversions.
Wilberg, Dale E.; Stolp, Bernard J.
2005-01-01
This report contains the results of an October 2001 seepage investigation conducted along a reach of the Escalante River in Utah extending from the U.S. Geological Survey streamflow-gaging station near Escalante to the mouth of Stevens Canyon. Discharge was measured at 16 individual sites along 15 consecutive reaches. Total reach length was about 86 miles. A reconnaissance-level sampling of water for tritium and chlorofluorocarbons was also done. In addition, hydrologic and water-quality data previously collected and published by the U.S. Geological Survey for the 2,020-square-mile Escalante River drainage basin were compiled and are presented in 12 tables. These data were collected from 64 surface-water sites and 28 springs from 1909 to 2002. None of the 15 consecutive reaches along the Escalante River had a measured loss or gain that exceeded the measurement error. All discharge measurements taken during the seepage investigation were assigned a qualitative rating of accuracy that ranged from 5 percent to greater than 8 percent of the actual flow. Summing the potential error for each measurement and dividing by the maximum of either the upstream discharge and any tributary inflow, or the downstream discharge, determined the normalized error for a reach. This was compared to the computed loss or gain that also was normalized to the maximum discharge. A loss or gain for a specified reach is considered significant when the loss or gain (normalized percentage difference) is greater than the measurement error (normalized percentage error). The percentage difference and percentage error were normalized to allow comparison between reaches with different amounts of discharge. The plate that accompanies the report is 36" by 40" and can be printed in 16 tiles, 8.5 by 11 inches. An index for the tiles is located on the lower left-hand side of the plate. Using Adobe Acrobat, the plate can be viewed independent of the report; all Acrobat functions are available.
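A small sketch of the significance rule described above, using hypothetical discharges and accuracy ratings: a reach gain or loss is flagged only when the normalized difference exceeds the normalized sum of measurement errors.

```python
def reach_significance(q_up, q_trib, q_down, ratings):
    """Decide whether a reach gain/loss is significant: the normalized difference
    must exceed the normalized sum of individual measurement errors.
    `ratings` are fractional accuracy ratings (e.g., 0.05 or 0.08) paired with
    each measured discharge."""
    q_max = max(q_up + q_trib, q_down)
    diff_norm = (q_down - (q_up + q_trib)) / q_max
    err_norm = sum(frac * q for frac, q in ratings) / q_max
    return diff_norm, err_norm, abs(diff_norm) > err_norm

# Hypothetical reach: 12.0 cfs upstream, 0.5 cfs tributary inflow, 11.2 cfs downstream
diff, err, significant = reach_significance(
    12.0, 0.5, 11.2,
    ratings=[(0.05, 12.0), (0.08, 0.5), (0.05, 11.2)])
print(f"normalized difference {diff:+.1%}, normalized error {err:.1%}, significant: {significant}")
```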
The Measurement of Ammonia in Human Breath and its Potential in Clinical Diagnostics.
Brannelly, N T; Hamilton-Shield, J P; Killard, A J
2016-11-01
Ammonia is an important component of metabolism and is involved in many physiological processes. During normal physiology, levels of blood ammonia are between 11 and 50 µM. Elevated blood ammonia levels are associated with a variety of pathological conditions such as liver and kidney dysfunction, Reye's syndrome and a variety of inborn errors of metabolism including urea cycle disorders (UCD), organic acidaemias and hyperinsulinism/hyperammonaemia syndrome in which ammonia may reach levels in excess of 1 mM. It is highly neurotoxic and so effective measurement is critical for assessing and monitoring disease severity and treatment. Ammonia is also a potential biomarker in exercise physiology and studies of drug metabolism. Current ammonia testing is based on blood sampling, which is inconvenient and can be subject to significant analytical errors due to the quality of the sample draw, its handling and preparation for analysis. Blood ammonia is in gaseous equilibrium with the lungs. Recent research has demonstrated the potential use of breath ammonia as a non-invasive means of measuring systemic ammonia. This requires measurement of ammonia in real breath samples with associated temperature, humidity and gas characteristics at concentrations between 50 and several thousand parts per billion. This review explores the diagnostic applications of ammonia measurement and the impact that the move from blood to breath analysis could have on how these processes and diseases are studied and managed.
NASA Astrophysics Data System (ADS)
Palmesi, P.; Abert, C.; Bruckner, F.; Suess, D.
2018-05-01
Fast stray field calculation is commonly considered of great importance for micromagnetic simulations, since it is the most time-consuming part of the simulation. The Fast Multipole Method (FMM) has displayed linear O(N) parallelization behavior on many cores. This article investigates the error of a recent FMM approach that approximates sources using linear, instead of constant, finite elements in the singular integral for calculating the stray field and the corresponding potential. Performance was measured in an earlier manuscript; here we investigate the convergence of the relative L2 error for several FMM simulation parameters. Various scenarios, calculating the stray field either directly or via the potential, are discussed.
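The error metric itself is straightforward; a minimal sketch (hypothetical field samples, not the article's solver output) of the relative L2 error between an FMM result and a reference evaluation:

```python
import numpy as np

def relative_l2_error(approx, reference):
    """Relative L2 error between an FMM-approximated field and a reference
    (e.g., directly evaluated) field sampled at the same points."""
    approx = np.asarray(approx, float)
    reference = np.asarray(reference, float)
    return np.linalg.norm(approx - reference) / np.linalg.norm(reference)

# Hypothetical stray-field samples at N evaluation points
rng = np.random.default_rng(0)
h_direct = rng.normal(size=(1000, 3))                  # stand-in reference field
h_fmm = h_direct + 1e-4 * rng.normal(size=(1000, 3))   # stand-in FMM result
print(f"relative L2 error: {relative_l2_error(h_fmm, h_direct):.2e}")
```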
Veenstra, Alma; Antoniou, Kyriakos; Katsos, Napoleon; Kissine, Mikhail
2018-04-19
We propose that attraction errors in agreement production (e.g., the key to the cabinets are missing) are related to two components of executive control: working memory and inhibitory control. We tested 138 children aged 10 to 12, an age when children are expected to produce high rates of errors. To increase the potential of individual variation in executive control skills, participants came from monolingual, bilingual, and bidialectal language backgrounds. Attraction errors were elicited with a picture description task in Dutch and executive control was measured with a digit span task, Corsi blocks task, switching task, and attentional networks task. Overall, higher rates of attraction errors were negatively associated with higher verbal working memory and, independently, with higher inhibitory control. To our knowledge, this is the first demonstration of the role of both working memory and inhibitory control in attraction errors in production. Implications for memory- and grammar-based models are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Long-term academic stress increases the late component of error processing: an ERP study.
Wu, Jianhui; Yuan, Yiran; Duan, Hongxia; Qin, Shaozheng; Buchanan, Tony W; Zhang, Kan; Zhang, Liang
2014-05-01
Exposure to long-term stress has a variety of consequences on the brain and cognition. Few studies have examined the influence of long-term stress on event related potential (ERP) indices of error processing. The current study investigated how long-term academic stress modulates the error related negativity (Ne or ERN) and the error positivity (Pe) components of error processing. Forty-one male participants undergoing preparation for a major academic examination and 20 non-exam participants completed a Go-NoGo task while ERP measures were collected. The exam group reported higher perceived stress levels and showed increased Pe amplitude compared with the non-exam group. Participants' rating of the importance of the exam was positively associated with the amplitude of Pe, but these effects were not found for the Ne/ERN. These results suggest that long-term academic stress leads to greater motivational assessment of and higher emotional response to errors. Copyright © 2014 Elsevier B.V. All rights reserved.
Long-term care physical environments--effect on medication errors.
Mahmood, Atiya; Chaudhury, Habib; Gaumont, Alana; Rust, Tiana
2012-01-01
Few studies examine physical environmental factors and their effects on staff health, effectiveness, work errors and job satisfaction. To address this gap, this study aims to examine environmental features and their role in medication and nursing errors in long-term care facilities. A mixed methodological strategy was used. Data were collected via focus groups, observing medication preparation and administration, and a nursing staff survey in four facilities. The paper reveals that, during the medication preparation phase, physical design, such as medication room layout, is a major source of potential errors. During medication administration, social environment is more likely to contribute to errors. Interruptions, noise and staff shortages were particular problems. The survey's relatively small sample size needs to be considered when interpreting the findings. Also, actual error data could not be included as existing records were incomplete. The study offers several relatively low-cost recommendations to help staff reduce medication errors. Physical environmental factors are important when addressing measures to reduce errors. The findings of this study underscore the fact that the physical environment's influence on the possibility of medication errors is often neglected. This study contributes to the scarce empirical literature examining the relationship between physical design and patient safety.
Schwappach, David L B; Frank, Olga; Davis, Rachel E
2013-10-01
Various authorities recommend the participation of patients in promoting patient safety, but little is known about health care professionals' (HCPs') attitudes towards patients' involvement in safety-related behaviours. To investigate how HCPs evaluate patients' behaviours and HCP responses to patient involvement in the behaviour, relative to different aspects of the patient, the involved HCP and the potential error. Cross-sectional fractional factorial survey with seven factors embedded in two error scenarios (missed hand hygiene, medication error). Each survey included two randomized vignettes that described the potential error, a patient's reaction to that error and the HCP response to the patient. Twelve hospitals in Switzerland. A total of 1141 HCPs (response rate 45%). Approval of patients' behaviour, HCP response to the patient, anticipated effects on the patient-HCP relationship, HCPs' support for being asked the question, affective response to the vignettes. Outcomes were measured on 7-point scales. Approval of patients' safety-related interventions was generally high and largely affected by patients' behaviour and correct identification of error. Anticipated effects on the patient-HCP relationship were much less positive, little correlated with approval of patients' behaviour and were mainly determined by the HCP response to intervening patients. HCPs expressed more favourable attitudes towards patients intervening about a medication error than about hand sanitation. This study provides the first insights into predictors of HCPs' attitudes towards patient engagement in safety. Future research is however required to assess the generalizability of the findings into practice before training can be designed to address critical issues. © 2012 John Wiley & Sons Ltd.
Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study
Silva, Maria das Dores Graciano; Rosa, Mário Borges; Franklin, Bryony Dean; Reis, Adriano Max Moreira; Anchieta, Lêni Márcia; Mota, Joaquim Antônio César
2011-01-01
OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with the prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitancy should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system. PMID:22012039
Relative entropy as a universal metric for multiscale errors
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2010-06-01
We show that the relative entropy, Srel, suggests a fundamental indicator of the success of multiscale studies, in which coarse-grained (CG) models are linked to first-principles (FP) ones. We demonstrate that Srel inherently measures fluctuations in the differences between CG and FP potential energy landscapes, and develop a theory that tightly and generally links it to errors associated with coarse graining. We consider two simple case studies substantiating these results, and suggest that Srel has important ramifications for evaluating and designing coarse-grained models.
Relative entropy as a universal metric for multiscale errors.
Chaimovich, Aviel; Shell, M Scott
2010-06-01
We show that the relative entropy, Srel, suggests a fundamental indicator of the success of multiscale studies, in which coarse-grained (CG) models are linked to first-principles (FP) ones. We demonstrate that Srel inherently measures fluctuations in the differences between CG and FP potential energy landscapes, and develop a theory that tightly and generally links it to errors associated with coarse graining. We consider two simple case studies substantiating these results, and suggest that Srel has important ramifications for evaluating and designing coarse-grained models.
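As a toy illustration of the quantity itself (not the authors' coarse-graining formalism), the sketch below estimates a discrete relative entropy between samples from a "fine-grained" and a "coarse-grained" ensemble projected onto one coordinate; names and distributions are hypothetical.

```python
import numpy as np

def relative_entropy(p_samples, q_samples, bins=50):
    """Discrete estimate of S_rel = sum_i p_i ln(p_i / q_i) from samples of the
    fine-grained (p) and coarse-grained (q) ensembles on one coordinate."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    mask = (p > 0) & (q > 0)   # ignore empty bins to keep the sum finite
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Hypothetical example: a CG model that slightly mismatches the FP fluctuations
rng = np.random.default_rng(3)
fp = rng.normal(0.0, 1.0, 100000)   # "first-principles" samples
cg = rng.normal(0.0, 1.2, 100000)   # "coarse-grained" samples, broader well
print(f"S_rel = {relative_entropy(fp, cg):.4f} (0 would mean a perfect CG match)")
```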
Wood, J L; Moreton, R B
1978-12-01
1. The conventional, two-electrode method for measuring potential difference across an epithelium is subject to error due to potential gradients caused by current flow in the bathing medium. Mathematical analysis shows that the error in measuring short-circuit current is proportional to the resistivity of the bathing medium and to the separation of the two recording electrodes. It is particularly serious for the insect larval midgut, where the resistivity of the medium is high, and that of the tissue is low. 2. A system has been devised, which uses a third recording electrode to monitor directly the potential gradient in the bathing medium. By suitable electrical connexions, the gradient can be automatically compensated, leaving a residual error which depends on the thickness of the tissue, but not on the electrode separation. Because the thicknesses of most epithelia are smaller than the smallest practical electrode spacing, this error is smaller than that inherent in a two-electrode system. 3. Since voltage-gradients are automatically compensated, it is possible to obtain continuous readings of potential and current. A 'voltage-clamp' circuit is described, which allows the time-course of the short-circuit current to be studied. 4. The three-electrode system has been used to study the larval midgut of Hyalophora cecropia. The average results from five experiments were: initial potential difference (open-circuit): 98+/-11 mV (S.E.M.); short-circuit current at time 60 min: 498+/-160 microA cm-2; 'steady-state' resistance at 60 min: 150+/-26 Ω cm2. The current is equivalent to a net potassium transport of 18.6 mu-equiv cm-2 h-1. 5. The electrical parameters of the midgut change rapidly with time. The potential difference decays with a half-time of about 158 min, the resistance increases with a half-time of about 16 min, and the short-circuit current decays as the sum of two exponential terms, with half-times of about 16 and 158 min respectively. In addition, potential and short-circuit current show transient responses to step changes. 6. The properties of the midgut are compared with those of other transporting epithelia, and their dependence on the degree of folding of the preparation is discussed. Their time-dependence is discussed in the context of changes in potassium content of the tissue, and the implications for measurements depending on the assumption of a steady state are outlined.
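The conversion behind the quoted equivalence (498 microA cm-2 corresponding to about 18.6 mu-equiv cm-2 h-1 of net monovalent ion transport) follows directly from the Faraday constant; a minimal sketch:

```python
F = 96485.0  # C per mole of monovalent charge (Faraday constant)

def current_to_flux_uequiv_per_cm2_h(i_uA_per_cm2, z=1):
    """Convert a short-circuit current density (uA cm-2) into an equivalent
    net ion flux (uequiv cm-2 h-1): flux = I * 3600 / (z * F)."""
    return i_uA_per_cm2 * 1e-6 * 3600.0 / (z * F) * 1e6

# The short-circuit current quoted above
print(f"{current_to_flux_uequiv_per_cm2_h(498.0):.1f} uequiv cm-2 h-1")  # ~18.6
```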
Monitoring gait in multiple sclerosis with novel wearable motion sensors.
Moon, Yaejin; McGinnis, Ryan S; Seagers, Kirsten; Motl, Robert W; Sheth, Nirav; Wright, John A; Ghaffari, Roozbeh; Sosnoff, Jacob J
2017-01-01
Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing to contemporary research-grade measurement devices. A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) was assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Mean accuracy±precision for the BioStampRC was 2±2 steps error for step number, 6±9ms error for stride time and 6±7ms error for step time (0.6-2.6% relative error). Swing time had the least accuracy±precision (25±19ms error, 5±4% relative error) among the parameters. GT3X had the least accuracy±precision (8±14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p<0.01). BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic.
Errors in MR-based attenuation correction for brain imaging with PET/MR scanners
NASA Astrophysics Data System (ADS)
Rota Kops, Elena; Herzog, Hans
2013-02-01
Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled nasal cavity yielded an overestimation in cerebellum of up to 5%. Conclusions: The present error analysis confirms that our template-based attenuation method provides reliable attenuation corrections for PET brain imaging measured in PET/MR scanners.
Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark
2016-01-01
Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
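A compact sketch of two-level (nested) external cross-validation with a label-permutation check, written here with scikit-learn and synthetic data rather than the authors' R/PAMR code; the tuning grid and dataset shape are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Hypothetical high-dimensional data: many predictors, few samples
X, y = make_classification(n_samples=60, n_features=500, n_informative=5, random_state=0)

inner = StratifiedKFold(5, shuffle=True, random_state=1)
outer = StratifiedKFold(5, shuffle=True, random_state=2)

# Inner loop selects the regularization strength; outer loop estimates error,
# so tuning never sees the held-out outer fold (two-level external CV)
model = GridSearchCV(LogisticRegression(penalty="l1", solver="liblinear", max_iter=2000),
                     param_grid={"C": [0.01, 0.1, 1.0, 10.0]}, cv=inner)
acc = cross_val_score(model, X, y, cv=outer)
print(f"two-level CV error rate: {1 - acc.mean():.2f}")

# Permutation check: with shuffled labels the error should sit near the baseline (~0.5)
rng = np.random.default_rng(0)
acc_perm = cross_val_score(model, X, rng.permutation(y), cv=outer)
print(f"permuted-label error rate: {1 - acc_perm.mean():.2f}")
```

With permuted labels the estimated error should sit near the 50% baseline; an estimate well below that would signal optimization or selection bias leaking into the error estimate.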
Frequency domain measurement systems
NASA Technical Reports Server (NTRS)
Eischer, M. C.
1978-01-01
Stable frequency sources and signal processing blocks were characterized by their noise spectra, both discrete and random, in the frequency domain. Conventional measures are outlined, and systems for performing the measurements are described. Broad coverage of system configurations which were found useful is given. Their functioning and areas of application are discussed briefly. Particular attention is given to some of the potential error sources in the measurement procedures, system configurations, double-balanced-mixer-phase-detectors, and application of measuring instruments.
Electrocortical measures of information processing biases in social anxiety disorder: A review.
Harrewijn, Anita; Schmidt, Louis A; Westenberg, P Michiel; Tang, Alva; van der Molen, Melle J W
2017-10-01
Social anxiety disorder (SAD) is characterized by information processing biases, however, their underlying neural mechanisms remain poorly understood. The goal of this review was to give a comprehensive overview of the most frequently studied EEG spectral and event-related potential (ERP) measures in social anxiety during rest, anticipation, stimulus processing, and recovery. A Web of Science search yielded 35 studies reporting on electrocortical measures in individuals with social anxiety or related constructs. Social anxiety was related to increased delta-beta cross-frequency correlation during anticipation and recovery, and information processing biases during early processing of faces (P1) and errors (error-related negativity). These electrocortical measures are discussed in relation to the persistent cycle of information processing biases maintaining SAD. Future research should further investigate the mechanisms of this persistent cycle and study the utility of electrocortical measures in early detection, prevention, treatment and endophenotype research. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nishiyama, Randall T.; Bedard, Alfred J., Jr.
1991-09-01
There are many areas of need for accurate measurements of atmospheric static pressure. These include observations of surface meteorology, airport altimeter settings, pressure distributions around buildings, moving measurement platforms, as well as basic measurements of fluctuating pressures in turbulence. Most of these applications require long-term observations in adverse environments (e.g., rain, dust, or snow). Currently, many pressure measurements are made, of necessity, within buildings, thus involving potential errors of several millibars in mean pressure during moderate winds, accompanied by large fluctuating pressures induced by the structure. In response to these needs, a 'Quad-Disk' pressure probe for continuous, outdoor monitoring purposes was designed which is inherently weather-protected. This Quad-Disk probe has the desirable features of omnidirectional response and small error in pitch. A review of past static pressure probes contrasts design approaches and capabilities.
Optimizing Hybrid Metrology: Rigorous Implementation of Bayesian and Combined Regression.
Henn, Mark-Alexander; Silver, Richard M; Villarrubia, John S; Zhang, Nien Fan; Zhou, Hui; Barnes, Bryan M; Ming, Bin; Vladár, András E
2015-01-01
Hybrid metrology, e.g., the combination of several measurement techniques to determine critical dimensions, is an increasingly important approach to meet the needs of the semiconductor industry. A proper use of hybrid metrology may yield not only more reliable estimates for the quantitative characterization of 3-D structures but also a more realistic estimation of the corresponding uncertainties. Recent developments at the National Institute of Standards and Technology (NIST) feature the combination of optical critical dimension (OCD) measurements and scanning electron microscope (SEM) results. The hybrid methodology offers the potential to make measurements of essential 3-D attributes that may not be otherwise feasible. However, combining techniques gives rise to essential challenges in error analysis and comparing results from different instrument models, especially the effect of systematic and highly correlated errors in the measurement on the χ² function that is minimized. Both hypothetical examples and measurement data are used to illustrate solutions to these challenges.
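As a generic illustration of the correlated-error issue highlighted above (this is a sketch, not the NIST implementation), the χ² that is minimized can be built from the full error covariance rather than from per-point variances; the linear model, noise level, and correlation below are invented:

```python
import numpy as np

def chi2(params, x, y, cov_inv):
    a, b = params
    r = y - (a * x + b)                 # residuals of a simple linear model
    return r @ cov_inv @ r              # generalized (correlated-error) chi-square

x = np.linspace(0.0, 1.0, 20)
y_true = 2.0 * x + 0.5
sigma, rho = 0.05, 0.8                  # strong correlation between neighbouring points
cov = sigma**2 * rho ** np.abs(np.subtract.outer(np.arange(20), np.arange(20)))
y = np.random.default_rng(1).multivariate_normal(y_true, cov)

cov_inv = np.linalg.inv(cov)
print("correlated chi2:", chi2((2.0, 0.5), x, y, cov_inv))
print("naive chi2     :", chi2((2.0, 0.5), x, y, np.eye(20) / sigma**2))
# Ignoring the off-diagonal terms changes the chi-square surface, and hence the
# best-fit parameters and their reported uncertainties.
```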
Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie
2014-01-01
This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).
Nurses' role in medication safety.
Choo, Janet; Hutchinson, Alison; Bucknall, Tracey
2010-10-01
To explore the nurse's role in the process of medication management and identify the challenges associated with safe medication management in contemporary clinical practice. Medication errors have been a long-standing factor affecting consumer safety. The nursing profession has been identified as essential to the promotion of patient safety. A review was conducted of the literature on medication errors and the use of electronic prescribing to reduce medication errors. Medication management requires a multidisciplinary approach, and interdisciplinary communication is essential to reduce medication errors. Information technologies can help to reduce some medication errors through eradication of transcription and dosing errors. Nurses must play a major role in the design of computerized medication systems to ensure a smooth transition to such a system. The nurses' roles in medication management cannot be over-emphasized. This is particularly true when designing a computerized medication system. The adoption of decision-making safety measures that parallel aviation industry safety procedures can provide some strategies to prevent medication errors. Innovations in information technology offer potential mechanisms to avert adverse events in medication management for nurses. © 2010 The Authors. Journal compilation © 2010 Blackwell Publishing Ltd.
Errors in the Extra-Analytical Phases of Clinical Chemistry Laboratory Testing.
Zemlin, Annalise E
2018-04-01
The total testing process consists of various phases from the pre-preanalytical to the post-postanalytical phase, the so-called brain-to-brain loop. With improvements in analytical techniques and efficient quality control programmes, most laboratory errors now occur in the extra-analytical phases. There has been recent interest in these errors, with numerous publications highlighting their effect on service delivery, patient care and cost. This interest has led to the formation of various working groups whose mission is to develop standardized quality indicators which can be used to measure the performance of these phases of service. This will eventually lead to the development of external quality assessment schemes to monitor these phases in agreement with ISO 15189:2012 recommendations. This review focuses on potential errors in the extra-analytical phases of clinical chemistry laboratory testing, some of the studies performed to assess the severity and impact of these errors, and the processes that are in place to address these errors. The aim of this review is to highlight the importance of these errors for the requesting clinician.
Proton upsets in LSI memories in space
NASA Technical Reports Server (NTRS)
Mcnulty, P. J.; Wyatt, R. C.; Filz, R. C.; Rothwell, P. L.; Farrell, G. E.
1980-01-01
Two types of large scale integrated dynamic random access memory devices were tested and found to be subject to soft errors when exposed to protons incident at energies between 18 and 130 MeV. These errors are shown to differ significantly from those induced in the same devices by alphas from an Am-241 source. There is considerable variation among devices in their sensitivity to proton-induced soft errors, even among devices of the same type. For protons incident at 130 MeV, the soft error cross sections measured in these experiments varied from 10⁻⁸ to 10⁻⁶ sq cm/proton. For individual devices, however, the soft error cross section consistently increased with beam energy from 18 to 130 MeV. Analysis indicates that the soft errors induced by energetic protons result from spallation interactions between the incident protons and the nuclei of the atoms comprising the device. Because energetic protons are the most numerous of both the galactic and solar cosmic rays and form the inner radiation belt, proton-induced soft errors have potentially serious implications for many electronic systems flown in space.
Spectral estimates of net radiation and soil heat flux
Daughtry, C.S.T.; Kustas, William P.; Moran, M.S.; Pinter, P. J.; Jackson, R. D.; Brown, P.W.; Nichols, W.D.; Gay, L.W.
1990-01-01
Conventional methods of measuring surface energy balance are point measurements and represent only a small area. Remote sensing offers a potential means of measuring outgoing fluxes over large areas at the spatial resolution of the sensor. The objective of this study was to estimate net radiation (Rn) and soil heat flux (G) using remotely sensed multispectral data acquired from an aircraft over large agricultural fields. Ground-based instruments measured Rn and G at nine locations along the flight lines. Incoming fluxes were also measured by ground-based instruments. Outgoing fluxes were estimated using remotely sensed data. Remote Rn, estimated as the algebraic sum of incoming and outgoing fluxes, slightly underestimated Rn measured by the ground-based net radiometers. The mean absolute errors for remote Rn minus measured Rn were less than 7%. Remote G, estimated as a function of a spectral vegetation index and remote Rn, slightly overestimated measured G; however, the mean absolute error for remote G was 13%. Some of the differences between measured and remote values of Rn and G are associated with differences in instrument designs and measurement techniques. The root mean square error for available energy (Rn - G) was 12%. Thus, methods using both ground-based and remotely sensed data can provide reliable estimates of the available energy which can be partitioned into sensible and latent heat under nonadvective conditions. © 1990.
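A minimal sketch of the two remote estimates described above, with hypothetical coefficients and example flux values rather than the study's regressions and measurements:

```python
import numpy as np

def net_radiation(sw_in, albedo, lw_in, lw_out):
    # Rn as the algebraic sum of incoming and outgoing shortwave and longwave fluxes
    return (1.0 - albedo) * sw_in + lw_in - lw_out          # W m-2

def soil_heat_flux(rn, ndvi, a=0.35, b=0.25):
    # G as a fraction of Rn that decreases with a vegetation index (a, b hypothetical)
    return (a - b * ndvi) * rn

sw_in, lw_in, lw_out = 850.0, 380.0, 460.0                   # example fluxes, W m-2
albedo, ndvi = 0.22, 0.65                                    # remotely sensed quantities
rn = net_radiation(sw_in, albedo, lw_in, lw_out)
g = soil_heat_flux(rn, ndvi)

rn_measured, g_measured = 600.0, 110.0                       # made-up ground-based values
print(f"remote Rn = {rn:.0f} W/m2, |error| = {abs(rn - rn_measured) / rn_measured:.1%}")
print(f"remote G  = {g:.0f} W/m2, |error| = {abs(g - g_measured) / g_measured:.1%}")
```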
IMRT QA: Selecting gamma criteria based on error detection sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steers, Jennifer M.; Fraass, Benedick A., E-mail: benedick.fraass@cshs.org
Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. Conclusions: We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
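As a toy illustration of the error curve idea (a simplified 1-D global gamma, not the authors' ArcCHECK® analysis), one can compute passing rates against calculations with induced MU-scaling errors; the profile and criteria below are invented:

```python
import numpy as np

def gamma_pass_rate(meas, calc, x, percent=3.0, dta_mm=3.0, threshold=0.10):
    dmax = meas.max()
    keep = meas >= threshold * dmax                       # low-dose threshold
    dd = (meas[:, None] - calc[None, :]) / (percent / 100.0 * dmax)   # dose axis
    dx = (x[:, None] - x[None, :]) / dta_mm                           # distance axis
    gamma = np.sqrt(dd**2 + dx**2).min(axis=1)            # min over calculation points
    return (gamma[keep] <= 1.0).mean()

x = np.linspace(-50.0, 50.0, 201)                         # position, mm
meas = np.exp(-x**2 / (2.0 * 20.0**2))                    # idealized measured profile
for err in [0.0, 0.02, 0.05, 0.10]:                       # induced MU-scaling errors
    calc = (1.0 + err) * meas
    print(f"{err:>4.0%} MU error -> passing rate {gamma_pass_rate(meas, calc, x):.1%}")
# Plotting passing rate against error magnitude yields the kind of "error curve"
# described above, showing which error sizes a given criterion fails to flag.
```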
NASA Astrophysics Data System (ADS)
Kawa, S. R.; Baker, D. F.; Chatterjee, A.; Crowell, S.
2016-12-01
The measurement of atmospheric greenhouse gases (GHG), principally CO2 and CH4, from space using active (lidar) sensing techniques has several potentially significant advantages in comparison to missions using passive instrument approaches. A great deal of progress has been made in development of the active methods since the US National Academy of Sciences (NAS) 2007 Decadal Survey recommended the ASCENDS mission (Active Sensing of Carbon Emissions, Nights, Days, and Seasons) for NASA's next generation CO2 observing system. Active GHG missions remain in consideration by the current NAS Decadal Survey for Earth Science 2017. In this presentation, we update the measurement characteristics expected for active GHG sensing, test how these measurements will enhance our ability to quantify GHG surface fluxes, and examine the potential role of active sensing to address carbon cycle issues as required for confident projection of carbon-climate interactions. Over the past decade, laser CO2 instrument concepts, retrieval approaches, and measurement techniques have matured significantly, driven by technology advances and by analysis of data from airborne simulators. Performance simulations updated to match the latest developments show substantially lower random errors, better spatial resolution, and more information content for global XCO2 data than just a few years ago. Observing System Simulation Experiments using global flux inversion models show corresponding improvements in resolving surface fluxes and reducing flux uncertainties for the expected lidar data. Simulations including prospective systematic (bias) errors, which are expected to be lesser for the lidar system compared to passive measurements, provide guidance for instrument design requirements. We will comment on the impact of errors in knowledge of the atmospheric state including the need for coincident measurements of O2 column in order to normalize the column abundances to dry air mole fraction. We will also comment on the potential impact of future active missions for CH4. The results indicate that active systems will provide GHG measurements of high quality and spatial sampling that will contribute substantially to knowledge of carbon flux distributions and their dependence on underlying physical processes in critical regions.
Schiffinger, Michael; Latzke, Markus; Steyrer, Johannes
2016-01-01
Safety climate (SC) and more recently patient engagement (PE) have been identified as potential determinants of patient safety, but conceptual and empirical studies combining both are lacking. On the basis of extant theories and concepts in safety research, this study investigates the effect of PE in conjunction with SC on perceived error occurrence (pEO) in hospitals, controlling for various staff-, patient-, and hospital-related variables as well as the amount of stress and (lack of) organizational support experienced by staff. Besides the main effects of PE and SC on error occurrence, their interaction is examined, too. In 66 hospital units, 4,345 patients assessed the degree of PE, and 811 staff assessed SC and pEO. PE was measured with a new instrument, capturing its core elements according to a recent literature review: Information Provision (both active and passive) and Activation and Collaboration. SC and pEO were measured with validated German-language questionnaires. Besides standard regression and correlational analyses, partial least squares analysis was employed to model the main and interaction effects of PE and SC on pEO, also controlling for stress and (lack of) support perceived by staff, various staff and patient attributes, and potential single-source bias. Both PE and SC are associated with lower pEO, to a similar extent. The joint effect of these predictors suggests a substitution rather than mutually reinforcing interaction. Accounting for control variables and/or potential single-source bias slightly attenuates some effects without altering the results. Ignoring PE potentially amounts to forgoing a potential source of additional safety. On the other hand, despite the abovementioned substitution effect and conjectures of SC being inert, PE should not be considered as a replacement for SC.
NASA Astrophysics Data System (ADS)
Burton, S. P.; Liu, X.; Chemyakin, E.; Hostetler, C. A.; Stamnes, S.; Moore, R.; Sawamura, P.; Ferrare, R. A.; Knobelspiesse, K. D.
2015-12-01
There is considerable interest in retrieving aerosol effective radius, number concentration and refractive index from lidar measurements of extinction and backscatter at several wavelengths. The 3 backscatter + 2 extinction (3β+2α) combination is particularly important since the planned NASA Aerosol-Clouds-Ecosystem (ACE) mission recommends this combination of measurements. The 2nd-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β+2α measurements since 2012. Here we develop a deeper understanding of the information content and sensitivities of the 3β+2α system in terms of aerosol microphysical parameters of interest. We determine best case results using a retrieval-free methodology. We calculate information content and uncertainty metrics from Optimal Estimation techniques using only a simplified forward model look-up table, with no explicit inversion. Simplifications include spherical particles, mono-modal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, our results are applicable as a best case for all existing retrievals. Retrieval-dependent errors due to mismatch between the assumptions and true atmospheric aerosols are not included. The sensitivity metrics allow for identifying (1) information content of the measurements versus a priori information; (2) best-case error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. These results suggest that even in the best case, this retrieval system is underdetermined. Recommendations are given for addressing cross-talk between effective radius and number concentration. A potential solution to the under-determination problem is a combined active (lidar) and passive (polarimeter) retrieval, which is the subject of a new funded NASA project by our team.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weston, Louise Marie
2007-09-01
A recent report on criticality accidents in nuclear facilities indicates that human error played a major role in a significant number of incidents with serious consequences and that some of these human errors may be related to the emotional state of the individual. A pre-shift test to detect a deleterious emotional state could reduce the occurrence of such errors in critical operations. The effectiveness of pre-shift testing is a challenge because of the need to gather predictive data in a relatively short test period and the potential occurrence of learning effects due to a requirement for frequent testing. This report reviews the different types of reliability and validity methods and testing and statistical analysis procedures to validate measures of emotional state. The ultimate value of a validation study depends upon the percentage of human errors in critical operations that are due to the emotional state of the individual. A review of the literature to identify the most promising predictors of emotional state for this application is highly recommended.
NASA Technical Reports Server (NTRS)
Strangman, Gary; Franceschini, Maria Angela; Boas, David A.; Sutton, J. P. (Principal Investigator)
2003-01-01
Near-infrared spectroscopy (NIRS) can be used to noninvasively measure changes in the concentrations of oxy- and deoxyhemoglobin in tissue. We have previously shown that while global changes can be reliably measured, focal changes can produce erroneous estimates of concentration changes (NeuroImage 13 (2001), 76). Here, we describe four separate sources for systematic error in the calculation of focal hemoglobin changes from NIRS data and use experimental methods and Monte Carlo simulations to examine the importance and mitigation methods of each. The sources of error are: (1) the absolute magnitudes and relative differences in pathlength factors as a function of wavelength, (2) the location and spatial extent of the absorption change with respect to the optical probe, (3) possible differences in the spatial distribution of hemoglobin species, and (4) the potential for simultaneous monitoring of multiple regions of activation. We found wavelength selection and optode placement to be important variables in minimizing such errors, and our findings indicate that appropriate experimental procedures could reduce each of these errors to a small fraction (<10%) of the observed concentration changes.
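For context, the error sources above perturb the standard two-wavelength modified Beer-Lambert calculation; a minimal sketch with placeholder extinction coefficients and differential pathlength factors (not the paper's values) is:

```python
import numpy as np

# rows: wavelengths (e.g. ~690 nm, ~830 nm); cols: [HbO, HbR] extinction, 1/(mM*cm)
# (placeholder values chosen only so HbR dominates at the shorter wavelength)
E = np.array([[0.35, 2.10],
              [2.30, 0.70]])
dpf = np.array([6.0, 5.5])          # assumed differential pathlength factors
d = 3.0                             # source-detector separation, cm

def hb_changes(delta_od):
    """Convert optical-density changes at two wavelengths to (dHbO, dHbR) in mM."""
    L = dpf * d                     # effective pathlength per wavelength
    return np.linalg.solve(E, delta_od / L)

d_hbo, d_hbr = hb_changes(np.array([0.010, 0.018]))
print(f"dHbO = {d_hbo * 1000:.3f} uM, dHbR = {d_hbr * 1000:.3f} uM")
# Errors in the assumed pathlength factors, or a focal absorption change that is
# poorly sampled by the probe, enter directly through L and bias both estimates.
```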
Probability of misclassifying biological elements in surface waters.
Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna
2017-11-24
Measurement uncertainties are inherent to assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassification of ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by M-C models, were used to identify cases in which values of any of the four biological indices lay outside of the "true" water body class, i.e., outside the class assigned from the actual physical measurements. Fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassification of the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of water body status reported to the EC by the EU member countries according to WFD. The method can be readily applied also in risk assessment of water management decisions before adopting the status dependent corrective actions.
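A minimal sketch of the Monte-Carlo idea, with invented class boundaries, index value, and error level rather than the actual WFD metrics:

```python
import numpy as np

def misclassification_probability(value, sigma, boundaries, n=100_000, rng=None):
    rng = rng or np.random.default_rng(0)
    true_class = np.searchsorted(boundaries, value)        # class from the actual measurement
    noisy = value + rng.normal(0.0, sigma, size=n)         # error-prone replicates
    return np.mean(np.searchsorted(boundaries, noisy) != true_class)

boundaries = [0.2, 0.4, 0.6, 0.8]                          # hypothetical class limits
print(misclassification_probability(0.63, sigma=0.05, boundaries=boundaries))
# An index value close to a class boundary, or a large measurement error sigma
# (as reported for the MIR index above), yields a high misclassification probability.
```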
NASA Astrophysics Data System (ADS)
Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia
Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord to a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image variant interior orientation, but also deformations in the sensor domain of the cameras, showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure indicating at the same time the presence of image invariant error in the sensor domain. Overall, calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.
NASA Technical Reports Server (NTRS)
Balla, R. Jeffrey; Miller, Corey A.
2008-01-01
This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an AutoRegressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced approx. 0.4% and frequency errors are reduced by a factor of approx. 20 at 303 K to approx. 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low density gases.
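A minimal sketch of an autoregressive (AR(2)) frequency estimate for a single damped sinusoid, with illustrative parameters rather than LITA data:

```python
import numpy as np

fs = 50e6                                   # sample rate, Hz (hypothetical)
f0, tau = 2.0e6, 4e-6                       # true frequency and decay time
t = np.arange(512) / fs
rng = np.random.default_rng(0)
x = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t) + 0.01 * rng.standard_normal(t.size)

# Fit x[n] = a1*x[n-1] + a2*x[n-2] by least squares, then read the frequency
# from the complex pole angle of the AR(2) model.
A = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]
poles = np.roots([1.0, -a1, -a2])
f_est = np.abs(np.angle(poles[0])) * fs / (2 * np.pi)
print(f"true {f0 / 1e6:.3f} MHz, AR(2) estimate {f_est / 1e6:.3f} MHz")
# A high sample rate relative to f0 and a good signal-to-noise ratio keep the
# pole-angle estimate close to the true frequency, consistent with the precision
# discussion above.
```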
Continued investigation of potential application of Omega navigation to civil aviation
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.
1978-01-01
Major attention is given to an analysis of receiver repeatability in measuring OMEGA phase data. Repeatability is defined as the ability of two like receivers which are co-located to achieve the same LOP phase readings. Specific data analysis is presented. A propagation model is described which has been used in the analysis of propagation anomalies. Composite OMEGA analysis is presented in terms of carrier phase correlation analysis and the determination of carrier phase weighting coefficients for minimizing composite phase variation. Differential OMEGA error analysis is presented for receiver separations. Three-frequency analysis includes LOP error and position error based on three and four OMEGA transmissions. Results of phase amplitude correlation studies are presented.
Performance Evaluation of Three Blood Glucose Monitoring Systems Using ISO 15197
Bedini, José Luis; Wallace, Jane F.; Pardo, Scott; Petruschke, Thorsten
2015-01-01
Background: Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients’ health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Methods: Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. Results: All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to reference values compared to the other 2 BGMS. Insulin dosing errors were lower for the Contour Next USB than for the other systems. Conclusions: All BGMS fulfilled ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. PMID:26445813
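For reference, a minimal sketch of two of the summary statistics used above, MARD and the fraction of readings inside the ISO 15197:2013 accuracy limits, computed on invented data:

```python
import numpy as np

ref = np.array([65.0, 92.0, 110.0, 180.0, 250.0])        # reference (hexokinase), mg/dL
bgms = np.array([70.0, 88.0, 118.0, 171.0, 262.0])        # meter readings, mg/dL (made up)

# Mean absolute relative difference against the reference method
mard = np.mean(np.abs(bgms - ref) / ref) * 100.0

# ISO 15197:2013 limits: within 15 mg/dL of the reference below 100 mg/dL,
# within 15% at or above 100 mg/dL; at least 95% of results must comply.
within = np.where(ref < 100.0,
                  np.abs(bgms - ref) <= 15.0,
                  np.abs(bgms - ref) <= 0.15 * ref)
print(f"MARD = {mard:.1f}%, within ISO limits: {within.mean():.0%}")
```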
NASA Astrophysics Data System (ADS)
Fisher, L. E.; Lynch, K. A.; Fernandes, P. A.; Bekkeng, T. A.; Moen, J.; Zettergren, M.; Miceli, R. J.; Powell, S.; Lessard, M. R.; Horak, P.
2016-04-01
The interpretation of planar retarding potential analyzers (RPA) during ionospheric sounding rocket missions requires modeling the thick 3D plasma sheath. This paper overviews the theory of RPAs with an emphasis placed on the impact of the sheath on current-voltage (I-V) curves. It then describes the Petite Ion Probe (PIP) which has been designed to function in this difficult regime. The data analysis procedure for this instrument is discussed in detail. Data analysis begins by modeling the sheath with the Spacecraft Plasma Interaction System (SPIS), a particle-in-cell code. Test particles are traced through the sheath and detector to determine the detector's response. A training set is constructed from these simulated curves for a support vector regression analysis which relates the properties of the I-V curve to the properties of the plasma. The first in situ use of the PIPs occurred during the MICA sounding rocket mission which launched from Poker Flat, Alaska in February of 2012. These data are presented as a case study, providing valuable cross-instrument comparisons. A heritage top-hat thermal ion electrostatic analyzer, called the HT, and a multi-needle Langmuir probe have been used to validate both the PIPs and the data analysis method. Compared to the HT, the PIP ion temperature measurements agree with a root-mean-square error of 0.023 eV. These two instruments agree on the parallel-to-B plasma flow velocity with a root-mean-square error of 130 m/s. The PIP with its field of view aligned perpendicular-to-B provided a density measurement with an 11% error compared to the multi-needle Langmuir Probe. Higher error in the other PIP's density measurement is likely due to simplifications in the SPIS model geometry.
Digital Holography for in Situ Real-Time Measurement of Plasma-Facing-Component Erosion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas Jr., C. E.; Granstedt, E. M.; Biewer, Theodore M
2014-01-01
In situ, real time measurement of net plasma-facing-component (PFC) erosion/deposition in a real plasma device is challenging due to the need for good spatial and temporal resolution, sufficient sensitivity, and immunity to fringe-jump errors. Design of a high-sensitivity, potentially high-speed, dual-wavelength CO2 laser digital holography system (nominally immune to fringe jumps) for PFC erosion measurement is discussed.
Sun Glint and Sea Surface Salinity Remote Sensing
NASA Technical Reports Server (NTRS)
Dinnat, Emmanuel P.; LeVine, David M.
2007-01-01
A new mission in space, called Aquarius/SAC-D, is being built to measure the salinity of the world's oceans. Salinity is an important parameter for understanding movement of the ocean water. This circulation results in the transportation of heat and is important for understanding climate and climate change. Measuring salinity from space requires precise instruments and a careful accounting for potential sources of error. One of these sources of error is radiation from the sun that is reflected from the ocean surface to the sensor in space. This paper examines this reflected radiation and presents an advanced model for describing this effect that includes the effects of ocean waves on the reflection.
Kimel-Naor, Shani; Abboud, Shimon; Arad, Marina
2016-08-01
Osteoporosis is defined as deterioration of bone microstructure resulting in a decrease in bone strength. Measured bone mineral density (BMD) constitutes the main tool for osteoporosis diagnosis and management, and defines a patient's fracture risk. In the present study, a parametric electrical impedance tomography (pEIT) method was examined for monitoring BMD, using a computerized simulation model and preliminary real measurements. A numerical solver was developed to simulate surface potentials measured over a 3D computerized pelvis model. Varying cortical and cancellous BMD were simulated by changing bone conductivity and permittivity. Up to 35% and 16% changes were found in the real and imaginary parts of the calculated potential, respectively, as BMD changed from 100% (normal) to 60% (osteoporosis). Negligible BMD relative error was obtained with SNR > 60 dB. Position-change errors indicate that, for long-term monitoring, measurements should be taken at the same geometrical configuration with great accuracy. The numerical simulations were compared to actual measurements acquired from a healthy male subject using a five-electrode belt bioimpedance device. The results suggest that pEIT may provide an inexpensive, easy-to-use tool for frequent monitoring of BMD in small clinics during pharmacological treatment, as a complementary method to the DEXA test. Copyright © 2016. Published by Elsevier Ltd.
Mullan, F; Bartlett, D; Austin, R S
2017-06-01
To investigate the measurement performance of a chromatic confocal profilometer for quantification of the surface texture of natural human enamel in vitro. Contributions to the measurement uncertainty from all potential sources of measurement error using a chromatic confocal profilometer and surface metrology software were quantified using a series of surface metrology calibration artifacts and pre-worn enamel samples. The 3D surface texture analysis protocol was optimized across 0.04 mm² of natural and unpolished enamel undergoing dietary acid erosion (pH 3.2, titratable acidity 41.3 mmol OH/L). Flatness deviations due to the x, y stage mechanical movement were the major contribution to the measurement uncertainty, with maximum Sz flatness errors of 0.49 μm, whereas measurement noise, non-linearities in x, y, z, and enamel sample dimensional instability contributed minimal errors. The measurement errors were propagated into an uncertainty budget following a Type B uncertainty evaluation in order to calculate the standard combined uncertainty (uc), which was ±0.28 μm. Statistically significant increases in the median (IQR) roughness (Sa) of the polished samples occurred after 15 (+0.17 (0.13) μm), 30 (+0.12 (0.09) μm) and 45 (+0.18 (0.15) μm) min of erosion (P<0.001 vs. baseline). In contrast, natural unpolished enamel samples revealed a statistically significant decrease in Sa roughness of -0.14 (0.34) μm only after 45 min of erosion (P<0.05 vs. baseline). The main contribution to measurement uncertainty using chromatic confocal profilometry was from flatness deviations; however, by optimizing measurement protocols the profilometer successfully characterized surface texture changes in enamel caused by erosive wear in vitro. Copyright © 2017 The Academy of Dental Materials. All rights reserved.
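A minimal sketch of a Type B uncertainty budget of the kind described above; the conversion of each source to a standard uncertainty is illustrative (not the paper's actual budget), and the components are combined in quadrature:

```python
import math

# Illustrative standard uncertainties, in micrometres. The flatness term treats
# the 0.49 um maximum flatness error as the half-width of a rectangular
# distribution, which is an assumption made here for the sketch only.
components_um = {
    "stage flatness (rectangular, a = 0.49 um)": 0.49 / math.sqrt(3),
    "measurement noise": 0.02,
    "x, y, z non-linearity": 0.03,
    "sample dimensional instability": 0.05,
}
u_c = math.sqrt(sum(u ** 2 for u in components_um.values()))
for name, u in components_um.items():
    print(f"{name:42s} u = {u:.3f} um")
print(f"standard combined uncertainty u_c = {u_c:.2f} um")
# An expanded uncertainty at roughly 95% coverage would be U = k * u_c with k = 2.
```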
Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A
2010-07-01
Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.
Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R
2002-06-07
We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006)--an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
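A conceptual sketch of the effect described above, using an invented population model: attributing half of the observed log-scale variance to measurement error markedly lowers the estimated risk of an 80% decline:

```python
import numpy as np

rng = np.random.default_rng(0)
total_var = 0.5                        # observed log-scale variance (made up)
years, n_sim = 15, 20_000
threshold = np.log(0.2)                # an 80% decline relative to equilibrium

def decline_risk(process_var):
    # iid log-scale fluctuations around equilibrium; risk = P(any year below threshold)
    logs = rng.normal(0.0, np.sqrt(process_var), size=(n_sim, years))
    return np.mean((logs < threshold).any(axis=1))

print("risk, all variability treated as natural :", decline_risk(total_var))
print("risk, 50% attributed to measurement error:", decline_risk(0.5 * total_var))
```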
Adverse Drug Events caused by Serious Medication Administration Errors
Sawarkar, Abhivyakti; Keohane, Carol A.; Maviglia, Saverio; Gandhi, Tejal K; Poon, Eric G
2013-01-01
OBJECTIVE To determine how often serious or life-threatening medication administration errors with the potential to cause patient harm (or potential adverse drug events) result in actual patient harm (or adverse drug events (ADEs)) in the hospital setting. DESIGN Retrospective chart review of clinical events that transpired following observed medication administration errors. BACKGROUND Medication errors are common at the medication administration stage for hospitalized patients. While many of these errors are considered capable of causing patient harm, it is not clear how often patients are actually harmed by these errors. METHODS In a previous study where 14,041 medication administrations in an acute-care hospital were directly observed, investigators discovered 1271 medication administration errors, of which 133 had the potential to cause serious or life-threatening harm to patients and were considered serious or life-threatening potential ADEs. In the current study, clinical reviewers conducted detailed chart reviews of cases where a serious or life-threatening potential ADE occurred to determine if an actual ADE developed following the potential ADE. Reviewers further assessed the severity of the ADE and attribution to the administration error. RESULTS Ten (7.5% [95% C.I. 6.98, 8.01]) actual adverse drug events or ADEs resulted from the 133 serious and life-threatening potential ADEs, of which 6 resulted in significant, three in serious, and one life threatening injury. Therefore 4 (3% [95% C.I. 2.12, 3.6]) serious and life threatening potential ADEs led to serious or life threatening ADEs. Half of the ten actual ADEs were caused by dosage or monitoring errors for anti-hypertensives. The life threatening ADE was caused by an error that was both a transcription and a timing error. CONCLUSION Potential ADEs at the medication administration stage can cause serious patient harm. Given previous estimates of serious or life-threatening potential ADE of 1.33 per 100 medication doses administered, in a hospital where 6 million doses are administered per year, about 4000 preventable ADEs would be attributable to medication administration errors annually. PMID:22791691
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
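A related, standard estimator that also uses a variance ratio is Deming regression; the sketch below is illustrative and not necessarily the report's modified least squares, but it shows how the ordinary least squares slope is attenuated when the factor contains measurement error and how a variance-ratio fit recovers it:

```python
import numpy as np

def deming_slope(x, y, delta):
    # delta = var(response error) / var(factor measurement error)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return ((syy - delta * sxx)
            + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)

rng = np.random.default_rng(0)
x_true = np.linspace(0.0, 10.0, 50)
x = x_true + rng.normal(0.0, 1.5, 50)          # factor observed with measurement error
y = 2.0 * x_true + 1.0 + rng.normal(0.0, 1.5, 50)

ols_slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
print(f"OLS slope (attenuated)   : {ols_slope:.2f}")
print(f"Deming slope (delta = 1) : {deming_slope(x, y, 1.0):.2f}")   # true slope is 2
```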
Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, E. M. C.; Reu, P. L.
“Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. We present that there are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. In conclusion, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.
Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves
Jones, E. M. C.; Reu, P. L.
2017-11-28
“Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. We present that there are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. In conclusion, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.
Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G
2014-10-01
Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Typpo, Katri V.; Tcharmtchi, M. Hossein; Thomas, Eric J.; Kelly, P. Adam; Castillo, Leticia D.; Singh, Hardeep
2011-01-01
Objective: Resident duty-hour regulations potentially shift workload from resident to attending physicians. We sought to understand how current or future regulatory changes might impact safety in academic pediatric and neonatal intensive care units (ICUs). Design: Web-based survey. Setting: US academic pediatric and neonatal ICUs. Subjects: Attending pediatric and neonatal intensivists. Interventions: We evaluated perceptions on four ICU safety-related risk measures potentially affected by current duty-hour regulations: 1) Attending physician and resident fatigue, 2) Attending physician work-load, 3) Errors (self-reported rates by attending physicians or perceived resident error rates), and 4) Safety culture. We also evaluated perceptions of how these risks would change with further duty hour restrictions. Measurements and Main Results: We administered our survey between February and April 2010 to 688 eligible physicians, of which 360 (52.3%) responded. Most believed that resident error rates were unchanged or worse (91.9%) and safety culture was unchanged or worse (84.4%) with current duty-hour regulations. Of respondents, 61.9% believed their own work-hours providing direct patient care increased and 55.8% believed they were more fatigued while providing direct patient care. Most (85.3%) perceived no increase in their own error rates currently, but in the scenario of further reduction in resident duty-hours, over half (53.3%) believed that safety culture would worsen and a significant proportion (40.3%) believed that their own error rates would increase. Conclusions: Pediatric intensivists do not perceive improved patient safety from current resident duty hour restrictions. Policies to further restrict resident duty hours should consider unintended consequences of worsening certain aspects of ICU safety. PMID:22614570
Pressure Probe and Isopiestic Psychrometer Measure Similar Turgor 1
Nonami, Hiroshi; Boyer, John S.; Steudle, Ernst
1987-01-01
Turgor measured with a miniature pressure probe was compared to that measured with an isopiestic thermocouple psychrometer in mature regions of soybean (Glycine max [L.] Merr.) stems. The probe measured turgor directly in cells of intact stems whereas the psychrometer measured the water potential and osmotic potential of excised stem segments and turgor was calculated by difference. When care was taken to prevent dehydration when working with the pressure probe, and diffusive resistance and dilution errors with the psychrometer, both methods gave similar values of turgor whether the plants were dehydrating or rehydrating. This finding, together with the previously demonstrated similarity in turgor measured with the isopiestic psychrometer and a pressure chamber, indicates that the pressure probe provides accurate measurements of turgor despite the need to penetrate the cell. On the other hand, it suggests that as long as precautions are taken to obtain accurate values for the water potential and osmotic potential, turgor can be determined by isopiestic psychrometry in tissues not accessible to the pressure probe for physical reasons. PMID:16665293
Pressure probe and isopiestic psychrometer measure similar turgor.
Nonami, H; Boyer, J S; Steudle, E
1987-03-01
Turgor measured with a miniature pressure probe was compared to that measured with an isopiestic thermocouple psychrometer in mature regions of soybean (Glycine max [L.] Merr.) stems. The probe measured turgor directly in cells of intact stems whereas the psychrometer measured the water potential and osmotic potential of excised stem segments and turgor was calculated by difference. When care was taken to prevent dehydration when working with the pressure probe, and diffusive resistance and dilution errors with the psychrometer, both methods gave similar values of turgor whether the plants were dehydrating or rehydrating. This finding, together with the previously demonstrated similarity in turgor measured with the isopiestic psychrometer and a pressure chamber, indicates that the pressure probe provides accurate measurements of turgor despite the need to penetrate the cell. On the other hand, it suggests that as long as precautions are taken to obtain accurate values for the water potential and osmotic potential, turgor can be determined by isopiestic psychrometry in tissues not accessible to the pressure probe for physical reasons.
Kim, Matthew H; Marulis, Loren M; Grammer, Jennie K; Morrison, Frederick J; Gehring, William J
2017-03-01
Motivational beliefs and values influence how children approach challenging activities. The current study explored motivational processes from an expectancy-value theory framework by studying children's mistakes and their responses to them by focusing on two event-related potential (ERP) components: the error-related negativity (ERN) and the error positivity (Pe). Motivation was assessed using a child-friendly challenge puzzle task and a brief interview measure prior to ERP testing. Data from 50 4- to 6-year-old children revealed that greater perceived competence beliefs were related to a larger Pe, whereas stronger intrinsic task value beliefs were associated with a smaller Pe. Motivation was unrelated to the ERN. Individual differences in early motivational processes may reflect electrophysiological activity related to conscious error awareness. Copyright © 2016 Elsevier Inc. All rights reserved.
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
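A minimal sketch of the classical-error attenuation and its method-of-moments correction mentioned above, ignoring the autocorrelation, Berkson components, and confounders that the full approach handles; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x_true = rng.normal(0.0, 1.0, n)                   # true exposure (e.g. particle level)
sigma_u = 0.8                                      # classical measurement-error SD
x_obs = x_true + rng.normal(0.0, sigma_u, n)       # error-prone exposure
y = 0.5 * x_true + rng.normal(0.0, 1.0, n)         # health outcome (e.g. heart rate)

beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
reliability = (np.var(x_obs, ddof=1) - sigma_u**2) / np.var(x_obs, ddof=1)
beta_corrected = beta_naive / reliability          # method-of-moments correction
print(f"naive {beta_naive:.3f}  corrected {beta_corrected:.3f}  true 0.500")
```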
An Error-Reduction Algorithm to Improve Lidar Turbulence Estimates for Wind Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew
2016-08-01
Currently, cup anemometers on meteorological (met) towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability. However, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install met towers at potential sites. As a result, remote sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. While lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence with lidars. This uncertainty in lidar turbulence measurements is one of the key roadblocks that must be overcome in order to replace met towers with lidars for wind energy applications. In this talk, a model for reducing errors in lidar turbulence estimates is presented. Techniques for reducing errors from instrument noise, volume averaging, and variance contamination are combined in the model to produce a corrected value of the turbulence intensity (TI), a commonly used parameter in wind energy. In the next step of the model, machine learning techniques are used to further decrease the error in lidar TI estimates.
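A minimal sketch of one ingredient of such a correction, removing instrument-noise variance from the measured velocity variance before forming the turbulence intensity (TI); the noise level and wind statistics are placeholders, and the full model also treats volume averaging and variance contamination:

```python
import numpy as np

def turbulence_intensity(u, noise_std=0.0):
    var = np.var(u, ddof=1) - noise_std**2          # subtract instrument-noise variance
    return np.sqrt(max(var, 0.0)) / np.mean(u)

rng = np.random.default_rng(0)
u_true = 8.0 + 0.8 * rng.standard_normal(600)        # 10-min wind speed series, m/s
u_lidar = u_true + 0.3 * rng.standard_normal(600)    # same series with instrument noise

print(f"reference TI        : {turbulence_intensity(u_true):.3f}")
print(f"lidar TI, raw       : {turbulence_intensity(u_lidar):.3f}")
print(f"lidar TI, corrected : {turbulence_intensity(u_lidar, noise_std=0.3):.3f}")
```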
A drifting GPS buoy for retrieving effective riverbed bathymetry
NASA Astrophysics Data System (ADS)
Hostache, R.; Matgen, P.; Giustarini, L.; Teferle, F. N.; Tailliez, C.; Iffly, J.-F.; Corato, G.
2015-01-01
Spatially distributed riverbed bathymetry information is rarely available but mandatory for accurate hydrodynamic modeling. This study aims at evaluating the potential of Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS), for retrieving such data. Drifting buoys equipped with navigation systems such as GPS enable the quasi-continuous measurement of water surface elevation from virtually any point in the world. The present study investigates the potential of assimilating GNSS-derived water surface elevation measurements into hydraulic models in order to retrieve effective riverbed bathymetry. First tests with a GPS dual-frequency receiver show that the root mean squared error (RMSE) of the elevation measurement equals 30 cm provided that differential post-processing is performed. Next, synthetic observations of a drifting buoy were generated assuming a 30 cm average error of Water Surface Elevation (WSE) measurements. By assimilating the synthetic observations into a 1D hydrodynamic model, we show that the riverbed bathymetry can be retrieved with an accuracy of 36 cm. Moreover, the WSEs simulated by the hydrodynamic model using the retrieved bathymetry are in good agreement with the synthetic "truth", exhibiting an RMSE of 27 cm.
Cosmological measurements with general relativistic galaxy correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Montanari, Francesco; Durrer, Ruth
We investigate the cosmological dependence and the constraining power of large-scale galaxy correlations, including all redshift-distortion, wide-angle, lensing and gravitational potential effects on linear scales. We analyze the cosmological information present in the lensing convergence and in the gravitational potential terms describing the so-called "relativistic effects", and we find that, while smaller than the information contained in intrinsic galaxy clustering, it is not negligible. We investigate how neglecting them biases cosmological measurements performed by future spectroscopic and photometric large-scale surveys such as SKA and Euclid. We perform a Fisher analysis using the CLASS code, modified to include scale-dependent galaxy bias and redshift-dependent magnification and evolution bias. Our results show that neglecting relativistic terms, especially lensing convergence, introduces an error in the forecasted precision in measuring cosmological parameters of the order of a few tens of percent, in particular when measuring the matter content of the Universe and primordial non-Gaussianity parameters. The analysis suggests a possible substantial systematic error in cosmological parameter constraints. Therefore, we argue that radial correlations and integrated relativistic terms need to be taken into account when forecasting the constraining power of future large-scale number counts of galaxy surveys.
Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method
NASA Technical Reports Server (NTRS)
Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.
1990-01-01
A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte of the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and potential impact of the error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erred results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately deal with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
Personal identification based on prescription eyewear.
Berg, Gregory E; Collins, Randall S
2007-03-01
This study presents a web-based tool that can be used to assist in identification of unknown individuals using spectacle prescriptions. Currently, when lens prescriptions are used in forensic identifications, investigators are constrained to a simple "match" or "no-match" judgment with an antemortem prescription. It is not possible to evaluate the strength of the conclusion, or rather, the potential or real error rates associated with the conclusion. Three databases totaling over 385,000 individual prescriptions are utilized in this study to allow forensic analysts to easily determine the strength of individuation of a spectacle match to antemortem records by calculating the frequency at which the observed prescription occurs in various U.S. populations. Optical refractive errors are explained, potential states and combinations of refractive errors are described, measuring lens corrections is discussed, and a detailed description of the databases is presented. The practical application of this system is demonstrated using two recent forensic identifications. This research provides a valuable personal identification tool that can be used in cases where eyeglass portions are recovered in forensic contexts.
Development and content validation of performance assessments for endoscopic third ventriculostomy.
Breimer, Gerben E; Haji, Faizal A; Hoving, Eelco W; Drake, James M
2015-08-01
This study aims to develop and establish the content validity of multiple expert rating instruments to assess performance in endoscopic third ventriculostomy (ETV), collectively called the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). The important aspects of ETV were identified through a review of current literature, ETV videos, and discussion with neurosurgeons, fellows, and residents. Three assessment measures were subsequently developed: a procedure-specific checklist (CL), a CL of surgical errors, and a global rating scale (GRS). Neurosurgeons from various countries, all identified as experts in ETV, were then invited to participate in a modified Delphi survey to establish the content validity of these instruments. In each Delphi round, experts rated their agreement with the inclusion of each procedural step, error, and GRS item in the respective instruments on a 5-point Likert scale. Seventeen experts agreed to participate in the study and completed all Delphi rounds. After item generation, a total of 27 procedural CL items, 26 error CL items, and 9 GRS items were posed to Delphi panelists for rating. An additional 17 procedural CL items, 12 error CL items, and 1 GRS item were added by panelists. After three rounds, strong consensus (>80% agreement) was achieved on 35 procedural CL items, 29 error CL items, and 10 GRS items. Moderate consensus (50-80% agreement) was achieved on an additional 7 procedural CL items and 1 error CL item. The final procedural and error checklists contained 42 and 30 items, respectively (divided into setup, exposure, navigation, ventriculostomy, and closure). The final GRS contained 10 items. We have established the content validity of three ETV assessment measures by iterative consensus of an international expert panel. Each measure provides unique assessment information and thus can be used individually or in combination, depending on the characteristics of the learner and the purpose of the assessment. These instruments must now be evaluated in both the simulated and operative settings, to determine their construct validity and reliability. Ultimately, the measures contained in the NEVAT may prove suitable for formative assessment during ETV training and potentially as summative assessment measures during certification.
Two ultraviolet radiation datasets that cover China
NASA Astrophysics Data System (ADS)
Liu, Hui; Hu, Bo; Wang, Yuesi; Liu, Guangren; Tang, Liqin; Ji, Dongsheng; Bai, Yongfei; Bao, Weikai; Chen, Xin; Chen, Yunming; Ding, Weixin; Han, Xiaozeng; He, Fei; Huang, Hui; Huang, Zhenying; Li, Xinrong; Li, Yan; Liu, Wenzhao; Lin, Luxiang; Ouyang, Zhu; Qin, Boqiang; Shen, Weijun; Shen, Yanjun; Su, Hongxin; Song, Changchun; Sun, Bo; Sun, Song; Wang, Anzhi; Wang, Genxu; Wang, Huimin; Wang, Silong; Wang, Youshao; Wei, Wenxue; Xie, Ping; Xie, Zongqiang; Yan, Xiaoyuan; Zeng, Fanjiang; Zhang, Fawei; Zhang, Yangjian; Zhang, Yiping; Zhao, Chengyi; Zhao, Wenzhi; Zhao, Xueyong; Zhou, Guoyi; Zhu, Bo
2017-07-01
Ultraviolet (UV) radiation has significant effects on ecosystems, environments, and human health, as well as atmospheric processes and climate change. Two ultraviolet radiation datasets are described in this paper. One contains hourly observations of UV radiation measured at 40 Chinese Ecosystem Research Network stations from 2005 to 2015. CUV3 broadband radiometers were used to observe the UV radiation, with an accuracy of 5%, which meets the World Meteorological Organization's measurement standards. The extremum method was used to control the quality of the measured datasets. The other dataset contains daily cumulative UV radiation estimates that were calculated using an all-sky estimation model combined with a hybrid model. The reconstructed daily UV radiation data span from 1961 to 2014. The mean absolute bias error and root-mean-square error are smaller than 30% at most stations, and most of the mean bias error values are negative, which indicates underestimation of the UV radiation intensity. These datasets can improve our basic knowledge of the spatial and temporal variations in UV radiation. Additionally, these datasets can be used in studies of potential ozone formation and atmospheric oxidation, as well as simulations of ecological processes.
Quick, Jeffrey C
2014-01-01
Annual CO2 emission tallies for 210 coal-fired power plants during 2009 were more accurately calculated from fuel consumption records reported by the U.S. Energy Information Administration (EIA) than from measurements from Continuous Emissions Monitoring Systems (CEMS) reported by the U.S. Environmental Protection Agency. Results from these accounting methods for individual plants vary by +/- 10.8%. Although the differences systematically vary with the method used to certify flue-gas flow instruments in CEMS, additional sources of CEMS measurement error remain to be identified. Limitations of the EIA fuel consumption data are also discussed. Consideration of weighing, sample collection, laboratory analysis, emission factor, and stock adjustment errors showed that the minimum error for CO2 emissions calculated from the fuel consumption data ranged from +/- 1.3% to +/- 7.2% with a plant average of +/- 1.6%. This error might be reduced by 50% if the carbon content of coal delivered to U.S. power plants were reported. Potentially, this study might inform efforts to regulate CO2 emissions (such as CO2 performance standards or taxes) and, more immediately, the U.S. Greenhouse Gas Reporting Rule, where large coal-fired power plants currently use CEMS to measure CO2 emissions. Moreover, if, as suggested here, the flue-gas flow measurement limits the accuracy of CO2 emission tallies from CEMS, then the accuracy of other emission tallies from CEMS (such as SO2, NOx, and Hg) would be similarly affected. Consequently, improved flue-gas flow measurements are needed to increase the reliability of emission measurements from CEMS.
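For orientation, a fuel-based CO2 tally of the kind compared against CEMS above is just consumption times carbon content times the CO2/C mass ratio, with independent component errors combined in quadrature; the sketch below uses hypothetical plant values and error components, not figures from the paper.

```python
import math

# Hypothetical plant-level inputs (illustrative values, not from the study)
coal_burned_tonnes = 2.5e6        # annual coal consumption
carbon_fraction    = 0.62         # carbon content of as-received coal (mass fraction)
co2_per_c          = 44.0 / 12.0  # molecular-weight ratio CO2/C

co2_tonnes = coal_burned_tonnes * carbon_fraction * co2_per_c
print(f"calculated CO2 emissions: {co2_tonnes:,.0f} t")

# Combine independent relative errors (weighing, sampling, analysis, ...)
# in quadrature to get a minimum relative error for the calculated tally.
relative_errors = [0.010, 0.008, 0.005]       # hypothetical component errors
total_rel_error = math.sqrt(sum(e**2 for e in relative_errors))
print(f"minimum relative error: +/- {100 * total_rel_error:.1f}%")
```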
Buonaccorsi, John P; Dalen, Ingvild; Laake, Petter; Hjartåker, Anette; Engeset, Dagrun; Thoresen, Magne
2015-04-15
Measurement error occurs when we observe error-prone surrogates rather than true values. It is common in observational studies and especially so in epidemiology, in nutritional epidemiology in particular. Correcting for measurement error has become common, and regression calibration is the most popular way to account for measurement error in continuous covariates. We consider its use in the context where there are validation data, which are used to calibrate the true values given the observed covariates. We allow for the case that the true value itself may not be observed in the validation data, but instead a so-called reference measure is observed. The regression calibration method relies on certain assumptions. This paper examines possible biases in regression calibration estimators when some of these assumptions are violated. More specifically, we allow for the fact that (i) the reference measure may not necessarily be an 'alloyed gold standard' (i.e., unbiased) for the true value; (ii) there may be correlated random subject effects contributing to the surrogate and reference measures in the validation data; and (iii) the calibration model itself may not be the same in the validation study as in the main study; that is, it is not transportable. We expand on previous work to provide a general result, which characterizes potential bias in the regression calibration estimators as a result of any combination of the aforementioned violations. We then illustrate some of the general results with data from the Norwegian Women and Cancer Study. Copyright © 2015 John Wiley & Sons, Ltd.
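A minimal sketch of the basic regression calibration estimator discussed above, under the simplifying assumptions that the calibration model is linear and the reference measure is an unbiased ("gold standard") measure of the true covariate, which are exactly the assumptions the paper allows to be violated; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Validation data: surrogate W and reference measure R on the same subjects
n_val = 500
x_val = rng.normal(0, 1, n_val)                 # unobserved true covariate
w_val = x_val + rng.normal(0, 0.8, n_val)       # error-prone surrogate
r_val = x_val + rng.normal(0, 0.3, n_val)       # reference measure (assumed unbiased)

# Calibration model: E[R | W] fitted by simple linear regression
a, b = np.polyfit(w_val, r_val, 1)[::-1]        # intercept a, slope b

# --- Main study: only the surrogate W and the outcome Y are observed
n_main, beta = 5000, 0.5
x_main = rng.normal(0, 1, n_main)
w_main = x_main + rng.normal(0, 0.8, n_main)
y_main = beta * x_main + rng.normal(0, 1, n_main)

x_calibrated = a + b * w_main                   # predicted true covariate

naive = np.polyfit(w_main, y_main, 1)[0]
rc    = np.polyfit(x_calibrated, y_main, 1)[0]
print(f"naive slope: {naive:.3f}, regression-calibration slope: {rc:.3f} (true {beta})")
```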
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.
Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang
2016-06-22
An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10^-6 °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
Intuitive theories of information: beliefs about the value of redundancy.
Soll, J B
1999-03-01
In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized. Copyright 1999 Academic Press.
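The normative point at stake can be illustrated with a short simulation (all values assumed): when each source contributes its own bias plus independent per-reading noise, consulting the same source twice only averages out the noise, while consulting two different sources also averages out the bias.

```python
import numpy as np

rng = np.random.default_rng(2)
truth, n_trials = 100.0, 200_000
bias_sd, noise_sd = 5.0, 5.0   # spread of source-specific bias vs per-reading noise

# Same source asked twice: the bias is shared, only the noise averages out
bias_a = rng.normal(0, bias_sd, n_trials)
same = ((truth + bias_a + rng.normal(0, noise_sd, n_trials)) +
        (truth + bias_a + rng.normal(0, noise_sd, n_trials))) / 2

# Two different sources: both bias and noise average out
bias_b = rng.normal(0, bias_sd, n_trials)
diff = ((truth + bias_a + rng.normal(0, noise_sd, n_trials)) +
        (truth + bias_b + rng.normal(0, noise_sd, n_trials))) / 2

print("RMSE, same source twice:     ", round(np.sqrt(np.mean((same - truth) ** 2)), 2))
print("RMSE, two different sources: ", round(np.sqrt(np.mean((diff - truth) ** 2)), 2))
```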
ERP correlates of error processing during performance on the Halstead Category Test.
Santos, I M; Teixeira, A R; Tomé, A M; Pereira, A T; Rodrigues, P; Vagos, P; Costa, J; Carrito, M L; Oliveira, B; DeFilippis, N A; Silva, C F
2016-08-01
The Halstead Category Test (HCT) is a neuropsychological test that measures a person's ability to formulate and apply abstract principles. Performance must be adjusted based on feedback after each trial, and errors are common until the underlying rules are discovered. Event-related potential (ERP) studies associated with the HCT are lacking. This paper demonstrates the use of a methodology inspired by Singular Spectrum Analysis (SSA), applied to EEG signals, to remove high-amplitude ocular and movement artifacts during performance on the test. This filtering technique introduces no phase or latency distortions, with minimum loss of relevant EEG information. Importantly, the test was applied in its original clinical format, without introducing adaptations to ERP recordings. After signal treatment, the feedback-related negativity (FRN) wave, which is related to error processing, was identified. This component peaked around 250 ms after feedback, at fronto-central electrodes. As expected, errors elicited more negative amplitudes than correct responses. Results are discussed in terms of the increased clinical potential that coupling ERP information with behavioral performance data can bring to the specificity of the HCT in diagnosing different types of impairment in frontal brain function. Copyright © 2016. Published by Elsevier B.V.
Yang, Jie; Liu, Qingquan; Dai, Wei
2017-02-01
To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
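As a rough illustration of the correction-equation step, the sketch below fits a simple correction to a hypothetical radiation- and wind-dependent error surface and subtracts it. The paper fits its equation to CFD results with a genetic algorithm; this stand-in uses ordinary linear least squares, and every numerical value is assumed.

```python
import numpy as np

# Hypothetical calibration data standing in for the CFD results:
# temperature error as a function of solar irradiance S (W m^-2) and wind speed U (m s^-1)
rng = np.random.default_rng(3)
S = rng.uniform(0, 1000, 200)
U = rng.uniform(0.5, 8, 200)
err_true = 0.0004 * S / (1.0 + 0.6 * U)              # assumed error surface
err_obs = err_true + rng.normal(0, 0.005, S.size)    # "measured" errors with scatter

# Fit a simple correction equation err ~ a*S + b*S*U + c by linear least squares
# (the paper fits its correction equation with a genetic algorithm; this is a stand-in).
A = np.column_stack([S, S * U, np.ones_like(S)])
coef, *_ = np.linalg.lstsq(A, err_obs, rcond=None)

residual_error = err_obs - A @ coef                  # error left after applying the correction
print("RMSE before correction: %.3f K" % np.sqrt(np.mean(err_obs ** 2)))
print("RMSE after correction:  %.3f K" % np.sqrt(np.mean(residual_error ** 2)))
```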
Robert-Lachaine, Xavier; Mecheri, Hakim; Larue, Christian; Plamondon, André
2017-04-01
The potential of inertial measurement units (IMUs) for ergonomics applications appears promising. However, previous IMU validation studies have been incomplete regarding the aspects of joints analysed, complexity of movements and duration of trials. The objective was to determine the technological error and biomechanical model differences between IMUs and an optoelectronic system and to evaluate the effect of task complexity and duration. Whole-body kinematics from 12 participants was recorded simultaneously with a full-body Xsens system where an Optotrak cluster was fixed on every IMU. Short functional movements and long manual material handling tasks were performed and joint angles were compared between the two systems. The differences attributed to the biomechanical model showed significantly greater (P ≤ .001) RMSE than the technological error. RMSE was systematically higher (P ≤ .001) for the long complex task, with a mean over all joints of 2.8° compared to 1.2° during short functional movements. Definition of local coordinate systems based on anatomical landmarks or a single posture was the most influential difference between the two systems. Additionally, IMU accuracy was affected by the complexity and duration of the tasks. Nevertheless, technological error remained under 5° RMSE during handling tasks, which shows potential to track workers during their daily labour.
Uncertainties of predictions from parton distributions II: theoretical errors
NASA Astrophysics Data System (ADS)
Martin, A. D.; Roberts, R. G.; Stirling, W. J.; Thorne, R. S.
2004-06-01
We study the uncertainties in parton distributions, determined in global fits to deep inelastic and related hard scattering data, due to so-called theoretical errors. Amongst these, we include potential errors due to the change of perturbative order (NLO to NNLO), ln(1/x) and ln(1-x) effects, absorptive corrections and higher-twist contributions. We investigate these uncertainties both by including explicit corrections to our standard global analysis and by examining the sensitivity to changes of the x, Q^2, W^2 cuts on the data that are fitted. In this way we expose those kinematic regions where the conventional DGLAP description is inadequate. As a consequence we obtain a set of NLO, and of NNLO, conservative partons where the data are fully consistent with DGLAP evolution, but over a restricted kinematic domain. We also examine the potential effects of such issues as the choice of input parametrisation, heavy target corrections, assumptions about the strange quark sea and isospin violation. Hence we are able to compare the theoretical errors with those uncertainties due to errors on the experimental measurements, which we studied previously. We use W and Higgs boson production at the Tevatron and the LHC as explicit examples of the uncertainties arising from parton distributions. For many observables the theoretical error is dominant, but for the cross section for W production at the Tevatron both the theoretical and experimental uncertainties are small, and hence the NNLO prediction may serve as a valuable luminosity monitor.
Schnock, Kumiko O; Biggs, Bonnie; Fladger, Anne; Bates, David W; Rozenblum, Ronen
2017-02-22
Retained surgical instruments (RSI) are one of the most serious preventable complications in operating room settings, potentially leading to profound adverse effects for patients, as well as costly legal and financial consequences for hospitals. Safety measures to eliminate RSIs have been widely adopted in the United States and abroad, but despite widespread efforts, medical errors with RSI have not been eliminated. Through a systematic review of recent studies, we aimed to identify the impact of radio frequency identification (RFID) technology on reducing RSI errors and improving patient safety. A literature search on the effects of RFID technology on RSI error reduction was conducted in PubMed and CINAHL (2000-2016). Relevant articles were selected and reviewed by 4 researchers. After the literature search, 385 articles were identified and the full texts of the 88 articles were assessed for eligibility. Of these, 5 articles were included to evaluate the benefits and drawbacks of using RFID for preventing RSI-related errors. The use of RFID resulted in rapid detection of RSI through body tissue with high accuracy rates, reducing risk of counting errors and improving workflow. Based on the existing literature, RFID technology seems to have the potential to substantially improve patient safety by reducing RSI errors, although the body of evidence is currently limited. Better designed research studies are needed to get a clear understanding of this domain and to find new opportunities to use this technology and improve patient safety.
NASA Technical Reports Server (NTRS)
Young, J. C.; Bundick, W. T.; Irwin, S. H.
1983-01-01
Tests were conducted with a van mounted experimental magnetic leader cable sensor to evaluate its potential for measuring aircraft displacement and heading with respect to the leader cable during roll out and turnoff. Test results show that the system may be usable in measuring displacement but the heading measurement contains errors introduced by distortion of the magnetic field by the metal van or aircraft.
NASA Technical Reports Server (NTRS)
Carpenter, Paul; Armstrong, John
2004-01-01
Improvement in the accuracy of electron-probe microanalysis (EPMA) has been accomplished by critical assessment of standards, correction algorithms, and mass absorption coefficient data sets. Experimental measurement of relative x-ray intensities at multiple accelerating potentials highlights errors in the absorption coefficient. The factor method has been applied to the evaluation of systematic errors in the analysis of semiconductor and silicate minerals. Accurate EPMA of Martian soil simulant is necessary in studies that build on Martian rover data in anticipation of missions to Mars.
Audio-frequency analysis of inductive voltage dividers based on structural models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramov, S.; Oldham, N.M.; Koffman, A.D.
1994-12-31
A Binary Inductive Voltage Divider (BIVD) is compared with a Decade Inductive Voltage Divider (DIVD) in an automatic IVD bridge. New detection and injection circuitry was designed and used to evaluate the IVDs with either the input or output tied to ground potential. In the audio frequency range the DIVD and BIVD error patterns are characterized for both in-phase and quadrature components. Differences between results obtained using a new error decomposition scheme based on structural modeling, and measurements using conventional IVD standards are reported.
1987-09-30
...of the AC current, including the time dependence at a growing DME, at a given fixed potential either in the presence or the absence of an... The relative error in k_ob(app) is relatively small for k_ob(true) ≤ 0.5 cm s^-1, and increases rapidly for larger rate constants as k_ob reaches the
Shallow Water Bathymetry using the REMUS 100 Autonomous Underwater Vehicle
2013-12-01
potentially meeting IHO Standards for Hydrographic Surveys, are advertised, but Kongsberg Hydroid does not recommend the REMUS 100 as a platform for... data set. Outlier soundings due to measurement errors have been discarded. (Figure 28: REMUS 100 depth soundings in isometric projection, coloured...)
Hanson, Sonya M.; Ekins, Sean; Chodera, John D.
2015-01-01
All experimental assay data contains error, but the magnitude, type, and primary origin of this error is often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
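In the spirit of the bootstrap-style assay modeling described above, the following sketch propagates per-transfer imprecision and proportional bias through a 1:2 serial dilution and shows how both grow across steps; the CV and bias values are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(4)

def serial_dilution(c0, n_steps, transfer_cv, transfer_bias, n_sims=10_000):
    """Monte Carlo a 1:2 serial dilution with imprecision (CV) and
    proportional bias in every transfer (both values are assumptions)."""
    conc = np.full(n_sims, c0, dtype=float)
    for _ in range(n_steps):
        # intended transfer factor is 0.5; each simulated transfer is perturbed
        factor = 0.5 * (1.0 + transfer_bias) * (1.0 + rng.normal(0, transfer_cv, n_sims))
        conc = conc * factor
    return conc

final = serial_dilution(c0=100.0, n_steps=8, transfer_cv=0.02, transfer_bias=0.01)
nominal = 100.0 * 0.5 ** 8
print(f"nominal final concentration: {nominal:.3f}")
print(f"simulated mean: {final.mean():.3f}  (bias {100*(final.mean()/nominal-1):+.1f}%)")
print(f"simulated CV:   {100*final.std()/final.mean():.1f}%  vs single-step CV 2.0%")
```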
Identifying the causes of road crashes in Europe
Thomas, Pete; Morris, Andrew; Talbot, Rachel; Fagerlind, Helen
2013-01-01
This research applies a recently developed model of accident causation, developed to investigate industrial accidents, to a specially gathered sample of 997 crashes investigated in-depth in 6 countries. Based on the work of Hollnagel the model considers a collision to be a consequence of a breakdown in the interaction between road users, vehicles and the organisation of the traffic environment. 54% of road users experienced interpretation errors while 44% made observation errors and 37% planning errors. In contrast to other studies only 11% of drivers were identified as distracted and 8% inattentive. There was remarkably little variation in these errors between the main road user types. The application of the model to future in-depth crash studies offers the opportunity to identify new measures to improve safety and to mitigate the social impact of collisions. Examples given include the potential value of co-driver advisory technologies to reduce observation errors and predictive technologies to avoid conflicting interactions between road users. PMID:24406942
NASA Astrophysics Data System (ADS)
Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu
2018-05-01
Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki
2014-01-01
Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.
Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki
2015-01-01
Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors. PMID:25674058
Takada, Koki; Takahashi, Kana; Hirao, Kazuki
2018-01-17
Although the self-report version of the Liebowitz Social Anxiety Scale (LSAS) is frequently used to measure social anxiety, data are lacking on the smallest detectable change (SDC), an important index of measurement error. We therefore aimed to determine the SDC of the LSAS. Japanese adults aged 20-69 years were invited from a panel managed by a nationwide internet research agency. We then conducted a test-retest internet survey with a two-week interval to estimate the SDC at the individual (SDC_ind) and group (SDC_group) levels. The analysis included 1300 participants. The SDC_ind and SDC_group for the total fear subscale (scoring range: 0-72) were 23.52 points (32.7%) and 0.65 points (0.9%), respectively. The SDC_ind and SDC_group for the total avoidance subscale (scoring range: 0-72) were 32.43 points (45.0%) and 0.90 points (1.2%), respectively. The SDC_ind and SDC_group for the overall total score (scoring range: 0-144) were 45.90 points (31.9%) and 1.27 points (0.9%), respectively. The measurement error is large and indicates the potential for major problems when attempting to use the LSAS to detect changes at the individual level. These results should be considered when using the LSAS as a measure of treatment change.
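The reported values are consistent with the common test-retest formulation SEM = SD(diff)/sqrt(2), SDC_ind = 1.96 * sqrt(2) * SEM, and SDC_group = SDC_ind/sqrt(n) (note 23.52/sqrt(1300) ≈ 0.65). A small sketch with simulated data follows; the error SD is chosen only so that the output lands near the fear-subscale values, and none of the data are from the study.

```python
import numpy as np

def smallest_detectable_change(test, retest):
    """SDC from test-retest scores, using the common formulation
    SEM = SD(diff)/sqrt(2), SDC_ind = 1.96*sqrt(2)*SEM, SDC_group = SDC_ind/sqrt(n)."""
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    diff = retest - test
    sem = diff.std(ddof=1) / np.sqrt(2)
    sdc_ind = 1.96 * np.sqrt(2) * sem
    sdc_group = sdc_ind / np.sqrt(len(diff))
    return sdc_ind, sdc_group

# Illustrative test-retest data (simulated, not the study's data)
rng = np.random.default_rng(5)
true_score = rng.normal(30, 10, 1300)
t1 = true_score + rng.normal(0, 8.5, 1300)   # error SD chosen to give SDC_ind near 23
t2 = true_score + rng.normal(0, 8.5, 1300)
sdc_ind, sdc_group = smallest_detectable_change(t1, t2)
print(f"SDC_ind = {sdc_ind:.2f}, SDC_group = {sdc_group:.2f}")
```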
Muon Energy Calibration of the MINOS Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyagawa, Paul S.
MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. This data is utilized to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by ~10%, which is equivalent to increasing the amount of data by 20%.
NASA Astrophysics Data System (ADS)
Crowell, Sean M. R.; Randolph Kawa, S.; Browell, Edward V.; Hammerling, Dorit M.; Moore, Berrien; Schaefer, Kevin; Doney, Scott C.
2018-01-01
Space-borne observations of CO2 are vital to gaining understanding of the carbon cycle in regions of the world that are difficult to measure directly, such as the tropical terrestrial biosphere, the high northern and southern latitudes, and in developing nations such as China. Measurements from passive instruments such as GOSAT and OCO-2, however, are constrained by solar zenith angle limitations as well as sensitivity to the presence of clouds and aerosols. Active measurements such as those in development for the Active Sensing of CO2 Emissions over Nights, Days and Seasons (ASCENDS) mission show strong potential for making measurements in the high-latitude winter and in cloudy regions. In this work we examine the enhanced flux constraint provided by the improved coverage from an active measurement such as ASCENDS. The simulation studies presented here show that with sufficient precision, ASCENDS will detect permafrost thaw and fossil fuel emissions shifts at annual and seasonal time scales, even in the presence of transport errors, representativeness errors, and biogenic flux errors. While OCO-2 can detect some of these perturbations at the annual scale, the seasonal sampling provided by ASCENDS provides the stronger constraint.
Sarnat, Jeremy A; Wilson, William E; Strand, Matthew; Brook, Jeff; Wyzga, Ron; Lumley, Thomas
2007-12-01
Examining the validity of exposure metrics used in air pollution epidemiologic models has been a key focus of recent exposure assessment studies. The objective of this work has been, largely, to determine what a given exposure metric represents and to quantify and reduce any potential errors resulting from using these metrics in lieu of true exposure measurements. The current manuscript summarizes the presentations of the co-authors from a recent EPA workshop, held in December 2006, dealing with the role and contributions of exposure assessment in addressing these issues. Results are presented from US and Canadian exposure and pollutant measurement studies as well as theoretical simulations to investigate what both particulate and gaseous pollutant concentrations represent and the potential errors resulting from their use in air pollution epidemiologic studies. Quantifying the association between ambient pollutant concentrations and corresponding personal exposures has led to the concept of defining attenuation factors, or alpha. Specifically, characterizing pollutant-specific estimates for alpha was shown to be useful in developing regression calibration methods involving PM epidemiologic risk estimates. For some gaseous pollutants such as NO2 and SO2, the associations between ambient concentrations and personal exposures were shown to be complex and still poorly understood. Results from recent panel studies suggest that ambient NO2 measurements may, in some locations, be serving as surrogates to traffic pollutants, including traffic-related PM2.5, hopanes, steranes, and oxidized nitrogen compounds (rather than NO2).
Quantum Error Correction for Metrology
NASA Astrophysics Data System (ADS)
Sushkov, Alex; Kessler, Eric; Lovchinsky, Igor; Lukin, Mikhail
2014-05-01
The question of the best achievable sensitivity in a quantum measurement is of great experimental relevance, and has seen a lot of attention in recent years. Recent studies [e.g., Nat. Phys. 7, 406 (2011), Nat. Comms. 3, 1063 (2012)] suggest that in most generic scenarios any potential quantum gain (e.g. through the use of entangled states) vanishes in the presence of environmental noise. To overcome these limitations, we propose and analyze a new approach to improve quantum metrology based on quantum error correction (QEC). We identify the conditions under which QEC allows one to improve the signal-to-noise ratio in quantum-limited measurements, and we demonstrate that it enables, in certain situations, Heisenberg-limited sensitivity. We discuss specific applications to nanoscale sensing using nitrogen-vacancy centers in diamond in which QEC can significantly improve the measurement sensitivity and bandwidth under realistic experimental conditions.
Implementing technology to improve medication safety in healthcare facilities: a literature review.
Hidle, Unn
Medication errors remain one of the most common causes of patient injuries in the United States, with detrimental outcomes including adverse reactions and even death. By developing a better understanding of why and how medication errors occur, preventative measures may be implemented including technological advances. In this literature review, potential methods of reducing medication errors were explored. Furthermore, technology tools available for medication orders and administration are described, including advantages and disadvantages of each system. It was found that technology can be an excellent aid in improving safety of medication administration. However, computer technology cannot replace human intellect and intuition. Nurses should be involved when implementing any new computerized system in order to obtain the most appropriate and user-friendly structure.
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
Error monitoring and empathy: Explorations within a neurophysiological context.
Amiruddin, Azhani; Fueggle, Simone N; Nguyen, An T; Gignac, Gilles E; Clunies-Ross, Karen L; Fox, Allison M
2017-06-01
Past literature has proposed that empathy consists of two components: cognitive and affective empathy. Error monitoring mechanisms indexed by the error-related negativity (ERN) have been associated with empathy. Studies have found that a larger ERN is associated with higher levels of empathy. We aimed to expand upon previous work by investigating how error monitoring relates to the independent theoretical domains of cognitive and affective empathy. Study 1 (N = 24) explored the relationship between error monitoring mechanisms and subcomponents of empathy using the Questionnaire of Cognitive and Affective Empathy and found no relationship. Study 2 (N = 38) explored the relationship between the error monitoring mechanisms and overall empathy. Contrary to past findings, there was no evidence to support a relationship between error monitoring mechanisms and scores on empathy measures. A subsequent meta-analysis (Study 3, N = 125) summarizing the relationship across previously published studies together with the two studies reported in the current paper indicated that overall there was no significant association between ERN and empathy and that there was significant heterogeneity across studies. Future investigations exploring the potential variables that may moderate these relationships are discussed. © 2017 Society for Psychophysiological Research.
Checa, Purificación; Castellanos, M C; Abundis-Gutiérrez, Alicia; Rosario Rueda, M
2014-01-01
Regulation of thoughts and behavior requires attention, particularly when there is conflict between alternative responses or when errors are to be prevented or corrected. Conflict monitoring and error processing are functions of the executive attention network, a neurocognitive system that greatly matures during childhood. In this study, we examined the development of brain mechanisms underlying conflict and error processing with event-related potentials (ERPs), and explored the relationship between brain function and individual differences in the ability to self-regulate behavior. Three groups of children aged 4-6, 7-9, and 10-13 years, and a group of adults performed a child-friendly version of the flanker task while ERPs were registered. Marked developmental changes were observed in both conflict processing and brain reactions to errors. After controlling by age, higher self-regulation skills are associated with smaller amplitude of the conflict effect but greater amplitude of the error-related negativity. Additionally, we found that electrophysiological measures of conflict and error monitoring predict individual differences in impulsivity and the capacity to delay gratification. These findings inform of brain mechanisms underlying the development of cognitive control and self-regulation.
Checa, Purificación; Castellanos, M. C.; Abundis-Gutiérrez, Alicia; Rosario Rueda, M.
2014-01-01
Regulation of thoughts and behavior requires attention, particularly when there is conflict between alternative responses or when errors are to be prevented or corrected. Conflict monitoring and error processing are functions of the executive attention network, a neurocognitive system that greatly matures during childhood. In this study, we examined the development of brain mechanisms underlying conflict and error processing with event-related potentials (ERPs), and explored the relationship between brain function and individual differences in the ability to self-regulate behavior. Three groups of children aged 4–6, 7–9, and 10–13 years, and a group of adults performed a child-friendly version of the flanker task while ERPs were registered. Marked developmental changes were observed in both conflict processing and brain reactions to errors. After controlling by age, higher self-regulation skills are associated with smaller amplitude of the conflict effect but greater amplitude of the error-related negativity. Additionally, we found that electrophysiological measures of conflict and error monitoring predict individual differences in impulsivity and the capacity to delay gratification. These findings inform of brain mechanisms underlying the development of cognitive control and self-regulation. PMID:24795676
Zhou, Tony; Dickson, Jennifer L; Geoffrey Chase, J
2018-01-01
Continuous glucose monitoring (CGM) devices have been effective in managing diabetes and offer potential benefits for use in the intensive care unit (ICU). Use of CGM devices in the ICU has been limited, primarily due to the higher point accuracy errors over currently used traditional intermittent blood glucose (BG) measures. General models of CGM errors, including drift and random errors, are lacking, but would enable better design of protocols to utilize these devices. This article presents an autoregressive (AR) based modeling method that separately characterizes the drift and random noise of the GlySure CGM sensor (GlySure Limited, Oxfordshire, UK). Clinical sensor data (n = 33) and reference measurements were used to generate 2 AR models to describe sensor drift and noise. These models were used to generate 100 Monte Carlo simulations based on reference blood glucose measurements. These were then compared to the original CGM clinical data using mean absolute relative difference (MARD) and a Trend Compass. The point accuracy MARD was very similar between simulated and clinical data (9.6% vs 9.9%). A Trend Compass was used to assess trend accuracy, and found simulated and clinical sensor profiles were similar (simulated trend index 11.4° vs clinical trend index 10.9°). The model and method accurately represents cohort sensor behavior over patients, providing a general modeling approach to any such sensor by separately characterizing each type of error that can arise in the data. Overall, it enables better protocol design based on accurate expected CGM sensor behavior, as well as enabling the analysis of what level of each type of sensor error would be necessary to obtain desired glycemic control safety and performance with a given protocol.
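A minimal sketch of the drift-plus-noise decomposition: an AR(1) drift term and white noise are superimposed on a reference glucose trace over repeated Monte Carlo runs and summarised as MARD. The AR order, parameter values and sampling assumptions here are illustrative, not the fitted GlySure model.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_cgm(reference_bg, drift_phi=0.995, drift_sigma=0.05, noise_sigma=0.15):
    """Add an AR(1) drift component plus white noise to a reference BG trace.
    Parameter values are illustrative assumptions, not the fitted sensor model."""
    n = len(reference_bg)
    drift = np.zeros(n)
    for t in range(1, n):                      # slowly wandering sensor drift
        drift[t] = drift_phi * drift[t - 1] + rng.normal(0, drift_sigma)
    noise = rng.normal(0, noise_sigma, n)      # high-frequency sensor noise
    return reference_bg + drift + noise

def mard(cgm, reference):
    """Mean absolute relative difference (%) between CGM and reference values."""
    return 100 * np.mean(np.abs(cgm - reference) / reference)

reference = 6.0 + 1.5 * np.sin(np.linspace(0, 6 * np.pi, 1440))   # mmol/L, 24 h at 1/min
runs = [mard(simulate_cgm(reference), reference) for _ in range(100)]
print(f"MARD over 100 Monte Carlo runs: {np.mean(runs):.1f}% +/- {np.std(runs):.1f}%")
```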
USDA-ARS?s Scientific Manuscript database
The sulfur hexafluoride tracer technique (SF6) is a commonly used method for measuring CH4 enteric emissions in ruminants. Studies using SF6 have shown large variation in CH4 emissions data, inconsistencies in CH4 emissions across studies, and potential methodological errors. Therefore, th...
Updating finite element dynamic models using an element-by-element sensitivity methodology
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Hemez, Francois M.
1993-01-01
A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.
Interatomic potentials for HeAr, HeKr, and HeXe from multiproperty fits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danielson, L.J.; Keil, M.
1988-01-15
Crossed molecular beam measurements of differential cross sections (DCS) are reported for elastic scattering of He by Ar, Kr, and Xe at high resolution. Interatomic potentials are determined by simultaneously fitting the DCSs, as well as mixture viscosity and interaction second virial data. Bias due to systematic and potential-model errors is examined and used to estimate the accuracy of the potential energy curves obtained. Attractive well depths are 2.59, 2.67, and 2.64 meV +- 3% for HeAr, HeKr, and HeXe, respectively, agreeing with the best available HeAr potential and a previously proposed HeKr potential, but significantly deeper than previously reported potentials for HeXe. The HeXe attractive well is also considerably broader than previously reported. Attractive minimum positions are 3.48, 3.70, and 4.00 A (+- 0.03 A) for HeAr, HeKr, and HeXe, respectively. Including the accurate diffusion data of Dunlop and co-workers (Physica A 95, 561 (1979)) and the absolute integral cross sections of Pirani and Vecchiocattivi (J. Chem. Phys. 66, 372 (1977) and revisions thereto) verifies the error bounds for all three potentials.
Mismeasurement and the resonance of strong confounders: correlated errors.
Marshall, J R; Hastrup, J L; Ross, J S
1999-07-01
Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
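The mechanics can be seen in a small simulation (all values assumed): the outcome depends only on the strong risk factor, yet once both covariates are measured with a shared (correlated) error component, the adjusted coefficient of the inconsequential factor is pulled away from zero, and in this particular configuration it even comes out negative, illustrating how correlated errors can reverse as well as inflate the apparent association.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

strong = rng.normal(0, 1, n)                       # true strong risk factor
weak = 0.4 * strong + rng.normal(0, 1, n)          # correlated with it, but no true effect
y = 1.0 * strong + rng.normal(0, 1, n)             # outcome depends only on `strong`

shared = rng.normal(0, 1, n)                       # shared (correlated) error source
strong_obs = strong + 0.8 * shared + rng.normal(0, 0.5, n)
weak_obs = weak + 0.8 * shared + rng.normal(0, 0.5, n)

def adjusted_coefs(x1, x2, y):
    """Coefficients from OLS of y on [x1, x2, intercept]."""
    X = np.column_stack([x1, x2, np.ones_like(x1)])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_true = adjusted_coefs(strong, weak, y)
b_obs = adjusted_coefs(strong_obs, weak_obs, y)
print("adjusted effect of weak factor, error-free:        %.3f" % b_true[1])
print("adjusted effect of weak factor, correlated errors: %.3f" % b_obs[1])
```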
NASA Astrophysics Data System (ADS)
Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong
2018-01-01
Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available, even for complicated models with topography, reliable numerical solvers for the anisotropic case are still an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators, one using the gradient-recovery technique and one measuring the discontinuity of the normal component of current density, are developed for the anisotropic case. Combining goal-oriented and non-goal-oriented mesh refinements with these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward modelling problems. A synthetic anisotropic two-layer model with analytic solutions verified the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of these four solving strategies. We found that the error estimator based on the discontinuity of current density shows better performance than the gradient-recovery-based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators, working together with goal-oriented concepts, can offer optimal mesh density distributions and highly accurate solutions.
NASA Astrophysics Data System (ADS)
Smith, Gennifer T.; Dwork, Nicholas; Khan, Saara A.; Millet, Matthew; Magar, Kiran; Javanmard, Mehdi; Bowden, Audrey K.
2017-03-01
Urinalysis dipsticks were designed to revolutionize urine-based medical diagnosis. They are cheap, extremely portable, and have multiple assays patterned on a single platform. They were also meant to be incredibly easy to use. Unfortunately, there are many aspects in both the preparation and the analysis of the dipsticks that are plagued by user error. This high error is one reason that dipsticks have failed to flourish in both the at-home market and in low-resource settings. Sources of error include: inaccurate volume deposition, varying lighting conditions, inconsistent timing measurements, and misinterpreted color comparisons. We introduce a novel manifold and companion software for dipstick urinalysis that eliminates the aforementioned error sources. A micro-volume slipping manifold ensures precise sample delivery, an opaque acrylic box guarantees consistent lighting conditions, a simple sticker-based timing mechanism maintains accurate timing, and custom software that processes video data captured by a mobile phone ensures proper color comparisons. We show that the results obtained with the proposed device are as accurate and consistent as a properly executed dip-and-wipe method, the industry gold-standard, suggesting the potential for this strategy to enable confident urinalysis testing. Furthermore, the proposed all-acrylic slipping manifold is reusable and low in cost, making it a potential solution for at-home users and low-resource settings.
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the modelled CO error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
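A minimal sketch of the simulation idea described above, assuming a hypothetical reference exposure series and synthetic emergency-visit counts rather than the Atlanta data; the constants and error sizes are illustrative only.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, beta, sigma = 2000, 0.05, 0.3          # days, true log risk ratio per unit, error SD (log scale)

    def rr_per_unit(exposure, counts):
        X = sm.add_constant(exposure)
        fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
        return float(np.exp(fit.params[1]))

    # classical-type multiplicative error: observed = true * noise; counts follow the true series
    x_true = rng.lognormal(1.0, 0.4, n)
    y = rng.poisson(np.exp(1.5 + beta * x_true))
    x_obs = x_true * rng.lognormal(0.0, sigma, n)
    print("true      RR per unit:", rr_per_unit(x_true, y))
    print("classical RR per unit:", rr_per_unit(x_obs, y))     # typically attenuated toward 1

    # Berkson-type multiplicative error: true = observed * noise; counts follow the true series
    z_obs = rng.lognormal(1.0, 0.4, n)
    x_true_b = z_obs * rng.lognormal(0.0, sigma, n)
    y_b = rng.poisson(np.exp(1.5 + beta * x_true_b))
    print("Berkson   RR per unit:", rr_per_unit(z_obs, y_b))   # can be biased away from the null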
Monitoring gait in multiple sclerosis with novel wearable motion sensors
McGinnis, Ryan S.; Seagers, Kirsten; Motl, Robert W.; Sheth, Nirav; Wright, John A.; Ghaffari, Roozbeh; Sosnoff, Jacob J.
2017-01-01
Background Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing to contemporary research-grade measurement devices. Methods A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) was assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Results Mean accuracy±precision for the BioStampRC was 2±2 steps error for step number, 6±9ms error for stride time and 6±7ms error for step time (0.6–2.6% relative error). Swing time had the least accuracy±precision (25±19ms error, 5±4% relative error) among the parameters. GT3X had the least accuracy±precision (8±14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p<0.01). Conclusion BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic. PMID:28178288
Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors
1989-08-21
Another quantity of interest is the vector potential δA(z) associated with the field error δB(z). Defining the normalized vector potential δa = eδA..., it then follows that the correlation of the normalized vector-potential errors ⟨δa(z1)δa(z2)⟩ is given by a double integral, ∫dz′∫dz″, of the field-error correlation ⟨δB(z′)δB(z″)⟩...Throughout the following, terms of order O(z′/z) will be neglected. Similarly, for the y-component of the normalized vector potential errors, one
Evaluating the Effects of Interface Disruption Using fNIR Spectroscopy
2011-02-28
Error Potentials in Brain-Computer Interfaces. ADVANCES IN COGNITIVE NEURODYNAMICS, 2008: p. 777-782. 21. Nieuwenhuis S, et al., Psychophysiology...introduced until the 1990s and holds great potential for extremely non-invasive cognitive state measurement. It is significantly easier and faster to...As a reminder, the general protocol is as follows: 1) Researchers gather benchmark tasks from cognitive psychology that elicit high and low
Analysis on the dynamic error for optoelectronic scanning coordinate measurement network
NASA Astrophysics Data System (ADS)
Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie
2018-01-01
Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in large-component joining, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks focuses on static measurement capacity, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts the application. The workshop measurement and positioning system is a representative system that can, in theory, provide dynamic measurement. In this paper we investigate the dynamic error sources in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, a simulation of the dynamic error is carried out. The dynamic error is quantified, and its volatility and periodicity are characterized in detail. The results lay the foundation for further accuracy improvement.
Statistically Controlling for Confounding Constructs Is Harder than You Think
Westfall, Jacob; Yarkoni, Tal
2016-01-01
Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
Hampp, Emily L; Chughtai, Morad; Scholl, Laura Y; Sodhi, Nipun; Bhowmik-Stoker, Manoshi; Jacofsky, David J; Mont, Michael A
2018-05-01
This study determined if robotic-arm assisted total knee arthroplasty (RATKA) allows for more accurate and precise bone cuts and component position to plan compared with manual total knee arthroplasty (MTKA). Specifically, we assessed the following: (1) final bone cuts, (2) final component position, and (3) a potential learning curve for RATKA. On six cadaver specimens (12 knees), a MTKA and RATKA were performed on the left and right knees, respectively. Bone-cut and final-component positioning errors relative to preoperative plans were compared. Median errors and standard deviations (SDs) in the sagittal, coronal, and axial planes were compared. Median values of the absolute deviation from plan defined the accuracy to plan. SDs described the precision to plan. RATKA bone cuts were as or more accurate to plan based on nominal median values in 11 out of 12 measurements. RATKA bone cuts were more precise to plan in 8 out of 12 measurements (p ≤ 0.05). RATKA final component positions were as or more accurate to plan based on median values in five out of five measurements. RATKA final component positions were more precise to plan in four out of five measurements (p ≤ 0.05). Stacked error results from all cuts and implant positions for each specimen in procedural order showed that RATKA error was less than MTKA error. Although this study analyzed a small number of cadaver specimens, there were clear differences that separated these two groups. When compared with MTKA, RATKA demonstrated more accurate and precise bone cuts and implant positioning to plan.
Overview of medical errors and adverse events
2012-01-01
Safety is a global concept that encompasses efficiency, security of care, reactivity of caregivers, and satisfaction of patients and relatives. Patient safety has emerged as a major target for healthcare improvement. Quality assurance is a complex task, and patients in the intensive care unit (ICU) are more likely than other hospitalized patients to experience medical errors, due to the complexity of their conditions, need for urgent interventions, and considerable workload fluctuation. Medication errors are the most common medical errors and can induce adverse events. Two approaches are available for evaluating and improving quality-of-care: the room-for-improvement model, in which problems are identified, plans are made to resolve them, and the results of the plans are measured; and the monitoring model, in which quality indicators are defined as relevant to potential problems and then monitored periodically. Indicators that reflect structures, processes, or outcomes have been developed by medical societies. Surveillance of these indicators is organized at the hospital or national level. Using a combination of methods improves the results. Errors are caused by combinations of human factors and system factors, and information must be obtained on how people make errors in the ICU environment. Preventive strategies are more likely to be effective if they rely on a system-based approach, in which organizational flaws are remedied, rather than a human-based approach of encouraging people not to make errors. The development of a safety culture in the ICU is crucial to effective prevention and should occur before the evaluation of safety programs, which are more likely to be effective when they involve bundles of measures. PMID:22339769
Validation of simplified centre of mass models during gait in individuals with chronic stroke.
Huntley, Andrew H; Schinkel-Ivy, Alison; Aqui, Anthony; Mansfield, Avril
2017-10-01
The feasibility of using a multiple segment (full-body) kinematic model in clinical gait assessment is difficult when considering obstacles such as time and cost constraints. While simplified gait models have been explored in healthy individuals, no such work to date has been conducted in a stroke population. The aim of this study was to quantify the errors of simplified kinematic models for chronic stroke gait assessment. Sixteen individuals with chronic stroke (>6months), outfitted with full body kinematic markers, performed a series of gait trials. Three centre of mass models were computed: (i) 13-segment whole-body model, (ii) 3 segment head-trunk-pelvis model, and (iii) 1 segment pelvis model. Root mean squared error differences were compared between models, along with correlations to measures of stroke severity. Error differences revealed that, while both models were similar in the mediolateral direction, the head-trunk-pelvis model had less error in the anteroposterior direction and the pelvis model had less error in the vertical direction. There was some evidence that the head-trunk-pelvis model error is influenced in the mediolateral direction for individuals with more severe strokes, as a few significant correlations were observed between the head-trunk-pelvis model and measures of stroke severity. These findings demonstrate the utility and robustness of the pelvis model for clinical gait assessment in individuals with chronic stroke. Low error in the mediolateral and vertical directions is especially important when considering potential stability analyses during gait for this population, as lateral stability has been previously linked to fall risk. Copyright © 2017 Elsevier Ltd. All rights reserved.
Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.
Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente
2014-07-15
Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
The potential for error in sampling
Jack Lewis
2000-01-01
Editor's note: The measurement of water quality parameters in environmental laboratories follows standard quality control protocols using methodologies approved by the U.S. Environmental Protection Agency. However, little attention has been given to quality assurance and quality control in activities outside the laboratory. This article describes some of those...
Relationships of Measurement Error and Prediction Error in Observed-Score Regression
ERIC Educational Resources Information Center
Moses, Tim
2012-01-01
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
Measurement error is often neglected in medical literature: a systematic review.
Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten
2018-06-01
In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
Measuring food intake in studies of obesity.
Lissner, Lauren
2002-12-01
The problem of how to measure habitual food intake in studies of obesity remains an enigma in nutritional research. The existence of obesity-specific underreporting was rather controversial until the advent of the doubly labelled water technique gave credence to previously anecdotal evidence that such a bias does in fact exist. This paper reviews a number of issues relevant to interpreting dietary data in studies involving obesity. Topics covered include: participation biases, normative biases, importance of matching method to study, selective underreporting, and a brief discussion of the potential implications of generalised and selective underreporting in analytical epidemiology. It is concluded that selective underreporting of certain food types by obese individuals would produce consequences in analytical epidemiological studies that are both unpredictable and complex. Since it is becoming increasingly acknowledged that selective reporting error does occur, it is important to emphasise that correction for energy intake is not sufficient to eliminate the biases from this type of error. This is true both for obesity-related selective reporting errors and more universal types of selective underreporting, e.g. foods of low social desirability. Additional research is urgently required to examine the consequences of this type of error.
An online detection system for aggregate sizes and shapes based on digital image processing
NASA Astrophysics Data System (ADS)
Yang, Jianhong; Chen, Sijia
2017-02-01
Traditional aggregate size measuring methods are time-consuming, taxing, and do not deliver online measurements. A new online detection system for determining aggregate size and shape based on a digital camera with a charge-coupled device, and subsequent digital image processing, have been developed to overcome these problems. The system captures images of aggregates while falling and flat lying. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, having good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt%, and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, they performed well for single-graded aggregates, and did not require a dispersion device. Our system is low-cost and easy to install. It can successfully achieve online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.
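A minimal sketch of the image-processing step described above, using a synthetic binary frame in place of the camera image; the threshold, area cut-off and shape descriptor are illustrative assumptions, not necessarily the system's exact algorithm.

    import cv2
    import numpy as np

    # synthetic stand-in for one camera frame of falling aggregates (white particles on black)
    frame = np.zeros((480, 640), np.uint8)
    cv2.ellipse(frame, (150, 200), (40, 25), 30, 0, 360, 255, -1)
    cv2.ellipse(frame, (400, 300), (20, 18), 0, 0, 360, 255, -1)
    cv2.ellipse(frame, (520, 120), (55, 20), 75, 0, 360, 255, -1)

    # segment particles and extract their contours (a real frame would be thresholded first)
    _, binary = cv2.threshold(frame, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    sizes, shapes = [], []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 50:                                   # ignore noise blobs
            continue
        d_equiv = 2.0 * np.sqrt(area / np.pi)           # equivalent circular diameter (pixels)
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        elongation = max(w, h) / max(min(w, h), 1e-6)   # simple shape (elongation) descriptor
        sizes.append(d_equiv)
        shapes.append(elongation)

    # a pixel-to-millimetre scale from calibration with standard globules would be
    # applied here before accumulating the size and shape distributions
    print("particles:", len(sizes))
    print("equivalent diameters (px):", np.round(sizes, 1))
    print("elongation ratios:", np.round(shapes, 2))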
Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz
2018-01-01
Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant of uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were assessed with self-report measures, and electroencephalographic neural reactivity was recorded as participants performed an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).
Effect of bar-code technology on the safety of medication administration.
Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K
2010-05-06
Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society
Horowitz-Kraus, Tzipi
2016-05-01
The error-detection mechanism aids in preventing error repetition during a given task. Electroencephalography demonstrates that error detection involves two event-related potential components: error-related and correct-response negativities (ERN and CRN, respectively). Dyslexia is characterized by slow, inaccurate reading. In particular, individuals with dyslexia have a less active error-detection mechanism during reading than typical readers. In the current study, we examined whether a reading training programme could improve the ability to recognize words automatically (lexical representations) in adults with dyslexia, thereby resulting in more efficient error detection during reading. Behavioural and electrophysiological measures were obtained using a lexical decision task before and after participants trained with the reading acceleration programme. ERN amplitudes were smaller in individuals with dyslexia than in typical readers before training but increased following training, as did behavioural reading scores. Differences between the pre-training and post-training ERN and CRN components were larger in individuals with dyslexia than in typical readers. Also, the error-detection mechanism as represented by the ERN/CRN complex might serve as a biomarker for dyslexia and be used to evaluate the effectiveness of reading intervention programmes. Copyright © 2016 John Wiley & Sons, Ltd.
Prescribers' expectations and barriers to electronic prescribing of controlled substances
Kim, Meelee; McDonald, Ann; Kreiner, Peter; Kelleher, Stephen J; Blackman, Michael B; Kaufman, Peter N; Carrow, Grant M
2011-01-01
Objective To better understand barriers associated with the adoption and use of electronic prescribing of controlled substances (EPCS), a practice recently established by US Drug Enforcement Administration regulation. Materials and methods Prescribers of controlled substances affiliated with a regional health system were surveyed regarding current electronic prescribing (e-prescribing) activities, current prescribing of controlled substances, and expectations and barriers to the adoption of EPCS. Results 246 prescribers (response rate of 64%) represented a range of medical specialties, with 43.1% of these prescribers current users of e-prescribing for non-controlled substances. Reported issues with controlled substances included errors, pharmacy call-backs, and diversion; most prescribers expected EPCS to address many of these problems, specifically reduce medical errors, improve work flow and efficiency of practice, help identify prescription diversion or misuse, and improve patient treatment management. Prescribers expected, however, that it would be disruptive to practice, and over one-third of respondents reported that carrying a security authentication token at all times would be so burdensome as to discourage adoption. Discussion Although adoption of e-prescribing has been shown to dramatically reduce medication errors, challenges to efficient processes and errors still persist from the perspective of the prescriber, which may interfere with the adoption of EPCS. Most prescribers regarded EPCS security measures as a small or moderate inconvenience (other than carrying a security token), with advantages outweighing the burden. Conclusion Prescribers are optimistic about the potential for EPCS to improve practice, but view certain security measures as a burden and potential barrier. PMID:21946239
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
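A minimal sketch of the parameter-tuning idea, assuming a synthetic dynamic-error series and a plain particle swarm (the paper's NAPSO additionally applies natural selection and simulated annealing); the search ranges and swarm constants are assumptions.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # hypothetical dynamic-error series: predict the next error from the previous 5 samples
    err = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
    X = np.array([err[i:i + 5] for i in range(len(err) - 5)])
    y = err[5:]

    def fitness(params):
        C, gamma = 10.0 ** params                   # particles search the exponents of C and gamma
        model = SVR(C=C, gamma=gamma)
        return -cross_val_score(model, X, y, cv=3,
                                scoring="neg_root_mean_squared_error").mean()

    # plain particle swarm optimization of the SVM hyperparameters
    n_particles, n_iter = 20, 30
    pos = rng.uniform([-2, -4], [3, 1], size=(n_particles, 2))   # log10(C), log10(gamma)
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [-2, -4], [3, 1])
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("best log10(C), log10(gamma):", gbest, " CV RMSE:", pbest_val.min())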
Optimizing Hybrid Metrology: Rigorous Implementation of Bayesian and Combined Regression
Henn, Mark-Alexander; Silver, Richard M.; Villarrubia, John S.; Zhang, Nien Fan; Zhou, Hui; Barnes, Bryan M.; Ming, Bin; Vladár, András E.
2015-01-01
Hybrid metrology, e.g., the combination of several measurement techniques to determine critical dimensions, is an increasingly important approach to meet the needs of the semiconductor industry. A proper use of hybrid metrology may yield not only more reliable estimates for the quantitative characterization of 3-D structures but also a more realistic estimation of the corresponding uncertainties. Recent developments at the National Institute of Standards and Technology (NIST) feature the combination of optical critical dimension (OCD) measurements and scanning electron microscope (SEM) results. The hybrid methodology offers the potential to make measurements of essential 3-D attributes that may not be otherwise feasible. However, combining techniques gives rise to essential challenges in error analysis and comparing results from different instrument models, especially the effect of systematic and highly correlated errors in the measurement on the χ2 function that is minimized. Both hypothetical examples and measurement data are used to illustrate solutions to these challenges. PMID:26681991
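A minimal sketch, under assumed numbers, of how correlated systematic errors can enter the χ2 function when combining two instruments; this is a generic generalized-least-squares illustration, not the NIST implementation.

    import numpy as np
    from scipy.optimize import minimize

    # hypothetical: OCD and SEM each measure the same critical dimension (nm)
    ocd = np.array([45.2, 45.6, 45.1, 45.4])
    sem = np.array([44.1, 44.3, 44.0])
    y = np.concatenate([ocd, sem])

    # covariance: independent random error per point plus a fully correlated
    # systematic contribution within each instrument (all values are assumptions)
    sig_rand, sig_sys_ocd, sig_sys_sem = 0.3, 0.5, 0.8
    cov = np.eye(len(y)) * sig_rand**2
    cov[:4, :4] += sig_sys_ocd**2          # correlated block for the OCD points
    cov[4:, 4:] += sig_sys_sem**2          # correlated block for the SEM points
    cov_inv = np.linalg.inv(cov)

    def chi2(cd):
        r = y - cd
        return float(r @ cov_inv @ r)

    res = minimize(lambda p: chi2(p[0]), x0=[45.0])
    cd_hat = res.x[0]
    var_cd = 1.0 / (np.ones(len(y)) @ cov_inv @ np.ones(len(y)))   # GLS uncertainty
    print(f"combined CD = {cd_hat:.2f} +/- {np.sqrt(var_cd):.2f} nm")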
Intraoperative visualization and assessment of electromagnetic tracking error
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Ungi, Tamas; Lasso, Andras; MacDonald, Andrew; Nanji, Sulaiman; Fichtinger, Gabor
2015-03-01
Electromagnetic tracking allows for increased flexibility in designing image-guided interventions; however, it is well understood that electromagnetic tracking is prone to error. Visualization and assessment of the tracking error should take place in the operating room with minimal interference with the clinical procedure. The goal was to achieve this ideal in an open-source software implementation in a plug and play manner, without requiring programming from the user. We use optical tracking as a ground truth. An electromagnetic sensor and optical markers are mounted onto a stylus device, pivot calibrated for both trackers. Electromagnetic tracking error is defined as the difference in tool-tip position between the electromagnetic and optical readings. Multiple measurements are interpolated into the thin-plate B-spline transform visualized in real time using 3D Slicer. All tracked devices are used in a plug and play manner through the open-source SlicerIGT and PLUS extensions of the 3D Slicer platform. Tracking error was measured multiple times to assess reproducibility of the method, both with and without placing ferromagnetic objects in the workspace. Results from exhaustive grid sampling and freehand sampling were similar, indicating that a quick freehand sampling is sufficient to detect unexpected or excessive field distortion in the operating room. The software is available as a plug-in for the 3D Slicer platform. Results demonstrate potential for visualizing electromagnetic tracking error in real time in intraoperative environments during feasibility clinical trials of image-guided interventions.
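A minimal sketch of the error-field interpolation idea, assuming synthetic paired optical/electromagnetic tip readings and using SciPy's thin-plate-spline radial basis interpolator as a stand-in for the transform visualized in 3D Slicer.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(2)

    # hypothetical paired stylus-tip readings (mm): optical tracker as ground truth,
    # electromagnetic tracker as the system under test
    p_optical = rng.uniform(0, 300, size=(200, 3))                    # sampled workspace
    distortion = 0.02 * p_optical[:, :1] * np.array([[0.3, -0.2, 0.5]])
    p_em = p_optical + distortion + rng.normal(0, 0.2, (200, 3))      # distorted EM readings

    error_vec = p_em - p_optical                                      # tracking error at the samples

    # thin-plate-spline interpolation of the error field over the workspace
    error_field = RBFInterpolator(p_optical, error_vec, kernel="thin_plate_spline")

    query = np.array([[150.0, 120.0, 80.0]])
    print("estimated EM error at query point (mm):", error_field(query)[0])
    print("error magnitude (mm):", np.linalg.norm(error_field(query)[0]))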
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
Error measuring system of rotary Inductosyn
NASA Astrophysics Data System (ADS)
Liu, Chengjun; Zou, Jibin; Fu, Xinghe
2008-10-01
The inductosyn is a kind of high-precision angle-position sensor. It has important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is calibrated by its error, so measuring this error during the production and application of the inductosyn is an important problem. At present, the error of an inductosyn is mainly obtained by manual measurement. The disadvantages of this approach cannot be ignored: high labour intensity for the operator, errors that are easily introduced, poor repeatability, and so on. In order to solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal can be obtained by precisely processing the output signals of the inductosyn and the optical dividing head. When the inductosyn rotates continuously, its zero-position error can be measured dynamically, and zero-error curves can be output automatically. The measuring and calculating errors caused by human factors can be overcome by this method, and it makes the measuring process quicker, more exact and more reliable. Experiments prove that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak value).
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2016-11-01
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Bao, Chuanchen; Li, Jiakun; Feng, Qibo; Zhang, Bin
2018-07-01
This paper introduces an error-compensation model for our measurement method to measure five motion errors of a rotary axis based on fibre laser collimation. The error-compensation model is established in a matrix form using the homogeneous coordinate transformation theory. The influences of the installation errors, error crosstalk, and manufacturing errors are analysed. The model is verified by both ZEMAX simulation and measurement experiments. The repeatability values of the radial and axial motion errors are significantly suppressed by more than 50% after compensation. The repeatability experiments of five degrees of freedom motion errors and the comparison experiments of two degrees of freedom motion errors of an indexing table were performed by our measuring device and a standard instrument. The results show that the repeatability values of the angular positioning error ε_z and the tilt motion error around the Y axis ε_y are 1.2″ and 4.4″, and the comparison deviations of the two motion errors are 4.0″ and 4.4″, respectively. The repeatability values of the radial and axial motion errors, δ_y and δ_z, are 1.3 and 0.6 µm, respectively. The repeatability value of the tilt motion error around the X axis ε_x is 3.8″.
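A minimal sketch of a homogeneous-coordinate error model, assuming a small-angle error transform composed with the nominal rotation (the composition order and the test point are assumptions); the numerical error values are taken from the repeatability figures quoted above.

    import numpy as np

    def rot_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0, 0],
                         [s,  c, 0, 0],
                         [0,  0, 1, 0],
                         [0,  0, 0, 1]], dtype=float)

    def error_motion(eps_x, eps_y, eps_z, delta_y, delta_z):
        # first-order (small-angle) homogeneous transform built from the five measured
        # motion errors: tilts about X and Y, angular positioning error about Z,
        # and radial/axial translations (angles in rad, translations in mm)
        return np.array([[1.0,    -eps_z,  eps_y, 0.0    ],
                         [eps_z,   1.0,   -eps_x, delta_y],
                         [-eps_y,  eps_x,  1.0,   delta_z],
                         [0.0,     0.0,    0.0,   1.0    ]])

    arcsec = np.pi / (180.0 * 3600.0)

    # actual pose modeled as the nominal rotation composed with the error transform;
    # error magnitudes converted from arcsec to rad and from micrometres to mm
    theta = np.deg2rad(30.0)
    T_actual = rot_z(theta) @ error_motion(3.8 * arcsec, 4.4 * arcsec, 1.2 * arcsec,
                                           1.3e-3, 0.6e-3)

    # effect on a point fixed to the rotary table, 100 mm from the axis
    p = np.array([100.0, 0.0, 50.0, 1.0])
    print("position error (mm):", (T_actual @ p - rot_z(theta) @ p)[:3])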
Impact of Measurement Error on Synchrophasor Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Aronis, Konstantinos N.; Ashikaga, Hiroshi
2018-01-01
Background Conflicting evidence exists on the efficacy of focal impulse and rotor modulation on atrial fibrillation ablation. A potential explanation is inaccurate rotor localization from multiple rotors coexistence and a relatively large (9–11 mm) inter-electrode distance (IED) of the multi-electrode basket catheter. Methods and results We studied a numerical model of cardiac action potential to reproduce one through seven rotors in a two-dimensional lattice. We estimated rotor location using phase singularity, Shannon entropy and dominant frequency. We then spatially downsampled the time series to create IEDs of 2–30 mm. The error of rotor localization was measured with reference to the dynamics of phase singularity at the original spatial resolution (IED = 1 mm). IED has a significant impact on the error using all the methods. When only one rotor is present, the error increases exponentially as a function of IED. At the clinical IED of 10 mm, the error is 3.8 mm (phase singularity), 3.7 mm (dominant frequency), and 11.8 mm (Shannon entropy). When there are more than one rotors, the error of rotor localization increases 10-fold. The error based on the phase singularity method at the clinical IED of 10 mm ranges from 30.0 mm (two rotors) to 96.1 mm (five rotors). Conclusions The magnitude of error of rotor localization using a clinically available basket catheter, in the presence of multiple rotors might be high enough to impact the accuracy of targeting during AF ablation. Improvement of catheter design and development of high-density mapping catheters may improve clinical outcomes of FIRM-guided AF ablation. PMID:28988690
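A minimal sketch of phase-singularity localization and the effect of coarser inter-electrode distance, using a hypothetical single-rotor phase map rather than the action potential model of the study.

    import numpy as np

    def wrap(a):
        return (a + np.pi) % (2 * np.pi) - np.pi

    def singularities(phase):
        # topological charge of each 2x2 plaquette: the wrapped phase differences summed
        # around the loop give +-2*pi at a phase singularity (rotor tip) and ~0 elsewhere
        d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])
        d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])
        d3 = wrap(phase[1:, :-1] - phase[1:, 1:])
        d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])
        charge = (d1 + d2 + d3 + d4) / (2 * np.pi)
        return np.argwhere(np.abs(charge) > 0.5)           # (row, col) of detected tips

    # hypothetical single-rotor phase map on a 1 mm lattice (a stand-in, not the paper's model)
    n = 200
    y, x = np.mgrid[0:n, 0:n].astype(float)
    phase = np.arctan2(y - 83.6, x - 127.3)

    ref = singularities(phase)[0] + 0.5                     # reference location at IED = 1 mm

    for ied in (2, 5, 10, 20):                              # coarser inter-electrode distances
        loc = singularities(phase[::ied, ::ied])
        if len(loc):
            est = (loc[0] + 0.5) * ied
            print(f"IED {ied:2d} mm: localization error {np.linalg.norm(est - ref):4.1f} mm")
        else:
            print(f"IED {ied:2d} mm: rotor not detected")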
Aronis, Konstantinos N; Ashikaga, Hiroshi
Conflicting evidence exists on the efficacy of focal impulse and rotor modulation on atrial fibrillation ablation. A potential explanation is inaccurate rotor localization from multiple rotors coexistence and a relatively large (9-11mm) inter-electrode distance (IED) of the multi-electrode basket catheter. We studied a numerical model of cardiac action potential to reproduce one through seven rotors in a two-dimensional lattice. We estimated rotor location using phase singularity, Shannon entropy and dominant frequency. We then spatially downsampled the time series to create IEDs of 2-30mm. The error of rotor localization was measured with reference to the dynamics of phase singularity at the original spatial resolution (IED=1mm). IED has a significant impact on the error using all the methods. When only one rotor is present, the error increases exponentially as a function of IED. At the clinical IED of 10mm, the error is 3.8mm (phase singularity), 3.7mm (dominant frequency), and 11.8mm (Shannon entropy). When there are more than one rotors, the error of rotor localization increases 10-fold. The error based on the phase singularity method at the clinical IED of 10mm ranges from 30.0mm (two rotors) to 96.1mm (five rotors). The magnitude of error of rotor localization using a clinically available basket catheter, in the presence of multiple rotors might be high enough to impact the accuracy of targeting during AF ablation. Improvement of catheter design and development of high-density mapping catheters may improve clinical outcomes of FIRM-guided AF ablation. Copyright © 2017 Elsevier Inc. All rights reserved.
Bojmehrani, Azadeh; Bergeron-Duchesne, Maude; Bouchard, Carmelle; Simard, Serge; Bouchard, Pierre-Alexandre; Vanderschuren, Abel; L'Her, Erwan; Lellouche, François
2014-07-01
Protective ventilation implementation requires the calculation of predicted body weight (PBW), determined by a formula based on gender and height. Consequently, height inaccuracy may be a limiting factor to correctly set tidal volumes. The objective of this study was to evaluate the accuracy of different methods in measuring heights in mechanically ventilated patients. Before cardiac surgery, actual height was measured with a height gauge while subjects were standing upright (reference method); the height was also estimated by alternative methods based on lower leg and forearm measurements. After cardiac surgery, upon ICU admission, a subject's height was visually estimated by a clinician and then measured with a tape measure while the subject was supine and undergoing mechanical ventilation. One hundred subjects (75 men, 25 women) were prospectively included. Mean PBW was 61.0 ± 9.7 kg, and mean actual weight was 30.3% higher. In comparison with the reference method, estimating the height visually and using the tape measure were less accurate than both lower leg and forearm measurements. Errors above 10% in calculating the PBW were present in 25 and 40 subjects when the tape measure or visual estimation of height was used in the formula, respectively. With lower leg and forearm measurements, 15 subjects had errors above 10% (P < .001). Our results demonstrate that significant variability exists between the different methods used to measure height in bedridden patients on mechanical ventilation. Alternative methods based on lower leg and forearm measurements are potentially interesting solutions to facilitate the accurate application of protective ventilation. Copyright © 2014 by Daedalus Enterprises.
Analytical and Photogrammetric Characterization of a Planar Tetrahedral Truss
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Adams, Richard R.; Rhodes, Marvin D.
1990-01-01
Future space science missions are likely to require near-optical quality reflectors which are supported by a stiff truss structure. This support truss should conform closely with its intended shape to minimize its contribution to the overall surface error of the reflector. The current investigation was conducted to evaluate the planar surface accuracy of a regular tetrahedral truss structure by comparing the results of predicted and measured node locations. The truss is a 2-ring hexagonal structure composed of 102 equal-length truss members. Each truss member is nominally 2 meters in length between node centers and is composed of a graphite/epoxy tube with aluminum nodes and joints. The axial stiffness and the length variation of the truss components were determined experimentally and incorporated into a static finite element analysis of the truss. From this analysis, the root mean square (RMS) surface error of the truss was predicted to be 0.11 mm (0.004 in). Photogrammetry tests were performed on the assembled truss to measure the normal displacements of the upper surface nodes and to determine if the truss would maintain its intended shape when subjected to repeated assembly. Considering the variation in the truss component lengths, the measured RMS error of 0.14 mm (0.006 in) in the assembled truss is relatively small. The test results also indicate that a repeatable truss surface is achievable. Several potential sources of error were identified and discussed.
Pressure probe and isopiestic psychrometer measure similar turgor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nonami, H.; Boyer, J.S.; Steudle, E.
1987-03-01
Turgor measured with a miniature pressure probe was compared to that measured with an isopiestic thermocouple psychrometer in mature regions of soybean (Glycine max (L.) Merr.) stems. The probe measured turgor directly in cells of intact stems whereas the psychrometer measured the water potential and osmotic potential of excised stem segments and turgor was calculated by difference. When care was taken to prevent dehydration when working with the pressure probe, and to avoid diffusive resistance and dilution errors with the psychrometer, both methods gave similar values of turgor whether the plants were dehydrating or rehydrating. This finding, together with the previously demonstrated similarity in turgor measured with the isopiestic psychrometer and a pressure chamber, indicates that the pressure probe provides accurate measurements of turgor despite the need to penetrate the cell. On the other hand, it suggests that as long as precautions are taken to obtain accurate values for the water potential and osmotic potential, turgor can be determined by isopiestic psychrometry in tissues not accessible to the pressure probe for physical reasons.
Auto-tracking system for human lumbar motion analysis.
Sui, Fuge; Zhang, Da; Lam, Shing Chun Benny; Zhao, Lifeng; Wang, Dongjun; Bi, Zhenggang; Hu, Yong
2011-01-01
Previous lumbar motion analyses suggest the usefulness of quantitatively characterizing spine motion. However, the application of such measurements is still limited by the lack of user-friendly automatic spine motion analysis systems. This paper describes an automatic analysis system to measure lumbar spine disorders that consists of a spine motion guidance device, an X-ray imaging modality to acquire digitized video fluoroscopy (DVF) sequences and an automated tracking module with a graphical user interface (GUI). DVF sequences of the lumbar spine are recorded during flexion-extension under a guidance device. The automatic tracking software utilizing a particle filter locates the vertebra-of-interest in every frame of the sequence, and the tracking result is displayed on the GUI. Kinematic parameters are also extracted from the tracking results for motion analysis. We observed that, in a bone model test, the maximum fiducial error was 3.7%, and the maximum repeatability error in translation and rotation was 1.2% and 2.6%, respectively. In our simulated DVF sequence study, the automatic tracking was not successful when the noise intensity was greater than 0.50. In a noisy situation, the maximal difference was 1.3 mm in translation and 1° in the rotation angle. The errors were calculated in translation (fiducial error: 2.4%, repeatability error: 0.5%) and in the rotation angle (fiducial error: 1.0%, repeatability error: 0.7%). However, the automatic tracking software could successfully track simulated sequences contaminated by noise at a density ≤ 0.5 with very high accuracy, providing good reliability and robustness. A clinical trial enrolling 10 healthy subjects and 2 lumbar spondylolisthesis patients was also conducted. Measurement with automatic tracking of DVF provided information not visible in conventional X-ray images. The results suggest the potential of the proposed system for clinical applications.
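A minimal sketch of a bootstrap particle filter for tracking a single vertebra centre, assuming synthetic frames and a stand-in blob likelihood in place of the template matching a real DVF tracker would use.

    import numpy as np

    rng = np.random.default_rng(3)

    def particle_filter(frames, likelihood, n_particles=500, motion_std=3.0):
        # bootstrap particle filter estimating one (x, y) centre per frame;
        # likelihood(frame, positions) returns one weight per candidate position
        h, w = frames[0].shape
        particles = np.column_stack([rng.uniform(0, w, n_particles),
                                     rng.uniform(0, h, n_particles)])
        track = []
        for frame in frames:
            particles += rng.normal(0, motion_std, particles.shape)   # random-walk motion model
            weights = likelihood(frame, particles)
            weights = weights / weights.sum()
            track.append(weights @ particles)                         # posterior mean estimate
            idx = rng.choice(n_particles, n_particles, p=weights)     # multinomial resampling
            particles = particles[idx]
        return np.array(track)

    def blob_likelihood(frame, positions):
        # stand-in likelihood: image intensity at each candidate position
        xs = np.clip(positions[:, 0].astype(int), 0, frame.shape[1] - 1)
        ys = np.clip(positions[:, 1].astype(int), 0, frame.shape[0] - 1)
        return frame[ys, xs] + 1e-9

    # synthetic frames: a bright blob moving along a known trajectory
    truth = np.array([[50 + t, 60 + 0.5 * t] for t in range(30)], dtype=float)
    frames = []
    for cx, cy in truth:
        yy, xx = np.mgrid[0:120, 0:120]
        frames.append(np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 5 ** 2)))

    est = particle_filter(frames, blob_likelihood)
    print("mean tracking error (px):", np.abs(est - truth).mean())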
ERIC Educational Resources Information Center
Kim, Matthew
2013-01-01
Government policies sometimes cause unintended consequences for other potentially desirable behaviors. One such policy is the charitable tax deduction, which encourages charitable giving by allowing individuals to deduct giving from taxable income. Whether charitable giving and other desirable behaviors are complements or substitutes affects the…
Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.
2008-01-01
A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from −1.3 to 1.6°C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2°C and mean error ranged from 0.1 to 1.3°C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
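One common way to fold the actual measurement residuals into a batch least-squares covariance is to scale the theoretical covariance by the average weighted residual variance; the sketch below illustrates that idea on a hypothetical linear problem and may differ in detail from the author's exact formulation.

    import numpy as np

    rng = np.random.default_rng(4)

    # hypothetical linear(ized) batch least-squares setup: m range measurements, n states
    m, n = 200, 6
    H = rng.normal(size=(m, n))                 # measurement partials
    sigma_assumed = 10.0                        # assumed range noise (m)
    W = np.eye(m) / sigma_assumed**2            # weights built from the assumed noise only

    x_true = rng.normal(size=n)
    # the actual errors are larger than assumed (mismodeling plus noise), which the
    # theoretical covariance cannot see
    y = H @ x_true + rng.normal(0, 25.0, m)

    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
    r = y - H @ x_hat                           # measurement residuals

    P_theoretical = np.linalg.inv(H.T @ W @ H)
    # empirical covariance: scale by the average weighted residual variance, so the
    # actual residuals (whatever their source) inflate the stated uncertainty
    scale = (r @ W @ r) / (m - n)
    P_empirical = scale * P_theoretical

    print("actual estimation error :", np.linalg.norm(x_hat - x_true))
    print("theoretical 1-sigma size:", np.sqrt(np.trace(P_theoretical)))
    print("empirical   1-sigma size:", np.sqrt(np.trace(P_empirical)))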
Error Analysis and Validation for Insar Height Measurement Induced by Slant Range
NASA Astrophysics Data System (ADS)
Zhang, X.; Li, T.; Fan, W.; Geng, X.
2018-04-01
The InSAR technique is an important method for large-area DEM extraction, and several factors have a significant influence on the accuracy of its height measurements. In this research, the effect of slant range error on InSAR height measurement was analysed and discussed. Based on the theory of InSAR height measurement, an error propagation model was derived assuming no coupling among the different factors, which directly characterises the relationship between slant range error and height measurement error. A theoretical analysis in combination with TanDEM-X parameters was then carried out to quantitatively evaluate the influence of slant range error on height measurement. In addition, a simulation validation of the slant-range-induced InSAR error model was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
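As a simplified illustration of how a slant range error maps into a height error, the sketch below uses a flat-earth geometry h = H - R·cos(θ) with a fixed look angle, so that dh ≈ -cos(θ)·dR to first order. The orbit height and look angle are assumed values, and the full error model in the paper also couples range into the interferometric baseline geometry, which is not shown here.

```python
import numpy as np

# Simplified flat-earth InSAR geometry (an assumption for illustration):
# target height h = H_sat - R * cos(theta). Holding the look angle theta
# fixed, a slant range error dR propagates to a height error of roughly
# dh = -cos(theta) * dR.
theta = np.deg2rad(35.0)             # look angle (assumed, TanDEM-X-like)
dR = np.array([0.5, 1.0, 2.0])       # slant range errors in metres
dh = -np.cos(theta) * dR             # first-order height errors
for r_err, h_err in zip(dR, dh):
    print(f"slant range error {r_err:.1f} m -> height error {h_err:+.2f} m")
```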
Error disclosure: a new domain for safety culture assessment.
Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J
2012-07-01
To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors found two factors to measure error disclosure culture: one factor is focused on the general culture of error disclosure and the second factor is focused on trust. Both error disclosure culture factors were unique from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively) while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.
Morbi, Abigail H M; Hamady, Mohamad S; Riga, Celia V; Kashef, Elika; Pearch, Ben J; Vincent, Charles; Moorthy, Krishna; Vats, Amit; Cheshire, Nicholas J W; Bicknell, Colin D
2012-08-01
To determine the type and frequency of errors during vascular interventional radiology (VIR) and design and implement an intervention to reduce error and improve efficiency in this setting. Ethical guidance was sought from the Research Services Department at Imperial College London. Informed consent was not obtained. Field notes were recorded during 55 VIR procedures by a single observer. Two blinded assessors identified failures from field notes and categorized them into one or more errors by using a 22-part classification system. The potential to cause harm, disruption to procedural flow, and preventability of each failure were determined. A preprocedural team rehearsal (PPTR) was then designed and implemented to target frequent preventable potential failures. Thirty-three procedures were observed subsequently to determine the efficacy of the PPTR. Nonparametric statistical analysis was used to determine the effect of intervention on potential failure rates, potential to cause harm and procedural flow disruption scores (Mann-Whitney U test), and number of preventable failures (Fisher exact test). Before intervention, 1197 potential failures were recorded, of which 54.6% were preventable. A total of 2040 errors were deemed to have occurred to produce these failures. Planning error (19.7%), staff absence (16.2%), equipment unavailability (12.2%), communication error (11.2%), and lack of safety consciousness (6.1%) were the most frequent errors, accounting for 65.4% of the total. After intervention, 352 potential failures were recorded. Classification resulted in 477 errors. Preventable failures decreased from 54.6% to 27.3% (P < .001) with implementation of PPTR. Potential failure rates per hour decreased from 18.8 to 9.2 (P < .001), with no increase in potential to cause harm or procedural flow disruption per failure. Failures during VIR procedures are largely the result of ineffective planning, communication error, and equipment difficulties, rather than of technical or patient-related issues. Many of these potential failures are preventable. A PPTR is an effective means of targeting frequent preventable failures, reducing procedural delays and improving patient safety.
Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley
2017-11-17
Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency (for example, in car seat clinics or during prototype user testing) to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation performance data revealed several areas of misuse of the CRS/booster seat associated with high potential injury risk. Collectively, findings indicate that standardized ESS ratings are useful for estimating injury risk potential associated with real-world CRS and booster seat installation errors.
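A minimal sketch of a composite score of this kind follows. The error categories, severity values, and counts are hypothetical, and the RPN is formed here as the product of an expert severity rating and an observed error frequency; the study's exact composite may be defined differently.

```python
# Hypothetical illustration of a risk priority number (RPN) built from an
# expert error severity score (ESS) and an observed installation error
# frequency. All names and numbers below are invented for illustration.
errors = {
    "harness too loose":        {"ess": 4.5, "observed": 11, "installs": 26},
    "seat installed too loose": {"ess": 4.0, "observed": 8,  "installs": 26},
    "chest clip misplaced":     {"ess": 2.5, "observed": 5,  "installs": 26},
}

for name, e in errors.items():
    frequency = e["observed"] / e["installs"]   # fraction of installs affected
    rpn = e["ess"] * frequency                  # composite injury-risk priority
    print(f"{name:26s} RPN = {rpn:.2f}")
```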
NASA Astrophysics Data System (ADS)
Hammi, A.; Placidi, L.; Weber, D. C.; Lomax, A. J.
2018-01-01
To exploit the full potential of proton therapy, accurate and on-line methods to verify the patient positioning and the proton range during the treatment are desirable. Here we propose and validate an innovative technique for determining patient misalignment uncertainties through the use of a small number of low dose, carefully selected proton pencil beams (‘range probes’, RPs) with sufficient energy that their residual Bragg peak (BP) position and shape can be measured on exit. Since any change of the patient orientation in relation to these beams will result in changes of the density heterogeneities through which they pass, our hypothesis is that patient misalignments can be deduced from measured changes in Bragg curve (BC) shape and range. As such, a simple and robust methodology has been developed that estimates average proton range and range dilution of the detected residual BC, in order to locate range probe positions with optimal prediction power for detecting misalignments. The validation of this RP based approach has been split into two phases. First, we retrospectively investigate its potential to detect translational patient misalignments under real clinical conditions. Second, we test it for determining rotational errors of an anthropomorphic phantom that was systematically rotated using an in-house developed high precision motion stage. Simulations of RPs in these two scenarios show that this approach could potentially predict translational errors to better than 1.5 mm and rotational errors to better than 1° using only three or five RP positions, respectively.
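The sketch below illustrates two summary metrics of the kind described, computed on a toy residual Bragg curve: an average range taken as the distal depth at 80% of the peak (R80) and a "range dilution" proxy taken as the distal 80-to-20% falloff width. Both definitions are assumptions for illustration; the published method may use different metrics.

```python
import numpy as np

# Toy residual Bragg curve: a Gaussian-like peak on a shallow ramp, with a
# crude distal cutoff. R80 and the distal falloff width summarize its shape.
depth = np.linspace(0, 30, 601)                       # cm of water
bragg = np.exp(-0.5 * ((depth - 24.0) / 0.6) ** 2) + 0.15 * depth / 24.0
bragg[depth > 25.5] = 0.0                             # crude distal cutoff

peak = bragg.max()
distal = depth >= depth[bragg.argmax()]               # distal side of the peak

def distal_depth_at(fraction):
    """First distal depth where the curve falls below fraction * peak."""
    idx = np.where(bragg[distal] <= fraction * peak)[0][0]
    return depth[distal][idx]

r80 = distal_depth_at(0.8)
dilution = distal_depth_at(0.2) - r80
print(f"R80 = {r80:.2f} cm, distal falloff width = {dilution:.2f} cm")
```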
Brain potentials measured during a Go/NoGo task predict completion of substance abuse treatment.
Steele, Vaughn R; Fink, Brandi C; Maurer, J Michael; Arbabshirani, Mohammad R; Wilber, Charles H; Jaffe, Adam J; Sidz, Anna; Pearlson, Godfrey D; Calhoun, Vince D; Clark, Vincent P; Kiehl, Kent A
2014-07-01
U.S. nationwide estimates indicate that 50% to 80% of prisoners have a history of substance abuse or dependence. Tailoring substance abuse treatment to specific needs of incarcerated individuals could improve effectiveness of treating substance dependence and preventing drug abuse relapse. We tested whether pretreatment neural measures of a response inhibition (Go/NoGo) task would predict which individuals would or would not complete a 12-week cognitive behavioral substance abuse treatment program. Adult incarcerated participants (n = 89; women n = 55) who volunteered for substance abuse treatment performed a Go/NoGo task while event-related potentials (ERPs) were recorded. Stimulus- and response-locked ERPs were compared between participants who completed (n = 68; women = 45) and discontinued (n = 21; women = 10) treatment. As predicted, stimulus-locked P2, response-locked error-related negativity (ERN/Ne), and response-locked error positivity (Pe), measured with windowed time-domain and principal component analysis, differed between groups. Using logistic regression and support-vector machine (i.e., pattern classifiers) models, P2 and Pe predicted treatment completion above and beyond other measures (i.e., N2, P300, ERN/Ne, age, sex, IQ, impulsivity, depression, anxiety, motivation for change, and years of drug abuse). Participants who discontinued treatment exhibited deficiencies in sensory gating, as indexed by smaller P2; error-monitoring, as indexed by smaller ERN/Ne; and adjusting response strategy posterror, as indexed by larger Pe. The combination of P2 and Pe reliably predicted 83.33% of individuals who discontinued treatment. These results may help in the development of individualized therapies, which could lead to more favorable, long-term outcomes. © 2013 Society of Biological Psychiatry Published by Society of Biological Psychiatry All rights reserved.
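A minimal sketch of this kind of pattern classification follows, using synthetic P2 and Pe amplitudes and a logistic-regression classifier from scikit-learn. The feature scales, effect directions, and sample sizes are modeled loosely on the description above but are otherwise invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative sketch (synthetic data): classify treatment completion from two
# ERP amplitudes, P2 and Pe, with a logistic-regression pattern classifier.
rng = np.random.default_rng(1)
n_complete, n_dropout = 68, 21

# Completers assumed to show larger P2 and smaller Pe than dropouts.
X_complete = np.column_stack([rng.normal(4.0, 1.5, n_complete),   # P2 (uV)
                              rng.normal(6.0, 2.0, n_complete)])  # Pe (uV)
X_dropout = np.column_stack([rng.normal(2.5, 1.5, n_dropout),
                             rng.normal(8.0, 2.0, n_dropout)])
X = np.vstack([X_complete, X_dropout])
y = np.array([1] * n_complete + [0] * n_dropout)   # 1 = completed treatment

clf = LogisticRegression()
acc = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print("cross-validated balanced accuracy:", acc.mean().round(2))
```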
Participation in the TOMS Science Team
NASA Technical Reports Server (NTRS)
Chance, Kelly; Hilsenrath, Ernest (Technical Monitor)
2002-01-01
Because of the nominal funding provided by this grant, some of the relevant research is partially funded by other sources. Research performed for this funding period included the following items: We have investigated errors in TOMS ozone measurements caused by the uncertainty in wavelength calibration, coupled with the ozone cross sections in the Huggins bands and their temperature dependence. Preliminary results show that 0.1 nm uncertainty in TOMS wavelength calibration at the ozone active wavelengths corresponds to approx. 1% systematic error in O3, and thus potential 1% biases among ozone trends from the various TOMS instruments. This conclusion will be revised for absolute O3 measurements as cross sections are further investigated for inclusion in the HITRAN database at the SAO, but the potential for relative errors remains. In order to aid further comparisons among TOMS and GOME ozone measurements, we have implemented our method of direct fitting of GOME radiances (BOAS) for O3, and now obtain the best fitting precision to date for GOME O3 columns. This will aid in future comparisons of the actual quantities measured and fitted for the two instrument types. We have made comparisons between GOME ICFA cloud fraction and cloud fraction determined from GOME data using the Ring effect in the Ca II lines. There is a strong correlation, as expected, but there are substantial systematic biases between the determinations. This study will be refined in the near future using the recently-developed GOME Cloud Retrieval Algorithm (GOMECAT). We have improved the SAO Ring effect determination to include better convolution with instrument transfer functions and inclusion of interferences by atmospheric absorbers (e.g., O3). This has been made available to the general community.
Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions
Onufriev, Alexey V.
2013-01-01
We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance comparable to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
A toolkit for measurement error correction, with a focus on nutritional epidemiology
Keogh, Ruth H; White, Ian R
2014-01-01
Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations, it is not feasible to observe the true exposure, but there may be available one or more repeated exposure measurements, for example, blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using the data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. PMID:24497385
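As a minimal sketch of regression calibration with repeated measurements, the simulation below generates a true exposure, two error-prone measurements, and an outcome, then compares the naive exposure-outcome slope with a calibrated one. The error structure is classical and all numbers are simulated, not study data.

```python
import numpy as np

# Regression-calibration sketch with classical measurement error and a repeat
# exposure measurement, loosely following the setting described above.
rng = np.random.default_rng(2)
n = 5000
x_true = rng.normal(0.0, 1.0, n)                  # true exposure
w1 = x_true + rng.normal(0.0, 1.0, n)             # error-prone measurement 1
w2 = x_true + rng.normal(0.0, 1.0, n)             # repeat measurement
y = 0.5 * x_true + rng.normal(0.0, 1.0, n)        # outcome

# Naive estimate: regress the outcome on the single error-prone measurement.
beta_naive = np.cov(y, w1)[0, 1] / np.var(w1)

# Regression calibration: use the repeats to estimate the error variance and
# the reliability ratio, build E[X | W1], and regress the outcome on it.
sigma2_u = np.var(w1 - w2) / 2.0                  # error variance from repeats
reliability = (np.var(w1) - sigma2_u) / np.var(w1)
x_hat = w1.mean() + reliability * (w1 - w1.mean())
beta_rc = np.cov(y, x_hat)[0, 1] / np.var(x_hat)

print(f"true slope 0.50, naive {beta_naive:.2f}, calibrated {beta_rc:.2f}")
```

The naive slope is attenuated toward zero by roughly the reliability ratio, while the calibrated slope recovers the true association under the classical-error assumption.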
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, O; Novak, A; Zeng, J
Purpose: Physics pre-treatment plan review is crucial to safe radiation oncology treatments. Studies show that most errors originate in treatment planning, which underscores the importance of physics plan review. As a QA measure, the physics review is of fundamental importance and is central to the profession of medical physics. However, little is known about its effectiveness. More hard data are needed. The purpose of this study was to quantify the effectiveness of physics review with the goal of improving it. Methods: This study analyzed 315 “potentially serious” near-miss incidents within an institutional incident learning system collected over a two-year period. 139 of these originated prior to physics review and were found at the review or after. Incidents were classified as events that: 1) were detected by physics review, 2) could have been detected (but were not), and 3) could not have been detected. Category 1 and 2 events were classified by which specific check (within physics review) detected or could have detected the event. Results: Of the 139 analyzed events, 73/139 (53%) were detected or could have been detected by the physics review, although 42/73 (58%) were not actually detected. 45/73 (62%) errors originated in treatment planning, making physics review the first step in the workflow that could detect the error. Two specific physics checks were particularly effective (combined effectiveness of >20%): verifying DRRs (8/73) and verifying isocenter (7/73). Software-based plan checking systems were evaluated and found to have potential effectiveness of 40%. Given current data structures, software implementations of some tests such as the isocenter verification check would be challenging. Conclusion: Physics plan review is a key safety measure and can detect the majority of reported events. However, a majority of events that potentially could have been detected were NOT detected in this study, indicating the need to improve the performance of physics review.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei
2018-01-01
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model’s performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM’s parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models’ performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
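The sketch below shows the general idea of swarm-tuned support vector regression on a synthetic sensor error series. It uses a plain PSO over the SVR hyper-parameters; the NAPSO variant described above additionally incorporates natural selection and simulated annealing, which are omitted here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Plain PSO tuning of SVR hyper-parameters (log10 C, log10 gamma) on a
# synthetic dynamic-measurement-error series; fitness is cross-validated MSE.
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 300)
error_series = 0.3 * np.sin(2.0 * t) + 0.05 * rng.normal(size=t.size)
X, y = t.reshape(-1, 1), error_series

def fitness(params):
    log_c, log_gamma = params
    model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_gamma, epsilon=0.01)
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

n_particles, n_iter = 12, 20
pos = rng.uniform([-1, -3], [3, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -3], [3, 1])
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best (log10 C, log10 gamma):", gbest.round(2),
      "cv MSE:", pbest_val.min().round(4))
```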
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, T; de Supinski, B R; Schulz, M
Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
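The core idea of wavelet-based compression of a load trace can be sketched serially with the PyWavelets package (assumed installed). The system described above performs the transform in parallel across processes and adds further encoding stages; that machinery is omitted here.

```python
import numpy as np
import pywt

# Compress a per-process load trace by keeping only the largest wavelet
# coefficients, then measure the reconstruction error.
rng = np.random.default_rng(4)
steps = np.arange(1024)
load = 1.0 + 0.2 * np.sin(steps / 40.0) + 0.05 * rng.normal(size=steps.size)
load[500:520] += 0.5                       # a localized load imbalance

coeffs = pywt.wavedec(load, "db4", level=5)
flat, slices = pywt.coeffs_to_array(coeffs)

keep = 64                                   # coefficients kept out of ~1024
threshold = np.sort(np.abs(flat))[-keep]
flat_compressed = np.where(np.abs(flat) >= threshold, flat, 0.0)

recon = pywt.waverec(pywt.array_to_coeffs(flat_compressed, slices,
                                           output_format="wavedec"), "db4")
rms_error = np.sqrt(np.mean((recon[:load.size] - load) ** 2))
print(f"kept {keep} coefficients, RMS reconstruction error = {rms_error:.3f}")
```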
NASA Technical Reports Server (NTRS)
Pallix, Joan B.; Copeland, Richard A.; Arnold, James O. (Technical Monitor)
1995-01-01
Advanced laser-based diagnostics have been developed to examine catalytic effects and atom/surface interactions on thermal protection materials. This study establishes the feasibility of using laser-induced fluorescence for detection of O and N atom loss in a diffusion tube to measure surface catalytic activity. The experimental apparatus is versatile in that it allows fluorescence detection to be used for measuring species selective recombination coefficients as well as diffusion tube and microwave discharge diagnostics. Many of the potential sources of error in measuring atom recombination coefficients by this method have been identified and taken into account. These include scattered light, detector saturation, sample surface cleanliness, reactor design, gas pressure and composition, and selectivity of the laser probe. Recombination coefficients and their associated errors are reported for N and O atoms on a quartz surface at room temperature.
Polarized Raman spectroscopy of bone tissue: watch the scattering
NASA Astrophysics Data System (ADS)
Raghavan, Mekhala; Sahar, Nadder D.; Wilson, Robert H.; Mycek, Mary-Ann; Pleshko, Nancy; Kohn, David H.; Morris, Michael D.
2010-02-01
Polarized Raman spectroscopy is widely used in the study of molecular composition and orientation in synthetic and natural polymer systems. Here, we describe the use of Raman spectroscopy to extract quantitative orientation information from bone tissue. Bone tissue poses special challenges to the use of polarized Raman spectroscopy for measurement of orientation distribution functions because the tissue is turbid and birefringent. Multiple scattering in turbid media depolarizes light and is potentially a source of error. Using a Raman microprobe, we show that repeating the measurements with a series of objectives of differing numerical apertures can be used to assess the contributions of sample turbidity and depth of field to the calculated orientation distribution functions. With this test, an optic can be chosen to minimize the systematic errors introduced by multiple scattering events. With adequate knowledge of the optical properties of these bone tissues, we can determine if elastic light scattering affects the polarized Raman measurements.
Errors in laboratory medicine: practical lessons to improve patient safety.
Howanitz, Peter J
2005-10-01
Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.
Simulations of a PSD Plastic Neutron Collar for Assaying Fresh Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hausladen, Paul; Newby, Jason; McElroy, Robert Dennis
The potential performance of a notional active coincidence collar for assaying uranium fuel based on segmented detectors constructed from the new PSD plastic fast organic scintillator with pulse shape discrimination capability was investigated in simulation. Like the International Atomic Energy Agency's present Uranium Neutron Collar for LEU (UNCL), the PSD plastic collar would also function by stimulating fission in the 235U content of the fuel with a moderated 241Am/Li neutron source and detecting instances of induced fission via neutron coincidence counting. In contrast to the moderated detectors of the UNCL, the fast time scale of detection in the scintillator eliminates statistical errors due to accidental coincidences that limit the performance of the UNCL. However, the potential to detect a single neutron multiple times historically has been one of the properties of organic scintillator detectors that has prevented their adoption for international safeguards applications. Consequently, as part of the analysis of simulated data, a method was developed by which true neutron-neutron coincidences can be distinguished from inter-detector scatter that takes advantage of the position and timing resolution of segmented detectors. Then, the performance of the notional simulated coincidence collar was evaluated for assaying a variety of fresh fuels, including some containing burnable poisons and partial defects. In these simulations, particular attention was paid to the analysis of fast mode measurements. In fast mode, a Cd liner is placed inside the collar to shield the fuel from the interrogating source and detector moderators, thereby eliminating the thermal neutron flux that is most sensitive to the presence of burnable poisons that are ubiquitous in modern nuclear fuels. The simulations indicate that the predicted precision of fast mode measurements is similar to what can be achieved by the present UNCL in thermal mode. For example, the statistical accuracy of a ten-minute measurement of fission coincidences collected in fast mode will be approximately 1% for most fuels of interest, yielding a ~1.4% error after subtraction of a five minute measurement of the spontaneous fissions from 238U in the fuel, a ~2% error in analyzed linear density after accounting for the slope of the calibration curve, and a ~2.9% total error after addition of an assumed systematic error of 2%.
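The quoted error budget is consistent with independent uncertainties combined in quadrature, which is an assumption here but approximately reproduces the ~1.4% and ~2.9% figures.

```python
import math

# Combine the quoted uncertainties in quadrature: a ~1% statistical error on
# the active measurement and a comparable error on the passive 238U
# subtraction, then a ~2% linear-density error combined with a 2% assumed
# systematic error.
stat_active = 0.010
stat_passive_subtraction = 0.010
after_subtraction = math.hypot(stat_active, stat_passive_subtraction)

linear_density = 0.020            # after applying the calibration-curve slope
systematic = 0.020
total = math.hypot(linear_density, systematic)

print(f"after subtraction: {after_subtraction:.1%}, total: {total:.1%}")
```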
NASA Astrophysics Data System (ADS)
Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang
2018-02-01
This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, we can easily extract the laser ranging measurement error caused by rotation errors of the gimbal mount axes from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and perpendicular to the optical axis are driven by error motions of the gimbal mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes is recorded in the readings of the laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm when the radial and axial error motions were within ±10 μm. The experimental method simplified the experimental procedure, and the spherical mirror reduced the influence of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.
NASA Astrophysics Data System (ADS)
Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.
2018-04-01
Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA, from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has previously been well quantified by the California Air Resources Board using various methods, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m/s, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and aircraft atmospheric GHG observations in top-down urban emission monitoring systems.
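A toy version of the covariance-scaling idea is sketched below for a single point-source rate: observations with larger wind-speed mismatch get larger model-data mismatch variance and therefore less weight. All numbers are synthetic, and the study's inversion is spatially and temporally resolved rather than a single-scalar fit.

```python
import numpy as np

# Estimate one point-source emission rate from synthetic airborne
# enhancements, inflating the model-data mismatch variance where modeled and
# observed wind speeds disagree.
rng = np.random.default_rng(5)
n_obs = 40
true_rate = 50.0                              # arbitrary emission-rate units

h = rng.uniform(0.5, 2.0, n_obs)              # footprint sensitivities (assumed)
wind_error = rng.uniform(0.0, 10.0, n_obs)    # |modeled - observed| wind, m/s
y = h * true_rate + rng.normal(0.0, 1.0 + 0.5 * wind_error)   # enhancements

# Model-data mismatch variance: baseline plus a wind-error-dependent term.
r_diag = (1.0 + 0.5 * wind_error) ** 2
w = 1.0 / r_diag

rate_uniform = (h @ y) / (h @ h)              # ignores wind-speed errors
rate_scaled = (w * h) @ y / ((w * h) @ h)     # downweights poorly modeled obs
print(f"true {true_rate:.1f}, uniform-R {rate_uniform:.1f}, "
      f"scaled-R {rate_scaled:.1f}")
```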
Neutrino mass and dark energy from weak lensing.
Abazajian, Kevork N; Dodelson, Scott
2003-07-25
Weak gravitational lensing of background galaxies by intervening matter directly probes the mass distribution in the Universe. This distribution is sensitive to both the dark energy and neutrino mass. We examine the potential of lensing experiments to measure features of both simultaneously. Focusing on the radial information contained in a future deep 4000 deg² survey, we find that the expected (1-sigma) error on a neutrino mass is 0.1 eV, if the dark-energy parameters are allowed to vary. The constraints on dark-energy parameters are similarly restrictive, with errors on w of 0.09.
Honda, Michitaka
2014-04-01
Several improvements were implemented in the edge method for measuring the presampled modulation transfer function (MTF). First, a technique for estimating the edge angle was developed by applying a principal components analysis algorithm; the estimation error was statistically confirmed to be less than 0.01 even in the presence of quantum noise. Secondly, the geometrical edge slope was approximated using a rationalized number, making it possible to obtain an oversampled edge response function (ESF) with equal intervals. Thirdly, the final MTF was estimated as the average of multiple MTFs calculated for local areas; this averaging operation eliminates the errors caused by the rationalized approximation. Computer-simulated images were used to evaluate the accuracy of the method. The relative error between the estimated MTF and the theoretical MTF at the Nyquist frequency was less than 0.5% when the MTF was expressed as a sinc function. Good agreement was also observed for estimated MTFs representing an indirect detector and a phase-contrast detector. The high accuracy of the MTF estimation was also confirmed even for edge angles of around 10 degrees, which suggests the potential for simplifying the measurement conditions. The proposed method could be incorporated into an automated measurement technique implemented as a software application.
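Two pieces of a slanted-edge MTF measurement are sketched below: a principal-components estimate of the edge angle from edge-point coordinates, and the standard ESF-to-LSF-to-MTF chain on a synthetic oversampled edge. The rational-slope binning and local-area averaging described above are omitted, and the test data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# (1) PCA-style edge-angle estimate from noisy edge-crossing coordinates.
true_angle = np.deg2rad(10.0)
rows = np.arange(200.0)
cols = rows * np.tan(true_angle) + 50.0 + rng.normal(0.0, 0.05, rows.size)
pts = np.column_stack([cols - cols.mean(), rows - rows.mean()])
_, _, vt = np.linalg.svd(pts, full_matrices=False)   # vt[0]: edge direction
est_angle = np.arctan2(abs(vt[0, 0]), abs(vt[0, 1]))
print(f"edge angle: true 10.00 deg, estimated {np.degrees(est_angle):.2f} deg")

# (2) ESF -> LSF -> MTF for a Gaussian-blurred edge oversampled at 0.1 pixel.
dx, sigma = 0.1, 0.8
x = np.arange(-20.0, 20.0, dx)
esf = np.cumsum(np.exp(-0.5 * (x / sigma) ** 2)) * dx   # synthetic ESF
lsf = np.gradient(esf, dx)                              # differentiate ESF
lsf /= lsf.sum() * dx                                   # unit-area LSF
freqs = np.fft.rfftfreq(x.size, d=dx)                   # cycles per pixel
mtf = np.abs(np.fft.rfft(lsf)) * dx
mtf /= mtf[0]
analytic = np.exp(-2.0 * (np.pi * freqs * sigma) ** 2)  # Gaussian-blur MTF
i_nyq = np.argmin(np.abs(freqs - 0.5))
print(f"MTF at Nyquist: estimated {mtf[i_nyq]:.4f}, analytic {analytic[i_nyq]:.4f}")
```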
Jacobson, Peggy F; Walden, Patrick R
2013-08-01
This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Narrative samples were obtained from 48 bilingual children in both of their languages using the suggested narrative retell protocol and coding conventions as per Systematic Analysis of Language Transcripts (SALT; Miller & Iglesias, 2008) software. An additional lexical diversity measure, VocD, was also calculated. A series of logistic hierarchical regressions explored the utility of the number of different words, the VocD statistic, and word and morpheme omissions in each language for predicting language status. Omission errors turned out to be the best predictors of bilingual language impairment at all ages, and this held true across languages. Although lexical diversity measures did not predict typical or atypical language status, the measures were significantly related to oral language proficiency in English and Spanish. The results underscore the significance of omission errors in bilingual language impairment while simultaneously revealing the limitations of lexical diversity measures as indicators of impairment. The relationship between lexical diversity and oral language proficiency highlights the importance of considering relative language proficiency in bilingual assessment.
Schmidt, Frank L; Le, Huy; Ilies, Remus
2003-06-01
On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.
Endoscopic Stone Measurement During Ureteroscopy.
Ludwig, Wesley W; Lim, Sunghwan; Stoianovici, Dan; Matlaga, Brian R
2018-01-01
Currently, stone size cannot be accurately measured while performing flexible ureteroscopy (URS). We developed novel software for ureteroscopic, stone size measurement, and then evaluated its performance. A novel application capable of measuring stone fragment size, based on the known distance of the basket tip in the ureteroscope's visual field, was designed and calibrated in a laboratory setting. Complete URS procedures were recorded and 30 stone fragments were extracted and measured using digital calipers. The novel software program was applied to the recorded URS footage to obtain ureteroscope-derived stone size measurements. These ureteroscope-derived measurements were then compared with the actual-measured fragment size. The median longitudinal and transversal errors were 0.14 mm (95% confidence interval [CI] 0.1, 0.18) and 0.09 mm (95% CI 0.02, 0.15), respectively. The overall software accuracy and precision were 0.17 and 0.15 mm, respectively. The longitudinal and transversal measurements obtained by the software and digital calipers were highly correlated (r = 0.97 and 0.93). Neither stone size nor stone type was correlated with error measurements. This novel method and software reliably measured stone fragment size during URS. The software ultimately has the potential to make URS safer and more efficient.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
Estimating Uncertainty in Long Term Total Ozone Records from Multiple Sources
NASA Technical Reports Server (NTRS)
Frith, Stacey M.; Stolarski, Richard S.; Kramarova, Natalya; McPeters, Richard D.
2014-01-01
Total ozone measurements derived from the TOMS and SBUV backscattered solar UV instrument series cover the period from late 1978 to the present. As the SBUV series of instruments comes to an end, we look to the 10 years of data from the AURA Ozone Monitoring Instrument (OMI) and two years of data from the Ozone Mapping Profiler Suite (OMPS) on board the Suomi National Polar-orbiting Partnership satellite to continue the record. When combining these records to construct a single long-term data set for analysis we must estimate the uncertainty in the record resulting from potential biases and drifts in the individual measurement records. In this study we present a Monte Carlo analysis used to estimate uncertainties in the Merged Ozone Dataset (MOD), constructed from the Version 8.6 SBUV2 series of instruments. We extend this analysis to incorporate OMI and OMPS total ozone data into the record and investigate the impact of multiple overlapping measurements on the estimated error. We also present an updated column ozone trend analysis and compare the size of statistical error (error from variability not explained by our linear regression model) to that from instrument uncertainty.
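A minimal Monte Carlo sketch of this kind of merged-record uncertainty estimate follows: each instrument record receives a random calibration offset and drift, the records are merged by averaging over overlaps, and the spread of the fitted trend across realizations estimates the instrument-related trend uncertainty. The instrument periods and error magnitudes are illustrative only, not the MOD values.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1979.0, 2014.0, 1.0 / 12.0)
true_ozone = 300.0 + 0.05 * (years - years[0])          # DU, weak true trend

# (start, end) of three hypothetical overlapping instrument records.
periods = [(1979.0, 1995.0), (1993.0, 2006.0), (2004.0, 2014.0)]

trends = []
for _ in range(2000):
    merged_sum = np.zeros_like(years)
    merged_cnt = np.zeros_like(years)
    for start, end in periods:
        mask = (years >= start) & (years < end)
        offset = rng.normal(0.0, 1.0)                    # DU calibration bias
        drift = rng.normal(0.0, 0.1)                     # DU per year drift
        record = true_ozone[mask] + offset + drift * (years[mask] - start)
        merged_sum[mask] += record
        merged_cnt[mask] += 1
    merged = merged_sum / merged_cnt                     # average over overlaps
    trends.append(np.polyfit(years - years[0], merged, 1)[0])

trends = np.asarray(trends)
print(f"trend: true 0.050, mean {trends.mean():.3f}, "
      f"1-sigma from instrument errors {trends.std():.3f} DU/yr")
```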
Improved estimation of anomalous diffusion exponents in single-particle tracking experiments
NASA Astrophysics Data System (ADS)
Kepten, Eldad; Bronshtein, Irena; Garini, Yuval
2013-05-01
The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
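The short-time-lag correction for measurement noise can be sketched as follows: a Brownian trajectory (true exponent alpha = 1) is corrupted with static localization error, a pure power-law fit of the time-averaged MSD then underestimates the exponent, and adding a constant noise term to the fit recovers it. The heterogeneity correction discussed above is not shown, and all parameters are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)
n_steps, dt, D, loc_sigma = 5000, 0.1, 0.05, 0.1
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_steps, 2))
traj = np.cumsum(steps, axis=0) + rng.normal(0.0, loc_sigma, size=(n_steps, 2))

# Time-averaged MSD over short lags, where localization noise matters most.
lags = np.arange(1, 21)
tamsd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                  for lag in lags])
tau = lags * dt

def power_law(t, D_, a_):
    return 4.0 * D_ * t ** a_

def with_noise(t, D_, a_, s2):
    return 4.0 * D_ * t ** a_ + 4.0 * s2      # constant localization term

(D1, a1), _ = curve_fit(power_law, tau, tamsd, p0=[0.1, 1.0])
(D2, a2, s2), _ = curve_fit(with_noise, tau, tamsd, p0=[0.1, 1.0, 1e-3])
print(f"alpha, power law only: {a1:.2f}; with noise term: {a2:.2f} (true 1.00)")
```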
NASA Astrophysics Data System (ADS)
Lee, Minho; Cho, Nahm-Gyoo
2013-09-01
A new probing and compensation method is proposed to improve the three-dimensional (3D) measuring accuracy of 3D shapes, including irregular surfaces. A new tactile coordinate measuring machine (CMM) probe with a five-degree-of-freedom (5-DOF) force/moment sensor using carbon fiber plates was developed. The proposed method efficiently removes the anisotropic sensitivity error and reduces the stylus deformation and actual-contact-point estimation errors, which are the major error components of shape measurement using touch probes. The relationship between the measuring force and the accuracy of the actual-contact-point and stylus-deformation estimates is examined for practical use of the proposed method. An appropriate measuring force condition is presented for precision measurement.
Sea Ice Topography Profiling using Laser Altimetry from Small Unmanned Aircraft Systems
NASA Astrophysics Data System (ADS)
Crocker, Roger Ian
Arctic sea ice is undergoing a dramatic transition from a perennial ice pack with a high prevalence of old multiyear ice, to a predominantly seasonal ice pack composed primarily of young first-year and second-year ice. This transition has brought about changes in the sea ice thickness and topography characteristics, which will further affect the evolution and survivability of the ice pack. The varying ice conditions have substantial implications for commercial operations, international affairs, regional and global climate, our ability to model climate dynamics, and the livelihood of Arctic inhabitants. A number of satellite and airborne missions are dedicated to monitoring sea ice, but they are limited by their spatial and temporal resolution and coverage. Given the fast rate of sea ice change and its pervasive implications, enhanced observational capabilities are needed to augment the current strategies. The CU Laser Profilometer and Imaging System (CULPIS) is designed specifically for collecting fine-resolution elevation data and imagery from small unmanned aircraft systems (UAS), and has great potential to complement ongoing missions. This altimeter system has been integrated into four different UAS, and has been deployed during Arctic and Antarctic science campaigns. The CULPIS elevation measurement accuracy is shown to be 95±25 cm, and is limited primarily by GPS positioning error (<25 cm), aircraft attitude determination error (<20 cm), and sensor misalignment error (<20 cm). The relative error is considerably smaller over short flight distances, and the measurement precision is shown to be <10 cm over a distance of 200 m. Given its fine precision, the CULPIS is well suited for measuring sea ice topography, and observed ridge height and ridge separation distributions are found to agree with theoretical distributions to within 5%. Simulations demonstrate the inability of coarse-resolution measurements to accurately represent the theoretical distributions, with differences up to 30%. Future efforts should focus on reducing the total measurement error to <20 cm to make the CULPIS suitable for detecting ice sheet elevation change.
SU-E-T-257: Output Constancy: Reducing Measurement Variations in a Large Practice Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedrick, K; Fitzgerald, T; Miller, R
2014-06-01
Purpose: To standardize output constancy check procedures in a large medical physics practice group covering multiple sites, in order to identify and reduce small systematic errors caused by differences in equipment and the procedures of multiple physicists. Methods: A standardized machine output constancy check for both photons and electrons was instituted within the practice group in 2010. After conducting annual TG-51 measurements in water and adjusting the linac to deliver 1.00 cGy/MU at Dmax, an acrylic phantom (comparable at all sites) and PTW Farmer ion chamber are used to obtain monthly output constancy reference readings. From the collected charge reading, measurements of air pressure and temperature, and chamber Ndw and Pelec, a value we call the Kacrylic factor is determined, relating the chamber reading in acrylic to the dose in water with standard set-up conditions. This procedure easily allows for multiple equipment combinations to be used at any site. The Kacrylic factors and output results from all sites and machines are logged monthly in a central database and used to monitor trends in calibration and output. Results: The practice group consists of 19 sites, currently with 34 Varian and 8 Elekta linacs (24 Varian and 5 Elekta linacs in 2010). Over the past three years, the standard deviation of Kacrylic factors measured on all machines decreased by 20% for photons and high energy electrons as systematic errors were found and reduced. Low energy electrons showed very little change in the distribution of Kacrylic values. Small errors in linac beam data were found by investigating outlier Kacrylic values. Conclusion: While the use of acrylic phantoms introduces an additional source of error through small differences in depth and effective depth, the new standardized procedure eliminates potential sources of error from using many different phantoms and results in more consistent output constancy measurements.
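One plausible form of this monthly constancy calculation is sketched below. The exact definition of the Kacrylic factor is not reproduced in the abstract, so the formulation, along with all chamber readings and coefficients, is an assumption for illustration; only the standard TG-51 temperature-pressure correction is a known formula.

```python
# Sketch of a constancy check built from an acrylic-phantom chamber reading.
def p_tp(temp_c, pressure_kpa):
    """TG-51 temperature-pressure correction (reference 22 C, 101.33 kPa)."""
    return (273.2 + temp_c) / 295.2 * (101.33 / pressure_kpa)

def k_acrylic(reading_nc, temp_c, pressure_kpa, n_dw, p_elec, dose_cgy=100.0):
    """Assumed definition: delivered water dose over the corrected reading."""
    corrected = reading_nc * p_tp(temp_c, pressure_kpa) * p_elec * n_dw
    return dose_cgy / corrected

# Hypothetical baseline reading taken right after annual TG-51 calibration.
k_ref = k_acrylic(reading_nc=18.52, temp_c=21.4, pressure_kpa=98.6,
                  n_dw=5.39, p_elec=1.000)

# Hypothetical later monthly reading; output constancy is the ratio to baseline.
k_month = k_acrylic(reading_nc=18.36, temp_c=22.8, pressure_kpa=99.1,
                    n_dw=5.39, p_elec=1.000)
print(f"output constancy = {k_ref / k_month:.3f} (1.000 = no change)")
```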
Should Studies of Diabetes Treatment Stratification Correct for Baseline HbA1c?
Jones, Angus G.; Lonergan, Mike; Henley, William E.; Pearson, Ewan R.; Hattersley, Andrew T.; Shields, Beverley M.
2016-01-01
Aims: Baseline HbA1c is a major predictor of response to glucose lowering therapy and therefore a potential confounder in studies aiming to identify other predictors. However, baseline adjustment may introduce error if the association between baseline HbA1c and response is substantially due to measurement error and regression to the mean. We aimed to determine whether studies of predictors of response should adjust for baseline HbA1c. Methods: We assessed the relationship between baseline HbA1c and glycaemic response in 257 participants treated with GLP-1R agonists and assessed whether it reflected measurement error and regression to the mean using duplicate ‘pre-baseline’ HbA1c measurements not included in the response variable. In this cohort and an additional 2659 participants treated with sulfonylureas, we assessed the relationship between covariates associated with baseline HbA1c and treatment response with and without baseline adjustment, and with a bias correction using pre-baseline HbA1c to adjust for the effects of error in baseline HbA1c. Results: Baseline HbA1c was a major predictor of response (R² = 0.19, β = -0.44, p < 0.001). The association between pre-baseline and response was similar, suggesting the greater response at higher baseline HbA1cs is not mainly due to measurement error and subsequent regression to the mean. In unadjusted analysis in both cohorts, factors associated with baseline HbA1c were associated with response; however, these associations were weak or absent after adjustment for baseline HbA1c. Bias correction did not substantially alter associations. Conclusions: Adjustment for the baseline HbA1c measurement is a simple and effective way to reduce bias in studies of predictors of response to glucose lowering therapy. PMID:27050911
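The 'pre-baseline' check can be sketched with a small simulation: because response is defined as baseline minus follow-up, measurement error in the baseline value creates a spurious baseline-response association (regression to the mean), while a duplicate baseline measurement shares no error with the response, so comparing the two slopes indicates how much of the association is real. All effect sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 5000
true_base = rng.normal(8.5, 1.2, n)                  # true baseline HbA1c (%)
meas_sd = 0.3                                        # assay measurement error
baseline = true_base + rng.normal(0.0, meas_sd, n)   # baseline used in response
pre_base = true_base + rng.normal(0.0, meas_sd, n)   # duplicate pre-baseline

# Assume a genuinely larger response at higher true baseline, plus noise.
true_response = 0.3 * (true_base - 8.5) + rng.normal(0.0, 0.5, n)
follow_up = true_base - true_response + rng.normal(0.0, meas_sd, n)
response = baseline - follow_up                      # observed response

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

print(f"slope on baseline:     {slope(baseline, response):.2f}")
print(f"slope on pre-baseline: {slope(pre_base, response):.2f} (no shared error)")
```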
Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.
Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L
2018-05-01
Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.
From the Lab to the real world: sources of error in UF6 gas enrichment monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lombardi, Marcie L.
2012-03-01
Safeguarding uranium enrichment facilities is a serious concern for the International Atomic Energy Agency (IAEA). Safeguards methods have changed over the years, most recently switching to an improved safeguards model that calls for new technologies to help keep up with the increasing size and complexity of today’s gas centrifuge enrichment plants (GCEPs). One of the primary goals of the IAEA is to detect the production of uranium at levels greater than those an enrichment facility may have declared. In order to accomplish this goal, new enrichment monitors need to be as accurate as possible. This dissertation will look at the Advanced Enrichment Monitor (AEM), a new enrichment monitor designed at Los Alamos National Laboratory. Specifically explored are various factors that could potentially contribute to errors in a final enrichment determination delivered by the AEM. There are many factors that can cause errors in the determination of uranium hexafluoride (UF6) gas enrichment, especially during the period when the enrichment is being measured in an operating GCEP. To measure enrichment using the AEM, a passive 186-keV (kiloelectronvolt) measurement is used to determine the 235U content in the gas, and a transmission measurement or a gas pressure reading is used to determine the total uranium content. A transmission spectrum is generated using an x-ray tube and a “notch” filter. In this dissertation, changes that could occur in the detection efficiency and the transmission errors that could result from variations in pipe-wall thickness will be explored. Additional factors that could contribute to errors in the enrichment measurement will also be examined, including changes in the gas pressure, ambient and UF6 temperature, instrumental errors, and the effects of uranium deposits on the inside of the pipe walls. The sensitivity of the enrichment calculation to these various parameters will then be evaluated. Previously, UF6 gas enrichment monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months.
Poon, Eric G; Cina, Jennifer L; Churchill, William W; Mitton, Patricia; McCrea, Michelle L; Featherstone, Erica; Keohane, Carol A; Rothschild, Jeffrey M; Bates, David W; Gandhi, Tejal K
2005-01-01
We performed a direct observation pre-post study to evaluate the impact of barcode technology on medication dispensing errors and potential adverse drug events in the pharmacy of a tertiary-academic medical center. We found that barcode technology significantly reduced the rate of target dispensing errors leaving the pharmacy by 85%, from 0.37% to 0.06%. The rate of potential adverse drug events (ADEs) due to dispensing errors was also significantly reduced by 63%, from 0.19% to 0.069%. In a 735-bed hospital where 6 million doses of medications are dispensed per year, this technology is expected to prevent about 13,000 dispensing errors and 6,000 potential ADEs per year. PMID:16779372
Advancing the research agenda for diagnostic error reduction.
Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep
2013-10-01
Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.
Theory of sampling: four critical success factors before analysis.
Wagner, Claas; Esbensen, Kim H
2015-01-01
Food and feed materials characterization, risk assessment, and safety evaluations can only be ensured if QC measures are based on valid analytical data, stemming from representative samples. The Theory of Sampling (TOS) is the only comprehensive theoretical framework that fully defines all requirements to ensure sampling correctness and representativity, and to provide the guiding principles for sampling in practice. TOS also defines the concept of material heterogeneity and its impact on the sampling process, including the effects from all potential sampling errors. TOS's primary task is to eliminate bias-generating errors and to minimize sampling variability. Quantitative measures are provided to characterize material heterogeneity, on which an optimal sampling strategy should be based. Four critical success factors preceding analysis to ensure a representative sampling process are presented here.
NASA Astrophysics Data System (ADS)
Kocifaj, Miroslav; Gueymard, Christian A.
2011-02-01
Aerosol optical depth (AOD) is of crucial importance for estimating the optical properties of the atmosphere, and is constantly present in optical models of aerosol systems. Any error in aerosol optical depth (∂AOD) has direct and indirect consequences. On the one hand, such errors affect the accuracy of radiative transfer models (thus implying, e.g., potential errors in the evaluation of radiative forcing by aerosols). Additionally, any error in determining AOD is reflected in the retrieved microphysical properties of aerosol particles, which might therefore be inaccurate. Three distinct effects (circumsolar radiation, optical mass, and the solar disk's brightness distribution) affecting ∂AOD are qualified and quantified in the present study. The contribution of circumsolar (CS) radiation to the measured flux density of direct solar radiation has received more attention than the two other effects in the literature. It varies rapidly with meteorological conditions and size distribution of the aerosol particles, but also with instrument field of view. Numerical simulations of the three effects just mentioned were conducted, assuming otherwise "perfect" experimental conditions. The results show that CS is responsible for the largest error in AOD, while the brightness distribution (BD) effect has only a negligible impact. The optical mass (OM) effect yields negligible errors in AOD generally, but noticeable errors for low sun (within 10° of the horizon). In general, the OM and BD effects result in negative errors in AOD (i.e. the true AOD is smaller than that of the experimental determination), conversely to CS. Although the rapid increase in optical mass at large zenith angles can change the sign of ∂AOD, the CS contribution frequently plays the leading role in ∂AOD. To maximize the accuracy in AOD retrievals, the CS effect should not be ignored. In practice, however, this effect can be difficult to evaluate correctly unless the instantaneous aerosol size distribution is known from, e.g., inversion techniques.
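Under a simple Beer-Bouguer retrieval, the way circumsolar light biases AOD can be made explicit. The relations below are a minimal sketch of that retrieval (not taken from the paper itself); here I_CS denotes the circumsolar radiance collected within the instrument field of view and tau_other groups the non-aerosol extinction terms.

```latex
% Beer-Bouguer retrieval of AOD from a direct-beam measurement (sketch)
% I_0: extraterrestrial irradiance, m: optical air mass, \tau_a: true AOD
\[
  I_{dir} = I_0 \, e^{-m(\tau_a + \tau_{other})}
\]
% If the detector also collects circumsolar radiance I_CS, the retrieved value becomes
\[
  \hat{\tau}_a = -\frac{1}{m}\ln\!\frac{I_{dir}+I_{CS}}{I_0} - \tau_{other}
              = \tau_a - \frac{1}{m}\ln\!\Bigl(1 + \frac{I_{CS}}{I_{dir}}\Bigr),
\]
% so the CS contamination always pulls the retrieved AOD below the true value,
% with the bias growing with field of view and with strongly forward-scattering aerosols.
```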
Yamada, Akira; Mohri, Satoshi; Nakamura, Michihiro; Naruse, Keiji
2015-01-01
The liquid junction potential (LJP), the phenomenon that occurs when two electrolyte solutions of different composition come into contact, prevents accurate measurements in potentiometry. The effect of the LJP is usually remarkable in measurements of diluted solutions with low buffering capacities or low ion concentrations. Our group has constructed a simple method to eliminate the LJP by exerting spatiotemporal control of a liquid junction (LJ) formed between two solutions, a sample solution and a baseline solution (BLS), in a flow-through-type differential pH sensor probe. The method was contrived based on microfluidics. The sensor probe is a differential measurement system composed of two ion-sensitive field-effect transistors (ISFETs) and one Ag/AgCl electrode. With our new method, the border region of the sample solution and BLS is vibrated in order to mix solutions and suppress the overshoot after the sample solution is suctioned into the sensor probe. Compared to the conventional method without vibration, our method shortened the settling time from over two min to 15 s and reduced the measurement error by 86% to within 0.060 pH. This new method will be useful for improving the response characteristics and decreasing the measurement error of many apparatuses that use LJs. PMID:25835300
Random measurement error: Why worry? An example of cardiovascular risk factors.
Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H
2018-01-01
With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
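The point that random error in a confounder can push an adjusted exposure-outcome estimate in either direction is easy to reproduce in a small simulation. The sketch below is illustrative only; the variable names, effect sizes, and noise levels are assumptions, not the cardiovascular studies analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True data-generating model: confounder C affects both exposure X and outcome Y
c = rng.normal(size=n)
x = 0.8 * c + rng.normal(size=n)
y = 0.3 * x + 0.5 * c + rng.normal(size=n)   # true exposure effect is 0.3

def adjusted_beta_x(x_obs, c_obs, y_obs):
    """OLS coefficient of the exposure, adjusted for the (possibly mismeasured) confounder."""
    design = np.column_stack([np.ones_like(x_obs), x_obs, c_obs])
    coef, *_ = np.linalg.lstsq(design, y_obs, rcond=None)
    return coef[1]

print("no measurement error:         ", adjusted_beta_x(x, c, y))
print("classical error in exposure:  ", adjusted_beta_x(x + rng.normal(size=n), c, y))   # attenuated
print("classical error in confounder:", adjusted_beta_x(x, c + rng.normal(size=n), y))   # inflated
```

Error in the exposure attenuates the adjusted estimate, while error in the confounder leaves residual confounding that inflates it, which is the directional ambiguity the abstract warns about.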
Incorporating measurement error in n = 1 psychological autoregressive modeling.
Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
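The bias from ignoring measurement error in an AR(1) analysis can be seen with a few lines of simulation: white noise added to the observed series attenuates the naive lag-1 autoregression estimate. This is a minimal sketch with arbitrary parameter values, not the Bayesian AR+WN estimation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi = 10_000, 0.6          # series length and true autoregressive parameter

# Latent AR(1) process plus white measurement noise
latent = np.zeros(T)
for t in range(1, T):
    latent[t] = phi * latent[t - 1] + rng.normal()
observed = latent + rng.normal(scale=1.25, size=T)   # error variance ~ latent variance (about half of total)

def ar1_estimate(series):
    """Naive lag-1 AR estimate (regression of y_t on y_{t-1}), ignoring measurement error."""
    return np.polyfit(series[:-1], series[1:], 1)[0]

print("AR(1) on latent series:  ", ar1_estimate(latent))     # close to 0.6
print("AR(1) on observed series:", ar1_estimate(observed))   # substantially attenuated
```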
Task-dependent signal variations in EEG error-related potentials for brain-computer interfaces.
Iturrate, I; Montesano, L; Minguez, J
2013-04-01
A major difficulty of brain-computer interface (BCI) technology is dealing with the noise of EEG and its signal variations. Previous works studied time-dependent non-stationarities for BCIs in which the user's mental task was independent of the device operation (e.g., the mental task was motor imagery and the operational task was a speller). However, there are some BCIs, such as those based on error-related potentials, where the mental and operational tasks are dependent (e.g., the mental task is to assess the device action and the operational task is the device action itself). The dependence between the mental task and the device operation could introduce a new source of signal variations when the operational task changes, which has not been studied yet. The aim of this study is to analyse task-dependent signal variations and their effect on EEG error-related potentials. The work analyses the EEG variations on the three design steps of BCIs: an electrophysiology study to characterize the existence of these variations, a feature distribution analysis and a single-trial classification analysis to measure the impact on the final BCI performance. The results demonstrate that a change in the operational task produces variations in the potentials, even when EEG activity exclusively originated in brain areas related to error processing is considered. Consequently, the extracted features from the signals vary, and a classifier trained with one operational task presents a significant loss of performance for other tasks, requiring calibration or adaptation for each new task. In addition, a new calibration for each of the studied tasks rapidly outperforms adaptive techniques designed in the literature to mitigate the EEG time-dependent non-stationarities.
Task-dependent signal variations in EEG error-related potentials for brain-computer interfaces
NASA Astrophysics Data System (ADS)
Iturrate, I.; Montesano, L.; Minguez, J.
2013-04-01
Objective. A major difficulty of brain-computer interface (BCI) technology is dealing with the noise of EEG and its signal variations. Previous works studied time-dependent non-stationarities for BCIs in which the user’s mental task was independent of the device operation (e.g., the mental task was motor imagery and the operational task was a speller). However, there are some BCIs, such as those based on error-related potentials, where the mental and operational tasks are dependent (e.g., the mental task is to assess the device action and the operational task is the device action itself). The dependence between the mental task and the device operation could introduce a new source of signal variations when the operational task changes, which has not been studied yet. The aim of this study is to analyse task-dependent signal variations and their effect on EEG error-related potentials. Approach. The work analyses the EEG variations on the three design steps of BCIs: an electrophysiology study to characterize the existence of these variations, a feature distribution analysis and a single-trial classification analysis to measure the impact on the final BCI performance. Results and significance. The results demonstrate that a change in the operational task produces variations in the potentials, even when EEG activity exclusively originated in brain areas related to error processing is considered. Consequently, the extracted features from the signals vary, and a classifier trained with one operational task presents a significant loss of performance for other tasks, requiring calibration or adaptation for each new task. In addition, a new calibration for each of the studied tasks rapidly outperforms adaptive techniques designed in the literature to mitigate the EEG time-dependent non-stationarities.
Yasui, Takeshi; Fujio, Makoto; Yokoyama, Shuko; Araki, Tsutomu
2014-07-14
Phase measurement of continuous-wave terahertz (CW-THz) radiation is a potential tool for direct distance and imaging measurement of optically rough objects due to its high robustness to optically rough surfaces. However, the 2π phase ambiguity in the phase measurement of single-frequency CW-THz radiation limits the dynamic range of the measured distance to the order of the wavelength used. In this article, phase-slope measurement of tunable CW-THz radiation with a THz frequency comb was effectively used to extend the dynamic range up to 1.834 m while maintaining an error of a few tens of µm in the distance measurement of an optically rough object. Furthermore, a combination of phase-slope measurement of tunable CW-THz radiation and phase measurement of single-frequency CW-THz radiation reduced the distance error to a few µm within the dynamic range of 1.834 m without any influence from the 2π phase ambiguity. The proposed method will be a powerful tool for the construction and maintenance of large-scale structures covered with optically rough surfaces.
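For a reflection geometry, the relations behind single-frequency phase measurement and phase-slope measurement can be summarised as follows; this is a generic sketch of the two approaches, not the authors' exact signal-processing chain.

```latex
% Round-trip phase of CW-THz radiation at frequency \nu for a target at distance d
\[
  \phi(\nu) = \frac{4\pi \nu d}{c} \pmod{2\pi}
  \quad\Rightarrow\quad
  d = \frac{c\,\phi}{4\pi\nu} + k\,\frac{c}{2\nu}, \qquad k \in \mathbb{Z},
\]
% so a single frequency leaves d ambiguous modulo c/(2\nu), roughly half a wavelength.
% Measuring the phase slope across tunable frequencies removes that ambiguity:
\[
  d = \frac{c}{4\pi}\,\frac{\Delta\phi}{\Delta\nu},
\]
% with the non-ambiguous range now set by the frequency granularity of the comb
% (of order c/(2\Delta\nu_{step})) at the cost of coarser single-shot precision;
% combining both measurements recovers micrometre-level precision over the extended range.
```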
On epicardial potential reconstruction using regularization schemes with the L1-norm data term.
Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart
2011-01-07
The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint (labelled as L1TV and L1L2)) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have less relative error values. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noises, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
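The iteratively reweighted norm idea mentioned above can be sketched compactly: an L1 data term is handled by solving a sequence of weighted least-squares problems. The code below is a generic illustration of that scheme with a Tikhonov-style penalty (conceptually similar to the scheme labelled L1L2); the transfer matrix, regularisation parameter, and stopping rule are placeholders, not the authors' implementation.

```python
import numpy as np

def l1_data_tikhonov(A, b, L, lam, n_iter=50, eps=1e-6):
    """Minimise ||A x - b||_1 + lam * ||L x||_2^2 by iteratively reweighted least squares.
    A: body-surface transfer matrix, b: measured BSPs, L: penalty operator
    (identity or a derivative operator), lam: regularisation weight. All placeholders."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)   # reweighting approximates the L1 norm
        AW = A * w[:, None]                    # rows of A scaled by the weights
        x = np.linalg.solve(A.T @ AW + lam * (L.T @ L), AW.T @ b)
    return x

# Tiny synthetic test: a few electrodes with one grossly corrupted measurement
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=40)
b[0] += 5.0                                    # simulate a lost/outlier electrode
x_hat = l1_data_tikhonov(A, b, L=np.eye(20), lam=1e-3)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The weight 1/|r_i| down-weights electrodes with large residuals, which is why an L1 data term is naturally robust to a single lost or noisy channel.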
NASA Astrophysics Data System (ADS)
Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu
2008-10-01
Addressing the influence of round-grating dividing error, rolling-wheel eccentricity, and surface shape errors, this paper provides a correction method based on the rolling wheel that yields a composite error model covering all of the influence factors above and then corrects the non-circular angle measurement error of the rolling wheel. Software simulation and experiments indicate that the composite error correction method can improve the diameter measurement accuracy obtained with rolling-wheel theory. The method has wide application prospects for measurement accuracies higher than 5 μm/m.
Cognitive flexibility correlates with gambling severity in young adults.
Leppink, Eric W; Redden, Sarah A; Chamberlain, Samuel R; Grant, Jon E
2016-10-01
Although gambling disorder (GD) is often characterized as a problem of impulsivity, compulsivity has recently been proposed as a potentially important feature of addictive disorders. The present analysis assessed the neurocognitive and clinical relationship between compulsivity and gambling behavior. A sample of 552 non-treatment seeking gamblers aged 18-29 was recruited from the community for a study on gambling in young adults. Gambling severity levels included both casual and disordered gamblers. All participants completed the Intra/Extra-Dimensional Set Shift (IED) task, from which the total adjusted errors were correlated with gambling severity measures, and linear regression modeling was used to assess three error measures from the task. The present analysis found significant positive correlations between problems with cognitive flexibility and gambling severity (reflected by the number of DSM-5 criteria, gambling frequency, amount of money lost in the past year, and gambling urge/behavior severity). IED errors also showed a positive correlation with self-reported compulsive behavior scores. A significant correlation was also found between IED errors and non-planning impulsivity from the BIS. Linear regression models based on total IED errors, extra-dimensional (ED) shift errors, or pre-ED shift errors indicated that these factors accounted for a significant portion of the variance noted in several variables. These findings suggest that cognitive flexibility may be an important consideration in the assessment of gamblers. Results from correlational and linear regression analyses support this possibility, but the exact contributions of both impulsivity and cognitive flexibility remain entangled. Future studies will ideally be able to assess the longitudinal relationships between gambling, compulsivity, and impulsivity, helping to clarify the relative contributions of both impulsive and compulsive features. Copyright © 2016 Elsevier Ltd. All rights reserved.
Baum, John M; Monhaut, Nanette M; Parker, Donald R; Price, Christopher P
2006-06-01
Two independent studies reported that 16% of people who self-monitor blood glucose used incorrectly coded meters. The degree of analytical error, however, was not characterized. Our study objectives were to demonstrate that miscoding can cause analytical errors and to characterize the potential amount of bias that can occur. The impact of calibration error with three self blood glucose monitoring systems (BGMSs), one of which has an autocoding feature, is reported. Fresh capillary fingerstick blood from 50 subjects, 18 men and 32 women ranging in age from 23 to 82 years, was used to measure glucose with three BGMSs. Two BGMSs required manual coding and were purposely miscoded using numbers different from the one recommended for the reagent lot used. Two properly coded meters of each BGMS were included to assess within-system variability. Different reagent lots were used to challenge a third system that had autocoding capability and could not be miscoded. Some within-system comparisons showed deviations of greater than +/-30% when results obtained with miscoded meters were compared with data obtained with ones programmed using the correct code number. Similar erroneous results were found when the miscoded meter results were compared with those obtained with a glucose analyzer. For some miscoded meter and test strip combinations, error grid analysis showed that 90% of results fell into zones indicating altered clinical action. Such inaccuracies were not found with the BGMS having the autocoding feature. When certain meter code number settings of two BGMSs were used in conjunction with test strips having code numbers that did not match, statistically and clinically inaccurate results were obtained. Coding errors resulted in analytical errors of greater than +/-30% (-31.6 to +60.9%). These results confirm the value of a BGMS with an automatic coding feature.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
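The two error models being compared can be written compactly; the forms below are the standard additive and multiplicative formulations (a sketch consistent with the abstract, with symbols chosen here rather than quoted from the letter).

```latex
% Additive error model: the measurement y equals the truth x plus an error term
\[
  y = x + \varepsilon
\]
% Multiplicative error model: the error scales with the magnitude of the truth;
% a log transform makes it additive again and separates bias from random scatter
\[
  y = x\,\epsilon, \qquad \log y = \log x + \log \epsilon .
\]
% Keeping the random component proportional to the precipitation amount is what lets the
% multiplicative form handle the large dynamic range of daily precipitation more cleanly.
```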
Attitude and Trajectory Estimation Using Earth Magnetic Field Data
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack Y.
1996-01-01
The magnetometer has long been a reliable, inexpensive sensor used in spacecraft momentum management and attitude estimation. Recent studies show an increased accuracy potential for magnetometer-only attitude estimation systems. Since the Earth's magnetic field is a function of time and position, and since time is known quite precisely, the differences between the computed and measured magnetic field components, as measured by the magnetometers throughout the entire spacecraft orbit, are a function of both the spacecraft trajectory and attitude errors. Therefore, these errors can be used to estimate both trajectory and attitude. Traditionally, satellite attitude and trajectory have been estimated with completely separate systems, using different measurement data. Recently, trajectory estimation for low earth orbit satellites was successfully demonstrated in ground software using only magnetometer data. This work proposes a single augmented extended Kalman Filter to simultaneously and autonomously estimate both spacecraft trajectory and attitude with data from a magnetometer and either dynamically determined rates or gyro-measured body rates.
O'Connor, Annette M; Totton, Sarah C; Cullen, Jonah N; Ramezani, Mahmood; Kalivarapu, Vijay; Yuan, Chaohui; Gilbert, Stephen B
2018-01-01
Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks. Further, there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform prioritization of automation efforts that allow the identification of the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform unique aspects of assessment of bias and error in preclinical research. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed design elements described by the investigators. We evaluated Methods and Materials sections of reports for descriptions of the following design elements: 1) use of comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential for pseudo-replication (99/200). Use of complex factor arrangements was common, with 112 experiments using some form of factorial design (complete, incomplete or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means understanding bias and error in preclinical experimental design might require greater expertise than simple parallel designs. Similarly, use of complex factor arrangements creates novel challenges for accurate automation of data extraction and bias and error assessment in preclinical experiments.
Laboratory Safety Monitoring of Chronic Medications in Ambulatory Care Settings
Hurley, Judith S; Roberts, Melissa; Solberg, Leif I; Gunter, Margaret J; Nelson, Winnie W; Young, Linda; Frost, Floyd J
2005-01-01
OBJECTIVE To evaluate laboratory safety monitoring in patients taking selected chronic prescription drugs. DESIGN Retrospective study using 1999–2001 claims data to calculate rates of missed laboratory tests (potential laboratory monitoring errors). Eleven drugs/drug groups and 64 laboratory tests were evaluated. SETTING Two staff/network model health maintenance organizations. PATIENTS Continuously enrolled health plan members age≥19 years taking ≥1 chronic medications. MEASUREMENTS AND MAIN RESULTS Among patients taking chronic medications (N=29,823 in 1999, N=32,423 in 2000, and N=36,811 in 2001), 47.1% in 1999, 45.0% in 2000, and 44.0% in 2001 did not receive ≥1 test recommended for safety monitoring. Taking into account that patients were sometimes missing more than 1 test for a given drug and that patients were frequently taking multiple drugs, the rate of all potential laboratory monitoring errors was 849/1,000 patients/year in 1999, 810/1,000 patients/year in 2000, and 797/1,000 patients/year in 2001. Rates of potential laboratory monitoring errors varied considerably across individual drugs and laboratory tests. CONCLUSIONS Lapses in laboratory monitoring of patients taking selected chronic medications were common. Further research is needed to determine whether, and to what extent, this failure to monitor patients is associated with adverse clinical outcomes. PMID:15857489
Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank
2015-01-01
Quantification of the setup errors is vital to define appropriate setup margins preventing geographical misses. The no‐action–level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record‐and‐verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of the skin mobility on the anteroposterior patient setup reproducibility in whole‐breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position‐based patient setup in WBRT. The possibility to introduce a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior–inferior setup errors during fractions 1–12 and weekly thereafter with tangential angled single modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed‐action–level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off‐line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior–inferior setup corrections with 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off‐line setup correction compared to the NAL protocol. The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed‐action–level protocol with 2.5 mm correction threshold, for correction of the mediolateral and the superior–inferior setup errors, was proved to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.‐s
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
NASA Astrophysics Data System (ADS)
Dumoulin, Romain
Despite the fact that noise-induced hearing loss remains the number one occupational disease in developed countries, individual noise exposure levels are still rarely known and infrequently tracked. Indeed, efforts to standardize noise exposure levels present disadvantages such as costly instrumentation and difficulties associated with on site implementation. Given their advanced technical capabilities and widespread daily usage, mobile phones could be used to measure noise levels and make noise monitoring more accessible. However, the use of mobile phones for measuring noise exposure is currently limited due to the lack of formal procedures for their calibration and challenges regarding the measurement procedure. Our research investigated the calibration of mobile phone-based solutions for measuring noise exposure using a mobile phone's built-in microphones and wearable external microphones. The proposed calibration approach integrated corrections that took into account microphone placement error. The corrections were of two types: frequency-dependent, using a digital filter and noise level-dependent, based on the difference between the C-weighted noise level minus A-weighted noise level of the noise measured by the phone. The electro-acoustical limitations and measurement calibration procedure of the mobile phone were investigated. The study also sought to quantify the effect of noise exposure characteristics on the accuracy of calibrated mobile phone measurements. Measurements were carried out in reverberant and semi-anechoic chambers with several mobiles phone units of the same model, two types of external devices (an earpiece and a headset with an in-line microphone) and an acoustical test fixture (ATF). The proposed calibration approach significantly improved the accuracy of the noise level measurements in diffuse and free fields, with better results in the diffuse field and with ATF positions causing little or no acoustic shadowing. Several sources of errors and uncertainties were identified including the errors associated with the inter-unit-variability, the presence of signal saturation and the microphone placement relative to the source and the wearer. The results of the investigations and validation measurements led to recommendations regarding the measurement procedure including the use of external microphones having lower sensitivity and provided the basis for a standardized and unique factory default calibration method intended for implementation in any mobile phone. A user-defined adjustment was proposed to minimize the errors associated with calibration and the acoustical field. Mobile phones implementing the proposed laboratory calibration and used with external microphones showed great potential as noise exposure instruments. Combined with their potential as training and prevention tools, the expansion of their use could significantly help reduce the risks of noise-induced hearing loss.
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
NASA Astrophysics Data System (ADS)
Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve
2016-03-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively introduce up to 2.7, 8.1, and 13.5 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
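For the direct-beam component, the tilt error has a simple geometric form: a cosine-response detector registers irradiance in proportion to the cosine of the angle between the sun and the tilted sensor normal. The snippet below is a rough geometric sketch of that direct-beam effect only; because it ignores the diffuse component, it slightly overstates the total errors quoted in the abstract.

```python
import numpy as np

def direct_tilt_error(sza_deg, tilt_deg, rel_azimuth_deg):
    """Relative error (%) in measured direct irradiance for a sensor tilted by tilt_deg,
    with rel_azimuth_deg the azimuth of the tilt direction relative to the sun.
    Pure cosine-response geometry; diffuse irradiance is neglected."""
    sza, tilt, dphi = np.radians([sza_deg, tilt_deg, rel_azimuth_deg])
    # cosine of the angle between the sun and the tilted sensor normal
    cos_inc = np.cos(sza) * np.cos(tilt) + np.sin(sza) * np.sin(tilt) * np.cos(dphi)
    return 100.0 * (cos_inc / np.cos(sza) - 1.0)

for tilt in (1, 3, 5):
    print(tilt, "deg tilt toward the sun at SZA 60 deg:",
          round(direct_tilt_error(60.0, tilt, 0.0), 1), "%")
```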
Measurement system and model for simultaneously measuring 6DOF geometric errors.
Zhao, Yuqiong; Zhang, Bin; Feng, Qibo
2017-09-04
A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.
Mossavar-Rahmani, Yasmin; Sotres-Alvarez, Daniela; Wong, William W.; Loria, Catherine M.; Gellman, Marc D.; Van Horn, Linda; Alderman, Michael H.; Beasley, Jeannette M.; Lora, Claudia M.; Siega-Riz, Anna Maria; Kaplan, Robert C.; Shaw, Pamela A.
2017-01-01
Measurement error in assessment of sodium and potassium intake obscures associations with health outcomes. The level of this error in a diverse US Hispanic/Latino population is unknown. We investigated measurement error in self-reported dietary intake of sodium and potassium and examined differences by background (Central American, Cuban, Dominican, Mexican, Puerto Rican and South American). In 2010-2012, we studied 447 participants aged 18-74 years from four communities (Miami, Bronx, Chicago, and San Diego), obtaining objective 24-hour urinary sodium and potassium excretion measures. Self-report was captured from two interviewer-administered 24-hour dietary recalls. Twenty percent of the sample repeated the study. We examined bias in self-reported sodium and potassium from diet and the association of mismeasurement with participant characteristics. Linear regression relating self-report with objective measures was used to develop calibration equations. Self-report underestimated sodium intake by 19.8% and 20.8% and potassium intake by 1.3% and 4.6% in men and women, respectively. Sodium intake underestimation varied by Hispanic/Latino background (p<0.05) and was associated with higher body mass index. Potassium intake underestimation was associated with higher BMI, lower restaurant score (indicating lower consumption of foods prepared away from home and/or eaten outside the home), and supplement use. The R2 was 19.7% and 25.0% for the sodium and potassium calibration models, respectively, increasing to 59.5% and 61.7% after adjusting for within-person variability in each biomarker. These calibration equations, corrected for subject-specific reporting error, have the potential to reduce bias in diet-disease associations within this largest cohort of Hispanics in the US. PMID:28205551
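A regression-calibration equation of the kind described, relating an objective biomarker to self-report and participant characteristics, can be sketched in a few lines; the covariates, coefficients, and data below are entirely hypothetical and stand in for the cohort-specific equations developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 447

# Hypothetical data: urinary sodium (mg/d) as the objective measure,
# self-reported sodium from 24-h recalls, plus BMI as one calibration covariate
true_intake = rng.normal(3600, 900, n)
urinary = true_intake + rng.normal(0, 300, n)                  # biomarker with its own error
bmi = rng.normal(29, 5, n)
self_report = 0.8 * true_intake - 20 * (bmi - 29) + rng.normal(0, 700, n)

# Fit the calibration equation: biomarker ~ self-report + covariates
X = np.column_stack([np.ones(n), self_report, bmi])
coef, *_ = np.linalg.lstsq(X, urinary, rcond=None)

def calibrated_intake(self_report_value, bmi_value):
    """Calibrated intake for a new participant, for use in diet-disease models."""
    return coef @ np.array([1.0, self_report_value, bmi_value])

print(coef)
print(calibrated_intake(2800.0, 32.0))
```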
Bae, Youngchul
2016-05-23
An optical sensor such as a laser range finder (LRF) or laser displacement meter (LDM) uses reflected and returned laser beam from a target. The optical sensor has been mainly used to measure the distance between a launch position and the target. However, optical sensor based LRF and LDM have numerous and various errors such as statistical errors, drift errors, cyclic errors, alignment errors and slope errors. Among these errors, an alignment error that contains measurement error for the strength of radiation of returned laser beam from the target is the most serious error in industrial optical sensors. It is caused by the dependence of the measurement offset upon the strength of radiation of returned beam incident upon the focusing lens from the target. In this paper, in order to solve these problems, we propose a novel method for the measurement of the output of direct current (DC) voltage that is proportional to the strength of radiation of returned laser beam in the received avalanche photo diode (APD) circuit. We implemented a measuring circuit that is able to provide an exact measurement of reflected laser beam. By using the proposed method, we can measure the intensity or strength of radiation of laser beam in real time and with a high degree of precision.
Bae, Youngchul
2016-01-01
An optical sensor such as a laser range finder (LRF) or laser displacement meter (LDM) uses reflected and returned laser beam from a target. The optical sensor has been mainly used to measure the distance between a launch position and the target. However, optical sensor based LRF and LDM have numerous and various errors such as statistical errors, drift errors, cyclic errors, alignment errors and slope errors. Among these errors, an alignment error that contains measurement error for the strength of radiation of returned laser beam from the target is the most serious error in industrial optical sensors. It is caused by the dependence of the measurement offset upon the strength of radiation of returned beam incident upon the focusing lens from the target. In this paper, in order to solve these problems, we propose a novel method for the measurement of the output of direct current (DC) voltage that is proportional to the strength of radiation of returned laser beam in the received avalanche photo diode (APD) circuit. We implemented a measuring circuit that is able to provide an exact measurement of reflected laser beam. By using the proposed method, we can measure the intensity or strength of radiation of laser beam in real time and with a high degree of precision. PMID:27223291
Medication Errors in Vietnamese Hospitals: Prevalence, Potential Outcome and Associated Factors
Nguyen, Huong-Thao; Nguyen, Tuan-Dung; van den Heuvel, Edwin R.; Haaijer-Ruskamp, Flora M.; Taxis, Katja
2015-01-01
Background Evidence from developed countries showed that medication errors are common and harmful. Little is known about medication errors in resource-restricted settings, including Vietnam. Objectives To determine the prevalence and potential clinical outcome of medication preparation and administration errors, and to identify factors associated with errors. Methods This was a prospective study conducted on six wards in two urban public hospitals in Vietnam. Data of preparation and administration errors of oral and intravenous medications was collected by direct observation, 12 hours per day on 7 consecutive days, on each ward. Multivariable logistic regression was applied to identify factors contributing to errors. Results In total, 2060 out of 5271 doses had at least one error. The error rate was 39.1% (95% confidence interval 37.8%- 40.4%). Experts judged potential clinical outcomes as minor, moderate, and severe in 72 (1.4%), 1806 (34.2%) and 182 (3.5%) doses. Factors associated with errors were drug characteristics (administration route, complexity of preparation, drug class; all p values < 0.001), and administration time (drug round, p = 0.023; day of the week, p = 0.024). Several interactions between these factors were also significant. Nurse experience was not significant. Higher error rates were observed for intravenous medications involving complex preparation procedures and for anti-infective drugs. Slightly lower medication error rates were observed during afternoon rounds compared to other rounds. Conclusions Potentially clinically relevant errors occurred in more than a third of all medications in this large study conducted in a resource-restricted setting. Educational interventions, focusing on intravenous medications with complex preparation procedure, particularly antibiotics, are likely to improve patient safety. PMID:26383873
The effect of misclassification errors on case mix measurement.
Sutherland, Jason M; Botz, Chas K
2006-12-01
Case mix systems have been implemented for hospital reimbursement and performance measurement across Europe and North America. Case mix categorizes patients into discrete groups based on clinical information obtained from patient charts in an attempt to identify clinical or cost differences amongst these groups. The diagnosis related group (DRG) case mix system is the most common methodology, with variants adopted in many countries. External validation studies of coding quality have confirmed that widespread variability exists between originally recorded diagnoses and re-abstracted clinical information. DRG assignment errors in hospitals that share patient level cost data for the purpose of establishing cost weights affect cost weight accuracy. The purpose of this study is to estimate bias in cost weights due to measurement error of reported clinical information. DRG assignment error rates are simulated based on recent clinical re-abstraction study results. Our simulation study estimates that 47% of cost weights representing the least severe cases are overweighted by 10%, while 32% of cost weights representing the most severe cases are underweighted by 10%. Applying the simulated weights to a cross-section of hospitals, we find that teaching hospitals tend to be underweighted. Because inaccurate cost weights challenge the ability of case mix systems to accurately reflect patient mix and may lead to distortions in hospital funding, bias in hospital case mix measurement highlights the role clinical data quality plays in hospital funding in countries that use DRG-type case mix systems. The quality of clinical information should be carefully considered for hospitals that contribute financial data for establishing cost weights.
ERIC Educational Resources Information Center
Nicewander, W. Alan
2018-01-01
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
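Spearman's correction itself is a one-line formula; it is stated here for reference in its standard classical-test-theory form (not quoted from the ERIC record).

```latex
% Correlation between true scores, corrected for unreliability in X and Y
\[
  \hat{\rho}_{T_X T_Y} \;=\; \frac{r_{XY}}{\sqrt{r_{XX'}\,r_{YY'}}},
\]
% where r_XY is the observed correlation and r_XX', r_YY' are the reliabilities
% (e.g. test-retest or parallel-forms) of the two measures.
```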
Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study
NASA Astrophysics Data System (ADS)
Bogren, W.; Kylling, A.; Burkhart, J. F.
2015-12-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
Effect of temporal sampling and timing for soil moisture measurements at field scale
NASA Astrophysics Data System (ADS)
Snapir, B.; Hobbs, S.
2012-04-01
Estimating soil moisture at field scale is valuable for various applications such as irrigation scheduling in cultivated watersheds, flood and drought prediction, waterborne disease spread assessment, or even determination of mobility with lightweight vehicles. Synthetic aperture radar on satellites in low Earth orbit can provide fine resolution images with a repeat time of a few days. For an Earth observing satellite, the choice of the orbit is driven in particular by the frequency of measurements required to meet a certain accuracy in retrieving the parameters of interest. For a given target, having only one image every week may not make it possible to capture the full dynamic range of soil moisture - soil moisture can change significantly within a day when rainfall occurs. Hence this study focuses on the effect of temporal sampling and timing of measurements in terms of error on the retrieved signal. All the analyses are based on in situ measurements of soil moisture (acquired every 30 min) from the OzNet Hydrological Monitoring Network in Australia for different fields over several years. The first study concerns sampling frequency. Measurements at different frequencies were simulated by sub-sampling the original data. Linear interpolation was used to estimate the missing intermediate values, and then this time series was compared to the original. The difference between these two signals is computed for different levels of sub-sampling. Results show that the error increases linearly when the interval is less than 1 day. For intervals longer than a day, a sinusoidal component appears on top of the linear growth due to the diurnal variation of surface soil moisture. Thus, for example, the error with measurements every 4.5 days can be slightly less than the error with measurements every 2 days. Next, for a given sampling interval, this study evaluated the effect of the time during the day at which measurements are made. Of course when measurements are very frequent the time of acquisition does not matter, but when few measurements are available (sampling interval > 1 day), the time of acquisition can be important. It is shown that with daily measurements the error can double depending on the time of acquisition. This result is very sensitive to the phase of the sinusoidal variation of soil moisture. For example, in autumn for a given field with soil moisture ranging from 7.08% to 11.44% (mean and standard deviation being respectively 8.68% and 0.74%), daily measurements at 2 pm lead to a mean error of 0.47% v/v, while daily measurements at 9 am/pm produce a mean error of 0.24% v/v. The minimum of the sinusoid occurs every afternoon around 2 pm; after interpolation, measurements acquired at this time underestimate soil moisture, whereas measurements around 9 am/pm correspond to nodes of the sinusoid, hence they represent the average soil moisture. These results concerning the frequency and the timing of measurements can potentially drive the schedule of satellite image acquisition over some fields.
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.; Voska, Ned (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define System; Identify human-machine; List human actions; Identify potential errors; Identify factors that effect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
Sources of medical error in refractive surgery.
Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B
2013-05-01
To evaluate the causes of laser programming errors in refractive surgery and outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: errors of cylinder conversion, data entry, and patient identification error. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.
Applying lessons from social psychology to transform the culture of error disclosure.
Han, Jason; LaMarra, Denise; Vapiwala, Neha
2017-10-01
The ability to carry out prompt and effective error disclosure has been described in the literature as an essential skill among physicians that can lead to improved patient satisfaction, staff well-being and hospital outcomes. However, few studies have addressed the social psychology principles that may influence physician behaviour. The authors provide an overview of recent administrative measures designed to encourage physicians to disclose error, but note that deliberate practice, buttressed with lessons from social psychology, is needed to implement further productive behavioural changes. Two main cognitive biases that may hinder error disclosure are identified, namely: fundamental attribution error, and forecasting error. Strategies to overcome these maladaptive cognitive patterns are discussed. The authors note that interactions with standardised patients (SPs) can be used to simulate hospital encounters and help teach important behavioural considerations. Virtual reality is introduced as an immersive, realistic and easily scalable technology that can supplement traditional curricula. Lastly, the authors highlight the importance of establishing a professional standard of competence, potentially by incorporating difficult patient encounters, including disclosure of error, into medical licensing examinations that assess clinical skills. Existing curricula that cover physician error disclosure may benefit from reviewing the social psychology literature. These lessons, incorporated into SP programmes and emerging technological platforms, may improve training and evaluative methods for all medical trainees. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Infant Eyes: A Window on Cognitive Development
ERIC Educational Resources Information Center
Aslin, Richard N.
2012-01-01
Eye-trackers suitable for use with infants are now marketed by several commercial vendors. As eye-trackers become more prevalent in infancy research, there is the potential for users to be unaware of dangers lurking "under the hood" if they assume the eye-tracker introduces no errors in measuring infants' gaze. Moreover, the influx of voluminous…
ERIC Educational Resources Information Center
Redmond, Sean M.
2016-01-01
Purpose: The empirical record regarding the expected co-occurrence of attention-deficit/hyperactivity disorder (ADHD) and specific language impairment is confusing and contradictory. A research plan is presented that has the potential to untangle links between these 2 common neurodevelopmental disorders. Method: Data from completed and ongoing…
Application of the Doppler lidar system to agricultural burning and air-sea interactions
NASA Technical Reports Server (NTRS)
Fitzjarrald, D.
1980-01-01
The Doppler lidar system is potentially a very powerful measurement system. Three areas concerning the system are discussed: (1) error analysis of the system to verify the results; (2) application of the system to agricultural burning in the California Central Valley; and (3) oceanographic possibilities of the system.
Ultrasonic Doppler measurement of renal artery blood flow
NASA Technical Reports Server (NTRS)
Freund, W. R.; Meindl, J. D.
1975-01-01
An extensive evaluation of the practical and theoretical limitations encountered in the use of totally implantable CW Doppler flowmeters is provided. Theoretical analyses, computer models, in-vitro and in-vivo calibration studies describe the sources and magnitudes of potential errors in the measurement of blood flow through the renal artery, as well as larger vessels in the circulatory system. The evaluation of new flowmeter/transducer systems and their use in physiological investigations is reported.
Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.
2017-01-01
Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
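The phase-retrieval step at the core of this approach, the Kramers-Kronig relation between the log-amplitude and the phase of the CARS signal, can be sketched with a Hilbert transform. The snippet below is a bare-bones illustration under ideal sampling assumptions, not the authors' full error-corrected pipeline (no phase detrending or scaling is applied), and the Hilbert-transform sign may need flipping depending on the Fourier convention of the data.

```python
import numpy as np
from scipy.signal import hilbert

def kk_retrieve_raman(i_cars, i_ref):
    """Retrieve a Raman-like (imaginary) spectrum from a CARS spectrum i_cars using a
    reference/NRB spectrum i_ref via the Kramers-Kronig (Hilbert-transform) relation.
    Assumes a well-sampled spectrum with negligible windowing effects."""
    ratio = i_cars / i_ref
    # The phase is (up to sign convention) the Hilbert transform of half the log ratio
    phase = np.imag(hilbert(0.5 * np.log(ratio)))
    complex_chi = np.sqrt(ratio) * np.exp(1j * phase)
    return np.imag(complex_chi)   # Raman-like lineshape, still biased by NRB-estimate errors

# Synthetic test: one Lorentzian Raman line on a flat nonresonant background
w = np.linspace(-50, 50, 2048)
chi = 1.0 + 0.5 / (10.0 - w - 1j * 2.0)   # real NRB plus one resonance
i_cars = np.abs(chi) ** 2
print(kk_retrieve_raman(i_cars, np.ones_like(w)).max())
```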
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2016-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2018-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a “large-sample” approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances. PMID:29697706
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions.
Nearing, Grey S; Mocko, David M; Peters-Lidard, Christa D; Kumar, Sujay V; Xia, Youlong
2016-03-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a "large-sample" approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Tianxing; Zhou, Junxiang; Deng, Xiaozhong; Li, Jubo; Xing, Chunrong; Su, Jianxin; Wang, Huiliang
2018-07-01
A manufacturing error of a cycloidal gear is the key factor affecting the transmission accuracy of a robot rotary vector (RV) reducer. A methodology is proposed to realize the digitized measurement and data processing of the cycloidal gear manufacturing error based on the gear measuring center, which can quickly and accurately measure and evaluate the manufacturing error of the cycloidal gear by using both the whole tooth profile measurement and a single tooth profile measurement. By analyzing the particularity of the cycloidal profile and its effect on the actual meshing characteristics of the RV transmission, the cycloid profile measurement strategy is planned, and the theoretical profile model and error measurement model of cycloid-pin gear transmission are established. Through the digital processing technology, the theoretical trajectory of the probe and the normal vector of the measured point are calculated. By means of precision measurement principle and error compensation theory, a mathematical model for the accurate calculation and data processing of manufacturing error is constructed, and the actual manufacturing error of the cycloidal gear is obtained by the optimization iterative solution. Finally, the measurement experiment of the cycloidal gear tooth profile is carried out on the gear measuring center and the HEXAGON coordinate measuring machine, respectively. The measurement results verify the correctness and validity of the measurement theory and method. This methodology will provide the basis for the accurate evaluation and the effective control of manufacturing precision of the cycloidal gear in a robot RV reducer.
Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B
2018-08-01
Spatial modelling of environmental data commonly only considers spatial variability as the single source of uncertainty. In reality however, the measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low cost, yet invaluable information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate compared to laboratory analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia where a combination of laboratory measured, and vis-NIR and MIR inferred topsoil and subsoil soil carbon data are available. We investigated the applicability of residual maximum likelihood (REML) and Markov Chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered-out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded a greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, this method is amenable for filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.
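As an illustration of the idea in the preceding abstract, the following is a minimal sketch (not the authors' code) of folding a known, site-specific measurement error variance into the diagonal of a Matérn spatial covariance before likelihood-based fitting. The plain maximum-likelihood objective, the fixed smoothness, and all variable names and toy values are assumptions; the study itself uses REML and MCMC.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import kv, gamma
from scipy.optimize import minimize

def matern_cov(d, sill, rng, nu):
    """Matern covariance for a distance matrix d (sill = partial sill)."""
    d = np.where(d == 0.0, 1e-12, d)           # avoid 0 * inf at zero lag
    a = np.sqrt(2.0 * nu) * d / rng
    return sill * (2.0 ** (1.0 - nu) / gamma(nu)) * (a ** nu) * kv(nu, a)

def neg_log_lik(theta, coords, z, me_var, nu=0.5):
    """Gaussian negative log-likelihood with measurement error variance on the diagonal."""
    sill, rng = np.exp(theta)                   # keep parameters positive
    C = matern_cov(cdist(coords, coords), sill, rng, nu)
    np.fill_diagonal(C, sill)                   # exact variance at zero lag
    C += np.diag(me_var)                        # spectroscopic error variance enters here
    sign, logdet = np.linalg.slogdet(C)
    r = z - z.mean()
    return 0.5 * (logdet + r @ np.linalg.solve(C, r))

# toy data: 80 sites, soil carbon with site-specific measurement error variance
rng_ = np.random.default_rng(0)
coords = rng_.uniform(0, 1000, size=(80, 2))
z = 2.0 + rng_.normal(0, 0.5, 80)
me_var = rng_.uniform(0.05, 0.2, 80)            # e.g. larger for spectrally inferred values

fit = minimize(neg_log_lik, x0=np.log([0.3, 200.0]), args=(coords, z, me_var),
               method="Nelder-Mead")
print("estimated partial sill and range:", np.exp(fit.x))
```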
Space-Borne Laser Altimeter Geolocation Error Analysis
NASA Astrophysics Data System (ADS)
Wang, Y.; Fang, J.; Ai, Y.
2018-05-01
This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot are analysed by simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and to satisfy the accuracy of the laser control point, a design index for each error source is put forward.
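As a rough companion to the error-propagation idea above, here is a minimal first-order sketch of how platform position, attitude, pointing, and ranging errors might be combined into horizontal and vertical spot-geolocation errors. The formulae are a simplified root-sum-square propagation and the numbers are illustrative orders of magnitude, not the ICESat error budget or the paper's model.

```python
import numpy as np

def geolocation_error(range_m, off_nadir_rad,
                      sigma_pos, sigma_att, sigma_point, sigma_range):
    """First-order propagation of laser altimeter error sources (all sigmas 1-sigma).

    Horizontal error from an angular error d_theta is roughly range * d_theta;
    vertical error combines the ranging error with the pointing-induced term.
    """
    sigma_ang = np.hypot(sigma_att, sigma_point)            # combined angular error [rad]
    horiz = np.hypot(range_m * sigma_ang, sigma_pos)        # spot position error [m]
    vert = np.hypot(sigma_range * np.cos(off_nadir_rad),
                    range_m * np.sin(off_nadir_rad) * sigma_ang)
    return horiz, vert

# illustrative numbers only (assumed, not a published specification)
h, v = geolocation_error(range_m=600e3, off_nadir_rad=np.deg2rad(0.3),
                         sigma_pos=0.05, sigma_att=1.5e-5,
                         sigma_point=1.0e-5, sigma_range=0.1)
print(f"horizontal ~{h:.2f} m, vertical ~{v:.3f} m")
```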
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Desjardins, M.; Shenk, W. E.
1979-01-01
Simultaneous Geosynchronous Operational Environmental Satellite (GOES) 1 km resolution visible image pairs can provide quantitative three dimensional measurements of clouds. These data have great potential for severe storms research and as a basic parameter measurement source for other areas of meteorology (e.g. climate). These stereo cloud height measurements are not subject to the errors and ambiguities caused by unknown cloud emissivity and temperature profiles that are associated with infrared techniques. This effort describes the display and measurement of stereo data using digital processing techniques.
Position Tracking During Human Walking Using an Integrated Wearable Sensing System.
Zizzo, Giulio; Ren, Lei
2017-12-10
Progress has been made enabling expensive, high-end inertial measurement units (IMUs) to be used as tracking sensors. However, the cost of these IMUs is prohibitive to their widespread use, and hence the potential of low-cost IMUs is investigated in this study. A wearable low-cost sensing system consisting of IMUs and ultrasound sensors was developed. Core to this system is an extended Kalman filter (EKF), which provides both zero-velocity updates (ZUPTs) and Heuristic Drift Reduction (HDR). The IMU data was combined with ultrasound range measurements to improve accuracy. When a map of the environment was available, a particle filter was used to impose constraints on the possible user motions. The system was therefore composed of three subsystems: IMUs, ultrasound sensors, and a particle filter. A Vicon motion capture system was used to provide ground truth information, enabling validation of the sensing system. Using only the IMU, the system showed loop misclosure errors of 1% with a maximum error of 4-5% during walking. The addition of the ultrasound sensors resulted in a 15% reduction in the total accumulated error. Lastly, the particle filter was capable of providing noticeable corrections, which could keep the tracking error below 2% after the first few steps.
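To make the zero-velocity update (ZUPT) idea concrete, the following is a minimal sketch of stance-phase detection from the accelerometer magnitude and the resulting velocity reset. The detection threshold, axis convention, and toy data are assumptions, and a full system such as the one described would feed these pseudo-measurements into an EKF with heuristic drift reduction rather than resetting velocity directly.

```python
import numpy as np

def zupt_velocity(acc, dt, g=9.81, still_tol=0.3):
    """Integrate acceleration to velocity and reset it when a stance phase is detected.

    A sample is flagged 'still' when the specific-force magnitude is close to gravity.
    """
    vel = np.zeros((len(acc), 3))
    for k in range(1, len(acc)):
        vel[k] = vel[k - 1] + (acc[k] - np.array([0.0, 0.0, g])) * dt
        if abs(np.linalg.norm(acc[k]) - g) < still_tol:   # sensor roughly stationary
            vel[k] = 0.0                                   # zero-velocity update (ZUPT)
    return vel

# toy input: 2 s of quiet standing sampled at 100 Hz with noise
rng = np.random.default_rng(1)
acc = np.tile([0.0, 0.0, 9.81], (200, 1)) + rng.normal(0, 0.05, (200, 3))
v = zupt_velocity(acc, dt=0.01)
print("max speed with ZUPT applied:", np.linalg.norm(v, axis=1).max())
```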
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Arenas Jiménez, María Dolores; Ferre, Gabriel; Álvarez-Ude, Fernando
Haemodialysis (HD) patients are a high-risk population group. For these patients, an error could have catastrophic consequences. Therefore, systems that ensure the safety of these patients in a highly technological environment with extensive human interaction are a requirement. The aim was to show a systematic working approach, reproducible in any HD unit, which consists of recording the complications and errors that occurred during the HD session; defining which of those complications could be considered adverse events (AEs), and therefore preventable; and carrying out a systematic analysis of them, as well as of underlying real or potential errors, evaluating their severity, frequency and detection, and establishing priorities for action (Failure Mode and Effects Analysis [FMEA] system). Retrospective analysis of the graphs of all HD sessions performed during one month (October 2015) on 97 patients, analysing all recorded complications. The consideration of these complications as AEs was based on a consensus among 13 health professionals and 2 patients. The severity, frequency and detection of each AE were evaluated by the FMEA system. We analysed 1303 HD treatments in 97 patients. A total of 383 complications (1 every 3.4 HD treatments) were recorded. Approximately 87.9% of them were deemed AEs and 23.7% were complications related to patients' underlying pathology. There was one AE every 3.8 HD treatments. Hypertension and hypotension were the most frequent AEs (42.7 and 27.5% of all AEs recorded, respectively). Vascular access-related AEs occurred at a rate of one every 68.5 HD treatments. A total of 21 errors (1 every 62 HD treatments), mainly related to the HD technique and to the administration of prescribed medication, were registered. The highest risk priority number, according to the FMEA, corresponded to errors related to patient body weight; dysfunction/rupture of the catheter; and needle extravasation. HD complications are frequent. Consideration of some of them as AEs could improve safety by facilitating the implementation of preventive measures. The application of the FMEA system allows stratifying real and potential errors in dialysis units and acting with the appropriate degree of urgency, developing and implementing the necessary preventive and improvement measures. Copyright © 2017 Sociedad Española de Nefrología. Published by Elsevier España, S.L.U. All rights reserved.
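For readers unfamiliar with FMEA scoring, the following is a minimal sketch of the risk priority number (RPN = severity x occurrence x detectability) used to rank failure modes. The 1-10 scores and the ordering below are invented for illustration only, not the unit's actual ratings.

```python
# Minimal FMEA ranking sketch: RPN = severity x occurrence x detectability.
# The failure modes and scores below are hypothetical examples.
failure_modes = [
    ("error related to patient body weight",   7, 4, 6),
    ("catheter dysfunction / rupture",         8, 3, 5),
    ("needle extravasation",                   8, 3, 4),
    ("wrong medication administered",          9, 2, 3),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, sev, occ, det in ranked:
    print(f"RPN={sev * occ * det:3d}  {name}")
```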
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
Incorporating measurement error in n = 1 psychological autoregressive modeling
Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
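The attenuation effect described in the preceding abstract is easy to reproduce. The short simulation below, a sketch with invented parameter values rather than the authors' AR+WN or ARMA estimators, shows how a naive lag-1 estimate shrinks as white measurement noise is added to a latent AR(1) process.

```python
import numpy as np

rng = np.random.default_rng(42)
n, phi = 5000, 0.6

# latent AR(1) process
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# observed series = latent process + white measurement noise
for me_sd in (0.0, 1.0, 2.0):
    y = x + rng.normal(0, me_sd, n)
    phi_hat = np.corrcoef(y[:-1], y[1:])[0, 1]   # naive lag-1 estimate
    print(f"measurement error sd={me_sd:.0f}: naive phi ~ {phi_hat:.2f} (true {phi})")
```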
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
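As context for why measurement error biases the AUC, here is a minimal deattenuation sketch under a binormal, equal-variance, nondifferential classical-error assumption. The paper's own method explicitly does not require normality, so this is only a simplified illustration; the reliability value and function name are assumptions.

```python
from scipy.stats import norm

def auc_corrected(auc_obs, reliability):
    """Deattenuate an observed AUC given the biomarker's reliability (var_true / var_obs).

    Under a binormal, equal-variance model the observed separation
    delta_obs = sqrt(2) * Phi^-1(AUC_obs) is shrunk by a factor sqrt(reliability).
    """
    delta_obs = 2 ** 0.5 * norm.ppf(auc_obs)
    return norm.cdf(delta_obs / (2 ** 0.5 * reliability ** 0.5))

print(auc_corrected(auc_obs=0.65, reliability=0.6))   # ~0.69
```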
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained and their errors are analysed using error frequency and the analysis of variance method from mathematical statistics. The paper also assesses the accuracy of the measured data, identifies the body parts that are difficult to measure, examines the causes of data errors, and summarizes the key points for minimizing them. By analysing the measured data on the basis of error frequency, the paper provides reference material for the development of the garment industry.
Dietary Assessment in Food Environment Research
Kirkpatrick, Sharon I.; Reedy, Jill; Butler, Eboneé N.; Dodd, Kevin W.; Subar, Amy F.; Thompson, Frances E.; McKinnon, Robin A.
2015-01-01
Context The existing evidence on food environments and diet is inconsistent, potentially due in part to heterogeneity in measures used to assess diet. The objective of this review, conducted in 2012–2013, was to examine measures of dietary intake utilized in food environment research. Evidence acquisition Included studies were published from January 2007 through June 2012 and assessed relationships between at least one food environment exposure and at least one dietary outcome. Fifty-one articles were identified using PubMed, Scopus, Web of Knowledge, and PsycINFO; references listed in the papers reviewed and relevant review articles; and the National Cancer Institute's Measures of the Food Environment website. The frequency of the use of dietary intake measures and assessment of specific dietary outcomes was examined, as were patterns of results among studies using different dietary measures. Evidence synthesis The majority of studies used brief instruments, such as screeners or one or two questions, to assess intake. Food frequency questionnaires were used in about a third of studies, one in ten used 24-hour recalls, and fewer than one in twenty used diaries. Little consideration of dietary measurement error was evident. Associations between the food environment and diet were more consistently in the expected direction in studies using less error-prone measures. Conclusions There is a tendency toward the use of brief dietary assessment instruments with low cost and burden rather than more detailed instruments that capture intake with less bias. Use of error-prone dietary measures may lead to spurious findings and reduced power to detect associations. PMID:24355678
Evaluation of electrolytic tilt sensors for wind tunnel model angle-of-attack (AOA) measurements
NASA Technical Reports Server (NTRS)
Wong, Douglas T.
1991-01-01
The results of a laboratory evaluation of three types of electrolytic tilt sensors as potential candidates for model attitude or angle of attack (AOA) measurements in wind tunnel tests are presented. Their performance was also compared with that from typical servo accelerometers used for AOA measurements. Model RG-37 electrolytic tilt sensors were found to have the highest overall accuracy among the three types. Compared with the servo accelerometer, their accuracies are about one order of magnitude worse and each of them costs about two-thirds less. Therefore, the sensors are unsuitable for AOA measurements although they are less expensive. However, the potential for other applications exists where the errors resulting from roll interaction, vibration, and response time are smaller and sensor temperature can be controlled.
Use of units of measurement error in anthropometric comparisons.
Lucas, Teghan; Henneberg, Maciej
2017-09-01
Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported. Measurement errors should be included in analyses of anthropometric data. This study proposes a method which incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)', and applies it to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched when the measurements are expressed in millimetres (distance > 0) but can be matched in units of TEM (distance = 0). Only 81 women can fit into any standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
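To make the TEM-unit idea concrete, here is a minimal sketch of converting repeated measurements to whole units of TEM before computing a Euclidean matching distance. The dimensions, values, and per-dimension TEMs are invented for illustration and do not come from the ANSUR or Australian surveys.

```python
import numpy as np

def tem(x1, x2):
    """Technical error of measurement from duplicate measurements of one dimension."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

# hypothetical repeated measurements (mm) of four dimensions on the same person
session1 = np.array([1720.0, 842.0, 365.0, 248.0])
session2 = np.array([1721.0, 841.0, 366.0, 249.0])

# per-dimension TEMs, assumed to come from tem() applied to a larger duplicate-measurement study
tems = np.array([4.1, 3.6, 2.2, 3.0])

dist_mm = np.linalg.norm(session1 - session2)        # > 0, so no exact match in millimetres
dist_tem = np.linalg.norm(np.round(session1 / tems) - np.round(session2 / tems))
print(dist_mm, dist_tem)   # rounding to whole TEM units lets the two sessions match exactly
```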
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, Eric C., E-mail: eford@uw.edu; Terezakis, Stephanie; Souranis, Annette
Purpose: To quantify the error-detection effectiveness of commonly used quality control (QC) measures. Methods: We analyzed incidents from 2007-2010 logged into voluntary in-house electronic incident learning systems at 2 academic radiation oncology clinics. None of the incidents resulted in patient harm. Each incident was graded for potential severity using the French Nuclear Safety Authority scoring scale; high potential severity incidents (score >3) were considered, along with a subset of 30 randomly chosen low severity incidents. Each report was evaluated to identify which of 15 common QC checks could have detected it. The effectiveness was calculated, defined as the percentage of incidents that each QC measure could detect, both for individual QC checks and for combinations of checks. Results: In total, 4407 incidents were reported, 292 of which had high-potential severity. High- and low-severity incidents were detectable by 4.0 ± 2.3 (mean ± SD) and 2.6 ± 1.4 QC checks, respectively (P<.001). All individual checks were less than 50% sensitive with the exception of pretreatment plan review by a physicist (63%). An effectiveness of 97% was achieved with 7 checks used in combination and was not further improved with more checks. The combination of checks with the highest effectiveness includes physics plan review, physician plan review, Electronic Portal Imaging Device-based in vivo portal dosimetry, radiation therapist timeout, weekly physics chart check, the use of checklists, port films, and source-to-skin distance checks. Some commonly used QC checks such as pretreatment intensity modulated radiation therapy QA do not substantially add to the ability to detect errors in these data. Conclusions: The effectiveness of QC measures in radiation oncology depends sensitively on which checks are used and in which combinations. A small percentage of errors cannot be detected by any of the standard formal QC checks currently in broad use, suggesting that further improvements are needed. These data require confirmation with a broader incident-reporting database.
Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor
2011-05-14
In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error with those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.
Polarization entanglement purification for concatenated Greenberger-Horne-Zeilinger state
NASA Astrophysics Data System (ADS)
Zhou, Lan; Sheng, Yu-Bo
2017-10-01
Entanglement purification plays a fundamental role in long-distance quantum communication. In this paper, we put forward the first polarization entanglement purification protocol (EPP) for one type of nonlocal logic-qubit entanglement, i.e., the concatenated Greenberger-Horne-Zeilinger (C-GHZ) state, resorting to the photon-atom interaction in a low-quality (Q) cavity. In contrast to existing EPPs, this protocol can purify both bit-flip and phase-flip errors at both the physical and the logical level. Instead of measuring the photons directly, this protocol only requires measuring the atomic states to judge whether the protocol is successful. In this way, the purified logic entangled states can be preserved for further application. Moreover, it makes this EPP repeatable so as to obtain a higher fidelity of logic entangled states. As the logic-qubit entanglement utilizes quantum error correction (QEC) codes, which have an inherent stability against noise and decoherence, this EPP combined with the QEC codes may provide double protection for the entanglement against channel noise and may have potential applications in long-distance quantum communication.
Assessment of ecologic regression in the study of lung cancer and indoor radon.
Stidley, C A; Samet, J M
1994-02-01
Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
NASA Astrophysics Data System (ADS)
Lee, Eunji; Park, Sang-Young; Shin, Bumjoon; Cho, Sungki; Choi, Eun-Jung; Jo, Junghyun; Park, Jang-Hyun
2017-03-01
The optical wide-field patrol network (OWL-Net) is a Korean optical surveillance system that tracks and monitors domestic satellites. In this study, a batch least squares algorithm was developed for optical measurements and verified by Monte Carlo simulation and covariance analysis. Potential error sources of OWL-Net, such as noise, bias, and clock errors, were analyzed. There is a linear relation between the estimation accuracy and the noise level, and the accuracy significantly depends on the declination bias. In addition, the time-tagging error significantly degrades the observation accuracy, while the time-synchronization offset corresponds to the orbital motion. The Cartesian state vector and measurement bias were determined using the OWL-Net tracking data of the KOMPSAT-1 and Cryosat-2 satellites. The comparison with known orbital information based on two-line elements (TLE) and the consolidated prediction format (CPF) shows that the orbit determination accuracy is similar to that of TLE. Furthermore, the precision and accuracy of OWL-Net observation data were determined to be tens of arcsec and sub-degree level, respectively.
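For orientation, the block below sketches the generic structure of a Gauss-Newton batch least-squares estimator: weighted residuals, a design matrix of partials, and normal equations yielding both the state correction and a formal covariance. The toy linear measurement model stands in for the angles-only orbit dynamics and is purely an assumption; this is not the OWL-Net implementation.

```python
import numpy as np

def batch_least_squares(x0, times, obs, h, jac, sigma, n_iter=10):
    """Generic Gauss-Newton batch estimator minimising weighted residuals of obs - h(x, t)."""
    x = np.array(x0, float)
    W = np.diag(1.0 / np.asarray(sigma) ** 2)            # measurement weights
    for _ in range(n_iter):
        r = obs - np.array([h(x, t) for t in times])      # residuals
        H = np.array([jac(x, t) for t in times])          # design (partials) matrix
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)    # normal equations
        x += dx
    cov = np.linalg.inv(H.T @ W @ H)                      # formal covariance
    return x, cov

# toy stand-in measurement model: an offset plus constant drift observed with noise
h = lambda x, t: x[0] + x[1] * t
jac = lambda x, t: np.array([1.0, t])

rng = np.random.default_rng(3)
times = np.linspace(0, 10, 50)
truth = np.array([5.0, -0.3])
obs = h(truth, times) + rng.normal(0, 0.05, times.size)

est, cov = batch_least_squares([0.0, 0.0], times, obs, h, jac,
                               sigma=np.full(times.size, 0.05))
print(est, np.sqrt(np.diag(cov)))
```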
NASA Astrophysics Data System (ADS)
Hutchinson, G. L.; Livingston, G. P.; Healy, R. W.; Striegl, R. G.
2000-04-01
We employed a three-dimensional finite difference gas diffusion model to simulate the performance of chambers used to measure surface-atmosphere trace gas exchange. We found that systematic errors often result from conventional chamber design and deployment protocols, as well as key assumptions behind the estimation of trace gas exchange rates from observed concentration data. Specifically, our simulations showed that (1) when a chamber significantly alters atmospheric mixing processes operating near the soil surface, it also nearly instantaneously enhances or suppresses the postdeployment gas exchange rate, (2) any change resulting in greater soil gas diffusivity, or greater partitioning of the diffusing gas to solid or liquid soil fractions, increases the potential for chamber-induced measurement error, and (3) all such errors are independent of the magnitude, kinetics, and/or distribution of trace gas sources, but greater for trace gas sinks with the same initial absolute flux. Finally, and most importantly, we found that our results apply to steady state as well as non-steady-state chambers, because the slow rate of gas diffusion in soil inhibits recovery of the former from their initial non-steady-state condition. Over a range of representative conditions, the error in steady state chamber estimates of the trace gas flux varied from -30 to +32%, while estimates computed by linear regression from non-steady-state chamber concentrations were 2 to 31% too small. Although such errors are relatively small in comparison to the temporal and spatial variability characteristic of trace gas exchange, they bias the summary statistics for each experiment as well as larger scale trace gas flux estimates based on them.
Hutchinson, G.L.; Livingston, G.P.; Healy, R.W.; Striegl, Robert G.
2000-01-01
We employed a three-dimensional finite difference gas diffusion model to simulate the performance of chambers used to measure surface-atmosphere trace gas exchange. We found that systematic errors often result from conventional chamber design and deployment protocols, as well as key assumptions behind the estimation of trace gas exchange rates from observed concentration data. Specifically, our simulations showed that (1) when a chamber significantly alters atmospheric mixing processes operating near the soil surface, it also nearly instantaneously enhances or suppresses the postdeployment gas exchange rate, (2) any change resulting in greater soil gas diffusivity, or greater partitioning of the diffusing gas to solid or liquid soil fractions, increases the potential for chamber-induced measurement error, and (3) all such errors are independent of the magnitude, kinetics, and/or distribution of trace gas sources, but greater for trace gas sinks with the same initial absolute flux. Finally, and most importantly, we found that our results apply to steady state as well as non-steady-state chambers, because the slow rate of gas diffusion in soil inhibits recovery of the former from their initial non-steady-state condition. Over a range of representative conditions, the error in steady state chamber estimates of the trace gas flux varied from -30 to +32%, while estimates computed by linear regression from non-steady-state chamber concentrations were 2 to 31% too small. Although such errors are relatively small in comparison to the temporal and spatial variability characteristic of trace gas exchange, they bias the summary statistics for each experiment as well as larger scale trace gas flux estimates based on them.
NASA Astrophysics Data System (ADS)
Gao, X.; Li, T.; Zhang, X.; Geng, X.
2018-04-01
In this paper, we propose a stochastic model of InSAR height measurement based on the interferometric geometry of the measurement. The model directly describes the relationship between baseline error and height measurement error. A simulation analysis using TanDEM-X parameters was then carried out to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full simulation-based validation of the InSAR stochastic model was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation behaviour of InSAR height measurement were fully evaluated.
Kofman, Rianne; Beekman, Anna M; Emmelot, Cornelis H; Geertzen, Jan H B; Dijkstra, Pieter U
2018-06-01
Non-contact scanners may have potential for measurement of residual limb volume. Different non-contact scanners have been introduced during the last decades. Reliability and usability (practicality and user friendliness) should be assessed before introducing these systems in clinical practice. The aim of this study was to analyze the measurement properties and usability of four non-contact scanners (TT Design, Omega Scanner, BioSculptor Bioscanner, and Rodin4D Scanner). Quasi experimental. Nine (geometric and residual limb) models were measured on two occasions, each consisting of two sessions, thus in total 4 sessions. In each session, four observers used the four systems for volume measurement. Mean for each model, repeatability coefficients for each system, variance components, and their two-way interactions of measurement conditions were calculated. User satisfaction was evaluated with the Post-Study System Usability Questionnaire. Systematic differences between the systems were found in volume measurements. Most of the variances were explained by the model (97%), while error variance was 3%. Measurement system and the interaction between system and model explained 44% of the error variance. Repeatability coefficient of the systems ranged from 0.101 (Omega Scanner) to 0.131 L (Rodin4D). Differences in Post-Study System Usability Questionnaire scores between the systems were small and not significant. The systems were reliable in determining residual limb volume. Measurement systems and the interaction between system and residual limb model explained most of the error variances. The differences in repeatability coefficient and usability between the four CAD/CAM systems were small. Clinical relevance If accurate measurements of residual limb volume are required (in case of research), modern non-contact scanners should be taken in consideration nowadays.
Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.
2014-01-01
Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877
Littel, Marianne; van den Berg, Ivo; Luijten, Maartje; van Rooij, Antonius J; Keemink, Lianne; Franken, Ingmar H A
2012-09-01
Excessive computer gaming has recently been proposed as a possible pathological illness. However, research on this topic is still in its infancy and underlying neurobiological mechanisms have not yet been identified. The determination of underlying mechanisms of excessive gaming might be useful for the identification of those at risk, a better understanding of the behavior and the development of interventions. Excessive gaming has been often compared with pathological gambling and substance use disorder. Both disorders are characterized by high levels of impulsivity, which incorporates deficits in error processing and response inhibition. The present study aimed to investigate error processing and response inhibition in excessive gamers and controls using a Go/NoGo paradigm combined with event-related potential recordings. Results indicated that excessive gamers show reduced error-related negativity amplitudes in response to incorrect trials relative to correct trials, implying poor error processing in this population. Furthermore, excessive gamers display higher levels of self-reported impulsivity as well as more impulsive responding as reflected by less behavioral inhibition on the Go/NoGo task. The present study indicates that excessive gaming partly parallels impulse control and substance use disorders regarding impulsivity measured on the self-reported, behavioral and electrophysiological level. Although the present study does not allow drawing firm conclusions on causality, it might be that trait impulsivity, poor error processing and diminished behavioral response inhibition underlie the excessive gaming patterns observed in certain individuals. They might be less sensitive to negative consequences of gaming and therefore continue their behavior despite adverse consequences. © 2012 The Authors, Addiction Biology © 2012 Society for the Study of Addiction.
Beste, Christian; Mückschel, Moritz; Elben, Saskia; J Hartmann, Christian; McIntyre, Cameron C; Saft, Carsten; Vesper, Jan; Schnitzler, Alfons; Wojtecki, Lars
2015-07-01
Deep brain stimulation of the dorsal pallidum (globus pallidus, GP) is increasingly considered as a surgical therapeutic option in Huntington's disease (HD), but there is need to identify outcome measures useful for clinical trials. Computational models consider the GP to be part of a basal ganglia network involved in cognitive processes related to the control of actions. We examined behavioural and event-related potential (ERP) correlates of action control (i.e., error monitoring) and evaluated the effects of deep brain stimulation (DBS). We did this using a standard flanker paradigm and evaluated error-related ERPs. Patients were recruited from a prospective pilot trial for pallidal DBS in HD (trial number NCT00902889). From the initial four patients with Huntington's chorea, two patients with chronic external dorsal pallidum stimulation were available for follow-up and able to perform the task. The results suggest that the external GP constitutes an important basal ganglia element not only for error processing and behavioural adaptation but for general response monitoring processes as well. Response monitoring functions were fully controllable by switching pallidal DBS stimulation on and off. When stimulation was switched off, no neurophysiological and behavioural signs of error and general performance monitoring, as reflected by the error-related negativity and post-error slowing in reaction times were evident. The modulation of response monitoring processes by GP-DBS reflects a side effect of efforts to alleviate motor symptoms in HD. From a clinical neurological perspective, the results suggest that DBS in the external GP segment can be regarded as a potentially beneficial treatment with respect to cognitive functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Lee, H; Choi, K
Purpose: The mechanical quality assurance (QA) of medical accelerators consists of a time-consuming series of procedures. Since most of the procedures are done manually (e.g., checking the gantry rotation angle with the naked eye using a level attached to the gantry), it is considered a process with high potential for human error. To remove the possibility of human error and reduce the procedure duration, we developed a smartphone application for automated mechanical QA. Methods: The preparation for the automated process was done by attaching a smartphone to the gantry facing upward. For the assessments of gantry and collimator angle indications, motion sensors (gyroscope, accelerometer, and magnetic field sensor) embedded in the smartphone were used. For the assessment of the jaw position indicator, cross-hair centering, and optical distance indicator (ODI), an optical image-processing module using a picture taken by the high-resolution camera embedded in the smartphone was implemented. The application was developed with the Android software development kit (SDK) and OpenCV library. Results: The system accuracies in terms of angle detection error and length detection error were < 0.1° and < 1 mm, respectively. The mean absolute errors for gantry and collimator rotation angles were 0.03° and 0.041°, respectively. The mean absolute error for the measured light field size was 0.067 cm. Conclusion: The automated system we developed can be used for the mechanical QA of medical accelerators with proven accuracy. For more convenient use of this application, the wireless communication module is under development. This system has a strong potential for the automation of other QA procedures such as light/radiation field coincidence and couch translation/rotations.
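As an aside on how an angle can be read from a phone's motion sensors, the following is a minimal sketch of deriving a tilt angle from the accelerometer (gravity) vector. The axis convention and mounting assumption are illustrative; the actual application uses Android sensor fusion with the gyroscope and magnetometer as well.

```python
import math

def tilt_angle_deg(ax, ay, az):
    """Tilt of the device (and thus the gantry it is attached to) from the gravity vector.

    Returns the rotation about the device's y-axis; the axis convention assumed here
    would have to match how the phone is mounted on the gantry.
    """
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

# phone lying flat (gravity along +z): ~0 degrees; rotated ~30 degrees about y:
print(tilt_angle_deg(0.0, 0.0, 9.81))                    # ~0.0
print(tilt_angle_deg(9.81 * 0.5, 0.0, 9.81 * 0.866))     # ~30.0
```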
High-precision multiband spectroscopy of ultracold fermions in a nonseparable optical lattice
NASA Astrophysics Data System (ADS)
Fläschner, Nick; Tarnowski, Matthias; Rem, Benno S.; Vogel, Dominik; Sengstock, Klaus; Weitenberg, Christof
2018-05-01
Spectroscopic tools are fundamental for the understanding of complex quantum systems. Here, we demonstrate high-precision multiband spectroscopy in a graphenelike lattice using ultracold fermionic atoms. From the measured band structure, we characterize the underlying lattice potential with a relative error of 1.2 ×10-3 . Such a precise characterization of complex lattice potentials is an important step towards precision measurements of quantum many-body systems. Furthermore, we explain the excitation strengths into different bands with a model and experimentally study their dependency on the symmetry of the perturbation operator. This insight suggests the excitation strengths as a suitable observable for interaction effects on the eigenstates.
Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys
NASA Astrophysics Data System (ADS)
Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.
2016-12-01
Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With the heightening interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors are needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets; one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with reported speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and shows superior inversion and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the number of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
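To illustrate the grouping idea in the preceding abstract, here is a small sketch that fits a linear reciprocal-error model separately for groups of measurements defined by their electrodes. The grouping rule, toy data, and variable names are assumptions, not the authors' error model or survey data.

```python
import numpy as np

def fit_error_model(r_normal, r_reciprocal, electrodes, n_groups=4):
    """Fit |reciprocal error| = a + b * |R| separately for groups of measurements.

    The grouping rule here (largest electrode number modulo n_groups) is only a
    stand-in for a rule based on which four electrodes made each measurement.
    """
    err = np.abs(r_normal - r_reciprocal)
    r_mean = np.abs(0.5 * (r_normal + r_reciprocal))
    groups = electrodes.max(axis=1) % n_groups
    models = {}
    for g in range(n_groups):
        sel = groups == g
        slope, intercept = np.polyfit(r_mean[sel], err[sel], 1)
        models[g] = (intercept, slope)
    return models

# toy data: 400 four-electrode measurements with a proportional, electrode-dependent error
rng = np.random.default_rng(5)
electrodes = rng.integers(1, 33, size=(400, 4))
r_true = rng.lognormal(0.0, 1.0, 400)
noise = (0.01 + 0.002 * electrodes.max(axis=1) / 32.0) * r_true * rng.normal(1.0, 0.5, 400)
print(fit_error_model(r_true + noise, r_true - noise, electrodes))
```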
Idzinga, J C; de Jong, A L; van den Bemt, P M L A
2009-11-01
Previous studies, both in hospitals and in institutions for clients with an intellectual disability (ID), have shown that medication errors at the administration stage are frequent, especially when medication has to be administered through an enteral feeding tube. In hospitals a specially designed intervention programme has proven to be effective in reducing these feeding tube-related medication errors, but the effect of such a programme within an institution for clients with an ID is unknown. Therefore, a study was designed to measure the influence of such an intervention programme on the number of medication administration errors in clients with an ID who also have enteral feeding tubes. A before-after study design with disguised observation to document administration errors was used. The study was conducted from February to June 2008 within an institution for individuals with an ID in the Western part of The Netherlands. Included were clients with enteral feeding tubes. The intervention consisted of advice on medication administration through enteral feeding tubes by the pharmacist, a training programme and introduction of a 'medication through tube' box containing proper materials for crushing and suspending tablets. The outcome measure was the frequency of medication administration errors, comparing the pre-intervention period with the post-intervention period. A total of 245 medication administrations in six clients (by 23 nurse attendants) have been observed in the pre-intervention measurement period and 229 medication administrations in five clients (by 20 nurse attendants) have been observed in the post-intervention period. Before the intervention, 158 (64.5%) medication administration errors were observed, and after the intervention, this decreased to 69 (30.1%). Of all potential confounders and effect modifiers, only 'medication dispensed in automated dispensing system ("robot") packaging' contributed to the multivariate model; effect modification was shown for this determinant. Multilevel analysis using this multivariate model resulted in an odds ratio of 0.33 (95% confidence interval 0.13-0.71) for the error percentage in the post-intervention period compared with the pre-intervention period. The intervention was found to be effective in an institution for clients with an ID. However, additional efforts are needed to reduce the proportion of administration errors which is still high after the intervention.
An interpretation of radiosonde errors in the atmospheric boundary layer
Bernadette H. Connell; David R. Miller
1995-01-01
The authors review sources of error in radiosonde measurements in the atmospheric boundary layer and analyze errors of two radiosonde models manufactured by Atmospheric Instrumentation Research, Inc. The authors focus on temperature and humidity lag errors and wind errors. Errors in measurement of azimuth and elevation angles and pressure over short time intervals and...
Validation and Error Characterization for the Global Precipitation Measurement
NASA Technical Reports Server (NTRS)
Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.
2003-01-01
The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration, assumption, or algorithm. The instrumentation and techniques of the Supersites will be discussed. The GPM core satellite, with its dual-frequency radar and conically scanning radiometer, will provide insight into precipitation drop-size distributions and potentially increased measurement capabilities of light rain and snowfall. The ground validation program will include instrumentation and techniques commensurate with these new measurement capabilities.
Power Measurement Errors on a Utility Aircraft
NASA Technical Reports Server (NTRS)
Bousman, William G.
2002-01-01
Extensive flight test data obtained from two recent performance tests of a UH 60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH 60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
Fusing metabolomics data sets with heterogeneous measurement errors
Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.
2018-01-01
Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by the difference in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for the measurement error heterogeneity: by transformation of the raw data, by weighted filtering before modelling, and by a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. In conclusion, the filtering and modelling approaches that both estimate a model of the measurement error did not outperform the data transformation approaches for this application. This is probably because the difference in measurement error was limited and because estimation of measurement error models is unstable owing to the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
Aliasing errors in measurements of beam position and ellipticity
NASA Astrophysics Data System (ADS)
Ekdahl, Carl
2005-09-01
Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
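The aliasing mechanism described above can be illustrated with a short numerical sketch (not the simulation code used in the study). It assumes a filament beam inside a circular beam tube, samples the wall field at N equally spaced detectors, and reconstructs the centroid with a simple first-moment (difference-over-sum) estimate; the residual between the estimated and true position is the aliasing error. The pipe radius, beam offset, and detector counts are illustrative values.

```python
import numpy as np

def wall_field(theta, r, phi, R, n_terms=50):
    """Azimuthal wall field of a filament beam at (r, phi) inside a pipe of
    radius R, written as a truncated multipole series (arbitrary units)."""
    b = np.ones_like(theta)
    for n in range(1, n_terms + 1):
        b += 2.0 * (r / R) ** n * np.cos(n * (theta - phi))
    return b

def centroid_estimate(n_det, r, phi, R):
    """First-moment centroid estimate from n_det equally spaced detectors."""
    theta = 2.0 * np.pi * np.arange(n_det) / n_det
    b = wall_field(theta, r, phi, R)
    x_est = R * np.sum(b * np.cos(theta)) / np.sum(b)
    y_est = R * np.sum(b * np.sin(theta)) / np.sum(b)
    return x_est, y_est

# Aliasing error vs. number of detectors for a beam offset 30% of the pipe radius
R, r, phi = 0.1, 0.03, 0.4          # metres, metres, radians (illustrative)
for n_det in (4, 8, 16):
    x_est, y_est = centroid_estimate(n_det, r, phi, R)
    x_true, y_true = r * np.cos(phi), r * np.sin(phi)
    err = np.hypot(x_est - x_true, y_est - y_true)
    print(f"{n_det} detectors: aliasing error = {err * 1e3:.3f} mm")
```

With four detectors the higher multipoles of the offset beam fold into the first-moment estimate; adding detectors pushes the first aliased multipole to higher order and shrinks the error, consistent with the abstract's observation that more than four detectors reduce aliasing.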
Error analysis and experiments of attitude measurement using laser gyroscope
NASA Astrophysics Data System (ADS)
Ren, Xin-ran; Ma, Wen-li; Jiang, Ping; Huang, Jin-long; Pan, Nian; Guo, Shuai; Luo, Jun; Li, Xiao
2018-03-01
The precision of photoelectric tracking and measuring equipment mounted on vehicles and vessels is degraded by the platform's movement. Specifically, the platform's movement leads to deviation or loss of the target; it also causes jitter of the visual axis and thereby produces image blur. In order to improve the precision of photoelectric equipment, the attitude of the photoelectric equipment fixed to the platform must be measured. Currently, the laser gyroscope is widely used to measure the attitude of the platform. However, the measurement accuracy of a laser gyro is affected by its zero bias, scale factor, installation error and random error. In this paper, these errors were analyzed and compensated based on the laser gyro's error model. Static and dynamic experiments were carried out on a single-axis turntable, and the error model was verified by comparing the gyro's output with an encoder with an accuracy of 0.1 arc sec. After error compensation, the gyroscope's attitude error over one hour decreased from 7000 arc sec to 5 arc sec. The method used in this paper is suitable for decreasing laser gyro errors in inertial measurement applications.
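As a rough illustration of the error model named above (zero bias, scale factor, and installation/misalignment errors), the sketch below simulates a three-axis rate measurement corrupted by those deterministic terms plus random noise, then applies the inverse of the calibrated model to recover the true rate. The coefficient values are invented for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic error terms (illustrative values only)
bias = np.array([0.02, -0.01, 0.015])          # zero bias, deg/s
scale = np.diag([1.0005, 0.9993, 1.0008])      # scale-factor errors
misalign = np.array([[1.0, 3e-4, -2e-4],
                     [-1e-4, 1.0, 4e-4],
                     [2e-4, -3e-4, 1.0]])       # installation (misalignment) matrix
A = scale @ misalign                            # combined deterministic model

def measure(omega_true):
    """Simulated gyro triad output: A @ omega + bias + random noise."""
    noise = rng.normal(scale=1e-4, size=3)      # random error, deg/s
    return A @ omega_true + bias + noise

def compensate(omega_meas):
    """Invert the calibrated deterministic model (bias, scale, misalignment)."""
    return np.linalg.solve(A, omega_meas - bias)

omega_true = np.array([0.5, 0.0, 0.0])          # single-axis turntable rate, deg/s
omega_meas = measure(omega_true)
omega_comp = compensate(omega_meas)
print("raw error        :", omega_meas - omega_true)
print("compensated error:", omega_comp - omega_true)
```

After compensation only the random error remains, which mirrors the large reduction in accumulated attitude error reported in the abstract.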
Black, D; Gates, G; Sanders, S; Taylor, L
2000-05-01
This work provides an overview of standard social science data sources that now allow some systematic study of the gay and lesbian population in the United States. For each data source, we consider how sexual orientation can be defined, and we note the potential sample sizes. We give special attention to the important problem of measurement error, especially the extent to which individuals recorded as gay and lesbian are indeed recorded correctly. Our concern is that because gays and lesbians constitute a relatively small fraction of the population, modest measurement problems could lead to serious errors in inference. In examining gays and lesbians in multiple data sets we also achieve a second objective: We provide a set of statistics about this population that is relevant to several current policy debates.
NASA Astrophysics Data System (ADS)
Boardsen, S. A.; Adrian, M. L.; Pfaff, R.; Menietti, J. D.
2014-10-01
Direct measurement of low (< 1 eV) electron temperatures is difficult to make in the Earth's inner magnetosphere for electron densities (Ne) below 3 × 10² cm⁻³. We compute these quantities by solving current balance equations in low-density regions. Concurrent measurements from the Polar spacecraft of the relative potential (VS - VP) between the spacecraft body and the electric field probe, and of the electron density (Ne) derived from the upper hybrid frequency (fUHR), were used in the current balance equations to solve for the electron temperature (Te), VS, and VP, where VP is the probe potential and VS is the spacecraft potential relative to the nearby plasma. The assumption that the bulk plasma electrons are Maxwellian is used in the computations. Our data set covered 1.5 years of measurements when fUHR was detectable (L < 10). The following "averaged" Te versus L relation for 3 < L < 5 was obtained: Te = 0.58 + 0.49 (L - 3) eV. This expression is in reasonable agreement with extrapolations of ionospheric Te measurements by Akebono at lower altitudes. However, the solution is sensitive to the photoemission coefficients: substituting those of Scudder et al. (2000) with those of Escoubet et al. (1997) shifted the Te curve upward by ~1 eV. The solution is also sensitive to measurement error in VS - VP: applying a voltage shift of ±0.1 and ±0.2 V to VS - VP, the relative median error for our data set was computed to be 0.27 and 1.04, respectively. We believe that our Te values computed outside the plasmasphere are unrealistically low. We conclude that this method shows promise inside the plasmasphere but should be used with caution. We also quantified the Ne versus VS - VP relationship. The running median Ne versus VS - VP curve shows no significant variation over the 1.5 year period of the data set, suggesting that the photoemission coefficients did not change significantly over this time span. The Scudder et al. (2000) Ne model, based on only one Polar orbit, is in reasonable agreement (within a factor of 2) with our results.
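For reference, the averaged empirical relation quoted above can be evaluated directly. The snippet below is only a restatement of that fit (valid for 3 < L < 5) together with the roughly 1 eV upward shift the authors attribute to the choice of photoemission coefficients; it is not an implementation of the current-balance solution itself.

```python
def electron_temperature_eV(L):
    """Averaged Te-L relation reported for 3 < L < 5 (eV)."""
    if not 3.0 <= L <= 5.0:
        raise ValueError("fit reported only for 3 < L < 5")
    return 0.58 + 0.49 * (L - 3.0)

for L in (3.0, 4.0, 5.0):
    te = electron_temperature_eV(L)
    # ~1 eV upward shift if the Escoubet et al. (1997) photoemission
    # coefficients are used instead of Scudder et al. (2000)
    print(f"L = {L:.1f}: Te ~ {te:.2f} eV ({te + 1.0:.2f} eV with alternate coefficients)")
```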
Radiographic cup anteversion measurement corrected from pelvic tilt.
Wang, Liao; Thoreson, Andrew R; Trousdale, Robert T; Morrey, Bernard F; Dai, Kerong; An, Kai-Nan
2017-11-01
The purpose of this study was to develop a novel technique to improve the accuracy of radiographic cup anteversion measurement by correcting for the influence of pelvic tilt. Ninety virtual total hip arthroplasties were simulated from computed tomography data of 6 patients with 15 predetermined cup orientations. For each simulated implantation, anteroposterior (AP) virtual pelvic radiographs were generated for 11 predetermined pelvic tilts. A linear regression model was created to capture the relationship between the radiographic cup anteversion angle error measured on AP pelvic radiographs and pelvic tilt. Overall, nine hundred and ninety virtual AP pelvic radiographs were measured, and 90 linear regression models were created. Pearson's correlation analyses confirmed a strong correlation between the errors of the conventional radiographic cup anteversion angle measured on AP pelvic radiographs and the magnitude of pelvic tilt (P < 0.001). The means of the 90 slopes and y-intercepts of the regression lines were -0.8 and -2.5°, respectively, and these were applied as the general correction parameters for the proposed tool to correct the conventional cup anteversion angle for the influence of pelvic tilt. The current method proposes to measure the pelvic tilt on a lateral radiograph and to use it as a correction for the radiographic cup anteversion measurement on an AP pelvic radiograph. Thus, both AP and lateral pelvic radiographs are required for the measurement of pelvic posture-integrated cup anteversion. Compared with conventional radiographic cup anteversion, the errors of pelvic posture-integrated radiographic cup anteversion were reduced from 10.03 (SD = 5.13) degrees to 2.53 (SD = 1.33) degrees. Pelvic posture-integrated cup anteversion measurement improves the accuracy of radiographic cup anteversion measurement, which shows the potential of further clarifying the etiology of postoperative instability based on planar radiographs. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
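A minimal sketch of how the reported general correction parameters (mean slope -0.8, mean y-intercept -2.5°) might be applied is given below. It assumes the measurement error varies linearly with pelvic tilt as error ≈ slope × tilt + intercept and that the corrected angle is obtained by subtracting the predicted error from the conventional AP-radiograph measurement; the sign convention for pelvic tilt follows whatever convention was used when the regressions were fitted, so the input values are purely illustrative.

```python
SLOPE_DEG_PER_DEG = -0.8   # mean slope of anteversion error vs. pelvic tilt (reported)
INTERCEPT_DEG = -2.5       # mean y-intercept of the error regression (reported)

def corrected_anteversion(measured_anteversion_deg, pelvic_tilt_deg):
    """Subtract the tilt-predicted measurement error from the conventional
    radiographic cup anteversion angle (all values in degrees)."""
    predicted_error = SLOPE_DEG_PER_DEG * pelvic_tilt_deg + INTERCEPT_DEG
    return measured_anteversion_deg - predicted_error

# Example: 20 deg measured anteversion with 5 deg of pelvic tilt (illustrative numbers)
print(corrected_anteversion(20.0, 5.0))   # 20 - (-0.8*5 - 2.5) = 26.5 deg
```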
Online production validation in a HEP environment
NASA Astrophysics Data System (ADS)
Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.
2017-03-01
In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a large amount of resources demands efficient production of simulations and early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.
Modeling longitudinal data, I: principles of multivariate analysis.
Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick
2009-01-01
Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
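The decomposition described above (a systematic component captured by model coefficients and an error component used for precision measures) can be made concrete with a small ordinary-least-squares example. This is a generic illustration on simulated data, not code or data from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated exposure (x) and outcome (y): y = 2 + 0.5*x + error
n = 200
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)

# Systematic component: coefficients of the fitted model
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Error component: residual variance -> standard errors -> 95% confidence intervals
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
for name, b, s in zip(["intercept", "slope"], beta, se):
    print(f"{name}: {b:.3f} (95% CI {b - 1.96*s:.3f} to {b + 1.96*s:.3f})")
```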
Structured methods for identifying and correcting potential human errors in aviation operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1997-10-01
Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).
Error-related negativities elicited by monetary loss and cues that predict loss.
Dunning, Jonathan P; Hajcak, Greg
2007-11-19
Event-related potential studies have reported error-related negativity following both error commission and feedback indicating errors or monetary loss. The present study examined whether error-related negativities could be elicited by a predictive cue presented prior to both the decision and subsequent feedback in a gambling task. Participants were presented with a cue that indicated the probability of reward on the upcoming trial (0, 50, and 100%). Results showed a negative deflection in the event-related potential in response to loss cues compared with win cues; this waveform shared a similar latency and morphology with the traditional feedback error-related negativity.
Acheampong, Franklin; Tetteh, Ashalley Raymond; Anto, Berko Panyin
2016-12-01
This study determined the incidence, types, clinical significance, and potential causes of medication administration errors (MAEs) at the emergency department (ED) of a tertiary health care facility in Ghana. This study used a cross-sectional nonparticipant observational technique. Study participants (nurses) were observed preparing and administering medication at the ED of a 2000-bed tertiary care hospital in Accra, Ghana. The observations were then compared with patients' medication charts, and identified errors were clarified with staff for possible causes. Of the 1332 observations made, involving 338 patients and 49 nurses, 362 had errors, representing 27.2%. However, the error rate excluding "lack of drug availability" fell to 12.8%. Without wrong time error, the error rate was 22.8%. The 2 most frequent error types were omission (n = 281, 77.6%) and wrong time (n = 58, 16%) errors. Omission error was mainly due to unavailability of medicine, 48.9% (n = 177). Although only one of the errors was potentially fatal, 26.7% were definitely clinically severe. The common themes that dominated the probable causes of MAEs were unavailability, staff factors, patient factors, prescription, and communication problems. This study gives credence to similar studies in different settings that MAEs occur frequently in the ED of hospitals. Most of the errors identified were not potentially fatal; however, preventive strategies need to be used to make life-saving processes such as drug administration in such specialized units error-free.
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed under the multi-pollutant setting.
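Of the correction methods listed above, regression calibration is the simplest to sketch. The toy simulation below (not taken from any of the reviewed studies) shows the classical attenuation of a health-effect estimate when an error-prone modeled exposure is used directly, and its recovery when the exposure is first calibrated against validation measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# True exposure, error-prone modeled exposure, and outcome (illustrative values)
x_true = rng.normal(10.0, 2.0, n)
w = x_true + rng.normal(0.0, 2.0, n)            # modeled exposure with classical error
y = 0.3 * x_true + rng.normal(0.0, 1.0, n)      # health outcome, true effect = 0.3

def slope(a, b):
    """OLS slope of b regressed on a (with intercept)."""
    A = np.column_stack([np.ones_like(a), a])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1]

naive = slope(w, y)                              # attenuated estimate

# Regression calibration: model E[X | W] on a validation subset, then
# substitute the calibrated exposure in the health model
val = rng.choice(n, size=500, replace=False)     # hypothetical validation subset
lam = slope(w[val], x_true[val])                 # calibration slope
x_hat = np.mean(x_true[val]) + lam * (w - np.mean(w[val]))
corrected = slope(x_hat, y)

print(f"true effect 0.30 | naive {naive:.3f} | regression-calibrated {corrected:.3f}")
```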
Hurford, Amy
2009-05-20
Movement data are frequently collected using Global Positioning System (GPS) receivers, but recorded GPS locations are subject to errors. While past studies have suggested methods to improve location accuracy, mechanistic movement models utilize distributions of turning angles and directional biases and these data present a new challenge in recognizing and reducing the effect of measurement error. I collected locations from a stationary GPS collar, analyzed a probabilistic model and used Monte Carlo simulations to understand how measurement error affects measured turning angles and directional biases. Results from each of the three methods were in complete agreement: measurement error gives rise to a systematic bias where a stationary animal is most likely to be measured as turning 180 degrees or moving towards a fixed point in space. These spurious effects occur in GPS data when the measured distance between locations is <20 meters. Measurement error must be considered as a possible cause of 180 degree turning angles in GPS data. Consequences of failing to account for measurement error are predicting overly tortuous movement, numerous returns to previously visited locations, inaccurately predicting species range, core areas, and the frequency of crossing linear features. By understanding the effect of GPS measurement error, ecologists are able to disregard false signals to more accurately design conservation plans for endangered wildlife.
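The systematic bias described above is easy to reproduce with a few lines of simulation: for a truly stationary animal, every recorded "move" is pure GPS noise around a fixed point, and successive displacement vectors tend to point in roughly opposite directions, so measured turning angles pile up near 180 degrees. The noise magnitude below is an assumed value, not the error of the collar used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stationary animal: all apparent movement is GPS measurement error
n_fix = 10_000
sigma_m = 5.0                                    # assumed GPS error per axis (metres)
pts = rng.normal(0.0, sigma_m, size=(n_fix, 2))

# Turning angle at each interior fix: angle between successive displacement vectors
v1 = pts[1:-1] - pts[:-2]
v2 = pts[2:] - pts[1:-1]
cosang = np.sum(v1 * v2, axis=1) / (np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
turn_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Spurious concentration of turning angles near 180 degrees
print(f"median turning angle: {np.median(turn_deg):.1f} deg")
print(f"fraction of turns > 120 deg: {np.mean(turn_deg > 120):.2f}")
```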
NASA Astrophysics Data System (ADS)
Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew
2017-11-01
Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
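A schematic version of the error-model idea described above (grouping by electrode in addition to a linear fit of error against transfer resistance) is sketched below. It is not the authors' published fitting code: it simply assigns each measurement to the groups defined by the electrodes it used, fits error = a + b·|R| separately per group, and predicts a per-measurement error that could populate a diagonal data-weighting matrix. All numbers in the demo are synthetic.

```python
import numpy as np
from collections import defaultdict

def fit_error_model(resistance, recip_error, electrodes):
    """Fit |error| = a + b*|R| separately for each group of measurements sharing
    an electrode. `electrodes` is an (n_meas, 4) array of electrode ids."""
    groups = defaultdict(list)
    for i, meas_elecs in enumerate(electrodes):
        for e in meas_elecs:                 # a measurement joins each of its electrodes' groups
            groups[int(e)].append(i)
    models = {}
    for e, idx in groups.items():
        A = np.column_stack([np.ones(len(idx)), np.abs(resistance[idx])])
        coef, *_ = np.linalg.lstsq(A, np.abs(recip_error[idx]), rcond=None)
        models[e] = coef                     # (a, b) for electrode group e
    return models

def predicted_error(models, resistance, electrodes):
    """Predict each measurement's error as the mean of its electrode-group models."""
    pred = np.zeros(len(resistance))
    for i, meas_elecs in enumerate(electrodes):
        est = [models[int(e)] @ np.array([1.0, abs(resistance[i])]) for e in meas_elecs]
        pred[i] = np.mean(est)
    return pred                               # e.g. 1/pred**2 on the diagonal weighting matrix

# Tiny synthetic demo (illustrative numbers only)
rng = np.random.default_rng(4)
R = rng.uniform(1.0, 100.0, 200)
err = 0.001 + 0.02 * R + rng.normal(0.0, 0.05, 200)
elec = rng.integers(1, 33, size=(200, 4))     # 32-electrode array, 4 electrodes per measurement
models = fit_error_model(R, err, elec)
print(predicted_error(models, R, elec)[:5])
```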
Effects of microgravity on tissue perfusion and the efficacy of astronaut denitrogenation for EVA
NASA Technical Reports Server (NTRS)
Gerth, Wayne A.; Vann, Richard D.; Leatherman, Nelson E.; Feezor, Michael D.
1987-01-01
A potentially flight-applicable, breath-by-breath method for measuring N2 elimination from human subjects breathing 100 percent O2 for 2-3 hr periods has been developed. The present report describes this development with particular emphasis on required methodological accuracy and its achievement in view of certain properties of mass spectrometer performance. A method for the breath-by-breath analysis of errors in measured N2 elimination profiles is also described.
Ford, Eric C; Terezakis, Stephanie; Souranis, Annette; Harris, Kendra; Gay, Hiram; Mutic, Sasa
2012-11-01
To quantify the error-detection effectiveness of commonly used quality control (QC) measures. We analyzed incidents from 2007-2010 logged into a voluntary in-house, electronic incident learning systems at 2 academic radiation oncology clinics. None of the incidents resulted in patient harm. Each incident was graded for potential severity using the French Nuclear Safety Authority scoring scale; high potential severity incidents (score >3) were considered, along with a subset of 30 randomly chosen low severity incidents. Each report was evaluated to identify which of 15 common QC checks could have detected it. The effectiveness was calculated, defined as the percentage of incidents that each QC measure could detect, both for individual QC checks and for combinations of checks. In total, 4407 incidents were reported, 292 of which had high-potential severity. High- and low-severity incidents were detectable by 4.0 ± 2.3 (mean ± SD) and 2.6 ± 1.4 QC checks, respectively (P<.001). All individual checks were less than 50% sensitive with the exception of pretreatment plan review by a physicist (63%). An effectiveness of 97% was achieved with 7 checks used in combination and was not further improved with more checks. The combination of checks with the highest effectiveness includes physics plan review, physician plan review, Electronic Portal Imaging Device-based in vivo portal dosimetry, radiation therapist timeout, weekly physics chart check, the use of checklists, port films, and source-to-skin distance checks. Some commonly used QC checks such as pretreatment intensity modulated radiation therapy QA do not substantially add to the ability to detect errors in these data. The effectiveness of QC measures in radiation oncology depends sensitively on which checks are used and in which combinations. A small percentage of errors cannot be detected by any of the standard formal QC checks currently in broad use, suggesting that further improvements are needed. These data require confirmation with a broader incident-reporting database. Copyright © 2012 Elsevier Inc. All rights reserved.
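The notion of combined effectiveness used above (the percentage of incidents caught by at least one check in a set) can be sketched as a greedy set-cover calculation over a boolean incident-by-check matrix. The matrix below is random placeholder data, not the incident-learning data analysed in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# detectable[i, j] = True if QC check j could have caught incident i (placeholder data)
n_incidents, n_checks = 300, 15
detectable = rng.random((n_incidents, n_checks)) < rng.uniform(0.05, 0.5, n_checks)

def effectiveness(check_ids):
    """Fraction of incidents detectable by at least one check in the combination."""
    return detectable[:, list(check_ids)].any(axis=1).mean()

# Greedy selection: add the check that raises combined effectiveness the most
selected = []
while len(selected) < n_checks:
    best = max((j for j in range(n_checks) if j not in selected),
               key=lambda j: effectiveness(selected + [j]))
    selected.append(best)
    print(f"{len(selected)} checks: effectiveness = {effectiveness(selected):.1%}")
    if effectiveness(selected) == 1.0:
        break
```

A plateau in the printed effectiveness mirrors the study's finding that beyond roughly seven well-chosen checks, adding more checks no longer improves detection.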
NASA Astrophysics Data System (ADS)
Stegehuis, Paulien L.; Boogerd, Leonora S. F.; Inderson, Akin; Veenendaal, Roeland A.; van Gerven, P.; Bonsing, Bert A.; Sven Mieog, J.; Amelink, Arjen; Veselic, Maud; Morreau, Hans; van de Velde, Cornelis J. H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Robinson, Dominic J.; Vahrmeijer, Alexander L.
2017-02-01
Endoscopic ultrasound-guided fine needle aspirations (EUS-FNA) of pancreatic masses suffer from sampling errors and low negative predictive values. Fiber-optic spectroscopy in the visible to near-infrared wavelength spectrum can noninvasively extract physiological parameters from tissue and has the potential to guide the sampling process and reduce sampling errors. We assessed the feasibility of single fiber (SF) reflectance spectroscopy measurements during EUS-FNA of pancreatic masses and its ability to distinguish benign from malignant pancreatic tissue. A single optical fiber was placed inside a 19-gauge biopsy needle during EUS-FNA and at least three reflectance measurements were taken prior to FNA. Spectroscopy measurements did not cause any related adverse events and prolonged the procedure time by ~5 min. An accurate correlation between spectroscopy measurements and cytology could be made in nine patients (three benign and six malignant). The oxygen saturation and bilirubin concentration were significantly higher in benign tissue compared with malignant tissue (55% versus 21%, p=0.038; 166 μmol/L versus 17 μmol/L, p=0.039, respectively). To conclude, incorporation of SF spectroscopy during EUS-FNA was feasible, safe, and relatively quick to perform. The optical properties of benign and malignant pancreatic tissue are different, implying that SF spectroscopy can potentially guide the FNA sampling.
Measurement of nutritional status in simulated microgravity by bioelectrical impedance spectroscopy
NASA Technical Reports Server (NTRS)
Bartok, Cynthia; Atkinson, Richard L.; Schoeller, Dale A.
2003-01-01
The potential of bioelectrical impedance spectroscopy (BIS) for assessing nutritional status in spaceflight was tested in two head-down-tilt bed-rest studies. BIS-predicted extracellular water (ECW), intracellular water (ICW), and total body water (TBW) measured using knee-elbow electrode placement were compared with deuterium and bromide dilution (DIL) volumes in healthy, 19- to 45-yr-old subjects. BIS was accurate during 44 h of head-down tilt with mean differences (BIS - DIL) of 0-0.1 kg for ECW, 0.3-0.5 for ICW, and 0.4-0.6 kg for TBW (n = 28). At 44 h, BIS followed the within-individual change in body water compartments with a relative prediction error (standard error of the estimate/baseline volume) of 2.0-3.6% of water space. In the second study, BIS did not detect an acute decrease (-1.41 +/- 0.91 kg) in ICW secondary to 48 h of a protein-free, 800 kcal/day diet (n = 18). BIS's insensitivity to ICW losses may be because they were predominantly (65%) localized to the trunk and/or because there was a general failure of BIS to measure ICW independently of ECW and TBW. BIS may have potential for measuring nutritional status during spaceflight, but its limitations in precision and insensitivity to acute ICW changes warrant further validation studies.
Collery, Ross F.; Veth, Kerry N.; Dubis, Adam M.; Carroll, Joseph; Link, Brian A.
2014-01-01
Refractive errors in vision can be caused by aberrant axial length of the eye, irregular corneal shape, or lens abnormalities. Causes of eye length overgrowth include multiple genetic loci and visual parameters. We evaluate zebrafish as a potential animal model for studies of the genetic, cellular, and signaling basis of emmetropization and myopia. Axial length and other eye dimensions of zebrafish were measured using spectral domain-optical coherence tomography (SD-OCT). We used ocular lens and body metrics to normalize and compare eye size and relative refractive error (difference between observed retinal radial length and controls) in wild-type and lrp2 zebrafish. Zebrafish were dark-reared to assess effects of visual deprivation on eye size. Two relative measurements, ocular axial length to body length and axial length to lens diameter, were found to accurately normalize comparisons of eye sizes between different-sized fish (R² = 0.9548, R² = 0.9921). Ray-traced focal lengths of wild-type zebrafish lenses were equal to their retinal radii, while lrp2 eyes had longer retinal radii than focal lengths. Both genetic mutation (lrp2) and environmental manipulation (dark-rearing) caused elongated eye axes. lrp2 mutants had relative refractive errors of −0.327 compared to wild-types, and dark-reared wild-type fish had relative refractive errors of −0.132 compared to light-reared siblings. Therefore, zebrafish eye anatomy (axial length, lens radius, retinal radius) can be rapidly and accurately measured by SD-OCT, facilitating longitudinal studies of regulated eye growth and emmetropization. Specifically, genes homologous to human myopia candidates may be modified, inactivated or overexpressed in zebrafish, and myopia-sensitizing conditions used to probe gene-environment interactions. Our studies provide a foundation for such investigations into genetic contributions that control eye size and impact refractive errors. PMID:25334040
Research in China on event-related potentials in patients with schizophrenia
Wang, Jijun; Guo, Qian
2012-01-01
Abstract Event-related potentials (ERPs) are objective electrophysiological indicators that can be used to assess cognitive processes in the human brain. Psychiatric researchers in China have applied this method to study schizophrenia since the early 1980s. ERP measures used in the study of schizophrenia include contingent negative variation (CNV), P300, mismatch negativity (MMN), error-related negativity (ERN) and auditory P50 inhibition. This review summarizes the main findings of ERP research in patients with schizophrenia reported by Chinese investigators. PMID:25324605
The reduction of a ""safety catastrophic'' potential hazard: A case history
NASA Technical Reports Server (NTRS)
Jones, J. P.
1971-01-01
A worst case analysis is reported on the safety of time watch movements for triggering explosive packages on the lunar surface in an experiment to investigate physical lunar structural characteristics through induced seismic energy waves. Considered are the combined effects of low pressure, low temperature, lunar gravity, gear train error, and position. Control measures comprise a seal control cavity and design requirements to prevent overbanking in the mainspring torque curve. Thus, the potential hazard is reduced to a negligible safety level.
Dual-joint modeling for estimation of total knee replacement contact forces during locomotion.
Hast, Michael W; Piazza, Stephen J
2013-02-01
Model-based estimation of in vivo contact forces arising between components of a total knee replacement is challenging because such forces depend upon accurate modeling of muscles, tendons, ligaments, contact, and multibody dynamics. Here we describe an approach to solving this problem with results that are tested by comparison to knee loads measured in vivo for a single subject and made available through the Grand Challenge Competition to Predict in vivo Tibiofemoral Loads. The approach makes use of a "dual-joint" paradigm in which the knee joint is alternately represented by (1) a ball-joint knee for inverse dynamic computation of required muscle controls and (2) a 12 degree-of-freedom (DOF) knee with elastic foundation contact at the tibiofemoral and patellofemoral articulations for forward dynamic integration. Measured external forces and kinematics were applied as a feedback controller and static optimization attempted to track measured knee flexion angles and electromyographic (EMG) activity. The resulting simulations showed excellent tracking of knee flexion (average RMS error of 2.53 deg) and EMG (muscle activations within ±10% envelopes of normalized measured EMG signals). Simulated tibiofemoral contact forces agreed qualitatively with measured contact forces, but their RMS errors were approximately 25% of the peak measured values. These results demonstrate the potential of a dual-joint modeling approach to predict joint contact forces from kinesiological data measured in the motion laboratory. It is anticipated that errors in the estimation of contact force will be reduced as more accurate subject-specific models of muscles and other soft tissues are developed.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Smith, Everett V., Jr.
2007-01-01
Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection
Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-01-01
Background The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.
Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-08-18
The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.
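As a rough sketch of the two quantities reported above (the crude error rate and the per-day change in error odds), the snippet below computes an aggregate rate from counts of committed and potential errors and fits a logistic model of error probability against days of application use. The per-observation data are simulated placeholders chosen to resemble the reported figures; they are not the Liberia survey data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# One row per potential error opportunity: day of data collection and whether an error occurred
n = 7817
day = rng.integers(0, 46, n)
p_err = 0.023 * (0.969 ** day)                 # declining daily error probability (illustrative)
error = rng.random(n) < p_err

# Crude aggregate error rate = errors committed / potential errors
print(f"aggregate error rate: {error.sum()}/{n} = {error.mean():.2%}")

# Logistic regression of error on day; exp(coefficient) is the per-day odds ratio
model = LogisticRegression(C=1e6).fit(day.reshape(-1, 1), error)
print(f"per-day odds ratio: {np.exp(model.coef_[0, 0]):.3f}")
```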
Measurement-based quantum communication with resource states generated by entanglement purification
NASA Astrophysics Data System (ADS)
Wallnöfer, J.; Dür, W.
2017-01-01
We investigate measurement-based quantum communication with noisy resource states that are generated by entanglement purification. We consider the transmission of encoded information via noisy quantum channels using a measurement-based implementation of encoding, error correction, and decoding. We show that such an approach offers advantages over direct transmission, gate-based error correction, and measurement-based schemes with direct generation of resource states. We analyze the noise structure of resource states generated by entanglement purification and show that a local error model, i.e., noise acting independently on all qubits of the resource state, is a good approximation in general, and provides an exact description for Greenberger-Horne-Zeilinger states. The latter are resources for a measurement-based implementation of error-correction codes for bit-flip or phase-flip errors. This provides an approach to link the recently found very high thresholds for fault-tolerant measurement-based quantum information processing based on local error models for resource states with error thresholds for gate-based computational models.
Accurate characterisation of hole size and location by projected fringe profilometry
NASA Astrophysics Data System (ADS)
Wu, Yuxiang; Dantanarayana, Harshana G.; Yue, Huimin; Huntley, Jonathan M.
2018-06-01
The ability to accurately estimate the location and geometry of holes is often required in the field of quality control and automated assembly. Projected fringe profilometry is a potentially attractive technique on account of being non-contacting, of lower cost, and orders of magnitude faster than the traditional coordinate measuring machine. However, we demonstrate in this paper that fringe projection is susceptible to significant (hundreds of µm) measurement artefacts in the neighbourhood of hole edges, which give rise to errors of a similar magnitude in the estimated hole geometry. A mechanism for the phenomenon is identified based on the finite size of the imaging system’s point spread function and the resulting bias produced near to sample discontinuities in geometry and reflectivity. A mathematical model is proposed, from which a post-processing compensation algorithm is developed to suppress such errors around the holes. The algorithm includes a robust and accurate sub-pixel edge detection method based on a Fourier descriptor of the hole contour. The proposed algorithm was found to reduce significantly the measurement artefacts near the hole edges. As a result, the errors in estimated hole radius were reduced by up to one order of magnitude, to a few tens of µm for hole radii in the range 2–15 mm, compared to those from the uncompensated measurements.
Agogo, George O; van der Voet, Hilko; van 't Veer, Pieter; Ferrari, Pietro; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek C
2016-10-13
Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there is no internal validation data. We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there is no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruits and vegetables (FV) intakes, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Using the proposed method resulted in about four times increase in the strength of association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.
Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D
2018-05-18
Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error and intended moment of model use were extracted. Susceptibility to measurement error for each predictor was classified into low and high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as being at high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Simmons, B. E.
1981-08-01
This report derives equations predicting satellite ephemeris error as a function of measurement errors of space-surveillance sensors. These equations lend themselves to rapid computation with modest computer resources. They are applicable over prediction times such that measurement errors, rather than uncertainties of atmospheric drag and of Earth shape, dominate in producing ephemeris error. This report describes the specialization of these equations underlying the ANSER computer program, SEEM (Satellite Ephemeris Error Model). The intent is that this report be of utility to users of SEEM for interpretive purposes, and to computer programmers who may need a mathematical point of departure for limited generalization of SEEM.
A potential quantitative method for assessing individual tree performance
Lance A. Vickers; David R. Larsen; Daniel C. Dey; John M. Kabrick; Benjamin O. Knapp
2014-01-01
By what standard should a tree be judged? This question, perhaps unknowingly, is posed almost daily by practicing foresters. Unfortunately, there are few cases in which clearly defined quantitative (i.e., directly measurable) references have been established in forestry. A lack of common references may be an unnecessary source of error in silvicultural application and...
Measurement of Knock Characteristics in Spark-ignition Engines
NASA Technical Reports Server (NTRS)
Schutz, R
1940-01-01
This paper presents a discussion of three potential sources of error in recording engine knocking which are: the natural oscillation of the membrane, the shock process between test contacts, and the danger of burned contacts. Following this discussion, the paper calls attention to various results which make the bouncing-pin indicator appear fundamentally unsuitable for recording knock phenomena.
All the adiabatic bound states of NO₂
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salzgeber, R.F.; Mandelshtam, V.; Schlier, C.
1998-07-01
We calculated all 2967 even and odd bound states of the adiabatic ground state of NO₂, using a modification of the ab initio potential energy surface of Leonardi et al. [J. Chem. Phys. 105, 9051 (1996)]. The calculation was performed by harmonic inversion of the Chebyshev correlation function generated by a DVR Hamiltonian in Radau coordinates. The relative error for the computed eigenenergies (measured from the potential minimum) is 10⁻⁴ or better, corresponding to an absolute error of less than about 2.5 cm⁻¹. Near the dissociation threshold the average density of states is about 0.2/cm⁻¹ for each symmetry. Statistical analysis of the states shows some interesting structure of the rigidity parameter Δ₃ as a function of energy. © 1998 American Institute of Physics.
Tully, Mary P; Buchan, Iain E
2009-12-01
To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median 9, range 4-17, pharmacists collecting data per day) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at the patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.
Weinberg, Anna; Klein, Daniel N; Hajcak, Greg
2012-11-01
Generalized anxiety disorder (GAD) and major depressive disorder (MDD) are so frequently comorbid that some have suggested that the 2 should be collapsed into a single overarching "distress" disorder. Yet there is also increasing evidence that the 2 categories are not redundant. Neurobehavioral markers that differentiate GAD and MDD would be helpful in ongoing efforts to refine classification schemes based on neurobiological measures. The error-related negativity (ERN) may be one such marker. The ERN is an event-related potential component that presents as a negative deflection approximately 50 ms following an erroneous response and reflects activity of the anterior cingulate cortex. There is evidence for an enhanced ERN in individuals with GAD, but the literature in MDD is mixed. The present study measured the ERN in 26 GAD, 23 comorbid GAD and MDD, and 36 control participants, all of whom were female and medication-free. Consistent with previous research, the GAD group was characterized by a larger ERN and an increased difference between error and correct trials relative to controls. No such enhancement was evident in the comorbid group, suggesting comorbid depression may moderate the relationship between the ERN and anxiety. The present study further suggests that the ERN is a potentially useful neurobiological marker for future studies that consider the pathophysiology of multiple disorders in order to construct or refine neurobiologically based diagnostic phenotypes. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Evaluation of a UMLS Auditing Process of Semantic Type Assignments
Gu, Huanying; Hripcsak, George; Chen, Yan; Morrey, C. Paul; Elhanan, Gai; Cimino, James J.; Geller, James; Perl, Yehoshua
2007-01-01
The UMLS is a terminological system that integrates many source terminologies. Each concept in the UMLS is assigned one or more semantic types from the Semantic Network, an upper level ontology for biomedicine. Due to the complexity of the UMLS, errors exist in the semantic type assignments. Finding assignment errors may unearth modeling errors. Even with sophisticated tools, discovering assignment errors requires manual review. In this paper we describe the evaluation of an auditing project of UMLS semantic type assignments. We studied the performance of the auditors who reviewed potential errors. We found that four auditors, interacting according to a multi-step protocol, identified a high rate of errors (one or more errors in 81% of concepts studied) and that results were sufficiently reliable (0.67 to 0.70) for the two most common types of errors. However, reliability was low for each individual auditor, suggesting that review of potential errors is resource-intensive. PMID:18693845
Clarification of terminology in medication errors: definitions and classification.
Ferner, Robin E; Aronson, Jeffrey K
2006-01-01
We have previously described and analysed some terms that are used in drug safety and have proposed definitions. Here we discuss and define terms that are used in the field of medication errors, particularly terms that are sometimes misunderstood or misused. We also discuss the classification of medication errors. A medication error is a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient. Errors can be classified according to whether they are mistakes, slips, or lapses. Mistakes are errors in the planning of an action. They can be knowledge based or rule based. Slips and lapses are errors in carrying out an action - a slip through an erroneous performance and a lapse through an erroneous memory. Classification of medication errors is important because the probabilities of errors of different classes are different, as are the potential remedies.
Burton, Brett M; Aras, Kedar K; Good, Wilson W; Tate, Jess D; Zenger, Brian; MacLeod, Rob S
2018-05-21
The biophysical basis for electrocardiographic evaluation of myocardial ischemia stems from the notion that ischemic tissues develop, with relative uniformity, along the endocardial aspects of the heart. These injured regions of subendocardial tissue give rise to intramural currents that lead to ST segment deflections within electrocardiogram (ECG) recordings. The concept of subendocardial ischemic regions is often used in clinical practice, providing a simple and intuitive description of ischemic injury; however, such a model grossly oversimplifies the presentation of ischemic disease, inadvertently leading to errors in ECG-based diagnoses. Furthermore, recent experimental studies have brought into question the subendocardial ischemia paradigm suggesting instead a more distributed pattern of tissue injury. These findings come from experiments and so have both the impact and the limitations of measurements from living organisms. Computer models have often been employed to overcome the constraints of experimental approaches and have a robust history in cardiac simulation. To this end, we have developed a computational simulation framework aimed at elucidating the effects of ischemia on measurable cardiac potentials. To validate our framework, we simulated, visualized, and analyzed 226 experimentally derived acute myocardial ischemic events. Simulation outcomes agreed both qualitatively (feature comparison) and quantitatively (correlation, average error, and significance) with experimentally obtained epicardial measurements, particularly under conditions of elevated ischemic stress. Our simulation framework introduces a novel approach to incorporating subject-specific, geometric models and experimental results that are highly resolved in space and time into computational models. We propose this framework as a means to advance the understanding of the underlying mechanisms of ischemic disease while simultaneously putting in place the computational infrastructure necessary to study and improve ischemia models aimed at reducing diagnostic errors in the clinic.
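The quantitative comparison described (correlation and average error between simulated and measured epicardial potentials) can be sketched as below. This is not the authors' framework; the function and array names are hypothetical placeholders for potentials at matched electrode locations.

```python
import numpy as np

def validation_metrics(simulated_mv, measured_mv):
    """Correlation and average absolute error between matched potential samples."""
    sim = np.asarray(simulated_mv, dtype=float)
    meas = np.asarray(measured_mv, dtype=float)
    corr = np.corrcoef(sim, meas)[0, 1]
    avg_err = np.mean(np.abs(sim - meas))
    return corr, avg_err

# Hypothetical ST-segment potentials (mV) at the same epicardial electrodes.
simulated = [0.8, 1.2, -0.3, 0.1, 2.0, 1.5]
measured  = [0.9, 1.1, -0.2, 0.0, 1.8, 1.6]
corr, avg_err = validation_metrics(simulated, measured)
print(f"correlation = {corr:.2f}, average error = {avg_err:.2f} mV")
```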
St James, Sara; Seco, Joao; Mishra, Pankaj; Lewis, John H
2013-09-01
The purpose of this work is to present a framework to evaluate the accuracy of four-dimensional treatment planning in external beam radiation therapy using measured patient data and digital phantoms. To accomplish this, 4D digital phantoms of two model patients were created using measured patient lung tumor positions. These phantoms were used to simulate a four-dimensional computed tomography image set, which in turn was used to create a 4D Monte Carlo (4DMC) treatment plan. The 4DMC plan was evaluated by simulating the delivery of the treatment plan over approximately 5 min of tumor motion measured from the same patient on a different day. Unique phantoms accounting for the patient position (tumor position and thorax position) at 2 s intervals were used to represent the model patients on the day of treatment delivery, and the delivered dose to the tumor was determined using Monte Carlo simulations. For Patient 1, the tumor was adequately covered, with 95.2% of the tumor receiving the prescribed dose. For Patient 2, the tumor was not adequately covered, and only 74.3% of the tumor received the prescribed dose. This study presents a framework to evaluate 4D treatment planning methods and demonstrates a potential limitation of these methods. When systematic errors are present, including when the imaging study used for treatment planning does not represent all potential tumor locations during therapy, the treatment planning methods may not adequately predict the dose to the tumor. This is the first example of a simulation study based on patient tumor trajectories in which systematic errors arising from an inaccurate estimate of tumor motion are evaluated.
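The coverage figures quoted (95.2% and 74.3% of the tumor receiving the prescribed dose) correspond to a simple voxel-wise calculation; a minimal sketch is shown below, with a hypothetical dose grid, tumor mask, and prescription rather than the study's Monte Carlo output.

```python
import numpy as np

def tumor_coverage(dose_gy, tumor_mask, prescribed_gy):
    """Fraction of tumor voxels receiving at least the prescribed dose."""
    tumor_dose = dose_gy[tumor_mask]
    return np.count_nonzero(tumor_dose >= prescribed_gy) / tumor_dose.size

# Hypothetical 3D dose distribution (Gy) and tumor mask.
rng = np.random.default_rng(1)
dose = rng.normal(loc=60.0, scale=3.0, size=(50, 50, 30))
mask = np.zeros_like(dose, dtype=bool)
mask[20:30, 20:30, 10:20] = True

print(f"coverage = {100 * tumor_coverage(dose, mask, prescribed_gy=57.0):.1f}%")
```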
Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil
Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao
2018-01-01
An innovative array of magnetic coils (the discrete Rogowski coil, RC) with the advantages of flexible structure, miniaturization, and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC's interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It is also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC agreeing closely with the measured values. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors. PMID:29534006
Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil.
Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao
2018-03-13
An innovative array of magnetic coils (the discrete Rogowski coil, RC) with the advantages of flexible structure, miniaturization, and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC's interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It is also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC agreeing closely with the measured values. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors.
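For intuition about the mutual inductance being calculated, here is a minimal sketch of the closed-form result for an idealized rectangular-cross-section coil encircling a long straight conductor, M = mu0 * N * h / (2 * pi) * ln(b / a). This is a textbook special case, not the MVP computation for the discrete coil described above, and the coil dimensions are hypothetical.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def rogowski_mutual_inductance(n_turns, height_m, r_inner_m, r_outer_m):
    """Mutual inductance of an ideal rectangular-section Rogowski coil
    encircling an infinitely long straight conductor."""
    return MU_0 * n_turns * height_m / (2 * math.pi) * math.log(r_outer_m / r_inner_m)

# Hypothetical coil: 1000 turns, 10 mm axial height, 20 mm inner / 30 mm outer radius.
m = rogowski_mutual_inductance(1000, 0.010, 0.020, 0.030)
print(f"M = {m * 1e9:.1f} nH")
```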
Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.
2012-01-01
Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean-square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites and reduce the root-mean-square error by 25 percent. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.
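The base-flow recession time constant τ can be estimated from each pair of discharge measurements made during an uninterrupted recession by assuming exponential decay, Q(t2) = Q(t1) * exp(-(t2 - t1) / τ). A minimal sketch follows; the measurement pairs are hypothetical and the simple averaging of pairwise estimates is an assumption, not the report's procedure.

```python
import math

def recession_time_constant(q1, q2, dt_days):
    """Estimate tau (days) from two discharges on the same recession limb (q1 > q2 > 0)."""
    return dt_days / math.log(q1 / q2)

# Hypothetical paired low-flow measurements: (Q1 cfs, Q2 cfs, days apart).
pairs = [(12.0, 9.5, 10), (8.0, 6.1, 12), (15.0, 11.0, 14),
         (6.0, 4.4, 9), (10.0, 7.2, 15), (9.0, 7.4, 8),
         (20.0, 14.0, 16), (5.0, 3.9, 7)]

estimates = [recession_time_constant(q1, q2, dt) for q1, q2, dt in pairs]
tau = sum(estimates) / len(estimates)
print(f"tau ~ {tau:.1f} days from {len(pairs)} measurement pairs")
```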
Modeling work zone crash frequency by quantifying measurement errors in work zone length.
Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet
2013-06-01
Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety, while implementing necessary changes on roadways, is an important challenge traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses, generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed improved approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover, it is shown that the use of the traditional NB approach in this context can lead to overestimation of the effect of work zone length on crash occurrence.
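For reference, the traditional negative binomial baseline against which the measurement-error model is compared can be fit as in the sketch below; the ME-integrated model itself is not reproduced here, and the covariates and data are hypothetical placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 60  # hypothetical work zones

# Hypothetical covariates: log work zone length (miles), log AADT, duration (days).
X = np.column_stack([
    np.log(rng.uniform(0.5, 10.0, n)),
    np.log(rng.uniform(1e4, 1e5, n)),
    rng.uniform(30, 400, n),
])
X = sm.add_constant(X)
crashes = rng.poisson(3, n)  # placeholder crash counts

# Traditional NB crash-frequency model (no correction for error in work zone length).
nb_model = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=1.0))
result = nb_model.fit()
print(result.summary())
```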
De Menezes, Marcio; Cerón-Zapata, Ana Maria; López-Palacio, Ana Maria; Mapelli, Andrea; Pisoni, Luca; Sforza, Chiarella
2016-01-01
To assess a three-dimensional (3D) stereophotogrammetric method for area delimitation and evaluation of the dental arches of children with unilateral cleft lip and palate (UCLP). Obtained data were also used to assess the 3D changes occurring in the maxillary arch with the use of orthopedic therapy prior to rhinocheiloplasty and before the first year of life. Within the collaboration between the Università degli Studi di Milano (Italy) and the University CES of Medellin (Colombia), 96 palatal cast models obtained from neonatal patients with UCLP were analyzed using a 3D stereophotogrammetric imaging system. The areas of the minor and greater cleft segments on the digital dental cast surface were delineated by the visualization tool of the stereophotogrammetric software and then examined. The "trueness" of the measurements, as well as the systematic and random errors between operators' tracings ("precision"), were calculated. The method gave area measurements close to true values (errors lower than 2%), without systematic measurement errors for either interoperator or intraoperator tracings (P > .05). Statistically significant differences (P < .05) were noted for alveolar segment and time. Maxillary segments have the potential for growth during presurgical orthopedic treatment in the early neonatal period. The cleft segment delimitation on digital dental casts and area measurements by the 3D stereophotogrammetric system revealed an accurate (true and precise) method for evaluating the stone casts of newborn patients with UCLP.
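The "trueness" and "precision" calculations reported can be illustrated with a minimal sketch: trueness as the relative error of measured areas against reference values, systematic error as the mean of paired inter-operator differences (checked with a paired t-test), and random error as their standard deviation. All data and names are hypothetical, and this is only one plausible reading of the abstract's definitions.

```python
import numpy as np
from scipy import stats

# Hypothetical cleft-segment areas (mm^2): reference values and two operators' tracings.
reference = np.array([410.0, 380.0, 455.0, 402.0, 390.0])
operator_a = np.array([414.0, 376.0, 460.0, 398.0, 393.0])
operator_b = np.array([409.0, 383.0, 452.0, 405.0, 388.0])

# Trueness: mean absolute relative error against the reference (percent).
trueness = 100 * np.mean(np.abs(operator_a - reference) / reference)

# Precision between operators: systematic error (mean paired difference) and
# random error (SD of paired differences), with a paired t-test for bias.
diff = operator_a - operator_b
systematic, random_error = diff.mean(), diff.std(ddof=1)
t_stat, p_value = stats.ttest_rel(operator_a, operator_b)

print(f"trueness error = {trueness:.1f}%")
print(f"systematic = {systematic:.1f} mm^2, random = {random_error:.1f} mm^2, p = {p_value:.2f}")
```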
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation
NASA Astrophysics Data System (ADS)
Tobon-Gomez, Catalina; Sukno, Federico M.; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F.
2012-07-01
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.
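Point-to-surface error, one of the evaluation measures above, can be approximated by the distance from each point of the automatic mesh to the nearest vertex of a densely sampled manual surface. The sketch below uses a KD-tree for the nearest-neighbour lookup; the point sets are hypothetical and this nearest-vertex approximation is an assumption, not necessarily the authors' exact metric.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_surface_error(segmented_points, reference_surface_points):
    """Mean distance (mm) from segmented mesh points to the nearest reference point.
    Approximates point-to-surface error when the reference surface is densely sampled."""
    tree = cKDTree(reference_surface_points)
    distances, _ = tree.query(segmented_points)
    return distances.mean()

# Hypothetical point clouds (mm) for a manual and an automatic LV surface.
rng = np.random.default_rng(3)
reference = rng.normal(scale=30.0, size=(5000, 3))
segmented = reference[:800] + rng.normal(scale=1.5, size=(800, 3))

print(f"mean point-to-surface error = {point_to_surface_error(segmented, reference):.2f} mm")
```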
Can we improve patient safety?
Corbally, Martin Thomas
2014-01-01
Despite greater awareness of patient safety issues, especially in the operating room, and the widespread implementation of the World Health Organization (WHO) surgical time out, errors, especially wrong-site surgery, continue. Most such errors are due to lapses in communication in which decision makers fail to consult or confirm operative findings or, more worryingly, in which parental concerns over the planned procedure are ignored or not followed through. The WHO Surgical Pause/Time Out aims to capture these errors and prevent them, but the combination of human error and complex hospital environments can overwhelm even robust safety structures and simple common sense. Parents are the ultimate repository of information on their child's condition and planned surgery but are traditionally excluded from the process of Surgical Pause and Time Out, perhaps to avoid additional stress. In addition, surgeons, like pilots, are subject to the phenomenon of "plan-continue-fail," with potentially disastrous outcomes. If we wish to improve patient safety during surgery and avoid wrong-site errors, then we must include parents in the Surgical Pause/Time Out. A recent pilot study has shown that neither staff nor parents found that it added to their stress, and 100% of parents considered that it should be a mandatory component of the Surgical Pause. Surgeons should be required to confirm that the planned procedure is in keeping with the operative findings, especially in extirpative surgery, and this "step back" should be incorporated into the standard Surgical Pause. It is clear that we must improve patient safety further, and these simple measures should add to that potential.
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation.
Tobon-Gomez, Catalina; Sukno, Federico M; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F
2012-07-07
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.