Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Laboratory errors and patient safety.
Miligy, Dawlat A
2015-01-01
Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in our laboratory practice, their hazards to patient health care, and measures and recommendations to minimize or eliminate them. Laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of a private hospital in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were released to the patients. In contrast, errors in reports that had already been released to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study were consistent with those published from the USA and other countries, indicating that laboratory problems are universal and need general standardization and benchmarking measures. This is the first such dataset published from Arab countries evaluating encountered laboratory errors, and it highlights the need for universal standardization and benchmarking measures to control laboratory work.
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study
NASA Astrophysics Data System (ADS)
Bogren, W.; Kylling, A.; Burkhart, J. F.
2015-12-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
NASA Astrophysics Data System (ADS)
Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve
2016-03-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
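The geometry behind the tilt errors reported in these two records can be illustrated with a minimal sketch. The code below computes only the worst-case, direct-beam-only error for a sensor tilted directly toward the sun; it ignores the diffuse component, so the numbers come out slightly larger than the values quoted in the abstracts, and the function name and scenario are illustrative assumptions rather than the authors' model.

import numpy as np

def direct_beam_tilt_error(sza_deg, tilt_deg):
    # Worst-case relative error in measured direct irradiance when the
    # sensor is tilted directly toward the sun (diffuse light ignored).
    sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
    true = np.cos(sza)             # direct irradiance on a level sensor
    measured = np.cos(sza - tilt)  # direct irradiance on the tilted sensor
    return (measured - true) / true

for tilt in (1, 3, 5):
    print(f"tilt {tilt} deg: {100 * direct_beam_tilt_error(60, tilt):.1f}% error")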
Influence of measurement error on Maxwell's demon
NASA Astrophysics Data System (ADS)
Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.
2017-06-01
In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease in the system. The erasure still gives the same increase in entropy, and the total process is irreversible. Another consequence of measurement error is that a bad feedback is applied, which further increases the entropy production if the proper protocol adapted to the expected error rate is not applied. We consider the effect of measurement error on a realistic single-electron box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error ɛ.
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not that source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model using gravity with spherical, J2, and J4 terms plus a standard exponential atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem, a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
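One common way to realize the idea described above is to scale the theoretical weighted least squares covariance by the average weighted residual variance, so that unmodeled effects reflected in the residuals inflate the covariance. The sketch below is a hedged illustration of that approach, not necessarily the paper's exact formulation.

import numpy as np

def wls_with_empirical_covariance(H, y, W):
    # H: (m x n) design matrix, y: (m,) measurements, W: (m x m) weight matrix
    N = H.T @ W @ H                        # normal matrix
    x_hat = np.linalg.solve(N, H.T @ W @ y)
    r = y - H @ x_hat                      # measurement residuals
    m = len(y)
    avg_wrv = (r.T @ W @ r) / m            # average weighted residual variance
    P_theoretical = np.linalg.inv(N)       # maps assumed observation errors only
    P_empirical = avg_wrv * P_theoretical  # scaled by the actual residuals
    return x_hat, P_theoretical, P_empirical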
Error in total ozone measurements arising from aerosol attenuation
NASA Technical Reports Server (NTRS)
Thomas, R. W. L.; Basher, R. E.
1979-01-01
A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.
NASA Astrophysics Data System (ADS)
Saad, Katherine M.; Wunch, Debra; Deutscher, Nicholas M.; Griffith, David W. T.; Hase, Frank; De Mazière, Martine; Notholt, Justus; Pollard, David F.; Roehl, Coleen M.; Schneider, Matthias; Sussmann, Ralf; Warneke, Thorsten; Wennberg, Paul O.
2016-11-01
Global and regional methane budgets are markedly uncertain. Conventionally, estimates of methane sources are derived by bridging emissions inventories with atmospheric observations employing chemical transport models. The accuracy of this approach requires correctly simulating advection and chemical loss such that modeled methane concentrations scale with surface fluxes. When total column measurements are assimilated into this framework, modeled stratospheric methane introduces additional potential for error. To evaluate the impact of such errors, we compare Total Carbon Column Observing Network (TCCON) and GEOS-Chem total and tropospheric column-averaged dry-air mole fractions of methane. We find that the model's stratospheric contribution to the total column is insensitive to perturbations to the seasonality or distribution of tropospheric emissions or loss. In the Northern Hemisphere, we identify disagreement between the measured and modeled stratospheric contribution, which increases as the tropopause altitude decreases, and a temporal phase lag in the model's tropospheric seasonality driven by transport errors. Within the context of GEOS-Chem, we find that the errors in tropospheric advection partially compensate for the stratospheric methane errors, masking inconsistencies between the modeled and measured tropospheric methane. These seasonally varying errors alias into source attributions resulting from model inversions. In particular, we suggest that the tropospheric phase lag error leads to large misdiagnoses of wetland emissions in the high latitudes of the Northern Hemisphere.
Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K
2016-11-25
Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx, or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
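The attenuation mechanism studied here can be reproduced in a few lines. The sketch below simulates a main pollutant with RR = 1.05 and a null copollutant, adds correlated classical measurement error, and fits a Poisson regression; the error variances and correlations are illustrative assumptions, not the study's empirically derived values.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days = 1000

# Illustrative "true" exposures: main pollutant and a correlated copollutant
true = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n_days)

# True health model: RR = 1.05 per unit of the main pollutant, null copollutant
counts = rng.poisson(np.exp(3.0 + np.log(1.05) * true[:, 0]))

# Observed exposures = truth + correlated classical measurement error (assumed variances)
err = rng.multivariate_normal([0, 0], [[0.5, 0.2], [0.2, 0.5]], size=n_days)
observed = true + err

fit = sm.GLM(counts, sm.add_constant(observed), family=sm.families.Poisson()).fit()
print(np.exp(fit.params))  # main-pollutant RR is attenuated; copollutant RR can drift from 1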
Total absorption cross sections of several gases of aeronomic interest at 584 A.
NASA Technical Reports Server (NTRS)
Starr, W. L.; Loewenstein, M.
1972-01-01
Total photoabsorption cross sections have been measured at 584.3 A for N2, O2, Ar, CO2, CO, NO, N2O, NH3, CH4, H2, and H2S. A monochromator was used to isolate the He I 584 line produced in a helium resonance lamp, and thin aluminum filters were used as absorption cell windows, thereby eliminating possible errors associated with the use of undispersed radiation or windowless cells. Sources of error are examined, and limits of uncertainty are given. Previous relevant cross-sectional measurements and possible error sources are reviewed. Wall adsorption as a source of error in cross-sectional measurements has not previously been considered and is discussed briefly.
A method of treating the non-grey error in total emittance measurements
NASA Technical Reports Server (NTRS)
Heaney, J. B.; Henninger, J. H.
1971-01-01
In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared spectral reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to be dependent on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.
In-Flight Pitot-Static Calibration
NASA Technical Reports Server (NTRS)
Foster, John V. (Inventor); Cunningham, Kevin (Inventor)
2016-01-01
A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.
A simulation study to quantify the impacts of exposure ...
BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health.MethodsZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error.Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs.ResultsSubstantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3–85% for population error, and 31–85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copoll
Total ozone trend significance from space time variability of daily Dobson data
NASA Technical Reports Server (NTRS)
Wilcox, R. W.
1981-01-01
Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
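The dependence of the standard error of a time average on temporal variability and autocorrelation, which this abstract relies on, can be written explicitly. The expression below is the standard textbook result for n equally spaced observations with variance σ² and lag-k autocorrelation ρ_k, not a formula quoted from the paper:

\operatorname{Var}(\bar{x}) \;=\; \frac{\sigma^{2}}{n}\left[\,1 + 2\sum_{k=1}^{n-1}\Bigl(1-\tfrac{k}{n}\Bigr)\rho_{k}\right],
\qquad
\mathrm{SE}(\bar{x}) \;=\; \sqrt{\operatorname{Var}(\bar{x})}.

Positive autocorrelation inflates the variance of the mean relative to the independent-sample value σ²/n, which is why trend significance depends on the space-time correlation structure of the Dobson data.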
Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S
2013-06-01
Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
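The attenuation factors reported above follow the usual classical measurement error result. As a reminder of the mechanism, in standard regression calibration notation (a simplification of the paper's model, which also includes Berkson components):

Q = T + \varepsilon,\qquad
\lambda \;=\; \frac{\operatorname{Cov}(Q,T)}{\operatorname{Var}(Q)}
        \;=\; \frac{\sigma_{T}^{2}}{\sigma_{T}^{2}+\sigma_{\varepsilon}^{2}},
\qquad
E\!\left[\hat{\beta}_{Q}\right] \approx \lambda\,\beta_{T},

where Q is the questionnaire-based physical activity level, T the true level, and λ the attenuation factor (0.43-0.73 in this study); regression calibration replaces Q with E[T | Q] to undo this attenuation.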
Iampietro, Mary; Giovannetti, Tania; Drabick, Deborah A. G.; Kessler, Rachel K.
2013-01-01
Executive function (EF) deficits in schizophrenia (SZ) are well documented, although much less is known about patterns of EF deficits and their association to differential impairments in everyday functioning. The present study empirically defined SZ groups based on measures of various EF abilities and then compared these EF groups on everyday action errors. Participants (n=45) completed various subtests from the Delis–Kaplan Executive Function System (D-KEFS) and the Naturalistic Action Test (NAT), a performance-based measure of everyday action that yields scores reflecting total errors and a range of different error types (e.g., omission, perseveration). Results of a latent class analysis revealed three distinct EF groups, characterized by (a) multiple EF deficits, (b) relatively spared EF, and (c) perseverative responding. Follow-up analyses revealed that the classes differed significantly on NAT total errors, total commission errors, and total perseveration errors; the two classes with EF impairment performed comparably on the NAT but performed worse than the class with relatively spared EF. In sum, people with SZ demonstrate variable patterns of EF deficits, and distinct aspects of these EF deficit patterns (i.e., poor mental control abilities) may be associated with everyday functioning capabilities. PMID:23035705
Incorporating measurement error in n = 1 psychological autoregressive modeling.
Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
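A minimal simulation illustrates the bias described above. The parameter values are assumed for illustration; the latent process is a plain AR(1), the measurement noise variance is chosen so that roughly 40% of the total variance is error (within the 30-50% range reported), and the autoregressive parameter is estimated naively by lag-1 correlation.

import numpy as np

rng = np.random.default_rng(1)
phi, n = 0.5, 500                     # true autoregressive parameter, series length

# Latent AR(1) process
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Observed series = latent process + white measurement noise (about 40% of total variance)
noise_var = 0.4 / 0.6 * np.var(x)
y = x + rng.normal(scale=np.sqrt(noise_var), size=n)

# Naive AR(1) estimate that disregards measurement error
phi_naive = np.corrcoef(y[:-1], y[1:])[0, 1]
print(f"true phi = {phi}, naive estimate = {phi_naive:.2f}")  # noticeably underestimated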
Uses and biases of volunteer water quality data
Loperfido, J.V.; Beyer, P.; Just, C.L.; Schnoor, J.L.
2010-01-01
State water quality monitoring has been augmented by volunteer monitoring programs throughout the United States. Although a significant effort has been put forth by volunteers, questions remain as to whether volunteer data are accurate and can be used by regulators. In this study, typical volunteer water quality measurements from laboratory and environmental samples in Iowa were analyzed for error and bias. Volunteer measurements of nitrate+nitrite were significantly lower (about 2-fold) than concentrations determined via standard methods in both laboratory-prepared and environmental samples. Total reactive phosphorus concentrations analyzed by volunteers were similar to measurements determined via standard methods in laboratory-prepared samples and environmental samples, but were statistically lower than the actual concentration in four of the five laboratory-prepared samples. Volunteer water quality measurements were successful in identifying and classifying most of the waters which violate United States Environmental Protection Agency recommended water quality criteria for total nitrogen (66%) and for total phosphorus (52%), with the accuracy improving when accounting for error and biases in the volunteer data. An understanding of the error and bias in volunteer water quality measurements can allow regulators to incorporate volunteer water quality data into total maximum daily load planning or state water quality reporting. © 2010 American Chemical Society.
Re-assessing accumulated oxygen deficit in middle-distance runners.
Bickham, D; Le Rossignol, P; Gibbons, C; Russell, A P
2002-12-01
The purpose of this study was to re-assess the accumulated oxygen deficit (AOD), incorporating recent methodological improvements, i.e., 4 min submaximal tests spread above and below the lactate threshold (LT). We investigated the influence of the VO2-speed regression on the precision of the estimated total energy demand and AOD, utilising different numbers of regression points and including measurement errors. Seven trained middle-distance runners (mean +/- SD age: 25.3 +/- 5.4 y, mass: 73.7 +/- 4.3 kg, VO2max: 64.4 +/- 6.1 mL x kg(-1) x min(-1)) completed a VO2max test, an LT test, 10 x 4 min exercise tests (above and below LT) and high-intensity exhaustive tests. The VO2-speed regression was developed using 10 submaximal points and a forced y-intercept value. The average precision (measured as the width of the 95% confidence interval) for the estimated total energy demand using this regression was 7.8 mL O2 Eq x kg(-1) x min(-1). There was a two-fold decrease in precision of the estimated total energy demand with the inclusion of measurement errors from the metabolic system. The mean AOD value was 43.3 mL O2 Eq x kg(-1) (lower and upper 95% CI 32.1 and 54.5 mL O2 Eq x kg(-1), respectively). Converting the 95% CI for estimated total energy demand to AOD, or including maximum possible measurement errors, amplified the error associated with the estimated total energy demand. No significant difference in AOD variables was found using 10, 4, or 2 regression points with a forced y-intercept. For practical purposes we recommend the use of 4 submaximal values with a y-intercept. Using 95% CIs and calculating error highlighted possible error in estimating AOD. Without accurate data collection, increased variability could decrease the accuracy of the AOD, as shown by a 95% CI of the AOD.
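As a rough illustration of the AOD calculation discussed above, the sketch below follows the standard approach of extrapolating the submaximal VO2-speed regression (with a forced y-intercept) to the supramaximal test speed and subtracting the oxygen actually consumed. All numbers and the intercept value are invented for illustration and are not taken from the study.

import numpy as np

# Illustrative submaximal data (invented values)
speed = np.array([12, 13, 14, 15, 16, 17, 18, 19, 20, 21], dtype=float)  # km/h
vo2   = np.array([38, 41, 45, 48, 51, 55, 58, 61, 65, 68], dtype=float)  # mL/kg/min

# VO2-speed regression with an assumed (forced) y-intercept of 5 mL/kg/min
y_intercept = 5.0
slope = np.sum((vo2 - y_intercept) * speed) / np.sum(speed ** 2)

# Supramaximal exhaustive test: estimated demand minus accumulated O2 uptake (invented values)
test_speed, test_duration_min = 24.0, 2.5
measured_accumulated_o2 = 140.0  # mL O2 Eq/kg
estimated_demand = (y_intercept + slope * test_speed) * test_duration_min
aod = estimated_demand - measured_accumulated_o2
print(f"AOD ~ {aod:.1f} mL O2 Eq/kg")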
Quantifying precision of in situ length and weight measurements of fish
Gutreuter, S.; Krzoska, D.J.
1994-01-01
We estimated and compared errors in field-made (in situ) measurements of lengths and weights of fish. We made three measurements of length and weight on each of 33 common carp Cyprinus carpio, and on each of a total of 34 bluegills Lepomis macrochirus and black crappies Pomoxis nigromaculatus. Maximum total lengths of all fish were measured to the nearest 1 mm on a conventional measuring board. The bluegills and black crappies (85–282 mm maximum total length) were weighed to the nearest 1 g on a 1,000-g spring-loaded scale. The common carp (415–600 mm maximum total length) were weighed to the nearest 0.05 kg on a 20-kg spring-loaded scale. We present a statistical model for comparison of coefficients of variation of length (Cl) and weight (Cw). Expected Cl was near zero and constant across mean length, indicating that length can be measured with good precision in the field. Expected Cw decreased with increasing mean length, and was larger than expected Cl by 5.8 to over 100 times for the bluegills and black crappies, and by 3 to over 20 times for the common carp. Unrecognized in situ weighing errors bias the apparent content of unique information in weight, which is the information not explained by either length or measurement error. We recommend procedures to circumvent effects of weighing errors, including elimination of unnecessary weighing from routine monitoring programs. In situ weighing must be conducted with greater care than is common if the content of unique and nontrivial information in weight is to be correctly identified.
NASA Technical Reports Server (NTRS)
Hegsted, D. M.
1975-01-01
A prototype balance study was conducted on Earth prior to the balance studies conducted in Skylab itself. Daily dietary intake data for 6 minerals and nitrogen, together with fecal and urinary outputs, were collected for each of three astronauts. Essential statistical issues are examined to show what quantities need to be estimated and to establish the scope of inference associated with alternative variance estimates. Procedures are exhibited for obtaining the final estimates of variability due both to errors of measurement and to total error (total = measurement plus biological variability).
Incorporating measurement error in n = 1 psychological autoregressive modeling
Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...
2017-01-07
Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
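The way independent error sources combine into the overall plot-level uncertainty quoted above is conventionally expressed in quadrature. A generic form (not the authors' exact error model) is:

\sigma_{\text{total}}^{2} \;=\; \sigma_{\text{measurement}}^{2} + \sigma_{\text{allometric}}^{2} + \sigma_{\text{co-location}}^{2} + \sigma_{\text{temporal}}^{2},

with each term expressed as a variance of relative biomass error, so that co-location and temporal terms contributing more than 65% of the total variance dominate the budget even when the remaining terms are individually small.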
Eisele, Thomas P; Rhoda, Dale A; Cutts, Felicity T; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J D; Arnold, Fred
2013-01-01
Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
Eisele, Thomas P.; Rhoda, Dale A.; Cutts, Felicity T.; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J. D.; Arnold, Fred
2013-01-01
Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used. PMID:23667331
Evaluation of Preanalytical Quality Indicators by Six Sigma and Pareto's Principle.
Kulkarni, Sweta; Ramesh, R; Srinivasan, A R; Silvia, C R Wilma Delphine
2018-01-01
Preanalytical steps are the major sources of error in the clinical laboratory. Analytical errors can be corrected by quality control procedures, but stringent quality checks are also needed in the preanalytical area, as these processes occur outside the laboratory. The sigma value depicts the performance of a laboratory and its quality measures. Hence, in the present study, six sigma and the Pareto principle were applied to preanalytical quality indicators to evaluate clinical biochemistry laboratory performance. This observational study was carried out over a period of 1 year, from November 2015 to November 2016. A total of 1,44,208 samples and 54,265 test requisition forms were screened for preanalytical errors such as missing patient information or sample collection details on forms, and hemolysed, lipemic, inappropriate, or insufficient samples; the total number of errors was calculated and converted into defects per million and the sigma scale. A Pareto chart was drawn using the total number of errors and cumulative percentages. In 75% of test requisition forms the diagnosis was not mentioned, giving a sigma value of 0.9; for other errors such as sample receiving time, stat requests, and type of sample, the sigma values were 2.9, 2.6, and 2.8, respectively. For insufficient sample and improper ratio of blood to anticoagulant the sigma value was 4.3. The Pareto chart depicts that 80% of the errors in requisition forms are contributed by 20% of the causes, such as missing information like the diagnosis. The development of quality indicators and the application of six sigma and the Pareto principle are quality measures by which not only the preanalytical phase but the total testing process can be improved.
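A small sketch of the conversion implied above, from an error count to defects per million opportunities (DPMO) and a short-term sigma level. The customary 1.5-sigma shift is an assumption of the six sigma convention, not a value taken from the study, and the example figures are taken from the abstract only for illustration.

from scipy.stats import norm

def sigma_level(defects, opportunities, shift=1.5):
    # Convert an error count into DPMO and a short-term sigma level.
    dpmo = defects / opportunities * 1_000_000
    return dpmo, norm.ppf(1 - dpmo / 1_000_000) + shift

# Example: diagnosis missing on 75% of 54,265 requisition forms
dpmo, sigma = sigma_level(0.75 * 54_265, 54_265)
print(f"DPMO = {dpmo:.0f}, sigma level = {sigma:.1f}")  # roughly 0.9, as reported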
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-01-01
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-08-06
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%.
Reliability of a Longitudinal Sequence of Scale Ratings
ERIC Educational Resources Information Center
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony
2009-01-01
Reliability captures the influence of error on a measurement and, in the classical setting, is defined as one minus the ratio of the error variance to the total variance. Laenen, Alonso, and Molenberghs ("Psychometrika" 73:443-448, 2007) proposed an axiomatic definition of reliability and introduced the R_T coefficient, a measure of…
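The classical definition cited in this abstract can be written compactly in standard classical test theory notation (a textbook restatement, not the R_T coefficient itself):

\rho \;=\; 1 - \frac{\sigma^{2}_{\text{error}}}{\sigma^{2}_{\text{total}}}
     \;=\; \frac{\sigma^{2}_{\text{true}}}{\sigma^{2}_{\text{true}} + \sigma^{2}_{\text{error}}}.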
Evaluation of a new photomask CD metrology tool
NASA Astrophysics Data System (ADS)
Dubuque, Leonard F.; Doe, Nicholas G.; St. Cin, Patrick
1996-12-01
In the integrated circuit (IC) photomask industry today, dense IC patterns, sub-micron critical dimensions (CD), and narrow tolerances for 64 M technologies and beyond are driving increased demands to minimize and characterize all components of photomask CD variation. This places strict requirements on photomask CD metrology in order to accurately characterize the mask CD error distribution. According to the gauge-maker's rule, measurement error must not exceed 30% of the tolerance on the product dimension measured, or the gauge is not considered capable. Traditional single-point repeatability tests are a poor measure of overall measurement system error in a dynamic, leading-edge technology environment. In such an environment, measurements may be taken at different points in the field-of-view due to stage inaccuracy, pattern recognition requirements, and throughput considerations. With this in mind, a set of experiments was designed to characterize thoroughly the metrology tool's repeatability and systematic error. The original experiments provided inconclusive results and had to be extended to obtain a full characterization of the system. Tests demonstrated a performance of better than 15 nm total CD error. Using this test as a tool for further development, the authors were able to determine the effects of various system components and measure the improvement with changes in optics, electronics, and software. Optimization of the optical path, electronics, and system software has yielded a new instrument with a total system error of better than 8 nm. Good collaboration between the photomask manufacturer and the equipment supplier has led to a realistic test of system performance and an improved CD measurement instrument.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly
Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ming; Cygler,
The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patient's breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, for the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.
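A minimal sketch of the kind of logfile post-processing described above. The data file and column layout are hypothetical; the computation is simply the radial combination of per-axis compensation errors and their 99th percentile.

import numpy as np

# Hypothetical per-axis total compensation errors (mm) extracted from logfiles,
# one row per x-ray pair: left/right, anterior/posterior, superior/inferior
errors = np.loadtxt("synchrony_errors.csv", delimiter=",")  # hypothetical file

radial = np.sqrt(np.sum(errors ** 2, axis=1))  # total radial error per x-ray pair
print("mean radial error (mm):", radial.mean())
print("99th percentile radial error (mm):", np.percentile(radial, 99))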
Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca
NASA Astrophysics Data System (ADS)
Matteo, N. A.; Morton, Y. T.
2010-12-01
The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.
Determining relative error bounds for the CVBEM
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance. © 1985.
High accuracy diffuse horizontal irradiance measurements without a shadowband
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlemmer, J.A; Michalsky, J.J.
1995-12-31
The standard method for measuring diffuse horizontal irradiance uses a fixed shadowband to block direct solar radiation. This method requires a correction for the excess skylight blocked by the band, and this correction varies with sky conditions. Alternately, diffuse horizontal irradiance may be calculated from total horizontal and direct normal irradiance. This method is in error because of the angular (cosine) response of the total horizontal pyranometer to direct beam irradiance. This paper describes an improved calculation of diffuse horizontal irradiance from total horizontal and direct normal irradiance using a predetermination of the angular response of the total horizontal pyranometer. We compare these diffuse horizontal irradiance calculations with measurements made with a shading-disk pyranometer that shields direct irradiance using a tracking disk. Results indicate significant improvement in most cases. Remaining disagreement most likely arises from undetected tracking errors and instrument leveling.
High accuracy diffuse horizontal irradiance measurements without a shadowband
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlemmer, J.A.; Michalsky, J.J.
1995-10-01
The standard method for measuring diffuse horizontal irradiance uses a fixed shadowband to block direct solar radiation. This method requires a correction for the excess skylight blocked by the band, and this correction varies with sky conditions. Alternately, diffuse horizontal irradiance may be calculated from the total horizontal and direct normal irradiance. This method is in error because of the angular (often referred to as cosine) response of the total horizontal pyranometer to direct beam irradiance. This paper describes an improved calculation of diffuse horizontal irradiance from total horizontal and direct normal irradiance using a predetermination of the angular response of the total horizontal pyranometer. The authors compare these diffuse horizontal irradiance calculations with measurements made with a shading-disk pyranometer that shields direct irradiance using a tracking disk. The results indicate significant improvement in most cases. The remaining disagreement most likely arises from undetected tracking errors and instrument leveling.
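A minimal sketch of the calculation both of these records describe. This is a hedged illustration: the cosine-response correction function would come from the predetermined laboratory characterization of the pyranometer, which is not given here, so a perfect detector is used in the example.

import numpy as np

def diffuse_horizontal(ghi, dni, sza_deg, cosine_response):
    # Diffuse horizontal irradiance from total-horizontal (GHI) and direct-normal
    # (DNI) measurements, correcting the direct-beam part of the GHI reading for
    # the pyranometer's angular (cosine) response.
    #
    # cosine_response(sza_deg) -> ratio of actual to ideal response at that
    # incidence angle (1.0 for a perfect cosine detector); assumed to come from
    # a prior instrument characterization.
    mu = np.cos(np.radians(sza_deg))
    return ghi - dni * mu * cosine_response(sza_deg)

# Example with an ideal detector (no angular error)
print(diffuse_horizontal(600.0, 800.0, 60.0, lambda sza: 1.0))  # 200.0 W/m^2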
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
Flouri, Eirini; Panourgia, Constantina
2011-06-01
The aim of this study was to test for gender differences in how negative cognitive errors (overgeneralizing, catastrophizing, selective abstraction, and personalizing) mediate the association between adverse life events and adolescents' emotional and behavioural problems (measured with the Strengths and Difficulties Questionnaire). The sample consisted of 202 boys and 227 girls (aged 11-15 years) from three state secondary schools in disadvantaged areas in one county in the South East of England. Control variables were age, ethnicity, special educational needs, exclusion history, family structure, family socio-economic disadvantage, and verbal cognitive ability. Adverse life events were measured with Tiet et al.'s (1998) Adverse Life Events Scale. For both genders, we assumed a pathway from adverse life events to emotional and behavioural problems via cognitive errors. We found no gender differences in life adversity, cognitive errors, total difficulties, peer problems, or hyperactivity. In both boys and girls, even after adjustment for controls, cognitive errors were related to total difficulties and emotional symptoms, and life adversity was related to total difficulties and conduct problems. The life adversity/conduct problems association was not explained by negative cognitive errors in either gender. However, we found gender differences in how adversity and cognitive errors produced hyperactivity and internalizing problems. In particular, life adversity was not related, after adjustment for controls, to hyperactivity in girls and to peer problems and emotional symptoms in boys. Cognitive errors fully mediated the effect of life adversity on hyperactivity in boys and on peer and emotional problems in girls.
A novel color vision test for detection of diabetic macular edema.
Shin, Young Joo; Park, Kyu Hyung; Hwang, Jeong-Min; Wee, Won Ryang; Lee, Jin Hak; Lee, In Bum; Hyon, Joon Young
2014-01-02
To determine the sensitivity of the Seoul National University (SNU) computerized color vision test for detecting diabetic macular edema. From May to September 2003, a total of 73 eyes of 73 patients with diabetes mellitus were examined using the SNU computerized color vision test and optical coherence tomography (OCT). Color deficiency was quantified as the total error score on the SNU test and as error scores for each of four color quadrants corresponding to yellows (Q1), greens (Q2), blues (Q3), and reds (Q4). SNU error scores were assessed as a function of OCT foveal thickness and total macular volume (TMV). The error scores in Q1, Q2, Q3, and Q4 measured by the SNU color vision test increased with foveal thickness (P < 0.05), whereas they were not correlated with TMV. Total error scores, the summation of Q1 and Q3, the summation of Q2 and Q4, and blue-yellow (B-Y) error scores were significantly correlated with foveal thickness (P < 0.05), but not with TMV. The observed correlation between SNU color test error scores and foveal thickness indicates that the SNU test may be useful for detection and monitoring of diabetic macular edema.
NASA Astrophysics Data System (ADS)
Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann
2016-05-01
The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) Data Products User's Guide
NASA Technical Reports Server (NTRS)
McPeters, Richard D.; Bhartia, P. K.; Krueger, Arlin J.; Herman, Jay R.; Schlesinger, Barry M.; Wellemeyer, Charles G.; Seftor, Colin J.; Jaross, Glen; Taylor, Steven L.; Swissler, Tom;
1996-01-01
Two data products from the Total Ozone Mapping Spectrometer (TOMS) onboard Nimbus-7 have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format files. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio is used in ozone retrievals. Changes in the instrument sensitivity are monitored by a spectral discrimination technique using measurements of the intrinsically stable wavelength dependence of derived surface reflectivity. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard deviation random error is 2 percent, and drift is less than 1.0 percent per decade. The Level-2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level-3 product contains daily total ozone amount and reflectivity in a 1-degree latitude by 1.25-degree longitude grid. The Level-3 product also is available on CD-ROM. Detailed descriptions of both HDF data files and the CD-ROM product are provided.
Interpreting SBUV Smoothing Errors: an Example Using the Quasi-biennial Oscillation
NASA Technical Reports Server (NTRS)
Kramarova, N. A.; Bhartia, Pawan K.; Frith, S. M.; McPeters, R. D.; Stolarski, R. S.
2013-01-01
The Solar Backscattered Ultraviolet (SBUV) observing system consists of a series of instruments that have been measuring both total ozone and the ozone profile since 1970. SBUV measures the profile in the upper stratosphere with a resolution that is adequate to resolve most of the important features of that region. In the lower stratosphere the limited vertical resolution of the SBUV system means that there are components of the profile variability that SBUV cannot measure. The smoothing error, as defined in the optimal estimation retrieval method, describes the components of the profile variability that the SBUV observing system cannot measure. In this paper we provide a simple visual interpretation of the SBUV smoothing error by comparing SBUV ozone anomalies in the lower tropical stratosphere associated with the quasi-biennial oscillation (QBO) to anomalies obtained from the Aura Microwave Limb Sounder (MLS). We describe a methodology for estimating the SBUV smoothing error for monthly zonal mean (mzm) profiles. We construct covariance matrices that describe the statistics of the inter-annual ozone variability using a 6 yr record of Aura MLS and ozonesonde data. We find that the smoothing error is of the order of 1 percent between 10 and 1 hPa, increasing up to 15-20 percent in the troposphere and up to 5 percent in the mesosphere. The smoothing error for total ozone columns is small, mostly less than 0.5 percent. We demonstrate that by merging the partial ozone columns from several layers in the lower stratosphere/troposphere into one thick layer, we can minimize the smoothing error. We recommend using the following layer combinations to reduce the smoothing error to about 1 percent: surface to 25 hPa (16 hPa) outside (inside) of the narrow equatorial zone 20°S-20°N.
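For readers unfamiliar with the optimal estimation definition used above, the sketch below computes a smoothing-error covariance from an averaging-kernel matrix A and a covariance S_a describing the true (here inter-annual) variability, via the standard Rodgers relation S_s = (A - I) S_a (A - I)^T. The 3-layer matrices are invented for illustration and are not SBUV quantities.

```python
import numpy as np

def smoothing_error_covariance(A, S_a):
    """Smoothing-error covariance in the optimal-estimation sense:
    S_s = (A - I) S_a (A - I)^T, where A is the retrieval's averaging-kernel
    matrix and S_a describes the true variability (e.g. built from Aura MLS
    and ozonesonde anomalies)."""
    I = np.eye(A.shape[0])
    return (A - I) @ S_a @ (A - I).T

# Hypothetical 3-layer illustration
A = np.array([[0.8, 0.2, 0.0],
              [0.3, 0.5, 0.2],
              [0.0, 0.3, 0.6]])
S_a = np.diag([4.0, 9.0, 25.0])        # percent^2 variability per layer
S_s = smoothing_error_covariance(A, S_a)
print(np.sqrt(np.diag(S_s)))           # 1-sigma smoothing error per layer
```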
Useful measures and models for analytical quality management in medical laboratories.
Westgard, James O
2016-02-01
The 2014 Milan Conference "Defining analytical performance goals 15 years after the Stockholm Conference" initiated a new discussion of issues concerning goals for precision, trueness or bias, total analytical error (TAE), and measurement uncertainty (MU). Goal-setting models are critical for analytical quality management, along with error models, quality-assessment models, quality-planning models, as well as comprehensive models for quality management systems. There are also critical underlying issues, such as an emphasis on MU to the possible exclusion of TAE and a corresponding preference for separate precision and bias goals instead of a combined total error goal. This opinion recommends careful consideration of the differences in the concepts of accuracy and traceability and the appropriateness of different measures, particularly TAE as a measure of accuracy and MU as a measure of traceability. TAE is essential to manage quality within a medical laboratory and MU and trueness are essential to achieve comparability of results across laboratories. With this perspective, laboratory scientists can better understand the many measures and models needed for analytical quality management and assess their usefulness for practical applications in medical laboratories.
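As a concrete example of the combined total-error measure discussed above, one common formulation adds the absolute bias to a multiple of the imprecision. The sketch below uses the frequently quoted z = 1.65 multiplier, though other conventions exist, and the example bias and CV values are hypothetical.

```python
def total_analytical_error(bias_pct, cv_pct, z=1.65):
    """Conventional total analytical error estimate, TAE = |bias| + z*CV,
    with z = 1.65 covering roughly 95% of results on one side (a common
    choice; other multipliers are also used in practice)."""
    return abs(bias_pct) + z * cv_pct

# Hypothetical assay: 1.0% bias, 2.0% imprecision
print(total_analytical_error(1.0, 2.0))   # 4.3% estimated TAE
```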
Error in Dasibi flight measurements of atmospheric ozone due to instrument wall-loss
NASA Technical Reports Server (NTRS)
Ainsworth, J. E.; Hagemeyer, J. R.; Reed, E. I.
1981-01-01
Theory suggests that in laminar flow the percent loss of a trace constituent to the walls of a measuring instrument varies as P to the -2/3, where P is the total gas pressure. Preliminary laboratory ozone wall-loss measurements confirm this P to the -2/3 dependence. Accurate assessment of wall-loss is thus of particular importance for those balloon-borne instruments utilizing laminar flow at ambient pressure, since the ambient pressure decreases by a factor of 350 during ascent to 40 km. Measurements and extrapolations made for a Dasibi ozone monitor modified for balloon flight indicate that the wall-loss error at 40 km was between 6 and 30 percent and that the wall-loss error in the derived total ozone column-content for the region from the surface to 40 km altitude was between 2 and 10 percent. At 1000 mb, turbulence caused an order of magnitude increase in the Dasibi wall-loss.
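A minimal sketch of the pressure scaling described above: wall loss varying as P to the -2/3, anchored to a hypothetical loss value at 1000 hPa. The anchor value and function name are illustrative only and are not the Dasibi calibration.

```python
def wall_loss_percent(p_hpa, loss_at_1000hpa_pct=0.3):
    """Laminar-flow wall loss scaling as P**(-2/3); the 0.3% anchor at
    1000 hPa is a hypothetical value chosen only to illustrate the scaling."""
    return loss_at_1000hpa_pct * (p_hpa / 1000.0) ** (-2.0 / 3.0)

# Ambient pressure falls by roughly a factor of 350 between the surface and
# 40 km, so the relative wall loss grows by about 350**(2/3), i.e. ~50x.
print(wall_loss_percent(1000.0), wall_loss_percent(1000.0 / 350.0))
```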
NASA Technical Reports Server (NTRS)
Tsaoussi, Lucia S.; Koblinsky, Chester J.
1994-01-01
In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, a methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables, the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
Takada, Koki; Takahashi, Kana; Hirao, Kazuki
2018-01-17
Although the self-report version of the Liebowitz Social Anxiety Scale (LSAS) is frequently used to measure social anxiety, data are lacking on the smallest detectable change (SDC), an important index of measurement error. We therefore aimed to determine the SDC of the LSAS. Japanese adults aged 20-69 years were invited from a panel managed by a nationwide internet research agency. We then conducted a test-retest internet survey with a two-week interval to estimate the SDC at the individual (SDC_ind) and group (SDC_group) levels. The analysis included 1300 participants. The SDC_ind and SDC_group for the total fear subscale (scoring range: 0-72) were 23.52 points (32.7%) and 0.65 points (0.9%), respectively. The SDC_ind and SDC_group for the total avoidance subscale (scoring range: 0-72) were 32.43 points (45.0%) and 0.90 points (1.2%), respectively. The SDC_ind and SDC_group for the overall total score (scoring range: 0-144) were 45.90 points (31.9%) and 1.27 points (0.9%), respectively. The measurement error is large and indicates the potential for major problems when attempting to use the LSAS to detect changes at the individual level. These results should be considered when using the LSAS as a measure of treatment change.
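The SDC values reported above follow the usual relation between the standard error of measurement (SEM) and the smallest detectable change, SDC_ind = 1.96 * sqrt(2) * SEM and SDC_group = SDC_ind / sqrt(n). The sketch below back-calculates an assumed SEM from the reported fear-subscale figure to show that the individual- and group-level values are numerically consistent; the SEM itself is inferred here, not taken from the paper.

```python
import math

def sdc(sem, n=None, ci_z=1.96):
    """Smallest detectable change from the standard error of measurement:
    SDC_ind = z * sqrt(2) * SEM; the group-level value divides by sqrt(n)."""
    sdc_ind = ci_z * math.sqrt(2) * sem
    return sdc_ind if n is None else sdc_ind / math.sqrt(n)

# Working backwards from the reported fear-subscale SDC_ind of 23.52 points
sem_fear = 23.52 / (1.96 * math.sqrt(2))       # assumed SEM of about 8.5 points
print(round(sdc(sem_fear), 2),                 # 23.52 (individual level)
      round(sdc(sem_fear, n=1300), 2))         # 0.65  (group level)
```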
Development of a scale of executive functioning for the RBANS.
Spencer, Robert J; Kitchen Andren, Katherine A; Tolle, Kathryn A
2018-01-01
The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) is a cognitive battery that contains scales of several cognitive abilities, but no scale in the instrument is exclusively dedicated to executive functioning. Although the subtests allow for observation of executive-type errors, each error is of fairly low base rate, and healthy and clinical normative data are lacking on the frequency of these types of errors, making their significance difficult to interpret in isolation. The aim of this project was to create an RBANS executive errors scale (RBANS EE) with items comprised of qualitatively dysexecutive errors committed throughout the test. Participants included Veterans referred for outpatient neuropsychological testing. Items were initially selected based on theoretical literature and were retained based on item-total correlations. The RBANS EE (a percentage calculated by dividing the number of dysexecutive errors by the total number of responses) was moderately related to each of seven established measures of executive functioning and was strongly predictive of dichotomous classification of executive impairment. Thus, the scale had solid concurrent validity, justifying its use as a supplementary scale. The RBANS EE requires no additional administration time and can provide a quantified measure of otherwise unmeasured aspects of executive functioning.
Reliability of Total Test Scores When Considered as Ordinal Measurements
ERIC Educational Resources Information Center
Biswas, Ajoy Kumar
2006-01-01
This article studies the ordinal reliability of (total) test scores. This study is based on a classical-type linear model of observed score (X), true score (T), and random error (E). Based on the idea of Kendall's tau-a coefficient, a measure of ordinal reliability for small-examinee populations is developed. This measure is extended to large…
Liu, Xingguo; Niu, Jianwei; Ran, Linghua; Liu, Taijie
2017-08-01
This study aimed to develop estimation formulae for the total human body volume (BV) of adult males using anthropometric measurements based on a three-dimensional (3D) scanning technique. Noninvasive and reliable methods to predict the total BV from anthropometric measurements based on a 3D scan technique were addressed in detail. A regression analysis of BV based on four key measurements was conducted for approximately 160 adult male subjects. Eight total models of human BV show that the predicted results fitted by the regression models were highly correlated with the actual BV (p < 0.001). Two metrics, the mean value of the absolute difference between the actual and predicted BV (V_error) and the mean value of the ratio between V_error and actual BV (RV_error), were calculated. The linear model based on human weight was recommended as the most optimal due to its simplicity and high efficiency. The proposed estimation formulae are valuable for estimating total body volume in circumstances in which traditional underwater weighing or air displacement plethysmography is not applicable or accessible.
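A minimal sketch of the kind of single-predictor model recommended above (BV as a linear function of body weight), fitted by ordinary least squares and evaluated with the V_error and RV_error metrics defined in the abstract. All weight and volume values below are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical weight (kg) and body-volume (L) pairs for illustration only
weight_kg = np.array([55.0, 62.0, 70.0, 78.0, 85.0, 95.0])
bv_litre = np.array([52.1, 58.8, 66.9, 74.5, 81.7, 91.6])

# Ordinary least squares fit of BV = a + b * weight
b, a = np.polyfit(weight_kg, bv_litre, 1)
predicted = a + b * weight_kg

v_error = np.mean(np.abs(predicted - bv_litre))               # mean absolute difference
rv_error = np.mean(np.abs(predicted - bv_litre) / bv_litre)   # ratio to actual BV
print(a, b, v_error, rv_error)
```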
Hampp, Emily L; Chughtai, Morad; Scholl, Laura Y; Sodhi, Nipun; Bhowmik-Stoker, Manoshi; Jacofsky, David J; Mont, Michael A
2018-05-01
This study determined if robotic-arm assisted total knee arthroplasty (RATKA) allows for more accurate and precise bone cuts and component position to plan compared with manual total knee arthroplasty (MTKA). Specifically, we assessed the following: (1) final bone cuts, (2) final component position, and (3) a potential learning curve for RATKA. On six cadaver specimens (12 knees), a MTKA and RATKA were performed on the left and right knees, respectively. Bone-cut and final-component positioning errors relative to preoperative plans were compared. Median errors and standard deviations (SDs) in the sagittal, coronal, and axial planes were compared. Median values of the absolute deviation from plan defined the accuracy to plan. SDs described the precision to plan. RATKA bone cuts were as or more accurate to plan based on nominal median values in 11 out of 12 measurements. RATKA bone cuts were more precise to plan in 8 out of 12 measurements (p ≤ 0.05). RATKA final component positions were as or more accurate to plan based on median values in five out of five measurements. RATKA final component positions were more precise to plan in four out of five measurements (p ≤ 0.05). Stacked error results from all cuts and implant positions for each specimen in procedural order showed that RATKA error was less than MTKA error. Although this study analyzed a small number of cadaver specimens, there were clear differences that separated these two groups. When compared with MTKA, RATKA demonstrated more accurate and precise bone cuts and implant positioning to plan.
A 1400-MHz survey of 1478 Abell clusters of galaxies
NASA Technical Reports Server (NTRS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1982-01-01
Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.
Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea
2018-01-01
Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those that are becoming increasingly common in the 'era of big data'.
Short-term Variability of Extinction by Broadband Stellar Photometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musat, I.C.; Ellingson, R.G.
2005-03-18
Aerosol optical depth variation over short-term time intervals is determined from broadband observations of stars with a whole sky imager. The main difficulty in such measurements consists of accurately separating the star flux value from the non-stellar diffuse skylight. Using a correction method to overcome this difficulty, the monochromatic extinction at the ground due to aerosols is extracted from heterochromatic measurements. A form of closure is achieved by comparison with simultaneous or temporally close measurements with other instruments, and the total error of the method, as a combination of random error of measurements and systematic error of calibration and model, is assessed as being between 2.6 and 3% rms.
Highlights of TOMS Version 9 Total Ozone Algorithm
NASA Technical Reports Server (NTRS)
Bhartia, Pawan; Haffner, David
2012-01-01
The fundamental basis of the TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at a few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithms have been made. They include: better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high spectral resolution measurements at a large number of wavelengths. One of the deficiencies of the TOMS algorithm has been that it doesn't provide an error estimate. This is a particular problem at high latitudes, where the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimum estimation method using a priori ozone profiles and their error covariance matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements. One can use this information in data assimilation and analysis. A side benefit of this algorithm is that it is considerably simpler than the present algorithm that uses a database of 1512 profiles to retrieve total ozone. These profiles are tedious to construct and modify. Though conceptually similar to the SBUV V8 algorithm that was developed about a decade ago, the SBUV and TOMS V9 algorithms differ in detail. The TOMS algorithm uses 3 wavelengths to retrieve the profile while the SBUV algorithm uses 6-9 wavelengths, so TOMS provides less profile information. However, both algorithms have comparable total ozone information, and TOMS V9 can be easily adapted to use additional wavelengths from instruments like GOME, OMI and OMPS to provide better profile information at smaller SZAs. The other significant difference between the two algorithms is that while the SBUV algorithm has been optimized for deriving monthly zonal means by making an appropriate choice of the a priori error covariance matrix, the TOMS algorithm has been optimized for tracking short-term variability using month- and latitude-dependent covariance matrices.
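A generic sketch of the linear optimum-estimation step named above (the Rodgers method), returning the retrieved profile, its error covariance, and the averaging-kernel matrix whose trace gives the "pieces of information". The matrices, dimensions, and numbers below are placeholders for illustration, not the TOMS V9 implementation.

```python
import numpy as np

def oe_retrieval(y, K, x_a, S_a, S_y):
    """Linear optimal-estimation (MAP) retrieval: returns the retrieved state,
    its error covariance, and the averaging-kernel matrix A (trace(A) is the
    degrees of freedom for signal)."""
    S_a_inv = np.linalg.inv(S_a)
    S_y_inv = np.linalg.inv(S_y)
    S_hat = np.linalg.inv(K.T @ S_y_inv @ K + S_a_inv)    # retrieval covariance
    x_hat = x_a + S_hat @ K.T @ S_y_inv @ (y - K @ x_a)   # MAP estimate
    A = S_hat @ K.T @ S_y_inv @ K                         # averaging kernels
    return x_hat, S_hat, A

# Toy problem: 3 wavelengths constraining a 4-layer ozone profile (all values invented)
K = np.array([[1.0, 0.8, 0.3, 0.1],
              [0.2, 1.0, 0.7, 0.2],
              [0.1, 0.3, 1.0, 0.6]])
x_a = np.array([10.0, 80.0, 120.0, 60.0])            # a priori layer ozone (DU)
S_a = np.diag([4.0, 50.0, 80.0, 30.0])
S_y = np.eye(3)
y = K @ np.array([12.0, 85.0, 115.0, 62.0])          # noise-free synthetic radiances
x_hat, S_hat, A = oe_retrieval(y, K, x_a, S_a, S_y)
print(x_hat.sum(), np.trace(A))                      # total column and DOF of signal
```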
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang
2016-10-14
First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, the dynamic experiments of two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5%, the total flowrate is 5-60 m³/d, and the water-cut is higher than 60%. The maximum absolute value of the full-scale errors is better than 7%, the total flowrate is 2-60 m³/d, and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.
Introduction to total- and partial-pressure measurements in vacuum systems
NASA Technical Reports Server (NTRS)
Outlaw, R. A.; Kern, F. A.
1989-01-01
An introduction to the fundamentals of total and partial pressure measurement in the vacuum regime (760 to 10 to the -16th power Torr) is presented. The instruments most often used in scientific fields requiring vacuum measurement are discussed, with special emphasis on ionization-type gauges and quadrupole mass spectrometers. Some attention is also given to potential errors in measurement as well as calibration techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Khawaja, M. Sami; Rushton, Josh
Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.
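As a hedged illustration of the sampling-error sizing that such evaluations typically begin with, the sketch below implements the common first-cut sample-size formula n0 = (z * cv / rp)^2 with a finite-population correction. The 90%/10% precision criterion and the 0.5 coefficient of variation are conventional defaults in this field, not values taken from the chapter.

```python
import math

def sample_size(population, cv=0.5, relative_precision=0.10, z=1.645):
    """First-cut sample size for estimating mean savings to +/- rp relative
    precision at the confidence implied by z, with finite-population
    correction: n0 = (z*cv/rp)**2, n = n0 / (1 + n0/N)."""
    n0 = (z * cv / relative_precision) ** 2
    return math.ceil(n0 / (1 + n0 / population))

# Hypothetical program with 5000 participants, evaluated to 90/10 precision
print(sample_size(5000))   # roughly 67 sampled sites
```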
Reference-free error estimation for multiple measurement methods.
Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga
2018-01-01
We present a computational framework to select the most accurate and precise method of measurement of a certain quantity when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in the true values of the measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in good agreement with the corresponding least squares regression estimates against a reference.
NASA Technical Reports Server (NTRS)
Abdelwahab, Mahmood; Biesiadny, Thomas J.; Silver, Dean
1987-01-01
An uncertainty analysis was conducted to determine the bias and precision errors and total uncertainty of measured turbojet engine performance parameters. The engine tests were conducted as part of the Uniform Engine Test Program which was sponsored by the Advisory Group for Aerospace Research and Development (AGARD). With the same engines, support hardware, and instrumentation, performance parameters were measured twice, once during tests conducted in test cell number 3 and again during tests conducted in test cell number 4 of the NASA Lewis Propulsion Systems Laboratory. The analysis covers 15 engine parameters, including engine inlet airflow, engine net thrust, and engine specific fuel consumption measured at a high rotor speed of 8875 rpm. Measurements were taken at three flight conditions defined by the following engine inlet pressure, engine inlet total temperature, and engine ram ratio: (1) 82.7 kPa, 288 K, 1.0, (2) 82.7 kPa, 288 K, 1.3, and (3) 20.7 kPa, 288 K, 1.3. In terms of bias, precision, and uncertainty magnitudes, there were no differences between most measurements made in test cells number 3 and 4. The magnitude of the errors increased for both test cells as engine pressure level decreased. Also, the level of the bias error was two to three times larger than that of the precision error.
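The bias and precision figures discussed above are conventionally combined into a total uncertainty. The sketch below shows the two combinations common in the AGARD-era literature (root-sum-square and additive); the t95 multiplier and the example numbers are assumptions for illustration, not values from the test program.

```python
import math

def total_uncertainty(bias, precision, t95=2.0, rss=True):
    """Combine a bias limit B and a precision index S into a total
    uncertainty, either U_RSS = sqrt(B**2 + (t95*S)**2) or the additive
    U_ADD = B + t95*S (both conventions appear in this literature)."""
    if rss:
        return math.sqrt(bias**2 + (t95 * precision)**2)
    return bias + t95 * precision

# Hypothetical thrust measurement: 0.6% bias limit, 0.2% precision index
print(total_uncertainty(0.6, 0.2), total_uncertainty(0.6, 0.2, rss=False))
```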
Optical surface pressure measurements: Accuracy and application field evaluation
NASA Astrophysics Data System (ADS)
Bukov, A.; Mosharov, V.; Orlov, A.; Pesetsky, V.; Radchenko, V.; Phonov, S.; Matyash, S.; Kuzmin, M.; Sadovskii, N.
1994-07-01
Optical pressure measurement (OPM) is a new pressure measurement method being rapidly developed in several aerodynamic research centers: TsAGI (Russia), Boeing, NASA, McDonnell Douglas (all USA), and DLR (Germany). The current level of the OPM method allows its use as a standard experimental technique for aerodynamic investigations within certain application fields. The applications of the OPM method are determined mainly by its accuracy. The accuracy of the OPM method is determined by errors of the three following groups: (1) errors of the luminescent pressure sensor (LPS) itself, such as uncompensated temperature influence, photodegradation, temperature and pressure hysteresis, variation of the LPS parameters from point to point on the model surface, etc.; (2) errors of the measurement system, such as noise of the photodetector, nonlinearity and nonuniformity of the photodetector, time and temperature offsets, etc.; and (3) methodological errors, owing to displacement and deformation of the model in the airflow, contamination of the model surface, scattering of the excitation and luminescent light from the model surface and test section walls, etc. The OPM method achieves a total error in measured pressure of no less than 1 percent. This accuracy is sufficient to visualize the pressure field, to determine total and distributed aerodynamic loads, and to address some problems of local aerodynamic investigation at transonic and supersonic velocities. OPM is less effective at low subsonic velocities (M less than 0.4) and for precise measurements such as airfoil optimization. Current limitations of the OPM method are discussed using the example of surface pressure measurements and calculations of the integral loads on the wings of a canard-aircraft model. The pressure measurement system and data reduction methods used in these tests are also described.
Standardising analysis of carbon monoxide rebreathing for application in anti-doping.
Alexander, Anthony C; Garvican, Laura A; Burge, Caroline M; Clark, Sally A; Plowman, James S; Gore, Christopher J
2011-03-01
Determination of total haemoglobin mass (Hbmass) via carbon monoxide (CO) depends critically on repeatable measurement of percent carboxyhaemoglobin (%HbCO) in blood with a hemoximeter. The main aim of this study was to determine, for an OSM3 hemoximeter, the number of replicate measures as well as the theoretical change in percent carboxyhaemoglobin required to yield a random error of analysis (Analyser Error) of ≤1%. Before and after inhalation of CO, nine participants provided a total of 576 blood samples that were each analysed five times for percent carboxyhaemoglobin on one of three OSM3 hemoximeters, with approximately one-third of blood samples analysed on each OSM3. The Analyser Error was calculated for the first two (duplicate), first three (triplicate) and first four (quadruplicate) measures on each OSM3, as well as for all five measures (quintuplicates). Two methods of CO rebreathing, a 2-min and a 10-min procedure, were evaluated for Analyser Error. For duplicate analyses of blood, the Analyser Error for the 2-min method was 3.7, 4.0 and 5.0% for the three OSM3s when the percent carboxyhaemoglobin increased by two above resting values. With quintuplicate analyses of blood, the corresponding errors were reduced to 0.8, 0.9 and 1.0% for the 2-min method when the percent carboxyhaemoglobin increased by 5.5 above resting values. In summary, to minimise the Analyser Error to approximately ≤1% on an OSM3 hemoximeter, researchers should make ≥5 replicate measurements of percent carboxyhaemoglobin, and the volume of CO administered should be sufficient to increase percent carboxyhaemoglobin by ≥5.5 above baseline levels.
Deurenberg, P; Andreoli, A; de Lorenzo, A
1996-01-01
Total body water and extracellular water were measured by deuterium oxide and bromide dilution respectively in 23 healthy males and 25 healthy females. In addition, total body impedance was measured at 17 frequencies, ranging from 1 kHz to 1350 kHz. Modelling programs were used to extrapolate impedance values to frequency zero (extracellular resistance) and frequency infinity (total body water resistance). Impedance indexes (height^2/Z_f) were computed at all 17 frequencies. The estimation errors of extracellular resistance and total body water resistance were 1% and 3%, respectively. Impedance and impedance index at low frequency were correlated with extracellular water, independent of the amount of total body water. Total body water showed the greatest correlation with impedance and impedance index at high frequencies. Extrapolated impedance values did not show a higher correlation compared to measured values. Prediction formulas from the literature applied to fixed frequencies showed the best mean and individual predictions for both extracellular water and total body water. It is concluded that, at least in healthy individuals with normal body water distribution, modelling impedance data has no advantage over impedance values measured at fixed frequencies, probably due to estimation errors in the modelled data.
Height-diameter equations for thirteen midwestern bottomland hardwood species
Kenneth C. Colbert; David R. Larsen; James R. Lootens
2002-01-01
Height-diameter equations are often used to predict the mean total tree height for trees when only diameter at breast height (dbh) is measured. Measuring dbh is much easier and is subject to less measurement error than total tree height. However, predicted heights only reflect the average height for trees of a particular diameter. In this study, we present a set of...
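As an example of the kind of height-diameter equation referred to above, the sketch below fits a Chapman-Richards form, H = 1.37 + a*(1 - exp(-b*dbh))^c (with 1.37 m as breast height), to a handful of invented dbh/height pairs. The functional form is one common choice in the forestry literature and is not necessarily the one used for the thirteen species in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(dbh, a, b, c):
    """Total tree height (m) predicted from dbh (cm) using a common
    Chapman-Richards height-diameter form."""
    return 1.37 + a * (1.0 - np.exp(-b * dbh)) ** c

# Hypothetical dbh (cm) / total height (m) pairs for illustration only
dbh = np.array([10, 15, 20, 25, 30, 35, 40], dtype=float)
height = np.array([12.0, 16.5, 19.8, 22.4, 24.2, 25.5, 26.4])

params, _ = curve_fit(chapman_richards, dbh, height, p0=(30.0, 0.05, 1.2))
print(params)                             # fitted a, b, c
print(chapman_richards(22.0, *params))    # predicted mean height for dbh = 22 cm
```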
ADEOS Total Ozone Mapping Spectrometer (TOMS) Data Products User's Guide
NASA Technical Reports Server (NTRS)
Krueger, A.; Bhartia, P. K.; McPeters, R.; Herman, J.; Wellemeyer, C.; Jaross, G.; Seftor, C.; Torres, O.; Labow, G.; Byerly, W.;
1998-01-01
Two data products from the Total Ozone Mapping Spectrometer (ADEOS/TOMS) have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format files. The ADEOS/ TOMS began taking measurements on September 11, 1996, and ended on June 29, 1997. The instrument measured backscattered Earth radiance and incoming solar irradiance; their ratio was used in ozone retrievals. Changes in the reflectivity of the solar diffuser used for the irradiance measurement were monitored using a carousel of three diffusers, each exposed to the degrading effects of solar irradiation at different rates. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard deviation random error is 2 percent, and the drift is less than 0.5 percent over the 9-month data record. The Level 2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level 3 product contains daily total ozone and reflectivity in a 1-degree latitude by 1.25 degrees longitude grid. The Level 3 files containing estimates of UVB at the Earth surface and tropospheric aerosol information will also be available. Detailed descriptions of both HDF data files and the CDROM product are provided.
Earth Probe Total Ozone Mapping Spectrometer (TOMS) Data Product User's Guide
NASA Technical Reports Server (NTRS)
McPeters, R.; Bhartia, P. K.; Krueger, A.; Herman, J.; Wellemeyer, C.; Seftor, C.; Jaross, G.; Torres, O.; Moy, L.; Labow, G.;
1998-01-01
Two data products from the Earth Probe Total Ozone Mapping Spectrometer (EP/TOMS) have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format files. The EP/ TOMS began taking measurements on July 15, 1996. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio is used in ozone retrievals. Changes in the reflectivity of the solar diffuser used for the irradiance measurement are monitored using a carousel of three diffusers, each exposed to the degrading effects of solar irradiation at different rates. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard deviation random error is 2 percent, and the drift is less than 0.5 percent over the first year of data. The Level-2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level-3 product contains daily total ozone and reflectivity in a 1-degree latitude by 1.25 degrees longitude grid. Level-3 files containing estimates of LTVB at the Earth surface and tropospheric aerosol information are also available, Detailed descriptions of both HDF data-files and the CD-ROM product are provided.
Estimating Uncertainty in Long Term Total Ozone Records from Multiple Sources
NASA Technical Reports Server (NTRS)
Frith, Stacey M.; Stolarski, Richard S.; Kramarova, Natalya; McPeters, Richard D.
2014-01-01
Total ozone measurements derived from the TOMS and SBUV backscattered solar UV instrument series cover the period from late 1978 to the present. As the SBUV series of instruments comes to an end, we look to the 10 years of data from the AURA Ozone Monitoring Instrument (OMI) and two years of data from the Ozone Mapping Profiler Suite (OMPS) on board the Suomi National Polar-orbiting Partnership satellite to continue the record. When combining these records to construct a single long-term data set for analysis we must estimate the uncertainty in the record resulting from potential biases and drifts in the individual measurement records. In this study we present a Monte Carlo analysis used to estimate uncertainties in the Merged Ozone Dataset (MOD), constructed from the Version 8.6 SBUV2 series of instruments. We extend this analysis to incorporate OMI and OMPS total ozone data into the record and investigate the impact of multiple overlapping measurements on the estimated error. We also present an updated column ozone trend analysis and compare the size of statistical error (error from variability not explained by our linear regression model) to that from instrument uncertainty.
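A minimal sketch of the kind of Monte Carlo exercise described above: each of three overlapping hypothetical instrument records receives a random offset and drift, the records are merged by averaging where they overlap, and the spread of fitted trends across realizations measures the uncertainty contributed by record construction. All record lengths, error magnitudes, and the "true" ozone series below are invented; this is not the MOD processing code.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(240)                        # 20 years of monthly means
truth = 300.0 + 0.01 * months                  # hypothetical "true" column ozone (DU)

def merged_trend(bias_sigma=1.0, drift_sigma=0.5):
    """One realization: give each overlapping instrument record a random
    offset (DU) and drift (DU per decade), merge by averaging overlaps, and
    return the fitted linear trend of the merged record (DU per decade)."""
    spans = [(0, 120), (90, 200), (170, 240)]  # hypothetical instrument lifetimes
    merged_sum = np.zeros_like(truth)
    merged_n = np.zeros_like(truth)
    for lo, hi in spans:
        t = months[lo:hi]
        record = (truth[lo:hi] + rng.normal(0, bias_sigma)
                  + rng.normal(0, drift_sigma) * (t - t[0]) / 120.0)
        merged_sum[lo:hi] += record
        merged_n[lo:hi] += 1
    merged = merged_sum / merged_n
    return np.polyfit(months, merged, 1)[0] * 120.0

trends = np.array([merged_trend() for _ in range(2000)])
print(trends.mean(), trends.std())             # spread = record-construction uncertainty
```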
Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sharpe, Jacob A.
2014-01-01
A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation.
Zonal average earth radiation budget measurements from satellites for climate studies
NASA Technical Reports Server (NTRS)
Ellis, J. S.; Haar, T. H. V.
1976-01-01
Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean month, season, and annual zonally averaged meridional profiles. Individual months, which comprise the 29-month set, were selected as representing the best available total flux data for compositing into large-scale statistics for climate studies. A discussion of the spatial resolution of the measurements, along with an error analysis including both the uncertainty and the standard error of the mean, is presented.
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets exceed 0.6 hPa in the free troposphere, with nearly a third exceeding 1.0 hPa at 26 km, where a 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can exceed 10 percent (25 percent of launches that reach 30 km exceed this threshold). These errors cause the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles to disagree by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are superior in performance compared to other radiosondes, with average 26 km errors of -0.12 hPa or +0.61 percent O3MR error. iMet-P radiosondes had average 26 km errors of -1.95 hPa or +8.75 percent O3MR error. Based on our analysis, we suggest that ozonesondes always be coupled with a GPS-enabled radiosonde and that pressure-dependent variables, such as O3MR, be recalculated and reprocessed using the GPS-measured altitude, especially when 26 km pressure offsets exceed 1.0 hPa (about 5 percent of the ambient pressure).
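The sensitivity quoted above follows directly from the definition of the ECC mixing ratio, O3MR [ppmv] = 10 * pO3 [mPa] / P [hPa]: a pressure offset maps almost one-to-one into a relative mixing-ratio error where the ambient pressure is small. The sketch below reproduces the roughly 5 percent effect of a 1 hPa offset near 26 km using hypothetical values.

```python
def o3_mixing_ratio_ppmv(p_o3_mpa, p_air_hpa):
    """ECC ozone mixing ratio: O3MR [ppmv] = 10 * pO3 [mPa] / P [hPa]."""
    return 10.0 * p_o3_mpa / p_air_hpa

# Effect of a radiosonde pressure offset at ~26 km (all values hypothetical)
p_o3, p_true, offset = 3.0, 20.0, -1.0           # mPa, hPa, hPa
mr_true = o3_mixing_ratio_ppmv(p_o3, p_true)
mr_biased = o3_mixing_ratio_ppmv(p_o3, p_true + offset)
print(100.0 * (mr_biased - mr_true) / mr_true)   # ~ +5.3% O3MR error
```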
NASA Astrophysics Data System (ADS)
Möhler, Christian; Russ, Tom; Wohlfahrt, Patrick; Elter, Alina; Runz, Armin; Richter, Christian; Greilich, Steffen
2018-01-01
An experimental setup for consecutive measurement of ion and x-ray absorption in tissue or other materials is introduced. With this setup using a 3D-printed sample container, the reference stopping-power ratio (SPR) of materials can be measured with an uncertainty of below 0.1%. A total of 65 porcine and bovine tissue samples were prepared for measurement, comprising five samples each of 13 tissue types representing about 80% of the total body mass (three different muscle and fatty tissues, liver, kidney, brain, heart, blood, lung and bone). Using a standard stoichiometric calibration for single-energy CT (SECT) as well as a state-of-the-art dual-energy CT (DECT) approach, SPR was predicted for all tissues and then compared to the measured reference. With the SECT approach, the SPRs of all tissues were predicted with a mean error of (-0.84 ± 0.12)% and a mean absolute error of (1.27 ± 0.12)%. In contrast, the DECT-based SPR predictions were overall consistent with the measured reference with a mean error of (-0.02 ± 0.15)% and a mean absolute error of (0.10 ± 0.15)%. Thus, in this study, the potential of DECT to decrease range uncertainty could be confirmed in biological tissue.
Tyo, J Scott; LaCasse, Charles F; Ratliff, Bradley M
2009-10-15
Microgrid polarimeters operate by integrating a focal plane array with an array of micropolarizers. The Stokes parameters are estimated by comparing polarization measurements from pixels in a neighborhood around the point of interest. The main drawback is that the measurements used to estimate the Stokes vector are made at different locations, leading to a false polarization signature owing to instantaneous field-of-view (IFOV) errors. We demonstrate for the first time, to our knowledge, that spatially band limited polarization images can be ideally reconstructed with no IFOV error by using a linear system framework.
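A minimal sketch of the conventional microgrid estimator implied above, forming linear Stokes parameters from a 2x2 super-pixel with polarizers at 0, 45, 90, and 135 degrees. The example intensities are invented and show how an unpolarized intensity ramp across the super-pixel aliases into false S1/S2 (the IFOV error the paper addresses).

```python
import numpy as np

def stokes_from_superpixel(i0, i45, i90, i135):
    """Linear Stokes estimate from a 2x2 microgrid super-pixel with
    micropolarizers at 0, 45, 90 and 135 degrees. Because the four
    intensities come from different spatial locations, a scene gradient
    across the super-pixel leaks into S1/S2 (IFOV error)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.array([s0, s1, s2])

# Unpolarized but spatially varying scene: a pure intensity ramp still
# produces nonzero S1/S2 from the naive estimator.
print(stokes_from_superpixel(1.00, 1.02, 1.04, 1.06))
```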
Goulet, Eric D B; Baker, Lindsay B
2017-12-01
The B-722 Laqua Twin is a low-cost, portable, battery-operated sodium analyzer which can be used for the assessment of sweat sodium concentration. The Laqua Twin is reliable and provides a degree of accuracy similar to more expensive analyzers; however, its interunit measurement error remains unknown. The purpose of this study was to compare the sodium concentration values of 70 sweat samples measured using three different Laqua Twin units. Mean absolute errors, random errors and constant errors among the different Laqua Twins ranged, respectively, from 1.7 mmol/L to 3.5 mmol/L, 2.5 mmol/L to 3.7 mmol/L, and -0.6 mmol/L to 3.9 mmol/L. Proportional errors among Laqua Twins were all < 2%. Based on a within-subject biological variability in sweat sodium concentration of ± 12%, the maximal allowable imprecision among instruments was considered to be ≤ 6%. In that respect, the within (2.9%), between (4.5%), and total (5.4%) measurement error coefficients of variation were all < 6%. For a given sweat sodium concentration value, the largest observed differences in mean, lower-bound, and upper-bound error of measurement among instruments were, respectively, 4.7 mmol/L, 2.3 mmol/L, and 7.0 mmol/L. In conclusion, our findings show that the interunit measurement error of the B-722 Laqua Twin is low and methodologically acceptable.
Measurement of spine and total body mineral by dual-photon absorptiometry
NASA Technical Reports Server (NTRS)
Mazess, R. B.; Young, D.
1983-01-01
The use of Gd-153 dual-photon absorptiometry at 43 and 100 keV to measure individual-bone and total-body bone minerals is discussed in a survey of recent studies on humans, phantoms, and monkeys. Precision errors of as low as 1 percent have been achieved in vivo, suggesting the use of sequential measurements in studies of immobilization and space-flight effects.
Lane, Sandi J; Troyer, Jennifer L; Dienemann, Jacqueline A; Laditka, Sarah B; Blanchette, Christopher M
2014-01-01
Older adults are at greatest risk of medication errors during the transition period of the first 7 days after admission and readmission to a skilled nursing facility (SNF). The aim of this study was to evaluate structure- and process-related factors that contribute to medication errors and harm during transition periods at an SNF. Data for medication errors and potential medication errors during the 7-day transition period for residents entering North Carolina SNFs were drawn from the Medication Error Quality Initiative-Individual Error database from October 2006 to September 2007. The impact of SNF structure and process measures on the number of reported medication errors and harm from errors was examined using bivariate and multivariate model methods. A total of 138 SNFs reported 581 transition period medication errors; 73 (12.6%) caused harm. Chain affiliation was associated with a reduction in the volume of errors during the transition period. One third of all reported transition errors occurred during the medication administration phase of the medication use process, where dose omissions were the most common type of error; however, dose omissions caused harm less often than wrong-dose errors did. Prescribing errors were much less common than administration errors but were much more likely to cause harm. Both structure and process measures of quality were related to the volume of medication errors. However, process quality measures may play a more important role in predicting harm from errors during the transition of a resident into an SNF. Medication errors during transition could be reduced by improving both prescribing processes and transcription and documentation of orders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E., E-mail: koepferl@usm.lmu.de
We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. We find from our pixel-by-pixel analysis of the measured dust surface density and dust temperature a worrisome error spread especially close to star formation sites and low-density regions, where for those “contaminated” pixels the surface densities can be under/overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background, typical of the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: + 9%; −13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam-size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable as now χ² values cannot be used as a quality indicator of a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; −7%) while having much lower median errors especially for the high-mass stellar feedback phase. In upcoming papers (Paper III; Paper IV) of this series we test the reliability of the measured star formation rate with direct and indirect techniques.
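As an illustration of the modified blackbody (greybody) fitting technique discussed above, the following minimal Python sketch fits a dust temperature and amplitude to hypothetical fluxes in the five Herschel bands; the constants, band set, amplitude scaling, and flux values are illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Physical constants (SI)
h, k, c = 6.626e-34, 1.381e-23, 2.998e8

def modified_blackbody(wavelength_um, log_amplitude, temperature, beta=2.0):
    """Greybody: S_nu proportional to nu**beta * B_nu(T); the amplitude absorbs mass, opacity and distance."""
    nu = c / (wavelength_um * 1e-6)
    planck = 2 * h * nu**3 / c**2 / (np.exp(h * nu / (k * temperature)) - 1.0)
    return 10.0**log_amplitude * nu**beta * planck

# Hypothetical fluxes (arbitrary units) at the Herschel bands for one pixel
bands_um = np.array([70.0, 160.0, 250.0, 350.0, 500.0])
fluxes = modified_blackbody(bands_um, log_amplitude=-9.0, temperature=20.0)
rng = np.random.default_rng(1)
fluxes_noisy = fluxes * (1.0 + 0.1 * rng.standard_normal(len(bands_um)))

# Fit amplitude and temperature (beta held fixed at its default value of 2)
popt, pcov = curve_fit(modified_blackbody, bands_um, fluxes_noisy,
                       p0=(-9.0, 15.0), sigma=0.1 * fluxes_noisy, absolute_sigma=True)
print("fitted temperature [K]:", popt[1])
```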
Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) data products user's guide
NASA Technical Reports Server (NTRS)
Mcpeters, Richard D.; Krueger, Arlin J.; Bhartia, P. K.; Herman, Jay R.; Oaks, Arnold; Ahmad, Ziuddin; Cebula, Richard P.; Schlesinger, Barry M.; Swissler, Tom; Taylor, Steven L.
1993-01-01
Two tape products from the Total Ozone Mapping Spectrometer (TOMS) aboard the Nimbus-7 have been archived at the National Space Science Data Center. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio -- the albedo -- is used in ozone retrievals. In-flight measurements are used to monitor changes in the instrument sensitivity. The algorithm to retrieve total column ozone compares the observed ratios of albedos at pairs of wavelengths with pair ratios calculated for different ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard-deviation random error is 2 percent, and the drift is +/- 1.5 percent over 14.5 years. The High Density TOMS (HDTOMS) tape contains the measured albedos, the derived total ozone amount, reflectivity, and cloud-height information for each scan position. It also contains an index of SO2 contamination for each position. The Gridded TOMS (GRIDTOMS) tape contains daily total ozone and reflectivity in roughly equal area grids (110 km in latitude by about 100-150 km in longitude). Detailed descriptions of the tape structure and record formats are provided.
Overview of the TOPEX/Poseidon Platform Harvest Verification Experiment
NASA Technical Reports Server (NTRS)
Morris, Charles S.; DiNardo, Steven J.; Christensen, Edward J.
1995-01-01
An overview is given of the in situ measurement system installed on Texaco's Platform Harvest for verification of the sea level measurement from the TOPEX/Poseidon satellite. The prelaunch error budget suggested that the total root mean square (RMS) error due to measurements made at this verification site would be less than 4 cm. The actual error budget for the verification site is within these original specifications. However, evaluation of the sea level data from three measurement systems at the platform has resulted in unexpectedly large differences between the systems. Comparison of the sea level measurements from the different tide gauge systems has led to a better understanding of the problems of measuring sea level in relatively deep ocean. As of May 1994, the Platform Harvest verification site has successfully supported 60 TOPEX/Poseidon overflights.
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As with any simple empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements.
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
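One literal reading of this procedure, for a linear(ized) weighted least squares problem, is to rescale the formal covariance by the average weighted residual so that the actual residuals, which carry all error sources, inform the state uncertainty. The sketch below illustrates that reading with hypothetical numbers; it is not the author's implementation.

```python
import numpy as np

def wls_with_empirical_covariance(A, y, W):
    """Batch WLS estimate plus a residual-scaled ('empirical') state error covariance.

    A: (m, n) partials/design matrix, y: (m,) observations, W: (m, m) weight matrix.
    The formal covariance (A^T W A)^{-1} is rescaled by the average weighted residual.
    """
    normal = A.T @ W @ A
    x_hat = np.linalg.solve(normal, A.T @ W @ y)
    residuals = y - A @ x_hat
    avg_weighted_residual = (residuals @ W @ residuals) / len(y)
    formal_covariance = np.linalg.inv(normal)
    empirical_covariance = avg_weighted_residual * formal_covariance
    return x_hat, formal_covariance, empirical_covariance

# Toy, linearized range-only style example (hypothetical numbers, for illustration only)
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
y = np.array([1.02, 0.95, 1.48])
W = np.diag([1.0 / 0.05**2] * 3)
x, P_formal, P_emp = wls_with_empirical_covariance(A, y, W)
print(x, np.diag(P_emp))
```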
USDA-ARS?s Scientific Manuscript database
Measurement error in self-reported total sugars intake may obscure associations between sugars consumption and health outcomes, and the sum of 24 hour urinary sucrose and fructose may serve as a predictive biomarker of total sugars intake. The study of Latinos: Nutrition & Physical Activity Assessme...
Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study
Silva, Maria das Dores Graciano; Rosa, Mário Borges; Franklin, Bryony Dean; Reis, Adriano Max Moreira; Anchieta, Lêni Márcia; Mota, Joaquim Antônio César
2011-01-01
OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with the prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitancy should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system. PMID:22012039
Miyashita, Theresa L; Diakogeorgiou, Eleni; Marrie, Kaitlyn
Investigation into the effect of cumulative subconcussive head impacts has yielded various results in the literature, with many supporting a link to neurological deficits. Little research has been conducted on men's lacrosse and associated balance deficits from head impacts. Two hypotheses were tested: (1) athletes will commit more errors on the postseason Balance Error Scoring System (BESS) test, and (2) there will be a positive correlation between changes in BESS scores and head impact exposure data. The design was a prospective longitudinal study (level of evidence, 3). Thirty-four Division I men's lacrosse players (age, 19.59 ± 1.42 years) wore helmets instrumented with a sensor to collect head impact exposure data over the course of a competitive season. Players completed a BESS test at the start and end of the competitive season. The number of errors from pre- to postseason increased during the double-leg stance on foam (P < 0.001), tandem stance on foam (P = 0.009), total number of errors on a firm surface (P = 0.042), and total number of errors on a foam surface (P = 0.007). There were significant correlations only between the total errors on a foam surface and linear acceleration (P = 0.038, r = 0.36), head injury criteria (P = 0.024, r = 0.39), and Gadd Severity Index scores (P = 0.031, r = 0.37). Changes in the total number of errors on a foam surface may be considered a sensitive measure to detect balance deficits associated with cumulative subconcussive head impacts sustained over the course of 1 lacrosse season, as measured by average linear acceleration, head injury criteria, and Gadd Severity Index scores. If there is microtrauma to the vestibular system due to repetitive subconcussive impacts, only an assessment that highly stresses the vestibular system may be able to detect these changes. Cumulative subconcussive impacts may result in neurocognitive dysfunction, including balance deficits, which are associated with an increased risk for injury. The development of a strategy to reduce the total number of head impacts may curb the associated sequelae. Incorporation of a modified BESS test, firm surface only, may not be recommended as it may not detect changes due to repetitive impacts over the course of a competitive season.
Ferreira, Tiago B; Ribeiro, Paulo; Ribeiro, Filomena J; O'Neill, João G
2017-12-01
To compare the prediction error in the calculation of toric intraocular lenses (IOLs) associated with methods that estimate the power of the posterior corneal surface (ie, Barrett toric calculator and Abulafia-Koch formula) with that of methods that consider real measures obtained using Scheimpflug imaging: software that uses vectorial calculation (Panacea toric calculator: http://www.panaceaiolandtoriccalculator.com) and ray-tracing software (PhacoOptics, Aarhus Nord, Denmark). In 107 eyes of 107 patients undergoing cataract surgery with toric IOL implantation (Acrysof IQ Toric; Alcon Laboratories, Inc., Fort Worth, TX), the residual astigmatism predicted by each calculation method was compared with manifest refractive astigmatism. Prediction error in residual astigmatism was calculated using vector analysis. All calculation methods resulted in overcorrection of with-the-rule astigmatism and undercorrection of against-the-rule astigmatism. Both estimation methods resulted in lower mean and centroid astigmatic prediction errors, and a larger number of eyes within 0.50 diopters (D) of absolute prediction error, than methods considering real measures (P < .001). Centroid prediction error (CPE) was 0.07 D at 172° for the Barrett toric calculator and 0.13 D at 174° for the Abulafia-Koch formula (combined with Holladay calculator). For methods using real posterior corneal surface measurements, CPE was 0.25 D at 173° for the Panacea calculator and 0.29 D at 171° for the ray-tracing software. The Barrett toric calculator and Abulafia-Koch formula yielded the lowest astigmatic prediction errors. Directly evaluating total corneal power for toric IOL calculation was not superior to estimating it. [J Refract Surg. 2017;33(12):794-800.]. Copyright 2017, SLACK Incorporated.
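The vector analysis of astigmatic prediction error mentioned above is commonly performed in double-angle space. The following sketch, with hypothetical cylinder and axis values, illustrates one way to compute the mean absolute and centroid prediction errors; it is not the study's software.

```python
import numpy as np

def to_double_angle(cylinder, axis_deg):
    """Represent an astigmatism (magnitude, axis) as a double-angle vector (x, y)."""
    theta = np.radians(2.0 * axis_deg)
    return np.column_stack([cylinder * np.cos(theta), cylinder * np.sin(theta)])

def astigmatic_prediction_error(pred_cyl, pred_axis, manifest_cyl, manifest_axis):
    """Vector prediction error = predicted minus manifest refractive astigmatism."""
    err = to_double_angle(pred_cyl, pred_axis) - to_double_angle(manifest_cyl, manifest_axis)
    magnitude = np.hypot(err[:, 0], err[:, 1])
    centroid = err.mean(axis=0)
    centroid_mag = np.hypot(*centroid)
    centroid_axis = (np.degrees(np.arctan2(centroid[1], centroid[0])) / 2.0) % 180.0
    return magnitude.mean(), centroid_mag, centroid_axis

# Hypothetical values (diopters and degrees), for illustration only
mean_abs, cen_mag, cen_axis = astigmatic_prediction_error(
    pred_cyl=np.array([0.50, 0.75, 0.30]), pred_axis=np.array([90.0, 85.0, 5.0]),
    manifest_cyl=np.array([0.25, 0.50, 0.50]), manifest_axis=np.array([95.0, 80.0, 175.0]))
print(mean_abs, cen_mag, cen_axis)
```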
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, L.; Hill, W.J.
A method is proposed to estimate the effect of long-term variations in total ozone on the error incurred in determining a trend in total ozone due to man-made effects. When this method is applied to data from Arosa, Switzerland over the years 1932-1980, a component of the standard error of the trend estimate equal to 0.6 percent per decade is obtained. If this estimate of long-term trend variability at Arosa is not too different from global long-term trend variability, then the threshold (±2 standard errors) for detecting an ozone trend in the 1970's that is outside of what could be expected from natural variation alone and hence be man-made would range from 1.35% (Reinsel et al, 1981) to 1.8%. The latter value is obtained by combining the Reinsel et al result with the result here, assuming that the error variations that both studies measure are independent and additive. Estimates for long-term trend variation over other time periods are also derived. Simulations that measure the precision of the estimate of long-term variability are reported.
Recursive Construction of Noiseless Subsystem for Qudits
NASA Astrophysics Data System (ADS)
Güngördü, Utkan; Li, Chi-Kwong; Nakahara, Mikio; Poon, Yiu-Tung; Sze, Nung-Sing
2014-03-01
When the environmental noise acting on the system has certain symmetries, a subsystem of the total system can avoid errors. Encoding information into such a subsystem is advantageous since it does not require any error syndrome measurements, which may introduce further errors to the system. However, utilizing such a subsystem for large systems becomes impractical with the increasing number of qudits. A recursive scheme offers a solution to this problem. Here, we review the recursive construction introduced in earlier work, which can asymptotically protect 1/d of the qudits in the system against collective errors.
Stepman, Hedwig C M; Stöckl, Dietmar; Acheme, Rosana; Sesini, Sandra; Mazziotta, Daniel; Thienpont, Linda M
2011-11-01
The Fundación Bioquímica Argentina (FBA) performs external quality assessment (EQA) of >3200 laboratories. However, FBA realizes that sample non-commutability and predominant use of heterogeneous systems may bias the estimated performance and standardization status. To eliminate these confounding factors, a study using frozen single donation sera was undertaken with the focus on serum-calcium and -albumin measurement. Target values were established from the results produced with homogeneous systems. In groups of n=7, system effects were investigated. Laboratory performance was evaluated from the correlation coefficient r between the measurement results for all sera and the target values. This allowed ranking of the laboratories and judgment of the deviation for individual samples (total error) against a 10% limit. The total error specification was a deviation for ≥ 5 samples exceeding 10% and/or causing a result outside the laboratory's reference interval. For calcium (n=303) (range: 2.06-2.42 mmol/L), 81 laboratories had an r-value <0.6, 43 even <0.4; the total error was relevant for 97 (10% limit) and 111 (reference interval) laboratories. For albumin (n=311) (range: 34.7-45.7 g/L) r was <0.7 (<0.4) in 44 (16) laboratories; 83 and 36 laboratories exceeded the total error criteria. Laboratories using homogeneous systems were generally ranked higher by correlation. System effects were moderate for calcium, but significant for albumin. The study demonstrated the need to improve the quality and harmonization of calcium and albumin testing in the investigated laboratories. To achieve this objective, we promote co-operation between laboratories, EQA provider and manufacturers.
Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie
2014-01-01
This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldwin, G C
1974-04-30
Research on low energy electron collisions in gases by the time-of-flight velocity selection technique included, as a preliminary to total cross section measurements, investigations of the statistical and systematic errors inherent in the technique. In particular, thermal transpiration and instrumental fluctuation errors in manometry were investigated, and the results embodied in computer programs for data reduction. The instrumental system was improved to permit extended periods of data accumulation without manual attention. Total cross section measurements in helium, made prior to, and in molecular nitrogen, made after the supporting work was completed, are reported. The total cross section of helium is found to be higher than reported in previous beam determinations. That of nitrogen is found to be structureless at low energies.
Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Fisher, Brad L.; Wolff, David B.
2007-01-01
This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to the TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured from the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7-minute or longer time scales, the errors decrease dramatically. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that the events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
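A minimal sketch of the cubic-spline idea, assuming a hypothetical record of tip times and the usual 0.254 mm bucket size: fit a spline to cumulative rainfall and differentiate it to obtain one-minute rain rates. This is an illustration of the general technique, not the 2A-56 production code.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical tipping-bucket record: times (minutes) of successive 0.254 mm tips
tip_times_min = np.array([0.0, 3.2, 5.1, 6.4, 7.2, 8.9, 12.5, 18.0])
tip_depth_mm = 0.254
cumulative_mm = tip_depth_mm * np.arange(len(tip_times_min))

# Fit a cubic spline to cumulative rainfall versus time; its first derivative is
# the instantaneous rain rate (mm/min), here sampled on a one-minute grid.
spline = CubicSpline(tip_times_min, cumulative_mm)
minutes = np.arange(0.0, tip_times_min[-1] + 1.0, 1.0)
rain_rate_mm_per_hr = 60.0 * np.clip(spline(minutes, 1), 0.0, None)  # derivative, clipped at zero

print(np.round(rain_rate_mm_per_hr, 2))
```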
Study on profile measurement of extruding tire tread by laser
NASA Astrophysics Data System (ADS)
Wang, LiangCai; Zhang, Wanping; Zhu, Weihu
1996-10-01
This paper presents a new 2D measuring system for profile measurement of extruding tire tread by laser. It includes thickness measurement of the extruding tire tread by laser and width measurement using Moiré fringes. The system has been applied to the processing line of extruding tire tread. Two kinds of measuring results have been obtained: a standard profile picture of the extruding tire tread including seven measured values, and a series of thickness and width values. With a scanning speed below 100 mm/s and a total width below 800 mm, the width measurement error is within ±0.5 mm; for thicknesses up to 40 mm, the thickness measurement error is within ±0.1 mm.
IMRT QA: Selecting gamma criteria based on error detection sensitivity.
Steers, Jennifer M; Fraass, Benedick A
2016-04-01
The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
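For orientation, a brute-force one-dimensional global gamma comparison with a %Diff/DTA criterion and a low-dose threshold can be sketched as follows; this is a simplified illustration with hypothetical profiles, not the clinical software or the ArcCHECK analysis used in the study.

```python
import numpy as np

def gamma_passing_rate(dose_ref, dose_eval, positions_mm, percent_diff=3.0,
                       dta_mm=3.0, threshold_percent=10.0):
    """Brute-force 1D global gamma analysis (sketch, not a clinical implementation).

    dose_ref: measured dose profile, dose_eval: calculated dose profile,
    positions_mm: common spatial grid. Points below the dose threshold
    (percentage of the global maximum of the reference) are excluded.
    """
    d_max = dose_ref.max()
    dose_criterion = percent_diff / 100.0 * d_max       # global %Diff normalization
    evaluated = dose_ref >= threshold_percent / 100.0 * d_max
    gammas = []
    for r in np.where(evaluated)[0]:
        dd = (dose_eval - dose_ref[r]) / dose_criterion  # dose difference term
        dr = (positions_mm - positions_mm[r]) / dta_mm   # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dr**2).min())
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

# Hypothetical profiles: a Gaussian 'measurement' and a calculation with a 2% MU-like offset
x = np.linspace(-50, 50, 201)
measured = 100.0 * np.exp(-x**2 / (2 * 15.0**2))
calculated = 1.02 * measured
print(gamma_passing_rate(measured, calculated, x))
```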
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
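The key ingredients, a Gaussian negative log likelihood evaluated with a full covariance of total errors and the conversion of information-criterion values into averaging weights, can be sketched as follows. The residuals and covariance below are synthetic and hypothetical; this is not the study's models or its iterative inference of the covariance.

```python
import numpy as np

def negative_log_likelihood(residuals, cov_total_errors):
    """Gaussian NLL of calibration residuals using a full (possibly correlated) covariance."""
    n = len(residuals)
    sign, logdet = np.linalg.slogdet(cov_total_errors)
    quad = residuals @ np.linalg.solve(cov_total_errors, residuals)
    return 0.5 * (n * np.log(2 * np.pi) + logdet + quad)

def model_averaging_weights(criterion_values):
    """Turn information-criterion values (e.g., AIC) into normalized averaging weights."""
    delta = np.asarray(criterion_values) - np.min(criterion_values)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical example: two alternative models with temporally correlated total errors
n, rho, sigma = 50, 0.8, 1.0
cov = sigma**2 * rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # AR(1)-like
rng = np.random.default_rng(0)
res_model_a = rng.multivariate_normal(np.zeros(n), cov)
res_model_b = res_model_a + 0.3                      # a slightly more biased model
aic = [2 * k + 2 * negative_log_likelihood(r, cov)
       for k, r in zip([4, 6], [res_model_a, res_model_b])]
print(model_averaging_weights(aic))
```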
Interobserver error involved in independent attempts to measure cusp base areas of Pan M1s
Bailey, Shara E; Pilbrow, Varsha C; Wood, Bernard A
2004-01-01
Cusp base areas measured from digitized images increase the amount of detailed quantitative information one can collect from post-canine crown morphology. Although this method is gaining wide usage for taxonomic analyses of extant and extinct hominoids, the techniques for digitizing images and taking measurements differ between researchers. The aim of this study was to investigate interobserver error in order to help assess the reliability of cusp base area measurement within extant and extinct hominoid taxa. Two of the authors measured individual cusp base areas and total cusp base area of 23 maxillary first molars (M1) of Pan. From these, relative cusp base areas were calculated. No statistically significant interobserver differences were found for either absolute or relative cusp base areas. On average the hypocone and paracone showed the least interobserver error (< 1%) whereas the protocone and metacone showed the most (2.6–4.5%). We suggest that the larger measurement error in the metacone/protocone is due primarily to either weakly defined fissure patterns and/or the presence of accessory occlusal features. Overall, levels of interobserver error are similar to those found for intraobserver error. The results of our study suggest that if certain prescribed standards are employed then cusp and crown base areas measured by different individuals can be pooled into a single database. PMID:15447691
Navigation errors encountered using weather-mapping radar for helicopter IFR guidance to oil rigs
NASA Technical Reports Server (NTRS)
Phillips, J. D.; Bull, J. S.; Hegarty, D. M.; Dugan, D. C.
1980-01-01
In 1978 a joint NASA-FAA helicopter flight test was conducted to examine the use of weather-mapping radar for IFR guidance during landing approaches to oil rig helipads. The following navigation errors were measured: total system error, radar-range error, radar-bearing error, and flight technical error. Three problem areas were identified: (1) operational problems leading to pilot blunders, (2) poor navigation to the downwind final approach point, and (3) pure homing on final approach. Analysis of these problem areas suggests improvement in the radar equipment, approach procedure, and pilot training, and gives valuable insight into the development of future navigation aids to serve the off-shore oil industry.
Electrodermal lability as an indicator for subjective sleepiness during total sleep deprivation.
Michael, Lars; Passmann, Sven; Becker, Ruth
2012-08-01
The present study addresses the suitability of electrodermal lability as an indicator of individual vulnerability to the effects of total sleep deprivation. During two complete circadian cycles, the effects of 48 h of total sleep deprivation on physiological measures (electrodermal activity and body temperature), subjective sleepiness (measured by visual analogue scale and tiredness symptom scale) and task performance (reaction time and errors in a go/no go task) were investigated. Analyses of variance with repeated measures revealed substantial decreases in the number of skin conductance responses and body temperature, and increases in subjective sleepiness, reaction time, and error rates. For all changes, strong circadian oscillations could be observed as well. The electrodermally more labile subgroup reported higher subjective sleepiness compared with electrodermally more stable participants, but showed no differences in the time courses of body temperature and task performance. Therefore, electrodermal lability seems to be a specific indicator for the changes in subjective sleepiness due to total sleep deprivation and circadian oscillations, but not a suitable indicator for vulnerability to the effects of sleep deprivation per se. © 2011 European Sleep Research Society.
An improved procedure for the validation of satellite-based precipitation estimates
NASA Astrophysics Data System (ADS)
Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad
2015-09-01
The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics in precipitation estimates and therefore do not adequately inform data producers and users. The proposed new validation procedure has two steps: 1) an error decomposition approach to separate the total retrieval error into three independent components: hit error, false precipitation and missed precipitation; and 2) the hit error is further analyzed based on a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the recent two versions (Version 6 and 7) of TRMM's Multi-sensor Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that winter total bias is dominated by the missed precipitation over the west coastal areas and the Rocky Mountains, and the false precipitation over large areas in the Midwest. The summer total bias comes largely from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall at the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT having higher errors, to varying degrees, than their research (post-real-time) counterparts. The new V7 algorithm shows obvious improvements in reducing random errors in both winter and summer seasons, compared to its predecessor V6. Stage IV, as expected, surpasses the satellite-based datasets in all the metrics over CONUS. Based on the results, we recommend the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect the procedure will work effectively for higher resolution data to be produced in the Global Precipitation Measurement (GPM) era.
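A compact sketch of the two-step procedure, hit/missed/false decomposition followed by a multiplicative (log-log) error model fitted to the hit pairs, might look like this; the data, threshold, and parameter values are hypothetical, not the TMPA evaluation itself.

```python
import numpy as np

def decompose_error(reference, estimate, rain_threshold=0.1):
    """Split total error into hit bias, missed precipitation, and false precipitation."""
    ref_rain = reference > rain_threshold
    est_rain = estimate > rain_threshold
    hit = ref_rain & est_rain
    hit_bias = np.sum(estimate[hit] - reference[hit])
    missed = -np.sum(reference[ref_rain & ~est_rain])     # negative contribution to total bias
    false = np.sum(estimate[~ref_rain & est_rain])        # positive contribution to total bias
    return hit_bias, missed, false

def fit_multiplicative_model(reference, estimate, rain_threshold=0.1):
    """Fit estimate = alpha * reference**beta * exp(eps) over hit pairs (log-log regression)."""
    hit = (reference > rain_threshold) & (estimate > rain_threshold)
    x, y = np.log(reference[hit]), np.log(estimate[hit])
    beta, log_alpha = np.polyfit(x, y, 1)
    eps = y - (log_alpha + beta * x)
    return np.exp(log_alpha), beta, eps.std()             # systematic terms and random scatter

# Hypothetical gauge reference and satellite estimate (mm/day), illustration only
rng = np.random.default_rng(2)
ref = rng.gamma(shape=0.5, scale=8.0, size=1000)
est = 0.9 * ref**1.05 * np.exp(0.3 * rng.standard_normal(1000))
est[rng.random(1000) < 0.05] = 0.0                        # introduce some missed precipitation
print(decompose_error(ref, est), fit_multiplicative_model(ref, est))
```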
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of the differencing (i.e., using two homogeneous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single-sensor’s system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference, by two nonlinear conversions and without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by a nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system outputs are given in the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
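A greatly simplified single-sensor analogue of scalar-referenced vector calibration (nine parameters: three biases plus a lower-triangular scale/nonorthogonality matrix) can be sketched with a nonlinear least-squares fit. The paper's full two-sensor-array, 48-parameter method is more involved; the data, distortion, and parameter layout below are synthetic assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

def correct(raw, params):
    """Apply bias removal and a lower-triangular correction matrix to raw vector readings."""
    bias = params[:3]
    L = np.array([[params[3], 0.0,       0.0],
                  [params[4], params[5], 0.0],
                  [params[6], params[7], params[8]]])
    return (raw - bias) @ L.T

def residuals(params, raw, tmi_reference):
    """Difference between corrected vector magnitude and the scalar TMI reference."""
    return np.linalg.norm(correct(raw, params), axis=1) - tmi_reference

# Hypothetical data: a 50000 nT field seen in many orientations, distorted by
# scale/nonorthogonality/bias errors, then the parameters are recovered.
rng = np.random.default_rng(3)
n = 200
directions = rng.standard_normal((n, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
true_field = 50000.0 * directions
distortion = np.array([[1.02, 0.0, 0.0], [0.01, 0.98, 0.0], [-0.02, 0.015, 1.01]])
raw = true_field @ distortion.T + np.array([120.0, -80.0, 60.0])

x0 = np.concatenate([np.zeros(3), [1.0, 0.0, 1.0, 0.0, 0.0, 1.0]])  # start near identity
fit = least_squares(residuals, x0, args=(raw, np.full(n, 50000.0)))
print(np.round(fit.x[:3], 1))   # recovered biases
```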
Retrieval of the aerosol optical thickness from UV global irradiance measurements
NASA Astrophysics Data System (ADS)
Costa, M. J.; Salgueiro, V.; Bortoli, D.; Obregón, M. A.; Antón, M.; Silva, A. M.
2015-12-01
UV irradiance has been measured at Évora for several years, where a CIMEL sunphotometer integrated in AERONET is also installed. In the present work, measurements of UVA (315-400 nm) irradiances taken with Kipp&Zonen radiometers, as well as satellite data of ozone total column values, are used in combination with radiative transfer calculations to estimate the aerosol optical thickness (AOT) in the UV. The retrieved UV AOT in Évora is compared with AERONET AOT (at 340 and 380 nm) and a fairly good agreement is found, with a root mean square error of 0.05 (normalized root mean square error of 8.3%) and a mean absolute error of 0.04 (mean percentage error of 2.9%). The methodology is then used to estimate the UV AOT in Sines, an industrialized site on the Portuguese Atlantic coast, where UV irradiance has been monitored since 2013 but no aerosol information is available.
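The agreement statistics quoted above can be reproduced with a few lines. Definitions of the normalized and percentage errors vary; the ones below are one common choice, and the AOT values are hypothetical rather than the Évora data.

```python
import numpy as np

def agreement_metrics(retrieved, reference):
    """RMSE, normalized RMSE (% of mean reference), MAE, and mean percentage error."""
    retrieved, reference = np.asarray(retrieved), np.asarray(reference)
    diff = retrieved - reference
    rmse = np.sqrt(np.mean(diff**2))
    nrmse_percent = 100.0 * rmse / np.mean(reference)
    mae = np.mean(np.abs(diff))
    mpe_percent = 100.0 * np.mean(diff / reference)
    return rmse, nrmse_percent, mae, mpe_percent

# Hypothetical retrieved vs. AERONET AOT values at 380 nm, for illustration only
aeronet = np.array([0.35, 0.52, 0.60, 0.28, 0.44])
retrieved = np.array([0.33, 0.57, 0.63, 0.30, 0.41])
print(agreement_metrics(retrieved, aeronet))
```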
Irradiance measurement errors due to the assumption of a Lambertian reference panel
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Kirchner, J. A.
1982-01-01
A technique is presented for determining the error in diurnal irradiance measurements that results from the non-Lambertian behavior of a reference panel under various irradiance conditions. Spectral biconical reflectance factors of a spray-painted barium sulfate panel, along with simulated sky radiance data for clear and hazy skies at six solar zenith angles, were used to calculate the estimated panel irradiances and true irradiances for a nadir-looking sensor in two wavelength bands. The inherent errors in total spectral irradiance (0.68 microns) for a clear sky were 0.60, 6.0, 13.0, and 27.0% for solar zenith angles of 0, 45, 60, and 75 deg, respectively. The technique can be used to characterize the error of a specific panel used in field measurements, and thus eliminate any ambiguity of the effects of the type, preparation, and aging of the paint.
Nimbus 7 solar backscatter ultraviolet (SBUV) ozone products user's guide
NASA Technical Reports Server (NTRS)
Fleig, Albert J.; Mcpeters, R. D.; Bhartia, P. K.; Schlesinger, Barry M.; Cebula, Richard P.; Klenk, K. F.; Taylor, Steven L.; Heath, Donald F.
1990-01-01
Three ozone tape products from the Solar Backscatter Ultraviolet (SBUV) experiment aboard Nimbus 7 were archived at the National Space Science Data Center. The experiment measures the fraction of incoming radiation backscattered by the Earth's atmosphere at 12 wavelengths. In-flight measurements were used to monitor changes in the instrument sensitivity. Total column ozone is derived by comparing the measurements with calculations of what would be measured for different total ozone amounts. The altitude distribution is retrieved using an optimum statistical technique for the inversion. The estimated initial error in the absolute scale for total ozone is 2 percent, with a 3 percent drift over 8 years. The profile error depends on latitude and height, smallest at 3 to 10 mbar; the drift increases with increasing altitude. Three tape products are described. The High Density SBUV (HDSBUV) tape contains the final derived products - the total ozone and the vertical ozone profile - as well as much detailed diagnostic information generated during the retrieval process. The Compressed Ozone (CPOZ) tape contains only that subset of HDSBUV information, including total ozone and ozone profiles, considered most useful for scientific studies. The Zonal Means Tape (ZMT) contains daily, weekly, monthly and quarterly averages of the derived quantities over 10 deg latitude zones.
Development and Assessment of a Medication Safety Measurement Program in a Long-Term Care Pharmacy.
Hertig, John B; Hultgren, Kyle E; Parks, Scott; Rondinelli, Rick
2016-02-01
Medication errors continue to be a major issue in the health care system, including in long-term care facilities. While many hospitals and health systems have developed methods to identify, track, and prevent these errors, long-term care facilities historically have not invested in these error-prevention strategies. The objective of this study was two-fold: 1) to develop a set of medication-safety process measures for dispensing in a long-term care pharmacy, and 2) to analyze the data from those measures to determine the relative safety of the process. The study was conducted at In Touch Pharmaceuticals in Valparaiso, Indiana. To assess the safety of the medication-use system, each step was documented using a comprehensive flowchart (process flow map) tool. Once completed and validated, the flowchart was used to complete a "failure modes and effects analysis" (FMEA) identifying ways a process may fail. Operational gaps found during FMEA were used to identify points of measurement. The research identified a set of eight measures as potential areas of failure; data were then collected on each one of these. More than 133,000 medication doses (opportunities for errors) were included in the study during the research time frame (April 1 to June 4, 2014). Overall, there was an approximate order-entry error rate of 15.26%, with intravenous errors at 0.37%. A total of 21 errors migrated through the entire medication-use system. These 21 errors in 133,000 opportunities resulted in a final check error rate of 0.015%. A comprehensive medication-safety measurement program was designed and assessed. This study demonstrated the ability to detect medication errors in a long-term pharmacy setting, thereby making process improvements measurable. Future, larger, multi-site studies should be completed to test this measurement program.
Sources of Error in Substance Use Prevalence Surveys
Johnson, Timothy P.
2014-01-01
Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511
New evidence of factor structure and measurement invariance of the SDQ across five European nations.
Ortuño-Sierra, Javier; Fonseca-Pedrero, Eduardo; Aritio-Solana, Rebeca; Velasco, Alvaro Moreno; de Luis, Edurne Chocarro; Schumann, Gunter; Cattrell, Anna; Flor, Herta; Nees, Frauke; Banaschewski, Tobias; Bokde, Arun; Whelan, Rob; Buechel, Christian; Bromberg, Uli; Conrod, Patricia; Frouin, Vincent; Papadopoulos, Dimitri; Gallinat, Juergen; Garavan, Hugh; Heinz, Andreas; Walter, Henrik; Struve, Maren; Gowland, Penny; Paus, Tomáš; Poustka, Luise; Martinot, Jean-Luc; Paillère-Martinot, Marie-Laure; Vetter, Nora C; Smolka, Michael N; Lawrence, Claire
2015-12-01
The main purpose of the present study was to analyse the internal structure and to test the measurement invariance of the Strengths and Difficulties Questionnaire (SDQ), self-reported version, in five European countries. The sample consisted of 3012 adolescents aged between 12 and 17 years (M = 14.20; SD = 0.83). Both the five-factor model (with correlated errors added) and a variant of that model in which the reverse-worded items were allowed to cross-load on the Prosocial subscale displayed adequate goodness-of-fit indices. Multi-group confirmatory factor analysis showed that the five-factor model (with correlated errors added) had partial strong measurement invariance across countries. A total of 11 of the 25 items were non-invariant across samples. The level of internal consistency of the Total difficulties score was 0.84, and ranged between 0.69 and 0.78 for the SDQ subscales. The findings indicate that the SDQ's subscales need to be modified in various ways for screening emotional and behavioural problems in the five European countries that were analysed.
Hernan, Andrea; Philpot, Benjamin; Janus, Edward D; Dunbar, James A
2012-07-08
Error in self-reported measures of obesity has been frequently described, but the effect of self-reported error on recruitment into diabetes prevention programs is not well established. The aim of this study was to examine the effect of using self-reported obesity data from the Finnish diabetes risk score (FINDRISC) on recruitment into the Greater Green Triangle Diabetes Prevention Project (GGT DPP). The GGT DPP was a structured group-based lifestyle modification program delivered in primary health care settings in South-Eastern Australia. During 2004-05, 850 FINDRISC forms were collected during recruitment for the GGT DPP. Eligible individuals, at moderate to high risk of developing diabetes, were invited to undertake baseline tests, including anthropometric measurements performed by specially trained nurses. In addition to errors in calculating total risk scores, accuracy of self-reported data (height, weight, waist circumference (WC) and Body Mass Index (BMI)) from FINDRISCs was compared with baseline data, with impact on participation eligibility presented. Overall, calculation errors impacted on eligibility in 18 cases (2.1%). Of n = 279 GGT DPP participants with measured data, errors (total score calculation, BMI or WC) in self-report were found in n = 90 (32.3%). These errors were equally likely to result in under- or over-reported risk. Under-reporting was more common in those reporting lower risk scores (Spearman-rho = -0.226, p-value < 0.001). However, underestimation resulted in only 6% of individuals at high risk of diabetes being incorrectly categorised as moderate or low risk of diabetes. Overall, FINDRISC was found to be an effective tool to screen and recruit participants at moderate to high risk of diabetes, accurately categorising levels of overweight and obesity using self-report data. The results could be generalisable to other diabetes prevention programs using screening tools which include self-reported levels of obesity.
Poster Presentation: Optical Test of NGST Developmental Mirrors
NASA Technical Reports Server (NTRS)
Hadaway, James B.; Geary, Joseph; Reardon, Patrick; Peters, Bruce; Keidel, John; Chavers, Greg
2000-01-01
An Optical Testing System (OTS) has been developed to measure the figure and radius of curvature of NGST developmental mirrors in the vacuum, cryogenic environment of the X-Ray Calibration Facility (XRCF) at Marshall Space Flight Center (MSFC). The OTS consists of a WaveScope Shack-Hartmann sensor from Adaptive Optics Associates as the main instrument, a Point Diffraction Interferometer (PDI), a Point Spread Function (PSF) imager, an alignment system, a Leica Disto Pro distance measurement instrument, and a laser source palette (632.8 nm wavelength) that is fiber-coupled to the sensor instruments. All of the instruments except the laser source palette are located on a single breadboard known as the Wavefront Sensor Pallet (WSP). The WSP is located on top of a 5-DOF motion system located at the center of curvature of the test mirror. Two PCs are used to control the OTS. The error in the figure measurement is dominated by the WaveScope's measurement error. An analysis using the absolute wavefront gradient error of 1/50 wave P-V (at 0.6328 microns) provided by the manufacturer leads to a total surface figure measurement error of approximately 1/100 wave rms. This easily meets the requirement of 1/10 wave P-V. The error in radius of curvature is dominated by the Leica's absolute measurement error of ±1.5 mm and the focus setting error of ±1.4 mm, giving an overall error of ±2 mm. The OTS is currently being used to test the NGST Mirror System Demonstrators (NMSDs) and the Subscale Beryllium Mirror Demonstrator (SBMD).
Multi-muscle FES force control of the human arm for arbitrary goals.
Schearer, Eric M; Liao, Yu-Wei; Perreault, Eric J; Tresch, Matthew C; Memberg, William D; Kirsch, Robert F; Lynch, Kevin M
2014-05-01
We present a method for controlling a neuroprosthesis for a paralyzed human arm using functional electrical stimulation (FES) and characterize the errors of the controller. The subject has surgically implanted electrodes for stimulating muscles in her shoulder and arm. Using input/output data, a model mapping muscle stimulations to isometric endpoint forces measured at the subject's hand was identified. We inverted the model of this redundant and coupled multiple-input multiple-output system by minimizing muscle activations and used this inverse for feedforward control. The magnitude of the total root mean square error over a grid in the volume of achievable isometric endpoint force targets was 11% of the total range of achievable forces. Major sources of error were random error due to trial-to-trial variability and model bias due to nonstationary system properties. Because the muscles working collectively are the actuators of the skeletal system, the quantification of errors in force control guides designs of motion controllers for multi-joint, multi-muscle FES systems that can achieve arbitrary goals.
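One simple stand-in for inverting a linear stimulation-to-force map while penalizing activation magnitude is bounded, regularized least squares. The sketch below uses a hypothetical activation-to-force matrix and is not the study's identified model or controller.

```python
import numpy as np
from scipy.optimize import lsq_linear

def feedforward_activations(W, force_target, activation_weight=0.01):
    """Choose muscle activations a in [0, 1] that reproduce a target endpoint force.

    Solves min ||W a - f||^2 + activation_weight * ||a||^2 with bound constraints,
    a simple stand-in for 'inverting the model by minimizing muscle activations'.
    W: (3, n_muscles) map from activations to isometric endpoint force (N).
    """
    n = W.shape[1]
    A = np.vstack([W, np.sqrt(activation_weight) * np.eye(n)])  # augment with the penalty
    b = np.concatenate([force_target, np.zeros(n)])
    return lsq_linear(A, b, bounds=(0.0, 1.0)).x

# Hypothetical 3-D endpoint force map for six stimulated muscles (N per unit activation)
W = np.array([[ 8.0, -5.0,  2.0,  0.0,  4.0, -3.0],
              [ 3.0,  6.0, -4.0,  5.0, -2.0,  1.0],
              [ 1.0,  2.0,  5.0, -3.0,  6.0,  4.0]])
a = feedforward_activations(W, force_target=np.array([5.0, 3.0, 4.0]))
print(np.round(a, 3), np.round(W @ a, 2))
```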
NASA Astrophysics Data System (ADS)
Bruns, Donald
2016-05-01
In 1919, astronomers performed an experiment during a solar eclipse, attempting to measure the deflection of stars near the sun, in order to verify Einstein's theory of general relativity. The experiment was very difficult and the results were marginal, but the success made Albert Einstein famous around the world. Astronomers last repeated the experiment in 1973, achieving an error of 11%. In 2017, using amateur equipment and modern technology, I plan to repeat the experiment and achieve a 1% error. The best available star catalog will be used for star positions. Corrections for optical distortion and atmospheric refraction are better than 0.01 arcsec. During totality, I expect 7 or 8 measurable stars down to magnitude 9.5, based on analysis of previous eclipse measurements taken by amateurs. Reference images, taken near the sun during totality, will be used for precise calibration. Preliminary test runs performed during twilight in April 2016 and April 2017 can accurately simulate the sky conditions during totality, providing an accurate estimate of the final uncertainty.
Skeletal and body composition evaluation
NASA Technical Reports Server (NTRS)
Mazess, R. B.
1983-01-01
Work performed included: research on radiation detectors for absorptiometry; analysis of errors affecting single photon absorptiometry and development of instrumentation; analysis of errors affecting dual photon absorptiometry and development of instrumentation; comparison of skeletal measurements with other techniques; cooperation with NASA projects for skeletal evaluation in spaceflight (Experiment MO-78) and in laboratory studies with immobilized animals; studies of postmenopausal osteoporosis; organization of scientific meetings and workshops on absorptiometric measurement; and development of instrumentation for measurement of fluid shifts in the human body. Instrumentation was developed that allows accurate and precise (2% error) measurements of mineral content in compact and trabecular bone and of the total skeleton. Instrumentation was also developed to measure fluid shifts in the extremities. Radiation exposure with these procedures is low (2-10 mrem). One hundred seventy-three technical reports and one hundred and four published papers of studies from the University of Wisconsin Bone Mineral Lab are listed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Y; National Cancer Center, Kashiwa, Chiba; Tachibana, H
Purpose: Total body irradiation (TBI) and total marrow irradiation (TMI) using Tomotherapy have been reported. A gantry-based linear accelerator uses one isocenter during one rotational irradiation. Thus, 3–5 isocenter points must be used for a whole VMAT-TBI plan while smoothing out the junctional dose distribution. IGRT provides accurate and precise patient setup for the multiple junctions; however, some setup errors inevitably occur and affect the accuracy of the dose distribution in the junction area. In this study, we evaluated the robustness of VMAT-TBI against patient setup error. Methods: VMAT-TBI planning was performed in an adult whole-body human phantom using Eclipse. Eight full arcs with four isocenter points using 6 MV X-rays were used to cover the whole body. The dose distribution was optimized using two structures, the patient's body as PTV and the lungs. Two arcs shared one isocenter, and each pair of arcs overlapped the adjacent pair by 5 cm. Point absolute dose measurements using an ionization chamber and planar relative dose distribution measurements using film were performed in the junctional regions using a water-equivalent slab phantom. In the measurements, setup errors from −5 to +5 mm were added. Results: The chamber measurements show that the deviations were within ±3% when the setup errors were within ±3 mm. In the planar evaluation, the gamma pass ratio (3%/2 mm) was more than 90% when the errors were within ±3 mm. However, there were hot/cold areas at the edge of the junction even with an acceptable gamma pass ratio. A 5 mm setup error caused larger hot and cold areas, and the dosimetrically acceptable areas decreased in the overlapped regions. Conclusion: VMAT-TBI can be clinically acceptable when the patient setup error is within ±3 mm. Averaging effects from random patient setup errors would help blur the hot/cold areas at the junction.
A comparison of advanced overlay technologies
NASA Astrophysics Data System (ADS)
Dasari, Prasad; Smith, Nigel; Goelzer, Gary; Liu, Zhuan; Li, Jie; Tan, Asher; Koh, Chin Hwee
2010-03-01
The extension of optical lithography to 22nm and beyond by Double Patterning Technology is often challenged by CDU and overlay control. With reduced overlay measurement error budgets in the sub-nm range, relying on traditional Total Measurement Uncertainty (TMU) estimates alone is no longer sufficient. In this paper we report scatterometry overlay measurement data from a set of twelve test wafers, using four different target designs. The TMU of these measurements is under 0.4nm, within the process control requirements for the 22nm node. Comparing the measurement differences between DBO targets (using empirical and model-based analysis) and with image-based overlay data indicates the presence of systematic and random measurement errors that exceed the TMU estimate.
Dichrometer errors resulting from large signals or improper modulator phasing.
Sutherland, John C
2012-09-01
A single-beam spectrometer equipped with a photoelastic modulator can be configured to measure a number of different parameters useful in characterizing chemical and biochemical materials including natural and magnetic circular dichroism, linear dichroism, natural and magnetic fluorescence-detected circular dichroism, and fluorescence polarization anisotropy as well as total absorption and fluorescence. The derivations of the mathematical expressions used to extract these parameters from ultraviolet, visible, and near-infrared light-induced electronic signals in a dichrometer assume that the dichroic signals are sufficiently small that certain mathematical approximations will not introduce significant errors. This article quantifies errors resulting from these assumptions as a function of the magnitude of the dichroic signals. In the case of linear dichroism, improper modulator programming can result in errors greater than those resulting from the assumption of small signal size, whereas for fluorescence polarization anisotropy, improper modulator phase alone gives incorrect results. Modulator phase can also impact the values of total absorbance recorded simultaneously with linear dichroism and total fluorescence. Copyright © 2012 Wiley Periodicals, Inc., A Wiley Company.
NASA Astrophysics Data System (ADS)
Eskes, H. J.; Piters, A. J. M.; Levelt, P. F.; Allaart, M. A. F.; Kelder, H. M.
1999-10-01
A four-dimensional data-assimilation method is described to derive synoptic ozone fields from total-column ozone satellite measurements. The ozone columns are advected by a 2D tracer-transport model, using ECMWF wind fields at a single pressure level. Special attention is paid to the modeling of the forecast error covariance and quality control. The temporal and spatial dependence of the forecast error is taken into account, resulting in a global error field at any instant in time that provides a local estimate of the accuracy of the assimilated field. The authors discuss the advantages of the 4D-variational (4D-Var) approach over sequential assimilation schemes. One of the attractive features of the 4D-Var technique is its ability to incorporate measurements at later times t > t0 in the analysis at time t0, in a way consistent with the time evolution as described by the model. This significantly improves the offline analyzed ozone fields.
Sensitivity of planetary cruise navigation to earth orientation calibration errors
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Folkner, W. M.
1995-01-01
A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.
IMRT QA: Selecting gamma criteria based on error detection sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steers, Jennifer M.; Fraass, Benedick A., E-mail: benedick.fraass@cshs.org
Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. Conclusions: We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
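The error-curve idea can be illustrated with a minimal sketch: for one gamma criterion, scan a range of induced error magnitudes, recompute the passing rate each time, and read off the smallest error the criterion still flags. The gamma_passing_rate function below is a hypothetical stand-in (it only checks a global percent-dose difference and ignores the DTA term), and the dose arrays are synthetic; it is not the authors' implementation.

# Sketch of an "error curve" for one gamma criterion vs. induced MU error.
import numpy as np

def gamma_passing_rate(measured, calculated, dose_pct=3.0, dta_mm=3.0, threshold_pct=10.0):
    # Placeholder for a real gamma computation; only the %-dose term is mimicked here
    # and the dta_mm argument is ignored by this stand-in.
    diff = np.abs(measured - calculated) / calculated.max() * 100.0
    mask = measured > threshold_pct / 100.0 * measured.max()
    return 100.0 * np.mean(diff[mask] <= dose_pct)

measured = np.random.default_rng(0).uniform(0.2, 2.0, size=10_000)  # stand-in dose map

error_curve = []
for mu_error_pct in np.arange(-15, 16, 1):          # induced MU errors, -15% .. +15%
    calculated = measured * (1.0 + mu_error_pct / 100.0)
    error_curve.append((mu_error_pct, gamma_passing_rate(measured, calculated)))

# Reading the curve against a tolerance (e.g., 90% pixels passing) shows the smallest
# induced error the chosen criterion can actually detect.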
Ecological footprint model using the support vector machine technique.
Ma, Haibo; Chang, Wenjuan; Cui, Guangbai
2012-01-01
The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of service to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
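A minimal sketch of this kind of SVM regression follows, using the five indicators named above as features and reporting the same two accuracy metrics. The data, preprocessing, and hyperparameters are placeholders, not the paper's dataset or tuned model.

# SVM regression sketch: five national indicators -> per-capita EF, with
# average absolute and average relative error on a 24-nation hold-out set.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.random((123, 5))                              # GDP, urbanization, Gini, export share, service share (toy)
y = 1.5 + 3.0 * X[:, 0] + rng.normal(0, 0.1, 123)     # toy EF values (gha per capita)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X[:99], y[:99])                              # train on 99 nations, hold out 24

pred = model.predict(X[99:])
avg_abs_err = np.mean(np.abs(pred - y[99:]))
avg_rel_err = np.mean(np.abs(pred - y[99:]) / y[99:]) * 100.0
print(f"average absolute error: {avg_abs_err:.4f}, average relative error: {avg_rel_err:.3f}%")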
The Sunyaev-Zeldovich Effect in Abell 370
NASA Technical Reports Server (NTRS)
Grego, Laura; Carlstrom, John E.; Joy, Marshall K.; Reese, Erik D.; Holder, Gilbert P.; Patel, Sandeep; Cooray, Asantha R.; Holzappel, William L.
2000-01-01
We present interferometric measurements of the Sunyaev-Zeldovich (SZ) effect toward the galaxy cluster Abell 370. These measurements, which directly probe the pressure of the cluster's gas, show the gas distribution to be strongly aspherical, as do the X-ray and gravitational lensing observations. We calculate the cluster's gas mass fraction in two ways. We first compare the gas mass derived from the SZ measurements to the lensing-derived gravitational mass near the critical lensing radius. We also calculate the gas mass fraction from the SZ data by deprojecting the three-dimensional gas density distribution and deriving the total mass under the assumption that the gas is in hydrostatic equilibrium (HSE). We test the assumptions in the HSE method by comparing the total cluster mass implied by the two methods and find that they agree within the errors of the measurement. We discuss the possible systematic errors in the gas mass fraction measurement and the constraints it places on the matter density parameter, Ω_M.
NASA Astrophysics Data System (ADS)
Colins, Karen; Li, Liqian; Liu, Yu
2017-05-01
Mass production of widely used semiconductor digital integrated circuits (ICs) has lowered unit costs to the level of ordinary daily consumables of a few dollars. It is therefore reasonable to contemplate the idea of an engineered system that consumes unshielded low-cost ICs for the purpose of measuring gamma radiation dose. Underlying the idea is the premise of a measurable correlation between an observable property of ICs and radiation dose. Accumulation of radiation-damage-induced state changes or error events is such a property. If correct, the premise could make possible low-cost wide-area radiation dose measurement systems, instantiated as wireless sensor networks (WSNs) with unshielded consumable ICs as nodes, communicating error events to a remote base station. The premise has been investigated quantitatively for the first time in laboratory experiments and related analyses performed at the Canadian Nuclear Laboratories. State changes or error events were recorded in real time during irradiation of samples of ICs of different types in a 60Co gamma cell. From the error-event sequences, empirical distribution functions of dose were generated. The distribution functions were inverted and probabilities scaled by total error events, to yield plots of the relationship between dose and error tallies. Positive correlation was observed, and discrete functional dependence of dose quantiles on error tallies was measured, demonstrating the correctness of the premise. The idea of an engineered system that consumes unshielded low-cost ICs in a WSN, for the purpose of measuring gamma radiation dose over wide areas, is therefore tenable.
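The mapping from error tally to dose described above can be sketched directly: sort the cumulative doses at which events were recorded, treat that as the empirical distribution, and invert it so a running error count returns a dose estimate. The event doses below are synthetic placeholders; only the structure of the calculation follows the abstract.

# Dose-from-error-tally sketch: inverted empirical distribution of event doses.
import numpy as np

# Cumulative dose (Gy) at which each state-change/error event was recorded (synthetic here).
event_doses_gy = np.sort(np.random.default_rng(2).uniform(0, 500, size=200))
n_total = len(event_doses_gy)

def dose_for_tally(tally: int) -> float:
    """Dose associated with `tally` observed events: the empirical distribution
    evaluated at probability tally / n_total and inverted."""
    if not 1 <= tally <= n_total:
        raise ValueError("tally outside the range of recorded events")
    return float(event_doses_gy[tally - 1])

# In a deployed WSN node, the running error count reported to the base station
# would be converted to a dose estimate with a mapping of this kind.
print(dose_for_tally(50))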
Airmass dependence of the Dobson total ozone measurements
NASA Technical Reports Server (NTRS)
Degorska, M.; Rajewska-Wiech, B.
1994-01-01
For many years the airmass dependence of total ozone measurements at Belsk has been observed to vary noticeably from one day to another. Series of AD wavelength pairs measurements taken out to high airmass were analyzed and compared with the two parameter stray light model presented by Basher. The analysis extended to the series of CD measurements indicates the role of atmospheric attenuation in appearing the airmass dependence. The minor noon decline of total ozone has been observed in the CD measurement series similarly as in those of the AD wavelength pairs. Such errors may seriously affect the accuracy of CD measurements at high latitude stations and the observations derived in winter at middle latitude stations.
Vaskinn, Anja; Andersson, Stein; Østefjells, Tiril; Andreassen, Ole A; Sundet, Kjetil
2018-06-05
Theory of mind (ToM) can be divided into cognitive and affective ToM, and a distinction can be made between overmentalizing and undermentalizing errors. Research has shown that ToM in schizophrenia is associated with non-social and social cognition, and with clinical symptoms. In this study, we investigate cognitive and clinical predictors of different ToM processes. Ninety-one individuals with schizophrenia participated. ToM was measured with the Movie for the Assessment of Social Cognition (MASC) yielding six scores (total ToM, cognitive ToM, affective ToM, overmentalizing errors, undermentalizing errors and no mentalizing errors). Neurocognition was indexed by a composite score based on the non-social cognitive tests in the MATRICS Consensus Cognitive Battery (MCCB). Emotion perception was measured with Emotion in Biological Motion (EmoBio), a point-light walker task. Clinical symptoms were assessed with the Positive and Negative Syndrome Scale (PANSS). Seventy-one healthy control (HC) participants completed the MASC. Individuals with schizophrenia showed large impairments compared to HC for all MASC scores, except overmentalizing errors. Hierarchical regression analyses with the six different MASC scores as dependent variables revealed that MCCB was a significant predictor of all MASC scores, explaining 8-18% of the variance. EmoBio increased the explained variance significantly, to 17-28%, except for overmentalizing errors. PANSS excited symptoms increased explained variance for total ToM, affective ToM and no mentalizing errors. Both social and non-social cognition were significant predictors of ToM. Overmentalizing was only predicted by non-social cognition. Excited symptoms contributed to overall and affective ToM, and to no mentalizing errors. Copyright © 2018 Elsevier Inc. All rights reserved.
Mueller, David S.
2017-01-01
This paper presents a method using Monte Carlo simulations for assessing uncertainty of moving-boat acoustic Doppler current profiler (ADCP) discharge measurements using a software tool known as QUant, which was developed for this purpose. Analysis was performed on 10 data sets from four Water Survey of Canada gauging stations in order to evaluate the relative contribution of a range of error sources to the total estimated uncertainty. The factors that differed among data sets included the fraction of unmeasured discharge relative to the total discharge, flow nonuniformity, and operator decisions about instrument programming and measurement cross section. As anticipated, it was found that the estimated uncertainty is dominated by uncertainty of the discharge in the unmeasured areas, highlighting the importance of appropriate selection of the site, the instrument, and the user inputs required to estimate the unmeasured discharge. The main contributor to uncertainty was invalid data, but spatial inhomogeneity in water velocity and bottom-track velocity also contributed, as did variation in the edge velocity, uncertainty in the edge distances, edge coefficients, and the top and bottom extrapolation methods. To a lesser extent, spatial inhomogeneity in the bottom depth also contributed to the total uncertainty, as did uncertainty in the ADCP draft at shallow sites. The estimated uncertainties from QUant can be used to assess the adequacy of standard operating procedures. They also provide quantitative feedback to the ADCP operators about the quality of their measurements, indicating which parameters are contributing most to uncertainty, and perhaps even highlighting ways in which uncertainty can be reduced. Additionally, QUant can be used to account for self-dependent error sources such as heading errors, which are a function of heading. The results demonstrate the importance of a Monte Carlo method tool such as QUant for quantifying random and bias errors when evaluating the uncertainty of moving-boat ADCP measurements.
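A generic Monte Carlo propagation of the kind QUant performs can be sketched as follows: perturb each error source within an assumed distribution, recompute the total discharge, and take the spread of the results as the combined uncertainty. The discharge components and error magnitudes are illustrative assumptions, not QUant's actual error model.

# Monte Carlo sketch of combining ADCP discharge error sources.
import numpy as np

rng = np.random.default_rng(3)
n_sim = 20_000

q_measured = 85.0     # m^3/s, measured (valid-data) portion of the cross section
q_edges = 5.0         # m^3/s, estimated edge discharge
q_top_bottom = 10.0   # m^3/s, top/bottom extrapolated discharge

total = (
    q_measured     * (1 + rng.normal(0, 0.02, n_sim))   # spatial inhomogeneity, invalid data
    + q_edges      * (1 + rng.normal(0, 0.20, n_sim))   # edge distance/coefficient uncertainty
    + q_top_bottom * (1 + rng.normal(0, 0.10, n_sim))   # extrapolation-method uncertainty
)

q_best = total.mean()
u_95 = 1.96 * total.std(ddof=1)
print(f"discharge ~ {q_best:.1f} m^3/s, 95% uncertainty ~ +/-{u_95:.1f} m^3/s "
      f"({100 * u_95 / q_best:.1f}%)")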
Orbit determination of highly elliptical Earth orbiters using improved Doppler data-processing modes
NASA Technical Reports Server (NTRS)
Estefan, J. A.
1995-01-01
A navigation error covariance analysis of four highly elliptical Earth orbits is described, with apogee heights ranging from 20,000 to 76,800 km and perigee heights ranging from 1,000 to 5,000 km. This analysis differs from earlier studies in that improved navigation data-processing modes were used to reduce the radio metric data. For this study, X-band (8.4-GHz) Doppler data were assumed to be acquired from two Deep Space Network radio antennas and reconstructed orbit errors propagated over a single day. Doppler measurements were formulated as total-count phase measurements and compared to the traditional formulation of differenced-count frequency measurements. In addition, an enhanced data-filtering strategy was used, which treated the principal ground system calibration errors affecting the data as filter parameters. Results suggest that a 40- to 60-percent accuracy improvement may be achievable over traditional data-processing modes in reconstructed orbit errors, with a substantial reduction in reconstructed velocity errors at perigee. Historically, this has been a regime in which stringent navigation requirements have been difficult to meet by conventional methods.
Hadronic Contribution to Muon g-2 with Systematic Error Correlations
NASA Astrophysics Data System (ADS)
Brown, D. H.; Worstell, W. A.
1996-05-01
We have performed a new evaluation of the hadronic contribution to a_μ=(g-2)/2 of the muon with explicit correlations of systematic errors among the experimental data on σ( e^+e^- → hadrons ). Our result for the lowest order hadronic vacuum polarization contribution is a_μ^hvp = 701.7(7.6)(13.4) × 10^-10, where the total systematic error contributions from below and above √s = 1.4 GeV are (12.5) × 10^-10 and (4.8) × 10^-10 respectively. Therefore new measurements on σ( e^+e^- → hadrons ) below 1.4 GeV in Novosibirsk, Russia can significantly reduce the total error on a_μ^hvp. This contrasts with a previous evaluation which indicated that the dominant error is due to the energy region above 1.4 GeV. The latter analysis correlated systematic errors at each energy point separately but not across energy ranges as we have done. Combination with higher order hadronic contributions is required for a new measurement of a_μ at Brookhaven National Laboratory to be sensitive to electroweak and possibly supergravity and muon substructure effects. Our analysis may also be applied to calculations of hadronic contributions to the running of α(s) at √s = M_Z, the hyperfine structure of muonium, and the running of sin^2 θW in Møller scattering. The analysis of the new Novosibirsk data will also be given.
A combined analysis of the hadronic and leptonic decays of the Z0
NASA Astrophysics Data System (ADS)
Akrawy, M. Z.; Alexander, G.; Allison, J.; Allport, P. P.; Anderson, K. J.; Armitage, J. C.; Arnison, G. T. J.; Ashton, P.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barker, G. J.; Barlow, R. J.; Batley, J. R.; Becker, J.; Behnke, T.; Bell, K. W.; Bella, G.; Bethke, S.; Biebel, O.; Binder, U.; Bloodworth, I. J.; Bock, P.; Breuker, H.; Brown, R. M.; Brun, R.; Buijs, A.; Burckhart, H. J.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chrin, J. T. M.; Cohen, I.; Collins, W. J.; Conboy, J. E.; Couch, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; Deninno, M. M.; Dieckmann, A.; Dittmar, M.; Dixit, M. S.; Duchovni, E.; Duerdoth, I. P.; Dumas, D.; El Mamouni, H.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Fabbri, F.; Farthouat, P.; Fischer, H. M.; Fong, D. G.; French, M. T.; Fukunaga, C.; Gandois, B.; Ganel, O.; Gary, J. W.; Gascon, J.; Geddes, N. I.; Gee, C. N. P.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Gibson, V.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Goodrick, M. J.; Gorn, W.; Granite, D.; Gross, E.; Grosse-Wiesmann, P.; Grunhaus, J.; Hagedorn, H.; Hagemann, J.; Hansroul, M.; Hargrove, C. K.; Hart, J.; Hattersley, P. M.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Hemingway, R. J.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Ho, C.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Holl, B.; Homer, R. J.; Hou, S. R.; Howarth, C. P.; Hughes-Jones, R. E.; Igo-Kemenes, P.; Ihssen, H.; Imrie, D. C.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jobes, M.; Jones, R. W. L.; Jovanovic, P.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Kellogg, R. G.; Kennedy, B. W.; Kleinwort, C.; Klem, D. E.; Knop, G.; Kobayashi, T.; Kokott, T. P.; Köpke, L.; Kowalewski, R.; Kreutzmann, H.; Von Krogh, J.; Kroll, J.; Kuwano, M.; Kyberd, P.; Lafferty, G. D.; Lamarche, F.; Larson, W. J.; Lasota, M. M. B.; Layter, J. G.; Le Du, P.; Leblanc, P.; Lee, A. M.; Lellouch, D.; Lennert, P.; Lessard, L.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Ludwig, J.; Lupu, N.; Ma, J.; Macbeth, A. A.; Mannelli, M.; Marcellini, S.; Maringer, G.; Martin, A. J.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McMahon, T. J.; McPherson, A. C.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michelini, A.; Middleton, R. P.; Mikenberg, G.; Miller, D. J.; Milstene, C.; Minowa, M.; Mohr, W.; Montanari, A.; Mori, T.; Moss, M. W.; Muller, A.; Murphy, P. G.; Murray, W. J.; Nellen, B.; Nguyen, H. H.; Nozaki, M.; O'Dowd, A. J. P.; O'Neale, S. W.; O'Neill, B. P.; Oakham, F. G.; Odorici, F.; Ogg, M.; Oh, H.; Oreglia, M. J.; Orito, S.; Patrick, G. N.; Pawley, S. J.; Pfister, P.; Pilcher, J. E.; Pinfold, J. L.; Plane, D. E.; Poli, B.; Pouladdej, A.; Pritchard, T. W.; Quast, G.; Raab, J.; Redmond, M. W.; Rees, D. L.; Regimbald, M.; Riles, K.; Roach, C. M.; Robins, S. A.; Rollnik, A.; Roney, J. M.; Rossberg, S.; Rossi, A. M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Sanghera, S.; Sansum, R. A.; Sasaki, M.; Saunders, B. J.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; Von der Schmitt, H.; Schreiber, S.; Schwarz, J.; Shapira, A.; Shen, B. C.; Sherwood, P.; Simon, A.; Siroli, G. P.; Skuja, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Spreadbury, E. J.; Springer, R. W.; Sproston, M.; Stephens, K.; Stier, H. E.; Ströhmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Tsukamoto, T.; Turner, M. F.; Tysarczyk-Niemeyer, G.; Van den Plas, D.; Vandalen, G. J.; Virtue, C. 
J.; Wagner, A.; Wahl, C.; Ward, C. P.; Ward, D. R.; Waterhouse, J.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weisz, S.; Wermes, N.; Weymann, M.; Wilson, G. W.; Wilson, J. A.; Wingerter, I.; Winterer, V.-H.; Wood, N. C.; Wotton, S.; Wuensch, B.; Wyatt, T. R.; Yaari, R.; Yang, Y.; Yekutieli, G.; Yoshida, T.; Zeuner, W.; Zorn, G. T.; Zylberajch, S.; OPAL Collaboration
1990-04-01
We report on a measurement of the mass of the Z0 boson, its total width, and its partial decay widths into hadrons and leptons. On the basis of 25 801 hadronic decays and 1999 decays into electrons, muons or taus, selected over eleven energy points between 88.28 GeV and 95.04 GeV, we obtain from a combined fit to hadrons and leptons a mass of Mz=91.154±0.021 (exp)±0.030 (LEP) GeV, and a total width of Γz=2.536±0.045 GeV. The errors on Mz have been separated into the experimental error and the uncertainty due to the LEP beam energy. The measured leptonic partial widths are Γee=81.2±2.6 MeV, Γμμ=82.6±5.8 MeV, and Γττ=85.7±7.1 MeV, consistent with lepton universality. From a fit assuming lepton universality we obtain Γℓ+ℓ- = 81.9±2.0 MeV. The hadronic partial width is Γhad=1838±46 MeV. From the measured total and partial widths a model independent value for the invisible width is calculated to be Γinv=453±44 MeV. The errors quoted include both the statistical and the systematic uncertainties.
Bruza, Petr; Gollub, Sarah L; Andreozzi, Jacqueline M; Tendler, Irwin I; Williams, Benjamin B; Jarvis, Lesley A; Gladstone, David J; Pogue, Brian W
2018-05-02
The purpose of this study was to measure surface dose by remote time-gated imaging of plastic scintillators. A novel technique for time-gated, intensified camera imaging of scintillator emission was demonstrated, and key parameters influencing the signal were analyzed, including distance, angle and thickness. A set of scintillator samples was calibrated by using thermo-luminescence detector response as reference. Examples of use in total skin electron therapy are described. The data showed excellent room light rejection (signal-to-noise ratio of scintillation SNR ≈ 470), ideal scintillation dose response linearity, and 2% dose rate error. Individual sample scintillation response varied by 7% due to sample preparation. Inverse square distance dependence correction and lens throughput error (8% per meter) correction were needed. At scintillator-to-source angle and observation angle <50°, the radiant energy fluence error was smaller than 1%. The achieved standard error of the scintillator cumulative dose measurement compared to the TLD dose was 5%. The results from this proof-of-concept study documented the first use of small scintillator targets for remote surface dosimetry in ambient room lighting. The measured dose accuracy renders our method to be comparable to thermo-luminescent detector dosimetry, with the ultimate realization of accuracy likely to be better than shown here. Once optimized, this approach to remote dosimetry may substantially reduce the time and effort required for surface dosimetry.
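The two signal corrections named above, inverse-square scaling with camera-to-scintillator distance and a lens-throughput loss of roughly 8% per metre, can be sketched as a single correction function. The reference distance, the sign convention of the throughput term, and the function itself are illustrative assumptions, not the authors' calibration code; only the 8%-per-metre figure is taken from the abstract.

# Sketch: correct a raw scintillation signal to a reference imaging distance.
LENS_LOSS_PER_M = 0.08  # ~8% throughput loss per metre, per the abstract

def corrected_signal(raw_counts: float, distance_m: float, ref_distance_m: float = 1.0) -> float:
    """Scale a raw scintillation signal to the reference distance (assumed convention)."""
    inverse_square = (distance_m / ref_distance_m) ** 2
    lens_throughput = 1.0 - LENS_LOSS_PER_M * (distance_m - ref_distance_m)
    return raw_counts * inverse_square / lens_throughput

# Example: the same emitted light viewed from 3 m instead of the 1 m reference.
print(corrected_signal(raw_counts=1.0e4, distance_m=3.0))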
Galloway, Joel M.; Ortiz, Roderick F.; Bales, Jerad D.; Mau, David P.
2008-01-01
Pueblo Reservoir is west of Pueblo, Colorado, and is an important water resource for southeastern Colorado. The reservoir provides irrigation, municipal, and industrial water to various entities throughout the region. In anticipation of increased population growth, the cities of Colorado Springs, Fountain, Security, and Pueblo West have proposed building a pipeline that would be capable of conveying 78 million gallons of raw water per day (240 acre-feet) from Pueblo Reservoir. The U.S. Geological Survey, in cooperation with Colorado Springs Utilities and the Bureau of Reclamation, developed, calibrated, and verified a hydrodynamic and water-quality model of Pueblo Reservoir to describe the hydrologic, chemical, and biological processes in Pueblo Reservoir that can be used to assess environmental effects in the reservoir. Hydrodynamics and water-quality characteristics in Pueblo Reservoir were simulated using a laterally averaged, two-dimensional model that was calibrated using data collected from October 1985 through September 1987. The Pueblo Reservoir model was calibrated based on vertical profiles of water temperature and dissolved-oxygen concentration, and water-quality constituent concentrations collected in the epilimnion and hypolimnion at four sites in the reservoir. The calibrated model was verified with data from October 1999 through September 2002, which included a relatively wet year (water year 2000), an average year (water year 2001), and a dry year (water year 2002). Simulated water temperatures compared well to measured water temperatures in Pueblo Reservoir from October 1985 through September 1987. Spatially, simulated water temperatures compared better to measured water temperatures in the downstream part of the reservoir than in the upstream part of the reservoir. Differences between simulated and measured water temperatures also varied through time. Simulated water temperatures were slightly less than measured water temperatures from March to May 1986 and 1987, and slightly greater than measured data in August and September 1987. Relative to the calibration period, simulated water temperatures during the verification period did not compare as well to measured water temperatures. In general, simulated dissolved-oxygen concentrations for the calibration period compared well to measured concentrations in Pueblo Reservoir. Spatially, simulated concentrations deviated more from the measured values at the downstream part of the reservoir than at other locations in the reservoir. Overall, the absolute mean error ranged from 1.05 (site 1B) to 1.42 milligrams per liter (site 7B), and the root mean square error ranged from 1.12 (site 1B) to 1.67 milligrams per liter (site 7B). Simulated dissolved oxygen in the verification period compared better to the measured concentrations than in the calibration period. The absolute mean error ranged from 0.91 (site 5C) to 1.28 milligrams per liter (site 7B), and the root mean square error ranged from 1.03 (site 5C) to 1.46 milligrams per liter (site 7B). Simulated total dissolved solids generally were less than measured total dissolved-solids concentrations in Pueblo Reservoir from October 1985 through September 1987. The largest differences between simulated and measured total dissolved solids were observed at the most downstream sites in Pueblo Reservoir during the second year of the calibration period. 
Total dissolved-solids data were not available from reservoir sites during the verification period, so in-reservoir specific-conductance data were compared to simulated total dissolved solids. Simulated total dissolved solids followed the same patterns through time as the measured specific conductance data during the verification period. Simulated total nitrogen concentrations compared relatively well to measured concentrations in the Pueblo Reservoir model. The absolute mean error ranged from 0.21 (site 1B) to 0.27 milligram per liter as nitrogen (sites 3B and 7
Tsang, William W N; Lam, Nazca K Y; Lau, Kit N L; Leung, Harry C H; Tsang, Crystal M S; Lu, Xi
2013-12-01
To investigate the effects of aging on postural control and cognitive performance in single- and dual-tasking. A cross-sectional comparative design was conducted in a university motion analysis laboratory. Young adults (n = 30; age 21.9 ± 2.4 years) and older adults (n = 30; age 71.9 ± 6.4 years) were recruited. Postural control after stepping down was measured with and without performing a concurrent auditory response task. Measurements included: (1) reaction time and (2) error rate in performing the cognitive task; (3) total sway path and (4) total sway area after stepping down. Our findings showed that the older adults had significantly longer reaction times and higher error rates than the younger subjects in both the single-tasking and dual-tasking conditions. The older adults had significantly longer reaction times and higher error rates when dual-tasking compared with single-tasking, but the younger adults did not. The older adults demonstrated significantly less total sway path, but larger total sway area, in single-leg stance after stepping down than the young adults. The older adults showed no significant change in total sway path and area between the dual-tasking and single-tasking conditions, while the younger adults showed significant decreases in sway. Older adults prioritize postural control by sacrificing cognitive performance when faced with dual-tasking.
Moving beyond the total sea ice extent in gauging model biases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, Detelina P.; Gleckler, Peter J.; Taylor, Karl E.
Here, reproducing characteristics of observed sea ice extent remains an important climate modeling challenge. This study describes several approaches to improve how model biases in total sea ice distribution are quantified, and applies them to historically forced simulations contributed to phase 5 of the Coupled Model Intercomparison Project (CMIP5). The quantity of hemispheric total sea ice area, or some measure of its equatorward extent, is often used to evaluate model performance. A new approach is introduced that investigates additional details about the structure of model errors, with an aim to reduce the potential impact of compensating errors when gauging differences between simulated and observed sea ice. Using multiple observational datasets, several new methods are applied to evaluate the climatological spatial distribution and the annual cycle of sea ice cover in 41 CMIP5 models. It is shown that in some models, error compensation can be substantial, for example resulting from too much sea ice in one region and too little in another. Error compensation tends to be larger in models that agree more closely with the observed total sea ice area, which may result from model tuning. The results herein suggest that consideration of only the total hemispheric sea ice area or extent can be misleading when quantitatively comparing how well models agree with observations. Further work is needed to fully develop robust methods to holistically evaluate the ability of models to capture the finescale structure of sea ice characteristics; however, the “sector scale” metric used here aids in reducing the impact of compensating errors in hemispheric integrals.
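The compensating-error problem can be made concrete with a small sketch: the signed hemispheric-total bias can be near zero even when sector-wise absolute errors are large. The sector partition and ice-area values below are placeholders, not the paper's "sector scale" metric in detail.

# Why a hemispheric total can hide compensating regional errors.
import numpy as np

rng = np.random.default_rng(4)
n_sectors = 8
obs_area = rng.uniform(0.5, 2.0, n_sectors)            # 10^6 km^2 of ice per sector (toy)
model_area = obs_area + rng.normal(0, 0.3, n_sectors)  # model with regional biases

total_bias = model_area.sum() - obs_area.sum()          # can be deceptively small
sector_error = np.abs(model_area - obs_area).sum()      # penalises compensation

print(f"hemispheric total bias: {total_bias:+.2f}  vs  sum of sector |errors|: {sector_error:.2f}")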
Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ulbrich, N.; L'Esperance, A.
2017-01-01
A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
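The way the dynamic-pressure and load contributions combine can be illustrated with a minimal first-order propagation, assuming C_D ≈ F_A / (q S) at small angle of attack. The paper's method is more detailed (it works from balance output variations and includes normal-force and angle terms); the numbers and the root-sum-square form below are assumptions for illustration only.

# First-order sketch of the drag-coefficient precision error.
import math

def drag_coeff_precision(f_axial_n, q_pa, s_m2, df_axial_n, dq_pa):
    cd = f_axial_n / (q_pa * s_m2)
    rel = math.hypot(df_axial_n / f_axial_n, dq_pa / q_pa)  # root-sum-square of relative errors
    return cd, cd * rel

cd, dcd = drag_coeff_precision(f_axial_n=50.0, q_pa=20_000.0, s_m2=0.05,
                               df_axial_n=0.05, dq_pa=20.0)
print(f"C_D ~ {cd:.4f} +/- {dcd:.5f}")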
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalapurakal, John A., E-mail: j-kalapurakal@northwestern.edu; Zafirovski, Aleksandar; Smith, Jeffery
Purpose: This report describes the value of a voluntary error reporting system and the impact of a series of quality assurance (QA) measures including checklists and timeouts on reported error rates in patients receiving radiation therapy. Methods and Materials: A voluntary error reporting system was instituted with the goal of recording errors, analyzing their clinical impact, and guiding the implementation of targeted QA measures. In response to errors committed in relation to treatment of the wrong patient, wrong treatment site, and wrong dose, a novel initiative involving the use of checklists and timeouts for all staff was implemented. The impact of these and other QA initiatives was analyzed. Results: From 2001 to 2011, a total of 256 errors in 139 patients after 284,810 external radiation treatments (0.09% per treatment) were recorded in our voluntary error database. The incidence of errors related to patient/tumor site, treatment planning/data transfer, and patient setup/treatment delivery was 9%, 40.2%, and 50.8%, respectively. The compliance rate for the checklists and timeouts initiative was 97% (P<.001). These and other QA measures resulted in a significant reduction in many categories of errors. The introduction of checklists and timeouts has been successful in eliminating errors related to wrong patient, wrong site, and wrong dose. Conclusions: A comprehensive QA program that regularly monitors staff compliance together with a robust voluntary error reporting system can reduce or eliminate errors that could result in serious patient injury. We recommend the adoption of these relatively simple QA initiatives including the use of checklists and timeouts for all staff to improve the safety of patients undergoing radiation therapy in the modern era.
Yang, Rui; Tong, Juxiu; Hu, Bill X; Li, Jiayun; Wei, Wenshuo
2017-06-01
Agricultural non-point source pollution is a major factor in surface water and groundwater pollution, especially for nitrogen (N) pollution. In this paper, an experiment was conducted in a direct-seeded paddy field under traditional continuously flooded irrigation (CFI). The water movement and N transport and transformation were simulated via the Hydrus-1D model, and the model was calibrated using field measurements. The model had a total water balance error of 0.236 cm and a relative error (error/input total water) of 0.23%. For the solute transport model, the N balance error and relative error (error/input total N) were 0.36 kg ha^-1 and 0.40%, respectively. The study results indicate that the plow pan plays a crucial role in vertical water movement in paddy fields. Water flow was mainly lost through surface runoff and underground drainage, with proportions to total input water of 32.33 and 42.58%, respectively. The water productivity in the study was 0.36 kg m^-3. The simulated N concentration results revealed that ammonia was the main form in rice uptake (95% of total N uptake), and its concentration was much larger than for nitrate under CFI. Denitrification and volatilization were the main losses, with proportions to total consumption of 23.18 and 14.49%, respectively. Leaching (10.28%) and surface runoff loss (2.05%) were the main losses of N pushed out of the system by water. Hydrus-1D simulation was an effective method to predict water flow and N concentrations in the three different forms. The study provides results that could be used to guide water and fertilization management and field results for numerical studies of water flow and N transport and transformation in the future.
Comparison of different tree sap flow up-scaling procedures using Monte-Carlo simulations
NASA Astrophysics Data System (ADS)
Tatarinov, Fyodor; Preisler, Yakir; Roahtyn, Shani; Yakir, Dan
2015-04-01
An important task in determining forest ecosystem water balance is the estimation of stand transpiration, allowing evapotranspiration to be separated into transpiration and soil evaporation. This can be based on up-scaling measurements of sap flow in representative trees (SF), which can be done by different mathematical algorithms. The aim of the present study was to evaluate the error associated with different up-scaling algorithms under different conditions. Other types of errors (such as measurement error, within-tree SF variability, choice of sample plot, etc.) were not considered here. A set of simulation experiments using the Monte-Carlo technique was carried out and three up-scaling procedures were tested. (1) Multiplying mean stand sap flux density based on unit sapwood cross-section area (SFD) by total sapwood area (Klein et al., 2014); (2) deriving a linear dependence of tree sap flow on tree DBH and calculating SFstand using predicted SF by DBH classes and the stand DBH distribution (Cermak et al., 2004); (3) the same as method 2 but using a non-linear dependence. Simulations were performed under different SFD(DBH) slopes (bs; positive, negative, zero), different DBH and SFD standard deviations (Δd and Δs, respectively) and DBH class sizes. It was assumed that all trees in a unit area are measured, and the total SF of all trees in the experimental plot was taken as the reference SFstand value. Under negative bs all models tend to overestimate SFstand and the error increases exponentially with decreasing bs. Under bs >0 all models tend to underestimate SFstand, but the error is much smaller than for bs
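The first two up-scaling procedures can be sketched on synthetic sample trees: (1) mean sap flux density per unit sapwood area multiplied by total stand sapwood area, and (2) a linear fit of tree sap flow on DBH applied over the stand's DBH distribution (here summed per tree rather than by DBH classes). The sapwood-area relation and all numbers are illustrative placeholders.

# Sketch of sap-flow up-scaling methods 1 and 2 on synthetic data.
import numpy as np

rng = np.random.default_rng(5)
dbh_sample = rng.uniform(15, 45, 20)                          # cm, measured sample trees
sapwood_sample = 0.4 * (np.pi * (dbh_sample / 2) ** 2)        # cm^2, toy sapwood-area model
sf_sample = 0.05 * sapwood_sample * rng.normal(1, 0.15, 20)   # kg/h per tree

dbh_stand = rng.uniform(10, 50, 500)                          # cm, all trees on the plot
sapwood_stand_total = (0.4 * np.pi * (dbh_stand / 2) ** 2).sum()

# Method 1: mean sap flux density (per sapwood area) x total stand sapwood area
sfd_mean = (sf_sample / sapwood_sample).mean()
sf_stand_m1 = sfd_mean * sapwood_stand_total

# Method 2: linear regression of tree sap flow on DBH, summed over the stand
slope, intercept = np.polyfit(dbh_sample, sf_sample, 1)
sf_stand_m2 = (slope * dbh_stand + intercept).sum()

print(f"method 1: {sf_stand_m1:.0f} kg/h   method 2: {sf_stand_m2:.0f} kg/h")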
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method used in industry that targets near-zero error (3.4 errors per million events). The five main principles of Six Sigma are defining, measuring, analysing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the preanalytic, analytic and postanalytic phases was analysed. Improvement strategies were proposed in the monthly intradepartmental meetings, and units with high error rates were placed under closer control. Fifty-six (52.4%) of the 107 recorded errors were in the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction of the error rates mainly in the pre-analytic and analytic phases.
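The Six Sigma bookkeeping behind these figures is simply an errors-per-million rate tracked between periods. The sketch below shows the calculation; the event volumes are hypothetical placeholders chosen only to give rates of the same order as those reported, not the study's actual denominators.

# Errors per million events and relative reduction between two periods.
def errors_per_million(n_errors: int, n_events: int) -> float:
    return 1e6 * n_errors / n_events

first_half = errors_per_million(n_errors=80, n_events=11_800_000)   # placeholder volumes
second_half = errors_per_million(n_errors=27, n_events=20_800_000)  # placeholder volumes

reduction_pct = 100.0 * (first_half - second_half) / first_half
print(f"{first_half:.1f} -> {second_half:.1f} errors per million ({reduction_pct:.1f}% reduction)")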
NASA Technical Reports Server (NTRS)
Joyce, T. M.; Dunworth, J. A.; Schubert, D. M.; Stalcup, M. C.; Barbour, R. L.
1988-01-01
The degree to which Acoustic-Doppler Current Profiler (ADCP) and expendable bathythermograph (XBT) data can provide quantitative measurements of the velocity structure and transport of the Gulf Stream is addressed. An algorithm is used to generate salinity from temperature and depth using an historical Temperature/Salinity relation for the NW Atlantic. Results have been simulated using CTD data and comparing real and pseudo salinity files. Errors are typically less than 2 dynamic cm for the upper 800 m out of a total signal of 80 cm (across the Gulf Stream). When combined with ADCP data for a near-surface reference velocity, transport errors in isopycnal layers are less than about 1 Sv (10^6 m^3/s), as is the difference in total transport for the upper 800 m between real and pseudo data. The method is capable of measuring the real variability of the Gulf Stream, and when combined with altimeter data, can provide estimates of the geoid slope with oceanic errors of a few parts in 10^8 over horizontal scales of 500 km.
Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sharpe, Jacob A.
2014-01-01
A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne
2014-01-01
Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released into the atmosphere during the accident at the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such a critical context, where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.
The effect of bandwidth on filter instrument total ozone accuracy
NASA Technical Reports Server (NTRS)
Basher, R. E.
1977-01-01
The effect of the width and shape of the New Zealand filter instrument's passbands on measured total-ozone accuracy is determined using a numerical model of the spectral measurement process. The model enables the calculation of corrections for the 'bandwidth-effect' error and shows that highly attenuating passband skirts and well-suppressed leakage bands are at least as important as narrow half-bandwidths. Over typical ranges of airmass and total ozone, the range in the bandwidth-effect correction is about 2% in total ozone for the filter instrument, compared with about 1% for the Dobson instrument.
What is the acceptable hemolysis index for the measurements of plasma potassium, LDH and AST?
Rousseau, Nathalie; Pige, Raphaëlle; Cohen, Richard; Pecquet, Matthieu
2016-06-01
Hemolysis is a cause of variability in test results for plasma potassium, LDH and AST and is a non-negligible part of measurement uncertainty. However, allowable levels of hemolysis provided by reagent suppliers take neither analytical variability (trueness and precision) nor the measurand into account. Using a calibration range of hemolysis, we measured the plasma concentrations of potassium, LDH and AST, and hemolysis indices with a Cobas C501 analyzer (Roche Diagnostics(®), Meylan, France). Based on the allowable total error (according to Ricós et al.) and the expanded measurement uncertainty equation we calculated the maximum allowable bias for two concentrations of each measurand. Finally, we determined the allowable hemolysis indices for all three measurands. We observed a linear relationship between the observed increases of concentration and hemolysis indices. The LDH measurement was the most sensitive to hemolysis, followed by AST and potassium measurements. The determination of the allowable hemolysis index depends on the targeted measurand, its concentration and the chosen level of requirement of allowable total error.
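The decision logic implied above can be sketched as an error budget: the bias induced by hemolysis (roughly linear in the hemolysis index, per the observed calibration) must not push the total error beyond the allowable total error once the method's own imprecision and bias are accounted for. The budget equation, the coverage factor, and all numbers below are assumptions for illustration, not the authors' exact derivation.

# Hedged sketch: largest hemolysis index whose induced bias still fits the error budget.
def max_allowable_h_index(tea_pct, cv_pct, bias_pct, slope_pct_per_h, k=2.0):
    """tea_pct          : allowable total error (%), e.g. from Ricós et al.
    cv_pct, bias_pct : analytical imprecision and bias of the method (%)
    slope_pct_per_h  : observed % increase of the measurand per unit hemolysis index
    k                : coverage factor of the expanded uncertainty (assumed)"""
    budget_left = tea_pct - (abs(bias_pct) + k * cv_pct)
    return max(budget_left, 0.0) / slope_pct_per_h

# Example for a potassium-like measurand (all values illustrative):
print(max_allowable_h_index(tea_pct=5.6, cv_pct=1.5, bias_pct=0.5, slope_pct_per_h=0.02))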
Impact of Tropospheric Aerosol Absorption on Ozone Retrieval from buv Measurements
NASA Technical Reports Server (NTRS)
Torres, O.; Bhartia, P. K.
1998-01-01
The impact of tropospheric aerosols on the retrieval of column ozone amounts using spaceborne measurements of backscattered ultraviolet radiation is examined. Using radiative transfer calculations, we show that uv-absorbing desert dust may introduce errors as large as 10% in ozone column amount, depending on the aerosol layer height and optical depth. Smaller errors are produced by carbonaceous aerosols that result from biomass burning. Though the error is produced by complex interactions between ozone absorption (both stratospheric and tropospheric), aerosol scattering, and aerosol absorption, a surprisingly simple correction procedure reduces the error to about 1%, for a variety of aerosols and for a wide range of aerosol loading. Comparison of the corrected TOMS data with operational data indicates that though the zonal mean total ozone derived from TOMS is not significantly affected by these errors, localized effects in the tropics can be large enough to seriously affect the studies of tropospheric ozone currently being carried out using the TOMS data.
New Methods for Assessing and Reducing Uncertainty in Microgravity Studies
NASA Astrophysics Data System (ADS)
Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.
2017-12-01
Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, errors in drift estimation and timing errors. We find that some error sources that are commonly ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and free-air correction depending on the survey setup. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, which include Kilauea in Hawaii and Askja in Iceland.
Export of nutrients and major ionic solutes from a rain forest catchment in the Central Amazon Basin
NASA Astrophysics Data System (ADS)
Lesack, Lance F. W.
1993-03-01
The relative roles of base flow runoff versus storm flow runoff versus subsurface outflow in controlling total export of solutes from a 23.4-ha catchment of undisturbed rain forest in the central Amazon Basin were evaluated from water and solute flux measurements performed over a 1 year period. Solutes exported via 173 storms during the study were estimated from stream water samples collected during base flow conditions and during eight storms, and by utilizing a hydrograph separation technique in combination with a mixing model to partition storm flow from base flow fluxes. Solutes exported by subsurface outflow were estimated from groundwater samples from three nests of piezometers installed into the streambed, and concurrent measurements of hydraulic conductivity and hydraulic head gradients. Base flow discharge represented 92% of water outflow from the basin and was the dominant pathway of solute export. Although storm flow discharge represented only 5% of total water outflow, storm flow solute fluxes represented up to 25% of the total annual export flux, though for many solutes the portion was less. Subsurface outflow represented only 2.5% of total water outflow, and subsurface solute fluxes never represented more than 5% of the total annual export flux. Measurement errors were relatively high for storm flow and subsurface outflow fluxes, but cumulative measurement errors associated with the total solute fluxes exported from the catchment, in most cases, ranged from only ±7% to ±14% because base flow fluxes were measured relatively well. The export fluxes of most solutes are substantially less than previously reported for comparable small catchments in the Amazon Basin, and these differences cannot be explained by the fact that storm flow and subsurface outflows were not appropriately measured in previous studies.
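The hydrograph-separation step mentioned above uses a standard two-component mixing model; the sketch below shows its general form. The tracer choice and all numbers here are illustrative assumptions, not data from the catchment study.

```python
# Two-component mixing-model separation (illustrative values, not study data).
# Total discharge with tracer concentration c_total is treated as a mixture of
# base flow (c_base) and storm/event water (c_storm); solving the tracer mass
# balance gives the storm-flow fraction, which can then be applied to solute
# concentrations to partition export fluxes.

def storm_flow_fraction(c_total, c_base, c_storm):
    """Fraction of discharge attributable to storm (event) water."""
    return (c_base - c_total) / (c_base - c_storm)

q_total = 12.0                                 # L/s, hypothetical storm discharge
c_total, c_base, c_storm = 18.0, 25.0, 5.0     # hypothetical tracer values (e.g. uS/cm)

f_storm = storm_flow_fraction(c_total, c_base, c_storm)
q_storm = f_storm * q_total
q_base = q_total - q_storm
print(f"storm flow: {q_storm:.1f} L/s, base flow: {q_base:.1f} L/s")
```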
Direct measurement of the poliovirus RNA polymerase error frequency in vitro
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B.
1988-02-01
The fidelity of RNA replication by the poliovirus RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of a noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high and depended on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.
Assessing the Performance of Human-Automation Collaborative Planning Systems
2011-06-01
processing and incorporating vast amounts of incoming information into their solutions. However, these algorithms are brittle and unable to account for...planning system, a descriptive Mission Performance measure may address the total travel time on the path or the cost of the path (e.g. total work...minimizing costs or collisions [4, 32, 33]. Error measures for such a path planning system may track how many collisions occur or how much threat
Hsiao, Hongwei; Weaver, Darlene; Hsiao, James; Whitestone, Jennifer; Kau, Tsui-Ying; Whisler, Richard; Ferri, Robert
2016-01-01
This study evaluated the accuracy of self-reported body weight and height compared to measured values among firefighters and identified factors associated with reporting error. A total of 863 male and 88 female firefighters in four US regions participated in the study. The results showed that both men and women underestimated their body weight (−0.4 ± 4.1, −1.1 ± 3.6 kg) and overestimated their height (29 ± 18, 17 ± 16 mm). Women underestimated more than men on weight (p = 0.022) and men overestimated more than women on height (p < 0.001). Reporting errors on weight increased with overweight status (p < 0.001) and were disproportionate among subgroups. About 27% of men and 24% of women had reporting errors on weight greater than ±2.2 kg, and 59% of men and 28% of women had reporting errors on height greater than 25 mm. PMID:25198061
A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding
NASA Astrophysics Data System (ADS)
Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui
2016-02-01
In order to control and monitor the quality of middle-frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are the only viable current transducers at present. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and that its effect is robust to different voltage levels of the output signals. The total integration error is limited to within ±0.09 mV s⁻¹, or 0.005% s⁻¹ of full scale, at a 95% confidence level, which makes it possible to achieve precise measurement of the MFDC RSW welding current with Rogowski coils of the 0.1% accuracy class.
Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
2011-01-01
Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. 
Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
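The quadrature combination described above can be sketched numerically. Assuming, per the abstract, that doubling the transits at a vertical cuts the time-averaging error by roughly 30 percent (about a 1/√2 scaling) and that the cross-stream error scales only with the number of verticals, a toy error budget looks like the following; the base error magnitudes are invented placeholders, not the study's calibrated values.

```python
import math

# Illustrative quadrature combination of the two error sources.
# sigma_t0 / sigma_s0 are hypothetical single-vertical, single-transit errors (percent);
# the 1/sqrt(n) scalings reflect the ~30% reduction per doubling noted above.

def total_uncertainty(n_verticals, n_transits, sigma_t0=20.0, sigma_s0=15.0):
    sigma_time = sigma_t0 / math.sqrt(n_verticals * n_transits)   # time-averaging error
    sigma_space = sigma_s0 / math.sqrt(n_verticals)               # cross-stream sampling error
    return math.sqrt(sigma_time**2 + sigma_space**2)

# Adding a vertical buys both temporal and spatial information; adding transits
# only reduces the temporal term:
print(total_uncertainty(n_verticals=4, n_transits=2))   # baseline
print(total_uncertainty(n_verticals=5, n_transits=2))   # one more vertical
print(total_uncertainty(n_verticals=4, n_transits=3))   # one more transit per vertical
```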
Phase measurement error in summation of electron holography series.
McLeod, Robert A; Bergen, Michael; Malac, Marek
2014-06-01
Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random-walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs with the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Cognitive flexibility correlates with gambling severity in young adults.
Leppink, Eric W; Redden, Sarah A; Chamberlain, Samuel R; Grant, Jon E
2016-10-01
Although gambling disorder (GD) is often characterized as a problem of impulsivity, compulsivity has recently been proposed as a potentially important feature of addictive disorders. The present analysis assessed the neurocognitive and clinical relationship between compulsivity and gambling behavior. A sample of 552 non-treatment-seeking gamblers aged 18-29 was recruited from the community for a study on gambling in young adults. Gambling severity levels included both casual and disordered gamblers. All participants completed the Intra/Extra-Dimensional Set Shift (IED) task, from which the total adjusted errors were correlated with gambling severity measures, and linear regression modeling was used to assess three error measures from the task. The present analysis found significant positive correlations between problems with cognitive flexibility and gambling severity (reflected by the number of DSM-5 criteria, gambling frequency, amount of money lost in the past year, and gambling urge/behavior severity). IED errors also showed a positive correlation with self-reported compulsive behavior scores. A significant correlation was also found between IED errors and non-planning impulsivity from the BIS. Linear regression models based on total IED errors, extra-dimensional (ED) shift errors, or pre-ED shift errors indicated that these factors accounted for a significant portion of the variance noted in several variables. These findings suggest that cognitive flexibility may be an important consideration in the assessment of gamblers. Results from correlational and linear regression analyses support this possibility, but the exact contributions of both impulsivity and cognitive flexibility remain entangled. Future studies will ideally be able to assess the longitudinal relationships between gambling, compulsivity, and impulsivity, helping to clarify the relative contributions of both impulsive and compulsive features. Copyright © 2016 Elsevier Ltd. All rights reserved.
The effect of divided attention on novices and experts in laparoscopic task performance.
Ghazanfar, Mudassar Ali; Cook, Malcolm; Tang, Benjie; Tait, Iain; Alijani, Afshin
2015-03-01
Attention is important for the skilful execution of surgery. The surgeon's attention during surgery is divided between surgery and outside distractions. The effect of this divided attention has not been well studied previously. We aimed to compare the effect of divided attention on the laparoscopic task performance of novices and experts. Following ethical approval, 25 novices and 9 expert surgeons performed a standardised peg transfer task in a laboratory setup under three randomly assigned conditions: silent as the control condition and two standardised auditory distracting tasks requiring response (easy and difficult) as study conditions. Human reliability assessment was used for surgical task analysis. Primary outcome measures were correct auditory responses, task time, number of surgical errors and instrument movements. Secondary outcome measures included error rate, error probability and hand-specific differences. Non-parametric statistics were used for data analysis. In total, 21,109 movements and 9,036 errors were analysed. Novices had increased mean task completion time (171 ± 44 s vs. 149 ± 34 s, p < 0.05), number of total movements (227 ± 27 vs. 213 ± 26, p < 0.05) and number of errors (127 ± 51 vs. 96 ± 28, p < 0.05) during difficult study conditions compared to control. The correct responses to auditory stimuli were less frequent in experts (68%) compared to novices (80%). There was a positive correlation between error rate and error probability in novices (r² = 0.533, p < 0.05) but not in experts (r² = 0.346, p > 0.05). Divided-attention conditions in the theatre environment require careful consideration during surgical training, as junior surgeons are less able to focus their attention during these conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, O; Kalet, A; Smith, W
2016-06-15
Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in the record-and-verify system (38% [18–61%]) and incorrect isocenter localization in the planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. These data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.
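The Wilson score interval cited above has a closed form; a minimal implementation is sketched below. This is the standard formula, not code from the study, and the example counts are hypothetical.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1.0 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# Hypothetical counts (21 detections in 23 reviews) that reproduce roughly the
# 91% [73-98%] detection rate quoted above:
print(wilson_interval(21, 23))
```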
Center-to-Limb Variation of Deprojection Errors in SDO/HMI Vector Magnetograms
NASA Astrophysics Data System (ADS)
Falconer, David; Moore, Ronald; Barghouty, Nasser; Tiwari, Sanjiv K.; Khazanov, Igor
2015-04-01
For use in investigating the magnetic causes of coronal heating in active regions and for use in forecasting an active region’s productivity of major CME/flare eruptions, we have evaluated various sunspot-active-region magnetic measures (e.g., total magnetic flux, free-magnetic-energy proxies, magnetic twist measures) from HMI Active Region Patches (HARPs) after the HARP has been deprojected to disk center. From a few tens of thousands of HARP vector magnetograms (of a few hundred sunspot active regions) that have been deprojected to disk center, we have determined that the errors in the whole-HARP magnetic measures from deprojection are negligibly small for HARPs deprojected from distances out to 45 heliocentric degrees. For some purposes the errors from deprojection are tolerable out to 60 degrees. We obtained this result by the following process. For each whole-HARP magnetic measure: 1) for each HARP disk passage, normalize the measured values by the measured value for that HARP at central meridian; 2) then for each 0.05 Rs annulus, average the values from all the HARPs in the annulus. This results in an average normalized value as a function of radius for each measure. Assuming no deprojection errors and that, among a large set of HARPs, the measure is as likely to decrease as to increase with HARP distance from disk center, the average of each annulus is expected to be unity, and, for a statistically large sample, the amount of deviation of the average from unity estimates the error from deprojection effects. The deprojection errors arise from 1) errors in the transverse field being deprojected into the vertical field for HARPs observed at large distances from disk center, 2) increasingly larger foreshortening at larger distances from disk center, and 3) possible errors in transverse-field-direction ambiguity resolution. From the compiled set of measured values of whole-HARP magnetic nonpotentiality parameters measured from deprojected HARPs, we have examined the relation between each nonpotentiality parameter and the speed of CMEs from the measured active regions. For several different nonpotentiality parameters we find there is an upper limit to the CME speed, the limit increasing as the value of the parameter increases.
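The normalize-then-average-by-annulus bookkeeping described above can be written compactly. The sketch below runs on synthetic numbers rather than HARP data, and it approximates "value at central meridian" by the observation nearest disk center; the injected distance-dependent bias is invented so that the annulus averages visibly depart from unity.

```python
import numpy as np

# Schematic of the deprojection-error estimate: normalize each HARP's measure by
# its near-disk-center value, then average normalized values in 0.05 Rs annuli.
# With no deprojection error the annulus means should sit near unity, so their
# deviation from unity estimates the deprojection error at that distance.

rng = np.random.default_rng(0)
n_harps, n_obs = 200, 40
dist = rng.uniform(0.0, 0.9, size=(n_harps, n_obs))          # distance from disk center (Rs)
true = rng.uniform(5.0, 50.0, size=(n_harps, 1)) * np.ones((1, n_obs))
measured = true * (1.0 + 0.05 * dist**2) + rng.normal(0, 0.5, size=true.shape)  # toy bias

# normalize each HARP by the measurement closest to disk center (stand-in for central meridian)
ref = measured[np.arange(n_harps), np.argmin(dist, axis=1)][:, None]
norm = measured / ref

for lo in np.arange(0.0, 0.9, 0.05):
    sel = (dist >= lo) & (dist < lo + 0.05)
    if sel.any():
        print(f"{lo:.2f}-{lo + 0.05:.2f} Rs: mean normalized value = {norm[sel].mean():.3f}")
```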
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R²), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
An Evaluation of the Measurement Requirements for an In-Situ Wake Vortex Detection System
NASA Technical Reports Server (NTRS)
Fuhrmann, Henri D.; Stewart, Eric C.
1996-01-01
Results of a numerical simulation are presented to determine the feasibility of estimating the location and strength of a wake vortex from imperfect in-situ measurements. These estimates could be used to provide information to a pilot on how to avoid a hazardous wake vortex encounter. An iterative algorithm based on the method of secants was used to solve the four simultaneous equations describing the two-dimensional flow field around a pair of parallel counter-rotating vortices of equal and constant strength. The flow field information used by the algorithm could be derived from measurements from flow angle sensors mounted on the wing-tip of the detecting aircraft and an inertial navigation system. The study determined the propagated errors in the estimated location and strength of the vortex which resulted from random errors added to theoretically perfect measurements. The results are summarized in a series of charts and a table which make it possible to estimate these propagated errors for many practical situations. The situations include several generator-detector airplane combinations, different distances between the vortex and the detector airplane, as well as different levels of total measurement error.
Quotation accuracy in medical journal articles-a systematic review and meta-analysis.
Jergas, Hannah; Baethge, Christopher
2015-01-01
Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations that do not serve their purpose (quotation errors) may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened, we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9% (95% CI [8.4, 16.6]), 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress.
NASA Astrophysics Data System (ADS)
Kimijiama, S.; Nagai, M.
2016-06-01
With telecommunication development in Myanmar, person trip surveys are expected to shift from conversational questionnaires to GPS surveys. Integrating historical questionnaire data with GPS surveys and visualizing them is very important for evaluating chronological trip changes against socio-economic and environmental events. The objectives of this paper are to: (a) visualize questionnaire-based person trip data, (b) compare the errors between questionnaire and GPS data sets with respect to sex and age and (c) assess trip behaviour in time-series. In total, 345 individual respondents were selected through random stratification, and person trips were assessed with both a questionnaire and a GPS survey for each respondent. Trip information from the questionnaires, such as destinations, was converted using GIS. The results show that errors between the two data sets in the number of trips, total trip distance and total trip duration are 25.5%, 33.2% and 37.2%, respectively. The smaller errors are found among working-age females mainly employed in the project-related activities generated by foreign investment. Trip distance increased year by year. The study concluded that visualizing questionnaire-based person trip data and integrating them with current quantitative measurements are very useful for exploring historical trip changes and understanding the impacts of socio-economic events.
Modelling size-fractionated primary production in the Atlantic Ocean from remote sensing
NASA Astrophysics Data System (ADS)
Brewin, Robert J. W.; Tilstone, Gavin H.; Jackson, Thomas; Cain, Terry; Miller, Peter I.; Lange, Priscila K.; Misra, Ankita; Airs, Ruth L.
2017-11-01
Marine primary production influences the transfer of carbon dioxide between the ocean and atmosphere, and the availability of energy for the pelagic food web. Both the rate and the fate of organic carbon from primary production are dependent on phytoplankton size. A key aim of the Atlantic Meridional Transect (AMT) programme has been to quantify biological carbon cycling in the Atlantic Ocean and measurements of total primary production have been routinely made on AMT cruises, as well as additional measurements of size-fractionated primary production on some cruises. Measurements of total primary production collected on the AMT have been used to evaluate remote-sensing techniques capable of producing basin-scale estimates of primary production. Though models exist to estimate size-fractionated primary production from satellite data, these have not been well validated in the Atlantic Ocean, and have been parameterised using measurements of phytoplankton pigments rather than direct measurements of phytoplankton size structure. Here, we re-tune a remote-sensing primary production model to estimate production in three size fractions of phytoplankton (<2 μm, 2-10 μm and >10 μm) in the Atlantic Ocean, using measurements of size-fractionated chlorophyll and size-fractionated photosynthesis-irradiance experiments conducted on AMT 22 and 23 using sequential filtration-based methods. The performance of the remote-sensing technique was evaluated using: (i) independent estimates of size-fractionated primary production collected on a number of AMT cruises using 14C on-deck incubation experiments and (ii) Monte Carlo simulations. Considering uncertainty in the satellite inputs and model parameters, we estimate an average model error of between 0.27 and 0.63 for log10-transformed size-fractionated production, with lower errors for the small size class (<2 μm), higher errors for the larger size classes (2-10 μm and >10 μm), and errors generally higher in oligotrophic waters. Application to satellite data in 2007 suggests the contribution of cells <2 μm and >2 μm to total primary production is approximately equal in the Atlantic Ocean.
NASA Astrophysics Data System (ADS)
Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi
2012-07-01
The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing a rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified. The multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
40 CFR 1066.705 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2012 CFR
2012-07-01
Excerpt from the table of symbols and subscripts (flattened during extraction; ellipses mark omitted entries): ... series; n, total number of pulses in a series; R, dynamometer roll revolutions, revolutions per minute (rpm), 2·π...; torque (moment of force), newton meter (N·m; m²·kg·s⁻²); t, time, second (s); Δt, time interval, period, 1... Subscripts: ...atmospheric; b, base; c, coastdown; e, effective; error, error; exp, expected quantity; i, an individual of a series; final...
NASA Astrophysics Data System (ADS)
Yoshida, Kenichiro; Nishidate, Izumi; Ojima, Nobutoshi; Iwata, Kayoko
2014-01-01
To quantitatively evaluate skin chromophores over a wide region of curved skin surface, we propose an approach that suppresses the effect of the shading-derived error in the reflectance on the estimation of chromophore concentrations, without sacrificing the accuracy of that estimation. In our method, we use multiple regression analysis, assuming the absorbance spectrum as the response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as the predictor variables. The concentrations of melanin and total hemoglobin are determined from the multiple regression coefficients using compensation formulae (CF) based on the diffuse reflectance spectra derived from a Monte Carlo simulation. To suppress the shading-derived error, we investigated three different combinations of multiple regression coefficients for the CF. In vivo measurements with the forearm skin demonstrated that the proposed approach can reduce the estimation errors that are due to shading-derived errors in the reflectance. With the best combination of multiple regression coefficients, we estimated that the ratio of the error to the chromophore concentrations is about 10%. The proposed method does not require any measurements or assumptions about the shape of the subjects; this is an advantage over other studies related to the reduction of shading-derived errors.
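The regression step described here (absorbance spectrum as response, chromophore extinction coefficients as predictors) can be illustrated with an ordinary least-squares sketch. The spectra and coefficients below are synthetic stand-ins; the study's actual extinction data, Monte Carlo based compensation formulae, and shading-suppression coefficients are not reproduced.

```python
import numpy as np

# Multiple regression of absorbance on extinction coefficients (synthetic example).
# A(lambda) ~ a_mel*eps_mel + a_oxy*eps_oxy + a_deoxy*eps_deoxy + offset;
# the recovered regression coefficients would then feed the compensation formulae
# (not reproduced here) that yield melanin and total-hemoglobin concentrations.

rng = np.random.default_rng(1)
wavelengths = np.linspace(500, 600, 50)
eps_mel = np.exp(-(wavelengths - 500) / 80.0)            # synthetic extinction spectra
eps_oxy = np.exp(-((wavelengths - 560) / 15.0) ** 2)
eps_deoxy = np.exp(-((wavelengths - 555) / 25.0) ** 2)

true_coeffs = np.array([0.8, 0.3, 0.2, 0.05])            # melanin, HbO2, Hb, offset
X = np.column_stack([eps_mel, eps_oxy, eps_deoxy, np.ones_like(wavelengths)])
absorbance = X @ true_coeffs + rng.normal(0, 0.01, wavelengths.size)

coeffs, *_ = np.linalg.lstsq(X, absorbance, rcond=None)
print("recovered regression coefficients:", np.round(coeffs, 3))
```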
Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L
2010-04-01
The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
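The inaccuracy of the mean-of-ratios estimator under additive technical error can be illustrated with a small Monte Carlo in the spirit of the simulations described above. All parameter values here are invented, and the instrumental-variables estimator itself is not reproduced; the sketch only shows how noise entering the FFM denominator biases the mean of individual TBW/FFM ratios.

```python
import numpy as np

# Toy Monte Carlo: additive measurement error on FFM shifts the mean of TBW/FFM
# ratios away from the true hydration fraction (all values invented).

rng = np.random.default_rng(42)
true_hf = 0.73
n_subjects, n_sims = 200, 2000

estimates = []
for _ in range(n_sims):
    ffm_true = rng.uniform(30.0, 70.0, n_subjects)                     # kg
    tbw_meas = true_hf * ffm_true + rng.normal(0, 1.0, n_subjects)     # additive error
    ffm_meas = ffm_true + rng.normal(0, 2.0, n_subjects)               # additive error
    estimates.append(np.mean(tbw_meas / ffm_meas))

print(f"true HF = {true_hf}, mean-of-ratios estimate ~ {np.mean(estimates):.4f}")
```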
Goharpey, Nahal; Crewther, David P; Crewther, Sheila G
2013-12-01
This study investigated the developmental trajectory of problem solving ability in children with intellectual disability (ID) of different etiologies (Down Syndrome, Idiopathic ID or low functioning Autism) as measured on the Raven's Colored Progressive Matrices test (RCPM). Children with typical development (TD) and children with ID were matched on total correct performance (i.e., non-verbal mental age) on the RCPM. RCPM total correct performance and the sophistication of error types were found to be associated with receptive vocabulary in all participants, suggesting that verbal ability plays a role in more sophisticated problem solving tasks. Children with ID made similar errors on the RCPM as younger children with TD as well as more positional error types. This result suggests that children with ID who are deficient in their cognitive processing resort to developmentally immature problem solving strategies when unable to determine the correct answer. Overall, the findings support the use of RCPM as a valid means of matching intellectual capacity of children with TD and ID. Copyright © 2013 Elsevier Ltd. All rights reserved.
Simplified model of pinhole imaging for quantifying systematic errors in image shape
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, Laura Robin; Izumi, N.; Khan, S. F.
In this paper, we examine systematic errors in x-ray imaging by pinhole optics for quantifying uncertainties in the measurement of convergence and asymmetry in inertial confinement fusion implosions. We present a quantitative model for the total resolution of a pinhole optic with an imaging detector that more effectively describes the effect of diffraction than models that treat geometry and diffraction as independent. This model can be used to predict loss of shape detail due to imaging across the transition from geometric to diffractive optics. We find that fractional error in observable shapes is proportional to the total resolution element we present and inversely proportional to the length scale of the asymmetry being observed. Finally, we have experimentally validated our results by imaging a single object with differently sized pinholes and with different magnifications.
Centroid Position as a Function of Total Counts in a Windowed CMOS Image of a Point Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurtz, R E; Olivier, S; Riot, V
2010-05-27
We obtained 960,200 22-by-22-pixel windowed images of a pinhole spot using the Teledyne H2RG CMOS detector with un-cooled SIDECAR readout. We performed an analysis to determine the precision we might expect in the position error signals to a telescope's guider system. We find that, under non-optimized operating conditions, the error in the computed centroid is strongly dependent on the total counts in the point image only below a certain threshold, approximately 50,000 photo-electrons. The LSST guider camera specification currently requires a 0.04 arcsecond error at 10 Hertz. Given the performance measured here, this specification can be delivered with a single star at 14th to 18th magnitude, depending on the passband.
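A centroid-versus-counts experiment of the kind described above can be mocked up quickly. The sketch below computes intensity-weighted centroids of noisy synthetic 22-by-22-pixel point-source windows and shows how the centroid scatter grows as the total counts drop; the PSF width and noise levels are invented, not the H2RG operating parameters.

```python
import numpy as np

# Synthetic illustration: rms centroid error of a 22x22-pixel point-source window
# as a function of total counts (Gaussian PSF, Poisson shot noise, Gaussian read noise).

rng = np.random.default_rng(7)
y, x = np.mgrid[0:22, 0:22]
psf = np.exp(-(((x - 10.5) ** 2) + ((y - 10.5) ** 2)) / (2 * 2.0 ** 2))
psf /= psf.sum()

def centroid(img):
    tot = img.sum()
    return (x * img).sum() / tot, (y * img).sum() / tot

for counts in (5_000, 50_000, 500_000):
    errs = []
    for _ in range(300):
        img = rng.poisson(counts * psf) + rng.normal(0, 5, psf.shape)  # shot + read noise
        cx, cy = centroid(img)
        errs.append(np.hypot(cx - 10.5, cy - 10.5))
    print(f"{counts:>7d} e-: rms centroid error ~ {np.sqrt(np.mean(np.square(errs))):.4f} px")
```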
Prescribing Errors Involving Medication Dosage Forms
Lesar, Timothy S
2002-01-01
CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms . DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138
Blumenfeld, Philip; Hata, Nobuhiko; DiMaio, Simon; Zou, Kelly; Haker, Steven; Fichtinger, Gabor; Tempany, Clare M C
2007-09-01
The aim was to quantify the needle placement accuracy of magnetic resonance image (MRI)-guided core needle biopsy of the prostate. A total of 10 biopsies were performed with an 18-gauge (G) core biopsy needle via a percutaneous transperineal approach. Needle placement error was assessed by comparing the coordinates of preplanned targets with the needle tip position measured from the intraprocedural coherent gradient echo images. The source of these errors was subsequently investigated by measuring the displacement caused by needle deflection and needle susceptibility artifact shift in controlled phantom studies. Needle placement error due to misalignment of the needle template guide was also evaluated. The mean and standard deviation (SD) of errors in targeted biopsies were 6.5 +/- 3.5 mm. Phantom experiments showed significant placement error due to needle deflection for a needle with an asymmetrically beveled tip (3.2-8.7 mm depending on tissue type) but significantly smaller error with a symmetrical bevel (0.6-1.1 mm). Needle susceptibility artifacts produced an apparent shift of 1.6 +/- 0.4 mm from the true needle axis. Misalignment of the needle template guide contributed an error of 1.5 +/- 0.3 mm. Needle placement error was clinically significant in MRI-guided biopsy for the diagnosis of prostate cancer. Needle placement error due to needle deflection was the most significant cause of error, especially for needles with an asymmetrical bevel. (c) 2007 Wiley-Liss, Inc.
Lau, Billy T; Ji, Hanlee P
2017-09-21
RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at low-input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We describe the first study to use transposable molecular barcodes and their use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.
Economic impact of medication error: a systematic review.
Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P
2017-05-01
Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.
Hayashino, Yasuaki; Utsugi-Ozaki, Makiko; Feldman, Mitchell D.; Fukuhara, Shunichi
2012-01-01
The presence of hope has been found to influence an individual's ability to cope with stressful situations. The objective of this study is to evaluate the relationship between medical errors, hope and burnout among practicing physicians using validated metrics. A prospective cohort study was conducted among hospital-based physicians practicing in Japan (N = 836). Measures included the validated Burnout Scale, self-assessment of medical errors and the Herth Hope Index (HHI). The main outcome measure was the frequency of self-perceived medical errors, and Poisson regression analysis was used to evaluate the association between hope and medical error. A total of 361 errors were reported in 836 physician-years. We observed a significant association between hope and self-report of medical errors. Compared with the lowest tertile category of HHI, the incidence rate ratios (IRRs) of self-perceived medical errors of physicians in the higher categories were 0.44 (95%CI, 0.34 to 0.58) and 0.54 (95%CI, 0.42 to 0.70), respectively, for the 2nd and 3rd tertiles. In analyses stratified by hope score, among physicians with a low hope score, those who experienced higher burnout reported a higher incidence of errors; physicians with high hope scores did not report high incidences of errors, even if they experienced high burnout. Self-perceived medical errors showed a strong association with physicians' hope, and hope modified the association between physicians' burnout and self-perceived medical errors. PMID:22530055
Optics measurement algorithms and error analysis for the proton energy frontier
NASA Astrophysics Data System (ADS)
Langner, A.; Tomás, R.
2015-03-01
Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is shown to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, yield a significantly higher precision of the derived optical parameters, decreasing the average error bars by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding emittance evolution during the energy ramp.
Weaver, Amy L; Stutzman, Sonja E; Supnet, Charlene; Olson, DaiWai M
2016-03-01
The emergency department (ED) is demanding and high risk. Sleep quantity has been hypothesized to affect patient care. This study investigated the hypothesis that fatigue and impaired mentation, due to sleep disturbance and shortened overall sleeping hours, would lead to increased nursing errors. This is a prospective observational study of 30 ED nurses using a self-administered survey and sleep architecture measured by wrist actigraphy as predictors of self-reported error rates. An actigraphy device was worn prior to working a 12-hour shift, and nurses completed the Pittsburgh Sleep Quality Index (PSQI). Error rates were reported on a visual analog scale at the end of the 12-hour shift. The PSQI responses indicated that 73.3% of subjects had poor sleep quality. Lower sleep quality measured by actigraphy (hours asleep/hours in bed) was associated with higher self-perceived minor errors. Sleep quantity (total hours slept) was not associated with minor, moderate, or severe errors. Our study found that ED nurses' sleep quality immediately prior to working a 12-hour shift is more predictive of error than sleep quantity. These results present evidence that a "good night's sleep" prior to working a nursing shift in the ED is beneficial for reducing minor errors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Impact of Frequent Interruption on Nurses' Patient-Controlled Analgesia Programming Performance.
Campoe, Kristi R; Giuliano, Karen K
2017-12-01
The purpose was to add to the body of knowledge regarding the impact of interruption on acute care nurses' cognitive workload, total task completion times, nurse frustration, and medication administration error while programming a patient-controlled analgesia (PCA) pump. Data support that the severity of medication administration error increases with the number of interruptions, which is especially critical during the administration of high-risk medications. Bar code technology, interruption-free zones, and medication safety vests have been shown to decrease administration-related errors. However, there are few published data regarding the impact of the number of interruptions on nurses' clinical performance during PCA programming. Nine acute care nurses completed three PCA pump programming tasks in a simulation laboratory. Programming tasks were completed under three conditions where the number of interruptions varied between two, four, and six. Outcome measures included cognitive workload (six NASA Task Load Index [NASA-TLX] subscales), total task completion time (seconds), nurse frustration (NASA-TLX Subscale 6), and PCA medication administration error (incorrect final programming). Increases in the number of interruptions were associated with significant increases in total task completion time (p = .003). We also found increases in nurses' cognitive workload, nurse frustration, and PCA pump programming errors, but these increases were not statistically significant. Complex technology use permeates the acute care nursing practice environment. These results add new knowledge on nurses' clinical performance during PCA pump programming and high-risk medication administration.
Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Tian, Yudong
2011-01-01
Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions, including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we will synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMap). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit bias, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance to global assimilation of satellite-based precipitation data. Insights gained from these results and how they could help with GPM will be highlighted.
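The decomposition into hit bias, missed precipitation, and false precipitation can be expressed as a few array operations. The sketch below is one plausible implementation on synthetic fields; the rain/no-rain threshold and the convention of zeroing sub-threshold values are assumptions of this sketch, not the cited products' settings.

```python
import numpy as np

# Illustrative decomposition of total error into hit bias, missed precipitation,
# and false precipitation (synthetic fields).

def decompose_error(estimate, reference, threshold=0.1):
    est_rain = estimate >= threshold
    ref_rain = reference >= threshold
    est_t = np.where(est_rain, estimate, 0.0)        # treat sub-threshold values as zero
    ref_t = np.where(ref_rain, reference, 0.0)
    hit = est_rain & ref_rain
    hit_bias = np.sum(est_t[hit] - ref_t[hit])
    missed = -np.sum(ref_t[~est_rain & ref_rain])    # negative contribution to total error
    false_precip = np.sum(est_t[est_rain & ~ref_rain])  # positive contribution to total error
    total = np.sum(est_t - ref_t)                    # equals the sum of the three components
    return hit_bias, missed, false_precip, total

rng = np.random.default_rng(3)
ref = rng.gamma(0.5, 2.0, size=10_000) * (rng.random(10_000) < 0.3)   # mm/day, synthetic
est = np.clip(ref + rng.normal(0.0, 0.5, 10_000), 0.0, None)

hb, mi, fp, tot = decompose_error(est, ref)
print(f"hit bias {hb:.1f} + missed {mi:.1f} + false {fp:.1f} = total {tot:.1f}")
```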
Economic measurement of medical errors using a hospital claims database.
David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S
2013-01-01
The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Li, Qi-Quan; Wang, Chang-Quan; Zhang, Wen-Jiang; Yu, Yong; Li, Bing; Yang, Juan; Bai, Gen-Chuan; Cai, Yan
2013-02-01
In this study, a radial basis function neural network model combined with ordinary kriging (RBFNN_OK) was adopted to predict the spatial distribution of soil nutrients (organic matter and total N) in a typical hilly region of the Sichuan Basin, Southwest China, and the performance of this method was compared with that of ordinary kriging (OK) and regression kriging (RK). All three methods produced similar soil nutrient maps. However, compared with those obtained by a multiple linear regression model, the correlation coefficients between the measured and predicted values of soil organic matter and total N obtained by the neural network model increased by 12.3% and 16.5%, respectively, suggesting that the neural network model could more accurately capture the complicated relationships between soil nutrients and quantitative environmental factors. The error analyses of the predicted values at 469 validation points indicated that the mean absolute error (MAE), mean relative error (MRE), and root mean squared error (RMSE) of RBFNN_OK were 6.9%, 7.4%, and 5.1% (for soil organic matter) and 4.9%, 6.1%, and 4.6% (for soil total N) smaller than those of OK (P<0.01), and 2.4%, 2.6%, and 1.8% (for soil organic matter) and 2.1%, 2.8%, and 2.2% (for soil total N) smaller than those of RK, respectively (P<0.05).
De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets
NASA Astrophysics Data System (ADS)
Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.
2017-08-01
The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often times, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
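A minimal numerical sketch of the two-stage, de-biased (total) DMD described above is given below, assuming snapshot matrices X and Y whose columns are consecutive measurement pairs (Y is X advanced one step in time); the truncation rank and function name are illustrative choices, not the authors' implementation.

```python
import numpy as np

def total_dmd(X, Y, rank):
    """Total DMD sketch: project both snapshot matrices onto the leading
    right-singular subspace of the augmented matrix [X; Y] (total
    least-squares style), then perform standard exact DMD on the
    projected snapshots."""
    # Stage 1: subspace projection using the augmented snapshot matrix
    Z = np.vstack([X, Y])
    _, _, Vh = np.linalg.svd(Z, full_matrices=False)
    P = Vh[:rank].conj().T @ Vh[:rank]          # projector onto leading row space
    Xp, Yp = X @ P, Y @ P

    # Stage 2: standard (exact) DMD regression on the projected snapshots
    U, s, Vh2 = np.linalg.svd(Xp, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh2[:rank].conj().T
    Atilde = U.conj().T @ Yp @ V @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)
    modes = Yp @ V @ np.diag(1.0 / s) @ W       # exact DMD modes
    return eigvals, modes
```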
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
A video method to study Drosophila sleep.
Zimmerman, John E; Raizen, David M; Maycock, Matthew H; Maislin, Greg; Pack, Allan I
2008-11-01
To use video to determine the accuracy of the infrared beam-splitting method for measuring sleep in Drosophila and to determine the effect of time of day, sex, genotype, and age on sleep measurements. A digital image analysis method based on the frame subtraction principle was developed to distinguish a quiescent from a moving fly. Data obtained using this method were compared with data obtained using the Drosophila Activity Monitoring System (DAMS). The location of the fly was identified based on its centroid location in the subtracted images. The error associated with the identification of total sleep using DAMS ranged from 7% to 95% and depended on genotype, sex, age, and time of day. The degree of the total sleep error was dependent on genotype during the daytime (P < 0.001) and was dependent on age during both the daytime and the nighttime (P < 0.001 for both). The DAMS method overestimated sleep bout duration during both the day and night, and the degree of these errors was genotype dependent (P < 0.001). Brief movements that occur during sleep bouts can be accurately identified using video. Both video and DAMS detected a homeostatic response to sleep deprivation. Video digital analysis is more accurate than DAMS in fly sleep measurements. In particular, conclusions drawn from DAMS measurements regarding daytime sleep and sleep architecture should be made with caution. Video analysis also permits the assessment of fly position and brief movements during sleep.
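A bare-bones version of frame-subtraction scoring is sketched below; the pixel and count thresholds are placeholders for illustration, not the authors' calibrated values.

```python
import numpy as np

def quiescence_from_frames(frames, pix_thresh=20, count_thresh=5):
    """Classify each frame transition as moving or quiescent by frame
    subtraction, and return the centroid of changed pixels (thresholds
    are illustrative). `frames` is a sequence of 2-D grayscale arrays."""
    moving, centroids = [], []
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = np.abs(curr.astype(int) - prev.astype(int))
        changed = diff > pix_thresh                  # pixels that changed
        moving.append(changed.sum() > count_thresh)  # enough change => movement
        ys, xs = np.nonzero(changed)
        centroids.append((ys.mean(), xs.mean()) if ys.size else None)
    return moving, centroids
```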
Ozone measurement system for NASA global air sampling program
NASA Technical Reports Server (NTRS)
Tiefermann, M. W.
1979-01-01
The ozone measurement system used in the NASA Global Air Sampling Program is described. The system uses a commercially available ozone concentration monitor that was modified and repackaged so as to operate unattended in an aircraft environment. The modifications required for aircraft use are described along with the calibration techniques, the measurement of ozone loss in the sample lines, and the operating procedures that were developed for use in the program. Based on calibrations with JPL's 5-meter ultraviolet photometer, all previously published GASP ozone data are biased high by 9 percent. A system error analysis showed that the total system measurement random error is from 3 to 8 percent of reading (depending on the pump diaphragm material) or 3 ppbv, whichever is greater.
Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria
2017-08-01
Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.
Measuring human remains in the field: Grid technique, total station, or MicroScribe?
Sládek, Vladimír; Galeta, Patrik; Sosna, Daniel
2012-09-10
Although three-dimensional (3D) coordinates for human intra-skeletal landmarks are among the most important data that anthropologists have to record in the field, little is known about the reliability of various measuring techniques. We compared the reliability of three techniques used for 3D measurement of human remains in the field: grid technique (GT), total station (TS), and MicroScribe (MS). We measured 365 field osteometric points on 12 skeletal sequences excavated at the Late Medieval/Early Modern churchyard in Všeruby, Czech Republic. We compared intra-observer, inter-observer, and inter-technique variation using mean difference (MD), mean absolute difference (MAD), standard deviation of difference (SDD), and limits of agreement (LA). All three measuring techniques can be used when accepted error ranges can be measured in centimeters. When a range of accepted error measurable in millimeters is needed, MS offers the best solution. TS can achieve the same reliability as does MS, but only when the laser beam is accurately pointed into the center of the prism. When the prism is not accurately oriented, TS produces unreliable data. TS is more sensitive to initialization than is MS. GT measures the human skeleton with acceptable reliability for general purposes but insufficiently when highly accurate skeletal data are needed. We observed high inter-technique variation, indicating that just one technique should be used when spatial data from one individual are recorded. Subadults are measured with slightly lower error than are adults. The effect of maximum excavated skeletal length has little practical significance in field recording. When MS is not available, we offer practical suggestions that can help to increase reliability when measuring the human skeleton in the field. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
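The agreement statistics used in this comparison (MD, MAD, SDD, and limits of agreement) can be computed as in the following sketch, which assumes paired measurements and the usual Bland-Altman 95% limits; it is illustrative rather than the authors' exact procedure.

```python
import numpy as np

def agreement_stats(a, b):
    """Mean difference, mean absolute difference, SD of the differences,
    and Bland-Altman style 95% limits of agreement between two sets of
    paired measurements (conventional definitions)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    md  = d.mean()
    mad = np.abs(d).mean()
    sdd = d.std(ddof=1)
    la  = (md - 1.96 * sdd, md + 1.96 * sdd)
    return md, mad, sdd, la
```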
Shim, Hyungsub; Hurley, Robert S; Rogalski, Emily; Mesulam, M-Marsel
2012-07-01
This study evaluates spelling errors in the three subtypes of primary progressive aphasia (PPA): agrammatic (PPA-G), logopenic (PPA-L), and semantic (PPA-S). Forty-one PPA patients and 36 age-matched healthy controls were administered a test of spelling. The total number of errors and types of errors in spelling to dictation of regular words, exception words and nonwords, were recorded. Error types were classified based on phonetic plausibility. In the first analysis, scores were evaluated by clinical diagnosis. Errors in spelling exception words and phonetically plausible errors were seen in PPA-S. Conversely, PPA-G was associated with errors in nonword spelling and phonetically implausible errors. In the next analysis, spelling scores were correlated to other neuropsychological language test scores. Significant correlations were found between exception word spelling and measures of naming and single word comprehension. Nonword spelling correlated with tests of grammar and repetition. Global language measures did not correlate significantly with spelling scores, however. Cortical thickness analysis based on MRI showed that atrophy in several language regions of interest were correlated with spelling errors. Atrophy in the left supramarginal gyrus and inferior frontal gyrus (IFG) pars orbitalis correlated with errors in nonword spelling, while thinning in the left temporal pole and fusiform gyrus correlated with errors in exception word spelling. Additionally, phonetically implausible errors in regular word spelling correlated with thinning in the left IFG pars triangularis and pars opercularis. Together, these findings suggest two independent systems for spelling to dictation, one phonetic (phoneme to grapheme conversion), and one lexical (whole word retrieval). Copyright © 2012 Elsevier Ltd. All rights reserved.
Dual-joint modeling for estimation of total knee replacement contact forces during locomotion.
Hast, Michael W; Piazza, Stephen J
2013-02-01
Model-based estimation of in vivo contact forces arising between components of a total knee replacement is challenging because such forces depend upon accurate modeling of muscles, tendons, ligaments, contact, and multibody dynamics. Here we describe an approach to solving this problem with results that are tested by comparison to knee loads measured in vivo for a single subject and made available through the Grand Challenge Competition to Predict in vivo Tibiofemoral Loads. The approach makes use of a "dual-joint" paradigm in which the knee joint is alternately represented by (1) a ball-joint knee for inverse dynamic computation of required muscle controls and (2) a 12 degree-of-freedom (DOF) knee with elastic foundation contact at the tibiofemoral and patellofemoral articulations for forward dynamic integration. Measured external forces and kinematics were applied as a feedback controller and static optimization attempted to track measured knee flexion angles and electromyographic (EMG) activity. The resulting simulations showed excellent tracking of knee flexion (average RMS error of 2.53 deg) and EMG (muscle activations within ±10% envelopes of normalized measured EMG signals). Simulated tibiofemoral contact forces agreed qualitatively with measured contact forces, but their RMS errors were approximately 25% of the peak measured values. These results demonstrate the potential of a dual-joint modeling approach to predict joint contact forces from kinesiological data measured in the motion laboratory. It is anticipated that errors in the estimation of contact force will be reduced as more accurate subject-specific models of muscles and other soft tissues are developed.
Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin
2008-05-01
For the total light scattering particle sizing technique, an inversion and classification method was proposed with the dependent model algorithm. The measured particle system was inversed simultaneously by different particle distribution functions whose mathematic model was known in advance, and then classified according to the inversion errors. The simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range with the genetic algorithm, and the inversion results were steady and reliable, which decreased the number of multi wavelengths to the greatest extent and increased the selectivity of light source. The single peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was put in the transmission extinction measurement values at two wavelengths. The running time of this method was less than 2 s. The method has advantages of simplicity, rapidity, and suitability for on-line particle size measurement.
2013-01-01
Background Measurements of the morphology of the ankle joint, performed mostly for surgical planning of total ankle arthroplasty and for collecting data for total ankle prosthesis design, are often made on planar radiographs, and therefore can be very sensitive to the positioning of the joint during imaging. The current study aimed to compare ankle morphological measurements using CT-generated 2D images with gold standard values obtained from 3D CT data; to determine the sensitivity of the 2D measurements to mal-positioning of the ankle during imaging; and to quantify the repeatability of the 2D measurements under simulated positioning conditions involving random errors. Method Fifty-eight cadaveric ankles fixed in the neutral joint position (standard pose) were CT scanned, and the data were used to simulate lateral and frontal radiographs under various positioning conditions using digitally reconstructed radiographs (DRR). Results and discussion In the standard pose for imaging, most ankle morphometric parameters measured using 2D images were highly correlated (R > 0.8) to the gold standard values defined by the 3D CT data. For measurements made on the lateral views, the only parameters sensitive to rotational pose errors were longitudinal distances between the most anterior and the most posterior points of the tibial mortise and the tibial profile, which have important implications for determining the optimal cutting level of the bone during arthroplasty. Measurements of the trochlea tali width on the frontal views underestimated the standard values by up to 31.2%, with only a moderate reliability, suggesting that pre-surgical evaluations based on the trochlea tali width should be made with caution in order to avoid inappropriate selection of prosthesis sizes. Conclusions While highly correlated with 3D morphological measurements, some 2D measurements were affected by the bone poses in space during imaging, which may affect surgical decision-making in total ankle arthroplasty, including the amount of bone resection and the selection of the implant sizes. The linear regression equations for the relationship between 2D and 3D measurements will be helpful for correcting the errors in 2D morphometric measurements for clinical applications. PMID:24359413
Quotation accuracy in medical journal articles—a systematic review and meta-analysis
Jergas, Hannah
2015-01-01
Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose—quotation errors—may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9% (95% CI [8.4, 16.6]), 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4]. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress. PMID:26528420
Numerical modeling of the divided bar measurements
NASA Astrophysics Data System (ADS)
LEE, Y.; Keehm, Y.
2011-12-01
The divided-bar technique has been used to measure thermal conductivity of rocks and fragments in heat flow studies. Though widely used, divided-bar measurements can have errors that have not yet been systematically quantified. We used an FEM and performed a series of numerical studies to evaluate various errors in divided-bar measurements and to suggest more reliable measurement techniques. A divided-bar measurement should be corrected against lateral heat loss on the sides of rock samples, and the thermal resistance at the contacts between the rock sample and the bar. We first investigated how the amount of these corrections would change with the thickness and thermal conductivity of rock samples through numerical modeling. When we fixed the sample thickness at 10 mm and varied thermal conductivity, the error in the measured thermal conductivity ranges from 2.02% for 1.0 W/m/K to 7.95% for 4.0 W/m/K. When we fixed thermal conductivity at 1.38 W/m/K and varied the sample thickness, we found that the error ranges from 2.03% for the 30 mm-thick sample to 11.43% for the 5 mm-thick sample. After corrections, a variety of error analyses for divided-bar measurements were conducted numerically. Thermal conductivity of two thin standard disks (2 mm in thickness) located at the top and the bottom of the rock sample slightly affects the accuracy of thermal conductivity measurements. When the thermal conductivity of a sample is 3.0 W/m/K and that of two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%). However, the relative error would reach up to -2.29% for the same sample when thermal conductivity of two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements strongly depends on thermal conductivity and the thickness of thermal compound that is applied to reduce thermal resistance at contacts between the rock sample and the bar. When the thickness of thermal compound (0.29 W/m/K) is 0.03 mm, we found that the relative error in measured thermal conductivity is 4.01%, while the relative error can be very significant (~12.2%) if the thickness increases to 0.1 mm. Then, we fixed the thickness (0.03 mm) and varied thermal conductivity of the thermal compound. We found that the relative error with a 1.0 W/m/K compound is 1.28%, and the relative error with a 0.29 W/m/K compound is 4.06%. When we repeated this test with a different thickness of the thermal compound (0.1 mm), the relative error with a 1.0 W/m/K compound is 3.93%, and that with a 0.29 W/m/K compound is 12.2%. In addition, the cell technique by Sass et al. (1971), which is widely used to measure thermal conductivity of rock fragments, was evaluated using the FEM modeling. A total of 483 isotropic and homogeneous spherical rock fragments in the sample holder were used to test numerically the accuracy of the cell technique. The result shows the relative error of -9.61% for rock fragments with the thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and the cell technique for thermal conductivity measurements for rocks and fragments. We found that the FEM modeling can accurately mimic these measurement techniques and can help us to estimate measurement errors quantitatively.
TOMS total ozone data compared with northern latitude Dobson ground stations
NASA Technical Reports Server (NTRS)
Heese, B.; Barthel, K.; Hov, O.
1994-01-01
Ozone measurements from the Total Ozone Mapping Spectrometer on the Nimbus 7 satellite are compared with ground-based measurements from five Dobson stations at northern latitudes to evaluate the accuracy of the TOMS data, particularly in regions north of 50 deg N. The measurements from the individual stations show mean differences from -2.5 percent up to plus 8.3 percent relative to TOMS measurements and two of the ground stations, Oslo and Longyearbyen, show a significant drift of plus 1.2 percent and plus 3.7 percent per year, respectively. It can be shown from nearly simultaneous measurements in two different wavelength double pairs at Oslo that at least 2 percent of the differences result from the use of the CC' wavelength double pair instead of the standard AD wavelength double pair. Since all Norwegian stations used the CC' wavelength double pair exclusively, a similar error can be assumed for Tromso and Longyearbyen. A comparison between the tropospheric ozone content in TOMS data and from ECC ozonesonde measurements at Ny-Alesund and Bear Island shows that the amount of tropospheric ozone in the standard profiles used in the TOMS algorithm is too low, which leads to an error of about 2 percent in total ozone. Particularly at high solar zenith angles (greater than 80 deg), Dobson measurements become unreliable. They are up to 20 percent lower than TOMS measurements averaged over solar zenith angles of 88 deg to 89 deg.
Atwood, E.L.
1958-01-01
Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large as compared to non-response and sampling errors. Good fits were obtained with the seasonal kill distribution of the actual hunting data and the negative binomial distribution and a good fit was obtained with the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictable biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is not as efficient in removing response bias errors in hunter questionnaire responses on seasonal hunting activity where an average of 60 percent was removed.
Optimized keratometry and total corneal astigmatism for toric intraocular lens calculation.
Savini, Giacomo; Næser, Kristian; Schiano-Lomoriello, Domenico; Ducoli, Pietro
2017-09-01
To compare keratometric astigmatism (KA) and different modalities of measuring total corneal astigmatism (TCA) for toric intraocular lens (IOL) calculation and optimize corneal measurements to eliminate the residual refractive astigmatism. G.B. Bietti Foundation IRCCS, Rome, Italy. Prospective case series. Patients who had a toric IOL were enrolled. Preoperatively, a Scheimpflug camera (Pentacam HR) was used to measure TCA through ray tracing. Different combinations of measurements at a 3.0 mm diameter, centered on the pupil or the corneal vertex and performed along a ring or within it, were compared. Keratometric astigmatism was measured using the same Scheimpflug camera and a corneal topographer (Keratron). Astigmatism was analyzed with Næser's polar value method. The optimized preoperative corneal astigmatism was back-calculated from the postoperative refractive astigmatism. The study comprised 62 patients (64 eyes). With both devices, KA produced an overcorrection of with-the-rule (WTR) astigmatism by 0.6 diopter (D) and an undercorrection of against-the-rule (ATR) astigmatism by 0.3 D. The lowest meridional error in refractive astigmatism was achieved by the TCA pupil/zone measurement in WTR eyes (0.27 D overcorrection) and the TCA apex/zone measurement in ATR eyes (0.07 D undercorrection). In the whole sample, no measurement allowed more than 43.75% of eyes to yield an absolute error in astigmatism magnitude lower than 0.5 D. Optimized astigmatism values increased the percentage of eyes with this error up to 57.81%, with no difference compared with the Barrett calculator and the Abulafia-Koch calculator. Compared with KA, TCA improved calculations for toric IOLs; however, optimization of corneal astigmatism measurements led to more accurate results. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Horner, Neilann K; Patterson, Ruth E; Neuhouser, Marian L; Lampe, Johanna W; Beresford, Shirley A; Prentice, Ross L
2002-10-01
Errors in self-reported dietary intake threaten inferences from studies relying on instruments such as food-frequency questionnaires (FFQs), food records, and food recalls. The objective was to quantify the magnitude, direction, and predictors of errors associated with energy intakes estimated from the Women's Health Initiative FFQ. Postmenopausal women (n = 102) provided data on sociodemographic and psychosocial characteristics that relate to errors in self-reported energy intake. Energy intake was objectively estimated as total energy expenditure, physical activity expenditure, and the thermic effect of food (10% addition to other components of total energy expenditure). Participants underreported energy intake on the FFQ by 20.8%; this error trended upward with younger age (P = 0.07) and social desirability (P = 0.09) but was not associated with body mass index (P = 0.95). The correlation coefficient between reported energy intake and total energy expenditure was 0.24; correlations were higher among women with less education, higher body mass index, and greater fat-free mass, social desirability, and dissatisfaction with perceived body size (all P < 0.10). Energy intake is generally underreported, and both the magnitude of the error and the association of the self-reporting with objectively estimated intake appear to vary by participant characteristics. Studies relying on self-reported intake should include objective measures of energy expenditure in a subset of participants to identify person-specific bias within the study population for the dietary self-reporting tool; these data should be used to calibrate the self-reported data as an integral aspect of diet and disease association studies.
NASA Astrophysics Data System (ADS)
Otero, R., Jr.; Lowe, K. T.; Ng, W. F.
2018-01-01
In previous studies, sonic anemometry and thermometry have generally been used to measure low subsonic Mach flow conditions. Recently, a novel configuration was proposed and used to measure unheated jet velocities up to Mach 0.83 non-intrusively. The objective of this investigation is to test the novel configuration in higher temperature conditions and explore the effects of fluid temperature on mean velocity and temperature measurement accuracy. The current work presents non-intrusive acoustic measurements of single-stream jet conditions up to Mach 0.7 and total temperatures from 299 K to 700 K. Comparison of acoustically measured velocity and static temperature with probe data indicate root mean square (RMS) velocity errors of 2.6 m s-1 (1.1% of the maximum jet centerline velocity), 4.0 m s-1 (1.2%), and 8.5 m s-1 (2.4%), respectively, for 299, 589, and 700 K total temperature flows up to Mach 0.7. RMS static temperature errors of 7.5 K (2.5% of total temperature), 8.1 K (1.3%), and 23.3 K (3.3%) were observed for the same respective total temperature conditions. To the authors’ knowledge, this is the first time a non-intrusive acoustic technique has been used to simultaneously measure mean fluid velocity and static temperatures in high subsonic Mach numbers up to 0.7. Overall, the findings of this work support the use of acoustics for non-intrusive flow monitoring. The ability to measure mean flow conditions at high subsonic Mach numbers and temperatures makes this technique a viable candidate for gas turbine applications, in particular.
Although ambient concentrations of particulate matter ≤ 10μm (PM10) are often used as proxies for total personal exposure, correlation (r) between ambient and personal PM10 concentrations varies. Factors underlying this variation and its effect on he...
Gómez-Cabello, Alba; Vicente-Rodríguez, Germán; Albers, Ulrike; Mata, Esmeralda; Rodriguez-Marroyo, Jose A.; Olivares, Pedro R.; Gusi, Narcis; Villa, Gerardo; Aznar, Susana; Gonzalez-Gross, Marcela; Casajús, Jose A.; Ara, Ignacio
2012-01-01
Background The elderly EXERNET multi-centre study aims to collect normative anthropometric data for old functionally independent adults living in Spain. Purpose To describe the standardization process and reliability of the anthropometric measurements carried out in the pilot study and during the final workshop, examining both intra- and inter-rater errors for measurements. Materials and Methods A total of 98 elderly from five different regions participated in the intra-rater error assessment, and 10 different seniors living in the city of Toledo (Spain) participated in the inter-rater assessment. We examined both intra- and inter-rater errors for heights and circumferences. Results For height, intra-rater technical errors of measurement (TEMs) were smaller than 0.25 cm. For circumferences and knee height, TEMs were smaller than 1 cm, except for waist circumference in the city of Cáceres. Reliability for heights and circumferences was greater than 98% in all cases. Inter-rater TEMs were 0.61 cm for height, 0.75 cm for knee-height and ranged between 2.70 and 3.09 cm for the circumferences measured. Inter-rater reliabilities for anthropometric measurements were always higher than 90%. Conclusion The harmonization process, including the workshop and pilot study, guarantee the quality of the anthropometric measurements in the elderly EXERNET multi-centre study. High reliability and low TEM may be expected when assessing anthropometry in elderly population. PMID:22860013
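For reference, the conventional anthropometric formulas for the technical error of measurement (TEM) and the reliability coefficient are sketched below for duplicate measurements; the study's exact computation (e.g., for more than two raters) may differ, so this is an illustrative sketch only.

```python
import numpy as np

def technical_error(m1, m2):
    """TEM for duplicate measurements m1, m2 (one pair per subject),
    TEM = sqrt(sum(d^2) / (2n)), plus the reliability coefficient
    R = 1 - TEM^2 / SD^2 (conventional definitions)."""
    m1 = np.asarray(m1, dtype=float)
    m2 = np.asarray(m2, dtype=float)
    d = m1 - m2
    tem = np.sqrt(np.sum(d ** 2) / (2 * d.size))
    sd_total = np.concatenate([m1, m2]).std(ddof=1)   # inter-subject variability
    reliability = 1.0 - (tem ** 2) / (sd_total ** 2)
    return tem, reliability
```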
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales
2016-10-01
Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal imaging, as CBCT remains unproven to be of wide benefit in breast RT.
Zajicek, James L.; Tillitt, Donald E.; Huckins, James N.; Petty, Jimmie D.; Potts, Michael E.; Nardone, David A.
1996-01-01
Determination of PCBs in biological tissue extracts by enzyme-linked immunosorbent assays (ELISAs) can be problematic, since the hydrophobic solvents used for their extraction and isolation from interfering biochemicals have limited compatibility with the polar solvents (e.g. methanol/water) and the immunochemical reagents used in ELISA. Our studies of these solvent effects indicate that significant errors can occur when microliter volumes of PCB containing extracts, in hydrophobic solvents, are diluted directly into methanol/water diluents. Errors include low recovery and excess variability among sub-samples taken from the same sample dilution. These errors are associated with inhomogeneity of the dilution, which is readily visualized by the use of a hydrophobic dye, Solvent Blue 35. Solvent Blue 35 is also used to visualize the evaporative removal of hydrophobic solvent and the dissolution of the resulting PCB/dye residue by pure methanol and 50% (v/v) methanol/water, typical ELISA diluents. Evaporative removal of isooctane by an ambient temperature nitrogen purge with subsequent dissolution in 100% methanol gives near quantitative recovery of model PCB congeners. We also compare concentrations of total PCBs from ELISA (ePCB) to their corresponding concentrations determined from capillary gas chromatography (GC) in selected fish sample extracts and dialysates of semipermeable membrane device (SPMD) passive samplers using an optimized solvent exchange procedure. Based on Aroclor 1254 calibrations, ePCBs (ng/mL) determined in fish extracts are positively correlated with total PCB concentrations (ng/mL) determined by GC: ePCB = 1.16 * total-cPCB - 5.92. Measured ePCBs (ng/3 SPMDs) were also positively correlated (r2 = 0.999) with PCB totals (ng/3 SPMDs) measured by GC for dialysates of SPMDs: ePCB = 1.52 * total PCB - 212. Therefore, this ELISA system for PCBs can be a rapid alternative to traditional GC analyses for determination of PCBs in extracts of biota or in SPMD dialysates.
Evaluation of analytical errors in a clinical chemistry laboratory: a 3 year experience.
Sakyi, As; Laing, Ef; Ephraim, Rk; Asibey, Of; Sadique, Ok
2015-01-01
Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases and discussed strategies pertinent to our settings to minimize their occurrence. We described the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). A total of 589,510 tests was performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests reduced significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90) along with the years. Analytical errors are embedded within our total process setup, especially in the pre-analytical and post-analytical phases. Strategic measures including quality assessment programs for staff involved in pre-analytical processes should be intensified.
Speckle reduction in optical coherence tomography by adaptive total variation method
NASA Astrophysics Data System (ADS)
Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun
2015-12-01
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
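A simplified gradient-descent version of total-variation restoration with a noise-adaptive fidelity weight is sketched below; the specific weighting rule, step size, and iteration count are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def tv_denoise(img, noise_var, n_iter=100, step=0.1, eps=1e-8):
    """Gradient-descent total-variation restoration sketch. The data-
    fidelity weight is tied to the measured speckle-noise variance
    (weighting rule is an illustrative assumption)."""
    u = img.astype(float).copy()
    lam = 1.0 / max(noise_var, eps)            # low noise => stay close to the data
    for _ in range(n_iter):
        # forward differences with replicated boundaries
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field (backward differences)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # descend the TV + fidelity energy: TV(u) + lam/2 * ||u - img||^2
        u += step * (div - lam * (u - img))
    return u
```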
Cortical dipole imaging using truncated total least squares considering transfer matrix error.
Hori, Junichi; Takeuchi, Kosuke
2013-01-01
Cortical dipole imaging has been proposed as a method to visualize electroencephalogram in high spatial resolution. We investigated the inverse technique of cortical dipole imaging using a truncated total least squares (TTLS). The TTLS is a regularization technique to reduce the influence from both the measurement noise and the transfer matrix error caused by the head model distortion. The estimation of the regularization parameter was also investigated based on L-curve. The computer simulation suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed the TTLS provided the high spatial resolution of cortical dipole imaging.
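The core TTLS step, which accounts for errors in both the transfer matrix and the measurements by truncating the SVD of the augmented matrix, can be sketched as follows; the truncation level k plays the role of the regularization parameter chosen from the L-curve, and the function and variable names are illustrative.

```python
import numpy as np

def ttls_solve(A, b, k):
    """Truncated total least squares sketch: solve A x ~ b while allowing
    errors in both A (the transfer matrix) and b (the measurements),
    truncating the SVD of the augmented matrix [A b] at rank k."""
    C = np.column_stack([A, b])
    _, _, Vh = np.linalg.svd(C, full_matrices=False)
    V = Vh.conj().T
    n = A.shape[1]
    V12 = V[:n, k:]                 # upper-right block of V
    V22 = V[n:, k:]                 # lower-right block (a row vector here)
    # standard TTLS solution: x = -V12 V22^+ for a single right-hand side
    x = -V12 @ V22.conj().T / (V22 @ V22.conj().T)
    return x.ravel()
```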
Cross sections for H(-) and Cl(-) production from HCl by dissociative electron attachment
NASA Technical Reports Server (NTRS)
Orient, O. J.; Srivastava, S. K.
1985-01-01
A crossed target beam-electron beam collision geometry and a quadrupole mass spectrometer have been used to conduct dissociative electron attachment cross section measurements for the case of H(-) and Cl(-) production from HCl. The relative flow technique is used to determine the absolute values of cross sections. A tabulation is given of the attachment energies corresponding to various cross section maxima. Error sources contributing to total errors are also estimated.
Medication administration errors in nursing homes using an automated medication dispensing system.
van den Bemt, Patricia M L A; Idzinga, Jetske C; Robertz, Hans; Kormelink, Dennis Groot; Pels, Neske
2009-01-01
OBJECTIVE To identify the frequency of medication administration errors as well as their potential risk factors in nursing homes using a distribution robot. DESIGN The study was a prospective, observational study conducted within three nursing homes in the Netherlands caring for 180 individuals. MEASUREMENTS Medication errors were measured using the disguised observation technique. Types of medication errors were described. The correlation between several potential risk factors and the occurrence of medication errors was studied to identify potential causes for the errors. RESULTS In total 2,025 medication administrations to 127 clients were observed. In these administrations 428 errors were observed (21.2%). The most frequently occurring types of errors were use of wrong administration techniques (especially incorrect crushing of medication and not supervising the intake of medication) and wrong time errors (administering the medication at least 1 h early or late). The potential risk factors female gender (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.05-1.83), ATC medication class antibiotics (OR 11.11; 95% CI 2.66-46.50), medication crushed (OR 7.83; 95% CI 5.40-11.36), number of dosages/day/client (OR 1.03; 95% CI 1.01-1.05), nursing home 2 (OR 3.97; 95% CI 2.86-5.50), medication not supplied by distribution robot (OR 2.92; 95% CI 2.04-4.18), time classes "7-10 am" (OR 2.28; 95% CI 1.50-3.47) and "10 am-2 pm" (OR 1.96; 95% CI 1.18-3.27) and day of the week "Wednesday" (OR 1.46; 95% CI 1.03-2.07) are associated with a higher risk of administration errors. CONCLUSIONS Medication administration in nursing homes is prone to many errors. This study indicates that the handling of the medication after removing it from the robot packaging may contribute to this high error frequency, which may be reduced by training of nurse attendants, by automated clinical decision support and by measures to reduce workload.
Kehl, Sven; Eckert, Sven; Sütterlin, Marc; Neff, K Wolfgang; Siemer, Jörn
2011-06-01
Three-dimensional (3D) sonographic volumetry is established in gynecology and obstetrics. Assessment of the fetal lung volume by magnetic resonance imaging (MRI) in congenital diaphragmatic hernias has become a routine examination. In vitro studies have shown a good correlation between 3D sonographic measurements and MRI. The aim of this study was to compare the lung volumes of healthy fetuses assessed by 3D sonography to MRI measurements and to investigate the impact of different rotation angles. A total of 126 fetuses between 20 and 40 weeks' gestation were measured by 3D sonography, and 27 of them were also assessed by MRI. The sonographic volumes were calculated by the rotational technique (virtual organ computer-aided analysis) with rotation angles of 6° and 30°. To evaluate the accuracy of 3D sonographic volumetry, percentage error and absolute percentage error values were calculated using MRI volumes as reference points. Formulas to calculate total, right, and left fetal lung volumes according to gestational age and biometric parameters were derived by stepwise regression analysis. Three-dimensional sonographic volumetry showed a high correlation compared to MRI (6° angle, R(2) = 0.971; 30° angle, R(2) = 0.917) with no systematic error for the 6° angle. Moreover, using the 6° rotation angle, the median absolute percentage error was significantly lower compared to the 30° angle (P < .001). The new formulas to calculate total lung volume in healthy fetuses only included gestational age and no biometric parameters (R(2) = 0.853). Three-dimensional sonographic volumetry of lung volumes in healthy fetuses showed a good correlation with MRI. We recommend using an angle of 6° because it assessed the lung volume more accurately. The specifically designed equations help estimate lung volumes in healthy fetuses.
The intention to disclose medical errors among doctors in a referral hospital in North Malaysia.
Hs, Arvinder-Singh; Rashid, Abdul
2017-01-23
In this study, medical errors are defined as unintentional patient harm caused by a doctor's mistake. This topic, due to limited research, is poorly understood in Malaysia. The objective of this study was to determine the proportion of doctors intending to disclose medical errors, and their attitudes/perception pertaining to medical errors. This cross-sectional study was conducted at a tertiary public hospital from July to December 2015 among 276 randomly selected doctors. Data was collected using a standardized and validated self-administered questionnaire intending to measure disclosure and attitudes/perceptions. The scale had four vignettes in total: two medical and two surgical. Each vignette consisted of five questions and each question measured the disclosure. Disclosure was categorised as "No Disclosure", "Partial Disclosure" or "Full Disclosure". Data was keyed in and analysed using STATA v 13.0. Only 10.1% (n = 28) intended to disclose medical errors. Most respondents felt that they possessed an attitude/perception of adequately disclosing errors to patients. There was a statistically significant difference (p < 0.001) when comparing the intention of disclosure with perceived disclosures. Most respondents were in common agreement that disclosing an error would make them less likely to get sued, that minor errors should be reported and that they experienced relief from disclosing errors. Most doctors in this study would not disclose medical errors although they perceived that the errors were serious and felt responsible for it. Poor disclosure could be due to the fear of litigation and improper mechanisms/procedures available for disclosure.
NASA Astrophysics Data System (ADS)
Baylon, Jorge L.; Stremme, Wolfgang; Grutter, Michel; Hase, Frank; Blumenstock, Thomas
2017-07-01
In this investigation we analyze two common optical configurations to retrieve CO2 total column amounts from solar absorption infrared spectra. The noise errors using either a KBr or a CaF2 beam splitter, a main component of a Fourier transform infrared spectrometer (FTIR), are quantified in order to assess the relative precisions of the measurements. The configuration using a CaF2 beam splitter, as deployed by the instruments which contribute to the Total Carbon Column Observing Network (TCCON), shows a slightly better precision. However, we show that the precisions in XCO2 (= 0.2095 · total column CO2 / total column O2) retrieved from > 96 % of the spectra measured with a KBr beam splitter fall well below 0.2 %. A bias in XCO2 (KBr - CaF2) of +0.56 ± 0.25 ppm was found when using an independent data set as reference. This value, which corresponds to +0.14 ± 0.064 %, is slightly larger than the mean precisions obtained. A 3-year XCO2 time series from FTIR measurements at the high-altitude site of Altzomoni in central Mexico presents clear annual and diurnal cycles, and a trend of +2.2 ppm yr⁻¹ could be determined.
Within-Tunnel Variations in Pressure Data for Three Transonic Wind Tunnels
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2014-01-01
This paper compares the results of pressure measurements made on the same test article with the same test matrix in three transonic wind tunnels. A comparison is presented of the unexplained variance associated with polar replicates acquired in each tunnel. The impact of a significant component of systematic (not random) unexplained variance is reviewed, and the results of analyses of variance are presented to assess the degree of significant systematic error in these representative wind tunnel tests. Total uncertainty estimates are reported for 140 samples of pressure data, quantifying the effects of within-polar random errors and between-polar systematic bias errors.
Formal Verification of Safety Buffers for State-Based Conflict Detection and Resolution
NASA Technical Reports Server (NTRS)
Herencia-Zapana, Heber; Jeannin, Jean-Baptiste; Munoz, Cesar A.
2010-01-01
The information provided by global positioning systems is never totally exact, and there are always errors when measuring position and velocity of moving objects such as aircraft. This paper studies the effects of these errors in the actual separation of aircraft in the context of state-based conflict detection and resolution. Assuming that the state information is uncertain but that bounds on the errors are known, this paper provides an analytical definition of a safety buffer and sufficient conditions under which this buffer guarantees that actual conflicts are detected and solved. The results are presented as theorems, which were formally proven using a mechanical theorem prover.
Hsieh, Shulan; Li, Tzu-Hsien; Tsai, Ling-Ling
2010-04-01
To examine whether monetary incentives attenuate the negative effects of sleep deprivation on cognitive performance in a flanker task that requires higher-level cognitive-control processes, including error monitoring. Twenty-four healthy adults aged 18 to 23 years were randomly divided into 2 subject groups: one received and the other did not receive monetary incentives for performance accuracy. Both subject groups performed a flanker task and underwent electroencephalographic recordings for event-related brain potentials after normal sleep and after 1 night of total sleep deprivation in a within-subject, counterbalanced, repeated-measures study design. Monetary incentives significantly enhanced the response accuracy and reaction time variability under both normal sleep and sleep-deprived conditions, and they reduced the effects of sleep deprivation on the subjective effort level, the amplitude of the error-related negativity (an error-related event-related potential component), and the latency of the P300 (an event-related potential variable related to attention processes). However, monetary incentives could not attenuate the effects of sleep deprivation on any measures of behavior performance, such as the response accuracy, reaction time variability, or posterror accuracy adjustments; nor could they reduce the effects of sleep deprivation on the amplitude of the Pe, another error-related event-related potential component. This study shows that motivation incentives selectively reduce the effects of total sleep deprivation on some brain activities, but they cannot attenuate the effects of sleep deprivation on performance decrements in tasks that require high-level cognitive-control processes. Thus, monetary incentives and sleep deprivation may act through both common and different mechanisms to affect cognitive performance.
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration that relies heavily on a high precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact which is fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical expression, the samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented by either measuring the total length of the artefact with a higher-precision CMM or calibrating the single point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that with the error compensation curve uncertainty of the measurement can be reduced to 50%.
Multiparameter measurement utilizing poloidal polarimeter for burning plasma reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi
2014-08-21
The authors have carried out basic and applied research on the polarimeter for plasma diagnostics. Recently, the authors have proposed an application of multiparameter measurement (magnetic field, B, electron density, n_e, electron temperature, T_e, and total plasma current, I_p) utilizing a polarimeter to future fusion reactors. In these proceedings, a brief review of the polarimeter, the principle of the multiparameter measurement, and the progress of the research on the multiparameter measurement are given. The measurement method that the authors have proposed is suitable for the reactor for the following reasons: multiple parameters can be obtained from a small number of diagnostics, the proposed method does not depend on time history, and the far-infrared light utilized by the polarimeter is less sensitive to degradation of optical components. Taking into account the measuring error, a performance assessment of the proposed method was carried out. Assuming that the errors Δθ and Δε were 0.1° and 0.6°, respectively, the errors of the reconstructed j_φ, n_e, and T_e were 12%, 8.4%, and 31%, respectively. This study has shown that the reconstruction error can be decreased by increasing the number of wavelengths of the probing laser and by increasing the number of viewing chords. For example, by increasing the number of viewing chords to forty-five, the errors of j_φ, n_e, and T_e were reduced to 4.4%, 4.4%, and 17%, respectively.
PTV margin determination in conformal SRT of intracranial lesions
Parker, Brent C.; Shiu, Almon S.; Maor, Moshe H.; Lang, Frederick F.; Liu, H. Helen; White, R. Allen; Antolak, John A.
2002-01-01
The planning target volume (PTV) includes the clinical target volume (CTV) to be irradiated and a margin to account for uncertainties in the treatment process. Uncertainties in miniature multileaf collimator (mMLC) leaf positioning, CT scanner spatial localization, CT‐MRI image fusion spatial localization, and Gill‐Thomas‐Cosman (GTC) relocatable head frame repositioning were quantified for the purpose of determining a minimum PTV margin that still delivers a satisfactory CTV dose. The measured uncertainties were then incorporated into a simple Monte Carlo calculation for evaluation of various margin and fraction combinations. Satisfactory CTV dosimetric criteria were selected to be a minimum CTV dose of 95% of the PTV dose and at least 95% of the CTV receiving 100% of the PTV dose. The measured uncertainties were assumed to be Gaussian distributions. Systematic errors were added linearly and random errors were added in quadrature assuming no correlation to arrive at the total combined error. The Monte Carlo simulation written for this work examined the distribution of cumulative dose volume histograms for a large patient population using various margin and fraction combinations to determine the smallest margin required to meet the established criteria. The program examined 5 and 30 fraction treatments, since those are the only fractionation schemes currently used at our institution. The fractionation schemes were evaluated using no margin, a margin of just the systematic component of the total uncertainty, and a margin of the systematic component plus one standard deviation of the total uncertainty. It was concluded that (i) a margin of the systematic error plus one standard deviation of the total uncertainty is the smallest PTV margin necessary to achieve the established CTV dose criteria, and (ii) it is necessary to determine the uncertainties introduced by the specific equipment and procedures used at each institution since the uncertainties may vary among locations. PACS number(s): 87.53.Kn, 87.53.Ly PMID:12132939
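A toy one-dimensional version of the Monte Carlo margin test is sketched below, using a geometric coverage criterion as a stand-in for the dosimetric criteria in the paper; the Gaussian error model follows the abstract, but the tolerance thresholds, sample sizes, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_ok(margin, n_fractions, sys_sd, rand_sd, n_patients=10000):
    """1-D Monte Carlo sketch: the CTV edge must stay inside the PTV edge
    (CTV + margin) despite a per-patient systematic offset and per-fraction
    random offsets. Returns the fraction of simulated patients meeting a
    simple geometric coverage criterion (illustrative surrogate only)."""
    ok = 0
    for _ in range(n_patients):
        sys_err = rng.normal(0.0, sys_sd)                  # one offset per patient
        rand_err = rng.normal(0.0, rand_sd, n_fractions)   # one offset per fraction
        # fraction of fractions in which the CTV edge lands beyond the margin
        missed = np.mean(np.abs(sys_err + rand_err) > margin)
        ok += missed <= 0.05                               # tolerate <= 5% of fractions
    return ok / n_patients

# Example call (all numbers illustrative, in mm):
# coverage_ok(margin=2.0, n_fractions=5, sys_sd=1.0, rand_sd=1.0)
```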
Quantum stopwatch: how to store time in a quantum memory.
Yang, Yuxiang; Chiribella, Giulio; Hayashi, Masahito
2018-05-01
Quantum mechanics imposes a fundamental trade-off between the accuracy of time measurements and the size of the systems used as clocks. When the measurements of different time intervals are combined, the errors due to the finite clock size accumulate, resulting in an overall inaccuracy that grows with the complexity of the set-up. Here, we introduce a method that, in principle, eludes the accumulation of errors by coherently transferring information from a quantum clock to a quantum memory of the smallest possible size. Our method could be used to measure the total duration of a sequence of events with enhanced accuracy, and to reduce the amount of quantum communication needed to stabilize clocks in a quantum network.
Quantitative application of sigma metrics in medical biochemistry.
Nanda, Sunil Kumar; Ray, Lopamudra
2013-12-01
Laboratory errors are the result of a poorly designed quality system in the laboratory. Six Sigma is an error reduction methodology that has been successfully applied at Motorola and General Electric. Sigma (σ) is the mathematical symbol for standard deviation (SD). Sigma methodology can be applied wherever an outcome of a process has to be measured. A poor outcome is counted as an error or defect, quantified as defects per million (DPM). A six sigma process is one in which 99.999666% of the products manufactured are statistically expected to be free of defects. Six Sigma concentrates on regulating a process to 6 SDs, which corresponds to 3.4 DPM opportunities. It can be inferred that as sigma increases, the consistency and steadiness of the test improve, thereby reducing operating costs. We aimed to gauge the performance of our laboratory parameters by sigma metrics and to evaluate sigma metrics for interpreting parameter performance in clinical biochemistry. Six months of internal QC data (October 2012 to March 2013) and EQAS (external quality assurance scheme) data were extracted for the parameters glucose, urea, creatinine, total bilirubin, total protein, albumin, uric acid, total cholesterol, triglycerides, chloride, SGOT, SGPT and ALP. Coefficients of variation (CV) were calculated from the internal QC for these parameters. Percentage bias for these parameters was calculated from the EQAS. Total allowable errors were taken as per Clinical Laboratory Improvement Amendments (CLIA) guidelines. Sigma metrics were calculated from the CV, percentage bias and total allowable error for the above-mentioned parameters. For total bilirubin, uric acid, SGOT, SGPT and ALP, the sigma values were found to be more than 6. For glucose, creatinine, triglycerides and urea, the sigma values were found to be between 3 and 6. For total protein, albumin, cholesterol and chloride, the sigma values were found to be less than 3. ALP was the best performer when gauged on the sigma scale, with a sigma metrics value of 8.4, and chloride had the lowest sigma metrics value of 1.4.
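The sigma metric described here is computed from three quantities the abstract names: the imprecision (CV) from internal QC, the bias from EQAS, and the CLIA total allowable error (TEa). A minimal sketch using the standard formula sigma = (TEa − |bias|)/CV is given below; the analyte values are placeholders, not the study's data.

```python
# Sigma metric from internal QC CV, EQAS bias and CLIA total allowable error.
# Analyte values below are illustrative placeholders, not the study's data.
analytes = {
    # name: (CV %, bias %, TEa %)
    "Glucose":    (3.0, 2.0, 10.0),
    "Creatinine": (4.0, 3.0, 15.0),
    "ALP":        (3.5, 0.5, 30.0),
}

for name, (cv, bias, tea) in analytes.items():
    sigma = (tea - abs(bias)) / cv   # standard sigma-metric formula
    print(f"{name:>10s}: sigma = {sigma:.1f}")
```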
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M; Suh, T; Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul
2015-06-15
Purpose: To develop and validate an innovative method of using depth sensing cameras and 3D printing techniques for Total Body Irradiation (TBI) treatment planning and compensator fabrication. Methods: A tablet with motion tracking cameras and integrated depth sensing was used to scan a RANDO phantom arranged in a TBI treatment booth to detect and store the 3D surface in a point cloud (PC) format. The accuracy of the detected surface was evaluated by comparison to measurements extracted from CT scan images. The thickness, source-to-surface distance and off-axis distance of the phantom at different body sections were measured for TBI treatment planning. A 2D map containing a detailed compensator design was calculated to achieve a uniform dose distribution throughout the phantom. The compensator was fabricated using a 3D printer, silicone molding and tungsten powder. In vivo dosimetry measurements were performed using optically stimulated luminescent detectors (OSLDs). Results: The whole scan of the anthropomorphic phantom took approximately 30 seconds. The mean error for thickness measurements at each section of the phantom compared to CT was 0.44 ± 0.268 cm. These errors resulted in approximately 2% dose calculation error and 0.4 mm tungsten thickness deviation for the compensator design. The accuracy of 3D compensator printing was within 0.2 mm. In vivo measurements for an end-to-end test showed that the overall dose difference was within 3%. Conclusion: Motion cameras and depth sensing techniques proved to be an accurate and efficient tool for TBI patient measurement and treatment planning. The 3D printing technique improved the efficiency and accuracy of compensator production and ensured a more accurate treatment delivery.
Yung, Marcus; Manji, Rahim; Wells, Richard P
2017-11-01
Our aim was to explore the relationship between fatigue and operation system performance during a simulated light precision task over an 8-hr period using a battery of physical (central and peripheral) and cognitive measures. Fatigue may play an important role in the relationship between poor ergonomics and deficits in quality and productivity. However, well-controlled laboratory studies in this area have several limitations, including the lack of work relevance of fatigue exposures and lack of both physical and cognitive measures. There remains a need to understand the relationship between physical and cognitive fatigue and task performance at exposure levels relevant to realistic production or light precision work. Errors and fatigue measures were tracked over the course of a micropipetting task. Fatigue responses from 10 measures and errors in pipetting technique, precision, and targeting were submitted to principal component analysis to descriptively analyze features and patterns. Fatigue responses and error rates contributed to three principal components (PCs), accounting for 50.9% of total variance. Fatigue responses grouped within the three PCs reflected central and peripheral upper extremity fatigue, postural sway, and changes in oculomotor behavior. In an 8-hr light precision task, error rates shared similar patterns to both physical and cognitive fatigue responses, and/or increases in arousal level. The findings provide insight toward the relationship between fatigue and operation system performance (e.g., errors). This study contributes to a body of literature documenting task errors and fatigue, reflecting physical (both central and peripheral) and cognitive processes.
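As a rough illustration of the descriptive analysis named above, the sketch below runs a standardized principal component analysis over a matrix of fatigue and error measures and reports the variance explained by the first three components. The data are random stand-ins; only the workflow (standardize, fit PCA, inspect loadings) mirrors the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Random stand-in data: rows are measurement blocks over the 8-hr task,
# columns are 10 fatigue measures plus 3 error rates (13 variables in total).
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 13))

# Standardize, then extract three principal components, mirroring the
# descriptive PCA used to group fatigue responses and error rates.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("loadings shape:", pca.components_.shape)  # (3 PCs, 13 variables)
```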
Tsukeoka, Tadashi; Tsuneizumi, Yoshikazu; Yoshino, Kensuke; Suzuki, Mashiko
2018-05-01
The aim of this study was to determine factors that contribute to bone cutting errors of conventional instrumentation for tibial resection in total knee arthroplasty (TKA) as assessed by an image-free navigation system. The hypothesis is that preoperative varus alignment is a significant contributory factor to tibial bone cutting errors. This was a prospective study of a consecutive series of 72 TKAs. The amount of the tibial first-cut errors with reference to the planned cutting plane in both coronal and sagittal planes was measured by an image-free computer navigation system. Multiple regression models were developed with the amount of tibial cutting error in the coronal and sagittal planes as dependent variables and sex, age, disease, height, body mass index, preoperative alignment, patellar height (Insall-Salvati ratio) and preoperative flexion angle as independent variables. Multiple regression analysis showed that sex (male gender) (R = 0.25, p = 0.047) and preoperative varus alignment (R = 0.42, p = 0.001) were positively associated with varus tibial cutting errors in the coronal plane. In the sagittal plane, none of the independent variables was significant. When performing TKA in varus deformity, careful confirmation of the bone cutting surface should be performed to avoid varus alignment. The results of this study suggest technical considerations that can help a surgeon achieve more accurate component placement. IV.
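A minimal sketch of the kind of multiple regression model described above is shown below, with the coronal-plane cutting error as the dependent variable. The data frame and column names are assumptions for illustration; the study's full set of covariates is not reproduced.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data; column names stand in for the study's variables
# (coronal-plane tibial cutting error and a subset of candidate predictors).
df = pd.DataFrame({
    "coronal_error": [0.5, 1.2, 0.3, 1.8, 0.9, 0.1, 1.4, 0.7],    # degrees varus
    "sex_male":      [1, 1, 0, 1, 0, 0, 1, 0],
    "age":           [70, 65, 72, 68, 75, 66, 71, 69],
    "preop_varus":   [4.0, 9.5, 2.0, 12.0, 6.5, 1.0, 10.5, 5.0],  # degrees
    "bmi":           [24.1, 27.3, 22.8, 29.0, 25.5, 23.2, 28.1, 26.0],
})

# Multiple regression with the cutting error as the dependent variable,
# analogous to (but much smaller than) the model described in the abstract.
model = smf.ols("coronal_error ~ sex_male + age + preop_varus + bmi", data=df).fit()
print(model.params)
```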
Estimating a child's age from an image using whole body proportions.
Lucas, Teghan; Henneberg, Maciej
2017-09-01
The use and distribution of child pornography is an increasing problem. Forensic anthropologists are often asked to estimate a child's age from a photograph. Previous studies have attempted to estimate the age of children from photographs using ratios of the face. Here, we propose to include body measurement ratios into age estimates. A total of 1603 boys and 1833 girls aged 5-16 years were measured over a 10-year period. They are 'Cape Coloured' children from South Africa. Their age was regressed on ratios derived from anthropometric measurements of the head as well as the body. Multiple regression equations including four ratios for each sex (head height to shoulder and hip width, knee width, leg length and trunk length) have a standard error of 1.6-1.7 years. The error is of the same order as variation of differences between biological and chronological ages of the children. Thus, the error cannot be minimised any further as it is a direct reflection of a naturally occurring phenomenon.
Rain rate range profiling from a spaceborne radar
NASA Technical Reports Server (NTRS)
Meneghini, R.
1980-01-01
At certain frequencies and incidence angles the relative invariance of the surface scattering properties over land can be used to estimate the total attenuation and the integrated rain from a spaceborne attenuating-wavelength radar. The technique is generalized so that rain rate profiles along the radar beam can be estimated, i.e., rain rate determination at each range bin. This is done by modifying the standard algorithm for an attenuating-wavelength radar to include in it the measurement of the total attenuation. Simple error analyses of the estimates show that this type of profiling is possible if the total attenuation can be measured with a modest degree of accuracy.
NASA Astrophysics Data System (ADS)
Torres, A. D.; Rasmussen, K. L.; Bodine, D. J.; Dougherty, E.
2015-12-01
Plains Elevated Convection At Night (PECAN) was a large field campaign that studied nocturnal mesoscale convective systems (MCSs), convective initiation, bores, and low-level jets across the central plains in the United States. MCSs are responsible for over half of the warm-season precipitation across the central U.S. plains. The rainfall from deep convection of these systems over land has been observed to be underestimated by satellite radar rainfall-retrieval algorithms by as much as 40 percent. These algorithms have a strong dependence on the generally unmeasured rain drop-size distribution (DSD). During the campaign, our group measured rainfall DSDs, precipitation fall velocities, and total precipitation in the convective and stratiform regions of MCSs using Ott Parsivel optical laser disdrometers. The disdrometers were co-located with mobile pod units that measured temperature, wind, and relative humidity for quality control purposes. Data from the operational NEXRAD radar in LaCrosse, Wisconsin and space-based radar measurements from a Global Precipitation Measurement satellite overpass on July 13, 2015 were used for the analysis. The focus of this study is to compare DSD measurements from the disdrometers to radars in an effort to reduce errors in existing rainfall-retrieval algorithms. The error analysis consists of substituting measured DSDs into existing quantitative precipitation estimation techniques (e.g. Z-R relationships and dual-polarization rain estimates) and comparing these estimates to ground measurements of total precipitation. The results from this study will improve climatological estimates of total precipitation in continental convection that are used in hydrological studies, climate models, and other applications.
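The substitution of measured DSDs into quantitative precipitation estimation techniques can be illustrated with a short sketch: compute the reflectivity factor Z and the rain rate R directly from a DSD, then compare with the rain rate retrieved from Z through an assumed Z = aR^b relation. The DSD shape, fall-speed relation, and Z-R coefficients below are illustrative, not the campaign's values.

```python
import numpy as np

# Illustrative exponential DSD and an Atlas-Ulbrich-type fall-speed power law.
D = np.linspace(0.3, 5.0, 32)              # drop diameter (mm), bin centers
dD = np.gradient(D)                        # bin widths (mm)
N = 8000.0 * np.exp(-2.0 * D)              # N(D), drops m^-3 mm^-1 (assumed)
v = 3.78 * D**0.67                         # fall speed (m s^-1)

# Reflectivity factor Z (mm^6 m^-3) and rain rate R (mm h^-1) from the DSD.
Z = np.sum(N * D**6 * dD)
R_dsd = 0.6 * np.pi * 1e-3 * np.sum(N * v * D**3 * dD)

# Rain rate retrieved from Z with a standard Z = a R^b relation (a, b assumed).
a, b = 300.0, 1.4
R_zr = (Z / a) ** (1.0 / b)

print(f"Z = {10*np.log10(Z):.1f} dBZ, R(DSD) = {R_dsd:.1f} mm/h, R(Z-R) = {R_zr:.1f} mm/h")
```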
NASA Technical Reports Server (NTRS)
Mao, Jianping; Kawa, S. Randolph
2003-01-01
A series of sensitivity studies is carried out to explore the feasibility of space-based global carbon dioxide (CO2) measurements for global and regional carbon cycle studies. The detection method uses absorption of reflected sunlight in the CO2 vibration-rotation band at 1.58 microns. The sensitivities of the detected radiances are calculated using the line-by-line model (LBLRTM), implemented with the DISORT (Discrete Ordinates Radiative Transfer) model to include atmospheric scattering in this band. The results indicate that (a) the small (approx. 1%) changes in CO2 near the Earth's surface are detectable in this CO2 band provided adequate sensor signal-to-noise ratio and spectral resolution are achievable; (b) the radiance signal or sensitivity to CO2 change near the surface is not significantly diminished even in the presence of aerosols and/or thin cirrus clouds in the atmosphere; (c) the modification of sunlight path length by scattering of aerosols and cirrus clouds could lead to large systematic errors in the retrieval; therefore, ancillary aerosol/cirrus cloud data are important to reduce retrieval errors; (d) CO2 retrieval requires good knowledge of the atmospheric temperature profile, e.g. approximately 1 K RMS error in layer temperature; (e) the atmospheric path length, over which the CO2 absorption occurs, must be known in order to correctly interpret horizontal gradients of CO2 from the total column CO2 measurement; thus an additional sensor for surface pressure measurement needs to be attached for a complete measurement package.
Guo, Tong; Chen, Zhuo; Li, Minghui; Wu, Juhong; Fu, Xing; Hu, Xiaotang
2018-04-20
Based on white-light spectral interferometry and the Linnik microscopic interference configuration, the nonlinear phase components of the spectral interferometric signal were analyzed for film thickness measurement. The spectral interferometric signal was obtained using a Linnik microscopic white-light spectral interferometer, which includes the nonlinear phase components associated with the effective thickness, the nonlinear phase error caused by the double-objective lens, and the nonlinear phase of the thin film itself. To determine the influence of the effective thickness, a wavelength-correction method was proposed that converts the effective thickness into a constant value; the nonlinear phase caused by the effective thickness can then be determined and subtracted from the total nonlinear phase. A method for the extraction of the nonlinear phase error caused by the double-objective lens was also proposed. Accurate thickness measurement of a thin film can be achieved by fitting the nonlinear phase of the thin film after removal of the nonlinear phase caused by the effective thickness and by the nonlinear phase error caused by the double-objective lens. The experimental results demonstrated that both the wavelength-correction method and the extraction method for the nonlinear phase error caused by the double-objective lens improve the accuracy of film thickness measurements.
Kuikka, Liisa; Pitkälä, Kaisu
2014-01-01
Objective. To study coping differences between young and experienced GPs in primary care who experience medical errors and uncertainty. Design. Questionnaire-based survey (self-assessment) conducted in 2011. Setting. Finnish primary practice offices in Southern Finland. Subjects. Finnish GPs engaged in primary health care from two different respondent groups: young (working experience ≤ 5 years, n = 85) and experienced (working experience > 5 years, n = 80). Main outcome measures. Outcome measures included experiences and attitudes expressed by the included participants towards medical errors and tolerance of uncertainty, their coping strategies, and factors that may influence (positively or negatively) sources of errors. Results. In total, 165/244 GPs responded (response rate: 68%). Young GPs expressed significantly more often fear of committing a medical error (70.2% vs. 48.1%, p = 0.004) and admitted more often than experienced GPs that they had committed a medical error during the past year (83.5% vs. 68.8%, p = 0.026). Young GPs were less prone to apologize to a patient for an error (44.7% vs. 65.0%, p = 0.009) and found, more often than their more experienced colleagues, on-site consultations and electronic databases useful for avoiding mistakes. Conclusion. Experienced GPs seem to better tolerate uncertainty and also seem to fear medical errors less than their young colleagues. Young and more experienced GPs use different coping strategies for dealing with medical errors. Implications. When GPs become more experienced, they seem to get better at coping with medical errors. Means to support these skills should be studied in future research. PMID:24914458
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Charles N.; Bucholtz, Anthony; Jonsson, Haf
2010-04-14
Significant errors occur in downwelling shortwave irradiance measurements made on moving platforms due to tilt from horizontal because, when the sun is not completely blocked by overhead cloud, the downwelling shortwave irradiance has a prominent directional component from the direct sun. A-priori knowledge of the partitioning between the direct and diffuse components of the total shortwave irradiance is needed to properly apply a correction for tilt. This partitioning information can be adequately provided using a newly available commercial radiometer that produces reasonable measurements of the total and diffuse shortwave irradiance, and by subtraction the direct shortwave irradiance, with no moving parts and regardless of azimuthal orientation. We have developed methodologies for determining the constant pitch and roll offsets of the radiometers for aircraft applications, and for applying a tilt correction to the total shortwave irradiance data. Results suggest that the methodology is effective for tilt up to +/-10°, with 90% of the data corrected to within 10 Wm-2, at least for clear-sky data. Without a proper tilt correction, even data limited to 5° of tilt, as is typical current practice, still exhibit large errors, greater than 100 Wm-2 in some cases. Given the low cost, low weight, and low power consumption of the SPN1 total and diffuse radiometer, opportunities previously excluded for moving platform measurements such as small Unmanned Aerial Vehicles and solar powered buoys now become feasible using our methodology. The increase in measurement accuracy is important, given current concerns over long-term climate variability and change especially over the 70% of the Earth's surface covered by ocean where long-term records of these measurements are sorely needed and must be made on ships and buoys.
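The tilt-correction idea described above, where only the direct-beam part of the measured total is rescaled to the horizontal using the direct/diffuse partition, can be sketched as follows. The function and the angles/irradiances used in the example call are assumptions for illustration, not the paper's aircraft data or exact correction procedure.

```python
import numpy as np

# Minimal sketch of a tilt correction for downwelling shortwave irradiance on
# a moving platform: the diffuse part is left unchanged and only the direct
# beam is rescaled from the tilted sensor plane to the horizontal plane.
def tilt_correct(total, diffuse, sza_deg, incidence_deg):
    """Return total irradiance corrected to a horizontal plane.

    total, diffuse : measured total and diffuse shortwave irradiance (W m-2)
    sza_deg        : solar zenith angle (deg)
    incidence_deg  : angle between the sun and the tilted sensor normal (deg)
    """
    direct_tilted = total - diffuse
    direct_horizontal = direct_tilted * (np.cos(np.radians(sza_deg)) /
                                         np.cos(np.radians(incidence_deg)))
    return diffuse + direct_horizontal

# Illustrative values, not flight data.
print(tilt_correct(total=800.0, diffuse=120.0, sza_deg=40.0, incidence_deg=45.0))
```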
Beasley, J M; Jung, M; Tasevska, N; Wong, W W; Siega-Riz, A M; Sotres-Alvarez, D; Gellman, M D; Kizer, J R; Shaw, P A; Stamler, J; Stoutenberg, M; Van Horn, L; Franke, A A; Wylie-Rosett, J; Mossavar-Rahmani, Y
2016-12-01
Measurement error in self-reported total sugars intake may obscure associations between sugars consumption and health outcomes, and the sum of 24 h urinary sucrose and fructose may serve as a predictive biomarker of total sugars intake. The Study of Latinos: Nutrition & Physical Activity Assessment Study (SOLNAS) was an ancillary study to the Hispanic Community Health Study/Study of Latinos (HCHS/SOL) cohort. Doubly labelled water and 24 h urinary sucrose and fructose were used as biomarkers of energy and sugars intake, respectively. Participants' diets were assessed by up to three 24 h recalls (88 % had two or more recalls). Procedures were repeated approximately 6 months after the initial visit among a subset of ninety-six participants. Four centres (Bronx, NY; Chicago, IL; Miami, FL; San Diego, CA) across the USA. Men and women (n 477) aged 18-74 years. The geometric mean of total sugars was 167·5 (95 % CI 154·4, 181·7) g/d for the biomarker-predicted and 90·6 (95 % CI 87·6, 93·6) g/d for the self-reported total sugars intake. Self-reported total sugars intake was not correlated with biomarker-predicted sugars intake (r=-0·06, P=0·20, n 450). Among the reliability sample (n 90), the reproducibility coefficient was 0·59 for biomarker-predicted and 0·20 for self-reported total sugars intake. Possible explanations for the lack of association between biomarker-predicted and self-reported sugars intake include measurement error in self-reported diet, high intra-individual variability in sugars intake, and/or urinary sucrose and fructose may not be a suitable proxy for total sugars intake in this study population.
Beasley, JM; Jung, M; Tasevska, N; Wong, WW; Siega-Riz, AM; Sotres-Alvarez, D; Gellman, MD; Kizer, JR; Shaw, PA; Stamler, J; Stoutenberg, M; Van Horn, L; Franke, AA; Wylie-Rosett, J; Mossavar-Rahmani, Y
2017-01-01
Objective Measurement error in self-reported total sugars intake may obscure associations between sugars consumption and health outcomes, and the sum of 24-hr urinary sucrose and fructose may serve as a predictive biomarker of total sugars intake. Design The Study of Latinos: Nutrition & Physical Activity Assessment Study (SOLNAS) was an ancillary study to the Hispanic Community Health Study/Study of Latinos (HCHS/SOL) cohort. Doubly labeled water (DLW) and 24-hr urinary sucrose and fructose were used as biomarkers of energy and sugars intake, respectively. Participants' diets were assessed by up to three 24-hr recalls (88% had two or more recalls). Procedures were repeated approximately six months after the initial visit among a subset of 96 participants. Setting Four centers (Bronx, NY; Chicago, IL; Miami, FL; San Diego, CA) across the United States. Subjects 477 men and women aged 18–74 years. Results The geometric mean of total sugars intake was 167.5 (95% CI: 154.4–181.7) g/day for the biomarker-predicted and 90.6 (95% CI: 87.6–93.6) g/day for the self-reported total sugars intake. Self-reported total sugars intake was not correlated with biomarker-predicted sugars intake (r=−0.06, P=0.20, n=450). Among the reliability sample (n=90), the reproducibility coefficient was 0.59 for biomarker-predicted and 0.20 for self-reported total sugars intake. Conclusions Possible explanations for the lack of association between biomarker-predicted and self-reported sugars intake include measurement error in self-reported diet, high intra-individual variability in sugars intake, and/or urinary sucrose and fructose may not be a suitable proxy for total sugars intake in this study population. PMID:27339078
Global horizontal irradiance clear sky models : implementation and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Joshua S.; Hansen, Clifford W.; Reno, Matthew J.
2012-03-01
Clear sky models estimate the terrestrial solar radiation under a cloudless sky as a function of the solar elevation angle, site altitude, aerosol concentration, water vapor, and various atmospheric conditions. This report provides an overview of a number of global horizontal irradiance (GHI) clear sky models from very simple to complex. Validation of clear-sky models requires comparison of model results to measured irradiance during clear-sky periods. To facilitate validation, we present a new algorithm for automatically identifying clear-sky periods in a time series of GHI measurements. We evaluate the performance of selected clear-sky models using measured data from 30 different sites, totaling about 300 site-years of data. We analyze the variation of these errors across time and location. In terms of error averaged over all locations and times, we found that complex models that correctly account for all the atmospheric parameters are slightly more accurate than other models, but, primarily at low elevations, comparable accuracy can be obtained from some simpler models. However, simpler models often exhibit errors that vary with time of day and season, whereas the errors for complex models vary less over time.
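As an example of the "very simple" end of the model spectrum discussed above, the sketch below implements a Haurwitz-type clear-sky GHI model that depends only on the solar zenith angle; the coefficient values are the commonly quoted ones and should be treated as an assumption rather than the report's implementation.

```python
import numpy as np

# Haurwitz-type clear-sky GHI model: GHI depends only on the solar zenith
# angle, so it ignores the aerosol, water-vapor and altitude inputs used by
# more complex models. Coefficients are the commonly quoted values (assumed).
def haurwitz_ghi(zenith_deg):
    cosz = np.cos(np.radians(np.asarray(zenith_deg, dtype=float)))
    ghi = 1098.0 * cosz * np.exp(-0.059 / np.clip(cosz, 1e-6, None))
    return np.where(cosz > 0, ghi, 0.0)   # zero below the horizon

print(haurwitz_ghi([0, 30, 60, 85]))  # W m-2 at a few zenith angles
```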
Floré, Katelijne M J; Fiers, Tom; Delanghe, Joris R
2008-01-01
In recent years a number of point-of-care testing (POCT) glucometers have been introduced on the market. We investigated the analytical variability (lot-to-lot variation, calibration error, inter-instrument and inter-operator variability) of glucose POCT systems in a university hospital environment and compared these results with the analytical needs required for tight glucose monitoring. The reference hexokinase method was compared to different POCT systems based on glucose oxidase (blood gas instruments) or glucose dehydrogenase (handheld glucometers). Based upon daily internal quality control data, total errors were calculated for the various glucose methods and the analytical variability of the glucometers was estimated. The total error of the glucometers exceeded by far the desirable analytical specifications (based on a biological variability model). Lot-to-lot variation, inter-instrument variation and inter-operator variability contributed approximately equally to total variance. As the distribution of hematocrit values in a hospital environment is broad, converting blood glucose into plasma values using a fixed factor further increases variance. The percentage of outliers exceeded the ISO 15197 criteria in a broad glucose concentration range. Total analytical variation of handheld glucometers is larger than expected. Clinicians should be aware that the variability of glucose measurements obtained by blood gas instruments is lower than results obtained with handheld glucometers on capillary blood.
Error-related brain activity predicts cocaine use after treatment at 3-month follow-up.
Marhe, Reshmi; van de Wetering, Ben J M; Franken, Ingmar H A
2013-04-15
Relapse after treatment is one of the most important problems in drug dependency. Several studies suggest that lack of cognitive control is one of the causes of relapse. In this study, a relatively new electrophysiologic index of cognitive control, the error-related negativity, is investigated to examine its suitability as a predictor of relapse. The error-related negativity was measured in 57 cocaine-dependent patients during their first week in detoxification treatment. Data from 49 participants were used to predict cocaine use at 3-month follow-up. Cocaine use at follow-up was measured by means of self-reported days of cocaine use in the last month verified by urine screening. A multiple hierarchical regression model was used to examine the predictive value of the error-related negativity while controlling for addiction severity and self-reported craving in the week before treatment. The error-related negativity was the only significant predictor in the model and added 7.4% of explained variance to the control variables, resulting in a total of 33.4% explained variance in the prediction of days of cocaine use at follow-up. A reduced error-related negativity measured during the first week of treatment was associated with more days of cocaine use at 3-month follow-up. Moreover, the error-related negativity was a stronger predictor of recent cocaine use than addiction severity and craving. These results suggest that underactive error-related brain activity might help to identify patients who are at risk of relapse as early as in the first week of detoxification treatment.
Evaluation of Analytical Errors in a Clinical Chemistry Laboratory: A 3 Year Experience
Sakyi, AS; Laing, EF; Ephraim, RK; Asibey, OF; Sadique, OK
2015-01-01
Background: Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there is an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. Aim: We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases and discussed strategies pertinent to our settings to minimize their occurrence. Materials and Methods: We described the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). Results: A total of 589,510 tests was performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests reduced significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90) along with the years. Conclusion: Analytical errors are embedded within our total process setup especially pre-analytical and post-analytical phases. Strategic measures including quality assessment programs for staff involved in pre-analytical processes should be intensified. PMID:25745569
Neutron-Star Radius from a Population of Binary Neutron Star Mergers.
Bose, Sukanta; Chakravarti, Kabir; Rezzolla, Luciano; Sathyaprakash, B S; Takami, Kentaro
2018-01-19
We show how gravitational-wave observations with advanced detectors of tens to several tens of neutron-star binaries can measure the neutron-star radius with an accuracy of several to a few percent, for mass and spatial distributions that are realistic, and with none of the sources located within 100 Mpc. We achieve such an accuracy by combining measurements of the total mass from the inspiral phase with those of the compactness from the postmerger oscillation frequencies. For estimating the measurement errors of these frequencies, we utilize analytical fits to postmerger numerical relativity waveforms in the time domain, obtained here for the first time, for four nuclear-physics equations of state and a couple of values for the mass. We further exploit quasiuniversal relations to derive errors in compactness from those frequencies. Measuring the average radius to well within 10% is possible for a sample of 100 binaries distributed uniformly in volume between 100 and 300 Mpc, so long as the equation of state is not too soft or the binaries are not too heavy. We also give error estimates for the Einstein Telescope.
A complete representation of uncertainties in layer-counted paleoclimatic archives
NASA Astrophysics Data System (ADS)
Boers, Niklas; Goswami, Bedartha; Ghil, Michael
2017-09-01
Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records - such as ice cores, sediments, corals, or tree rings - as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. Our method is next applied to deriving the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5-52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.
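A rough Monte Carlo sketch of how per-layer counting errors propagate into absolute age uncertainty is given below. The per-layer probabilities of missing or double-counting a layer are placeholders, not the NGRIP or Suigetsu counting statistics, and the sketch ignores the full Bayesian treatment used in the paper.

```python
import numpy as np

# Hedged sketch: independent per-layer counting errors accumulate down a
# layer-counted core, so the uncertainty of the absolute age grows with depth.
rng = np.random.default_rng(1)
n_layers, n_sims = 5000, 2000
p_missed, p_spurious = 0.01, 0.01   # placeholder per-layer error probabilities

# For each counted layer: nominal duration 1 yr, plus an extra (uncounted)
# year with probability p_missed, minus a year if the counted layer was spurious.
extra = rng.binomial(1, p_missed, size=(n_sims, n_layers))
spurious = rng.binomial(1, p_spurious, size=(n_sims, n_layers))
age = np.cumsum(1 + extra - spurious, axis=1)   # simulated true age at each layer

print("deepest counted layer: mean age %.0f yr, SD %.0f yr"
      % (age[:, -1].mean(), age[:, -1].std()))
```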
Zhang, Tangtang; Wen, Jun; van der Velde, Rogier; Meng, Xianhong; Li, Zhenchao; Liu, Yuanyong; Liu, Rong
2008-01-01
The total atmospheric water vapor content (TAWV) and land surface temperature (LST) play important roles in meteorology, hydrology, ecology and some other disciplines. In this paper, the ENVISAT/AATSR (Advanced Along-Track Scanning Radiometer) thermal data are used to estimate the TAWV and LST over the Loess Plateau in China by using a practical split window algorithm. The distribution of the TAWV is in accord with that of the MODIS TAWV products, which indicates that the estimation of the total atmospheric water vapor content is reliable. Validations of the LST by comparison with ground measurements indicate that the maximum absolute deviation, the maximum relative error and the average relative error are 4.0 K, 11.8% and 5.0%, respectively, which shows that the retrievals are credible; this algorithm can provide a new way to estimate the LST from AATSR data. PMID:27879795
Rate and power efficient image compressed sensing and transmission
NASA Astrophysics Data System (ADS)
Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan
2016-01-01
This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
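For the first stage, the classical Lagrangian/KKT solution for allocating a fixed bit budget across sub-bands with different variances is the log-variance rule; a hedged sketch is given below. The sub-band variances are placeholders, and the clip-and-renormalize step is a simplification of the full active-set treatment.

```python
import numpy as np

# Hedged sketch of optimal bit allocation across wavelet sub-bands: for
# Gaussian sources the minimum-distortion allocation under a total bit budget
# is the classic log-variance rule derived from the Lagrangian/KKT conditions.
def allocate_bits(variances, total_bits):
    variances = np.asarray(variances, dtype=float)
    n = len(variances)
    geo_mean = np.exp(np.mean(np.log(variances)))
    b = total_bits / n + 0.5 * np.log2(variances / geo_mean)
    # Practical fix-up: clip negative allocations and re-normalize to the
    # budget (a simplification of the full KKT active-set treatment).
    b = np.clip(b, 0.0, None)
    return b * total_bits / b.sum()

# Placeholder sub-band variances and a 16-bit budget.
print(allocate_bits([10.0, 4.0, 1.0, 0.25], total_bits=16))
```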
The Effects of Turbulence on the Measurements of Five-Hole Probes
NASA Astrophysics Data System (ADS)
Diebold, Jeffrey Michael
The primary goals of this research were to quantify the effects of turbulence on the measurements of five-hole pressure probes (5HP) and to develop a model capable of predicting the response of a 5HP to turbulence. The five-hole pressure probe is a commonly used device in experimental fluid dynamics and aerodynamics. By measuring the pressure at the five pressure ports located on the tip of the probe it is possible to determine the total pressure, static pressure and the three components of velocity at a point in the flow. Previous research has demonstrated that the measurements of simple pressure probes such as Pitot probes are significantly influenced by the presence of turbulence. Turbulent velocity fluctuations contaminate the measurement of pressure due to the nonlinear relationship between pressure and velocity as well as the angular response characteristics of the probe. Despite our understanding of the effects of turbulence on Pitot and static pressure probes, relatively little is known about the influence of turbulence on five-hole probes. This study attempts to fill this gap in our knowledge by using advanced experimental techniques to quantify these turbulence-induced errors and by developing a novel method of predicting the response of a five-hole probe to turbulence. A few studies have attempted to quantify turbulence-induced errors in five-hole probe measurements but they were limited by their inability to accurately measure the total and static pressure in the turbulent flow. The current research utilizes a fast-response five-hole probe (FR5HP) in order to accurately quantify the effects of turbulence on different standard five-hole probes (Std5HP). The FR5HP is capable of measuring the instantaneous flowfield and unlike the Std5HP the FR5HP measurements are not contaminated by the turbulent velocity fluctuations. Measurements with the FR5HP and two different Std5HPs were acquired in the highly turbulent wakes of 2D and 3D cylinders in order to quantify the turbulence-induced errors in Std5HP measurements. The primary contribution of this work is the development and validation of a simulation method to predict the measurements of a Std5HP in an arbitrary turbulent flow. This simulation utilizes a statistical approach to estimating the pressure at each port on the tip of the probe. The angular response of the probe is modeled using experimental calibration data for each five-hole probe. The simulation method is validated against the experimental measurements of the Std5HPs, and then used to study how the characteristics of the turbulent flowfield influence the measurements of the Std5HPs. It is shown that total pressure measured by a Std5HP is increased by axial velocity fluctuations but decreased by the transverse fluctuations. The static pressure was shown to be very sensitive to the transverse fluctuations while the axial fluctuations had a negligible effect. As with Pitot probes, the turbulence-induced errors in the Std5HPs measurements were dependent on both the properties of the turbulent flow and the geometry of the probe tip. It is then demonstrated that this simulation method can be used to correct the measurements of a Std5HP in a turbulent flow if the characteristics of the turbulence are known. Finally, it is demonstrated that turbulence-induced errors in Std5HP measurements can have a substantial effect on the determination of the profile and vortex-induced drag from measurements in the wake of a 3D body.
The results showed that while the calculation of both drag components was influenced by turbulence-induced errors, the largest effect was on the determination of vortex-induced drag.
NASA Technical Reports Server (NTRS)
Beck, S. M.
1975-01-01
A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im
2017-02-01
The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.
NASA Astrophysics Data System (ADS)
Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko
2014-05-01
Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands. For most of the country this led to over 15 hours of near-continuous precipitation, which resulted in total event accumulations exceeding 150 mm in the eastern part of the Netherlands. Such accumulations belong to the largest sums ever recorded in this country and gave rise to local flooding. Measuring precipitation by weather radar within such mesoscale convective systems is known to be a challenge, since measurements are affected by multiple sources of error. For the current event the operational weather radar rainfall product only estimated about 30% of the actual amount of precipitation as measured by rain gauges. In the current presentation we will try to identify what gave rise to such large underestimations. In general weather radar measurement errors can be subdivided into two different groups: 1) errors affecting the volumetric reflectivity measurements taken, and 2) errors related to the conversion of reflectivity values into rainfall intensity and attenuation estimates. To correct for the first group of errors, the quality of the weather radar reflectivity data was improved by successively correcting for 1) clutter and anomalous propagation, 2) radar calibration, 3) wet radome attenuation, 4) signal attenuation and 5) the vertical profile of reflectivity. Such consistent corrections are generally not performed by operational meteorological services. Results show a large improvement in the quality of the precipitation data; however, still only ~65% of the actual observed accumulations was estimated. To further improve the quality of the precipitation estimates, the second group of errors is corrected for by making use of disdrometer measurements taken in close vicinity of the radar. Based on these data the parameters of a normalized drop size distribution are estimated for the total event as well as for each precipitation type separately (convective, stratiform and undefined). These are then used to obtain coherent parameter sets for the radar reflectivity-rainfall rate (Z-R) and radar reflectivity-attenuation (Z-k) relationships, specifically applicable to this event. By applying a single parameter set to correct for both sources of errors, the quality of the rainfall product improves further, leading to >80% of the observed accumulations. However, differentiating between precipitation types yields no better results than using the operational relationships. This leads to the question: how representative are local disdrometer observations for correcting large-scale weather radar measurements? In order to tackle this question a Monte Carlo approach was used to generate >10000 sets of the normalized drop size distribution parameters and to assess their impact on the estimated precipitation amounts. Results show that a large number of parameter sets result in improved weather radar precipitation estimates that closely resemble the observations. However, these optimal sets vary considerably compared to those obtained from the local disdrometer measurements.
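A much-simplified sketch of the Monte Carlo idea, perturbing the Z-R parameters (standing in here for the normalized drop size distribution parameter sets) and checking which sets reproduce a gauge accumulation, is shown below. The reflectivity series, parameter ranges and gauge total are synthetic placeholders.

```python
import numpy as np

# Hedged, much-simplified sketch: sample many Z = a R^b parameter sets and
# check which ones reproduce the gauge accumulation for a synthetic event.
rng = np.random.default_rng(2)
dbz = rng.normal(35.0, 8.0, size=15 * 60)       # one value per minute, ~15 h event
Z = 10.0 ** (dbz / 10.0)                        # reflectivity factor (mm^6 m^-3)
gauge_total = 150.0                             # mm, observed accumulation (assumed)

n_sets = 10000
a = rng.uniform(100.0, 500.0, n_sets)           # sampled Z-R prefactors
b = rng.uniform(1.2, 1.8, n_sets)               # sampled Z-R exponents

# Event accumulation for each parameter set (1-min rates summed, in mm).
acc = np.array([np.sum((Z / ai) ** (1.0 / bi)) / 60.0 for ai, bi in zip(a, b)])
good = np.abs(acc - gauge_total) / gauge_total < 0.1
print(f"{good.mean():.1%} of parameter sets reproduce the gauge total within 10%")
```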
NASA Astrophysics Data System (ADS)
Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.
2015-10-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we are applying a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
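A hedged sketch of the cross-covariance-based error estimate described above is given below: the flux is the cross-covariance at a prescribed time lag, and its total random error is taken from the spread of the cross-covariance far away from that lag. The synthetic series, window choices and noise levels are illustrative, not the analysers or data sets used in the paper.

```python
import numpy as np

# Synthetic stand-ins for vertical wind (w) and a noisy concentration (c).
rng = np.random.default_rng(3)
n, dt = 36000, 0.1                       # 1 h of data at 10 Hz
w = rng.normal(0.0, 0.3, n)
c = 0.5 * w + rng.normal(0.0, 1.0, n)    # concentration with added sensor noise

def crosscov(w, c, lag):
    """Cross-covariance of w and c at an integer sample lag."""
    wp, cp = w - w.mean(), c - c.mean()
    if lag >= 0:
        return np.mean(wp[:n - lag] * cp[lag:])
    return np.mean(wp[-lag:] * cp[:n + lag])

flux = crosscov(w, c, lag=0)             # flux at a prescribed (known) time lag

# Spread of the cross-covariance in windows far from the lag (here +/-150-180 s)
# gives an estimate of the total random error of the flux.
far_lags = np.r_[np.arange(-1800, -1500), np.arange(1500, 1800)]
sigma_flux = np.std([crosscov(w, c, int(l)) for l in far_lags])
print(f"flux = {flux:.3f}, random error (1 sigma) = {sigma_flux:.4f}")
```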
NASA Astrophysics Data System (ADS)
Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.
2015-03-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. We are here applying a consistent approach based on auto- and cross-covariance functions to quantifying the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time-lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time-lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining datasets from several analysers and using simulations we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time-lag eliminates these effects (provided the time-lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time-lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
Spörri, Jörg; Schiefermüller, Christian; Müller, Erich
2016-01-01
In the laboratory, optoelectronic stereophotogrammetry is one of the most commonly used motion capture systems; particularly, when position- or orientation-related analyses of human movements are intended. However, for many applied research questions, field experiments are indispensable, and it is not a priori clear whether optoelectronic stereophotogrammetric systems can be expected to perform similarly to in-lab experiments. This study aimed to assess the instrumental errors of kinematic data collected on a ski track using optoelectronic stereophotogrammetry, and to investigate the magnitudes of additional skiing-specific errors and soft tissue/suit artifacts. During a field experiment, the kinematic data of different static and dynamic tasks were captured by the use of 24 infrared-cameras. The distances between three passive markers attached to a rigid bar were stereophotogrammetrically reconstructed and, subsequently, were compared to the manufacturer-specified exact values. While at rest or skiing at low speed, the optoelectronic stereophotogrammetric system's accuracy and precision for determining inter-marker distances were found to be comparable to those known for in-lab experiments (< 1 mm). However, when measuring a skier's kinematics under "typical" skiing conditions (i.e., high speeds, inclined/angulated postures and moderate snow spraying), additional errors were found to occur for distances between equipment-fixed markers (total measurement errors: 2.3 ± 2.2 mm). Moreover, for distances between skin-fixed markers, such as the anterior hip markers, additional artifacts were observed (total measurement errors: 8.3 ± 7.1 mm). In summary, these values can be considered sufficient for the detection of meaningful position- or orientation-related differences in alpine skiing. However, it must be emphasized that the use of optoelectronic stereophotogrammetry on a ski track is seriously constrained by limited practical usability, small-sized capture volumes and the occurrence of extensive snow spraying (which results in marker obscuration). The latter limitation possibly might be overcome by the use of more sophisticated cluster-based marker sets.
Optically powered oil tank multichannel detection system with optical fiber link
NASA Astrophysics Data System (ADS)
Yu, Zhijing
1998-08-01
A novel optically powered system for the integrated measurement of oil tank parameters is presented. To realize optically powered, micro-power, multichannel, multiparameter detection, the system adopts PWM/PPM modulation, ratio measurement, time division multiplexing and pulse width division multiplexing techniques. Moreover, the system uses a special pulse width discriminator and a single-chip microcomputer to accomplish signal pulse separation, PPM/PWM signal demodulation, error correction of overlapping pulses and data processing. The new transducer achieves good performance: an experimental transmission distance of 500 m, total probe power consumption of less than 150 μW, and measurement errors of +/- 0.5 degrees C and +/- 0.2 percent FS. The measurement accuracy of the liquid level and reserves is mainly determined by the pressure accuracy. Finally, some points of the experiment are given.
Ono, Yohei; Kashihara, Rina; Yasojima, Nobutoshi; Kasahara, Hideki; Shimizu, Yuka; Tamura, Kenichi; Tsutsumi, Kaori; Sutherland, Kenneth; Koike, Takao; Kamishima, Tamotsu
2016-06-01
Accurate evaluation of joint space width (JSW) is important in the assessment of rheumatoid arthritis (RA). In clinical radiography of bilateral hands, the oblique incidence of X-rays is unavoidable, which may cause perceptional or measurement error of JSW. The objective of this study was to examine whether tomosynthesis, a recently developed modality, can facilitate a more accurate evaluation of JSW than radiography under the condition of oblique incidence of X-rays. We investigated quantitative errors derived from the oblique incidence of X-rays by imaging phantoms simulating various finger joint spaces using radiographs and tomosynthesis images. We then compared the qualitative results of the modified total Sharp score of a total of 320 joints from 20 patients with RA between these modalities. A quantitative error was prominent when the location of the phantom was shifted along the JSW direction. Modified total Sharp scores of tomosynthesis images were significantly higher than those of radiography, that is to say JSW was regarded as narrower in tomosynthesis than in radiography when finger joints were located where the oblique incidence of X-rays is expected in the JSW direction. Tomosynthesis can facilitate accurate evaluation of JSW in finger joints of patients with RA, even with oblique incidence of X-rays. Accurate evaluation of JSW is necessary for the management of patients with RA. Through phantom and clinical studies, we demonstrate that tomosynthesis may achieve more accurate evaluation of JSW.
1983-10-01
Hypophosphatemia was exaggerated, possibly because of respiratory alkalosis. Phosphate losses in urine and sweat were minimal, preventing appreciable loss... respiratory gases, the newer modifications for simplification of the measurements, and the total errors that are anticipated in its use. Data are presented... respiratory requirements at the altitude of the V icecap (7,000 feet) with that of sea level (actually 165 feet). (3) Energy metabolism was measured for
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
2017-06-13
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
Evaluation of an in-practice wet-chemistry analyzer using canine and feline serum samples.
Irvine, Katherine L; Burt, Kay; Papasouliotis, Kostas
2016-01-01
A wet-chemistry biochemical analyzer was assessed for in-practice veterinary use. Its small size may mean a cost-effective method for low-throughput in-house biochemical analyses for first-opinion practice. The objectives of our study were to determine imprecision, total observed error, and acceptability of the analyzer for measurement of common canine and feline serum analytes, and to compare clinical sample results to those from a commercial reference analyzer. Imprecision was determined by within- and between-run repeatability for canine and feline pooled samples, and manufacturer-supplied quality control material (QCM). Total observed error (TEobs) was determined for pooled samples and QCM. Performance was assessed for canine and feline pooled samples by sigma metric determination. Agreement and errors between the in-practice and reference analyzers were determined for canine and feline clinical samples by Bland-Altman and Deming regression analyses. Within- and between-run precision was high for most analytes, and TEobs(%) was mostly lower than total allowable error. Performance based on sigma metrics was good (σ > 4) for many analytes and marginal (σ > 3) for most of the remainder. Correlation between the analyzers was very high for most canine analytes and high for most feline analytes. Between-analyzer bias was generally attributed to high constant error. The in-practice analyzer showed good overall performance, with only calcium and phosphate analyses identified as significantly problematic. Agreement for most analytes was insufficient for transposition of reference intervals, and we recommend that in-practice-specific reference intervals be established in the laboratory. © 2015 The Author(s).
2014-01-01
Background: Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typically available surrogate exposures. Methods: Daily personal and ambient PM2.5, and when available sulfate, measurements were compiled from nine cities, over 2 to 12 days. True exposure was defined as personal exposure to PM2.5 of ambient origin. Since PM2.5 of ambient origin could only be determined for five cities, personal exposure to total PM2.5 was also considered. Surrogate exposures were estimated as ambient PM2.5 at the nearest monitor or predicted outside subjects’ homes. We estimated calibration coefficients by regressing true on surrogate exposures in random effects models. Results: When monthly-averaged personal PM2.5 of ambient origin was used as the true exposure, calibration coefficients equaled 0.31 (95% CI: 0.14, 0.47) for nearest monitor and 0.54 (95% CI: 0.42, 0.65) for outdoor home predictions. Between-city heterogeneity was not found for outdoor home PM2.5 for either true exposure. Heterogeneity was significant for nearest monitor PM2.5, for both true exposures, but not after adjusting for city-average motor vehicle number for total personal PM2.5. Conclusions: Calibration coefficients were <1, consistent with previously reported chronic health risks being under-estimated when nearest monitor exposures are used and ambient concentrations are the exposure of interest. Calibration coefficients were closer to 1 for outdoor home predictions, likely reflecting less spatial error. Further research is needed to determine how our findings can be incorporated in future health studies. PMID:24410940
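As an illustration of the calibration-coefficient idea described above, the sketch below fits a random-effects (mixed) regression of personal exposure on a surrogate ambient exposure, with city as the grouping factor. The synthetic data, column names, and model call are assumptions for illustration, not the study's actual dataset or code.

```python
# Illustrative sketch: calibration coefficient as the fixed-effect slope of a
# random-intercept/random-slope regression of "true" personal exposure on a surrogate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
cities = np.repeat([f"city{i}" for i in range(9)], 40)          # nine hypothetical cities
ambient = rng.normal(15, 5, cities.size)                        # surrogate: nearest-monitor PM2.5
city_slope = rng.normal(0.5, 0.1, 9)[pd.factorize(cities)[0]]   # city-specific attenuation
personal = city_slope * ambient + rng.normal(0, 2, cities.size) # "true" ambient-origin exposure
df = pd.DataFrame({"city": cities, "ambient": ambient, "personal": personal})

model = smf.mixedlm("personal ~ ambient", data=df,
                    groups=df["city"], re_formula="~ambient")
result = model.fit()
# Fixed-effect slope on 'ambient' is the calibration coefficient (values < 1 imply attenuation).
print("Calibration coefficient:", result.params["ambient"])
```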
Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M
2011-01-01
Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig through which water circulated at different constant rates with ports to insert catheters into a flow chamber was assembled. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single and ± 13.0% for triplicate reading, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
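A minimal sketch of how the quoted precision components combine, assuming the random and systematic parts add in quadrature and that averaging repeated readings reduces only the random part; the input values are the figures quoted in the abstract above.

```python
# Sketch: total precision error from independent random and systematic components.
import math

random_error = 10.0      # +/-%, 95% limits of the between-readings (random) component
systematic_error = 11.6  # +/-%, between-catheter-systems (systematic) component

def total_precision_error(random_err, systematic_err, n_readings=1):
    # Averaging n readings reduces only the random component, by sqrt(n).
    return math.sqrt((random_err / math.sqrt(n_readings)) ** 2 + systematic_err ** 2)

print(round(total_precision_error(random_error, systematic_error, 1), 1))  # ~15.3 (single reading)
print(round(total_precision_error(random_error, systematic_error, 3), 1))  # ~13.0 (triplicate)
```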
Wireless Local Area Network Performance Inside Aircraft Passenger Cabins
NASA Technical Reports Server (NTRS)
Whetten, Frank L.; Soroker, Andrew; Whetten, Dennis A.; Whetten, Frank L.; Beggs, John H.
2005-01-01
An examination of IEEE 802.11 wireless network performance within an aircraft fuselage is performed. This examination measured the propagated RF power along the length of the fuselage, and the associated network performance: the link speed, total throughput, and packet losses and errors. A total of four airplanes (one single-aisle and three twin-aisle) were tested with 802.11a, 802.11b, and 802.11g networks.
Error analysis of Dobson spectrophotometer measurements of the total ozone content
NASA Technical Reports Server (NTRS)
Holland, A. C.; Thomas, R. W. L.
1975-01-01
A study of techniques for measuring atmospheric ozone is reported. This study represents the second phase of a program designed to improve techniques for the measurement of atmospheric ozone. This phase of the program studied the sensitivity of Dobson direct sun measurements and the ozone amounts inferred from those measurements to variation in the atmospheric temperature profile. The study used the plane-parallel Monte Carlo model developed and tested under the initial phase of this program, and a series of standard model atmospheres.
Duff, W.R.D.; Björkman, K.M.; Kawalilak, C.E.; Kehrig, A.M.; Wiebe, S.; Kontulainen, S.
2017-01-01
Objectives: To define pQCT precision errors, least-significant-changes, and identify associated factors for bone outcomes at the radius and tibia in children. Methods: We obtained duplicate radius and tibia pQCT scans from 35 children (8-14yrs). We report root-mean-squared coefficient of variation (CV%RMS) and 95% limits-of-agreement to characterize repeatability across scan quality and least-significant-changes for bone outcomes at distal (total and trabecular area, content and density; and compressive bone strength) and shaft sites (total area and content; cortical area content, density and thickness; and torsional bone strength). We used Spearman’s rho to identify associations between CV% and time between measurements, child’s age or anthropometrics. Results: After excluding unanalyzable scans (6-10% of scans per bone site), CV%RMS ranged from 4% (total density) to 19% (trabecular content) at the distal radius, 4% (cortical content) to 8% (cortical thickness) at the radius shaft, 2% (total density) to 14% (trabecular content) at the distal tibia and from 2% (cortical content) to 6% (bone strength) at the tibia shaft. Precision errors were within 95% limits-of-agreement across scan quality. Age was associated (rho -0.4 to -0.5, p <0.05) with CV% at the tibia. Conclusion: Bone density outcomes and cortical bone properties appeared most precise (CV%RMS <5%) in children. PMID:28574412
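If a least-significant-change is derived from a CV%RMS precision error in the usual way (LSC = 1.96 × √2 × precision error, about 2.77×), the computation can be sketched as below; the CV values used are illustrative numbers in the reported range, not the study's exact outputs.

```python
# Illustrative sketch: least-significant-change (LSC, 95% confidence, two measurements)
# from a root-mean-square precision error, using the common 1.96*sqrt(2) convention.
import math

def least_significant_change(cv_rms_percent):
    return 1.96 * math.sqrt(2) * cv_rms_percent

for site, cv in {"distal radius total density": 4.0,     # example CV%RMS values only
                 "distal radius trabecular content": 19.0,
                 "tibia shaft cortical content": 2.0}.items():
    print(f"{site}: CV%RMS = {cv:.1f}%, LSC ~ {least_significant_change(cv):.1f}%")
```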
Measurement of reaeration coefficients for selected Florida streams
Hampson, P.S.; Coffin, J.E.
1989-01-01
A total of 29 separate reaeration coefficient determinations were performed on 27 subreaches of 12 selected Florida streams between October 1981 and May 1985. Measurements performed prior to June 1984 were made using the peak and area methods with ethylene and propane as the tracer gases. Later measurements utilized the steady-state method with propane as the only tracer gas. The reaeration coefficients ranged from 1.07 to 45.9 per day with a mean estimated probable error of ±16.7%. Ten predictive equations (compiled from the literature) were also evaluated using the measured coefficients. The most representative equation was one of the energy dissipation type with a standard error of 60.3%. Seven of the 10 predictive equations were modified using the measured coefficients and nonlinear regression techniques. The most accurate of the developed equations was also of the energy dissipation form and had a standard error of 54.9%. For 5 of the 13 subreaches in which both ethylene and propane were used, the ethylene data resulted in substantially larger reaeration coefficient values, which were rejected. In these reaches, ethylene concentrations were probably significantly affected by one or more electrophilic addition reactions known to occur in aqueous media. (Author's abstract)
Kin Tekce, Buket; Tekce, Hikmet; Aktas, Gulali; Uyeturk, Ugur
2016-01-01
Uncertainty of measurement is the numeric expression of the errors associated with all measurements taken in clinical laboratories. Serum creatinine concentration is the most common diagnostic marker for acute kidney injury. The goal of this study was to determine the effect of the uncertainty of measurement of serum creatinine concentrations on the diagnosis of acute kidney injury. We calculated the uncertainty of measurement of serum creatinine according to the Nordtest Guide. Retrospectively, we identified 289 patients who were evaluated for acute kidney injury. Of the total patient pool, 233 were diagnosed with acute kidney injury using the AKIN classification scheme and then were compared using statistical analysis. We determined nine probabilities of the uncertainty of measurement of serum creatinine concentrations. There was a statistically significant difference in the number of patients diagnosed with acute kidney injury when uncertainty of measurement was taken into consideration (first probability compared to the fifth p = 0.023 and first probability compared to the ninth p = 0.012). We found that the uncertainty of measurement for serum creatinine concentrations was an important factor for correctly diagnosing acute kidney injury. In addition, based on the AKIN classification scheme, minimizing the total allowable error levels for serum creatinine concentrations is necessary for the accurate diagnosis of acute kidney injury by clinicians.
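A minimal sketch of a Nordtest-style combined measurement uncertainty, assuming the within-laboratory reproducibility and bias components are combined in quadrature and expanded with a coverage factor of 2 (about 95% confidence); the component values below are hypothetical, not figures from the study.

```python
# Sketch: expanded uncertainty of a serum creatinine result, Nordtest-style combination.
import math

def expanded_uncertainty(u_rw, u_bias, k=2.0):
    # Combine reproducibility and bias components in quadrature, then expand with k.
    u_combined = math.sqrt(u_rw ** 2 + u_bias ** 2)
    return k * u_combined

u_rw = 2.1    # hypothetical within-lab reproducibility component, umol/L
u_bias = 1.5  # hypothetical bias component (e.g. from EQA or reference material), umol/L
print(f"Expanded uncertainty U ~ {expanded_uncertainty(u_rw, u_bias):.1f} umol/L")
```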
Radiological Image Compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung Benedict
The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512 have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement on the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
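One common definition of the normalized mean-square error divides the summed squared difference by the energy of the original image; whether the dissertation uses exactly this normalization is an assumption. A short sketch with synthetic data standing in for a digitized radiograph:

```python
# Sketch of a common NMSE definition for comparing an original image and its reconstruction.
import numpy as np

def nmse(original, reconstructed):
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    diff = original - reconstructed
    return np.sum(diff ** 2) / np.sum(original ** 2)

rng = np.random.default_rng(0)
img = rng.integers(0, 4096, size=(512, 512))       # synthetic 12-bit image
recon = img + rng.normal(0, 5, size=img.shape)     # stand-in for a lossy reconstruction
print(f"NMSE = {nmse(img, recon):.2e}")
```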
Errors in the Extra-Analytical Phases of Clinical Chemistry Laboratory Testing.
Zemlin, Annalise E
2018-04-01
The total testing process consists of various phases from the pre-preanalytical to the post-postanalytical phase, the so-called brain-to-brain loop. With improvements in analytical techniques and efficient quality control programmes, most laboratory errors now occur in the extra-analytical phases. There has been recent interest in these errors with numerous publications highlighting their effect on service delivery, patient care and cost. This interest has led to the formation of various working groups whose mission is to develop standardized quality indicators which can be used to measure the performance of service of these phases. This will eventually lead to the development of external quality assessment schemes to monitor these phases in agreement with ISO15189:2012 recommendations. This review focuses on potential errors in the extra-analytical phases of clinical chemistry laboratory testing, some of the studies performed to assess the severity and impact of these errors and processes that are in place to address these errors. The aim of this review is to highlight the importance of these errors for the requesting clinician.
Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.
2011-01-01
For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as per prescription. This study demonstrates dosimetric quality assurance of HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions were verified for positional accuracy, giving a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and a maximum error of 1.8 mm. Using a step size of 5 mm, reference isodose length (the length of the 100% isodose line) was verified for single and multiple catheters of the same and different source loadings. An error ≤1 mm was measured in 57% of tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed and 70% of the step size errors were below 1 mm, with a maximum of 1.2 mm. The step size ≤1 cm could not be verified by the IMatriXX as it could not resolve the peaks in the dose profile. PMID:21897562
Medication Administration Errors in Nursing Homes Using an Automated Medication Dispensing System
van den Bemt, Patricia M.L.A.; Idzinga, Jetske C.; Robertz, Hans; Kormelink, Dennis Groot; Pels, Neske
2009-01-01
Objective: To identify the frequency of medication administration errors as well as their potential risk factors in nursing homes using a distribution robot. Design: The study was a prospective, observational study conducted within three nursing homes in the Netherlands caring for 180 individuals. Measurements: Medication errors were measured using the disguised observation technique. Types of medication errors were described. The correlation between several potential risk factors and the occurrence of medication errors was studied to identify potential causes for the errors. Results: In total 2,025 medication administrations to 127 clients were observed. In these administrations 428 errors were observed (21.2%). The most frequently occurring types of errors were use of wrong administration techniques (especially incorrect crushing of medication and not supervising the intake of medication) and wrong time errors (administering the medication at least 1 h early or late). The potential risk factors female gender (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.05–1.83), ATC medication class antibiotics (OR 11.11; 95% CI 2.66–46.50), medication crushed (OR 7.83; 95% CI 5.40–11.36), number of dosages/day/client (OR 1.03; 95% CI 1.01–1.05), nursing home 2 (OR 3.97; 95% CI 2.86–5.50), medication not supplied by distribution robot (OR 2.92; 95% CI 2.04–4.18), time classes “7–10 am” (OR 2.28; 95% CI 1.50–3.47) and “10 am-2 pm” (OR 1.96; 1.18–3.27) and day of the week “Wednesday” (OR 1.46; 95% CI 1.03–2.07) are associated with a higher risk of administration errors. Conclusions: Medication administration in nursing homes is prone to many errors. This study indicates that the handling of the medication after removing it from the robot packaging may contribute to this high error frequency, which may be reduced by training of nurse attendants, by automated clinical decision support and by measures to reduce workload. PMID:19390109
Large Uncertainty in Estimating pCO2 From Carbonate Equilibria in Lakes
NASA Astrophysics Data System (ADS)
Golub, Malgorzata; Desai, Ankur R.; McKinley, Galen A.; Remucal, Christina K.; Stanley, Emily H.
2017-11-01
Most estimates of carbon dioxide (CO2) evasion from freshwaters rely on calculating partial pressure of aquatic CO2 (pCO2) from two out of three CO2-related parameters using carbonate equilibria. However, the pCO2 uncertainty has not been systematically evaluated across multiple lake types and equilibria. We quantified random errors in pH, dissolved inorganic carbon, alkalinity, and temperature from the North Temperate Lakes Long-Term Ecological Research site in four lake groups across a broad gradient of chemical composition. These errors were propagated onto pCO2 calculated from three carbonate equilibria, and for overlapping observations, compared against uncertainties in directly measured pCO2. The empirical random errors in CO2-related parameters were mostly below 2% of their median values. Resulting random pCO2 errors ranged from ±3.7% to ±31.5% of the median depending on alkalinity group and choice of input parameter pairs. Temperature uncertainty had a negligible effect on pCO2. When compared with direct pCO2 measurements, all parameter combinations produced biased pCO2 estimates with less than one third of total uncertainty explained by random pCO2 errors, indicating that systematic uncertainty dominates over random error. Multidecadal trend of pCO2 was difficult to reconstruct from uncertain historical observations of CO2-related parameters. Given poor precision and accuracy of pCO2 estimates derived from virtually any combination of two CO2-related parameters, we recommend direct pCO2 measurements where possible. To achieve consistently robust estimates of CO2 emissions from freshwater components of terrestrial carbon balances, future efforts should focus on improving accuracy and precision of CO2-related parameters (including direct pCO2) measurements and associated pCO2 calculations.
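A sketch of how random errors in the input parameters can be propagated onto calculated pCO2 by Monte Carlo sampling, in the spirit of the analysis above. The simplified freshwater carbonate equilibria, the approximate 25 degC constants, and the mean values and error magnitudes are assumptions for the illustration, not the study's exact method or data.

```python
# Illustrative Monte Carlo propagation of random pH and DIC errors onto calculated pCO2.
import numpy as np

K1, K2 = 10 ** -6.35, 10 ** -10.33   # carbonic acid dissociation constants (approx., 25 degC)
KH = 10 ** -1.47                     # Henry's constant, mol L-1 atm-1 (approx., 25 degC)

def pco2_uatm(ph, dic_mol_per_l):
    h = 10.0 ** -ph
    co2_star = dic_mol_per_l / (1.0 + K1 / h + K1 * K2 / h ** 2)   # dissolved CO2 from DIC and pH
    return co2_star / KH * 1e6                                     # partial pressure in uatm

rng = np.random.default_rng(1)
n = 100_000
ph = rng.normal(7.80, 0.02, n)        # hypothetical mean pH and random error (1 sd)
dic = rng.normal(800e-6, 8e-6, n)     # hypothetical DIC, mol/L, ~1% random error

samples = pco2_uatm(ph, dic)
print(f"pCO2 ~ {samples.mean():.0f} uatm, random error ~ +/-{samples.std():.0f} uatm "
      f"({100 * samples.std() / samples.mean():.1f}%)")
```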
Wilberg, Dale E.; Stolp, Bernard J.
2005-01-01
This report contains the results of an October 2001 seepage investigation conducted along a reach of the Escalante River in Utah extending from the U.S. Geological Survey streamflow-gaging station near Escalante to the mouth of Stevens Canyon. Discharge was measured at 16 individual sites along 15 consecutive reaches. Total reach length was about 86 miles. A reconnaissance-level sampling of water for tritium and chlorofluorocarbons was also done. In addition, hydrologic and water-quality data previously collected and published by the U.S. Geological Survey for the 2,020-square-mile Escalante River drainage basin were compiled and are presented in 12 tables. These data were collected from 64 surface-water sites and 28 springs from 1909 to 2002. None of the 15 consecutive reaches along the Escalante River had a measured loss or gain that exceeded the measurement error. All discharge measurements taken during the seepage investigation were assigned a qualitative rating of accuracy that ranged from 5 percent to greater than 8 percent of the actual flow. Summing the potential error for each measurement and dividing by the maximum of either the upstream discharge and any tributary inflow, or the downstream discharge, determined the normalized error for a reach. This was compared to the computed loss or gain that also was normalized to the maximum discharge. A loss or gain for a specified reach is considered significant when the loss or gain (normalized percentage difference) is greater than the measurement error (normalized percentage error). The percentage difference and percentage error were normalized to allow comparison between reaches with different amounts of discharge. The plate that accompanies the report is 36" by 40" and can be printed in 16 tiles, 8.5 by 11 inches. An index for the tiles is located on the lower left-hand side of the plate. Using Adobe Acrobat, the plate can be viewed independent of the report; all Acrobat functions are available.
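The reach-by-reach significance test described above can be sketched as follows; the discharges and accuracy ratings used are hypothetical, not values from the report.

```python
# Sketch: a gain or loss is significant only if the normalized percentage difference
# exceeds the normalized percentage error for the reach.
def reach_is_significant(q_upstream, q_tributary, q_downstream, error_fractions):
    """error_fractions: assigned accuracy ratings (e.g. 0.05 or 0.08) for the
    upstream, tributary, and downstream measurements, in that order."""
    q_in = q_upstream + q_tributary
    q_max = max(q_in, q_downstream)
    flows = [q_upstream, q_tributary, q_downstream]
    potential_error = sum(f * e for f, e in zip(flows, error_fractions))
    normalized_error = 100.0 * potential_error / q_max        # percent
    normalized_diff = 100.0 * (q_downstream - q_in) / q_max   # percent (gain > 0, loss < 0)
    return abs(normalized_diff) > normalized_error, normalized_diff, normalized_error

# Hypothetical reach: 12.4 cfs upstream, 0.6 cfs tributary inflow, 12.1 cfs downstream.
sig, diff_pct, err_pct = reach_is_significant(12.4, 0.6, 12.1, [0.05, 0.08, 0.05])
print(sig, round(diff_pct, 1), round(err_pct, 1))   # not significant: loss within error
```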
Measurement of the Total Cross Section of Uranium-Uranium Collisions at √(s_NN) = 192.8 GeV
NASA Astrophysics Data System (ADS)
Baltz, A. J.; Fischer, W.; Blaskiewicz, M.; Gassner, D.; Drees, K. A.; Luo, Y.; Minty, M.; Thieberger, P.; Wilinski, M.; Pshenichnov, I. A.
2014-03-01
The total cross section of Uranium-Uranium collisions at √(s_NN) = 192.8 GeV has been measured to be 515 ± 13 (stat) ± 22 (sys) barn, which agrees with the calculated theoretical value of 487.3 barn within experimental error. That this total cross section is more than an order of magnitude larger than the geometric ion-ion cross section is primarily due to Bound-Free Pair Production (BFPP) and Electro-Magnetic Dissociation (EMD). Nearly all beam losses were due to geometric, BFPP and EMD collisions. This allowed the determination of the total cross section from the measured beam loss rates and luminosity. The beam loss rate is calculated from a time-dependent measurement of the total beam intensity. The luminosity is measured via the detection of neutron pairs in time-coincidence in the Zero Degree Calorimeters. Apart from a general interest in verifying the calculations experimentally, an accurate prediction of the losses created in the heavy ion collisions is of practical interest for the LHC, where collision products have the potential to quench cryogenically cooled magnets.
Jiang, Qingan; Wu, Wenqi; Jiang, Mingming; Li, Yun
2017-01-01
High-accuracy railway track surveying is essential for railway construction and maintenance. The traditional approaches based on total station equipment are not efficient enough since high precision surveying frequently needs static measurements. This paper proposes a new filtering and smoothing algorithm based on IMU/odometer and landmark integration for railway track surveying. In order to overcome the difficulty of estimating too many error parameters with too few landmark observations, a new model with completely observable error states is established by combining error terms of the system. Based on covariance analysis, the analytical relationship between the railway track surveying accuracy requirements and equivalent gyro drifts, including bias instability and random walk noise, is established. Experimental results show that the accuracy of the new filtering and smoothing algorithm for railway track surveying can reach 1 mm (1σ) when using a Ring Laser Gyroscope (RLG)-based Inertial Measurement Unit (IMU) with gyro bias instability of 0.03°/h and random walk noise of 0.005°/h, while position observations of track control network (CPIII) control points are provided by the optical total station at about 60 m intervals. The proposed approach can simultaneously satisfy the demands of high accuracy and work efficiency for railway track surveying. PMID:28629191
Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu
2015-07-23
The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with subsided land caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority method (LCMCS) to establish TN prediction models by considering the relationship between spectral reflectance (measured by an ASD FieldSpec 3 spectroradiometer) and TN based on spectral reflectance curves of soil samples collected from subsided land, which was identified by synthetic aperture radar interferometry (InSAR) technology. Based on the 1655 selected effective bands of the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log{1/R}]') (correlation coefficients, p < 0.01), the optimal model of the LCMCS method was obtained as the final model, which produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) when compared with models built by the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive performance of the LCMCS model was optimal in Cangzhou, Renqiu and Fengfeng District. Results indicate that the LCMCS method has great potential to monitor TN in subsided lands caused by the extraction of natural resources including groundwater, oil and coal.
Russo, Gregory A; Qureshi, Muhammad M; Truong, Minh-Tam; Hirsch, Ariel E; Orlina, Lawrence; Bohrs, Harry; Clancy, Pauline; Willins, John; Kachnic, Lisa A
2012-11-01
To determine whether the use of routine image guided radiation therapy (IGRT) using pretreatment on-board imaging (OBI) with orthogonal kilovoltage X-rays reduces treatment delivery errors. A retrospective review of documented treatment delivery errors from 2003 to 2009 was performed. Following implementation of IGRT in 2007, patients received daily OBI with orthogonal kV X-rays prior to treatment. The frequency of errors in the pre- and post-IGRT time frames was compared. Treatment errors (TEs) were classified as IGRT-preventable or non-IGRT-preventable. A total of 71,260 treatment fractions were delivered to 2764 patients. A total of 135 (0.19%) TEs occurred in 39 (1.4%) patients (3.2% in 2003, 1.1% in 2004, 2.5% in 2005, 2% in 2006, 0.86% in 2007, 0.24% in 2008, and 0.22% in 2009). In 2007, the TE rate decreased by >50% and has remained low (P = .00007, compared to before 2007). Errors were classified as being potentially preventable with IGRT (e.g., incorrect site, patient, or isocenter) vs. not. No patients had any IGRT-preventable TEs from 2007 to 2009, whereas there were 9 from 2003 to 2006 (1 in 2003, 2 in 2004, 2 in 2005, and 4 in 2006; P = .0058) before the implementation of IGRT. IGRT implementation has a patient safety benefit with a significant reduction in treatment delivery errors. As such, we recommend the use of IGRT in routine practice to complement existing quality assurance measures. Copyright © 2012 Elsevier Inc. All rights reserved.
Searching the Allais effect during the total sun eclipse of 11 July 2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salva, Horacio R.
2011-03-15
I have measured the precession change of the oscillation plane with an automated Foucault pendulum and found no evidence (within the measurement error) of the Allais effect. The precession speed was registered and, given the variations involved, if the precession speed had changed by 0.3 degrees per hour (increasing or decreasing the normal precession speed) during the whole eclipse, it would have been noticed in this measurement.
Chen, Yi-Miau; Huang, Yi-Jing; Huang, Chien-Yu; Lin, Gong-Hong; Liaw, Lih-Jiun; Lee, Shih-Chieh; Hsieh, Ching-Lin
2017-10-01
The 3-point Berg Balance Scale (BBS-3P) and 3-point Postural Assessment Scale for Stroke Patients (PASS-3P) were simplified from the BBS and PASS to overcome the complex scoring systems. The BBS-3P and PASS-3P were more feasible in busy clinical practice and showed similarly sound validity and responsiveness to the original measures. However, the reliability of the BBS-3P and PASS-3P is unknown limiting their utility and the interpretability of scores. We aimed to examine the test-retest reliability and minimal detectable change (MDC) of the BBS-3P and PASS-3P in patients with stroke. Cross-sectional study. The rehabilitation departments of a medical center and a community hospital. A total of 51 chronic stroke patients (64.7% male). Both balance measures were administered twice 7 days apart. The test-retest reliability of both the BBS-3P and PASS-3P were examined by intraclass correlation coefficients (ICC). The MDC and its percentage over the total score (MDC%) of each measure was calculated for examining the random measurement errors. The ICC values of the BBS-3P and PASS-3P were 0.99 and 0.97, respectively. The MDC% (MDC) of the BBS-3P and PASS-3P were 9.1% (5.1 points) and 8.4% (3.0 points), respectively, indicating that both measures had small and acceptable random measurement errors. Our results showed that both the BBS-3P and the PASS-3P had good test-retest reliability, with small and acceptable random measurement error. These two simplified 3-level balance measures can provide reliable results over time. Our findings support the repeated administration of the BBS-3P and PASS-3P to monitor the balance of patients with stroke. The MDC values can help clinicians and researchers interpret the change scores more precisely.
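Assuming the usual formulation MDC95 = 1.96 × √2 × SEM with SEM = SD × √(1 − ICC), the minimal-detectable-change computation can be sketched as below; the baseline standard deviation and total score used are placeholders, not values reported in the abstract.

```python
# Sketch: minimal detectable change (95% confidence) from test-retest reliability (ICC).
import math

def mdc95(sd_baseline, icc):
    sem = sd_baseline * math.sqrt(1.0 - icc)      # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem

def mdc_percent(mdc, total_score):
    return 100.0 * mdc / total_score

sd_bbs3p = 18.0   # hypothetical baseline SD of BBS-3P scores
mdc = mdc95(sd_bbs3p, icc=0.99)
print(f"BBS-3P MDC ~ {mdc:.1f} points "
      f"({mdc_percent(mdc, total_score=56):.1f}% of a hypothetical total score)")
```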
Clark, S; Rose, D J
2001-04-01
To establish reliability estimates of the 75% Limits of Stability Test (75% LOS test) when administered to community-dwelling older adults with a history of falls. Generalizability theory was used to estimate both the relative contribution of identified error sources to the total measurement error and generalizability coefficients. A random effects repeated-measures analysis of variance (ANOVA) was used to assess consistency of LOS test movement variables across both days and targets. A motor control research laboratory in a university setting. Fifty community-dwelling older adults with 2 or more falls in the previous year. Spatial and temporal measures of dynamic balance derived from the 75% LOS test included average movement velocity, maximum center of gravity (COG) excursion, end-point COG excursion, and directional control. Estimated generalizability coefficients for 2 testing days ranged from .58 to .87. Total variance in LOS test measures attributable to inconsistencies in day-to-day test performance (Day and Subject x Day facets) ranged from 2.5% to 8.4%. The ANOVA results indicated that no significant differences were observed in the LOS test variables across the 2 testing days. The 75% LOS test administered to older adult fallers on 2 consecutive days provides consistent and reliable measures of dynamic balance.
Clasey, Jody L; Gater, David R
2005-11-01
To compare (1) total body volume (V(b)) and density (D(b)) measurements obtained by hydrostatic weighing (HW) and air displacement plethysmography (ADP) in adults with spinal cord injury (SCI); (2) measured and predicted thoracic gas volume (V(TG)); and (3) differences in percentage of fat measurements using ADP-obtained D(b) and HW-obtained D(b) measures that were interchanged in a 4-compartment body composition model (4-comp %fat). Twenty adults with SCI underwent ADP and V(TG), and HW testing. In a subgroup (n=13) of subjects, 4-comp %fat procedures were computed. Research laboratories in a university setting. Twenty adults with SCI below the T3 vertebrae and motor complete paraplegia. Not applicable. Statistical analyses, including determination of group mean differences, shared variance, total error, and 95% confidence intervals. The 2 methods yielded small yet significantly different V(b) and D(b). The groups' mean V(TG) did not differ significantly, but the large relative differences indicated an unacceptable amount of individual error. When the 4-comp %fat measurements were compared, there was a trend toward significant differences (P=.08). ADP is a valid alternative method of determining the V(b) and D(b) in adults with SCI; however, the predicted V(TG) should be used with caution.
Dong, Zhixu; Sun, Xingwei; Chen, Changzheng; Sun, Mengnan
2018-04-13
The inconvenient loading and unloading of a long and heavy drill pipe gives rise to the difficulty in measuring the contour parameters of its threads at both ends. To solve this problem, in this paper we take the SCK230 drill pipe thread-repairing machine tool as a carrier to design and achieve a fast and on-machine measuring system based on a laser probe. This system drives a laser displacement sensor to acquire the contour data of a certain axial section of the thread by using the servo function of a CNC machine tool. To correct the sensor's measurement errors caused by the measuring point inclination angle, an inclination error model is built to compensate data in real time. To better suppress random error interference and ensure real contour information, a new wavelet threshold function is proposed to process data through the wavelet threshold denoising. Discrete data after denoising is segmented according to the geometrical characteristics of the drill pipe thread, and the regression model of the contour data in each section is fitted by using the method of weighted total least squares (WTLS). Then, the thread parameters are calculated in real time to judge the processing quality. Inclination error experiments show that the proposed compensation model is accurate and effective, and it can improve the data acquisition accuracy of a sensor. Simulation results indicate that the improved threshold function is of better continuity and self-adaptability, which makes sure that denoising effects are guaranteed, and, meanwhile, the complete elimination of real data distorted in random errors is avoided. Additionally, NC50 thread-testing experiments show that the proposed on-machine measuring system can complete the measurement of a 25 mm thread in 7.8 s, with a measurement accuracy of ±8 μm and repeatability limit ≤ 4 μm (high repeatability), and hence the accuracy and efficiency of measurement are both improved.
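The paper's improved wavelet threshold function is not specified in the abstract; as a stand-in, the sketch below applies standard soft-threshold wavelet denoising with a universal threshold to a synthetic contour profile standing in for laser-probe data.

```python
# Illustrative sketch of wavelet threshold denoising of a measured contour profile.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest-scale detail coefficients (robust MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))      # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Synthetic stand-in: a triangular, thread-like contour (mm) plus random measurement noise.
x = np.linspace(0, 25, 2048)
profile = 1.5 * np.abs(((x / 5.0) % 1.0) - 0.5)
noisy = profile + np.random.normal(0, 0.01, x.size)
clean = wavelet_denoise(noisy)
```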
Effect of stratospheric aerosol layers on the TOMS/SBUV ozone retrieval
NASA Technical Reports Server (NTRS)
Torres, O.; Ahmad, Zia; Pan, L.; Herman, J. R.; Bhartia, P. K.; Mcpeters, R.
1994-01-01
An evaluation of the optical effects of stratospheric aerosol layers on total ozone retrieval from space by the TOMS/SBUV type instruments is presented here. Using the Dave radiative transfer model we estimate the magnitude of the errors in the retrieved ozone when polar stratospheric clouds (PSC's) or volcanic aerosol layers interfere with the measurements. The largest errors are produced by optically thick water ice PSC's. Results of simulation experiments on the effect of the Pinatubo aerosol cloud on the Nimbus-7 and Meteor-3 TOMS products are presented.
NASA Astrophysics Data System (ADS)
Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric
2013-04-01
Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and the future evolution of the CH4 budget. With the increase in available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry,...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling convert directly into flux errors when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated within a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers behind it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane fluxes obtained for 2005 gives a good indication of the magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at global scale, representing 5% of the total methane emissions for 2005. At continental scale, transport and modelling errors have bigger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement error covariance matrix are under-estimated in current inversions, suggesting that transport and modelling errors should be represented more properly in future inversions.
Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.
A Video Method to Study Drosophila Sleep
Zimmerman, John E.; Raizen, David M.; Maycock, Matthew H.; Maislin, Greg; Pack, Allan I.
2008-01-01
Study Objectives: To use video to determine the accuracy of the infrared beam-splitting method for measuring sleep in Drosophila and to determine the effect of time of day, sex, genotype, and age on sleep measurements. Design: A digital image analysis method based on frame subtraction principle was developed to distinguish a quiescent from a moving fly. Data obtained using this method were compared with data obtained using the Drosophila Activity Monitoring System (DAMS). The location of the fly was identified based on its centroid location in the subtracted images. Measurements and Results: The error associated with the identification of total sleep using DAMS ranged from 7% to 95% and depended on genotype, sex, age, and time of day. The degree of the total sleep error was dependent on genotype during the daytime (P < 0.001) and was dependent on age during both the daytime and the nighttime (P < 0.001 for both). The DAMS method overestimated sleep bout duration during both the day and night, and the degree of these errors was genotype dependent (P < 0.001). Brief movements that occur during sleep bouts can be accurately identified using video. Both video and DAMS detected a homeostatic response to sleep deprivation. Conclusions: Video digital analysis is more accurate than DAMS in fly sleep measurements. In particular, conclusions drawn from DAMS measurements regarding daytime sleep and sleep architecture should be made with caution. Video analysis also permits the assessment of fly position and brief movements during sleep. Citation: Zimmerman JE; Raizen DM; Maycock MH; Maislin G; Pack AI. A video method to study drosophila sleep. SLEEP 2008;31(11):1587–1598. PMID:19014079
NASA Astrophysics Data System (ADS)
Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.
2017-12-01
The paper carries out a comprehensive study of the performances of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for and the standard model which provides the classical expression to compute the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogue to the classical one, is presented. Models to determine the worst case and the statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance as needed for the operation of the ITER machine.
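For reference, the classical (ideal) Rogowski relation mentioned above ties the flux linkage to the enclosed current through the mutual inductance M = μ0·n·A, so the coil output voltage is M·dI/dt and the current is recovered by integration. The winding density, cross-section, and current ramp below are illustrative assumptions, not CER design values.

```python
# Sketch of the ideal Rogowski relation: v(t) = M * dI/dt, with M = mu0 * n * A.
import numpy as np

MU0 = 4e-7 * np.pi            # vacuum permeability, H/m
n_turns_per_m = 2000.0        # illustrative winding density, turns/m
area_m2 = 1.0e-4              # illustrative winding cross-sectional area, m^2
M = MU0 * n_turns_per_m * area_m2   # mutual inductance of the ideal coil, H

t = np.linspace(0.0, 1.0, 10_001)               # s
i_enc = 15e6 * (1.0 - np.exp(-t / 0.2))         # enclosed toroidal current ramp, A
v_coil = M * np.gradient(i_enc, t)              # ideal coil output (magnitude), V

# Recover the current by numerically integrating the coil voltage (ideal integrator).
i_reconstructed = np.concatenate(
    ([0.0], np.cumsum(0.5 * (v_coil[1:] + v_coil[:-1]) * np.diff(t)))) / M
print(f"max reconstruction error: {np.max(np.abs(i_reconstructed - i_enc)):.2e} A")
```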
Monitoring gait in multiple sclerosis with novel wearable motion sensors.
Moon, Yaejin; McGinnis, Ryan S; Seagers, Kirsten; Motl, Robert W; Sheth, Nirav; Wright, John A; Ghaffari, Roozbeh; Sosnoff, Jacob J
2017-01-01
Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing to contemporary research-grade measurement devices. A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) was assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Mean accuracy±precision for the BioStampRC was 2±2 steps error for step number, 6±9ms error for stride time and 6±7ms error for step time (0.6-2.6% relative error). Swing time had the least accuracy±precision (25±19ms error, 5±4% relative error) among the parameters. GT3X had the least accuracy±precision (8±14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p<0.01). BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic.
Colgan, Matthew S; Asner, Gregory P; Swemmer, Tony
2013-07-01
Tree biomass is an integrated measure of net growth and is critical for understanding, monitoring, and modeling ecosystem functions. Despite the importance of accurately measuring tree biomass, several fundamental barriers preclude direct measurement at large spatial scales, including the facts that trees must be felled to be weighed and that even modestly sized trees are challenging to maneuver once felled. Allometric methods allow for estimation of tree mass using structural characteristics, such as trunk diameter. Savanna trees present additional challenges, including limited available allometry and a prevalence of multiple stems per individual. Here we collected airborne lidar data over a semiarid savanna adjacent to the Kruger National Park, South Africa, and then harvested and weighed woody plant biomass at the plot scale to provide a standard against which field and airborne estimation methods could be compared. For an existing airborne lidar method, we found that half of the total error was due to averaging canopy height at the plot scale. This error was eliminated by instead measuring maximum height and crown area of individual trees from lidar data using an object-based method to identify individual tree crowns and estimate their biomass. The best object-based model approached the accuracy of field allometry at both the tree and plot levels, and it more than doubled the accuracy compared to existing airborne methods (17% vs. 44% deviation from harvested biomass). Allometric error accounted for less than one-third of the total residual error in airborne biomass estimates at the plot scale when using allometry with low bias. Airborne methods also gave more accurate predictions at the plot level than did field methods based on diameter-only allometry. These results provide a novel comparison of field and airborne biomass estimates using harvested plots and advance the role of lidar remote sensing in savanna ecosystems.
Feedback controlled optics with wavefront compensation
NASA Technical Reports Server (NTRS)
Breckenridge, William G. (Inventor); Redding, David C. (Inventor)
1993-01-01
The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system causing the immediate reduction in total wavefront error observed at the exit pupil of the optical system.
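A minimal numerical sketch of this idea: with a linear sensitivity model for the exit-pupil wavefront, the least-squares-optimal gain matrix follows from the pseudo-inverse of the control sensitivity matrix. The matrix sizes and values are arbitrary placeholders, not the patent's actual optical model.

```python
# Sketch: wavefront w = S_err @ e + S_ctl @ u; minimizing ||w||^2 over u gives
# u = G @ e with gain matrix G = -pinv(S_ctl) @ S_err (computed once, applied each cycle).
import numpy as np

rng = np.random.default_rng(2)
n_wavefront, n_errors, n_actuators = 200, 12, 9   # wavefront samples, error states, controls

S_err = rng.normal(size=(n_wavefront, n_errors))     # sensitivity to segment error states
S_ctl = rng.normal(size=(n_wavefront, n_actuators))  # sensitivity to actuator commands

G = -np.linalg.pinv(S_ctl) @ S_err                   # control gain matrix

e = rng.normal(scale=1e-7, size=n_errors)            # current error-state vector
u = G @ e                                            # actuator commands minimizing wavefront error
residual = S_err @ e + S_ctl @ u
print(f"rms wavefront error: before {np.sqrt(np.mean((S_err @ e) ** 2)):.3e}, "
      f"after {np.sqrt(np.mean(residual ** 2)):.3e}")
```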
Hajibozorgi, M; Arjmand, N
2016-04-11
Range of motion (ROM) of the thoracic spine has implications in patient discrimination for diagnostic purposes and in biomechanical models for predictions of spinal loads. The few previous studies available have reported quite different thoracic ROMs. Total (T1-T12), lower (T5-T12) and upper (T1-T5) thoracic, lumbar (T12-S1), pelvis, and entire trunk (T1) ROMs were measured using an inertial tracking device as asymptomatic subjects flexed forward from their neutral upright position to full forward flexion. Correlations between body height and the ROMs were evaluated. The effect of measurement errors in trunk flexion (T1) on model-predicted spinal loads was also investigated. Mean peak voluntary total trunk flexion (T1) was 118.4 ± 13.9°, of which 20.5 ± 6.5° was generated by flexion of the T1 to T12 (thoracic ROM), and the remainder by flexion of the T12 to S1 (lumbar ROM) (50.2 ± 7.0°) and pelvis (47.8 ± 6.9°). Lower thoracic ROM was significantly larger than upper thoracic ROM (14.8 ± 5.4° versus 5.8 ± 3.1°). There were non-significant weak correlations between body height and the ROMs. The contribution of the pelvis to total trunk flexion increased from ~20% to 40% and that of the lumbar spine decreased from ~60% to 42% as subjects flexed forward from upright to maximal flexion, while that of the thoracic spine remained almost constant (~16% to 20%) during the entire movement. Small uncertainties (±5°) in the measurement of trunk flexion angle resulted in considerable errors (~27%) in the model-predicted spinal loads only in activities involving small trunk flexion. Copyright © 2015 Elsevier Ltd. All rights reserved.
Parity Violation in Proton-Proton Scattering at Intermediate Energies
DOE R&D Accomplishments Database
Yuan, V.; Frauenfelder, H.; Harper, R. W.; Bowman, J. D.; Carlini, R.; MacArthur, D. W.; Mischke, R. E.; Nagle, D. E.; Talaga, R. L.; McDonald, A. B.
1986-05-01
Results of a measurement of parity nonconservation in the polarized p-p total cross section at 800 MeV are presented. The dependence of transmission on beam properties and the correction for systematic errors are discussed. The measured longitudinal asymmetry is A_L = (+2.4 ± 1.1 (statistical) ± 0.1 (systematic)) × 10^-7. A proposed experiment at 230 MeV is discussed.
Total Survey Error & Institutional Research: A Case Study of the University Experience Survey
ERIC Educational Resources Information Center
Whiteley, Sonia
2014-01-01
Total Survey Error (TSE) is a component of Total Survey Quality (TSQ) that supports the assessment of the extent to which a survey is "fit-for-purpose". While TSQ looks at a number of dimensions, such as relevance, credibility and accessibility, TSE has a more operational focus on accuracy and minimising errors. Mitigating survey…
45 CFR 265.7 - How will we determine if the State is meeting the quarterly reporting requirements?
Code of Federal Regulations, 2012 CFR
2012-10-01
... free from computational errors and are internally consistent (e.g., items that should add to totals do so) ...
Samsiah, A; Othman, Noordin; Jamshed, Shazia; Hassali, Mohamed Azmi; Wan-Mohaina, W M
2016-12-01
Reporting and analysing the data on medication errors (MEs) is important and contributes to a better understanding of the error-prone environment. This study aims to examine the characteristics of errors submitted to the National Medication Error Reporting System (MERS) in Malaysia. A retrospective review of reports received from 1 January 2009 to 31 December 2012 was undertaken. Descriptive statistics were applied. A total of 17,357 reported MEs were reviewed. The majority of errors were from public-funded hospitals. Near misses accounted for 86.3% of the errors. The majority of errors (98.1%) had no harmful effects on the patients. Prescribing contributed to more than three-quarters of the overall errors (76.1%). Pharmacists detected and reported the majority of errors (92.1%). Cases of erroneous dosage or strength of medicine (30.75%) were the leading type of error, whilst cardiovascular drugs (25.4%) were the most common drug category involved. MERS provides rich information on the characteristics of reported MEs. The low contribution to reporting from healthcare facilities other than government hospitals and from non-pharmacists requires further investigation. Thus, a feasible approach to promote MERS among healthcare providers in both public and private sectors needs to be formulated and strengthened. Preventive measures to minimise MEs should be directed at improving prescribing competency among the fallible prescribers identified.
Ly, Thomas; Pamer, Carol; Dang, Oanh; Brajovic, Sonja; Haider, Shahrukh; Botsis, Taxiarchis; Milward, David; Winter, Andrew; Lu, Susan; Ball, Robert
2018-05-31
The FDA Adverse Event Reporting System (FAERS) is a primary data source for identifying unlabeled adverse events (AEs) in a drug or biologic drug product's postmarketing phase. Many AE reports must be reviewed by drug safety experts to identify unlabeled AEs, even if the reported AEs are previously identified, labeled AEs. Integrating the labeling status of drug product AEs into FAERS could increase report triage and review efficiency. Medical Dictionary for Regulatory Activities (MedDRA) is the standard for coding AE terms in FAERS cases. However, drug manufacturers are not required to use MedDRA to describe AEs in product labels. We hypothesized that natural language processing (NLP) tools could assist in automating the extraction and MedDRA mapping of AE terms in drug product labels. We evaluated the performance of three NLP systems, (ETHER, I2E, MetaMap) for their ability to extract AE terms from drug labels and translate the terms to MedDRA Preferred Terms (PTs). Pharmacovigilance-based annotation guidelines for extracting AE terms from drug labels were developed for this study. We compared each system's output to MedDRA PT AE lists, manually mapped by FDA pharmacovigilance experts using the guidelines, for ten drug product labels known as the "gold standard AE list" (GSL) dataset. Strict time and configuration conditions were imposed in order to test each system's capabilities under conditions of no human intervention and minimal system configuration. Each NLP system's output was evaluated for precision, recall and F measure in comparison to the GSL. A qualitative error analysis (QEA) was conducted to categorize a random sample of each NLP system's false positive and false negative errors. A total of 417, 278, and 250 false positive errors occurred in the ETHER, I2E, and MetaMap outputs, respectively. A total of 100, 80, and 187 false negative errors occurred in ETHER, I2E, and MetaMap outputs, respectively. Precision ranged from 64% to 77%, recall from 64% to 83% and F measure from 67% to 79%. I2E had the highest precision (77%), recall (83%) and F measure (79%). ETHER had the lowest precision (64%). MetaMap had the lowest recall (64%). The QEA found that the most prevalent false positive errors were context errors such as "Context error/General term", "Context error/Instructions or monitoring parameters", "Context error/Medical history preexisting condition underlying condition risk factor or contraindication", and "Context error/AE manifestations or secondary complication". The most prevalent false negative errors were in the "Incomplete or missed extraction" error category. Missing AE terms were typically due to long terms, or terms containing non-contiguous words which do not correspond exactly to MedDRA synonyms. MedDRA mapping errors were a minority of errors for ETHER and I2E but were the most prevalent false positive errors for MetaMap. The results demonstrate that it may be feasible to use NLP tools to extract and map AE terms to MedDRA PTs. However, the NLP tools we tested would need to be modified or reconfigured to lower the error rates to support their use in a regulatory setting. Tools specific for extracting AE terms from drug labels and mapping the terms to MedDRA PTs may need to be developed to support pharmacovigilance. Conducting research using additional NLP systems on a larger, diverse GSL would also be informative. Copyright © 2018. Published by Elsevier Inc.
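The evaluation above reports precision, recall, and F measure computed from each system's false positives and false negatives against the gold standard AE list. A minimal sketch of that calculation; the counts below are hypothetical, not the study's figures.

```python
def precision_recall_f(tp, fp, fn):
    """Standard precision, recall, and (balanced) F measure from match counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Hypothetical counts for one NLP system's output versus a gold standard AE list.
tp, fp, fn = 400, 120, 80
p, r, f = precision_recall_f(tp, fp, fn)
print(f"precision={p:.2f} recall={r:.2f} F={f:.2f}")
```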
The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.
Hutton, Kevin; Ding, Qian; Wellman, Gregory
2017-02-24
Adoption of bar-coding technology has risen drastically in U.S. health systems in the past decade. However, few studies have addressed the impact of bar-coding technology with strong prospective methodologies, and the research that has been conducted spans both in-pharmacy and bedside implementations. This systematic literature review examines the effectiveness of bar-coding technology in preventing medication errors and what types of medication errors may be prevented in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with outcomes other than medication errors, such as efficiency or workarounds, were excluded. The outcomes were measured and findings were summarized for each retained study. A total of 2603 articles were initially identified and 10 studies, which used a prospective before-and-after study design, were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, whereas the remaining one was conducted in the United Kingdom. One research article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly in preventing targeted wrong-dose, wrong-drug, wrong-patient, unauthorized-drug, and wrong-route errors.
NASA Technical Reports Server (NTRS)
Martin, D. L.; Perry, M. J.
1994-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.
Estimating extreme stream temperatures by the standard deviate method
NASA Astrophysics Data System (ADS)
Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz
2006-02-01
It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE-values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
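A minimal sketch of the standard deviate estimate (series mean plus KE standard deviations) and of the error budget using the sensitivities quoted in the abstract (about 0.5 °C per unit error in KE and 0.16 °C per °C of air-temperature error). The partial maximum series below is hypothetical, and the linear addition of the two error terms is an assumption that reproduces the quoted ~0.8 °C.

```python
import numpy as np

def extreme_stream_temperature(partial_maxima, k_e):
    """Standard deviate estimate: mean of the partial maximum series plus K_E standard deviations."""
    t = np.asarray(partial_maxima, dtype=float)
    return t.mean() + k_e * t.std(ddof=1)

# Hypothetical partial maximum series (deg C) for one gauging station.
maxima = [26.1, 27.3, 25.8, 28.0, 26.9, 27.7]
print(extreme_stream_temperature(maxima, k_e=7.5))

# Error budget using the abstract's sensitivities, assuming the terms add linearly.
d_ke, d_ta = 1.0, 2.0
d_ts = 0.5 * d_ke + 0.16 * d_ta   # ~0.8 deg C, matching the quoted total
print(d_ts)
```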
Measuring Data Quality Through a Source Data Verification Audit in a Clinical Research Setting.
Houston, Lauren; Probst, Yasmine; Humphries, Allison
2015-01-01
Health data has long been scrutinised in relation to data quality and integrity problems. Currently, no internationally accepted or "gold standard" method exists for measuring data quality and error rates within datasets. We conducted a source data verification (SDV) audit on a prospective clinical trial dataset. An audit plan was applied to conduct 100% manual verification checks on a 10% random sample of participant files. A quality assurance rule was developed, whereby if >5% of data variables were incorrect a second 10% random sample would be extracted from the trial data set. Errors were coded as: correct, incorrect (valid or invalid), not recorded, or not entered. Audit-1 had a total error of 33% and audit-2 36%. The physiological section was the only audit section to have <5% error. Data not recorded to case report forms had the greatest impact on error calculations. A significant association (p=0.00) was found between audit-1 and audit-2 and whether or not data was deemed correct or incorrect. Our study developed a straightforward method to perform an SDV audit. An audit rule was identified and error coding was implemented. Findings demonstrate that monitoring data quality by an SDV audit can identify data quality and integrity issues within clinical research settings, allowing quality improvements to be made. The authors suggest this approach be implemented for future research.
Cecconi, Maurizio; Rhodes, Andrew; Poloniecki, Jan; Della Rocca, Giorgio; Grounds, R Michael
2009-01-01
Bland-Altman analysis is used for assessing agreement between two measurements of the same clinical variable. In the field of cardiac output monitoring, its results, in terms of bias and limits of agreement, are often difficult to interpret, leading clinicians to use a cutoff of 30% in the percentage error in order to decide whether a new technique may be considered a good alternative. This percentage error of +/- 30% arises from the assumption that the commonly used reference technique, intermittent thermodilution, has a precision of +/- 20% or less. The combination of two precisions of +/- 20% equates to a total error of +/- 28.3%, which is commonly rounded up to +/- 30%. Thus, finding a percentage error of less than +/- 30% should equate to the newly tested technique having an error similar to the reference, which therefore should be acceptable. In a worked example in this paper, we discuss the limitations of this approach, in particular in regard to the situation in which the reference technique may be either more or less precise than would normally be expected. This can lead to inappropriate conclusions being drawn from data acquired in validation studies of new monitoring technologies. We conclude that it is not acceptable to present comparison studies quoting percentage error as an acceptability criterion without reporting the precision of the reference technique.
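A minimal sketch of the root-sum-square combination of precisions described above, showing why two +/-20% precisions yield ~28.3% and how the apparent acceptability threshold shifts when the reference is less precise than assumed. The 30% reference precision in the second call is purely illustrative.

```python
from math import sqrt

def combined_percentage_error(precision_ref, precision_test):
    """Root-sum-square combination of reference and test precisions (both as fractions)."""
    return sqrt(precision_ref**2 + precision_test**2)

# Two techniques each with +/-20% precision combine to ~28.3%, commonly rounded to 30%.
print(combined_percentage_error(0.20, 0.20))   # ~0.283

# If the reference is less precise than assumed, a 30% cutoff no longer implies
# the tested technique matches a +/-20% reference.
print(combined_percentage_error(0.30, 0.20))   # ~0.36
```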
Conkle, Joel; Ramakrishnan, Usha; Flores-Ayala, Rafael; Suchdev, Parminder S; Martorell, Reynaldo
2017-01-01
Anthropometric data collected in clinics and surveys are often inaccurate and unreliable due to measurement error. The Body Imaging for Nutritional Assessment Study (BINA) evaluated the ability of 3D imaging to correctly measure stature, head circumference (HC) and mid-upper arm circumference (MUAC) for children under five years of age. This paper describes the protocol for and the quality of manual anthropometric measurements in BINA, a study conducted in 2016-17 in Atlanta, USA. Quality was evaluated by examining digit preference, biological plausibility of z-scores, z-score standard deviations, and reliability. We calculated z-scores and analyzed plausibility based on the 2006 WHO Child Growth Standards (CGS). For reliability, we calculated intra- and inter-observer Technical Error of Measurement (TEM) and Intraclass Correlation Coefficient (ICC). We found low digit preference; 99.6% of z-scores were biologically plausible, with z-score standard deviations ranging from 0.92 to 1.07. Total TEM was 0.40 cm for stature, 0.28 cm for HC, and 0.25 cm for MUAC. ICC ranged from 0.99 to 1.00. The quality of manual measurements in BINA was high and similar to that of the anthropometric data used to develop the WHO CGS. We attributed high quality to vigorous training, motivated and competent field staff, reduction of non-measurement error through the use of technology, and reduction of measurement error through adequate monitoring and supervision. Our anthropometry measurement protocol, which builds on and improves upon the protocol used for the WHO CGS, can be used to improve anthropometric data quality. The discussion illustrates the need to standardize anthropometric data quality assessment, and we conclude that BINA can provide a valuable evaluation of 3D imaging for child anthropometry because there is comparison to gold-standard, manual measurements.
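A minimal sketch of the intra-observer technical error of measurement for paired repeat measurements, using the standard TEM formula sqrt(sum(d^2)/(2n)); the duplicate stature values below are hypothetical, not BINA data.

```python
import numpy as np

def technical_error_of_measurement(first, second):
    """Intra-observer TEM from paired repeat measurements: sqrt(sum(d^2) / (2n))."""
    d = np.asarray(first, dtype=float) - np.asarray(second, dtype=float)
    return np.sqrt(np.sum(d**2) / (2 * len(d)))

# Hypothetical duplicate stature measurements (cm) on the same children.
m1 = [98.2, 104.5, 110.1, 95.7, 101.3]
m2 = [98.6, 104.1, 110.4, 95.9, 101.0]
print(round(technical_error_of_measurement(m1, m2), 2), "cm")
```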
Refining Field Measurements of Methane Flux Rates from Abandoned Oil and Gas Wells
NASA Astrophysics Data System (ADS)
Lagron, C. S.; Kang, M.; Riqueros, N. S.; Jackson, R. B.
2015-12-01
Recent studies in Pennsylvania demonstrate the potential for significant methane emissions from abandoned oil and gas wells. A subset of tested wells was high emitting, with methane flux rates up to seven orders of magnitude greater than natural fluxes (up to 10⁵ mg CH4/hour, or about 2.5 LPM). These wells contribute disproportionately to the total methane emissions from abandoned oil and gas wells. The principles guiding the chamber design have been developed for lower flux rates, typically found in natural environments, and chamber design modifications may reduce uncertainty in flux rates associated with high-emitting wells. Kang et al. estimate errors of a factor of two in measured values based on previous studies. We conduct controlled releases of methane to refine error estimates and improve chamber design with a focus on high emitters. Controlled releases of methane are conducted at 0.05 LPM, 0.50 LPM, 1.0 LPM, 2.0 LPM, 3.0 LPM, and 5.0 LPM, and at two chamber dimensions typically used in field measurement studies of abandoned wells. As most sources of error tabulated by Kang et al. tend to bias the results toward underreporting of methane emissions, a flux-targeted chamber design modification can reduce error margins and/or provide grounds for a potential upward revision of emission estimates.
Kerr, Ava; Slater, Gary J; Byrne, Nuala
2017-02-01
Two, three and four compartment (2C, 3C and 4C) models of body composition are popular methods to measure fat mass (FM) and fat-free mass (FFM) in athletes. However, the impact of food and fluid intake on measurement error has not been established. The purpose of this study was to evaluate standardised (overnight fasted, rested and hydrated) v. non-standardised (afternoon and non-fasted) presentation on technical and biological error on surface anthropometry (SA), 2C, 3C and 4C models. In thirty-two athletic males, measures of SA, dual-energy X-ray absorptiometry (DXA), bioelectrical impedance spectroscopy (BIS) and air displacement plethysmography (BOD POD) were taken to establish 2C, 3C and 4C models. Tests were conducted after an overnight fast (duplicate), about 7 h later after ad libitum food and fluid intake, and repeated 24 h later before and after ingestion of a specified meal. Magnitudes of changes in the mean and typical errors of measurement were determined. Mean change scores for non-standardised presentation and post meal tests for FM were substantially large in BIS, SA, 3C and 4C models. For FFM, mean change scores for non-standardised conditions produced large changes for BIS, 3C and 4C models, small for DXA, trivial for BOD POD and SA. Models that included a total body water (TBW) value from BIS (3C and 4C) were more sensitive to TBW changes in non-standardised conditions than 2C models. Biological error is minimised in all models with standardised presentation but DXA and BOD POD are acceptable if acute food and fluid intake remains below 500 g.
Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G
2014-10-01
Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports that more than half of the shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of the 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
A confirmation of the general relativistic prediction of the Lense-Thirring effect.
Ciufolini, I; Pavlis, E C
2004-10-21
An important early prediction of Einstein's general relativity was the advance of the perihelion of Mercury's orbit, whose measurement provided one of the classical tests of Einstein's theory. The advance of the orbital point-of-closest-approach also applies to a binary pulsar system and to an Earth-orbiting satellite. General relativity also predicts that the rotation of a body like Earth will drag the local inertial frames of reference around it, which will affect the orbit of a satellite. This Lense-Thirring effect has hitherto not been detected with high accuracy, but its detection with an error of about 1 per cent is the main goal of Gravity Probe B--an ongoing space mission using orbiting gyroscopes. Here we report a measurement of the Lense-Thirring effect on two Earth satellites: it is 99 +/- 5 per cent of the value predicted by general relativity; the uncertainty of this measurement includes all known random and systematic errors, but we allow for a total +/- 10 per cent uncertainty to include underestimated and unknown sources of error.
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
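A heavily simplified Monte Carlo sketch in the spirit of the model described above: sample a false-positive indicator and a log-scale precision error, then back out a distribution of "true" marker concentrations from a single observed qPCR value. The parameter values and the structure of the simplification are assumptions for illustration only, not the published model.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_concentration_samples(observed_log10, n=100_000,
                               p_false_pos=0.05, precision_sd=0.1):
    """Simplified Monte Carlo correction of an observed qPCR log10 concentration.

    With probability p_false_pos the observed signal is attributed to
    non-target amplification (true concentration treated as absent);
    otherwise the true log10 concentration is the observed value minus
    a random measurement error drawn from the precision distribution.
    """
    is_false_pos = rng.random(n) < p_false_pos
    meas_error = rng.normal(0.0, precision_sd, n)
    return np.where(is_false_pos, -np.inf, observed_log10 - meas_error)

samples = true_concentration_samples(observed_log10=4.2)
detected = samples[np.isfinite(samples)]
print(np.mean(detected), np.percentile(detected, [2.5, 97.5]))  # expected value and 95% interval
```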
Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara; Brůžek, Jaroslav
2017-01-01
Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.
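A minimal sketch of the single-slice OLS approach described above: regress whole-body adipose tissue volume on the AT area of one L4-L5 slice and report absolute and percentage prediction errors. The data below are synthetic stand-ins, not the study's CT measurements.

```python
import numpy as np

# Synthetic stand-in data: AT area (cm^2) in an L4-L5 slice and whole-body AT volume (liters).
rng = np.random.default_rng(2)
slice_area = rng.uniform(150, 450, 41)
at_volume = 0.07 * slice_area + 3.0 + rng.normal(0, 1.5, 41)

# Ordinary least squares fit: volume ~ a + b * slice_area.
X = np.column_stack([np.ones_like(slice_area), slice_area])
coef, *_ = np.linalg.lstsq(X, at_volume, rcond=None)
pred = X @ coef
abs_pe = np.mean(np.abs(pred - at_volume))                     # mean |PE| in liters
pct_pe = 100 * np.mean(np.abs(pred - at_volume) / at_volume)   # mean %PE
print(coef, round(abs_pe, 2), round(pct_pe, 1))
```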
Gillis, A; Miller, D R
2000-10-09
A series of controlled environment experiments were conducted to examine the use of a dynamic flux chamber to measure soil emission and absorption of total gaseous mercury (TGM). Uncertainty about the appropriate airflow rates through the chamber and chamber exposure to ambient wind are shown to be major sources of potential error. Soil surface mercury flux measurements over a range of chamber airflow rates showed a positive linear relationship between flux rates and airflow rate through the chamber. Mercury flux measurements using the chamber in an environmental wind tunnel showed that exposure of the system to ambient winds decreased the measured flux rates by 40% at a wind speed of 1.0 m s(-1) and 90% at a wind speed of 2 m s(-1). Wind tunnel measurements also showed that the chamber footprint was limited to the area of soil inside the chamber and there is little uncertainty of the footprint size in dry soil.
Adherence to balance tolerance limits at the Upper Mississippi Science Center, La Crosse, Wisconsin.
Myers, C.T.; Kennedy, D.M.
1998-01-01
Verification of balance accuracy entails applying a series of standard masses to a balance prior to use and recording the measured values. The recorded values for each standard should have lower and upper weight limits or tolerances that are accepted as verification of balance accuracy under normal operating conditions. Balance logbooks for seven analytical balances at the Upper Mississippi Science Center were checked over a 3.5-year period to determine if the recorded weights were within the established tolerance limits. A total of 9435 measurements were checked. There were 14 instances in which the balance malfunctioned and operators recorded a rationale in the balance logbook. Sixty-three recording errors were found. Twenty-eight operators were responsible for two types of recording errors: Measurements of weights were recorded outside of the tolerance limit but not acknowledged as an error by the operator (n = 40); and measurements were recorded with the wrong number of decimal places (n = 23). The adherence rate for following tolerance limits was 99.3%. To ensure the continued adherence to tolerance limits, the quality-assurance unit revised standard operating procedures to require more frequent review of balance logbooks.
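A minimal sketch of the logbook check described above: flag recorded check weights that fall outside the tolerance band for their standard mass and compute the adherence rate. The record layout and tolerance values are hypothetical.

```python
def check_logbook(records):
    """Flag balance-check entries outside their tolerance band and compute adherence.

    records: iterable of (standard_mass, measured_value, lower_limit, upper_limit).
    """
    out_of_tolerance = [r for r in records if not (r[2] <= r[1] <= r[3])]
    adherence = 1 - len(out_of_tolerance) / len(records)
    return out_of_tolerance, adherence

# Hypothetical entries for a 10 g standard with +/-0.001 g tolerance limits.
entries = [
    (10.0, 10.0004, 9.999, 10.001),
    (10.0, 10.0012, 9.999, 10.001),   # outside the band -> weighing or recording error
    (10.0, 9.9996, 9.999, 10.001),
]
flagged, rate = check_logbook(entries)
print(len(flagged), round(rate, 3))
```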
Determinants of Wealth Fluctuation: Changes in Hard-To-Measure Economic Variables in a Panel Study
Pfeffer, Fabian T.; Griffin, Jamie
2017-01-01
Measuring fluctuation in families’ economic conditions is the raison d’être of household panel studies. Accordingly, a particularly challenging critique is that extreme fluctuation in measured economic characteristics might indicate compounding measurement error rather than actual changes in families’ economic wellbeing. In this article, we address this claim by moving beyond the assumption that particularly large fluctuation in economic conditions might be too large to be realistic. Instead, we examine predictors of large fluctuation, capturing sources related to actual socio-economic changes as well as potential sources of measurement error. Using the Panel Study of Income Dynamics, we study between-wave changes in a dimension of economic wellbeing that is especially hard to measure, namely, net worth as an indicator of total family wealth. Our results demonstrate that even very large between-wave changes in net worth can be attributed to actual socio-economic and demographic processes. We do, however, also identify a potential source of measurement error that contributes to large wealth fluctuation, namely, the treatment of incomplete information, presenting a pervasive challenge for any longitudinal survey that includes questions on economic assets. Our results point to ways for improving wealth variables both in the data collection process (e.g., by measuring active savings) and in data processing (e.g., by improving imputation algorithms). PMID:28316752
Height-Error Analysis for the FAA-Air Force Replacement Radar Program (FARR)
1991-08-01
[Figure 1-7. Climatology errors by month: percent frequency table of error by month, January-December.]
NASA Astrophysics Data System (ADS)
Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.
2013-08-01
The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.
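A heavily simplified sketch of the first step described above: grid-search the sampling shift that minimizes residual signal in spectral regions expected to be fully absorbed. The fractional resampling by interpolation and the function interface are assumptions for illustration; the actual TCCON processing is considerably more involved.

```python
import numpy as np

def estimate_sampling_shift(interferogram, absorbed_window, shifts):
    """Grid search for the sampling shift that minimizes residual signal
    in spectral regions expected to be fully absorbed (saturated lines).

    interferogram   : 1-D array of detector samples
    absorbed_window : boolean mask over the rfft output selecting the fully
                      absorbed spectral points
    shifts          : candidate shifts, in fractions of a sample
    """
    n = len(interferogram)
    x = np.arange(n)
    best_shift, best_metric = 0.0, np.inf
    for s in shifts:
        resampled = np.interp(x, x + s, interferogram)   # crude fractional resampling
        spectrum = np.abs(np.fft.rfft(resampled))
        metric = spectrum[absorbed_window].mean()        # residual intensity in saturated region
        if metric < best_metric:
            best_shift, best_metric = s, metric
    return best_shift
```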
Mendez, Michelle A; Popkin, Barry M; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R; Sánchez, María-José; González, Carlos A
2011-02-15
Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29-65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = -0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes.
Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.
Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2016-01-01
Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported. However, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when open cutting guide was used and clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly (P<0.05). Our study demonstrated that in UKA, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.
Effects of Reynolds number on orifice induced pressure error
NASA Technical Reports Server (NTRS)
Plentovich, E. B.; Gloss, B. B.
1982-01-01
Data previously reported for orifice induced pressure errors are extended to the case of higher Reynolds number flows, and a remedy is presented in the form of a porous metal plug for the orifice. Test orifices with apertures 0.330, 0.660, and 1.321 cm in diam. were fabricated on a flat plate for trials in the NASA Langley wind tunnel at Mach numbers 0.40-0.72. A boundary layer survey rake was also mounted on the flat plate to allow measurement of the total boundary layer pressures at the orifices. At the high Reynolds number flows studied, the orifice induced pressure error was found to be a function of the ratio of the orifice diameter to the boundary layer thickness. The error was effectively eliminated by the insertion of a porous metal disc set flush with the orifice outside surface.
Performance evaluation of a 1.6-µm methane DIAL system from ground, aircraft and UAV platforms.
Refaat, Tamer F; Ismail, Syed; Nehrir, Amin R; Hair, John W; Crawford, James H; Leifer, Ira; Shuman, Timothy
2013-12-16
Methane is an efficient absorber of infrared radiation and a potent greenhouse gas with a warming potential 72 times greater than carbon dioxide on a per-molecule basis. Development of methane active remote sensing capability using the differential absorption lidar (DIAL) technique enables scientific assessments of the gas's emissions and their impacts on the climate. A performance evaluation of a pulsed DIAL system for monitoring atmospheric methane is presented. This system leverages robust injection-seeded pulsed Nd:YAG-pumped Optical Parametric Oscillator (OPO) laser technology operating in the 1.645 µm spectral band. The system also leverages an efficient, low-noise, commercially available InGaAs avalanche photodetector (APD). Lidar signals and the error budget are analyzed for system operation on the ground in the range-resolved DIAL mode and from airborne platforms in the integrated path DIAL (IPDA) mode. Results indicate the system is capable of measuring methane concentration profiles with <1.0% total error up to 4.5 km range with 5-minute averaging from the ground. For airborne IPDA, the total error in the column dry mixing ratio is less than 0.3% with a 0.1 s average using ground returns. This system has a unique capability of combining signals from atmospheric scattering from layers above the surface with ground return signals, which provides methane column measurement between the atmospheric scattering layer and the ground directly. In this case, total errors of 0.5% and 1.2% are achieved with a 10 s average from airborne platforms at 8 km and 15.24 km altitudes, respectively. Due to the pulsed nature of the transmitter, the system is relatively insensitive to aerosol and cloud interferences. Such a DIAL system would be ideal for investigating high-latitude methane releases over polar ice sheets, permafrost regions, wetlands, and over the ocean during day and night. This system would have commercial potential for fossil fuel leak detection and industrial monitoring applications.
Planck 2013 results. VII. HFI time response and beams
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bowyer, J. W.; Bridges, M.; Bucher, M.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Haissinski, J.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hou, Z.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; MacTavish, C. J.; Maffei, B.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matsumura, T.; Matthai, F.; Mazzotta, P.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polegre, A. M.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savini, G.; Scott, D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
This paper characterizes the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors. The effective beam is the angular response including the effect of the optics, detectors, data processing and the scan strategy. The window function is the representation of this beam in the harmonic domain, which is required to recover an unbiased measurement of the cosmic microwave background angular power spectrum. The HFI is a scanning instrument and its effective beams are the convolution of: a) the optical response of the telescope and feeds; b) the processing of the time-ordered data and deconvolution of the bolometric and electronic transfer function; and c) the merging of several surveys to produce maps. The time response transfer functions are measured using observations of Jupiter and Saturn and by minimizing survey difference residuals. The scanning beam is the post-deconvolution angular response of the instrument, and is characterized with observations of Mars. The main beam solid angles are determined to better than 0.5% at each HFI frequency band. Observations of Jupiter and Saturn limit near sidelobes (within 5°) to about 0.1% of the total solid angle. Time response residuals remain as long tails in the scanning beams, but contribute less than 0.1% of the total solid angle. The bias and uncertainty in the beam products are estimated using ensembles of simulated planet observations that include the impact of instrumental noise and known systematic effects. The correlation structure of these ensembles is well-described by five error eigenmodes that are sub-dominant to sample variance and instrumental noise in the harmonic domain. A suite of consistency tests provides confidence that the error model represents a sufficient description of the data. The total error in the effective beam window functions is below 1% at 100 GHz up to multipole ℓ ~ 1500, and below 0.5% at 143 and 217 GHz up to ℓ ~ 2000.
CHARACTERIZATION OF THE MILLIMETER-WAVE POLARIZATION OF CENTAURUS A WITH QUaD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemcov, M.; Bock, J.; Leitch, E.
2010-02-20
Centaurus (Cen) A represents one of the best candidates for an isolated, compact, highly polarized source that is bright at typical cosmic microwave background (CMB) experiment frequencies. We present measurements of the 4° × 2° region centered on Cen A with QUaD, a CMB polarimeter whose absolute polarization angle is known to an accuracy of 0.5°. Simulations are performed to assess the effect of misestimation of the instrumental parameters on the final measurement, and systematic errors due to the field's background structure and temporal variability from Cen A's nuclear region are determined. The total (Q, U) of the inner lobe region is (1.00 ± 0.07 (stat.) ± 0.04 (sys.), -1.72 ± 0.06 ± 0.05) Jy at 100 GHz and (0.80 ± 0.06 ± 0.06, -1.40 ± 0.07 ± 0.08) Jy at 150 GHz, leading to polarization angles and total errors of -30.0° ± 1.1° and -29.1° ± 1.7°. These measurements will allow the use of Cen A as a polarized calibration source for future millimeter experiments.
Simulation of flow and water quality of the Arroyo Colorado, Texas, 1989-99
Raines, Timothy H.; Miranda, Roger M.
2002-01-01
A model parameter set for use with the Hydrological Simulation Program—FORTRAN watershed model was developed to simulate flow and water quality for selected properties and constituents for the Arroyo Colorado from the city of Mission to the Laguna Madre, Texas. The model simulates flow, selected water-quality properties, and constituent concentrations. The model can be used to estimate a total maximum daily load for selected properties and constituents in the Arroyo Colorado. The model was calibrated and tested for flow with data measured during 1989–99 at three streamflow-gaging stations. The errors for total flow volume ranged from -0.1 to 29.0 percent, and the errors for total storm volume ranged from -15.6 to 8.4 percent. The model was calibrated and tested for water quality for seven properties and constituents with 1989–99 data. The model was calibrated sequentially for suspended sediment, water temperature, biochemical oxygen demand, dissolved oxygen, nitrate nitrogen, ammonia nitrogen, and orthophosphate. The simulated concentrations of the selected properties and constituents generally matched the measured concentrations available for the calibration and testing periods. The model was used to simulate total point- and nonpoint-source loads for selected properties and constituents for 1989–99 for urban, natural, and agricultural land-use types. About one-third to one-half of the biochemical oxygen demand and nutrient loads are from urban point and nonpoint sources, although only 13 percent of the total land use in the basin is urban.
NASA Astrophysics Data System (ADS)
Davis, K. J.; Bakwin, P. S.; Yi, C.; Cook, B. D.; Wang, W.; Denning, A. S.; Teclaw, R.; Isebrands, J. G.
2001-05-01
Long-term, tower-based measurements using the eddy-covariance method have revealed a wealth of detail about the temporal dynamics of net ecosystem-atmosphere exchange (NEE) of CO2. The data also provide a measure of the annual net CO2 exchange. The area represented by these flux measurements, however, is limited, and doubts remain about possible systematic errors that may bias the annual net exchange measurements. Flux and mixing ratio measurements conducted at the WLEF tall tower as part of the Chequamegon Ecosystem-Atmosphere Study (ChEAS) allow for unique assessment of the uncertainties in NEE of CO2. The synergy between flux and mixing ratio observations shows the potential for comparing inverse and eddy-covariance methods of estimating NEE of CO2. Such comparisons may strengthen confidence in both results and begin to bridge the huge gap in spatial scales (at least 3 orders of magnitude) between continental or hemispheric scale inverse studies and kilometer-scale eddy covariance flux measurements. Data from WLEF and Willow Creek, another ChEAS tower, are used to estimate random and systematic errors in NEE of CO2. Random uncertainty in seasonal exchange rates and the annual integrated NEE, including both turbulent sampling errors and variability in environmental conditions, is small. Systematic errors are identified by examining changes in flux as a function of atmospheric stability and wind direction, and by comparing the multiple level flux measurements on the WLEF tower. Nighttime drainage is modest but evident. Systematic horizontal advection occurs during the morning turbulence transition. The potential total systematic error appears to be larger than random uncertainty, but still modest. The total systematic error, however, is difficult to assess. It appears that the WLEF region ecosystems were a small net sink of CO2 in 1997. It is clear that the summer uptake rate at WLEF is much smaller than that at most deciduous forest sites, including the nearby Willow Creek site. The WLEF tower also allows us to study the potential for monitoring continental CO2 mixing ratios from tower sites. Despite concerns about the proximity to ecosystem sources and sinks, it is clear that boundary layer CO2 mixing ratios can be monitored using typical surface layer towers. Seasonal and annual land-ocean mixing ratio gradients are readily detectable, providing the motivation for a flux-tower based mixing ratio observation network that could greatly improve the accuracy of inversion-based estimates of NEE of CO2, and enable inversions to be applied on smaller temporal and spatial scales. Results from the WLEF tower illustrate the degree to which local flux measurements represent interannual, seasonal and synoptic CO2 mixing ratio trends. This coherence between fluxes and mixing ratios serves to "regionalize" the eddy-covariance based local NEE observations.
Lima, Luiz Rodrigo Augustemak de; Martins, Priscila Custódio; Junior, Carlos Alencar Souza Alves; Castro, João Antônio Chula de; Silva, Diego Augusto Santos; Petroski, Edio Luiz
The aim of this study was to assess the validity of traditional anthropometric equations and to develop predictive equations of total body and trunk fat for children and adolescents living with HIV based on anthropometric measurements. Forty-eight children and adolescents of both sexes (24 boys) aged 7-17 years, living in Santa Catarina, Brazil, participated in the study. Dual-energy X-ray absorptiometry was used as the reference method to evaluate total body and trunk fat. Height, body weight, circumferences and triceps, subscapular, abdominal and calf skinfolds were measured. The traditional equations of Lohman and Slaughter were used to estimate body fat. Multiple regression models were fitted to predict total body fat (Model 1) and trunk fat (Model 2) using a backward selection procedure. Model 1 had an R² = 0.85 and a standard error of the estimate of 1.43. Model 2 had an R² = 0.80 and a standard error of the estimate of 0.49. The traditional equations of Lohman and Slaughter showed poor performance in estimating body fat in children and adolescents living with HIV. The prediction models using anthropometry provided reliable estimates and can be used by clinicians and healthcare professionals to monitor total body and trunk fat in children and adolescents living with HIV. Copyright © 2017 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.
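A minimal sketch of fitting a multiple regression with backward elimination, the general procedure named above; it assumes the statsmodels and pandas packages, uses a p-value rule for elimination, and runs on synthetic stand-in data with hypothetical predictor names rather than the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_select(X, y, threshold=0.05):
    """Backward elimination: refit OLS, dropping the predictor with the largest
    p-value until every remaining predictor is significant at `threshold`."""
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= threshold:
            return model, cols
        cols.remove(worst)
    raise ValueError("no predictor survived the elimination")

# Synthetic stand-in data: anthropometric predictors and a total body fat outcome (kg).
rng = np.random.default_rng(3)
n = 48
X = pd.DataFrame({
    "triceps_skinfold": rng.uniform(5, 25, n),
    "subscapular_skinfold": rng.uniform(4, 20, n),
    "waist_circumference": rng.uniform(50, 80, n),
    "height": rng.uniform(120, 175, n),          # intended to be uninformative here
})
y = 0.4 * X["triceps_skinfold"] + 0.3 * X["subscapular_skinfold"] + rng.normal(0, 1, n)

model, kept = backward_select(X, y)
print(kept, round(model.rsquared, 2))
```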
Time determination for spacecraft users of the Navstar Global Positioning System /GPS/
NASA Technical Reports Server (NTRS)
Grenchik, T. J.; Fang, B. T.
1977-01-01
Global Positioning System (GPS) navigation is performed by time measurements. A description is presented of a two-body model of spacecraft motion. Orbit determination is the process of inferring the position, velocity, and clock offset of the user from measurements made of the user motion in the Newtonian coordinate system. To illustrate the effect of clock errors and the accuracy with which the user spacecraft time and orbit may be determined, a low-Earth-orbit spacecraft (Seasat) tracked by six Phase I GPS space vehicles is considered. The results indicate that, in the absence of unmodeled dynamic parameter errors, clock biases may be determined to the nanosecond level. There is, however, a high correlation between the clock bias and the uncertainty in the gravitational parameter GM, i.e., the product of the universal gravitational constant and the total mass of the earth. It is, therefore, not possible to determine clock bias to better than 25-nanosecond accuracy in the presence of a gravitational error of one part per million.
Development and content validation of performance assessments for endoscopic third ventriculostomy.
Breimer, Gerben E; Haji, Faizal A; Hoving, Eelco W; Drake, James M
2015-08-01
This study aims to develop and establish the content validity of multiple expert rating instruments to assess performance in endoscopic third ventriculostomy (ETV), collectively called the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). The important aspects of ETV were identified through a review of current literature, ETV videos, and discussion with neurosurgeons, fellows, and residents. Three assessment measures were subsequently developed: a procedure-specific checklist (CL), a CL of surgical errors, and a global rating scale (GRS). Neurosurgeons from various countries, all identified as experts in ETV, were then invited to participate in a modified Delphi survey to establish the content validity of these instruments. In each Delphi round, experts rated their agreement including each procedural step, error, and GRS item in the respective instruments on a 5-point Likert scale. Seventeen experts agreed to participate in the study and completed all Delphi rounds. After item generation, a total of 27 procedural CL items, 26 error CL items, and 9 GRS items were posed to Delphi panelists for rating. An additional 17 procedural CL items, 12 error CL items, and 1 GRS item were added by panelists. After three rounds, strong consensus (>80% agreement) was achieved on 35 procedural CL items, 29 error CL items, and 10 GRS items. Moderate consensus (50-80% agreement) was achieved on an additional 7 procedural CL items and 1 error CL item. The final procedural and error checklist contained 42 and 30 items, respectively (divided into setup, exposure, navigation, ventriculostomy, and closure). The final GRS contained 10 items. We have established the content validity of three ETV assessment measures by iterative consensus of an international expert panel. Each measure provides unique assessment information and thus can be used individually or in combination, depending on the characteristics of the learner and the purpose of the assessment. These instruments must now be evaluated in both the simulated and operative settings, to determine their construct validity and reliability. Ultimately, the measures contained in the NEVAT may prove suitable for formative assessment during ETV training and potentially as summative assessment measures during certification.
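The consensus computation implied by the Delphi rounds (percentage of panelists rating an item 4 or 5 on the 5-point scale, binned into strong or moderate consensus) reduces to a short calculation. The thresholds follow the abstract; the ratings below are invented for illustration.

```python
import numpy as np

def consensus_level(ratings, agree_cut=4):
    """Fraction of panelists rating an item >= agree_cut on a 1-5 Likert scale."""
    agreement = float(np.mean(np.asarray(ratings) >= agree_cut))
    if agreement > 0.80:
        label = "strong"
    elif agreement >= 0.50:
        label = "moderate"
    else:
        label = "no consensus"
    return agreement, label

# 17 hypothetical panelist ratings for one checklist item
print(consensus_level([5, 5, 4, 4, 5, 4, 4, 5, 4, 5, 4, 3, 5, 4, 4, 5, 4]))
```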
[The relations of corneal, lenticular and total astigmatism].
Liang, D; Guan, Z; Lin, J
1995-06-01
To determine the relations of corneal, lenticular and total astigmatism and the changes in astigmatism with age. Outpatients with refractive errors were refracted with a retinoscope after cycloplegic drops were administered, and the radii of anterior corneal curvature were measured. One hundred and ninety-four cases (382 eyes) with refractive errors were studied. Of the eyes, 67.9% had regular corneal astigmatism, 68.1% irregular lenticular astigmatism and 60.7% regular total astigmatism; 88.5% of the corneal astigmatism had the same character as the total astigmatism. In 46% of the eyes the total astigmatism represented the summation of corneal and lenticular astigmatism, but in 41.3% of the eyes irregular lenticular astigmatism corrected the regular corneal astigmatism. Corneal, lenticular and total astigmatism changed from regular to irregular with increasing age. Linear correlation analysis showed a positive correlation between the power of horizontal corneal refraction and age, and a negative correlation between the power of vertical corneal refraction and age. The shape of the cornea was the major cause of total astigmatism, whereas the influence of the lens on total astigmatism varied. The reasons for the change of total astigmatism from regular to irregular with increasing age were changes in the power of corneal refraction, particularly the increase in the power of horizontal corneal refraction, together with irregular lenticular astigmatism.
Performance factors of mobile rich media job aids for community health workers
Florez-Arango, Jose F; Dunn, Kim; Zhang, Jiajie
2011-01-01
Objective To study and analyze the possible benefits to the performance of community health workers using point-of-care clinical guidelines implemented as interactive rich media job aids on small-format mobile platforms. Design A crossover study with one intervention (rich media job aids) and one control (traditional job aids), two periods, with 50 community health workers, each subject solving a total of 15 standardized cases per period (30 cases in total per subject). Measurements Error rate per case and task, protocol compliance. Results A total of 1394 cases were evaluated. The intervention reduced errors by an average of 33.15% (p=0.001) and increased protocol compliance by 30.18% (p<0.001). Limitations Medical cases were presented on human patient simulators in a laboratory setting, not on real patients. Conclusion These results indicate encouraging prospects for mHealth technologies in general, and the use of rich media clinical guidelines on cell phones in particular, for the improvement of community health worker performance in developing countries. PMID:21292702
Investigation of Stability of Precise Geodetic Instruments Used in Deformation Monitoring
NASA Astrophysics Data System (ADS)
Woźniak, Marek; Odziemczyk, Waldemar
2017-12-01
Monitoring systems using automated electronic total stations are an important element of the safety control of many engineering structures. To ensure appropriate credibility of the acquired data, the instruments (total stations in most cases) used for measurements must meet requirements for measurement accuracy as well as for the stability of the instrument axis system geometry. With regard to the above, it is expedient to conduct quality control of data acquired with electronic total stations in the context of the measurement procedures performed. This paper presents results of research conducted at the Faculty of Geodesy and Cartography at Warsaw University of Technology investigating the stability of "basic" error values (collimation, zero location for the V circle, inclination) for two types of automatic total stations: TDA 5005 and TCRP 1201+. The research also provided information concerning the influence of temperature changes on the stability of the investigated instruments' optical parameters. Results are presented using a graphical analytic technique. The final conclusions propose methods that allow avoiding the negative effects of measuring tool-set geometry changes when conducting precise deformation monitoring measurements.
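The "basic" instrument errors mentioned (collimation and vertical-circle zero location) are conventionally estimated from two-face observations. The sketch below uses the usual textbook formulas (the collimation formula holds near the horizon) with made-up readings; it is an illustration, not the procedure used in the study.

```python
def collimation_error(hz_I_deg, hz_II_deg):
    """Horizontal collimation error from two-face readings of the same target.
    For an error-free instrument, Face II = Face I + 180 deg (modulo 360)."""
    d = (hz_II_deg - hz_I_deg - 180.0) % 360.0
    if d > 180.0:
        d -= 360.0          # wrap to (-180, 180]
    return d / 2.0

def vertical_index_error(v_I_deg, v_II_deg):
    """Vertical-circle index (zero-location) error, zenith-angle convention:
    for an error-free instrument, v_I + v_II = 360 deg."""
    return (v_I_deg + v_II_deg - 360.0) / 2.0

# Illustrative readings (degrees)
print(f"collimation    c = {collimation_error(45.0012, 225.0036) * 3600:.1f} arcsec")
print(f"vertical index i = {vertical_index_error(88.9981, 271.0031) * 3600:.1f} arcsec")
```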
Bonny, Daniel P; Hull, M L; Howell, S M
2014-01-01
An accurate axis-finding technique is required to measure any changes from normal caused by total knee arthroplasty in the flexion-extension (F-E) and longitudinal rotation (LR) axes of the tibiofemoral joint. In a previous paper, we computationally determined how best to design and use an instrumented spatial linkage (ISL) to locate the F-E and LR axes such that rotational and translational errors were minimized. However, the ISL was not built and consequently was not calibrated; thus the errors in locating these axes were not quantified on an actual ISL. Moreover, previous methods to calibrate an ISL used calibration devices with accuracies that were either undocumented or insufficient for the device to serve as a gold standard. Accordingly, the objectives were to (1) construct an ISL using the previously established guidelines, (2) calibrate the ISL using an improved method, and (3) quantify the error in measuring changes in the F-E and LR axes. A 3D printed ISL was constructed and calibrated using a coordinate measuring machine, which served as a gold standard. Validation was performed using a fixture that represented the tibiofemoral joint with an adjustable F-E axis, and the errors in measuring changes to the positions and orientations of the F-E and LR axes were quantified. The resulting root mean squared errors (RMSEs) of the calibration residuals using the new calibration method were 0.24, 0.33, and 0.15 mm for the anterior-posterior, medial-lateral, and proximal-distal positions, respectively, and 0.11, 0.10, and 0.09 deg for varus-valgus, flexion-extension, and internal-external orientations, respectively. All RMSEs were below 0.29% of the respective full-scale range. When measuring changes to the F-E or LR axes, each orientation error was below 0.5 deg; when measuring changes in the F-E axis, each position error was below 1.0 mm. The largest position RMSE was when measuring a medial-lateral change in the LR axis (1.2 mm). Despite the large size of the ISL, these calibration residuals were better than those for previously published ISLs, particularly when measuring orientations, indicating that using a more accurate gold standard was beneficial in limiting the calibration residuals. The validation method demonstrated that this ISL is capable of accurately measuring clinically important changes (i.e. 1 mm and 1 deg) in the F-E and LR axes.
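The reported calibration figures (RMSE of the residuals, and RMSE as a fraction of the full-scale range) reduce to a short computation; a sketch with made-up residuals and an assumed full-scale value, not the study's data:

```python
import numpy as np

def rmse(residuals):
    """Root mean squared error of a set of calibration residuals."""
    residuals = np.asarray(residuals, dtype=float)
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical anterior-posterior residuals (mm) and an assumed full-scale range (mm)
ap_residuals = [0.21, -0.30, 0.18, -0.25, 0.27, -0.19]
full_scale_mm = 115.0

ap_rmse = rmse(ap_residuals)
print(f"RMSE = {ap_rmse:.2f} mm ({100 * ap_rmse / full_scale_mm:.2f}% of full scale)")
```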
Older drivers: On-road and off-road test results.
Selander, Helena; Lee, Hoe C; Johansson, Kurt; Falkmer, Torbjörn
2011-07-01
Eighty-five volunteer drivers, 65-85 years old, without cognitive impairments affecting their driving were examined in order to investigate driving errors characteristic of older drivers. In addition, any relationships between cognitive off-road test results and on-road test results, the latter being the gold standard, were identified. Performance measurements included the Trail Making Test (TMT), the Nordic Stroke Driver Screening Assessment (NorSDSA), the Useful Field of View (UFOV), self-rated driving performance and the two on-road protocols P-Drive and ROA. Some of the older drivers displayed questionable driving behaviour. In total, 21% of the participants failed the on-road assessment. Some of the specific errors were more serious than others. The most common driving errors involved speed: exceeding the speed limit or not controlling the speed. Correlations with the P-Drive protocol were established for the NorSDSA total score (weak), UFOV subtest 2 (weak), and UFOV subtest 3 (moderate). Correlations with the ROA protocol were established for UFOV subtest 2 (weak) and UFOV subtest 3 (weak). P-Drive and self-ratings correlated weakly, whereas no correlation between self-ratings and the ROA protocol was found. The results suggest that specific problems or errors seen in an older person's driving can actually be "normal driving behaviours". Copyright © 2011 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, Tom; Croft, Stephen; Jarman, Kenneth D.
The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
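The "random plus systematic" error-bar convention for a multi-item mass total can be sketched as follows: for n items each carrying an independent random standard uncertainty and a shared (fully correlated) systematic standard uncertainty, the random part of the total grows as √n while the systematic part grows as n. This is a generic GUM-style propagation sketch, not the specific NDA model discussed in the paper, and the numbers are placeholders.

```python
import math

def total_mass_uncertainty(n_items, sigma_random, sigma_systematic):
    """Standard uncertainty of a sum of n item masses (same units as the sigmas).
    Random errors are independent across items; the systematic error is shared."""
    random_part = math.sqrt(n_items) * sigma_random
    systematic_part = n_items * sigma_systematic
    return math.sqrt(random_part ** 2 + systematic_part ** 2)

# Illustrative numbers: 50 items, 0.5 g random and 0.2 g systematic per item
print(f"total-mass uncertainty: {total_mass_uncertainty(50, 0.5, 0.2):.2f} g")
```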
Cuadrado-Cenzual, M A; García Briñón, M; de Gracia Hills, Y; González Estecha, M; Collado Yurrita, L; de Pedro Moro, J A; Fernández Pérez, C; Arroyo Fernández, M
2015-01-01
Errors in the identification of patients and biological samples are among the problems with the highest risk of causing an adverse event in the patient. The aims were to detect and analyse the causes of patient identification errors in analytical requests (PIEAR) from emergency departments, and to develop improvement strategies. A process and protocol were designed, to be followed by all professionals involved in the requesting and performing of laboratory tests. Evaluation and monitoring indicators of PIEAR were determined before and after the implementation of these improvement measures (years 2010-2014). A total of 316 PIEAR were detected among 483,254 emergency service requests during the study period, representing a mean of 6.80/10,000 requests. Patient identification failure was the most frequent error in all the 6-monthly periods assessed, with a significant difference (P<.0001). The improvement strategies applied proved effective in detecting PIEAR as well as in preventing such errors. However, we must continue working with this strategy, promoting a culture of safety for all the professionals involved, and trying to achieve the goal that 100% of analytical requests and samples are properly identified. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
Refractive errors and strabismus in Down's syndrome in Korea.
Han, Dae Heon; Kim, Kyun Hyung; Paik, Hae Jung
2012-12-01
The aims of this study were to examine the distribution of refractive errors and clinical characteristics of strabismus in Korean patients with Down's syndrome. A total of 41 Korean patients with Down's syndrome were screened for strabismus and refractive errors in 2009. The 41 patients had an average age of 11.9 years (range, 2 to 36 years). Eighteen patients (43.9%) had strabismus. Ten (23.4%) of the 18 patients exhibited esotropia and the others had intermittent exotropia. The most frequently detected type of esotropia was acquired non-accommodative esotropia, and that of exotropia was the basic type. Fifteen patients (36.6%) had hypermetropia and 20 (48.8%) had myopia. The patients with esotropia had refractive errors of +4.89 diopters (D, ±3.73) and the patients with exotropia had refractive errors of -0.31 D (±1.78). Six of the ten patients with esotropia had an accommodation weakness. Twenty-one patients (63.4%) had astigmatism. Eleven (28.6%) of the 21 patients had anisometropia and six (14.6%) of those had clinically significant anisometropia. In Korean patients with Down's syndrome, esotropia was more common than exotropia, and hypermetropia was more common than myopia. In particular, Down's syndrome patients with esotropia generally exhibited clinically significant hyperopic errors (>+3.00 D) and evidence of under-accommodation. Thus, hypermetropia and accommodation weakness could be possible factors in esotropia when it occurs in Down's syndrome patients. Based on the results of this study, eye examinations of Down's syndrome patients should routinely include a measure of accommodation at near distances, and bifocals should be considered for those with evidence of under-accommodation.
Evaluating rainfall errors in global climate models through cloud regimes
NASA Astrophysics Data System (ADS)
Tan, Jackson; Oreopoulos, Lazaros; Jakob, Christian; Jin, Daeho
2017-07-01
Global climate models suffer from a persistent shortcoming in their simulation of rainfall by producing too much drizzle and too little intense rain. This erroneous distribution of rainfall is a result of deficiencies in the representation of underlying processes of rainfall formation. In the real world, clouds are precursors to rainfall and the distribution of clouds is intimately linked to the rainfall over the area. This study examines the model representation of tropical rainfall using the cloud regime concept. In observations, these cloud regimes are derived from cluster analysis of joint histograms of cloud properties retrieved from passive satellite measurements. With the implementation of satellite simulators, comparable cloud regimes can be defined in models. This enables us to contrast the rainfall distributions of cloud regimes in 11 CMIP5 models to observations and decompose the rainfall errors by cloud regimes. Many models underestimate the rainfall from the organized convective cloud regime, which in observations provides half of the total rain in the tropics. Furthermore, these rainfall errors are relatively independent of the model's accuracy in representing this cloud regime. Error decomposition reveals that the biases are compensated in some models by a more frequent occurrence of the cloud regime, and most models exhibit substantial cancellation of rainfall errors from different regimes and regions. Therefore, underlying the relatively accurate total rainfall in models is a significant cancellation of rainfall errors from different cloud types and regions. The fact that a good representation of clouds does not lead to appreciable improvement in rainfall suggests a certain disconnect in the cloud-precipitation processes of global climate models.
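The regime-based error decomposition described here amounts to splitting the mean-rain bias into per-regime contributions, each being the difference of (frequency of occurrence × in-regime rain rate) between model and observations, so that compensating errors between regimes become visible. A sketch with invented numbers for three regimes (not CMIP5 values):

```python
# freq = regime frequency of occurrence; rate = mean rain rate inside the regime (mm/day)
obs   = {"organized_convection": (0.10, 20.0), "shallow_cumulus": (0.50, 1.0), "suppressed": (0.40, 0.2)}
model = {"organized_convection": (0.08, 15.0), "shallow_cumulus": (0.55, 2.5), "suppressed": (0.37, 0.3)}

contributions = {}
for regime, (f_o, r_o) in obs.items():
    f_m, r_m = model[regime]
    contributions[regime] = f_m * r_m - f_o * r_o      # regime contribution to the total bias

for regime, c in contributions.items():
    print(f"{regime:22s} {c:+6.2f} mm/day")
print(f"{'total bias':22s} {sum(contributions.values()):+6.2f} mm/day   (regimes partly cancel)")
```

In this toy example the underestimated organized-convection rain is largely offset by overestimated drizzle from the shallow regime, mirroring the cancellation the abstract describes.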
Knowledge of healthcare professionals about medication errors in hospitals
Abdel-Latif, Mohamed M. M.
2016-01-01
Context: Medication errors are the most common type of medical error in hospitals and a leading cause of morbidity and mortality among patients. Aims: The aim of the present study was to assess the knowledge of healthcare professionals about medication errors in hospitals. Settings and Design: A self-administered questionnaire was distributed to randomly selected healthcare professionals in eight hospitals in Madinah, Saudi Arabia. Subjects and Methods: An 18-item survey was designed and comprised questions on demographic data, knowledge of medication errors, availability of reporting systems in hospitals, attitudes toward error reporting, and causes of medication errors. Statistical Analysis Used: Data were analyzed with Statistical Package for the Social Sciences software Version 17. Results: A total of 323 healthcare professionals completed the questionnaire (a 64.6% response rate), comprising 138 (42.72%) physicians, 34 (10.53%) pharmacists, and 151 (46.75%) nurses. A majority of the participants had good knowledge of the medication error concept and its dangers to patients. Only 68.7% of them were aware of reporting systems in hospitals. Healthcare professionals reported that there was no clear mechanism available for reporting errors in most hospitals. Prescribing (46.5%) and administration (29%) errors were the main types of errors. The medications most frequently involved in errors were anti-hypertensives, antidiabetics, antibiotics, digoxin, and insulin. Conclusions: This study revealed differences in awareness of medication errors among healthcare professionals in hospitals. The poor knowledge about medication errors emphasizes the urgent need to adopt appropriate measures to raise awareness about medication errors in Saudi hospitals. PMID:27330261
A portable non-contact displacement sensor and its application of lens centration error measurement
NASA Astrophysics Data System (ADS)
Yu, Zong-Ru; Peng, Wei-Jei; Wang, Jung-Hsing; Chen, Po-Jui; Chen, Hua-Lin; Lin, Yi-Hao; Chen, Chun-Cheng; Hsu, Wei-Yao; Chen, Fong-Zhi
2018-02-01
We present a portable non-contact displacement sensor (NCDS) based on the astigmatic method for micron-level displacement measurement. The NCDS is composed of a collimated laser, a polarizing beam splitter, a 1/4 wave plate, an aspheric objective lens, an astigmatic lens and a four-quadrant photodiode. A visible laser source is adopted for easier alignment and usage. The dimensions of the sensor are limited to 115 mm x 36 mm x 56 mm, and a control box handles signal and power control between the sensor and the computer. The NCDS achieves micron accuracy over a +/-30 μm working range, and the working distance is constrained to a few millimeters. We also demonstrate the application of the NCDS to lens centration error measurement, which is analogous to measuring the total indicator runout (TIR) or edge thickness difference (ETD) of a lens with a contact dial indicator. This application is advantageous for measuring lenses made of soft materials that would be scratched by a contact dial indicator.
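In the astigmatic method with a four-quadrant photodiode, the displacement (focus error) signal is conventionally taken as the difference of the diagonal quadrant pairs normalized by the total intensity. The sketch below shows that standard formula with illustrative quadrant values; the specific signal processing of this sensor may differ.

```python
def focus_error_signal(a, b, c, d):
    """Normalized astigmatic focus-error signal from quadrants A..D
    (A and C on one diagonal, B and D on the other)."""
    total = a + b + c + d
    return ((a + c) - (b + d)) / total if total else 0.0

# An in-focus spot is circular -> signal ~ 0; defocus elongates the spot along one diagonal.
print(focus_error_signal(1.0, 1.0, 1.0, 1.0))   # ~0.0, in focus
print(focus_error_signal(1.3, 0.8, 1.2, 0.7))   # positive, surface displaced to one side of focus
```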
Nguyen, Hung P.; Dingwell, Jonathan B.
2012-01-01
Determining how the human nervous system contends with neuro-motor noise is vital to understanding how humans achieve accurate goal-directed movements. Experimentally, people learning skilled tasks tend to reduce variability in distal joint movements more than in proximal joint movements. This suggests that they might be imposing greater control over distal joints than proximal joints. However, the reasons for this remain unclear, largely because it is not experimentally possible to directly manipulate either the noise or the control at each joint independently. Therefore, this study used a 2 degree-of-freedom torque driven arm model to determine how different combinations of noise and/or control independently applied at each joint affected the reaching accuracy and the total work required to make the movement. Signal-dependent noise was simultaneously and independently added to the shoulder and elbow torques to induce endpoint errors during planar reaching. Feedback control was then applied, independently and jointly, at each joint to reduce endpoint error due to the added neuromuscular noise. Movement direction and the inertia distribution along the arm were varied to quantify how these biomechanical variations affected the system performance. Endpoint error and total net work were computed as dependent measures. When each joint was independently subjected to noise in the absence of control, endpoint errors were more sensitive to distal (elbow) noise than to proximal (shoulder) noise for nearly all combinations of reaching direction and inertia ratio. The effects of distal noise on endpoint errors were more pronounced when inertia was distributed more toward the forearm. In contrast, the total net work decreased as mass was shifted to the upper arm for reaching movements in all directions. When noise was present at both joints and joint control was implemented, controlling the distal joint alone reduced endpoint errors more than controlling the proximal joint alone for nearly all combinations of reaching direction and inertia ratio. Applying control only at the distal joint was more effective at reducing endpoint errors when more of the mass was more proximally distributed. Likewise, controlling the distal joint alone required less total net work than controlling the proximal joint alone for nearly all combinations of reaching distance and inertia ratio. It is more efficient to reduce endpoint error and energetic cost by selectively applying control to reduce variability in the distal joint than the proximal joint. The reasons for this arise from the biomechanical configuration of the arm itself. PMID:22757504
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...
The MOBID-2 pain scale: Reliability and responsiveness to pain in patients with dementia
Husebo, BS; Ostelo, R; Strand, LI
2014-01-01
Background The Mobilization-Observation-Behavior-Intensity-Dementia-2 (MOBID-2) pain scale is a staff-administered pain tool for patients with dementia. This study explores MOBID-2's test–retest reliability, measurement error and responsiveness to change. Methods Analyses are based upon data from a cluster randomized trial including 352 patients with advanced dementia from 18 Norwegian nursing homes. Test–retest reliability between baseline and week 2 (n = 163), and weeks 2 and 4 (n = 159), was examined in patients not expected to change (controls), using the intraclass correlation coefficient (ICC2.1), standard error of measurement (SEM) and smallest detectable change (SDC). Responsiveness was examined by testing six a priori formulated hypotheses about the association between change scores on MOBID-2 and other outcome measures. Results ICCs of the total MOBID-2 scores were 0.81 (0–2 weeks) and 0.85 (2–4 weeks). SEM and SDC were 1.9 and 3.1 (0–2 weeks) and 1.4 and 2.3 (2–4 weeks), respectively. Five out of six hypotheses were confirmed: MOBID-2 discriminated (p < 0.001) between change in patients with and without a stepwise protocol for treatment of pain (SPTP). Moderate association (r = 0.35) was demonstrated with the Cohen-Mansfield Agitation Inventory, and no association with the Mini-Mental State Examination, Functional Assessment Staging and Activities of Daily Living. Expected associations between change scores of MOBID-2 and the Neuropsychiatric Inventory – Nursing Home version were not confirmed. Conclusion The SEM and SDC in connection with the MOBID-2 pain scale indicate that the instrument is responsive to a decrease in pain after a SPTP. Satisfactory test–retest reliability across test periods was demonstrated. Change scores ≥ 3 on total and subscales are clinically relevant and are beyond measurement error. PMID:24799157
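One common convention relates these quantities as SEM = SD·√(1−ICC) and SDC = 1.96·√2·SEM; the exact convention used in this study may differ, and the values below are purely illustrative rather than an attempt to reproduce the reported figures.

```python
import math

def sem_from_icc(sd_between_subjects, icc):
    """Standard error of measurement from a test-retest ICC (common convention)."""
    return sd_between_subjects * math.sqrt(1.0 - icc)

def smallest_detectable_change(sem):
    """Smallest individual change exceeding measurement error at the 95% level."""
    return 1.96 * math.sqrt(2.0) * sem

# Illustrative inputs: the ICC is in the range reported; the SD is an assumption.
sem = sem_from_icc(sd_between_subjects=3.0, icc=0.85)
print(f"SEM = {sem:.1f} points, SDC = {smallest_detectable_change(sem):.1f} points")
```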
Personal protective equipment for the Ebola virus disease: A comparison of 2 training programs.
Casalino, Enrique; Astocondor, Eugenio; Sanchez, Juan Carlos; Díaz-Santana, David Enrique; Del Aguila, Carlos; Carrillo, Juan Pablo
2015-12-01
Personal protective equipment (PPE) for preventing Ebola virus disease (EVD) includes basic PPE (B-PPE) and enhanced PPE (E-PPE). Our aim was to compare conventional training programs (CTPs) and reinforced training programs (RTPs) on the use of B-PPE and E-PPE. Four groups were created, designated CTP-B, CTP-E, RTP-B, and RTP-E. All groups received the same theoretical training, followed by 3 practical training sessions. A total of 120 students were included (30 per group). In all 4 groups, the frequency and number of total errors and critical errors decreased significantly over the course of the training sessions (P < .01). The RTP was associated with a greater reduction in the number of total errors and critical errors (P < .0001). During the third training session, we noted an error frequency of 7%-43%, a critical error frequency of 3%-40%, 0.3-1.5 total errors, and 0.1-0.8 critical errors per student. The B-PPE groups had the fewest errors and critical errors (P < .0001). Our results indicate that both training methods improved the student's proficiency, that B-PPE appears to be easier to use than E-PPE, that the RTP achieved better proficiency for both PPE types, and that a number of students are still potentially at risk for EVD contamination despite the improvements observed during the training. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu
2015-01-01
The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with subsided land caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority (LCMCS) method to establish TN prediction models by considering the relationship between spectral reflectance (measured by an ASD FieldSpec 3 spectroradiometer) and TN, based on spectral reflectance curves of soil samples collected from subsided land delineated by synthetic aperture radar interferometry (InSAR) technology. Based on the 1655 selected effective bands of the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log{1/R}]′) (correlation coefficients, p < 0.01), the optimal model of the LCMCS method was obtained as the final model, which produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) when compared with models built by the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive performance of the LCMCS model was optimal in Cangzhou, Renqiu and Fengfeng District. Results indicate that the LCMCS method has great potential to monitor TN in subsided lands caused by the extraction of natural resources including groundwater, oil and coal. PMID:26213935
NASA Astrophysics Data System (ADS)
Manago, Naohiro; Noguchi, Katsuyuki; Hashimoto, George L.; Senshu, Hiroki; Otobe, Naohito; Suzuki, Makoto; Kuze, Hiroaki
2017-12-01
Dust and water vapor are important constituents in the Martian atmosphere, exerting significant influence on the heat balance of the atmosphere and surface. We have developed a method to retrieve optical and physical properties of Martian dust from spectral intensities of direct and scattered solar radiation to be measured using a multi-wavelength environmental camera onboard a Mars lander. Martian dust is assumed to be composed of silicate-like substrate and hematite-like inclusion, having spheroidal shape with a monomodal gamma size distribution. Error analysis based on simulated data reveals that appropriate combinations of three bands centered at 450, 550, and 675 nm wavelengths and 4 scattering angles of 3°, 10°, 50°, and 120° lead to good retrieval of four dust parameters, namely, aerosol optical depth, effective radius and variance of size distribution, and volume mixing ratio of hematite. Retrieval error increases when some of the observational parameters such as color ratio or aureole are omitted from the retrieval. Also, the capability of retrieving total column water vapor is examined through observations of direct and scattered solar radiation intensities at 925, 935, and 972 nm. The simulation and error analysis presented here will be useful for designing an environmental camera that can elucidate the dust and water vapor properties in a future Mars lander mission.
Qu, Weina; Ge, Yan; Zhang, Qian; Zhao, Wenguo; Zhang, Kan
2015-07-01
Driver inattention is a significant cause of motor vehicle collisions and incidents. The purpose of this study was to translate the Attention-Related Driving Error Scale (ARDES) into Chinese and to verify its reliability and validity. A total of 317 drivers completed the Chinese version of the ARDES, the Dula Dangerous Driving Index (DDDI), the Attention-Related Cognitive Errors Scale (ARCES) and the Mindful Attention Awareness Scale (MAAS) questionnaires. Specific sociodemographic variables and traffic violations were also measured. Psychometric results confirm that the ARDES-China has adequate psychometric properties (Cronbach's alpha=0.88) to be a useful tool for evaluating proneness to attentional errors in the Chinese driving population. First, ARDES-China scores were positively correlated with both DDDI scores and number of accidents in the prior year; in addition, ARDES-China scores were a significant predictor of dangerous driving behavior as measured by DDDI. Second, we found that ARDES-China scores were strongly correlated with ARCES scores and negatively correlated with MAAS scores. Finally, different demographic groups exhibited significant differences in ARDES scores; in particular, ARDES scores varied with years of driving experience. Copyright © 2015 Elsevier Ltd. All rights reserved.
Spectral purity study for IPDA lidar measurement of CO2
NASA Astrophysics Data System (ADS)
Ma, Hui; Liu, Dong; Xie, Chen-Bo; Tan, Min; Deng, Qian; Xu, Ji-Wei; Tian, Xiao-Min; Wang, Zhen-Zhu; Wang, Bang-Xin; Wang, Ying-Jian
2018-02-01
High-sensitivity, globally covering observation of carbon dioxide (CO2) is expected from space-borne integrated path differential absorption (IPDA) lidar, which has been designed as the next-generation measurement. Stringent precision of space-borne CO2 data, for example 1 ppm or better, is required to address the largest number of carbon cycle science questions. Spectral purity, defined as the ratio of the effectively absorbed energy to the total energy transmitted, is one of the most important system parameters of IPDA lidar and directly influences the precision of the CO2 retrieval. Because the column-averaged dry-air mixing ratio of CO2 is inferred from a comparison of the two echo pulse signals, laser output accompanied by unexpected spectrally broadband background radiation introduces a significant systematic error. In this study, the spectral energy density line shape and the spectral impurity line shape are modeled as Lorentz line shapes for the simulation, and the latter is assumed to be a component not absorbed by CO2. An error equation is derived from IPDA detection theory to calculate the systematic error caused by spectral impurity. For a spectral purity of 99%, the induced error could reach up to 8.97 ppm.
Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Mather, Janice L.; Taylor, Shawn C.
2015-01-01
In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
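A minimal sketch of the root-sum-square combination described above, with placeholder standard-uncertainty values for the individual contributors (the full analysis also separates bias and precision components and carries the K-factor correction, which this sketch omits):

```python
import math

def rss(*components):
    """Root-sum-square combination of independent standard uncertainties."""
    return math.sqrt(sum(c ** 2 for c in components))

# Placeholder relative standard uncertainties (fraction of the measured leak rate)
u_resolution    = 0.02
u_repeatability = 0.05
u_hysteresis    = 0.01
u_drift         = 0.03
u_cal_standard  = 0.04   # calibration (reference leak) standard

u_total = rss(u_resolution, u_repeatability, u_hysteresis, u_drift, u_cal_standard)
print(f"combined standard uncertainty: {100 * u_total:.1f}% of reading")
```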
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.
2015-04-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
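The budget arithmetic behind these uncertainty statements can be sketched as follows: net uptake is total emissions minus the atmospheric growth rate, and, treating the terms as independent, its uncertainty is the root-sum-square of the component uncertainties. The values below are round numbers in the spirit of the abstract, not the paper's estimates, and the independence assumption is a simplification of the correlated-error treatment the authors describe.

```python
import math

def net_uptake(fossil, land_use, growth):
    """Net C uptake by land + ocean (Pg C / yr) = total emissions - atmospheric growth."""
    return fossil + land_use - growth

def uptake_uncertainty(u_fossil, u_land_use, u_growth):
    """Root-sum-square of independent 2-sigma component uncertainties (Pg C / yr)."""
    return math.sqrt(u_fossil ** 2 + u_land_use ** 2 + u_growth ** 2)

# Illustrative 2000s-era round numbers (Pg C / yr)
uptake = net_uptake(fossil=8.0, land_use=1.0, growth=4.0)
u = uptake_uncertainty(u_fossil=1.0, u_land_use=0.7, u_growth=0.3)
print(f"net uptake ~ {uptake:.1f} +/- {u:.1f} Pg C / yr (2-sigma)")
```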
Al-lela, Omer Qutaiba B; Bahari, Mohd Baidi; Al-abbassi, Mustafa G; Salih, Muhannad R M; Basher, Amena Y
2012-06-06
The immunization status of children is improved by interventions that increase community demand for compulsory and non-compulsory vaccines, one of the most important of which relates to immunization providers. The aim of this study was to evaluate the activities of immunization providers in terms of activity time and cost, to calculate the cost of immunization doses, and to determine the cost of immunization dose errors. A time-motion and cost analysis study design was used. Five public health clinics in Mosul, Iraq participated in the study. Fifty (50) vaccine doses were required to estimate activity time and cost. A micro-costing method was used; time and cost data were collected for each immunization-related activity performed by the clinic staff. A stopwatch was used to measure the duration of activity interactions between the parents and clinic staff. The immunization service cost was calculated by multiplying the average salary per minute by the activity time in minutes. A total of 528 immunization cards of Iraqi children were scanned to determine the number and cost of immunization dose errors (extra immunization doses and invalid doses). The average time for child registration was 6.7 min per immunization dose, and the physician spent more than 10 min per dose. Nurses needed more than 5 min to complete child vaccination. The total cost of immunization activities was 1.67 US$ per immunization dose. The measles vaccine (fifth dose) had a lower price (0.42 US$) than all other immunization doses. The cost of a total of 288 invalid doses was 744.55 US$, and the cost of a total of 195 extra immunization doses was 503.85 US$. The time spent on physicians' activities was longer than that spent on registrars' and nurses' activities. The physicians' total cost was higher than the registrars' and nurses' costs. The total immunization cost will increase by about 13.3% owing to dose errors. Copyright © 2012 Elsevier Ltd. All rights reserved.
Performance of electrolyte measurements assessed by a trueness verification program.
Ge, Menglei; Zhao, Haijian; Yan, Ying; Zhang, Tianjiao; Zeng, Jie; Zhou, Weiyan; Wang, Yufei; Meng, Qinghui; Zhang, Chuanbao
2016-08-01
In this study, we analyzed frozen sera with known commutabilities for standardization of serum electrolyte measurements in China. Fresh frozen sera were sent to 187 clinical laboratories in China for measurement of four electrolytes (sodium, potassium, calcium, and magnesium). Target values were assigned by two reference laboratories. Precision (CV), trueness (bias), and accuracy [total error (TEa)] were used to evaluate measurement performance, and the tolerance limit derived from the biological variation was used as the evaluation criterion. About half of the laboratories used a homogeneous system (same manufacturer for instrument, reagent and calibrator) for calcium and magnesium measurement, and more than 80% of laboratories used a homogeneous system for sodium and potassium measurement. More laboratories met the tolerance limit for imprecision (coefficient of variation [CVa]) than the tolerance limits for trueness (biasa) and TEa. For sodium, calcium, and magnesium, the minimal performance criterion derived from biological variation was used, and the pass rates for total error were approximately equal to those for bias (<50%). For potassium, the pass rates for CV and TE were more than 90%. Compared with the non-homogeneous systems, the homogeneous systems were superior for all three quality specifications. The use of commutable proficiency testing/external quality assessment (PT/EQA) samples with values assigned by reference methods can monitor performance and provide reliable data for improving the performance of laboratory electrolyte measurement. The homogeneous systems were superior to the non-homogeneous systems, whereas accuracy of assigned values of calibrators and assay stability remained challenges.
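Tolerance limits of this kind are commonly derived from within-subject (CVi) and between-subject (CVg) biological variation using the Fraser-style formulas sketched below; the CVi/CVg values and the observed laboratory CV and bias are placeholders, and the study's exact criterion levels may differ from this convention.

```python
import math

def tolerance_limits(cv_i, cv_g, level=1.0):
    """Allowable imprecision, bias and total error from biological variation (percent units).
    level: 0.5 ~ 'optimal', 1.0 ~ 'desirable', 1.5 ~ 'minimal' performance."""
    cv_a_max = 0.5 * cv_i * level
    bias_max = 0.25 * math.sqrt(cv_i ** 2 + cv_g ** 2) * level
    tea_max = 1.65 * cv_a_max + bias_max
    return cv_a_max, bias_max, tea_max

def passes(cv_obs, bias_obs, cv_i, cv_g, level=1.0):
    """Return (CV pass, bias pass, total-error pass) for one laboratory."""
    cv_a_max, bias_max, tea_max = tolerance_limits(cv_i, cv_g, level)
    te_obs = abs(bias_obs) + 1.65 * cv_obs
    return cv_obs <= cv_a_max, abs(bias_obs) <= bias_max, te_obs <= tea_max

# Placeholder biological-variation values for a potassium-like analyte (percent CVs)
print(passes(cv_obs=1.5, bias_obs=1.0, cv_i=4.6, cv_g=5.6))
```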
Müller, Erich
2016-01-01
In the laboratory, optoelectronic stereophotogrammetry is one of the most commonly used motion capture systems; particularly, when position- or orientation-related analyses of human movements are intended. However, for many applied research questions, field experiments are indispensable, and it is not a priori clear whether optoelectronic stereophotogrammetric systems can be expected to perform similarly to in-lab experiments. This study aimed to assess the instrumental errors of kinematic data collected on a ski track using optoelectronic stereophotogrammetry, and to investigate the magnitudes of additional skiing-specific errors and soft tissue/suit artifacts. During a field experiment, the kinematic data of different static and dynamic tasks were captured by the use of 24 infrared-cameras. The distances between three passive markers attached to a rigid bar were stereophotogrammetrically reconstructed and, subsequently, were compared to the manufacturer-specified exact values. While at rest or skiing at low speed, the optoelectronic stereophotogrammetric system’s accuracy and precision for determining inter-marker distances were found to be comparable to those known for in-lab experiments (< 1 mm). However, when measuring a skier’s kinematics under “typical” skiing conditions (i.e., high speeds, inclined/angulated postures and moderate snow spraying), additional errors were found to occur for distances between equipment-fixed markers (total measurement errors: 2.3 ± 2.2 mm). Moreover, for distances between skin-fixed markers, such as the anterior hip markers, additional artifacts were observed (total measurement errors: 8.3 ± 7.1 mm). In summary, these values can be considered sufficient for the detection of meaningful position- or orientation-related differences in alpine skiing. However, it must be emphasized that the use of optoelectronic stereophotogrammetry on a ski track is seriously constrained by limited practical usability, small-sized capture volumes and the occurrence of extensive snow spraying (which results in marker obscuration). The latter limitation possibly might be overcome by the use of more sophisticated cluster-based marker sets. PMID:27560498
NASA Astrophysics Data System (ADS)
Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.
2012-12-01
Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single type of data are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior), then the corresponding optimized CO2 concentration fields are compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (version 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find the GOSAT data to improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. With this assurance, we compare the flux estimates given by assimilating the ACOS GOSAT retrievals to similar ones given by NIES GOSAT column retrievals, bias-corrected in a similar manner. Finally, we have found systematic differences on the order of a half ppm between column CO2 integrals from 18 TCCON sites and those given by assimilating NOAA in situ data (both surface and aircraft profile) in this approach. We assess how these differences change in switching to a newer version of the TCCON retrieval software.
Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.
Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente
2014-07-15
Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
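In an FMEA of this kind, each failure mode's ratings for occurrence (incidence), detectability, and severity are typically multiplied into a risk priority number (RPN), and the highest-RPN items are addressed first. The sketch below is generic: the rating scales and failure modes are invented, not the paper's 82 identified failures.

```python
def risk_priority_number(occurrence, detectability, severity):
    """Classic FMEA RPN: higher = riskier. Each rating is typically 1-10,
    with detectability scored so harder-to-detect failures score higher."""
    return occurrence * detectability * severity

failure_modes = {
    "transcription error in medical order": (6, 7, 8),
    "wrong additive volume during compounding": (4, 8, 9),
    "label mix-up at dispensing": (3, 5, 9),
}

ranked = sorted(failure_modes.items(),
                key=lambda kv: risk_priority_number(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"RPN {risk_priority_number(*scores):3d}  {name}")
```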
Comparison of measured and computed pitot pressures in a leading edge vortex from a delta wing
NASA Technical Reports Server (NTRS)
Murman, Earll M.; Powell, Kenneth G.
1987-01-01
Calculations are presented for a 75-deg swept flat plate wing tested at a freestream Mach number of 1.95 and 10 degrees angle of attack. Good agreement is found between computational data and previous experimental pitot pressure measurements in the core of the vortex, suggesting that the total pressure losses predicted by the Euler equation solvers are not errors, but realistic predictions. Data suggest that the magnitude of the total pressure loss is related to the circumferential velocity field through the vortex, and that it increases with angle of attack and varies with Mach number and sweep angle.
A description of medication errors reported by pharmacists in a neonatal intensive care unit.
Pawluk, Shane; Jaam, Myriam; Hazi, Fatima; Al Hail, Moza Sulaiman; El Kassem, Wessam; Khalifa, Hanan; Thomas, Binny; Abdul Rouf, Pallivalappila
2017-02-01
Background Patients in the Neonatal Intensive Care Unit (NICU) are at an increased risk for medication errors. Objective The objective of this study is to describe the nature and setting of medication errors occurring in patients admitted to an NICU in Qatar based on a standard electronic system reported by pharmacists. Setting Neonatal intensive care unit, Doha, Qatar. Method This was a retrospective cross-sectional study on medication errors reported electronically by pharmacists in the NICU between January 1, 2014 and April 30, 2015. Main outcome measure Data collected included patient information, and incident details including error category, medications involved, and follow-up completed. Results A total of 201 NICU pharmacists-reported medication errors were submitted during the study period. All reported errors did not reach the patient and did not cause harm. Of the errors reported, 98.5% occurred in the prescribing phase of the medication process with 58.7% being due to calculation errors. Overall, 53 different medications were documented in error reports with the anti-infective agents being the most frequently cited. The majority of incidents indicated that the primary prescriber was contacted and the error was resolved before reaching the next phase of the medication process. Conclusion Medication errors reported by pharmacists occur most frequently in the prescribing phase of the medication process. Our data suggest that error reporting systems need to be specific to the population involved. Special attention should be paid to frequently used medications in the NICU as these were responsible for the greatest numbers of medication errors.
Kim, Youngwon; Welk, Gregory J
2017-02-01
Sedentary behaviour (SB) has emerged as a modifiable risk factor, but little is known about measurement errors of SB. The purpose of this study was to determine the validity of 24-h Physical Activity Recall (24PAR) relative to SenseWear Armband (SWA) for assessing SB. Each participant (n = 1485) undertook a series of data collection procedures on two randomly selected days: wearing a SWA for full 24-h, and then completing the telephone-administered 24PAR the following day to recall the past 24-h activities. Estimates of total sedentary time (TST) were computed without the inclusion of reported or recorded sleep time. Equivalence testing was used to compare estimates of TST. Analyses from equivalence testing showed no significant equivalence of 24PAR for TST (90% CI: 443.0 and 457.6 min · day -1 ) relative to SWA (equivalence zone: 580.7 and 709.8 min · day -1 ). Bland-Altman plots indicated individuals that were extremely or minimally sedentary provided relatively comparable sedentary time between 24PAR and SWA. Overweight/obese and/or older individuals were more likely to under-estimate sedentary time than normal weight and/or younger individuals. Measurement errors of 24PAR varied by the level of sedentary time and demographic indicators. This evidence informs future work to develop measurement error models to correct for errors of self-reports.
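The equivalence-testing logic used here (equivalence is claimed only if the 90% CI for the self-report estimate falls entirely inside a zone defined around the criterion measure) reduces to an interval check; a sketch using the values quoted in the abstract:

```python
def is_equivalent(ci_low, ci_high, zone_low, zone_high):
    """Equivalence is claimed only when the whole 90% CI lies inside the equivalence zone."""
    return zone_low <= ci_low and ci_high <= zone_high

# Values from the abstract: 24PAR 90% CI vs. the SWA-defined equivalence zone (min/day)
print(is_equivalent(443.0, 457.6, 580.7, 709.8))   # False -> not equivalent
```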
Karanovic, Nenad; Carev, Mladen; Kardum, Goran; Pecotic, Renata; Valic, Maja; Karanovic, Sandra; Ujevic, Ante; Dogas, Zoran
2009-10-01
The profession of anaesthesiologist is demanding and potentially hazardous. Extended work shifts combined with intensive work load may adversely affect physicians' performance. The aim of this study was to explore the impact of a single in-hospital 24 h shift on the cognitive and psychomotor performance of anaesthesiologists in a surgical emergency department. Following ethical and institutional approval, 11 staff anaesthesiologists [six men, five women, age 48 (35-50), years of experience 17 (7-20), median (range)] successfully completed the study protocol. Four computer-generated psychological tests (CRD, Complex Reactionmeter Drenovac, Croatia) consisting of light signal position discrimination (CRD 311), simple visual orientation (CRD 21), simple arithmetic operations (CRD 11), and complex psychomotor coordination (CRD 411) were used to measure objective parameters of cognitive and psychomotor performance at four time points (D1 = 8:00 a.m., D2 = 3:00 p.m., D3 = 11:00 p.m.; and D4 = 7:00-8:00 a.m. next day) during the 24 h working day. The control testing on an ordinary working day was performed at two time points (C1 = 8:00 a.m., C2 = 3:00 p.m.). Three parameters were recorded: total test solving time (TTST), total variability, and total number of errors for all four tests. TTST was significantly impaired during the 24 h shift in all tests, and TTST was prolonged in CRD 21 test at different time points from 1.6 +/- 1.4 to 5.5 +/- 1.6 s compared with the control (F = 6.39, P = 0.001). The reaction times were prolonged from 1.3 +/- 1.8 to 5.4 +/- 1.2 s (F = 3.49, P = 0.009) in CRD 311, from 3.8 +/- 9.0 to 34.3 +/- 5.8 s (F = 5.05, P = 0.002) in CRD 11 TTST, and from 0.8 +/- 3.0 to 16.3 +/- 8.6 s (F = 2.67, P = 0.034) in CRD 411. Total variability was significantly altered during the 24 h shift only in CRD 411 (F = 2.63, P = 0.036). There was no difference in the total number of errors between the 24 h shift and the ordinary working day. Anaesthesiologists' 24 h working day in the emergency department altered cognitive and psychomotor function in comparison with ordinary working days. Speed, reliability and mental endurance (measured by TTST) were significantly impaired in all four tests. Stability and reaction time (measured by total variability) were only slightly impaired. Paradoxically, attention and alertness (measured by total number of errors) were not adversely affected. In conclusion, anaesthesiologists' psychomotor performance was impaired during the single 24 h shift.
NASA Technical Reports Server (NTRS)
Weaver, W. L.; Green, R. N.
1980-01-01
Geometric shape factors were computed and applied to satellite simulated irradiance measurements to estimate Earth emitted flux densities for global and zonal scales and for areas smaller than the detector field of view (FOV). Wide field of view flat plate detectors were emphasized, but spherical detectors were also studied. The radiation field was modeled after data from the Nimbus 2 and 3 satellites. At a satellite altitude of 600 km, zonal estimates were in error by 1.0 to 1.2 percent and global estimates were in error by less than 0.2 percent. Estimates with unrestricted field of view (UFOV) detectors were about the same for Lambertian and limb-darkening radiation models. The opposite was found for restricted field of view detectors. The UFOV detectors are found to be poor estimators of flux density from the total FOV and are shown to be much better estimators of flux density from a circle centered in the FOV with an area significantly smaller than that of the total FOV.
Single event upset susceptibilities of latchup immune CMOS process programmable gate arrays
NASA Astrophysics Data System (ADS)
Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.; Tsubota, T. K.
Single event upsets (SEU) and latchup susceptibilities of complementary metal oxide semiconductor programmable gate arrays (CMOS PPGA's) were measured at the Lawrence Berkeley Laboratory 88-in. cyclotron facility with Xe (603 MeV), Cu (290 MeV), and Ar (180 MeV) ion beams. The PPGA devices tested were those which may be used in space. Most of the SEU measurements were taken with a newly constructed tester called the Bus Access Storage and Comparison System (BASACS) operating via a Macintosh II computer. When BASACS finds that an output does not match a prerecorded pattern, the state of all outputs, the position in the test cycle, and other necessary information are transmitted and stored in the Macintosh. The upset rate was kept between 1 and 3 per second. After a sufficient number of errors are stored, the test is stopped and the total fluence of particles and total errors are recorded. The device power supply current was closely monitored to check for occurrence of latchup. Results of the tests are presented, indicating that some of the PPGA's are good candidates for selected space applications.
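The reduction from the recorded totals to a device upset cross-section is a standard calculation; the short sketch below illustrates it with assumed example numbers (the function and values are not from the paper).

```python
def seu_cross_section(total_errors, total_fluence_per_cm2, n_bits=None):
    """Device SEU cross-section (cm^2) = total upsets / total particle fluence.
    If the number of tested bits is supplied, the per-bit cross-section is
    also returned."""
    sigma_device = total_errors / total_fluence_per_cm2
    sigma_per_bit = sigma_device / n_bits if n_bits else None
    return sigma_device, sigma_per_bit

# Hypothetical example: 120 upsets recorded after a fluence of 1e7 ions/cm^2
sigma_dev, _ = seu_cross_section(120, 1.0e7)   # -> 1.2e-5 cm^2
```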
The Global Energy Balance of Titan
NASA Technical Reports Server (NTRS)
Li, Liming; Nixon, Conor A.; Achterberg, Richard K.; Smith, Mark A.; Gorius, Nicolas J. P.; Jiang, Xun; Conrath, Barney J.; Gierasch, Peter J.; Simon-Miller, Amy A.; Flasar, F. Michael
2011-01-01
We report the first measurement of the global emitted power of Titan. Long-term (2004-2010) observations conducted by the Composite Infrared Spectrometer (CIRS) onboard Cassini reveal that the total power emitted by Titan is (2.84 ± 0.01) × 10^8 W. Together with previous measurements of the global absorbed solar power of Titan, the CIRS measurements indicate that the global energy budget of Titan is in equilibrium within measurement error. The uncertainty in the absorbed solar energy places an upper limit on the energy imbalance of 5.3%.
Crock, J.G.; Severson, R.C.
1980-01-01
Attaining acceptable precision in extractable element determinations is more difficult than in total element determinations. In total element determinations, dissolution of the sample is qualitatively checked by the clarity of the solution and the absence of residues. These criteria cannot be used for extracts. Possibilities for error are introduced in virtually every step in soil extractions. Therefore, the use of reference materials whose homogeneity and element content are reasonably well known is essential for determination of extractable elements. In this report, estimates of homogeneity and element content are presented for four reference samples. Bulk samples of about 100 kilograms of each sample were ground to pass an 80-mesh sieve. The samples were homogenized and split using a Jones-type splitter. Fourteen splits of each reference sample were analyzed for total content of Ca, Co, Cu, Fe, K, Mg, Mn, Na, and Zn; DTPA-extractable Cd, Co, Cu, Fe, Mn, Ni, Pb, and Zn; exchangeable Ca, Mg, K, and Na; cation exchange capacity; water-saturation-extractable Ca, Mg, K, Na, Cl, and SO4; soil pH; and hot-water-extractable boron. Error measured between splits was small, indicating that the samples were homogenized adequately and that the laboratory procedure provided reproducible results.
Model-based color halftoning using direct binary search.
Agar, A Ufuk; Allebach, Jan P
2005-12-01
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous tone original color image and the color halftone image. We exploit the differences in how the human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement based color printer dot interaction model to prevent the artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in resulting halftones. We present the color halftones which demonstrate the efficacy of our method.
Absolute measurement of the extreme UV solar flux
NASA Technical Reports Server (NTRS)
Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.
1984-01-01
A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575 Å region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3 percent, which is primarily due to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 Å), normalized to unit solar distance, was found to be (5.71 ± 0.42) × 10^10 photons cm^-2 s^-1.
Hydrograph matching method for measuring model performance
NASA Astrophysics Data System (ADS)
Ewen, John
2011-09-01
Despite all the progress made over the years on developing automatic methods for analysing hydrographs and measuring the performance of rainfall-runoff models, automatic methods cannot yet match the power and flexibility of the human eye and brain. Very simple approaches are therefore being developed that mimic the way hydrologists inspect and interpret hydrographs, including the way that patterns are recognised, links are made by eye, and hydrological responses and errors are studied and remembered. In this paper, a dynamic programming algorithm originally designed for use in data mining is customised for use with hydrographs. It generates sets of "rays" that are analogous to the visual links made by the hydrologist's eye when linking features or times in one hydrograph to the corresponding features or times in another hydrograph. One outcome from this work is a new family of performance measures called "visual" performance measures. These can measure differences in amplitude and timing, including the timing errors between simulated and observed hydrographs in model calibration. To demonstrate this, two visual performance measures, one based on the Nash-Sutcliffe Efficiency and the other on the mean absolute error, are used in a total of 34 split-sample calibration-validation tests for two rainfall-runoff models applied to the Hodder catchment, northwest England. The customised algorithm, called the Hydrograph Matching Algorithm, is very simple to apply; it is given in a few lines of pseudocode.
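The Hydrograph Matching Algorithm itself is not reproduced here. As a hedged illustration of the general idea, the sketch below uses a generic dynamic-time-warping-style dynamic program to link each observed point to a simulated point, from which per-"ray" amplitude and timing errors could be read off; the function and variable names are illustrative only.

```python
import numpy as np

def match_hydrographs(obs, sim):
    """DTW-style dynamic-programming alignment of two hydrographs.
    Returns (i, j) index pairs ("rays") linking observed to simulated points;
    sim[j] - obs[i] gives an amplitude error and (j - i) * dt a timing error."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    n, m = obs.size, sim.size

    # Cumulative-cost table; cost of matching prefixes of the two series
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(obs[i - 1] - sim[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])

    # Backtrack from the end of both series to recover the alignment path
    rays, i, j = [], n, m
    while i > 0 and j > 0:
        rays.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return rays[::-1]
```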
Quality Control Methodology Of A Surface Wind Observational Database In North Eastern North America
NASA Astrophysics Data System (ADS)
Lucio-Eceiza, Etor E.; Fidel González-Rouco, J.; Navarro, Jorge; Conte, Jorge; Beltrami, Hugo
2016-04-01
This work summarizes the design and application of a Quality Control (QC) procedure for an observational surface wind database located in North Eastern North America. The database consists of 526 sites (486 land stations and 40 buoys) with varying resolutions of hourly, 3 hourly and 6 hourly data, compiled from three different source institutions with uneven measurement units and changing measuring procedures, instrumentation and heights. The records span from 1953 to 2010. The QC process is composed of different phases focused either on problems related to the providing source institutions or on measurement errors. The first phases deal with problems often related to data recording and management: (1) a compilation stage dealing with the detection of typographical errors, decoding problems, site displacements and the unification of institutional practices; (2) detection of erroneous data sequence duplications within a station or among different ones; (3) detection of errors related to physically unrealistic data measurements. The last phases are focused on instrumental errors: (4) problems related to low variability, placing particular emphasis on the detection of unrealistically low wind speed records with the help of regional references; (5) erroneous records related to high variability; (6) standardization of wind speed record biases due to changing measurement heights, detection of wind speed biases on weekly to monthly timescales, and homogenization of wind direction records. As a result, around 1.7% of wind speed records and 0.4% of wind direction records have been deleted, making a combined total of 1.9% of removed records. Additionally, around 15.9% of wind speed records and 2.4% of wind direction records have also been corrected.
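As a rough illustration of how two of the QC phases above might be automated, the sketch below flags physically unrealistic records (phase 3) and suspiciously constant, low-variability sequences (phase 4). The thresholds are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

def flag_unrealistic(speed_ms, direction_deg, max_speed=75.0):
    """Phase (3)-style check: physically unrealistic wind records.
    The 75 m/s speed ceiling is a hypothetical placeholder threshold."""
    bad_speed = (speed_ms < 0.0) | (speed_ms > max_speed)
    bad_dir = (direction_deg < 0.0) | (direction_deg > 360.0)
    return bad_speed | bad_dir

def flag_low_variability(speed_ms, window=24, min_std=0.01):
    """Phase (4)-style check: suspiciously constant wind speed sequences,
    flagged as windows whose standard deviation falls below a threshold."""
    flags = np.zeros(speed_ms.size, dtype=bool)
    for start in range(speed_ms.size - window + 1):
        if np.std(speed_ms[start:start + window]) < min_std:
            flags[start:start + window] = True
    return flags
```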
Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H
2009-09-01
Retinal nerve fiber (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.
Cöster, Maria C; Nilsdotter, Anna; Brudin, Lars; Bremander, Ann
2017-01-01
Background and purpose Patient-reported outcome measures (PROMs) are increasingly used to evaluate results in orthopedic surgery. To evaluate the responsiveness of a PROM, the minimally important change (MIC) should be established. MIC reflects the smallest measured change in score that is perceived as relevant by the patients. We assessed MIC for the Self-reported Foot and Ankle Score (SEFAS) used in Swedish national registries. Patients and methods Patients with forefoot disorders (n = 83) or hindfoot/ankle disorders (n = 80) completed the SEFAS before surgery and 6 months after surgery. At 6 months, a patient global assessment (PGA) scale was also completed as the external criterion. Measurement error was expressed as the standard error of a single determination. MIC was calculated by (1) the median change score in patients improved on the PGA scale, and (2) the best cutoff point (BCP) and area under the curve (AUC) from receiver operating characteristic (ROC) curve analysis. Results The change in mean summary score was the same, 9 (SD 9), in patients with forefoot disorders and in patients with hindfoot/ankle disorders. MIC for SEFAS in the total sample was 5 score points (IQR: 2–8) and the measurement error was 2.4. BCP was 5 and AUC was 0.8 (95% CI: 0.7–0.9). Interpretation As previously shown, SEFAS has good responsiveness. The score change in SEFAS 6 months after surgery should exceed 5 score points, in both forefoot patients and hindfoot/ankle patients, to be considered clinically relevant. PMID:28464751
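A hedged sketch of the ROC-based part of the MIC calculation is given below: the best cutoff point is taken as the threshold maximizing the Youden index, and the AUC is obtained by trapezoidal integration. This is a generic illustration; the study's exact BCP criterion and software are not specified here, and the argument names are assumptions.

```python
import numpy as np

def roc_bcp_auc(change_scores, improved):
    """ROC analysis of PROM change scores against a dichotomous anchor
    (improved = True on the PGA scale). Returns the best cutoff point
    (maximum Youden index) and the area under the ROC curve."""
    scores = np.asarray(change_scores, dtype=float)
    improved = np.asarray(improved, dtype=bool)

    thresholds = np.unique(scores)
    sens = np.array([(scores[improved] >= t).mean() for t in thresholds])
    spec = np.array([(scores[~improved] < t).mean() for t in thresholds])

    bcp = thresholds[np.argmax(sens + spec - 1.0)]   # Youden index J = se + sp - 1

    # AUC by trapezoidal integration, adding the (0, 0) corner of the ROC curve
    fpr = np.append(1.0 - spec, 0.0)
    tpr = np.append(sens, 0.0)
    order = np.argsort(fpr)
    auc = np.trapz(tpr[order], fpr[order])
    return bcp, auc
```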
NASA Astrophysics Data System (ADS)
Senten, C.; de Mazière, M.; Dils, B.; Hermans, C.; Kruglanski, M.; Neefs, E.; Scolas, F.; Vandaele, A. C.; Vanhaelewyn, G.; Vigouroux, C.; Carleer, M.; Coheur, P. F.; Fally, S.; Barret, B.; Baray, J. L.; Delmas, R.; Leveau, J.; Metzger, J. M.; Mahieu, E.; Boone, C.; Walker, K. A.; Bernath, P. F.; Strong, K.
2008-01-01
Ground-based high spectral resolution Fourier-transform infrared (FTIR) solar absorption spectroscopy is a powerful remote sensing technique to obtain information on the total column abundances and on the vertical distribution of various constituents in the atmosphere. This work presents results from two short-term FTIR measurement campaigns in 2002 and 2004, held at the (sub)tropical site Ile de La Réunion (21°S, 55°E). These campaigns represent the first FTIR observations carried out at this site. The results include total column amounts from the surface up to 100 km of ozone (O3), methane (CH4), nitrous oxide (N2O), carbon monoxide (CO), ethane (C2H6), hydrogen chloride (HCl), hydrogen fluoride (HF) and nitric acid (HNO3), as well as some vertical profile information for the first four mentioned trace gases. The data are characterised in terms of the vertical information content and associated error budget. In the 2004 time series, the seasonal increase of the CO concentration was observed by the end of October, along with a sudden rise that has been attributed to biomass burning events in southern Africa and Madagascar. This attribution was based on trajectory modeling. In the same period, other biomass burning gases such as C2H6 also show an enhancement in their total column amounts which is highly correlated with the increase of the CO total columns. The observed total column values for CO are consistent with correlative data from MOPITT (Measurements Of Pollution In The Troposphere). Comparisons between our ground-based FTIR observations and space-borne observations from ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) and HALOE (Halogen Occultation Experiment) confirm the feasibility of the FTIR measurements at Ile de La Réunion.
The Error in Total Error Reduction
Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.
2013-01-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
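To make the contrast concrete, the sketch below implements one-trial weight updates under the two assumptions for a binary cue vector. It is a minimal didactic illustration, not the specific models compared in the paper.

```python
import numpy as np

def ter_update(w, x, outcome, lr=0.1):
    """Total error reduction (Rescorla-Wagner style): all present cues share one
    error term, the outcome minus the summed prediction of the whole compound."""
    compound_error = outcome - np.dot(w, x)
    return w + lr * compound_error * x

def ler_update(w, x, outcome, lr=0.1):
    """Local error reduction: each present cue is updated by the outcome minus
    that cue's own prediction, ignoring the other cues on the trial."""
    return w + lr * (outcome - w) * x

# Two cues trained in compound (x = [1, 1]) with a reinforced outcome of 1
w = np.zeros(2)
for _ in range(50):
    w = ter_update(w, np.array([1.0, 1.0]), 1.0)
# Under TER the summed prediction converges to 1 (each weight near 0.5);
# under LER each weight would converge to 1 individually.
```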
Lizarraga, Joy S.; Ockerman, Darwin J.
2011-01-01
The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, Fort Worth District; the City of Corpus Christi; the Guadalupe-Blanco River Authority; the San Antonio River Authority; and the San Antonio Water System, configured, calibrated, and tested a watershed model for a study area consisting of about 5,490 mi2 of the Frio River watershed in south Texas. The purpose of the model is to contribute to the understanding of watershed processes and hydrologic conditions in the lower Frio River watershed. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge by using a numerical representation of physical characteristics of the landscape, and meteorological and streamflow data. Additional time-series inputs to the model include wastewater-treatment-plant discharges, surface-water withdrawals, and estimated groundwater inflow from Leona Springs. Model simulations of streamflow, ET, and groundwater recharge were done for various periods of record depending upon available measured data for input and comparison, starting as early as 1961. Because of the large size of the study area, the lower Frio River watershed was divided into 12 subwatersheds; separate Hydrological Simulation Program-FORTRAN models were developed for each subwatershed. Simulation of the overall study area involved running simulations in downstream order. Output from the model was summarized by subwatershed, point locations, reservoir reaches, and the Carrizo-Wilcox aquifer outcrop. Four long-term U.S. Geological Survey streamflow-gaging stations and two short-term streamflow-gaging stations were used for streamflow model calibration and testing with data from 1991-2008. Calibration was based on data from 2000-08, and testing was based on data from 1991-99. Choke Canyon Reservoir stage data from 1992-2008 and monthly evaporation estimates from 1999-2008 also were used for model calibration. Additionally, 2006-08 ET data from a U.S. Geological Survey meteorological station in Medina County were used for calibration. Streamflow and ET calibration were considered good or very good. For the 2000-08 calibration period, total simulated flow volume and the flow volume of the highest 10 percent of simulated daily flows were calibrated to within about 10 percent of measured volumes at six U.S. Geological Survey streamflow-gaging stations. The flow volume of the lowest 50 percent of daily flows was not simulated as accurately but represented a small percent of the total flow volume. The model-fit efficiency for the weekly mean streamflow during the calibration periods ranged from 0.60 to 0.91, and the root mean square error ranged from 16 to 271 percent of the mean flow rate. The simulated total flow volumes during the testing periods at the long-term gaging stations exceeded the measured total flow volumes by approximately 22 to 50 percent at three stations and were within 7 percent of the measured total flow volumes at one station. For the longer 1961-2008 simulation period at the long-term stations, simulated total flow volumes were within about 3 to 18 percent of measured total flow volumes. The calibrations made by using Choke Canyon reservoir volume for 1992-2008, reservoir evaporation for 1999-2008, and ET in Medina County for 2006-08, are considered very good. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to better quantify certain model inputs, and measurement errors. 
Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error. A sensitivity analysis was performed for the Upper San Miguel subwatershed model to show the effect of changes to model parameters on the estimated mean recharge, ET, and surface runoff from that part of the Carrizo-Wilcox aquifer outcrop. Simulated recharge was most sensitive to the changes in the lower-zone ET (LZ
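Assuming the "model-fit efficiency" quoted above is the Nash-Sutcliffe efficiency (an assumption), both calibration statistics can be computed from paired simulated and observed series as in the following sketch.

```python
import numpy as np

def fit_statistics(simulated, observed):
    """Nash-Sutcliffe efficiency and root mean square error expressed as a
    percentage of the mean observed flow."""
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    residuals = sim - obs
    nse = 1.0 - np.sum(residuals ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse_pct = 100.0 * np.sqrt(np.mean(residuals ** 2)) / obs.mean()
    return nse, rmse_pct
```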
Ultrasonic Doppler measurement of renal artery blood flow
NASA Technical Reports Server (NTRS)
Freund, W. R.; Meindl, J. D.
1975-01-01
An extensive evaluation of the practical and theoretical limitations encountered in the use of totally implantable CW Doppler flowmeters is provided. Theoretical analyses, computer models, in-vitro and in-vivo calibration studies describe the sources and magnitudes of potential errors in the measurement of blood flow through the renal artery, as well as larger vessels in the circulatory system. The evaluation of new flowmeter/transducer systems and their use in physiological investigations is reported.
NASA Astrophysics Data System (ADS)
Carter, W. E.; Robertson, D. S.; Nothnagel, A.; Nicolson, G. D.; Schuh, H.
1988-12-01
High-accuracy geodetic very long baseline interferometry measurements between the African, Eurasian, and North American plates have been analyzed to determine the terrestrial coordinates of the Hartebeesthoek observatory to better than 10 cm, to determine the celestial coordinates of eight Southern Hemisphere radio sources with milliarcsecond (mas) accuracy, and to derive quasi-independent polar motion, UT1, and nutation time series. Comparison of the Earth orientation time series with ongoing International Radio Interferometric Surveying project values shows agreement at about the 1 mas level in polar motion and nutation and 0.1 ms in UT1. Given the independence of the observing sessions and the unlikeliness of common systematic error sources, this level of agreement serves to bound the total errors in both measurement series.
Monitoring gait in multiple sclerosis with novel wearable motion sensors
McGinnis, Ryan S.; Seagers, Kirsten; Motl, Robert W.; Sheth, Nirav; Wright, John A.; Ghaffari, Roozbeh; Sosnoff, Jacob J.
2017-01-01
Background Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing to contemporary research-grade measurement devices. Methods A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) was assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Results Mean accuracy±precision for the BioStampRC was 2±2 steps error for step number, 6±9ms error for stride time and 6±7ms error for step time (0.6–2.6% relative error). Swing time had the least accuracy±precision (25±19ms error, 5±4% relative error) among the parameters. GT3X had the least accuracy±precision (8±14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p<0.01). Conclusion BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic. PMID:28178288
Verification of calculated skin doses in postmastectomy helical tomotherapy.
Ito, Shima; Parker, Brent C; Levine, Renee; Sanders, Mary Ella; Fontenot, Jonas; Gibbons, John; Hogstrom, Kenneth
2011-10-01
To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi·Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. The mean difference and standard error of the mean difference between measurement and calculation for the scar measurements was -1.8% ± 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% ± 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% ± 0.2% (standard deviation, 4.5%: range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. The mean of the measured TLD doses agreed with TomoTherapy calculated TLD doses within our clinical criterion of 5%. Copyright © 2011 Elsevier Inc. All rights reserved.
Instrumental variables vs. grouping approach for reducing bias due to measurement error.
Batistatou, Evridiki; McNamee, Roseanne
2008-01-01
Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the "group mean OLS method," in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the "group mean OLS" estimator is equal to an IV estimator with the group mean used as IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator, which is a simple function of group size, reliability and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (the group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the "group mean" and "CS" strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The "group mean ranking method" does not offer much improvement over the "group mean method." Compared with the "CS" method, the "EVROS" method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology, with or without replicate measurements. Our finding may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.
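The equality of the two point estimators noted above can be checked numerically with a few lines. The sketch below (illustrative names, population covariances) regresses the response on the expanded group means and, separately, forms the instrumental-variable slope with the group mean as instrument.

```python
import numpy as np

def group_mean_ols_slope(y, x_measured, groups):
    """OLS slope of y on the group means of the error-prone exposure."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x_measured, dtype=float)
    groups = np.asarray(groups)
    x_bar = np.array([x[groups == g].mean() for g in groups])  # expanded group means
    return np.cov(x_bar, y, bias=True)[0, 1] / np.var(x_bar)

def iv_slope(y, x_measured, instrument):
    """Simple instrumental-variable slope: cov(z, y) / cov(z, x)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x_measured, dtype=float)
    z = np.asarray(instrument, dtype=float)
    return np.cov(z, y, bias=True)[0, 1] / np.cov(z, x, bias=True)[0, 1]
```

Passing the expanded group means themselves as the instrument, the two functions return the same slope, consistent with the equivalence stated above; as noted there, only the variance estimates differ.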
Height and Biomass of Mangroves in Africa from ICEsat/GLAS and SRTM
NASA Technical Reports Server (NTRS)
Fatoyinbo, Temilola E.; Simard, Marc
2012-01-01
The accurate quantification of forest 3-D structure is of great importance for studies of the global carbon cycle and biodiversity. These studies are especially relevant in Africa, where deforestation rates are high and the lack of background data is great. Mangrove forests are ecologically significant and it is important to measure mangrove canopy heights and biomass. The objectives of this study are to estimate: (1) the total area, (2) canopy height distributions and (3) aboveground biomass of mangrove forests in Africa. To derive mangrove 3-D structure and biomass maps, we used a combination of mangrove maps derived from Landsat ETM+, LiDAR canopy height estimates from ICEsat/GLAS (Ice, Cloud, and land Elevation Satellite/Geoscience Laser Altimeter System) and elevation data from SRTM (Shuttle Radar Topography Mission) for the African continent. More specifically, we extracted mangrove forest areas on the SRTM DEM using Landsat-based landcover maps. The LiDAR (Light Detection and Ranging) measurements from the large-footprint GLAS sensor were used to derive local estimates of canopy height and calibrate the Interferometric Synthetic Aperture Radar (InSAR) data from SRTM. We then applied allometric equations relating canopy height to biomass in order to estimate aboveground biomass (AGB) from the canopy height product. The total mangrove area of Africa was estimated to be 25,960 square kilometers with 83% accuracy. The largest mangrove area and greatest total biomass were found in Nigeria, covering 8,573 km² with 132 × 10^6 Mg AGB. Canopy height across Africa was estimated with an overall root mean square error of 3.55 m. This error also includes the impact of using sensors with different resolutions and geolocation errors, which make comparisons between measurements sensitive to canopy heterogeneities. This study provides the first systematic estimates of mangrove area, height and biomass in Africa. Our results showed that the combination of ICEsat/GLAS and SRTM data is well suited for vegetation 3-D mapping on a continental scale.
Families as Partners in Hospital Error and Adverse Event Surveillance
Khan, Alisa; Coffey, Maitreya; Litterer, Katherine P.; Baird, Jennifer D.; Furtak, Stephannie L.; Garcia, Briana M.; Ashland, Michele A.; Calaman, Sharon; Kuzma, Nicholas C.; O’Toole, Jennifer K.; Patel, Aarti; Rosenbluth, Glenn; Destino, Lauren A.; Everhart, Jennifer L.; Good, Brian P.; Hepps, Jennifer H.; Dalal, Anuj K.; Lipsitz, Stuart R.; Yoon, Catherine S.; Zigmont, Katherine R.; Srivastava, Rajendu; Starmer, Amy J.; Sectish, Theodore C.; Spector, Nancy D.; West, Daniel C.; Landrigan, Christopher P.
2017-01-01
IMPORTANCE Medical errors and adverse events (AEs) are common among hospitalized children. While clinician reports are the foundation of operational hospital safety surveillance and a key component of multifaceted research surveillance, patient and family reports are not routinely gathered. We hypothesized that a novel family-reporting mechanism would improve incident detection. OBJECTIVE To compare error and AE rates (1) gathered systematically with vs without family reporting, (2) reported by families vs clinicians, and (3) reported by families vs hospital incident reports. DESIGN, SETTING, AND PARTICIPANTS We conducted a prospective cohort study including the parents/caregivers of 989 hospitalized patients 17 years and younger (total 3902 patient-days) and their clinicians from December 2014 to July 2015 in 4 US pediatric centers. Clinician abstractors identified potential errors and AEs by reviewing medical records, hospital incident reports, and clinician reports as well as weekly and discharge Family Safety Interviews (FSIs). Two physicians reviewed and independently categorized all incidents, rating severity and preventability (agreement, 68%–90%; κ, 0.50–0.68). Discordant categorizations were reconciled. Rates were generated using Poisson regression estimated via generalized estimating equations to account for repeated measures on the same patient. MAIN OUTCOMES AND MEASURES Error and AE rates. RESULTS Overall, 746 parents/caregivers consented for the study. Of these, 717 completed FSIs. Their median (interquartile range) age was 32.5 (26–40) years; 380 (53.0%) were nonwhite, 566 (78.9%) were female, 603 (84.1%) were English speaking, and 380 (53.0%) had attended college. Of 717 parents/caregivers completing FSIs, 185 (25.8%) reported a total of 255 incidents, which were classified as 132 safety concerns (51.8%), 102 nonsafety-related quality concerns (40.0%), and 21 other concerns (8.2%). These included 22 preventable AEs (8.6%), 17 nonharmful medical errors (6.7%), and 11 nonpreventable AEs (4.3%) on the study unit. In total, 179 errors and 113 AEs were identified from all sources. Family reports included 8 otherwise unidentified AEs, including 7 preventable AEs. Error rates with family reporting (45.9 per 1000 patient-days) were 1.2-fold (95%CI, 1.1–1.2) higher than rates without family reporting (39.7 per 1000 patient-days). Adverse event rates with family reporting (28.7 per 1000 patient-days) were 1.1-fold (95%CI, 1.0–1.2; P=.006) higher than rates without (26.1 per 1000 patient-days). Families and clinicians reported similar rates of errors (10.0 vs 12.8 per 1000 patient-days; relative rate, 0.8; 95%CI, .5–1.2) and AEs (8.5 vs 6.2 per 1000 patient-days; relative rate, 1.4; 95%CI, 0.8–2.2). Family-reported error rates were 5.0-fold (95%CI, 1.9–13.0) higher and AE rates 2.9-fold (95% CI, 1.2–6.7) higher than hospital incident report rates. CONCLUSIONS AND RELEVANCE Families provide unique information about hospital safety and should be included in hospital safety surveillance in order to facilitate better design and assessment of interventions to improve safety. PMID:28241211
Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.
Yamamoto, Loren; Kanemori, Joan
2010-06-01
Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for the error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, noting that reading/interpreting certain drug labels were more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
A novel measure and significance testing in data analysis of cell image segmentation.
Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L
2017-03-14
Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing the CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient is not described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in the supervised evaluation. The TER statistically aggregates all misclassification error rates (MER) by taking cell sizes as weights. The MERs are for segmenting each single cell in the population. The TER is fully supported by the pairwise comparisons of MERs using 106 manually segmented ground-truth cells with different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of TER are computed based on the SE of MER that is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms. The SEs of TERs and their correlation coefficient can be employed to conduct the hypothesis testing, while the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. A novel measure TER of CIS is proposed. The TER's SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting the significance testing.
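A simplified sketch of the size-weighted aggregation is shown below. Note that the paper derives the SE of TER from the bootstrap SEs of the individual MERs; for brevity the sketch instead bootstraps over cells, which is only an illustrative approximation, and the function names are assumptions.

```python
import numpy as np

def total_error_rate(mers, cell_sizes):
    """Aggregate per-cell misclassification error rates (MER) into a TER,
    weighting each cell by its size."""
    mers = np.asarray(mers, dtype=float)
    sizes = np.asarray(cell_sizes, dtype=float)
    return float(np.sum(sizes * mers) / np.sum(sizes))

def bootstrap_se_of_ter(mers, cell_sizes, n_boot=2000, seed=0):
    """Bootstrap standard error of the TER obtained by resampling cells."""
    rng = np.random.default_rng(seed)
    mers = np.asarray(mers, dtype=float)
    sizes = np.asarray(cell_sizes, dtype=float)
    n = mers.size
    replicates = [
        total_error_rate(mers[idx], sizes[idx])
        for idx in (rng.integers(0, n, size=n) for _ in range(n_boot))
    ]
    return float(np.std(replicates, ddof=1))
```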
NASA Astrophysics Data System (ADS)
Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh; Song, Yang; Karion, Anna; Oda, Tomohiro; Patarasuk, Risa; Razlivanov, Igor; Sarmiento, Daniel; Shepson, Paul; Sweeney, Colm; Turnbull, Jocelyn; Wu, Kai
2016-05-01
Based on a uniquely dense network of surface towers measuring continuously the atmospheric concentrations of greenhouse gases (GHGs), we developed the first comprehensive monitoring systems of CO2 emissions at high resolution over the city of Indianapolis. The urban inversion evaluated over the 2012-2013 dormant season showed a statistically significant increase of about 20% (from 4.5 to 5.7 MtC ± 0.23 MtC) compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product. Spatial structures in prior emission errors, mostly undetermined, appeared to affect the spatial pattern in the inverse solution and the total carbon budget over the entire area by up to 15%, while the inverse solution remains fairly insensitive to the CO2 boundary inflow and to the different prior emissions (i.e., ODIAC). Preceding the surface emission optimization, we improved the atmospheric simulations using a meteorological data assimilation system also informing our Bayesian inversion system through updated observations error variances. Finally, we estimated the uncertainties associated with undetermined parameters using an ensemble of inversions. The total CO2 emissions based on the ensemble mean and quartiles (5.26-5.91 MtC) were statistically different compared to the prior total emissions (4.1 to 4.5 MtC). Considering the relatively small sensitivity to the different parameters, we conclude that atmospheric inversions are potentially able to constrain the carbon budget of the city, assuming sufficient data to measure the inflow of GHG over the city, but additional information on prior emission error structures are required to determine the spatial structures of urban emissions at high resolution.
NASA Astrophysics Data System (ADS)
Pagano, T. J.; Worden, J. R.
2016-12-01
Methane is the second most powerful greenhouse gas with a highly positive radiative forcing of 0.48 W/m2 (IPCC 2013). Global concentrations of methane have been steadily increasing since 2007 (Bruhwiler 2014), raising concerns about methane's impact on the future global climate. For about the last decade, the Tropospheric Emission Spectrometer (TES) on the Earth Observing System (EOS) Aura spacecraft has been detecting several trace gas species in the troposphere including methane. The goal of this study is to compare TES methane products to that of the Atmospheric Infrared Sounder (AIRS) on the EOS Aqua spacecraft so that scientific investigations may be transferred from TES to AIRS. The two instruments fly in the afternoon constellations (A-Train), providing numerous coincident measurements for comparison. In addition, they also have a similar spectral range, (3.3 to 15.4 µm) for TES (Beer, 2006) and (3.7 to 15.4 µm) for AIRS (Chahine, 2006), making both satellites sensitive to the mid and upper troposphere. This makes them ideal candidates to compare methane data products. In a previous study, total column methane was mapped and global zonal averages were compared. It was found that bias of the total column measurements between the two sounders was about constant over tropical and subtropical regions. However, because AIRS spectral resolution is lower than that of the TES, it is important to analyze the difference in vertical sensitivity. In this study, we will construct vertical profiles of methane concentration and compare them statistically through RMS difference and bias to better understand these differences. In addition, we will compare the error profile and total column errors of the TES and AIRS methane from the data to better understand error characteristics of the products.
Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.
Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep
2014-01-01
Preanalytical errors, arising anywhere in the process from the initial test request to the admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples and their rates in certain test groups in our laboratory. This preliminary study was designed around the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient volume of specimen and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-volume-of-specimen error rate of 1.38%. Rejection rates due to hemolysis, clotted specimens and insufficient sample volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood drawing errors, particularly insufficient specimen volume, in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples.
NASA Technical Reports Server (NTRS)
Piersol, Allan G.
1991-01-01
Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 s for the maximum overall level and T_oi = 4.88 f_i^(-0.2) s for the maximum 1/3-octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 s for the maximum overall level and T_oi = 7.10 f_i^(-0.2) s for the maximum 1/3-octave band levels inside the Space Shuttle PLB, where f_i is the 1/3-octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
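The band-dependent relation quoted above translates directly into code; the sketch below evaluates T_oi = C · f_i^(-0.2) for a few standard 1/3-octave band centers, with C = 4.88 for the Titan IV PLF or 7.10 for the Shuttle PLB. The band list and function name are illustrative assumptions.

```python
def optimum_averaging_times(center_freqs_hz, coefficient=4.88, exponent=-0.2):
    """Optimum linear averaging time per 1/3-octave band, T_oi = C * f_i**exponent.
    coefficient=4.88 corresponds to the Titan IV PLF; use 7.10 for the Shuttle PLB."""
    return {f: coefficient * f ** exponent for f in center_freqs_hz}

# Example: a few standard 1/3-octave band center frequencies (Hz)
bands = [31.5, 63.0, 125.0, 250.0, 500.0, 1000.0, 2000.0]
titan_times = optimum_averaging_times(bands)             # seconds per band
shuttle_times = optimum_averaging_times(bands, 7.10)
```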
Improved uncertainty quantification in nondestructive assay for nonproliferation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, Tom; Croft, Stephen; Jarman, Ken
2016-12-01
This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) Because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3–10); therefore, asymptotic approximations involving quantities needed for UQ such as means and variances are often not sufficiently accurate; (2) Common practice overlooks that calibration implies a partitioning of total error into random and systematic error; and (3) In many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data that is generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components, and is more robust than current top-down methods to the underlying measurement error distributions.
Improved Correction System for Vibration Sensitive Inertial Angle of Attack Measurement Devices
NASA Technical Reports Server (NTRS)
Crawford, Bradley L.; Finley, Tom D.
2000-01-01
Inertial angle of attack (AoA) devices currently in use at NASA Langley Research Center (LaRC) are subject to inaccuracies due to centrifugal accelerations caused by model dynamics, also known as sting whip. Recent literature suggests that these errors can be as high as 0.25 deg. With the current AoA accuracy target at LaRC being 0.01 deg., there is a dire need for improvement. With other errors in the inertial system (temperature, rectification, resolution, etc.) having been reduced to acceptable levels, a system is currently being developed at LaRC to measure and correct for the sting-whip-induced errors. By using miniaturized piezoelectric accelerometers and magnetohydrodynamic rate sensors, not only can the total centrifugal acceleration be measured, but yaw and pitch dynamics in the tunnel can also be characterized. These corrections can be used to determine a tunnel's past performance and can also indicate where efforts need to be concentrated to reduce these dynamics. Included in this paper are data on individual sensors, laboratory testing techniques, package evaluation, and wind tunnel test results on a High Speed Research (HSR) model in the Langley 16-Foot Transonic Wind Tunnel.
TH-AB-202-04: Auto-Adaptive Margin Generation for MLC-Tracked Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glitzner, M; Lagendijk, J; Raaymakers, B
Purpose: To develop an auto-adaptive margin generator for MLC tracking. The generator is able to estimate errors arising in image guided radiotherapy, particularly on an MR-Linac, which depend on the latencies of machine and image processing, as well as on patient motion characteristics. From the estimated error distribution, a segment margin is generated, able to compensate errors up to a user-defined confidence. Method: In every tracking control cycle (TCC, 40 ms), the desired aperture D(t) is compared to the actual aperture A(t), a delayed and imperfect representation of D(t). Thus an error e(t) = A(t) - D(t) is measured every TCC. Applying kernel density estimation (KDE), the cumulative distribution function (CDF) of e(t) is estimated. From CDF confidence limits, upper and lower error limits are extracted for motion axes along and perpendicular to the leaf-travel direction and applied as margins. To test the dosimetric impact, two representative motion traces were extracted from fast liver MRI (10 Hz). The traces were applied onto a 4D motion platform and continuously tracked by an Elekta Agility 160 MLC using an artificially imposed tracking delay. Gafchromic film was used to detect dose exposition for the static, tracked, and error-compensated tracking cases. The margin generator was parameterized to cover 90% of all tracking errors. Dosimetric impact was rated by calculating the ratio of underexposed points (>5% underdosage) to the total number of points inside the FWHM of the static exposure. Results: Without imposing adaptive margins, tracking experiments showed a ratio of underexposed points of 17.5% and 14.3% for two motion cases with imaging delays of 200 ms and 300 ms, respectively. Activating the margin generator yielded total suppression (<1%) of underdosed points. Conclusion: We showed that auto-adaptive error compensation using machine error statistics is possible for MLC tracking. The error compensation margins are calculated on-line, without the need to assume motion or machine models. Further strategies to reduce consequential overdosages are currently under investigation. This work was funded by the SoRTS consortium, which includes the industry partners Elekta, Philips and Technolution.
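A minimal sketch of the margin-generation step is given below, using a Gaussian kernel density estimate of the measured aperture errors and reading asymmetric limits off the resulting CDF. The symmetric tail split, grid padding and names are assumptions, not details taken from the abstract.

```python
import numpy as np
from scipy.stats import gaussian_kde

def error_margins(errors_mm, confidence=0.90, grid_points=2000, pad_mm=5.0):
    """Estimate the CDF of tracking errors e(t) = A(t) - D(t) with a Gaussian
    KDE and return lower/upper limits covering `confidence` of the errors,
    to be applied as (possibly asymmetric) segment margins."""
    errors = np.asarray(errors_mm, dtype=float)
    kde = gaussian_kde(errors)
    grid = np.linspace(errors.min() - pad_mm, errors.max() + pad_mm, grid_points)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]
    tail = (1.0 - confidence) / 2.0
    lower = grid[np.searchsorted(cdf, tail)]
    upper = grid[np.searchsorted(cdf, 1.0 - tail)]
    return lower, upper
```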
Haba, Tomonobu; Kondo, Shimpei; Hayashi, Daiki; Koyama, Shuji
2013-07-01
Detective quantum efficiency (DQE) is widely used as a comprehensive metric for X-ray image evaluation in digital X-ray units. The incident photon fluence per air kerma (SNR²(in)) is necessary for calculating the DQE. The International Electrotechnical Commission (IEC) reports the SNR²(in) under conditions of standard radiation quality, but this SNR²(in) might not be accurate as calculated from the X-ray spectra emitted by an actual X-ray tube. In this study, we evaluated the error range of the SNR²(in) presented by the IEC 62220-1 report. We measured the X-ray spectra emitted by an X-ray tube under conditions of the standard radiation quality RQA5. The spectral photon fluence at each energy bin was multiplied by the photon energy and the mass energy absorption coefficient of air; then the air kerma spectrum was derived. The air kerma spectrum was integrated over the whole photon energy range to yield the total air kerma. The total photon number was then divided by the total air kerma. This value is the SNR²(in). These calculations were performed for various measurement parameters and X-ray units. The percent difference between the calculated value and the standard value for RQA5 was up to 2.9%. The error range was not negligibly small. Therefore, it is better to use the new SNR²(in) of 30694 (1/(mm² μGy)) than the current SNR²(in) of 30174 (1/(mm² μGy)).
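The spectrum-to-SNR²(in) reduction described above amounts to a weighted sum over energy bins. The sketch below assumes the spectrum is given per bin together with tabulated mass energy-absorption coefficients of air; all argument names and values are placeholders, not the study's data.

```python
import numpy as np

def snr2_in(energy_kev, photons_per_mm2, mu_en_over_rho_air_cm2_g):
    """SNR^2(in) = total photon fluence / total air kerma, in 1/(mm^2 uGy).
    energy_kev:                energy of each spectrum bin (keV)
    photons_per_mm2:           photon fluence in each bin (1/mm^2)
    mu_en_over_rho_air_cm2_g:  mass energy-absorption coefficient of air (cm^2/g)"""
    e_joule = np.asarray(energy_kev, dtype=float) * 1.602e-16           # keV -> J
    fluence_m2 = np.asarray(photons_per_mm2, dtype=float) * 1.0e6       # 1/mm^2 -> 1/m^2
    mu_m2_kg = np.asarray(mu_en_over_rho_air_cm2_g, dtype=float) * 0.1  # cm^2/g -> m^2/kg

    air_kerma_gy = np.sum(fluence_m2 * e_joule * mu_m2_kg)              # Gy = J/kg
    air_kerma_ugy = air_kerma_gy * 1.0e6
    return float(np.sum(photons_per_mm2) / air_kerma_ugy)
```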
Minimizing Artifacts and Biases in Chamber-Based Measurements of Soil Respiration
NASA Astrophysics Data System (ADS)
Davidson, E. A.; Savage, K.
2001-05-01
Soil respiration is one of the largest and most important fluxes of carbon in terrestrial ecosystems. The objectives of this paper are to review concerns about uncertainties of chamber-based measurements of CO2 emissions from soils, to evaluate the direction and magnitude of these potential errors, and to explain procedures that minimize these errors and biases. Disturbance of diffusion gradients cause underestimate of fluxes by less than 15% in most cases, and can be partially corrected for with curve fitting and/or can be minimized by using brief measurement periods. Under-pressurization or over-pressurization of the chamber caused by flow restrictions in air circulating designs can cause large errors, but can also be avoided with properly sized chamber vents and unrestricted flows. Somewhat larger pressure differentials are observed under windy conditions, and the accuracy of measurements made under such conditions needs more research. Spatial and temporal heterogeneity can be addressed with appropriate chamber sizes and numbers and frequency of sampling. For example, means of 8 randomly chosen flux measurements from a population of 36 measurements made with 300 cm2 chambers in tropical forests and pastures were within 25% of the full population mean 98% of the time and were within 10% of the full population mean 70% of the time. Comparisons of chamber-based measurements with tower-based measurements of total ecosystem respiration require analysis of the scale of variation within the purported tower footprint. In a forest at Howland, Maine, the differences in soil respiration rates among very poorly drained and well drained soils were large, but they mostly were fortuitously cancelled when evaluated for purported tower footprints of 600-2100 m length. While all of these potential sources of measurement error and sampling biases must be carefully considered, properly designed and deployed chambers provide a reliable means of accurately measuring soil respiration in terrestrial ecosystems.
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Funds and State Matching and Maintenance-of-Effort (MOE Funds): (1) Percentage of cases with an error... cases in the sample with an error compared to the total number of cases in the sample; (2) Percentage of cases with an improper payment (both over and under payments), expressed as the total number of cases in...
NASA Technical Reports Server (NTRS)
Koblinsky, C. J.; Ryan, J.; Braatz, L.; Klosko, S. M.
1993-01-01
The overall accuracy of the U.S. Navy Geosat altimeter wet atmospheric range delay caused by refraction through the atmosphere is directly assessed by comparing the estimates made from the DMSP Special Sensor Microwave/Imager and the U.S. Navy Fleet Numerical Ocean Center forecast model for Geosat with measurements of total zenith columnar water vapor content from four VLBI sites. The assessment is made by comparing time series of range delay from various methods at each location. To determine the importance of diurnal variation in water vapor content in noncoincident estimates, the VLBI measurements were made at 15-min intervals over a few days. The VLBI measurements showed strong diurnal variations in columnar water vapor at several sites, causing errors of the order 3 cm rms in any noncoincident measurement of the wet troposphere range delay. These errors have an effect on studies of annual and interannual changes in sea level with Geosat data.
Breathing gas perfluorocarbon measurements using an absorber filled with zeolites.
Proquitté, H; Rüdiger, M; Wauer, R R; Schmalisch, G
2003-11-01
Perfluorocarbon (PFC) has been widely used in the treatment of respiratory diseases; however, PFC content of the breathing gases remains unknown. Therefore, we developed an absorber using PFC selective zeolites for PFC measurement in gases and investigated its accuracy. To generate a breathing gas with different PFC contents a heated flask was rinsed with a constant air flow of 4 litre x min(-1) and 1, 5, 10, and 20 ml of PFC were infused over 20 min using an infusor. The absorber was placed on an electronic scale and the total PFC volume was calculated from the weight gain. Steady-state increase in weight was achieved 3.5 min after stopping the infusion. The calculated PFC volume was slightly underestimated but the measuring error did not exceed -1% for PFC less than 1 ml. The measurement error decreased with increasing PFC volume. This zeolite absorber is an accurate method to quantitatively determine PFC in breathing gases and can be used as a reference method to validate other PFC sensors.
NASA Technical Reports Server (NTRS)
Stoll, F.; Tremback, J. W.; Arnaiz, H. H.
1979-01-01
A study was performed to determine the effects of the number and position of total pressure probes on the calculation of five compressor face distortion descriptors. This study used three sets of 320 steady state total pressure measurements that were obtained with a special rotating rake apparatus in wind tunnel tests of a mixed-compression inlet. The inlet was a one third scale model of the inlet on a YF-12 airplane, and it was tested in the wind tunnel at representative flight conditions at Mach numbers above 2.0. The study shows that large errors resulted in the calculation of the distortion descriptors even with a number of probes that were considered adequate in the past. There were errors as large as 30 and -50 percent in several distortion descriptors for a configuration consisting of eight rakes with five equal-area-weighted probes on each rake.
Measurement time and statistics for a noise thermometer with a synthetic-noise reference
NASA Astrophysics Data System (ADS)
White, D. R.; Benz, S. P.; Labenski, J. R.; Nam, S. W.; Qu, J. F.; Rogalla, H.; Tew, W. L.
2008-08-01
This paper describes methods for reducing the statistical uncertainty in measurements made by noise thermometers using digital cross-correlators and, in particular, for thermometers using pseudo-random noise for the reference signal. First, a discrete-frequency expression for the correlation bandwidth for conventional noise thermometers is derived. It is shown how an alternative frequency-domain computation can be used to eliminate the spectral response of the correlator and increase the correlation bandwidth. The corresponding expressions for the uncertainty in the measurement of pseudo-random noise in the presence of uncorrelated thermal noise are then derived. The measurement uncertainty in this case is less than that for true thermal-noise measurements. For pseudo-random sources generating a frequency comb, an additional small reduction in uncertainty is possible, but at the cost of increasing the thermometer's sensitivity to non-linearity errors. A procedure is described for allocating integration times to further reduce the total uncertainty in temperature measurements. Finally, an important systematic error arising from the calculation of ratios of statistical variables is described.
Prevalence and cost of hospital medical errors in the general and elderly United States populations.
Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S
2013-12-01
The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.
Total error shift patterns for daily CT on rails image-guided radiotherapy to the prostate bed
2011-01-01
Background To evaluate the daily total error shift patterns on post-prostatectomy patients undergoing image guided radiotherapy (IGRT) with a diagnostic quality computed tomography (CT) on rails system. Methods A total of 17 consecutive post-prostatectomy patients receiving adjuvant or salvage IMRT using CT-on-rails IGRT were analyzed. The prostate bed's daily total error shifts were evaluated for a total of 661 CT scans. Results In the right-left, cranial-caudal, and posterior-anterior directions, 11.5%, 9.2%, and 6.5% of the 661 scans required no position adjustments; 75.3%, 66.1%, and 56.8% required a shift of 1 - 5 mm; 11.5%, 20.9%, and 31.2% required a shift of 6 - 10 mm; and 1.7%, 3.8%, and 5.5% required a shift of more than 10 mm, respectively. There was evidence of correlation between the x and y, x and z, and y and z axes in 3, 3, and 3 of 17 patients, respectively. Univariate (ANOVA) analysis showed that the total error pattern was random in the x, y, and z axis for 10, 5, and 2 of 17 patients, respectively, and systematic for the rest. Multivariate (MANOVA) analysis showed that the (x,y), (x,z), (y,z), and (x, y, z) total error pattern was random in 5, 1, 1, and 1 of 17 patients, respectively, and systematic for the rest. Conclusions The overall daily total error shift pattern for these 17 patients, who were simulated with an empty bladder and treated with CT-on-rails IGRT, was predominantly systematic. Despite this, the temporal vector trends showed complex behaviors and unpredictable changes in magnitude and direction. These findings highlight the importance of using daily IGRT in post-prostatectomy patients. PMID:22024279
Effects of Differential Item Functioning on Examinees' Test Performance and Reliability of Test
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2017-01-01
Simulations were conducted to examine the effect of differential item functioning (DIF) on measurement consequences such as total scores, item response theory (IRT) ability estimates, and test reliability in terms of the ratio of true-score variance to observed-score variance and the standard error of estimation for the IRT ability parameter. The…
Multilevel Multidimensional Item Response Model with a Multilevel Latent Covariate
ERIC Educational Resources Information Center
Cho, Sun-Joo; Bottge, Brian A.
2015-01-01
In a pretest-posttest cluster-randomized trial, one of the methods commonly used to detect an intervention effect involves controlling pre-test scores and other related covariates while estimating an intervention effect at post-test. In many applications in education, the total post-test and pre-test scores that ignores measurement error in the…
A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.
Hu, Y J; Lin, D Y; Sun, W; Zeng, D
2014-10-01
Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.
2017-01-01
Anthropometric data collected in clinics and surveys are often inaccurate and unreliable due to measurement error. The Body Imaging for Nutritional Assessment Study (BINA) evaluated the ability of 3D imaging to correctly measure stature, head circumference (HC) and arm circumference (MUAC) for children under five years of age. This paper describes the protocol for and the quality of manual anthropometric measurements in BINA, a study conducted in 2016–17 in Atlanta, USA. Quality was evaluated by examining digit preference, biological plausibility of z-scores, z-score standard deviations, and reliability. We calculated z-scores and analyzed plausibility based on the 2006 WHO Child Growth Standards (CGS). For reliability, we calculated intra- and inter-observer Technical Error of Measurement (TEM) and Intraclass Correlation Coefficient (ICC). We found low digit preference; 99.6% of z-scores were biologically plausible, with z-score standard deviations ranging from 0.92 to 1.07. Total TEM was 0.40 for stature, 0.28 for HC, and 0.25 for MUAC in centimeters. ICC ranged from 0.99 to 1.00. The quality of manual measurements in BINA was high and similar to that of the anthropometric data used to develop the WHO CGS. We attributed high quality to vigorous training, motivated and competent field staff, reduction of non-measurement error through the use of technology, and reduction of measurement error through adequate monitoring and supervision. Our anthropometry measurement protocol, which builds on and improves upon the protocol used for the WHO CGS, can be used to improve anthropometric data quality. The discussion illustrates the need to standardize anthropometric data quality assessment, and we conclude that BINA can provide a valuable evaluation of 3D imaging for child anthropometry because there is comparison to gold-standard, manual measurements. PMID:29240796
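The reliability statistic quoted above, the Technical Error of Measurement, has a standard closed form for repeated measurements: TEM = sqrt(sum of squared differences / 2n) when each subject is measured twice, with the relative TEM expressed as a percentage of the overall mean. The sketch below applies that formula to hypothetical repeated stature readings; the values are illustrative, not BINA data.

```python
import numpy as np

# Hypothetical repeated stature measurements (cm) by the same observer;
# values are illustrative, not BINA data.
first  = np.array([87.4, 92.1, 101.3, 95.0, 88.7])
second = np.array([87.6, 91.8, 101.0, 95.3, 88.5])

d = first - second
tem = np.sqrt(np.sum(d**2) / (2 * len(d)))                     # intra-observer TEM
rel_tem = 100 * tem / np.concatenate([first, second]).mean()   # relative TEM (%)

print(f"TEM = {tem:.2f} cm, relative TEM = {rel_tem:.2f}%")
```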
NASA Astrophysics Data System (ADS)
Behrendt, A.; Wulfmeyer, V.; Hammann, E.; Muppa, S. K.; Pal, S.
2015-05-01
The rotational Raman lidar (RRL) of the University of Hohenheim (UHOH) measures atmospheric temperature profiles with high resolution (10 s, 109 m). The data contain low-noise errors even in daytime due to the use of strong UV laser light (355 nm, 10 W, 50 Hz) and a very efficient interference-filter-based polychromator. In this paper, the first profiling of the second- to fourth-order moments of turbulent temperature fluctuations is presented. Furthermore, skewness profiles and kurtosis profiles in the convective planetary boundary layer (CBL) including the interfacial layer (IL) are discussed. The results demonstrate that the UHOH RRL resolves the vertical structure of these moments. The data set which is used for this case study was collected in western Germany (50°53'50.56'' N, 6°27'50.39'' E; 110 m a.s.l.) on 24 April 2013 during the Intensive Observations Period (IOP) 6 of the HD(CP)2 (High-Definition Clouds and Precipitation for advancing Climate Prediction) Observational Prototype Experiment (HOPE). We used the data between 11:00 and 12:00 UTC corresponding to 1 h around local noon (the highest position of the Sun was at 11:33 UTC). First, we investigated profiles of the total noise error of the temperature measurements and compared them with estimates of the temperature measurement uncertainty due to shot noise derived with Poisson statistics. The comparison confirms that the major contribution to the total statistical uncertainty of the temperature measurements originates from shot noise. The total statistical uncertainty of a 20 min temperature measurement is lower than 0.1 K up to 1050 m a.g.l. (above ground level) at noontime; even for single 10 s temperature profiles, it is smaller than 1 K up to 1020 m a.g.l. Autocovariance and spectral analyses of the atmospheric temperature fluctuations confirm that a temporal resolution of 10 s was sufficient to resolve the turbulence down to the inertial subrange. This is also indicated by the integral scale of the temperature fluctuations which had a mean value of about 80 s in the CBL with a tendency to decrease to smaller values towards the CBL top. Analyses of profiles of the second-, third-, and fourth-order moments show that all moments had peak values in the IL around the mean top of the CBL which was located at 1230 m a.g.l. The maximum of the variance profile in the IL was 0.39 K2 with 0.07 and 0.11 K2 for the sampling error and noise error, respectively. The third-order moment (TOM) was not significantly different from zero in the CBL but showed a negative peak in the IL with a minimum of -0.93 K3 and values of 0.05 and 0.16 K3 for the sampling and noise errors, respectively. The fourth-order moment (FOM) and kurtosis values throughout the CBL were not significantly different to those of a Gaussian distribution. Both showed also maxima in the IL but these were not statistically significant taking the measurement uncertainties into account. We conclude that these measurements permit the validation of large eddy simulation results and the direct investigation of turbulence parameterizations with respect to temperature.
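The separation of noise error from atmospheric variance described above rests on the fact that instrument noise is uncorrelated between successive profiles, so it appears only as a spike at lag 0 of the autocovariance function. A common approach is to extrapolate the autocovariance from nonzero lags back to lag 0 (a Lenschow-style technique, shown here as a generic sketch under simplifying assumptions, not the exact algorithm of the paper).

```python
import numpy as np

def atmospheric_variance(ts, max_lag=5):
    """Split the total variance of a time series into an 'atmospheric' part and
    an uncorrelated-noise part by extrapolating the autocovariance function
    from lags >= 1 back to lag 0 (illustrative sketch only)."""
    x = ts - ts.mean()
    n = len(x)
    acov = np.array([np.mean(x[:n - k] * x[k:]) for k in range(max_lag + 1)])
    lags = np.arange(1, max_lag + 1)
    fit = np.polynomial.Polynomial.fit(lags, acov[1:], deg=2)  # low-order fit
    atm_var = fit(0.0)                 # extrapolated autocovariance at lag 0
    noise_var = acov[0] - atm_var      # spike at lag 0 is the noise variance
    return atm_var, noise_var

rng = np.random.default_rng(1)
signal = np.convolve(rng.normal(size=400), np.ones(20) / 20, mode="same")  # correlated "turbulence"
noisy = signal + rng.normal(scale=0.3, size=signal.size)                   # add uncorrelated noise
print(atmospheric_variance(noisy))
```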
Urinary Sugars--A Biomarker of Total Sugars Intake.
Tasevska, Natasha
2015-07-15
Measurement error in self-reported sugars intake may explain the lack of consistency in the epidemiologic evidence on the association between sugars and disease risk. This review describes the development and applications of a biomarker of sugars intake, informs its future use and recommends directions for future research. Recently, 24 h urinary sucrose and fructose were suggested as a predictive biomarker for total sugars intake, based on findings from three highly controlled feeding studies conducted in the United Kingdom. From this work, a calibration equation for the biomarker that provides an unbiased measure of sugars intake was generated that has since been used in two US-based studies with free-living individuals to assess measurement error in dietary self-reports and to develop regression calibration equations that could be used in future diet-disease analyses. Further applications of the biomarker include its use as a surrogate measure of intake in diet-disease association studies. Although this biomarker has great potential and exhibits favorable characteristics, available data come from a few controlled studies with limited sample sizes conducted in the UK. Larger feeding studies conducted in different populations are needed to further explore biomarker characteristics and stability of its biases, compare its performance, and generate a unique, or population-specific biomarker calibration equations to be applied in future studies. A validated sugars biomarker is critical for informed interpretation of sugars-disease association studies.
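Regression calibration, mentioned above as the way the biomarker is used to correct self-reported intake, proceeds in two steps: fit a calibration model relating the unbiased biomarker-based measure to the self-report, then substitute the calibrated values into the diet-disease model. The sketch below illustrates the idea with simulated data and simple linear models; the variables, coefficients and error levels are hypothetical, not those of the feeding studies.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

true_intake = rng.normal(100, 25, n)                          # g/day, hypothetical
self_report = 0.7 * true_intake + rng.normal(0, 20, n) + 20   # biased, noisy report
biomarker   = true_intake + rng.normal(0, 10, n)              # unbiased reference measure

# Step 1: calibration model -- regress the biomarker on the self-report.
X = np.column_stack([np.ones(n), self_report])
gamma, *_ = np.linalg.lstsq(X, biomarker, rcond=None)
calibrated = X @ gamma                                        # predicted "true" intake

# Step 2: use the calibrated intake (not the raw self-report) in the outcome model.
outcome = 0.02 * true_intake + rng.normal(0, 1, n)
for label, expo in [("naive self-report", self_report), ("calibrated", calibrated)]:
    Z = np.column_stack([np.ones(n), expo])
    beta, *_ = np.linalg.lstsq(Z, outcome, rcond=None)
    print(f"{label}: slope = {beta[1]:.4f} (true 0.02)")
```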
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
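The abstract does not give the modified least squares estimator itself, but a classical fit that already uses a known variance ratio in this spirit is Deming regression. The sketch below shows that estimator, with the ratio defined as response-error variance over factor measurement-error variance; it is offered only as an illustration of how a variance ratio enters a fit, not as the method proposed in the report.

```python
import numpy as np

def deming_fit(x, y, var_ratio):
    """Deming regression: straight-line fit when both x and y contain error.
    var_ratio = (response-error variance) / (factor measurement-error variance).
    Classical estimator, shown for illustration only."""
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2) / (len(x) - 1)
    syy = np.sum((y - ym) ** 2) / (len(x) - 1)
    sxy = np.sum((x - xm) * (y - ym)) / (len(x) - 1)
    d = var_ratio
    slope = (syy - d * sxx + np.sqrt((syy - d * sxx) ** 2 + 4 * d * sxy ** 2)) / (2 * sxy)
    return ym - slope * xm, slope      # intercept, slope

rng = np.random.default_rng(3)
truth = np.linspace(0, 10, 50)
x = truth + rng.normal(0, 0.5, truth.size)            # factor measured with error
y = 2.0 * truth + 1.0 + rng.normal(0, 1.0, truth.size)
print(deming_fit(x, y, var_ratio=(1.0 ** 2) / (0.5 ** 2)))
```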
Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons. PMID:24788812
Automated drug dispensing system reduces medication errors in an intensive care setting.
Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick
2010-12-01
We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.
Arenas Jiménez, María Dolores; Ferre, Gabriel; Álvarez-Ude, Fernando
Haemodialysis (HD) patients are a high-risk population group. For these patients, an error could have catastrophic consequences. Therefore, systems that ensure the safety of these patients in an environment with high technology and substantial interaction of the human factor are a requirement. To show a systematic working approach, reproducible in any HD unit, which consists of recording the complications and errors that occurred during the HD session; defining which of those complications could be considered an adverse event (AE), and therefore preventable; and carrying out a systematic analysis of them, as well as of underlying real or potential errors, evaluating their severity, frequency and detection; as well as establishing priorities for action (Failure Mode and Effects Analysis [FMEA] system). Retrospective analysis of the charts of all HD sessions performed during one month (October 2015) on 97 patients, analysing all recorded complications. The consideration of these complications as AEs was based on a consensus among 13 health professionals and 2 patients. The severity, frequency and detection of each AE were evaluated by the FMEA system. We analysed 1303 HD treatments in 97 patients. A total of 383 complications (1 every 3.4 HD treatments) were recorded. Approximately 87.9% of them were deemed AEs, and 23.7% were complications related to patients' underlying pathology. There was one AE every 3.8 HD treatments. Hypertension and hypotension were the most frequent AEs (42.7 and 27.5% of all AEs recorded, respectively). Vascular access-related AEs occurred once every 68.5 HD treatments. A total of 21 errors (1 every 62 HD treatments), mainly related to the HD technique and to the administration of prescribed medication, were registered. The highest risk priority number, according to the FMEA, corresponded to errors related to patient body weight, dysfunction/rupture of the catheter, and needle extravasation. HD complications are frequent. Consideration of some of them as AEs could improve safety by facilitating the implementation of preventive measures. The application of the FMEA system allows real and potential errors in dialysis units to be stratified and acted on with the appropriate degree of urgency, developing and implementing the necessary preventive and improvement measures. Copyright © 2017 Sociedad Española de Nefrología. Published by Elsevier España, S.L.U. All rights reserved.
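The FMEA prioritization mentioned above ranks failure modes by a risk priority number (RPN), the product of severity, occurrence and detection scores. The sketch below shows that calculation for a few illustrative dialysis-unit failure modes; the 1-10 scores are hypothetical and not those assigned by the cited unit.

```python
# Hypothetical FMEA scoring for a few dialysis-unit failure modes; the 1-10
# scales and entries are illustrative, not those of the cited unit.
failure_modes = {
    "error in recorded patient body weight": {"severity": 8, "occurrence": 4, "detection": 6},
    "catheter dysfunction/rupture":          {"severity": 9, "occurrence": 3, "detection": 5},
    "needle extravasation":                  {"severity": 7, "occurrence": 4, "detection": 4},
}

def rpn(scores):
    """Risk priority number = severity x occurrence x detection."""
    return scores["severity"] * scores["occurrence"] * scores["detection"]

for mode, scores in sorted(failure_modes.items(), key=lambda kv: rpn(kv[1]), reverse=True):
    print(f"RPN {rpn(scores):4d}  {mode}")
```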
Sea Ice Topography Profiling using Laser Altimetry from Small Unmanned Aircraft Systems
NASA Astrophysics Data System (ADS)
Crocker, Roger Ian
Arctic sea ice is undergoing a dramatic transition from a perennial ice pack with a high prevalence of old multiyear ice, to a predominantly seasonal ice pack composed primarily of young first-year and second-year ice. This transition has brought about changes in the sea ice thickness and topography characteristics, which will further affect the evolution and survivability of the ice pack. The varying ice conditions have substantial implications for commercial operations, international affairs, regional and global climate, our ability to model climate dynamics, and the livelihood of Arctic inhabitants. A number of satellite and airborne missions are dedicated to monitoring sea ice, but they are limited by their spatial and temporal resolution and coverage. Given the fast rate of sea ice change and its pervasive implications, enhanced observational capabilities are needed to augment the current strategies. The CU Laser Profilometer and Imaging System (CULPIS) is designed specifically for collecting fine-resolution elevation data and imagery from small unmanned aircraft systems (UAS), and has great potential to complement ongoing missions. This altimeter system has been integrated into four different UAS, and has been deployed during Arctic and Antarctic science campaigns. The CULPIS elevation measurement accuracy is shown to be 95±25 cm, and is limited primarily by GPS positioning error (<25 cm), aircraft attitude determination error (<20 cm), and sensor misalignment error (<20 cm). The relative error is considerably smaller over short flight distances, and the measurement precision is shown to be <10 cm over a distance of 200 m. Given its fine precision, the CULPIS is well suited for measuring sea ice topography, and observed ridge height and ridge separation distributions are found to agree with theoretical distributions to within 5%. Simulations demonstrate the inability of coarse-resolution measurements to accurately represent the theoretical distributions, with differences up to 30%. Future efforts should focus on reducing the total measurement error to <20 cm to make the CULPIS suitable for detecting ice sheet elevation change.
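When the individual error sources listed above are independent, their contributions combine approximately in root-sum-square fashion. The sketch below shows that combination, treating the quoted upper bounds as 1-sigma components; it illustrates the budgeting arithmetic only and is not a reconstruction of the CULPIS error analysis.

```python
import numpy as np

# Illustrative error budget: independent 1-sigma error components (cm) for a
# laser-altimeter elevation measurement, combined in root-sum-square fashion.
components = {
    "GPS positioning": 25.0,
    "attitude determination": 20.0,
    "sensor misalignment": 20.0,
}
total = np.sqrt(sum(v ** 2 for v in components.values()))
print(f"combined 1-sigma error ~ {total:.1f} cm")
```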
The error in total error reduction.
Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R
2014-02-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
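The contrast between total error reduction and local error reduction can be made concrete with the two update rules. The sketch below implements a Rescorla-Wagner-style TER update and a cue-specific LER update for a simple compound-conditioning schedule; the parameter values and training schedule are illustrative, not those of the reviewed data sets.

```python
import numpy as np

def train(trials, n_cues, rule, alpha=0.3, lam=1.0):
    """Update associative strengths V across trials.
    rule='TER': error is the outcome minus the summed prediction of all present
                cues (Rescorla-Wagner-style total error reduction).
    rule='LER': error for each cue is the outcome minus that cue's own
                prediction (local error reduction). Illustrative sketch only."""
    V = np.zeros(n_cues)
    for present, outcome in trials:
        for i in present:
            pred = V[list(present)].sum() if rule == "TER" else V[i]
            V[i] += alpha * (outcome * lam - pred)
    return V

# Compound conditioning: cues 0 and 1 always presented together with the outcome.
trials = [((0, 1), 1)] * 50
print("TER:", train(trials, 2, "TER"))   # cues share the prediction (~0.5 each)
print("LER:", train(trials, 2, "LER"))   # each cue approaches the asymptote (~1.0)
```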
Burr, Tom; Croft, Stephen; Jarman, Kenneth D.
2015-09-05
The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed andmore » achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.« less
NASA Astrophysics Data System (ADS)
Kim, J. G.; Liu, H.
2007-10-01
Near-infrared spectroscopy or imaging has been extensively applied to various biomedical applications since it can detect the concentrations of oxyhaemoglobin (HbO2), deoxyhaemoglobin (Hb) and total haemoglobin (Hbtotal) from deep tissues. To quantify concentrations of these haemoglobin derivatives, the extinction coefficient values of HbO2 and Hb have to be employed. However, it was not well recognized among researchers that small differences in extinction coefficients could cause significant errors in quantifying the concentrations of haemoglobin derivatives. In this study, we derived equations to estimate errors of haemoglobin derivatives caused by the variation of haemoglobin extinction coefficients. To prove our error analysis, we performed experiments using liquid-tissue phantoms containing 1% Intralipid in a phosphate-buffered saline solution. The gas intervention of pure oxygen was given in the solution to examine the oxygenation changes in the phantom, and 3 mL of human blood was added twice to show the changes in [Hbtotal]. The error calculation has shown that even a small variation (0.01 cm-1 mM-1) in extinction coefficients can produce appreciable relative errors in quantification of Δ[HbO2], Δ[Hb] and Δ[Hbtotal]. We have also observed that the error of Δ[Hbtotal] is not always larger than those of Δ[HbO2] and Δ[Hb]. This study concludes that we need to be aware of any variation in haemoglobin extinction coefficients, which could result from changes in temperature, and to utilize corresponding animal's haemoglobin extinction coefficients for the animal experiments, in order to obtain more accurate values of Δ[HbO2], Δ[Hb] and Δ[Hbtotal] from in vivo tissue measurements.
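The error propagation discussed above follows from the modified Beer-Lambert inversion: absorbance changes at two wavelengths are related to Δ[HbO2] and Δ[Hb] through the extinction-coefficient matrix, so a small perturbation of one coefficient feeds directly into the recovered concentrations. The sketch below illustrates this with made-up extinction coefficients, path length and concentration changes; the numbers are placeholders, not tabulated values or phantom results.

```python
import numpy as np

# Hypothetical extinction coefficients (cm^-1 mM^-1) at two wavelengths;
# values are illustrative, not tabulated constants.
E = np.array([[0.95, 3.88],    # "730 nm": [eps_HbO2, eps_Hb]
              [2.72, 1.80]])   # "850 nm"
L = 1.0                        # assumed effective optical path length (cm)

true_dC = np.array([0.010, -0.006])          # mM: [dHbO2, dHb]
dOD = (E @ true_dC) * L                      # simulated absorbance changes

dC = np.linalg.solve(E * L, dOD)             # nominal inversion
E_pert = E.copy()
E_pert[0, 0] += 0.01                         # 0.01 cm^-1 mM^-1 error in one coefficient
dC_pert = np.linalg.solve(E_pert * L, dOD)   # inversion with the perturbed coefficient

print("nominal  :", dC, "total:", dC.sum())
print("perturbed:", dC_pert, "total:", dC_pert.sum())
print("relative error (%):", 100 * (dC_pert - dC) / dC)
```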
Is visual short-term memory depthful?
Reeves, Adam; Lei, Quan
2014-03-01
Does visual short-term memory (VSTM) depend on depth, as it might be if information was stored in more than one depth layer? Depth is critical in natural viewing and might be expected to affect retention, but whether this is so is currently unknown. Cued partial reports of letter arrays (Sperling, 1960) were measured up to 700 ms after display termination. Adding stereoscopic depth hardly affected VSTM capacity or decay inferred from total errors. The pattern of transposition errors (letters reported from an uncued row) was almost independent of depth and cue delay. We conclude that VSTM is effectively two-dimensional. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Whitlock, C. H., III
1977-01-01
Constituents with linear radiance gradients with concentration may be quantified from signals which contain nonlinear atmospheric and surface reflection effects for both homogeneous and non-homogeneous water bodies provided accurate data can be obtained and nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error to insure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least square fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.
A mid-latitude balloon-borne observation of total odd nitrogen
NASA Technical Reports Server (NTRS)
Kondo, Y.; Aimedieu, P.; Matthews, W. A.; Sheldon, W. R.; Benbrook, J. R.
1990-01-01
A balloon-borne instrument to measure total odd nitrogen NO(y) has been developed. A converter which enables catalytic conversion of NO(y) into nitric oxide on a heated gold surface is combined with a chemiluminescence detector. The conversion efficiency for NO2 was measured to be close to 100 percent at pressures between 60 and 7 mb. The major sources of error in the balloon-borne measurements are the uncertainties in the estimates of the sample flow rate and the zero level of the instrument. The NO(y) concentration was measured at altitudes between 12 and 28 km with a precision of about 25 percent in a balloon experiment conducted at latitude 44 deg N in June 1989. The NO(y) concentration was measured to be 1.5 + or - 0.4, 3 + or - 0.7, 10 + or - 3, and 14 + or - 4 ppbv at altitudes of 17, 20, 25, and 28 km, respectively.
Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs
NASA Astrophysics Data System (ADS)
Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken
2015-09-01
To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.
Research on calibration error of carrier phase against antenna arraying
NASA Astrophysics Data System (ADS)
Sun, Ke; Hou, Xiaomin
2016-11-01
A key technical difficulty of uplink antenna arraying is that the signals from the individual antennas cannot be automatically aligned at a target in deep space. The far-field power-combining gain is directly determined by the accuracy of carrier phase calibration, so the entire arraying system must be analyzed in order to improve the accuracy of the phase calibration. This paper analyzes the factors affecting the carrier phase calibration error of an uplink antenna arraying system, including phase measurement and equipment errors, uplink channel phase-shift error, position errors of the ground antennas, calibration receiver and target spacecraft, and the error caused by atmospheric turbulence disturbance. A spatial and temporal autocorrelation model of atmospheric disturbances is discussed. Because the antennas of the array have no common reference signal for continuous calibration, the system must be calibrated periodically, with calibration referenced to communication with one or more spacecraft over a given period. Since deep-space targets cannot automatically align the combined received signal, the alignment must be established in advance on the ground. The data show that, with existing technology, the error can be kept within the range demanded by the carrier phase calibration accuracy, and the total error can be controlled within a reasonable range.
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the usage of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
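Under pure classical error the familiar attenuation result applies: the naive slope is scaled by the reliability ratio, and a method-of-moments correction divides the naive estimate by an estimate of that ratio. The sketch below demonstrates only this basic mechanism; it assumes a known error variance and deliberately omits the autocorrelated and Berkson components handled in the paper, and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
beta = 0.5

x_true = rng.normal(0, 1, n)                  # true exposure (e.g., particle concentration)
sigma_u = 0.8
x_obs = x_true + rng.normal(0, sigma_u, n)    # classical measurement error
y = beta * x_true + rng.normal(0, 1, n)       # health outcome (e.g., heart rate change)

naive_slope = np.polyfit(x_obs, y, 1)[0]

# Method-of-moments correction, assuming the error variance sigma_u^2 is known
# (e.g., from collocated measurements): estimate the reliability ratio and
# divide the naive slope by it.
lam_hat = (np.var(x_obs, ddof=1) - sigma_u ** 2) / np.var(x_obs, ddof=1)
print(f"true {beta:.2f}  naive {naive_slope:.3f}  corrected {naive_slope / lam_hat:.3f}")
```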
NASA Technical Reports Server (NTRS)
Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh;
2016-01-01
Urban emissions of greenhouse gases (GHG) represent more than 70% of the global fossil fuel GHG emissions. Unless mitigation strategies are successfully implemented, the increase in urban GHG emissions is almost inevitable as large metropolitan areas are projected to grow twice as fast as the world population in the coming 15 years. Monitoring these emissions becomes a critical need as their contribution to the global carbon budget increases rapidly. In this study, we developed the first comprehensive monitoring system of CO2 emissions at high resolution using a dense network of CO2 atmospheric measurements over the city of Indianapolis. The inversion system was evaluated over an 8-month period and showed an increase compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product, with a 20% increase in the total emissions over the area (from 4.5 to 5.7 Metric Megatons of Carbon +/- 0.23 Metric Megatons of Carbon). However, several key parameters of the inverse system need to be addressed to carefully characterize the spatial distribution of the emissions and the aggregated total emissions. We found that spatial structures in prior emission errors, mostly undetermined, significantly affect the spatial pattern in the inverse solution, as well as the carbon budget over the urban area. Several other parameters of the inversion were sufficiently constrained by additional observations such as the characterization of the GHG boundary inflow and the introduction of hourly transport model errors estimated from the meteorological assimilation system. Finally, we estimated the uncertainties associated with remaining systematic errors and undetermined parameters using an ensemble of inversions. The total CO2 emissions for the Indianapolis urban area based on the ensemble mean and quartiles are 5.26 - 5.91 Metric Megatons of Carbon, i.e. a statistically significant difference compared to the prior total emissions of 4.1 to 4.5 Metric Megatons of Carbon. We therefore conclude that atmospheric inversions are potentially able to constrain the carbon budget of the city, assuming sufficient data to measure the inflow of GHG over the city, but additional information on prior emissions and their associated error structures is required if we are to determine the spatial structures of urban emissions at high resolution.
NASA Astrophysics Data System (ADS)
Taylor, Thomas E.; L'Ecuyer, Tristan; Slusser, James; Stephens, Graeme; Krotkov, Nick; Davis, John; Goering, Christian
2005-08-01
Extensive sensitivity and error characteristics of a recently developed optimal estimation retrieval algorithm which simultaneously determines aerosol optical depth (AOD), aerosol single scatter albedo (SSA) and total ozone column (TOC) from ultra-violet irradiances are described. The algorithm inverts measured diffuse and direct irradiances at 7 channels in the UV spectral range obtained from the United States Department of Agriculture's (USDA) UV-B Monitoring and Research Program's (UVMRP) network of 33 ground-based UV-MFRSR instruments to produce aerosol optical properties and TOC at all seven wavelengths. Sensitivity studies of the Tropospheric Ultra-violet/Visible (TUV) radiative transfer model performed for various operating modes (Delta-Eddington versus n-stream Discrete Ordinate) over domains of AOD, SSA, TOC, asymmetry parameter and surface albedo show that the solutions are well constrained. Realistic input error budgets and diagnostic and error outputs from the retrieval are analyzed to demonstrate the atmospheric conditions under which the retrieval provides useful and significant results. After optimizing the algorithm for the USDA site in Panther Junction, Texas, the retrieval algorithm was run on a cloud-screened set of irradiance measurements for the month of May 2003. Comparisons to independently derived AODs are favorable, with root mean square (RMS) differences of about 3% to 7% at 300 nm and less than 1% at 368 nm, on May 12 and 22, 2003. This retrieval method will be used to build an aerosol climatology and provide ground-truthing of satellite measurements by running it operationally on the USDA UV network database.
Integrating Six Sigma with total quality management: a case example for measuring medication errors.
Revere, Lee; Black, Ken
2003-01-01
Six Sigma is a new management philosophy that seeks a nonexistent error rate. It is ripe for healthcare because many healthcare processes require a near-zero tolerance for mistakes. For most organizations, establishing a Six Sigma program requires significant resources and produces considerable stress. However, in healthcare, management can piggyback Six Sigma onto current total quality management (TQM) efforts so that minimal disruption occurs in the organization. Six Sigma is an extension of the Failure Mode and Effects Analysis that is required by JCAHO; it can easily be integrated into existing quality management efforts. Integrating Six Sigma into the existing TQM program facilitates process improvement through detailed data analysis. A drilled-down approach to root-cause analysis greatly enhances the existing TQM approach. Using the Six Sigma metrics, internal project comparisons facilitate resource allocation while external project comparisons allow for benchmarking. Thus, the application of Six Sigma makes TQM efforts more successful. This article presents a framework for including Six Sigma in an organization's TQM plan while providing a concrete example using medication errors. Using the process defined in this article, healthcare executives can integrate Six Sigma into all of their TQM projects.
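The Six Sigma metrics referred to above are typically expressed as defects per million opportunities (DPMO) and a corresponding sigma level. A minimal sketch of that conversion follows, using the conventional 1.5-sigma shift and hypothetical medication-error counts; the figures are placeholders, not data from the article.

```python
from scipy.stats import norm

def sigma_level(defects, units, opportunities_per_unit):
    """Convert an observed defect count into DPMO and a short-term sigma level
    (including the conventional 1.5-sigma shift)."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    return dpmo, norm.isf(dpmo / 1_000_000) + 1.5

# Hypothetical example: 25 medication errors in 10,000 doses,
# one error opportunity per dose.
dpmo, sigma = sigma_level(defects=25, units=10_000, opportunities_per_unit=1)
print(f"DPMO = {dpmo:.0f}, sigma level = {sigma:.2f}")
```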
Position Tracking During Human Walking Using an Integrated Wearable Sensing System.
Zizzo, Giulio; Ren, Lei
2017-12-10
Progress has been made enabling expensive, high-end inertial measurement units (IMUs) to be used as tracking sensors. However, the cost of these IMUs is prohibitive to their widespread use, and hence the potential of low-cost IMUs is investigated in this study. A wearable low-cost sensing system consisting of IMUs and ultrasound sensors was developed. Core to this system is an extended Kalman filter (EKF), which provides both zero-velocity updates (ZUPTs) and Heuristic Drift Reduction (HDR). The IMU data was combined with ultrasound range measurements to improve accuracy. When a map of the environment was available, a particle filter was used to impose constraints on the possible user motions. The system was therefore composed of three subsystems: IMUs, ultrasound sensors, and a particle filter. A Vicon motion capture system was used to provide ground truth information, enabling validation of the sensing system. Using only the IMU, the system showed loop misclosure errors of 1% with a maximum error of 4-5% during walking. The addition of the ultrasound sensors resulted in a 15% reduction in the total accumulated error. Lastly, the particle filter was capable of providing noticeable corrections, which could keep the tracking error below 2% after the first few steps.
Ying, Gui-shuang; Maguire, Maureen G.; Kulp, Marjean Taylor; Ciner, Elise; Moore, Bruce; Pistilli, Maxwell; Candy, Rowan
2017-01-01
PURPOSE To evaluate the agreement of cycloplegic refractive error measures between the Grand Seiko and Retinomax autorefractors in 4- and 5-year-old children. METHODS Cycloplegic refractive error of children was measured using the Grand Seiko and Retinomax during a comprehensive eye examination. Accommodative error was measured using the Grand Seiko. The differences in sphere, cylinder, spherical equivalent (SE) and intereye vector dioptric distance (VDD) between autorefractors were assessed using the Bland-Altman plot and 95% limits of agreement (95% LoA). RESULTS A total of 702 examinations were included. Compared to the Retinomax, the Grand Seiko provided statistically significantly larger values of sphere (mean difference, 0.34 D; 95% LoA, −0.46 to 1.14 D), SE (mean, 0.25 D; 95% LoA, −0.55 to 1.05 D), VDD (mean, 0.19 D; 95% LoA, −0.67 to 1.05 D), and more cylinder (mean, −0.18 D; 95% LoA, −0.91 to 0.55 D). The Grand Seiko measured ≥0.5 D more than the Retinomax in 43.1% of eyes for sphere and 29.8% of eyes for SE. In multivariate analysis, eyes with SE of >4 D (based on the average of the two autorefractors) had larger differences in sphere (mean, 0.66 D vs 0.35 D; P < 0.0001) and SE (0.57 D vs 0.26 D; P < 0.0001) than eyes with SE of ≤4 D. CONCLUSIONS Under cycloplegia, the Grand Seiko provided higher measures of sphere, more cylinder, and higher SE than the Retinomax. Higher refractive error was associated with larger differences in sphere and SE between the Grand Seiko and Retinomax. (J AAPOS 2017;21: 219–223) PMID:28528993
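The Bland-Altman 95% limits of agreement used above are simply the mean paired difference plus or minus 1.96 standard deviations of the differences. A minimal sketch of that calculation follows, with hypothetical paired spherical-equivalent readings standing in for the study data.

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between two instruments."""
    diff = a - b
    md, sd = diff.mean(), diff.std(ddof=1)
    return md, (md - 1.96 * sd, md + 1.96 * sd)

# Hypothetical paired spherical-equivalent readings (D), not study data.
grand_seiko = np.array([1.25, 2.50, 0.75, 3.00, 1.50, 4.25])
retinomax   = np.array([1.00, 2.25, 0.75, 2.50, 1.25, 3.75])
print(bland_altman(grand_seiko, retinomax))
```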
The incidence and severity of errors in pharmacist-written discharge medication orders.
Onatade, Raliat; Sawieres, Sara; Veck, Alexandra; Smith, Lindsay; Gore, Shivani; Al-Azeib, Sumiah
2017-08-01
Background Errors in discharge prescriptions are problematic. When hospital pharmacists write discharge prescriptions improvements are seen in the quality and efficiency of discharge. There is limited information on the incidence of errors in pharmacists' medication orders. Objective To investigate the extent and clinical significance of errors in pharmacist-written discharge medication orders. Setting 1000-bed teaching hospital in London, UK. Method Pharmacists in this London hospital routinely write discharge medication orders as part of the clinical pharmacy service. Convenient days, based on researcher availability, between October 2013 and January 2014 were selected. Pre-registration pharmacists reviewed all discharge medication orders written by pharmacists on these days and identified discrepancies between the medication history, inpatient chart, patient records and discharge summary. A senior clinical pharmacist confirmed the presence of an error. Each error was assigned a potential clinical significance rating (based on the NCCMERP scale) by a physician and an independent senior clinical pharmacist, working separately. Main outcome measure Incidence of errors in pharmacist-written discharge medication orders. Results 509 prescriptions, written by 51 pharmacists, containing 4258 discharge medication orders were assessed (8.4 orders per prescription). Ten prescriptions (2%), contained a total of ten erroneous orders (order error rate-0.2%). The pharmacist considered that one error had the potential to cause temporary harm (0.02% of all orders). The physician did not rate any of the errors with the potential to cause harm. Conclusion The incidence of errors in pharmacists' discharge medication orders was low. The quality, safety and policy implications of pharmacists routinely writing discharge medication orders should be further explored.
Nimbus-7 Earth radiation budget calibration history. Part 1: The solar channels
NASA Technical Reports Server (NTRS)
Kyle, H. Lee; Hoyt, Douglas V.; Hickey, John R.; Maschhoff, Robert H.; Vallette, Brenda J.
1993-01-01
The Earth Radiation Budget (ERB) experiment on the Nimbus-7 satellite measured the total solar irradiance plus broadband spectral components on a nearly daily basis from 16 Nov. 1978, until 16 June 1992. Months of additional observations were taken in late 1992 and in 1993. The emphasis is on the electrically self-calibrating cavity radiometer, channel 10c, which recorded accurate total solar irradiance measurements over the whole period. The spectral channels did not have inflight calibration adjustment capabilities. These channels can, with some additional corrections, be used for short-term studies (one or two solar rotations - 27 to 60 days), but not for long-term trend analysis. For channel 10c, changing radiometer pointing, the zero offsets, the stability of the gain, the temperature sensitivity, and the influences of other platform instruments are all examined and their effects on the measurements considered. Only the question of relative accuracy (not absolute) is examined. The final channel 10c product is also compared with solar measurements made by independent experiments on other satellites. The Nimbus experiment showed that the mean solar energy was about 0.1 percent (1.4 W/sqm) higher in the excited Sun years of 1979 and 1991 than in the quiet Sun years of 1985 and 1986. The error analysis indicated that the measured long-term trends may be as accurate as +/- 0.005 percent. The worst-case error estimate is +/- 0.03 percent.
Validating the Rett Syndrome Gross Motor Scale.
Downs, Jenny; Stahlhut, Michelle; Wong, Kingsley; Syhler, Birgit; Bisgaard, Anne-Marie; Jacoby, Peter; Leonard, Helen
2016-01-01
Rett syndrome is a pervasive neurodevelopmental disorder associated with a pathogenic mutation on the MECP2 gene. Impaired movement is a fundamental component and the Rett Syndrome Gross Motor Scale was developed to measure gross motor abilities in this population. The current study investigated the validity and reliability of the Rett Syndrome Gross Motor Scale. Video data showing gross motor abilities supplemented with parent report data were collected for 255 girls and women registered with the Australian Rett Syndrome Database, and the factor structure and relationships between motor scores, age and genotype were investigated. Clinical assessment scores for 38 girls and women with Rett syndrome who attended the Danish Center for Rett Syndrome were used to assess consistency of measurement. Principal components analysis enabled the calculation of three factor scores: Sitting, Standing and Walking, and Challenge. Motor scores were poorer with increasing age, and those with the p.Arg133Cys, p.Arg294* or p.Arg306Cys mutation achieved higher scores than those with a large deletion. The repeatability of clinical assessment was excellent (intraclass correlation coefficient for total score 0.99, 95% CI 0.93-0.98). The standard error of measurement for the total score was 2 points, and we would be 95% confident that a change of 4 points on the 45-point scale would be greater than within-subject measurement error. The Rett Syndrome Gross Motor Scale could be an appropriate measure of gross motor skills in clinical practice and clinical trials.
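A common way to obtain the standard error of measurement and an index of true change is SEM = SD x sqrt(1 - ICC) and MDC = z x sqrt(2) x SEM. The sketch below shows those generic formulas with hypothetical inputs; it does not attempt to reproduce the specific values reported for the scale.

```python
import numpy as np

def sem_and_mdc(sd, icc, z=1.96):
    """Standard error of measurement and minimal detectable change.
    SEM = SD * sqrt(1 - ICC); MDC = z * sqrt(2) * SEM (z=1.96 for 95%, 1.645 for 90%)."""
    sem = sd * np.sqrt(1 - icc)
    return sem, z * np.sqrt(2) * sem

# Hypothetical score SD and ICC, not the published scale statistics.
print(sem_and_mdc(sd=9.0, icc=0.95))           # SEM and MDC at 95% confidence
print(sem_and_mdc(sd=9.0, icc=0.95, z=1.645))  # SEM and MDC at 90% confidence
```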
NASA Astrophysics Data System (ADS)
Szeląg, Bartosz; Barbusiński, Krzysztof; Studziński, Jan; Bartkiewicz, Lidia
2017-11-01
In the study, models developed using data mining methods are proposed for predicting wastewater quality indicators: biochemical and chemical oxygen demand, total suspended solids, total nitrogen and total phosphorus at the inflow to wastewater treatment plant (WWTP). The models are based on values measured in previous time steps and daily wastewater inflows. Also, independent prediction systems that can be used in case of monitoring devices malfunction are provided. Models of wastewater quality indicators were developed using MARS (multivariate adaptive regression spline) method, artificial neural networks (ANN) of the multilayer perceptron type combined with the classification model (SOM) and cascade neural networks (CNN). The lowest values of absolute and relative errors were obtained using ANN+SOM, whereas the MARS method produced the highest error values. It was shown that for the analysed WWTP it is possible to obtain continuous prediction of selected wastewater quality indicators using the two developed independent prediction systems. Such models can ensure reliable WWTP work when wastewater quality monitoring systems become inoperable, or are under maintenance.
Fiber-optic evanescent-wave spectroscopy for fast multicomponent analysis of human blood
NASA Astrophysics Data System (ADS)
Simhi, Ronit; Gotshal, Yaron; Bunimovich, David; Katzir, Abraham; Sela, Ben-Ami
1996-07-01
A spectral analysis of human blood serum was undertaken by fiber-optic evanescent-wave spectroscopy (FEWS) with a Fourier-transform infrared spectrometer. A special cell for the FEWS measurements was designed and built that incorporates an IR-transmitting silver halide fiber and a means for introducing the blood-serum sample. Further improvements in analysis were obtained by the adoption of multivariate calibration techniques that are already used in clinical chemistry. The partial least-squares algorithm was used to calculate the concentrations of cholesterol, total protein, urea, and uric acid in human blood serum. The estimated prediction errors obtained (in percent of the average value) were 6% for total protein, 15% for cholesterol, 30% for urea, and 30% for uric acid. These results were compared with an independent prediction method that used a neural-network model, which yielded estimated prediction errors of 8.8% for total protein, 25% for cholesterol, and 21% for uric acid. Keywords: spectroscopy, fiber-optic evanescent-wave spectroscopy, Fourier-transform infrared spectrometer, blood, multivariate calibration, neural networks.
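The calibration step can be illustrated with a small partial least-squares sketch; the spectra and reference concentrations below are synthetic stand-ins, and the prediction error is reported in percent of the mean as in the abstract.

```python
# Minimal PLS calibration sketch: predict a constituent concentration from
# absorbance spectra. Data are synthetic, not FEWS measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 60, 200
spectra = rng.normal(size=(n_samples, n_wavenumbers))     # absorbance spectra
true_conc = rng.uniform(5.0, 9.0, size=n_samples)         # e.g. total protein, g/dL
spectra[:, 50] += 0.3 * true_conc                          # embed a spectral signature

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, true_conc, cv=5).ravel()

# prediction error expressed in percent of the average value, as in the abstract
rmsep = np.sqrt(np.mean((pred - true_conc) ** 2))
print(f"prediction error = {100 * rmsep / true_conc.mean():.1f}% of the mean")
```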
Mehta, Saurabh P; George, Hannah R; Goering, Christian A; Shafer, Danielle R; Koester, Alan; Novotny, Steven
2017-11-01
Clinical measurement study. The push-off test (POT) was recently conceived and found to be reliable and valid for assessing weight bearing through an injured wrist or elbow. However, further research with a larger sample can lend credence to the preliminary findings supporting the use of the POT. This study examined the interrater reliability, construct validity, and measurement error of the POT in patients with wrist conditions. Participants with musculoskeletal (MSK) wrist conditions were recruited. Performance on the POT, grip strength, and isometric strength of the wrist extensors were assessed. The shortened version of the Disabilities of the Arm, Shoulder and Hand questionnaire and the numeric pain rating scale were completed. The intraclass correlation coefficient assessed interrater reliability of the POT. Pearson correlation coefficients (r) examined the concurrent relationships between the POT and the other measures. The standard error of measurement and the minimal detectable change at the 90% confidence interval were assessed as the measurement error and the index of true change for the POT. A total of 50 participants with different elbow or wrist conditions (age: 48.1 ± 16.6 years) were included in this study. The results strongly supported the interrater reliability (intraclass correlation coefficient: 0.96 and 0.93 for the affected and unaffected sides, respectively) of the POT in patients with wrist MSK conditions. The POT showed convergent relationships with grip strength on the injured side (r = 0.89) and with wrist extensor strength (r = 0.7). The POT showed a small standard error of measurement (1.9 kg). The minimal detectable change at the 90% confidence interval for the POT was 4.4 kg for the sample. This study provides additional evidence to support the reliability and validity of the POT. This is the first study to provide values for the measurement error and true change in POT scores in patients with wrist MSK conditions. Further research should examine the responsiveness and discriminant validity of the POT in patients with wrist conditions. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
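The reported numbers are internally consistent with the usual conversion from the standard error of measurement (SEM) to a minimal detectable change at 90% confidence, MDC90 = 1.645 × √2 × SEM; a short check, with a hypothetical between-subject SD for the SEM step:

```python
# Reproducing the measurement-error arithmetic quoted above.
import math

sem_kg = 1.9                                  # standard error of measurement (POT)
mdc90 = 1.645 * math.sqrt(2) * sem_kg         # common MDC90 convention
print(f"MDC90 = {mdc90:.1f} kg")              # ~4.4 kg, matching the abstract

# SEM itself is often derived from test-retest data as SD * sqrt(1 - ICC);
# the between-subject SD below is hypothetical, chosen only for illustration.
sd_between_subjects = 9.3                     # kg (hypothetical)
icc = 0.96
print(f"SEM = {sd_between_subjects * math.sqrt(1 - icc):.1f} kg")
```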
Niioka, Takenori; Uno, Tsukasa; Yasui-Furukori, Norio; Takahata, Takenori; Shimizu, Mikiko; Sugawara, Kazunobu; Tateishi, Tomonori
2007-04-01
The aim of this study was to determine the pharmacokinetics of low-dose nedaplatin combined with paclitaxel and radiation therapy in patients having non-small-cell lung carcinoma and establish the optimal dosage regimen for low-dose nedaplatin. We also evaluated predictive accuracy of reported formulas to estimate the area under the plasma concentration-time curve (AUC) of low-dose nedaplatin. A total of 19 patients were administered a constant intravenous infusion of 20 mg/m(2) body surface area (BSA) nedaplatin for an hour, and blood samples were collected at 1, 2, 3, 4, 6, 8, and 19 h after the administration. Plasma concentrations of unbound platinum were measured, and the actual value of platinum AUC (actual AUC) was calculated based on these data. The predicted value of platinum AUC (predicted AUC) was determined by three predictive methods reported in previous studies, consisting of Bayesian method, limited sampling strategies with plasma concentration at a single time point, and simple formula method (SFM) without measured plasma concentration. Three error indices, mean prediction error (ME, measure of bias), mean absolute error (MAE, measure of accuracy), and root mean squared prediction error (RMSE, measure of precision), were obtained from the difference between the actual and the predicted AUC, to compare the accuracy between the three predictive methods. The AUC showed more than threefold inter-patient variation, and there was a favorable correlation between nedaplatin clearance and creatinine clearance (Ccr) (r = 0.832, P < 0.01). In three error indices, MAE and RMSE showed significant difference between the three AUC predictive methods, and the method of SFM had the most favorable results, in which %ME, %MAE, and %RMSE were 5.5, 10.7, and 15.4, respectively. The dosage regimen of low-dose nedaplatin should be established based on Ccr rather than on BSA. Since prediction accuracy of SFM, which did not require measured plasma concentration, was most favorable among the three methods evaluated in this study, SFM could be the most practical method to predict AUC of low-dose nedaplatin in a clinical situation judging from its high accuracy in predicting AUC without measured plasma concentration.
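A minimal sketch of the three error indices used to rank the predictive methods, applied to illustrative (not study) AUC values:

```python
# Mean prediction error (bias), mean absolute error (accuracy) and root mean
# squared prediction error (precision), each expressed in percent of the actual
# AUC. The AUC values are illustrative only.
import numpy as np

actual_auc = np.array([2.1, 3.5, 1.8, 4.2, 2.9])       # measured platinum AUC
predicted_auc = np.array([2.3, 3.2, 1.9, 4.6, 2.7])    # e.g. from a predictive formula

pe = (predicted_auc - actual_auc) / actual_auc * 100    # per-patient % error
print(f"%ME   = {pe.mean():.1f}")                       # bias
print(f"%MAE  = {np.abs(pe).mean():.1f}")               # accuracy
print(f"%RMSE = {np.sqrt((pe ** 2).mean()):.1f}")       # precision
```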
The effect of urea on refractometric total protein measurement in dogs and cats with azotemia.
Legendre, Kelsey P; Leissinger, Mary; Le Donne, Viviana; Grasperge, Britton J; Gaunt, Stephen D
2017-03-01
While protein is the predominant solute measured in plasma or serum by a refractometer, nonprotein substances also contribute to the angle of refraction. There is debate in the current literature regarding which nonprotein substances cause factitiously high refractometric total protein measurements, as compared to the biuret assay. The purpose of the study was to determine if the blood of azotemic animals, specifically with increased blood urea concentration, will have significantly higher refractometric total protein concentrations compared to the total protein concentrations measured by biuret assay. A prospective case series was conducted by collecting data from azotemic (n = 26) and nonazotemic (n = 34) dogs and cats. In addition, an in vitro study was performed where urea was added to an enhanced electrolyte solution at increasing concentrations, and total protein was assessed by both the refractometer and spectrophotometer. Statistical analysis was performed to determine the effect of urea. The refractometric total protein measurement showed a positive bias when compared to the biuret protein measurement in both groups, but the bias was higher in the azotemic group vs the nonazotemic group. The mean difference in total protein measurements of the nonazotemic group (0.59 g/dL) was significantly less (P < .01) than the mean difference of the azotemic group (0.95 g/dL). The in vitro experiment revealed a positive bias with a proportional error. This study demonstrated that increasing concentrations of urea significantly increased the total protein concentration measured by the refractometer as compared to the biuret assay, both in vivo and in vitro. © 2017 American Society for Veterinary Clinical Pathology.
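The group comparison of bias can be illustrated as follows; the paired differences below are invented, not the study data:

```python
# Mean difference (refractometer minus biuret) per group and a two-sample test.
import numpy as np
from scipy import stats

diff_nonazotemic = np.array([0.5, 0.6, 0.7, 0.5, 0.6, 0.7, 0.5, 0.6])  # g/dL
diff_azotemic    = np.array([0.9, 1.0, 0.8, 1.1, 0.9, 1.0, 0.9, 1.0])  # g/dL

print(f"mean bias, nonazotemic: {diff_nonazotemic.mean():.2f} g/dL")
print(f"mean bias, azotemic:    {diff_azotemic.mean():.2f} g/dL")
t, p = stats.ttest_ind(diff_azotemic, diff_nonazotemic)
print(f"t = {t:.2f}, p = {p:.4f}")
```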
Comparison of keratometric values and corneal eccentricity.
Benes, Pavel; Synek, Svatopluk; Petrová, Sylvie
2013-04-01
The aim of this work is to compare keratometric values and their differences across various refractive errors. Corneal eccentricity is observed topographically to assess the possible influence of the refraction of the eye. Groups with myopia, hyperopia and emmetropia (as a control group) are represented, in total 600 eyes; the studied cohort comprised 300 enrolled clients. An autorefractokeratometer with a Placido disc was used to measure the steepest and flattest meridians and to determine the corneal eccentricity. Group I consisted of 100 myopes, 35 men and 65 women, average age 37.3 years. Objective refraction--sphere: -2.9 D, cylinder: -0.88 D. Keratometry in this group gave a steepest meridian of 7.62 mm and a flattest meridian of 7.76 mm. The eccentricity was 0.37. Group II consisted of 100 hyperopic subjects, 40 men and 60 women, average age 61.6 years. Objective refraction--sphere: +2.71 D, cylinder: -1.0 D. The keratometric measurements were as follows: the steepest meridian was 7.67 mm, the flattest meridian 7.81 mm. The value of the eccentricity was 0.37. Group III consisted of 100 emmetropic subjects, i.e. clients without refractive errors who achieve Vmin = 1.0 without corrective aids. This group was composed of 42 men and 58 women, mean age 41.4 years. Objective refraction--sphere: +0.32 D, cylinder: -0.28 D. The steepest meridian was 7.72 mm, the flattest meridian 7.83 mm. The eccentricity was 0.36. Keratometry as well as topography are fundamental methods of measuring the anterior corneal surface. Their values are essential for proper parameter selection, especially in the case of contact lenses as one of the possible means of correcting refractive errors.
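For readers used to dioptric notation, the radii quoted above convert to keratometric power with the conventional keratometric index (K = 337.5/r, r in mm); the sketch uses the Group I means:

```python
# Convert corneal radii of curvature to keratometric dioptric power and derive
# the corneal astigmatism, using the Group I (myopic) mean radii from the abstract.
steep_mm, flat_mm = 7.62, 7.76

k_steep = 337.5 / steep_mm        # conventional keratometric index 337.5
k_flat = 337.5 / flat_mm
print(f"K steep = {k_steep:.2f} D, K flat = {k_flat:.2f} D")
print(f"corneal astigmatism = {k_steep - k_flat:.2f} D")
```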
Dilution space ratio of 2H and 18O of doubly labeled water method in humans.
Sagayama, Hiroyuki; Yamada, Yosuke; Racine, Natalie M; Shriver, Timothy C; Schoeller, Dale A
2016-06-01
Variation of the dilution space ratio (Nd/No) between deuterium ((2)H) and oxygen-18 ((18)O) impacts the calculation of total energy expenditure (TEE) by doubly labeled water (DLW). Our aim was to examine the physiological and methodological sources of variation of Nd/No in humans. We analyzed data from 2,297 humans (0.25-89 yr old). This included the variables Nd/No, total body water, TEE, body mass index (BMI), and percent body fat (%fat). To differentiate between physiologic and methodologic sources of variation, the urine samples from 54 subjects were divided, blinded, and analyzed separately, and repeated DLW dosing was performed in an additional 55 participants after 6 mo. Sex, BMI, and %fat did not significantly affect Nd/No, for which the interindividual SD was 0.017. The measurement error from the duplicate urine sample sets was 0.010, and the intraindividual SD of Nd/No in repeat experiments was 0.013. An additional SD of 0.008 was contributed by calibration of the DLW dose water. The variation of measured Nd/No in humans was distributed within a small range, and measurement error accounted for 68% of this variation. There was no evidence that Nd/No differed with respect to sex, BMI, or age between 1 and 80 yr, and thus use of a constant value is suggested to minimize the effect of stable isotope analysis error on the calculation of TEE in DLW studies in humans. Based on a review of 103 publications, the average dilution space ratio is 1.036 for individuals between 1 and 80 yr of age. Copyright © 2016 the American Physiological Society.
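Independent error components of this kind combine in quadrature; the abstract does not state exactly how the 68% figure was derived, so the grouping below is only illustrative:

```python
# Combine quoted SD components of Nd/No in quadrature and express the result as a
# fraction of the observed interindividual variance. Which components count as
# "measurement error" here is an illustrative choice, not the authors' calculation.
import math

sd_analysis = 0.010      # duplicate-analysis SD (isotope analysis error)
sd_dose_cal = 0.008      # DLW dose-water calibration SD
sd_interind = 0.017      # interindividual SD of Nd/No

sd_measurement = math.sqrt(sd_analysis**2 + sd_dose_cal**2)
frac = sd_measurement**2 / sd_interind**2
print(f"combined measurement SD ≈ {sd_measurement:.3f}")
print(f"fraction of observed variance ≈ {frac:.0%}")
```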
Forkey, Joseph N.; Quinlan, Margot E.; Goldman, Yale E.
2005-01-01
A new approach is presented for measuring the three-dimensional orientation of individual macromolecules using single molecule fluorescence polarization (SMFP) microscopy. The technique uses the unique polarizations of evanescent waves generated by total internal reflection to excite the dipole moment of individual fluorophores. To evaluate the new SMFP technique, single molecule orientation measurements from sparsely labeled F-actin are compared to ensemble-averaged orientation data from similarly prepared densely labeled F-actin. Standard deviations of the SMFP measurements taken at 40 ms time intervals indicate that the uncertainty for individual measurements of axial and azimuthal angles is ∼10° at 40 ms time resolution. Comparison with ensemble data shows there are no substantial systematic errors associated with the single molecule measurements. In addition to evaluating the technique, the data also provide a new measurement of the torsional rigidity of F-actin. These measurements support the smaller of two values of the torsional rigidity of F-actin previously reported. PMID:15894632
Oh-Oka, Hitoshi; Nose, Ryuichiro
2005-09-01
Using a portable three-dimensional ultrasound scanning device (the BladderScan BVI6100, Diagnostic Ultrasound Corporation), we examined measured values of bladder volume, especially focusing on volumes lower than 100 ml. A total of 100 patients (male: 66, female: 34) were enrolled in the study. We made a comparison between the measured value (the average of three measurements of bladder urine volume after a trial in male and female modes) using the BVI6100 and the actual measured value of the sample obtained by urethral catheterization in each patient. We examined the factors which could increase the error rate. We also introduced effective techniques to reduce measurement errors. The actual measured values in all patients correlated well with the average value of three measurements after a trial in the male mode of the BVI6100. The correlation coefficient was 0.887, the error rate was -4.6 +/- 24.5%, and the average coefficient of variation was 15.2. It was observed that the measurement result using the BVI6100 is influenced by patient-side factors (extracted edges between bladder wall and urine, thickened bladder wall, irregular bladder wall, flattened rate of bladder, mistaking prostate for bladder in males, mistaking bladder for uterus in the female mode, etc.) or examiner-side factors (angle between the BVI and the abdominal wall, compatibility between the abdominal wall and the ultrasound probe, controlling deflection while using the probe, etc.). When appropriate patients are chosen and proper measurement is performed, the BVI6100 provides significantly higher accuracy in determining bladder volume compared with existing abdominal ultrasound methods. The BVI6100 is a convenient and extremely effective device also for the measurement of bladder urine over 100 ml.
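A sketch of the per-patient comparison: the device reading is the mean of three scans, the error rate is taken relative to the catheterized volume, and the coefficient of variation reflects scan-to-scan spread (illustrative numbers):

```python
# Error rate and coefficient of variation for one patient; values are illustrative.
import numpy as np

scans_ml = np.array([82.0, 74.0, 90.0])   # three ultrasound scanner measurements
actual_ml = 88.0                           # volume obtained by urethral catheterization

mean_scan = scans_ml.mean()
error_rate = (mean_scan - actual_ml) / actual_ml * 100
cv = scans_ml.std(ddof=1) / mean_scan * 100
print(f"mean scan = {mean_scan:.0f} ml, error rate = {error_rate:.1f}%, CV = {cv:.1f}%")
```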
Measuring in-use ship emissions with international and U.S. federal methods.
Khan, M Yusuf; Ranganathan, Sindhuja; Agrawal, Harshit; Welch, William A; Laroo, Christopher; Miller, J Wayne; Cocker, David R
2013-03-01
Regulatory agencies have shifted their emphasis from measuring emissions during certification cycles to measuring emissions during actual use. Emission measurements in this research were made from two different large ships at sea to compare the Simplified Measurement Method (SMM), compliant with the International Maritime Organization (IMO) NOx Technical Code, with a Portable Emission Measurement System (PEMS) compliant with the U.S. Environmental Protection Agency (EPA) 40 Code of Federal Regulations (CFR) Part 1065 for on-road emission testing. Emissions of nitrogen oxides (NOx), carbon dioxide (CO2), and carbon monoxide (CO) were measured at load points specified by the International Organization for Standardization (ISO) to compare the two measurement methods. The average percentage errors calculated for the PEMS measurements were 6.5%, 0.6%, and 357% for NOx, CO2, and CO, respectively. The NOx percentage error of 6.5% corresponds to a 0.22 to 1.11 g/kW-hr error in moving from the Tier III (3.4 g/kW-hr) to the Tier I (17.0 g/kW-hr) emission limits. Emission factors (EFs) of NOx and CO2 measured via SMM were comparable to other studies and regulatory agencies' estimates. However, the EF(PM2.5) for this study was up to 26% higher than that currently used by regulatory agencies. The PM2.5 was composed predominantly of hydrated sulfate (70-95%), followed by organic carbon (11-14%), ash (6-11%), and elemental carbon (0.4-0.8%). This research provides a direct comparison between the International Maritime Organization and U.S. Environmental Protection Agency reference methods for quantifying in-use emissions from ships. It provides correlations for NOx, CO2, and CO measured by a PEMS unit (certified by the U.S. EPA for on-road testing) against IMO's Simplified Measurement Method for on-board certification. It substantiates the measurements of NOx by PEMS and quantifies measurement error. This study also provides in-use modal and overall weighted emission factors of gaseous (NOx, CO, CO2, total hydrocarbons [THC], and SO2) and particulate pollutants from the main engine of a container ship, which are helpful in the development of emission inventories.
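The conversion of the 6.5% NOx measurement error into absolute terms at the two IMO limits is simple proportional arithmetic:

```python
# Express the 6.5% NOx measurement error in g/kW-hr at the IMO Tier limits.
tier_limits_g_per_kwh = {"Tier III": 3.4, "Tier I": 17.0}
pct_error = 6.5 / 100

for tier, limit in tier_limits_g_per_kwh.items():
    print(f"{tier}: ±{pct_error * limit:.2f} g/kW-hr at a {limit} g/kW-hr limit")
```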
Tan, Aimin; Saffaj, Taoufiq; Musuku, Adrien; Awaiye, Kayode; Ihssane, Bouchaib; Jhilal, Fayçal; Sosse, Saad Alaoui; Trabelsi, Fethi
2015-03-01
The current approach in regulated LC-MS bioanalysis, which evaluates the precision and trueness of an assay separately, has long been criticized for inadequate balancing of lab-customer risks. Accordingly, different total error approaches have been proposed. The aims of this research were to evaluate the aforementioned risks in reality and the difference among four common total error approaches (β-expectation, β-content, uncertainty, and risk profile) through retrospective analysis of regulated LC-MS projects. Twenty-eight projects (14 validations and 14 productions) were randomly selected from two GLP bioanalytical laboratories, which represent a wide variety of assays. The results show that the risk of accepting unacceptable batches did exist with the current approach (9% and 4% of the evaluated QC levels failed for validation and production, respectively). The fact that the risk was not wide-spread was only because the precision and bias of modern LC-MS assays are usually much better than the minimum regulatory requirements. Despite minor differences in magnitude, very similar accuracy profiles and/or conclusions were obtained from the four different total error approaches. High correlation was even observed in the width of bias intervals. For example, the mean width of SFSTP's β-expectation is 1.10-fold (CV=7.6%) of that of Saffaj-Ihssane's uncertainty approach, while the latter is 1.13-fold (CV=6.0%) of that of Hoffman-Kringle's β-content approach. To conclude, the risk of accepting unacceptable batches was real with the current approach, suggesting that total error approaches should be used instead. Moreover, any of the four total error approaches may be used because of their overall similarity. Lastly, the difficulties/obstacles associated with the application of total error approaches in routine analysis and their desirable future improvements are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
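For orientation, a heavily simplified single-level β-expectation tolerance interval (equivalent to a prediction interval for one future result under normality) can be sketched as below; the SFSTP accuracy profile proper uses variance components across runs, and the QC recoveries here are invented:

```python
# Simplified single-level beta-expectation tolerance interval for QC recoveries,
# compared against acceptance limits. Data and limits are illustrative only.
import numpy as np
from scipy import stats

recoveries_pct = np.array([98.2, 101.5, 99.8, 97.6, 102.3, 100.4, 99.1, 101.0])
beta = 0.95                                  # expected coverage
accept_limits = (85.0, 115.0)                # typical bioanalysis acceptance limits

n = len(recoveries_pct)
mean, sd = recoveries_pct.mean(), recoveries_pct.std(ddof=1)
k = stats.t.ppf((1 + beta) / 2, df=n - 1) * np.sqrt(1 + 1 / n)
low, high = mean - k * sd, mean + k * sd

print(f"beta-expectation interval: [{low:.1f}%, {high:.1f}%]")
print("accept" if accept_limits[0] <= low and high <= accept_limits[1] else "reject")
```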
From the lab to the real world: sources of error in UF6 gas enrichment monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lombardi, Marcie L.
2012-03-01
Safeguarding uranium enrichment facilities is a serious concern for the International Atomic Energy Agency (IAEA). Safeguards methods have changed over the years, most recently switching to an improved safeguards model that calls for new technologies to help keep up with the increasing size and complexity of today’s gas centrifuge enrichment plants (GCEPs). One of the primary goals of the IAEA is to detect the production of uranium at levels greater than those an enrichment facility may have declared. In order to accomplish this goal, new enrichment monitors need to be as accurate as possible. This dissertation will look at the Advanced Enrichment Monitor (AEM), a new enrichment monitor designed at Los Alamos National Laboratory. Specifically explored are various factors that could potentially contribute to errors in a final enrichment determination delivered by the AEM. There are many factors that can cause errors in the determination of uranium hexafluoride (UF6) gas enrichment, especially during the period when the enrichment is being measured in an operating GCEP. To measure enrichment using the AEM, a passive 186-keV (kiloelectronvolt) measurement is used to determine the 235U content in the gas, and a transmission measurement or a gas pressure reading is used to determine the total uranium content. A transmission spectrum is generated using an x-ray tube and a “notch” filter. In this dissertation, changes that could occur in the detection efficiency and the transmission errors that could result from variations in pipe-wall thickness will be explored. Additional factors that could contribute to errors in the enrichment measurement will also be examined, including changes in the gas pressure, ambient and UF6 temperature, instrumental errors, and the effects of uranium deposits on the inside of the pipe walls. The sensitivity of the enrichment calculation to these various parameters will then be evaluated. Previously, UF6 gas enrichment monitors have required empty-pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty-pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months.
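The logic of the transmission part of the measurement can be sketched with Beer-Lambert attenuation through the two pipe walls and the gas; all coefficients and dimensions below are hypothetical placeholders, not AEM values:

```python
# Beer-Lambert sketch of the transmission measurement: the source beam is
# attenuated by two pipe walls and by the UF6 gas, so knowing the wall term lets
# the gas (total uranium) term be separated. Values are hypothetical placeholders.
import math

mu_wall = 1.2       # 1/cm, effective wall attenuation coefficient at the x-ray energy
t_wall = 0.3        # cm, single wall thickness
mu_gas = 0.002      # 1/cm, effective attenuation of the UF6 gas
d_pipe = 10.0       # cm, inner diameter (gas path length)

transmission = math.exp(-2 * mu_wall * t_wall) * math.exp(-mu_gas * d_pipe)
print(f"expected transmitted fraction = {transmission:.3f}")

# Conversely, an empty-pipe (gas-free) measurement allows the wall thickness to be
# inferred from the measured transmission: t = -ln(T_empty) / (2 * mu_wall)
t_empty = math.exp(-2 * mu_wall * t_wall)
print(f"inferred wall thickness = {-math.log(t_empty) / (2 * mu_wall):.2f} cm")
```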
Total Dose Effects on Error Rates in Linear Bipolar Systems
NASA Technical Reports Server (NTRS)
Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent
2007-01-01
The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.
Yang, Jie; Liu, Qingquan; Dai, Wei
2017-02-01
To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
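The paper fits its correction equation to the CFD-derived errors with a genetic algorithm; the least-squares fit below is only a simplified stand-in showing how such a correction is constructed and applied, with a hypothetical functional form and data:

```python
# Fit a temperature-error correction equation to CFD-derived errors and apply it
# to a sensor reading. Functional form, data and values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def error_model(X, a, b, c):
    solar, wind = X
    # hypothetical form: radiation error grows with solar load, decays with wind
    return a * solar / (1.0 + b * wind) + c

solar_wm2 = np.array([200, 400, 600, 800, 1000, 600, 800], dtype=float)
wind_ms = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 0.5, 4.0])
cfd_error_c = np.array([0.15, 0.30, 0.33, 0.44, 0.45, 0.52, 0.33])   # CFD results, °C

popt, _ = curve_fit(error_model, (solar_wm2, wind_ms), cfd_error_c,
                    p0=(0.001, 0.5, 0.0))

measured_t = 25.40                                   # sensor reading, °C
corrected_t = measured_t - error_model((800.0, 2.0), *popt)
print(f"corrected temperature = {corrected_t:.2f} °C")
```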
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
Ballantyne, A. P.; Andres, R.; Houghton, R.; ...
2015-04-30
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their contribution to global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
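The way independent error terms propagate into the net-uptake uncertainty is a quadrature sum; the sketch uses round numbers of the order quoted above for the 2000s, not the study's exact budget:

```python
# Net uptake by land and ocean = total emissions - atmospheric growth rate, with
# uncertainty combined in quadrature. Central values and the land-use error are
# illustrative round numbers, not the study's budget.
import math

fossil_fuel = 7.9;  sigma_ff = 1.0 / 2      # Pg C/yr (2-sigma of ~1.0 converted to 1-sigma)
land_use = 1.1;     sigma_lu = 0.7 / 2      # land-use emissions (illustrative error)
growth_rate = 4.1;  sigma_gr = 0.3 / 2      # atmospheric growth rate

net_uptake = fossil_fuel + land_use - growth_rate
sigma_net = math.sqrt(sigma_ff**2 + sigma_lu**2 + sigma_gr**2)
print(f"net C uptake = {net_uptake:.1f} ± {2 * sigma_net:.1f} Pg C/yr (2-sigma)")
```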
Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H
2015-01-01
Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment, although it is limited by the error introduced during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and an MIS surgical approach. We hypothesized that performing the registration via an MIS approach would increase the registration process error. Five fresh-frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest. This rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry a higher risk of component malalignment due to registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.
Interlaboratory comparison of red-cell ATP, 2,3-diphosphoglycerate and haemolysis measurements.
Hess, J R; Kagen, L R; van der Meer, P F; Simon, T; Cardigan, R; Greenwalt, T J; AuBuchon, J P; Brand, A; Lockwood, W; Zanella, A; Adamson, J; Snyder, E; Taylor, H L; Moroff, G; Hogman, C
2005-07-01
Red blood cell (RBC) storage systems are licensed based on their ability to prevent haemolysis and maintain RBC 24-h in vivo recovery. Preclinical testing includes measurement of RBC ATP as a surrogate for recovery, 2,3-diphosphoglycerate (DPG) as a surrogate for oxygen affinity, and free haemoglobin, which is indicative of red cell lysis. The reproducibility of RBC ATP, DPG and haemolysis measurements between centres was investigated. Five, 4-day-old leucoreduced AS-1 RBC units were pooled, aliquotted and shipped on ice to 14 laboratories in the USA and European Union (EU). Each laboratory was to sample the bag twice on day 7 and measure RBC ATP, DPG, haemoglobin and haemolysis levels in triplicate on each sample. The variability of results was assessed by using coefficients of variation (CV) and analysis of variance. Measurements were highly reproducible at the individual sites. Between sites, the CV was 16% for ATP, 35% for DPG, 2% for total haemoglobin and 54% for haemolysis. For ATP and total haemoglobin, 94 and 80% of the variance in measurements was contributed by differences between sites, and more than 80% of the variance for DPG and haemolysis measurements came from markedly discordant results from three sites and one site, respectively. In descending order, mathematical errors, unvalidated analytical methods, a lack of shared standards and fluid handling errors contributed to the variability in measurements from different sites. While the methods used by laboratories engaged in RBC storage system clinical trials demonstrated good precision, differences in results between laboratories may hinder comparative analysis. Efforts to improve performance should focus on developing robust methods, especially for measuring RBC ATP.
NASA Technical Reports Server (NTRS)
Sun, Jielun
1993-01-01
Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.
Validation of Calculations in a Digital Thermometer Firmware
NASA Astrophysics Data System (ADS)
Batagelj, V.; Miklavec, A.; Bojkovski, J.
2014-04-01
State-of-the-art digital thermometers are arguably remarkable measurement instruments, measuring outputs from resistance thermometers and/or thermocouples. Not only can they readily achieve measuring accuracies in the parts-per-million range, but they also incorporate sophisticated algorithms for transforming the measured resistance or voltage to temperature. These algorithms often include high-order polynomials, exponentials and logarithms, and must be performed using both standard coefficients and particular calibration coefficients. The numerical accuracy of these calculations and the associated uncertainty component must be much better than the accuracy of the raw measurement in order to be negligible in the total measurement uncertainty. In order for the end-user to gain confidence in these calculations as well as to conform to the formal requirements of ISO/IEC 17025 and other standards, a way of validating the numerical procedures performed in the firmware of the instrument is required. A software architecture which allows simple validation of internal measuring instrument calculations is suggested. The digital thermometer should be able to expose all its internal calculation functions to the communication interface, so the end-user can compare the results of the internal measuring instrument calculation with reference results. The method can be regarded as a variation of black-box software validation. Validation results on a thermometer prototype with implemented validation ability show that the calculation error of basic arithmetic operations is within the expected rounding error. For conversion functions, the calculation error is at least ten times smaller than the thermometer's effective resolution for the particular probe type.
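One concrete instance of such black-box validation is to compare the firmware's resistance-to-temperature conversion for a Pt100 probe against an independent reference computation of the Callendar-Van Dusen equation (IEC 60751 coefficients, t ≥ 0 °C); the instrument read-back below is a placeholder:

```python
# Compare a thermometer's internal Pt100 conversion with a reference computation
# of the Callendar-Van Dusen equation (standard IEC 60751 coefficients, t >= 0 °C).
# The firmware value would normally be read back over the communication interface.
R0 = 100.0                      # ohm at 0 °C
A, B = 3.9083e-3, -5.775e-7     # standard Pt100 coefficients

def reference_resistance(t_c: float) -> float:
    """Resistance of a standard Pt100 at temperature t_c (°C), t_c >= 0."""
    return R0 * (1 + A * t_c + B * t_c ** 2)

def reference_temperature(r_ohm: float) -> float:
    """Invert the quadratic to get temperature from resistance (t >= 0 °C)."""
    return (-A + (A ** 2 - 4 * B * (1 - r_ohm / R0)) ** 0.5) / (2 * B)

r = reference_resistance(100.0)            # ~138.51 ohm at 100 °C
t_ref = reference_temperature(r)
# t_firmware = instrument.convert(r)       # hypothetical read-back from the instrument
t_firmware = 100.0001                      # placeholder for the instrument's answer
print(f"reference = {t_ref:.4f} °C, firmware = {t_firmware:.4f} °C, "
      f"difference = {abs(t_firmware - t_ref) * 1000:.2f} mK")
```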
NASA Astrophysics Data System (ADS)
Duan, Y.; Wilson, A. M.; Barros, A. P.
2014-10-01
A diagnostic analysis of the space-time structure of error in Quantitative Precipitation Estimates (QPE) from the Precipitation Radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the Southern Appalachian Mountains, USA since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 V7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA, and missed detection, MD) and magnitude errors (underestimation, UND, and overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the Southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter), and especially in the inner region. Although UND dominates the magnitude error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total consistent with regional hydrometeorology. The 2A25 V7 product underestimates low level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the terrain topography mask used to remove ground clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to under-catch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground clutter correction.
NASA Astrophysics Data System (ADS)
Duan, Y.; Wilson, A. M.; Barros, A. P.
2015-03-01
A diagnostic analysis of the space-time structure of error in quantitative precipitation estimates (QPEs) from the precipitation radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the southern Appalachian Mountains, USA, since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 Version 7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA; missed detection, MD) and magnitude errors (underestimation, UND; overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter) and especially in the inner region. Although UND dominates the error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the topography mask used to remove ground-clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to undercatch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and a local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground-clutter correction.
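The detection/magnitude error taxonomy used in both versions of this analysis can be sketched as a simple per-pixel classification of coincident satellite and gauge rain rates (illustrative values and threshold):

```python
# Classify coincident satellite/gauge pairs as false alarm (FA), missed detection
# (MD), underestimation (UND) or overestimation (OVR). Rain rates and the
# detection threshold below are illustrative, not TRMM PR or IPHEx data.
import numpy as np

gauge_mmhr = np.array([0.0, 1.2, 0.0, 5.0, 0.4, 12.0])
pr_mmhr    = np.array([0.6, 0.0, 0.0, 3.1, 0.9, 15.0])
thresh = 0.2                          # hypothetical detection threshold

def classify(g, p):
    if g < thresh and p >= thresh:
        return "FA"
    if g >= thresh and p < thresh:
        return "MD"
    if g < thresh and p < thresh:
        return "no rain"
    return "UND" if p < g else "OVR"

for g, p in zip(gauge_mmhr, pr_mmhr):
    print(f"gauge={g:5.1f}  PR={p:5.1f}  ->  {classify(g, p)}")
```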
Estimating patient-specific soft-tissue properties in a TKA knee.
Ewing, Joseph A; Kaufman, Michelle K; Hutter, Erin E; Granger, Jeffrey F; Beal, Matthew D; Piazza, Stephen J; Siston, Robert A
2016-03-01
Surgical technique is one factor that has been identified as critical to success of total knee arthroplasty. Researchers have shown that computer simulations can aid in determining how decisions in the operating room generally affect post-operative outcomes. However, to use simulations to make clinically relevant predictions about knee forces and motions for a specific total knee patient, patient-specific models are needed. This study introduces a methodology for estimating knee soft-tissue properties of an individual total knee patient. A custom surgical navigation system and stability device were used to measure the force-displacement relationship of the knee. Soft-tissue properties were estimated using a parameter optimization that matched simulated tibiofemoral kinematics with experimental tibiofemoral kinematics. Simulations using optimized ligament properties had an average root mean square error of 3.5° across all tests while simulations using generic ligament properties taken from literature had an average root mean square error of 8.4°. Specimens showed large variability among ligament properties regardless of similarities in prosthetic component alignment and measured knee laxity. These results demonstrate the importance of soft-tissue properties in determining knee stability, and suggest that to make clinically relevant predictions of post-operative knee motions and forces using computer simulations, patient-specific soft-tissue properties are needed. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
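The parameter-estimation idea can be reduced to a toy example: pick the ligament parameter (here, a single net stiffness) that minimizes the root mean square error between simulated and measured laxity kinematics; the knee "model" below is a deliberately trivial stand-in for the actual simulation:

```python
# Toy version of the optimization: tune one soft-tissue parameter so simulated
# kinematics match an experimental laxity curve, scored by RMSE in degrees.
import numpy as np
from scipy.optimize import minimize_scalar

applied_torque_nm = np.linspace(0.0, 10.0, 11)
experimental_rotation_deg = applied_torque_nm / 1.8          # measured laxity curve

def simulate_rotation(stiffness_nm_per_deg):
    # toy model: rotation proportional to torque through a net ligament stiffness
    return applied_torque_nm / stiffness_nm_per_deg

def rmse(stiffness):
    err = simulate_rotation(stiffness) - experimental_rotation_deg
    return np.sqrt(np.mean(err ** 2))

result = minimize_scalar(rmse, bounds=(0.5, 10.0), method="bounded")
print(f"optimised stiffness = {result.x:.2f} N·m/deg, RMSE = {result.fun:.3f}°")
```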
Rumpf, R Wolfgang; Stewart, William C L; Martinez, Stephen K; Gerrard, Chandra Y; Adolphi, Natalie L; Thakkar, Rajan; Coleman, Alan; Rajab, Adrian; Ray, William C; Fabia, Renata
2018-01-01
Treating burns effectively requires accurately assessing the percentage of the total body surface area (%TBSA) affected by burns. Current methods for estimating %TBSA, such as Lund and Browder (L&B) tables, rely on historic body statistics. An increasingly obese population has been blamed for increasing errors in %TBSA estimates. However, this assumption has not been experimentally validated. We hypothesized that errors in %TBSA estimates using L&B were due to differences in the physical proportions of today's children compared with children in the early 1940s when the chart was developed and that these differences would appear as body mass index (BMI)-associated systematic errors in the L&B values versus actual body surface areas. We measured the TBSA of human pediatric cadavers using computed tomography scans. Subjects ranged from 9 mo to 15 y in age. We chose outliers of the BMI distribution (from the 31st percentile at the low through the 99th percentile at the high). We examined surface area proportions corresponding to L&B regions. Measured regional proportions based on computed tomography scans were in reasonable agreement with L&B, even with subjects in the tails of the BMI range. The largest deviation was 3.4%, significantly less than the error seen in real-world %TBSA estimates. While today's population is more obese than those studied by L&B, their body region proportions scale surprisingly well. The primary error in %TBSA estimation is not due to changing physical proportions of today's children and may instead lie in the application of the L&B table. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Houston, Lauren; Probst, Yasmine; Martin, Allison
2018-05-18
Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific in regards to recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods to monitor data quality in a clinical research setting. The scientific databases MEDLINE, Scopus and Science Direct were searched for English language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using a SDV auditing method. In total 15 publications were included. The nature and extent of SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main source of reported error. Repeated SDV audits using the same dataset demonstrated ∼40% improvement in data accuracy and completeness over time. No description was given in regards to what determines poor data quality in clinical trials. A wide range of SDV auditing methods are reported in the published literature though no uniform SDV auditing method could be determined for "best practice" in clinical trials. Published audit methodology articles are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings. Copyright © 2018. Published by Elsevier Inc.
Practical uncertainty reduction and quantification in shock physics measurements
Akin, M. C.; Nguyen, J. H.
2015-04-20
We report the development of a simple error analysis sampling method for identifying intersections and inflection points to reduce total uncertainty in experimental data. This technique was used to reduce uncertainties in sound speed measurements by 80% over conventional methods. Here, we focused on its impact on a previously published set of Mo sound speed data and possible implications for phase transition and geophysical studies. However, this technique's application can be extended to a wide range of experimental data.
Kofman, Rianne; Beekman, Anna M; Emmelot, Cornelis H; Geertzen, Jan H B; Dijkstra, Pieter U
2018-06-01
Non-contact scanners may have potential for measurement of residual limb volume. Different non-contact scanners have been introduced during the last decades. Reliability and usability (practicality and user friendliness) should be assessed before introducing these systems in clinical practice. The aim of this study was to analyze the measurement properties and usability of four non-contact scanners (TT Design, Omega Scanner, BioSculptor Bioscanner, and Rodin4D Scanner). Quasi-experimental design. Nine (geometric and residual limb) models were measured on two occasions, each consisting of two sessions, thus four sessions in total. In each session, four observers used the four systems for volume measurement. The mean for each model, repeatability coefficients for each system, variance components of the measurement conditions, and their two-way interactions were calculated. User satisfaction was evaluated with the Post-Study System Usability Questionnaire. Systematic differences between the systems were found in the volume measurements. Most of the variance was explained by the model (97%), while error variance was 3%. The measurement system and the interaction between system and model explained 44% of the error variance. The repeatability coefficient of the systems ranged from 0.101 L (Omega Scanner) to 0.131 L (Rodin4D). Differences in Post-Study System Usability Questionnaire scores between the systems were small and not significant. The systems were reliable in determining residual limb volume. Measurement systems and the interaction between system and residual limb model explained most of the error variance. The differences in repeatability coefficient and usability between the four CAD/CAM systems were small. Clinical relevance: If accurate measurements of residual limb volume are required (as in research), modern non-contact scanners should be taken into consideration.
An educational and audit tool to reduce prescribing error in intensive care.
Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D
2008-10-01
To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.
Test-Retest Analyses of the Test of English as a Foreign Language. TOEFL Research Reports Report 45.
ERIC Educational Resources Information Center
Henning, Grant
This study provides information about the total and component scores of the Test of English as a Foreign Language (TOEFL). First, the study provides comparative global and component estimates of test-retest, alternate-form, and internal-consistency reliability, controlling for sources of measurement error inherent in the examinees and the testing…
Integrating LIDAR and forest inventories to fill the trees outside forests data gap
Kristofer D. Johnson; Richard Birdsey; Jason Cole; Anu Swatantran; Jarlath O' Neil-Dunne; Ralph Dubayah; Andrew Lister
2015-01-01
Forest inventories are commonly used to estimate total tree biomass of forest land even though they are not traditionally designed to measure biomass of trees outside forests (TOF). The consequence may be an inaccurate representation of all of the aboveground biomass, which propagates error to the outputs of spatial and process models that rely on the inventory data....
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon
2013-01-01
We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chain (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum, and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors, and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.
NASA Astrophysics Data System (ADS)
Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu
2018-05-01
Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
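A much-simplified illustration of volumetric error modelling: each axis error motion is written as a homogeneous transform and composed to give the tool-point error relative to the worktable; the paper builds this with screw theory for all PIGEs and PDGEs, whereas the sketch keeps only a linear positioning error and a pitch error of one axis, with made-up values:

```python
# Simplified homogeneous-transform illustration of volumetric error: a commanded
# X move, an X-axis positioning error plus a small pitch error, and a tool offset.
# Values are made up; the paper's screw-theory model includes all error terms.
import numpy as np

def htm(dx=0.0, dy=0.0, dz=0.0, eps_y=0.0):
    """Homogeneous transform with a translation and a small rotation about Y."""
    T = np.eye(4)
    T[0, 3], T[1, 3], T[2, 3] = dx, dy, dz
    T[0, 2], T[2, 0] = eps_y, -eps_y        # small-angle approximation
    return T

nominal_x_move = htm(dx=500.0)                              # commanded X move, mm
x_axis_errors = htm(dx=0.008, eps_y=30e-6)                  # 8 µm positioning, 30 µrad pitch
tool_offset = np.array([0.0, 0.0, 200.0, 1.0])              # probe 200 mm above carriage

actual = nominal_x_move @ x_axis_errors @ tool_offset
nominal = nominal_x_move @ tool_offset
print("volumetric error at the tool point (mm):", (actual - nominal)[:3])
```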
Stratospheric N2O5, CH4, and N2O profiles from IR solar occultation spectra
NASA Technical Reports Server (NTRS)
Camy-Peyret, C.; Flaud, J.-M.; Perrin, A.; Rinsland, C. P.; Goldman, A.; Murcray, F. J.
1993-01-01
Stratospheric volume mixing ratio profiles of N2O5, CH4, and N2O have been retrieved from a set of 0.052 cm^-1 resolution (FWHM) solar occultation spectra recorded at sunrise during a balloon flight from Aire sur l'Adour, France (44 N latitude) on 12 October 1990. The N2O5 results have been derived from measurements of the integrated absorption by the 1246 cm^-1 band. Assuming a total band intensity of 4.32 x 10^-17 cm^-1/(molecule/sq cm), independent of temperature, the retrieved N2O5 volume mixing ratios in ppbv, interpolated to 2 km height spacings, are 1.64 +/- 0.49 at 37.5 km, 1.92 +/- 0.56 at 35.5 km, 2.06 +/- 0.47 at 33.5 km, 1.95 +/- 0.42 at 31.5 km, 1.60 +/- 0.33 at 29.5 km, 1.26 +/- 0.28 at 27.5 km, and 0.85 +/- 0.20 at 25.5 km. Error bars indicate the estimated 1-sigma uncertainty including the error in the total band intensity. The retrieved profiles are compared with previous measurements and photochemical model results.
Nurses' attitude and intention of medication administration error reporting.
Hung, Chang-Chiao; Chu, Tsui-Ping; Lee, Bih-O; Hsiao, Chia-Chi
2016-02-01
The aims of this study were to explore the effects of nurses' attitudes and intentions regarding medication administration error reporting on actual reporting behaviours. Underreporting of medication errors is still a common occurrence. Whether attitudes and intentions towards medication administration error reporting translate into actual reporting behaviours remains unclear. This study used a cross-sectional design with self-administered questionnaires, with the theory of planned behaviour as its framework. A total of 596 staff nurses who worked in general wards and intensive care units in a hospital were invited to participate in this study. The researchers used instruments measuring nurses' attitude, nurse managers' and co-workers' attitude, report control, and nurses' intention to predict nurses' actual reporting behaviours. Data were collected from September-November 2013. Path analyses were used to examine the hypothesized model. Of the 596 nurses invited to participate, 548 (92%) completed and returned a valid questionnaire. The findings indicated that nurse managers' and co-workers' attitudes are predictors of nurses' attitudes towards medication administration error reporting. Nurses' attitudes also influenced their intention to report medication administration errors; however, no connection was found between intention and actual reporting behaviour. The findings reflected links among colleague perspectives, nurses' attitudes, and intention to report medication administration errors. The researchers suggest that hospitals should increase nurses' awareness and recognition of error occurrence. Regardless of nurse managers' and co-workers' attitudes towards medication administration error reporting, nurses are likely to report medication administration errors if they detect them. Management of medication administration errors should focus on increasing nurses' awareness and recognition of error occurrence. © 2015 John Wiley & Sons Ltd.
Relationship between impulsivity and decision-making in cocaine dependence
Kjome, Kimberly L.; Lane, Scott D.; Schmitz, Joy M.; Green, Charles; Ma, Liangsuo; Prasla, Irshad; Swann, Alan C.; Moeller, F. Gerard
2010-01-01
Impulsivity and decision-making are associated on a theoretical level in that impaired planning is a component of both. However, few studies have examined the relationship between measures of decision-making and impulsivity in clinical populations. The purpose of this study was to compare cocaine-dependent subjects to controls on a measure of decision-making (the Iowa Gambling Task or IGT), a questionnaire measure of impulsivity (the Barratt Impulsiveness Scale or BIS-11), and a measure of behavioral inhibition (the immediate memory task or IMT), and to examine the interrelationship among these measures. Results of the study showed that cocaine-dependent subjects made more disadvantageous choices on the IGT, had higher scores on the BIS, and more commission errors on the IMT. Cognitive model analysis showed that choice consistency factors on the IGT differed between cocaine-dependent subjects and controls. However, there was no significant correlation between IGT performance and the BIS total score or subscales or IMT commission errors. These results suggest that in cocaine dependent subjects there is little overlap between decision-making as measured by the IGT and impulsivity/behavioral inhibition as measured by the BIS and IMT. PMID:20478631
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we use a simulation sensitivity study to examine the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are postulated, highlighting the need for improved metrology and awareness.
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Shtivelman, A.; Krichak, S. O.; Joseph, J. H.; Kallos, G.; Katsafados, P.; Spyrou, C.; Gobbi, G. P.; Barnaba, F.; Nickovic, S.; PéRez, C.; Baldasano, J. M.
2007-08-01
In this study, forecast errors in dust vertical distributions were analyzed. This was carried out by using quantitative comparisons between dust vertical profiles retrieved from lidar measurements over Rome, Italy, performed from 2001 to 2003, and those predicted by models. Three models were used: the four-particle-size Dust Regional Atmospheric Model (DREAM), the older one-particle-size version of the SKIRON model from the University of Athens (UOA), and the pre-2006 one-particle-size Tel Aviv University (TAU) model. SKIRON and DREAM are initialized on a daily basis using the dust concentration from the previous forecast cycle, while the TAU model initialization is based on the Total Ozone Mapping Spectrometer aerosol index (TOMS AI). The quantitative comparison shows that (1) the use of four-particle-size bins in the dust modeling instead of only one-particle-size bins improves dust forecasts; (2) cloud presence could contribute to noticeable dust forecast errors in SKIRON and DREAM; and (3) as far as the TAU model is concerned, its forecast errors were mainly caused by technical problems with TOMS measurements from the Earth Probe satellite. As a result, dust forecast errors in the TAU model could be significant even under cloudless conditions. The DREAM versus lidar quantitative comparisons at different altitudes show that the model predictions are more accurate in the middle part of dust layers than in the top and bottom parts of dust layers.
Preventable visual impairment in children with nonprofound intellectual disability.
Aslan, Lokman; Aslankurt, Murat; Aksoy, Adnan; Altun, Hatice
2013-01-01
To assess the preventable visual impairment in children with nonprofound intellectual disability (ID). A total of 215 children with IDs (90 Down syndrome [DS], 125 nonprofound ID) and 116 age- and sex-matched healthy subjects were enrolled in this study. All participants underwent ophthalmologic examinations including cycloplegic refraction measurements, ocular movement evaluation, screening for strabismus (Hirschberg, Krimsky, or prism cover test), slit-lamp biomicroscopy, funduscopy, and intraocular pressure measurements. All data were recorded for statistical analysis. Ocular findings in decreasing prevalence were as follows: refractive errors 55 (61.1%), strabismus 30 (33.2%), cataract 7 (7.8%), and nystagmus 7 (7.8%) in children with DS; refractive errors 57 (45.6%), strabismus 19 (15.2%), cataract 7 (6.4%), nystagmus 5 (4%), and glaucoma 1 (0.8%) in children with other ID; and refractive errors 13 (11.2%) and strabismus 4 (3.5%) in controls. Cataracts, glaucoma, and nystagmus were not observed in the control group. The most common ophthalmic findings in children with DS compared with other ID and controls were hyperopia (p<0.03 and p<0.001, respectively) and esotropia (p<0.01 and p<0.01, respectively). The pediatric population with ID has a high prevalence of preventable visual impairments, refractive errors, strabismus, and cataracts. The prevalence of strabismus and refractive errors was higher in children with DS. Further health screenings, including ophthalmic examinations, should be utilized to implement appropriate care management and improve quality of life.
Elliott, Amanda F.; McGwin, Gerald; Owsley, Cynthia
2009-01-01
OBJECTIVE To evaluate the effect of vision-enhancing interventions (i.e., cataract surgery or refractive error correction) on physical function and cognitive status in nursing home residents. DESIGN Longitudinal cohort study. SETTING Seventeen nursing homes in Birmingham, AL. PARTICIPANTS A total of 187 English-speaking older adults (>55 years of age). INTERVENTION Participants took part in one of two vision-enhancing interventions: cataract surgery or refractive error correction. Each group was compared against a control group (persons eligible for but who declined cataract surgery, or who received delayed correction of refractive error). MEASUREMENTS Physical function (i.e., ability to perform activities of daily living and mobility) was assessed with a series of self-report and certified nursing assistant ratings at baseline and at 2 months for the refractive error correction group, and at 4 months for the cataract surgery group. The Mini Mental State Exam was also administered. RESULTS No significant differences existed within or between groups from baseline to follow-up on any of the measures of physical function. Mental status scores significantly declined from baseline to follow-up for both the immediate (p= 0.05) and delayed (p< 0.02) refractive error correction groups and for the cataract surgery control group (p= 0.05). CONCLUSION Vision-enhancing interventions did not lead to short-term improvements in physical functioning or cognitive status in this sample of elderly nursing home residents. PMID:19170783
A Bayesian mixture model for missing data in marine mammal growth analysis
Shotwell, Mary E.; McFee, Wayne E.; Slate, Elizabeth H.
2016-01-01
Much of what is known about bottlenose dolphin (Tursiops truncatus) anatomy and physiology is based on necropsies from stranding events. Measurements of total body length, total body mass, and age are used to estimate growth. It is more feasible to retrieve and transport smaller animals for total body mass measurement than larger animals, introducing a systematic bias in sampling. Adverse weather events, volunteer availability, and other unforeseen circumstances also contribute to incomplete measurement. We have developed a Bayesian mixture model to describe growth in detected stranded animals using data from both those that are fully measured and those not fully measured. Our approach uses a shared random effect to link the missingness mechanism (i.e. full/partial measurement) to distinct growth curves in the fully and partially measured populations, thereby enabling borrowing of strength for estimation. We use simulation to compare our model to complete case analysis and two common multiple imputation methods according to model mean square error. Results indicate that our mixture model provides a better fit both when the two populations are present and when they are not. The feasibility and utility of our new method is demonstrated by application to South Carolina strandings data. PMID:28503080
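One way to picture the shared-random-effect construction described above is the following sketch, written under purely illustrative assumptions (the Gompertz growth form, the logistic missingness submodel, and all symbols are stand-ins rather than the authors' exact specification):

\begin{align*}
y_i \mid b_i &\sim \mathcal{N}\bigl(f(a_i;\theta_{r_i}) + b_i,\ \sigma^2\bigr), &
  f(a;\theta) &= \theta_1 \exp\{-\theta_2 e^{-\theta_3 a}\} \quad \text{(growth curve)},\\
\operatorname{logit}\Pr(r_i = 1 \mid b_i) &= \alpha_0 + \alpha_1 b_i, &
  r_i &\in \{0,1\} \quad \text{(fully vs. partially measured)},\\
b_i &\sim \mathcal{N}(0,\tau^2), &
  &\text{shared random effect linking the two submodels,}
\end{align*}

where y_i is a size measurement (e.g. total body length) of animal i at age a_i, and the growth parameters theta are allowed to differ between the fully and partially measured groups.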
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics arising from 1) evolution of the official algorithms used to process the data and 2) differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
Roaldsen, Kirsti Skavberg; Måøy, Åsa Blad; Jørgensen, Vivien; Stanghelle, Johan Kvalvik
2016-05-01
Translation of the Spinal Cord Injury Falls Concern Scale (SCI-FCS), and investigation of test-retest reliability at the item level and the total-score level. Translation, adaptation and test-retest study. A specialized rehabilitation setting in Norway. Fifty-four wheelchair users with a spinal cord injury. The median age of the cohort was 49 years, and the median number of years after injury was 13. Interventions/measurements: The SCI-FCS was translated and back-translated according to guidelines. Individuals answered the SCI-FCS twice over the course of one week. We investigated item-level test-retest reliability using Svensson's rank-based statistical method for disagreement analysis of paired ordinal data. At the total-score level, we analyzed relative reliability with intraclass correlation coefficients (ICC2.1), absolute reliability/measurement error with the standard error of measurement (SEM) and the smallest detectable change (SDC), and internal consistency with Cronbach's alpha. All items showed satisfactory percentage agreement (≥69%) between test and retest. There were small but non-negligible systematic disagreements for three items, with an 11-13% higher chance of a lower second score. There was no disagreement due to random variance. The test-retest agreement (ICC2.1) was excellent (0.83). The SEM was 2.6 (12%), and the SDC was 7.1 (32%). Cronbach's alpha was high (0.88). The Norwegian SCI-FCS is highly reliable for wheelchair users with chronic spinal cord injuries.
Error analysis of leaf area estimates made from allometric regression models
NASA Technical Reports Server (NTRS)
Feiveson, A. H.; Chhikara, R. S.
1986-01-01
Biological net productivity, measured in terms of the change in biomass with time, affects global productivity and the quality of life through biochemical and hydrological cycles and by its effect on the overall energy balance. Estimating leaf area for large ecosystems is one of the more important means of monitoring this productivity. For a particular forest plot, the leaf area is often estimated by a two-stage process. In the first stage, known as dimension analysis, a small number of trees are felled so that their areas can be measured as accurately as possible. These leaf areas are then related to non-destructive, easily-measured features such as bole diameter and tree height, by using a regression model. In the second stage, the non-destructive features are measured for all or for a sample of trees in the plots and then used as input into the regression model to estimate the total leaf area. Because both stages of the estimation process are subject to error, it is difficult to evaluate the accuracy of the final plot leaf area estimates. This paper illustrates how a complete error analysis can be made, using an example from a study made on aspen trees in northern Minnesota. The study was a joint effort by NASA and the University of California at Santa Barbara known as COVER (Characterization of Vegetation with Remote Sensing).
Radiographic cup anteversion measurement corrected from pelvic tilt.
Wang, Liao; Thoreson, Andrew R; Trousdale, Robert T; Morrey, Bernard F; Dai, Kerong; An, Kai-Nan
2017-11-01
The purpose of this study was to develop a novel technique to improve the accuracy of radiographic cup anteversion measurement by correcting the influence of pelvic tilt. Ninety virtual total hip arthroplasties were simulated from computed tomography data of 6 patients with 15 predetermined cup orientations. For each simulated implantation, anteroposterior (AP) virtual pelvic radiographs were generated for 11 predetermined pelvic tilts. A linear regression model was created to capture the relationship between radiographic cup anteversion angle error measured on AP pelvic radiographs and pelvic tilt. Overall, nine hundred and ninety virtual AP pelvic radiographs were measured, and 90 linear regression models were created. Pearson's correlation analyses confirmed a strong correlation between the errors of conventional radiographic cup anteversion angle measured on AP pelvic radiographs and the magnitude of pelvic tilt (P < 0.001). The means of the 90 slopes and y-intercepts of the regression lines were -0.8 and -2.5°, respectively, and these were applied as the general correction parameters for the proposed tool to correct the conventional cup anteversion angle for the influence of pelvic tilt. The current method proposes to measure the pelvic tilt on a lateral radiograph, and to use it as a correction for the radiographic cup anteversion measurement on an AP pelvic radiograph. Thus, both AP and lateral pelvic radiographs are required for the measurement of pelvic posture-integrated cup anteversion. Compared with conventional radiographic cup anteversion, the errors of pelvic posture-integrated radiographic cup anteversion were reduced from 10.03 (SD = 5.13) degrees to 2.53 (SD = 1.33) degrees. Pelvic posture-integrated cup anteversion measurement improves the accuracy of radiographic cup anteversion measurement, which shows the potential of further clarifying the etiology of postoperative instability based on planar radiographs. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
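As a rough illustration of how such a correction can be applied in practice, the sketch below uses the mean slope and intercept reported above; the function name and the sign convention are assumptions made for illustration rather than the authors' published tool.

def corrected_anteversion(measured_anteversion_deg, pelvic_tilt_deg,
                          slope=-0.8, intercept_deg=-2.5):
    """Remove the pelvic-tilt-induced error from a cup anteversion angle
    measured on an AP pelvic radiograph (tilt taken from a lateral view)."""
    estimated_error_deg = slope * pelvic_tilt_deg + intercept_deg
    return measured_anteversion_deg - estimated_error_deg

# Example: 20 deg of measured anteversion with 10 deg of pelvic tilt
print(corrected_anteversion(20.0, 10.0))   # 20 - (-0.8*10 - 2.5) = 30.5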
Gross, Eliza L.; Lindsey, Bruce D.; Rupert, Michael G.
2012-01-01
Field blank samples help determine the frequency and magnitude of contamination bias, and replicate samples help determine the sampling variability (error) of measured analyte concentrations. Quality control data were evaluated for calcium, magnesium, sodium, potassium, chloride, sulfate, fluoride, silica, and total dissolved solids. A 99-percent upper confidence limit is calculated from field blanks to assess the potential for contamination bias. For magnesium, potassium, chloride, sulfate, and fluoride, potential contamination in more than 95 percent of environmental samples is less than or equal to the common maximum reporting level. Contamination bias has little effect on measured concentrations greater than 4.74 mg/L (milligrams per liter) for calcium, 14.98 mg/L for silica, 4.9 mg/L for sodium, and 120 mg/L for total dissolved solids. Estimates of sampling variability are calculated for high and low ranges of concentration for major ions and total dissolved solids. Examples showing the calculation of confidence intervals and how to determine whether measured differences between two water samples are significant are presented.
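A minimal normal-theory sketch of how a one-sided 99-percent upper confidence limit can be derived from field-blank concentrations is shown below; the blank values are hypothetical, and the report's actual statistical procedure (for example, a nonparametric bound on a high percentile) may differ.

import numpy as np
from scipy import stats

blanks = np.array([0.01, 0.02, 0.00, 0.03, 0.01, 0.02])   # hypothetical field-blank results, mg/L

n = blanks.size
mean, sd = blanks.mean(), blanks.std(ddof=1)
t99 = stats.t.ppf(0.99, df=n - 1)
ucl99 = mean + t99 * sd / np.sqrt(n)                        # one-sided 99% upper confidence limit
print(f"99-percent upper confidence limit: {ucl99:.3f} mg/L")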
Results and Conclusions from the NASA Isokinetic Total Water Content Probe 2009 IRT Test
NASA Technical Reports Server (NTRS)
Reehorst, Andrew; Brinker, David
2010-01-01
The NASA Glenn Research Center has developed and tested a Total Water Content Isokinetic Sampling Probe. Since, by its nature, it is not sensitive to cloud water particle phase or size, it is particularly attractive for supporting super-cooled large droplet and high ice water content aircraft icing studies. The instrument comprises the Sampling Probe, Sample Flow Control, and Water Vapor Measurement subsystems. Results and conclusions are presented from probe tests in the NASA Glenn Icing Research Tunnel (IRT) during January and February 2009. The use of reference probe heat and the control of air pressure in the water vapor measurement subsystem are discussed. Several run-time error sources were found to produce identifiable signatures that are presented and discussed. Some of the differences between the measured Isokinetic Total Water Content Probe values and the IRT calibration seem to be caused by tunnel humidification and moisture/ice crystal blow-around. Droplet size, airspeed, and liquid water content effects also appear to be present in the IRT calibration. Based upon test results, the authors provide recommendations for future Isokinetic Total Water Content Probe development.
Mismeasurement and the resonance of strong confounders: correlated errors.
Marshall, J R; Hastrup, J L; Ross, J S
1999-07-01
Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
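The qualitative point can be seen in a small simulation along the lines described above; every effect size and error variance here is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                          # true strong risk factor
z = 0.5 * x + 0.87 * rng.normal(size=n)         # true inconsequential factor, correlated with x
y = 1.0 * x + 0.0 * z + rng.normal(size=n)      # outcome depends on x only

shared = rng.normal(size=n)                     # shared error component -> correlated errors
x_obs = x + 0.7 * rng.normal(size=n) + 0.5 * shared
z_obs = z + 0.7 * rng.normal(size=n) + 0.5 * shared

X = np.column_stack([np.ones(n), x_obs, z_obs])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("estimated effects (intercept, x, z):", np.round(beta, 3))
# The x coefficient is attenuated well below 1, and the z coefficient is pulled
# away from its true value of 0, even though z has no effect on the outcome.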
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
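A schematic version of the classical-error part of that simulation is sketched below; the error magnitudes, effect size, and synthetic data are illustrative assumptions rather than the study's values.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
days = 1000
exposure = np.exp(rng.normal(0.0, 0.4, size=days))      # reference pollutant series
visits = rng.poisson(np.exp(3.0 + 0.10 * exposure))     # daily ED visit counts

def fitted_beta(sigma_err):
    """Poisson-GLM slope per measurement unit after adding classical,
    multiplicative (log-scale) error to the exposure series."""
    observed = exposure * np.exp(rng.normal(0.0, sigma_err, size=days))
    fit = sm.GLM(visits, sm.add_constant(observed), family=sm.families.Poisson()).fit()
    return fit.params[1]

print("beta, no added error:      ", round(fitted_beta(0.0), 4))
print("beta, classical-type error:", round(fitted_beta(0.5), 4))   # attenuated toward 0
# A Berkson-type version would instead generate the true exposure by perturbing
# the measured series around its value.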
Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa
2013-01-01
To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
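The error-rate and relative-risk-reduction (RRR) arithmetic used above reduces to a few lines; the counts below are hypothetical, chosen only so that the resulting RRR matches the prescription-error figure reported.

def error_rate(errors, opportunities):
    """Errors per 100 opportunities for error."""
    return 100.0 * errors / opportunities

before = error_rate(410, 10_000)     # hypothetical pre-intervention counts
after = error_rate(320, 10_000)      # hypothetical post-intervention counts
rrr = 100.0 * (before - after) / before
print(f"before {before:.1f}%, after {after:.1f}%, RRR {rrr:.1f}%")   # RRR = 22.0%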
Prinstein, Mitchell J; Wang, Shirley S
2005-06-01
Adolescents' perceptions of their friends' behavior strongly predict adolescents' own behavior; however, these perceptions are often erroneous. This study examined correlates of discrepancies between adolescents' perceptions and friends' reports of behavior. A total of 120 11th-grade adolescents provided data regarding their engagement in deviant and health risk behaviors, as well as their perceptions of the behavior of their best friend, as identified through sociometric assessment. Data from friends' own reports were used to calculate discrepancy measures of adolescents' overestimations and estimation errors (absolute value of discrepancies) of friends' behavior. Adolescents also completed a measure of friendship quality, and a sociometric assessment yielding measures of peer acceptance/rejection and aggression. Findings revealed that adolescents' peer rejection and aggression were associated with greater overestimations of friends' behavior. This effect was partially mediated by adolescents' own behavior, consistent with a false consensus effect. Low levels of positive friendship quality were significantly associated with estimation errors, but not overestimations specifically.
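A small sketch of the two discrepancy measures described above (the variable names are assumptions; scores are on an arbitrary frequency scale): overestimation keeps the sign of the difference, while estimation error takes its absolute value.

def discrepancy_measures(adolescent_perception, friend_self_report):
    overestimation = adolescent_perception - friend_self_report
    estimation_error = abs(overestimation)
    return overestimation, estimation_error

print(discrepancy_measures(adolescent_perception=4.0, friend_self_report=2.5))
# (1.5, 1.5): the adolescent overestimates the friend's behaviour by 1.5 scale points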
NASA Astrophysics Data System (ADS)
Kemp, Z. D. C.
2018-04-01
Determining the phase of a wave from intensity measurements has many applications in fields such as electron microscopy, visible light optics, and medical imaging. Propagation based phase retrieval, where the phase is obtained from defocused images, has shown significant promise. There are, however, limitations in the accuracy of the retrieved phase arising from such methods. Sources of error include shot noise, image misalignment, and diffraction artifacts. We explore the use of artificial neural networks (ANNs) to improve the accuracy of propagation based phase retrieval algorithms applied to simulated intensity measurements. We employ a phase retrieval algorithm based on the transport-of-intensity equation to obtain the phase from simulated micrographs of procedurally generated specimens. We then train an ANN with pairs of retrieved and exact phases, and use the trained ANN to process a test set of retrieved phase maps. The total error in the phase is significantly reduced using this method. We also discuss a variety of potential extensions to this work.
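A minimal sketch of that correction scheme is given below: a network is trained on pairs of (retrieved, exact) phase values and then applied to held-out retrievals. The data shapes, network size, and the toy systematic error in the synthetic "retrieved" phases are assumptions for illustration only.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_pairs, patch_px = 2000, 64                    # hypothetical flattened phase patches

exact = rng.normal(size=(n_pairs, patch_px))    # stand-in for exact (simulated) phases
retrieved = 0.8 * exact + 0.05 * rng.normal(size=(n_pairs, patch_px))  # systematic underestimate + noise

ann = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
ann.fit(retrieved[:1500], exact[:1500])         # train on (retrieved, exact) pairs

corrected = ann.predict(retrieved[1500:])
rms_before = np.sqrt(np.mean((retrieved[1500:] - exact[1500:]) ** 2))
rms_after = np.sqrt(np.mean((corrected - exact[1500:]) ** 2))
print(f"RMS phase error: {rms_before:.3f} before vs {rms_after:.3f} after correction")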
[Analysis of an incident notification system and register in a critical care unit].
Murillo-Pérez, M A; García-Iglesias, M; Palomino-Sánchez, I; Cano Ruiz, G; Cuenca Solanas, M; Alted López, E
2016-01-01
To analyse the incidents communicated through a notification and registration system in a critical care unit. A cross-sectional descriptive study was conducted by analysing the records of incidents communicated anonymously and voluntarily from January 2007 to December 2013 in a critical care unit for adult patients with severe trauma. The variables recorded were incident type and class, reporting professional, and suggestions for improvement measures. A descriptive analysis was performed on the variables. Out of a total of 275 incidents reported, 58.5% of them were adverse events. Incidents distributed by class were: medication, 33.7%; vascular access-drainage-catheter-sensor, 19.6%; devices-equipment, 13.3%; procedures, 11.5%; airway and mechanical ventilation, 10%; nursing care, 4.1%; inter-professional communication, 3%; diagnostic tests, 3%; patient identification, 1.1%; and transfusion, 0.7%. In the medication group, administration errors accounted for a total of 62%; in the vascular access-drainage-catheter-sensor group, central venous lines accounted for 27%; in the devices and equipment group, respirators accounted for 46.9%; and in the airway group, self-extubations accounted for 32.1%. As regards medication errors, 62% were incidents without damage. Incident notification by profession was: doctors, 43%; residents, 5.6%; nurses, 51%; and technical assistants, 0.4%. Adverse events are the most commonly communicated incidents. Events related to medication administration are the most frequent, although most of them were without damage. Nurses and doctors communicate incidents with the same frequency. It is worth highlighting the low level of incident notification despite it being an anonymous and voluntary system; therefore, measures to increase the level of reporting should be studied. Copyright © 2016 Elsevier España, S.L.U. y SEEIUC. All rights reserved.
A method to account for the temperature sensitivity of TCCON total column measurements
NASA Astrophysics Data System (ADS)
Niebling, Sabrina G.; Wunch, Debra; Toon, Geoffrey C.; Wennberg, Paul O.; Feist, Dietrich G.
2014-05-01
The Total Carbon Column Observing Network (TCCON) consists of ground-based Fourier Transform Spectrometer (FTS) systems all around the world. It achieves better than 0.25% precision and accuracy for total column measurements of CO2 [Wunch et al. (2011)]. In recent years, the TCCON data processing and retrieval software (GGG) has been improved to achieve better and better results (e.g., ghost correction, improved a priori profiles, more accurate spectroscopy). However, a small error is also introduced by the insufficient knowledge of the true temperature profile in the atmosphere above the individual instruments. This knowledge is crucial for retrieving highly precise gas concentrations. In the current version of the retrieval software, we use six-hourly NCEP reanalysis data to produce one temperature profile at local noon for each measurement day. For sites in the mid-latitudes, which can have a large diurnal variation of the temperature in the lowermost kilometers of the atmosphere, this approach can lead to small errors in the final total column gas concentration. Here, we present and describe a method to account for the temperature sensitivity of the total column measurements. We exploit the fact that H2O is most abundant in the lowermost kilometers of the atmosphere, where the largest diurnal temperature variations occur. We use single H2O absorption lines with different temperature sensitivities to gain information about the temperature variations over the course of the day. This information is used to apply an a posteriori correction to the retrieved total column gas concentration. In addition, we show that the a posteriori temperature correction is effective by applying it to data from Lamont, Oklahoma, USA (36.6°N and 97.5°W). We chose this site because regular radiosonde launches with a time resolution of six hours provide detailed information on the real temperature in the atmosphere and allow us to test the effectiveness of our correction. References: Wunch, D., Toon, G. C., Blavier, J.-F. L., Washenfelder, R. A., Notholt, J., Connor, B. J., Griffith, D. W. T., Sherlock, V., and Wennberg, P. O.: The Total Carbon Column Observing Network, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369, 2087-2112, 2011.
Idzinga, J C; de Jong, A L; van den Bemt, P M L A
2009-11-01
Previous studies, both in hospitals and in institutions for clients with an intellectual disability (ID), have shown that medication errors at the administration stage are frequent, especially when medication has to be administered through an enteral feeding tube. In hospitals a specially designed intervention programme has proven to be effective in reducing these feeding tube-related medication errors, but the effect of such a programme within an institution for clients with an ID is unknown. Therefore, a study was designed to measure the influence of such an intervention programme on the number of medication administration errors in clients with an ID who also have enteral feeding tubes. A before-after study design with disguised observation to document administration errors was used. The study was conducted from February to June 2008 within an institution for individuals with an ID in the Western part of The Netherlands. Included were clients with enteral feeding tubes. The intervention consisted of advice on medication administration through enteral feeding tubes by the pharmacist, a training programme and introduction of a 'medication through tube' box containing proper materials for crushing and suspending tablets. The outcome measure was the frequency of medication administration errors, comparing the pre-intervention period with the post-intervention period. A total of 245 medication administrations in six clients (by 23 nurse attendants) have been observed in the pre-intervention measurement period and 229 medication administrations in five clients (by 20 nurse attendants) have been observed in the post-intervention period. Before the intervention, 158 (64.5%) medication administration errors were observed, and after the intervention, this decreased to 69 (30.1%). Of all potential confounders and effect modifiers, only 'medication dispensed in automated dispensing system ("robot") packaging' contributed to the multivariate model; effect modification was shown for this determinant. Multilevel analysis using this multivariate model resulted in an odds ratio of 0.33 (95% confidence interval 0.13-0.71) for the error percentage in the post-intervention period compared with the pre-intervention period. The intervention was found to be effective in an institution for clients with an ID. However, additional efforts are needed to reduce the proportion of administration errors which is still high after the intervention.
Analysis on the dynamic error for optoelectronic scanning coordinate measurement network
NASA Astrophysics Data System (ADS)
Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie
2018-01-01
Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has received close attention. It is widely used in large component joining, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts the applications. The workshop measurement and positioning system is a representative system which can realize the dynamic measurement function in theory. In this paper we conduct an in-depth study of the sources of dynamic error and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this theory, simulations of the dynamic error are carried out. The dynamic error is quantified, and its volatility and periodicity are characterized. The dynamic error characteristics are presented in detail. The results lay the foundation for further accuracy improvement.
An active co-phasing imaging testbed with segmented mirrors
NASA Astrophysics Data System (ADS)
Zhao, Weirui; Cao, Genrui
2011-06-01
An active co-phasing imaging testbed with highly accurate optical adjustment and control at the nanometer scale was set up to validate the algorithms for piston and tip-tilt error sensing and real-time adjustment. A modular design was adopted. The primary mirror was spherical and divided into three sub-mirrors. One of them was fixed and served as the reference segment; the others could each be adjusted relative to the fixed segment in three degrees of freedom (piston, tip and tilt) using sensitive micro-displacement actuators with a range of 15 mm and a resolution of 3 nm. The method of two-dimensional dispersed fringe analysis was used to sense the piston error between adjacent segments over a range of 200 μm with a repeatability of 2 nm. The tip-tilt error was obtained with a centroid sensing method. Co-phased imaging could be realized by correcting the errors measured above with the sensitive micro-displacement actuators driven by a computer. The process of co-phasing error sensing and correction could be monitored in real time by a monitoring module included in this testbed. A FISBA interferometer was used to evaluate the co-phasing performance, and finally a total residual surface error of about 50 nm rms was achieved.
A Review of Depth and Normal Fusion Algorithms
Štolc, Svorad; Pock, Thomas
2018-01-01
Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms which deal with the combination of depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903
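To make the least-squares idea concrete, here is a toy one-dimensional sketch of depth/normal fusion: the refined profile is kept close to the measured depths while its slope is kept close to the slope implied by the measured normals. The weighting and data are illustrative assumptions; the methods reviewed above add normal-dependent weighting, two-dimensional operators, and (in the new method) a TGV regularizer.

import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
true_depth = np.sin(2 * np.pi * x)

depth_meas = true_depth + 0.05 * rng.normal(size=n)                     # noisy depth map
slope_meas = np.gradient(true_depth, x) + 0.05 * rng.normal(size=n)     # slope implied by normals

h = x[1] - x[0]
D = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / h                      # forward-difference operator
s = 0.5 * (slope_meas[:-1] + slope_meas[1:])                            # slopes at cell midpoints

lam = 0.1                                                               # slope-term weight
A = np.eye(n) + lam * D.T @ D
b = depth_meas + lam * D.T @ s
fused = np.linalg.solve(A, b)                                           # minimises ||z-d||^2 + lam*||Dz-s||^2

print("RMS depth error, measured:", round(float(np.sqrt(np.mean((depth_meas - true_depth) ** 2))), 4))
print("RMS depth error, fused:   ", round(float(np.sqrt(np.mean((fused - true_depth) ** 2))), 4))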
NASA Astrophysics Data System (ADS)
Teodor, V. G.; Baroiu, N.; Susac, F.; Oancea, N.
2016-11-01
The modelling of a curl (family) of surfaces associated with a pair of rolling centrodes, when the profile of the rack-gear's teeth is known by direct measurement as a coordinate matrix, has as its goal the determination of the generating quality for an imposed kinematics of the relative motion of the tool with respect to the blank. In this way, it is possible to determine the geometrical generating error, as a component of the total error. The generation modelling allows the potential errors of the generating tool to be highlighted, in order to correct its profile before the tool is used in the machining process. A method developed in CATIA is proposed, based on a new approach, namely the method of "relative generating trajectories". The analytical foundation is presented, together with applications to known models of rack-gear type tools used on Maag gear-cutting machines.
An examination of the interrater reliability between practitioners and researchers on the static-99.
Quesada, Stephen P; Calkins, Cynthia; Jeglic, Elizabeth L
2014-11-01
Many studies have validated the psychometric properties of the Static-99, the most widely used measure of sexual offender recidivism risk. However, much of this research relied on instrument coding completed by well-trained researchers. This study is the first to examine the interrater reliability (IRR) of the Static-99 between practitioners in the field and researchers. Using archival data from a sample of 1,973 formerly incarcerated sex offenders, field raters' scores on the Static-99 were compared with those of researchers. Overall, clinicians and researchers had excellent IRR on Static-99 total scores, with IRR coefficients ranging from "substantial" to "outstanding" for the 10 individual items of the scale. The most common causes of discrepancies were coding manual errors, followed by item subjectivity, inaccurate item scoring, and calculation errors. These results offer important data with regard to the frequency and perceived nature of scoring errors. © The Author(s) 2013.
Lin, Steve; Turgulov, Anuar; Taher, Ahmed; Buick, Jason E; Byers, Adam; Drennan, Ian R; Hu, Samantha; J Morrison, Laurie
2016-10-01
Cardiopulmonary resuscitation (CPR) process measures research and quality assurance have traditionally been limited to the first 5 minutes of resuscitation due to significant costs in time, resources, and personnel from manual data abstraction. CPR performance may change over time during prolonged resuscitations, which represents a significant knowledge gap. Moreover, the CPR process measure outputs of currently available commercial software are difficult to analyze. The objective was to develop and validate a software program to help automate the abstraction and transfer of CPR process measures data from electronic defibrillators for complete episodes of cardiac arrest resuscitation. We developed a software program to facilitate and help automate CPR data abstraction and transfer from electronic defibrillators for entire resuscitation episodes. Using an intermediary Extensible Markup Language export file, the automated software transfers CPR process measures data (electrocardiogram [ECG] number, CPR start time, number of ventilations, number of chest compressions, compression rate per minute, compression depth per minute, compression fraction, and end-tidal CO2 per minute). We performed an internal validation of the software program on 50 randomly selected cardiac arrest cases with resuscitation durations between 15 and 60 minutes. CPR process measures were manually abstracted and transferred independently by two trained data abstractors and by the automated software program, followed by manual interpretation of raw ECG tracings, treatment interventions, and patient events. Error rates and the time needed for data abstraction, transfer, and interpretation were measured for both manual and automated methods, compared to an additional independent reviewer. A total of 9,826 data points were each abstracted by the two abstractors and by the software program. Manual data abstraction resulted in a total of six errors (0.06%) compared to zero errors by the software program. The mean ± SD time measured per case for manual data abstraction was 20.3 ± 2.7 minutes compared to 5.3 ± 1.4 minutes using the software program (p = 0.003). We developed and validated an automated software program that efficiently abstracts and transfers CPR process measures data from electronic defibrillators for complete cardiac arrest episodes. This software will enable future cardiac arrest studies and quality assurance programs to evaluate the impact of CPR process measures during prolonged resuscitations. © 2016 by the Society for Academic Emergency Medicine.
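The per-minute transfer step can be pictured with a short parsing sketch; the XML tag names and structure below are hypothetical stand-ins, since defibrillator export formats differ and the record above does not publish a schema.

import xml.etree.ElementTree as ET

sample_xml = """<case ecg_number="12345">
  <minute index="1" compressions="104" rate="104" depth_mm="52" ventilations="8" compression_fraction="0.81" etco2_mmHg="22"/>
  <minute index="2" compressions="110" rate="110" depth_mm="55" ventilations="9" compression_fraction="0.85" etco2_mmHg="25"/>
</case>"""

root = ET.fromstring(sample_xml)
rows = []
for minute in root.findall("minute"):
    rows.append({
        "ecg_number": root.get("ecg_number"),
        "minute": int(minute.get("index")),
        "compression_rate": float(minute.get("rate")),
        "compression_depth_mm": float(minute.get("depth_mm")),
        "compression_fraction": float(minute.get("compression_fraction")),
        "ventilations": int(minute.get("ventilations")),
        "etco2_mmHg": float(minute.get("etco2_mmHg")),
    })
print(rows[0])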
Relationships of Measurement Error and Prediction Error in Observed-Score Regression
ERIC Educational Resources Information Center
Moses, Tim
2012-01-01
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
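Under classical test theory, with observed scores X = T_X + E_X and Y = T_Y + E_Y and mutually uncorrelated error terms, one way to write the decomposition sketched above is the following (an illustrative form, not necessarily the paper's exact notation):

\sigma^{2}_{Y-\hat{Y}}
  = \underbrace{\sigma^{2}_{E_Y}}_{\text{error variance of } Y}
  + \underbrace{\sigma^{2}_{T_Y} - \frac{\operatorname{Cov}(T_X,T_Y)^{2}}{\sigma^{2}_{T_X}}}_{\text{true-score part}}
  + \underbrace{\frac{\operatorname{Cov}(T_X,T_Y)^{2}}{\sigma^{2}_{T_X}} - \frac{\operatorname{Cov}(T_X,T_Y)^{2}}{\sigma^{2}_{T_X}+\sigma^{2}_{E_X}}}_{\text{inflation due to error in the predictor}}

so the prediction error variance grows both with the error variance of the dependent variable and with the error variance of the predictor.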
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo
A microgrid (MG) is one of the measures for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). For constructing a MG economically, the capacity optimization of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed on the basis of demonstration studies of a MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as a RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on capacity optimization and daily operation, evaluated in terms of cost. The main results are as follows. The required NaS battery capacity must be increased by 10-40% relative to the ideal situation without forecast error in PVS power output. The influence of forecast error on the received grid electricity would not be very significant on an annual basis because the positive and negative forecast errors vary from day to day. The annual total cost of facilities and operation increases by 2-7% due to the forecast error applied in this study. The impacts of forecast error on facility optimization and on operation optimization are almost the same, at a few percent each, implying that forecast accuracy should be improved in terms of both the number of occasions with large forecast error and the average error.
Measurement error is often neglected in medical literature: a systematic review.
Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten
2018-06-01
In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
Romero-Delmastro, Alejandro; Kadioglu, Onur; Currier, G Frans; Cook, Tanner
2014-08-01
Cone-beam computed tomography images have been previously used for evaluation of alveolar bone levels around teeth before, during, and after orthodontic treatment. Protocols described in the literature have been vague, have used unstable landmarks, or have required several software programs, file conversions, or hand tracings, among other factors that could compromise the precision of the measurements. The purposes of this article are to describe a totally digital tooth-based superimposition method for the quantitative assessment of alveolar bone levels and to evaluate its reliability. Ultra cone-beam computed tomography images (0.1-mm reconstruction) from 10 subjects were obtained from the data pool of the University of Oklahoma; 80 premolars were measured twice by the same examiner and a third time by a second examiner to determine alveolar bone heights and thicknesses before and more than 6 months after orthodontic treatment using OsiriX (version 3.5.1; Pixeo, Geneva, Switzerland). Intraexaminer and interexaminer reliabilities were evaluated, and Dahlberg's formula was used to calculate the error of the measurements. Cross-sectional and longitudinal evaluations of alveolar bone levels were possible using a digital tooth-based superimposition method. The mean differences for buccal alveolar crest heights and thicknesses were below 0.10 mm for the same examiner and below 0.17 mm for all examiners. The ranges of errors for any measurement were between 0.02 and 0.23 mm for intraexaminer errors, and between 0.06 and 0.29 mm for interexaminer errors. This protocol can be used for cross-sectional or longitudinal assessment of alveolar bone levels with low interexaminer and intraexaminer errors, and it eliminates the use of less reliable or less stable landmarks and the need for multiple software programs and image printouts. Standardization of the methods for bone assessment in orthodontics is necessary; this method could be the answer to this need. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
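Dahlberg's formula referred to above is simply s_e = sqrt( sum(d_i^2) / (2n) ), where the d_i are the differences between first and repeat measurements of the same quantity; a minimal sketch with hypothetical values:

import math

first  = [2.10, 1.85, 3.02, 2.44, 1.97]    # first-pass measurements (mm)
second = [2.05, 1.90, 2.98, 2.50, 1.95]    # repeat measurements (mm)

diffs = [a - b for a, b in zip(first, second)]
dahlberg_error = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
print(f"Dahlberg method error: {dahlberg_error:.3f} mm")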
Evaluation of airborne topographic lidar for quantifying beach changes
Sallenger, A.H.; Krabill, W.B.; Swift, R.N.; Brock, J.; List, J.; Hansen, M.; Holman, R.A.; Manizade, S.; Sontag, J.; Meredith, A.; Morgan, K.; Yunkel, J.K.; Frederick, E.B.; Stockdon, H.
2003-01-01
A scanning airborne topographic lidar was evaluated for its ability to quantify beach topography and changes during the Sandy Duck experiment in 1997 along the North Carolina coast. Elevation estimates, acquired with NASA's Airborne Topographic Mapper (ATM), were compared to elevations measured with three types of ground-based measurements - 1) differential GPS equipped all-terrain vehicle (ATV) that surveyed a 3-km reach of beach from the shoreline to the dune, 2) GPS antenna mounted on a stadia rod used to intensely survey a different 100 m reach of beach, and 3) a second GPS-equipped ATV that surveyed a 70-km-long transect along the coast. Over 40,000 individual intercomparisons between ATM and ground surveys were calculated. RMS vertical differences associated with the ATM when compared to ground measurements ranged from 13 to 19 cm. Considering all of the intercomparisons together, RMS ≈ 15 cm. This RMS error represents a total error for individual elevation estimates including uncertainties associated with random and mean errors. The latter was the largest source of error and was attributed to drift in differential GPS. The ≈ 15 cm vertical accuracy of the ATM is adequate to resolve beach-change signals typical of the impact of storms. For example, ATM surveys of Assateague Island (spanning the border of MD and VA) prior to and immediately following a severe northeaster showed vertical beach changes in places greater than 2 m, much greater than expected errors associated with the ATM. A major asset of airborne lidar is the high spatial data density. Measurements of elevation are acquired every few m² over regional scales of hundreds of kilometers. Hence, many scales of beach morphology and change can be resolved, from beach cusps tens of meters in wavelength to entire coastal cells comprising tens to hundreds of kilometers of coast. Topographic lidars similar to the ATM are becoming increasingly available from commercial vendors and should, in the future, be widely used in beach surveying.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
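A simplified sketch of the idea is given below: a plain particle swarm tunes the SVM's (C, gamma) hyperparameters against a validation set of synthetic "dynamic measurement error" data. All data and settings are illustrative assumptions, and the paper's NAPSO additionally applies natural selection and simulated annealing to escape local optima.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 600)
signal = np.sin(t) + 0.3 * np.sin(3 * t)                                   # sensor input signal
error_series = 0.5 * np.roll(signal, 5) + 0.05 * rng.normal(size=t.size)   # synthetic dynamic error
X = np.column_stack([signal, np.roll(signal, 1), np.roll(signal, 2)])[10:]
y = error_series[10:]
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(params):
    C, gamma = params
    model = SVR(C=C, gamma=gamma).fit(X_tr, y_tr)
    return mean_squared_error(y_va, model.predict(X_va))

# Plain PSO over log10(C) in [-1, 3] and log10(gamma) in [-3, 1]
n_particles, n_iter = 12, 20
low, high = np.array([-1.0, -3.0]), np.array([3.0, 1.0])
pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(10.0 ** p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([fitness(10.0 ** p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best (C, gamma):", 10.0 ** gbest, " validation MSE:", float(pbest_val.min()))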
Tests of the Tully-Fisher relation. 1: Scatter in infrared magnitude versus 21 cm width
NASA Technical Reports Server (NTRS)
Bernstein, Gary M.; Guhathakurta, Puragra; Raychaudhury, Somak; Giovanelli, Riccardo; Haynes, Martha P.; Herter, Terry; Vogt, Nicole P.
1994-01-01
We examine the precision of the Tully-Fisher relation (TFR) using a sample of galaxies in the Coma region of the sky, and find that it is good to 5% or better in measuring relative distances. Total magnitudes and disk axis ratios are derived from H and I band surface photometry, and Arecibo 21 cm profiles define the rotation speeds of the galaxies. Using 25 galaxies for which the disk inclination and 21 cm width are well defined, we find an rms deviation of 0.10 mag from a linear TFR with dI/d(log W_c) = -5.6. Each galaxy is assumed to be at a distance proportional to its redshift, and an extinction correction of 1.4(1-b/a) mag is applied to the total I magnitude. The measured scatter is less than 0.15 mag using milder extinction laws from the literature. The I band TFR scatter is consistent with measurement error, and the 95% CL limits on the intrinsic scatter are 0-0.10 mag. The rms scatter using H band magnitudes is 0.20 mag (N = 17). The low width galaxies have scatter in H significantly in excess of known measurement error, but the higher width half of the galaxies have scatter consistent with measurement error. The H band TFR slope may be as steep as the I band slope. As the first applications of this tight correlation, we note the following: (1) the data for the particular spirals commonly used to define the TFR distance to the Coma cluster are inconsistent with being at a common distance and are in fact in free Hubble expansion, with an upper limit of 300 km/s on the rms peculiar line-of-sight velocity of these gas-rich spirals; and (2) the gravitational potential in the disks of these galaxies has typical ellipticity less than 5%. The published data for three nearby spiral galaxies with Cepheid distance determinations are inconsistent with our Coma TFR, suggesting that these local calibrators are either ill-measured or peculiar relative to the Coma Supercluster spirals, or that the TFR has a varying form in different locales.
NASA Astrophysics Data System (ADS)
Burton, Sharon P.; Chemyakin, Eduard; Liu, Xu; Knobelspiesse, Kirk; Stamnes, Snorre; Sawamura, Patricia; Moore, Richard H.; Hostetler, Chris A.; Ferrare, Richard A.
2016-11-01
There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both an airborne and future spaceborne instrument. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination. Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward model look-up table, with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process. The choice of a simplified model adds clarity to the understanding of the uncertainties in such retrievals, since it allows for separately assessing the sensitivities and uncertainties of the measurements alone that cannot be corrected by any potential or theoretical improvements to retrieval methodology but must instead be addressed by adding information content. The sensitivity metrics allow for identifying (1) information content of the measurements vs. a priori information; (2) error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. The results suggest that the 3β + 2α measurement system is underdetermined with respect to the full suite of microphysical parameters considered in this study and that additional information is required, in the form of additional coincident measurements (e.g., sun-photometer or polarimeter) or a priori retrieval constraints. A specific recommendation is given for addressing cross-talk between effective radius and total number concentration.
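The information-content and uncertainty metrics referred to above can be illustrated with the standard optimal-estimation quantities (posterior covariance, averaging kernel, degrees of freedom for signal). The sketch below is generic and hypothetical: the Jacobian, error covariances, and parameter count are invented placeholders, not the authors' look-up table.

    import numpy as np

    # Hypothetical 3beta + 2alpha sensitivity sketch: 5 measurements, 3 state
    # parameters (e.g. effective radius, number concentration, refractive index).
    K  = np.array([[0.8, 0.1, 0.3],   # Jacobian d(measurement)/d(state); made-up values
                   [0.6, 0.2, 0.2],
                   [0.5, 0.3, 0.1],
                   [0.9, 0.4, 0.2],
                   [0.7, 0.5, 0.1]])
    Se = np.diag([0.05**2] * 5)       # measurement-error covariance (assumed 5% noise)
    Sa = np.diag([1.0, 1.0, 1.0])     # a priori covariance (weak prior, assumed)

    Se_inv, Sa_inv = np.linalg.inv(Se), np.linalg.inv(Sa)
    S_post = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior (retrieval) covariance
    A = S_post @ K.T @ Se_inv @ K                       # averaging kernel
    print("posterior standard deviations:", np.sqrt(np.diag(S_post)))
    print("degrees of freedom for signal: %.2f of %d" % (np.trace(A), K.shape[1]))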
Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
To evaluate the cost-effectiveness of an automated medication system (AMS) implemented in a Danish hospital setting. An economic evaluation was performed alongside a controlled before-and-after effectiveness study with one control ward and one intervention ward. The primary outcome measure was the number of errors in the medication administration process observed prospectively before and after implementation. To determine the difference in proportion of errors after implementation of the AMS, logistic regression was applied with the presence of error(s) as the dependent variable. Time, group, and interaction between time and group were the independent variables. The cost analysis used the hospital perspective with a short-term incremental costing approach. The total 6-month costs with and without the AMS were calculated as well as the incremental costs. The number of avoided administration errors was related to the incremental costs to obtain the cost-effectiveness ratio expressed as the cost per avoided administration error. The AMS resulted in a statistically significant reduction in the proportion of errors in the intervention ward compared with the control ward. The cost analysis showed that the AMS increased the ward's 6-month cost by €16,843. The cost-effectiveness ratio was estimated at €2.01 per avoided administration error, €2.91 per avoided procedural error, and €19.38 per avoided clinical error. The AMS was effective in reducing errors in the medication administration process at a higher overall cost. The cost-effectiveness analysis showed that the AMS was associated with affordable cost-effectiveness rates. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
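A minimal sketch of the analysis described above, assuming observation-level data with one row per administration; the column names, counts, and the avoided-error total are hypothetical, and only the EUR 16,843 incremental cost and the EUR 2.01 per-error ratio are taken from the abstract.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical observation-level data: group (0 = control ward, 1 = intervention ward),
    # time (0 = before, 1 = after AMS), error = any error observed in the administration.
    df = pd.DataFrame({
        "group": [0]*200 + [1]*200,
        "time":  ([0]*100 + [1]*100) * 2,
        "error": [1]*30 + [0]*70 + [1]*28 + [0]*72    # control: before, after
               + [1]*32 + [0]*68 + [1]*15 + [0]*85,   # intervention: before, after
    })

    # Logistic regression with time, group, and their interaction; the interaction
    # term carries the before-and-after effect attributable to the AMS.
    model = smf.logit("error ~ time + group + time:group", data=df).fit()
    print(model.summary())

    # Incremental cost-effectiveness ratio: incremental cost per avoided error.
    incremental_cost = 16843.0    # 6-month incremental cost (EUR), from the abstract
    avoided_errors = 8380         # hypothetical number of avoided administration errors
    print("cost per avoided error: EUR %.2f" % (incremental_cost / avoided_errors))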
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
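A worked example of the repeatability coefficient defined above (2.77 multiplied by the within-subject standard deviation), using invented repeated measurements:

    import numpy as np

    # Hypothetical repeated measurements: rows = subjects, columns = two repeats.
    x = np.array([[5.1, 5.3],
                  [4.8, 4.6],
                  [6.0, 6.4],
                  [5.5, 5.5]])

    within_var = np.mean(np.var(x, axis=1, ddof=1))   # pooled within-subject variance
    sw = np.sqrt(within_var)                          # within-subject standard deviation
    repeatability = 2.77 * sw                         # 2.77 = 1.96 * sqrt(2)
    print("within-subject SD = %.3f, repeatability = %.3f" % (sw, repeatability))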
ERIC Educational Resources Information Center
Hayward, Denyse V.; Annable, Caitlin D.; Fung, Jennifer E.; Williamson, Robert D.; Lovell-Johnston, Meridith A.; Phillips, Linda M.
2017-01-01
Current phonological awareness assessment procedures consider only the total score a child achieves. Such an approach may result in children who achieve the same total score receiving the same instruction even though the configuration of their errors represents fundamental knowledge differences. The purpose of this study was to develop a tool for…
Multistrip Western blotting: a tool for comparative quantitative analysis of multiple proteins.
Aksamitiene, Edita; Hoek, Jan B; Kiyatkin, Anatoly
2015-01-01
The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical Western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip Western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip Western blotting increases data output per single blotting cycle up to tenfold; allows concurrent measurement of up to nine different total and/or posttranslationally modified proteins obtained from the same sample loading; and substantially improves the data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data and therefore is advantageous to apply in biomedical diagnostics, systems biology, and cell signaling research.
Zhang, Meng; Zhang, Xuemei; Chen, Fei; Dong, Birong; Chen, Aiqing; Zheng, Dingchang
2017-04-01
This study aimed to examine the effects of measurement room environment and nursing experience on the accuracy of manual auscultatory blood pressure (BP) measurement. A training database with 32 Korotkoff sound recordings from the British Hypertension Society was played randomly to 20 observers who were divided into four groups according to the years of their nursing experience (i.e. ≥10 years, 1-9 years, nursing students with frequent training, and those without any medical background; five observers in each group). All the observers were asked to determine manual auscultatory systolic blood pressure (SBP) and diastolic blood pressure (DBP) both in a quiet clinical assessment room and in a noisy nurse station area. This procedure was repeated on another day, yielding a total of four measurements from each observer (i.e. two room environments and two repeated determinations on 2 separate days) for each Korotkoff sound. The measurement error was then calculated against the reference answer, and the effects of room environment and observer nursing experience were investigated. Our results showed no statistically significant difference between BPs measured in the quiet and noisy environments (P>0.80 for both SBP and DBP). However, there was a significant effect of observer group on measurement accuracy (P<0.001 for both SBP and DBP). The nursing students performed best, with overall SBP and DBP errors of -0.8±2.4 and 0.1±1.8 mmHg, respectively. The SBP measurement error from the nursing students was significantly smaller than that for each of the other three groups (all P<0.001). Our results indicate that frequent nursing training is important for nurses to achieve accurate manual auscultatory BP measurement.
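As a sketch of the error computation described in the study above (the data frame, column names, and numbers below are invented for illustration):

    import pandas as pd

    # Hypothetical auscultatory SBP determinations scored against a known reference.
    df = pd.DataFrame({
        "group":       ["students", "students", ">=10y", ">=10y", "no_medical", "no_medical"],
        "environment": ["quiet", "noisy"] * 3,
        "measured":    [119, 121, 116, 124, 126, 113],
        "reference":   [120, 120, 120, 120, 120, 120],
    })
    df["error"] = df["measured"] - df["reference"]

    # Mean +/- SD of the measurement error by observer group and by room environment.
    print(df.groupby("group")["error"].agg(["mean", "std"]))
    print(df.groupby("environment")["error"].agg(["mean", "std"]))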
Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.
Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep
2017-06-12
Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second-generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced version of a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
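A toy version of the kind of error decomposition described above is sketched below; the gain, offset, and lag values are placeholders rather than the fitted ENL parameters, and only the form (linear gain and offset plus a transport delay) mirrors the abstract.

    import numpy as np

    def mard(cgm, bg):
        """Mean absolute relative difference (%) between CGM readings and reference BG."""
        cgm, bg = np.asarray(cgm, float), np.asarray(bg, float)
        return 100.0 * np.mean(np.abs(cgm - bg) / bg)

    def sensor_model(glucose, t, gain_a=0.02, gain_b=1.0, offset=5.0, lag_min=9.4):
        """Toy CGM error model: time-varying linear gain plus offset applied to the
        underlying glucose signal delayed by a simple transport lag (illustrative values)."""
        delayed = np.interp(t - lag_min, t, glucose)    # crude time-lag approximation
        return (gain_b + gain_a * t / 60.0) * delayed + offset

    t = np.arange(0, 180, 15.0)           # minutes between reference BG samples
    bg = 120 + 40 * np.sin(t / 60.0)      # hypothetical reference BG profile (mg/dL)
    cgm = sensor_model(bg, t)
    print("MARD = %.2f%%" % mard(cgm, bg))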
Error measuring system of rotary Inductosyn
NASA Astrophysics Data System (ADS)
Liu, Chengjun; Zou, Jibin; Fu, Xinghe
2008-10-01
The inductosyn is a kind of high-precision angle-position sensor with important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is characterized by its error, so measuring that error during production and application is an important problem. At present, the error of an inductosyn is mainly obtained by manual measurement, whose disadvantages cannot be ignored: high labour intensity for the operator, errors that are easily introduced, poor repeatability, and so on. To solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal is obtained by precisely processing the output signals of the inductosyn and the optical dividing head. When the inductosyn rotates continuously, its zero-position error can be measured dynamically and zero-error curves can be output automatically. Measuring and calculating errors caused by human factors are overcome by this method, which makes the measuring process quicker, more exact and more reliable. Experiments show that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak).
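In outline, the automatic measurement amounts to differencing the inductosyn reading against the optical dividing head reference at each angular position; a hypothetical sketch with simulated data (not the instrument interface) follows.

    import numpy as np

    # Simulated readings over one revolution, expressed in arc-seconds.
    positions_deg = np.arange(0, 360, 1.0)                   # commanded table positions
    reference = positions_deg * 3600.0                       # optical dividing head (arc-sec)
    inductosyn = reference + 1.0 * np.sin(np.radians(2 * positions_deg))  # simulated sensor output

    zero_error = inductosyn - reference                      # zero-position error curve
    print("peak-to-peak error: %.2f arc-sec" % (zero_error.max() - zero_error.min()))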
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2016-11-01
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
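A small simulation in the spirit of the study above, comparing the power of a standard two-sample t-test with and without additive response measurement error; the effect size, standard deviations, and run counts are invented.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, reps, effect = 8, 2000, 1.0        # runs per level, simulated experiments, true effect
    sd_process, sd_meas = 1.0, 1.0        # process SD and additive measurement-error SD

    hits_clean = hits_noisy = 0
    for _ in range(reps):
        low  = rng.normal(0.0,    sd_process, n)
        high = rng.normal(effect, sd_process, n)
        hits_clean += stats.ttest_ind(low, high).pvalue < 0.05
        # the same responses observed through additive measurement error
        hits_noisy += stats.ttest_ind(low  + rng.normal(0.0, sd_meas, n),
                                      high + rng.normal(0.0, sd_meas, n)).pvalue < 0.05

    print("power without response measurement error: %.2f" % (hits_clean / reps))
    print("power with additive measurement error:    %.2f" % (hits_noisy / reps))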
NASA Astrophysics Data System (ADS)
Zude, Manuela; Spinelli, Lorenzo; Dosche, Carsten; Torricelli, Alessandro
2009-08-01
In sweet cherry (Prunus avium), the red pigmentation is correlated with the fruit maturity stage and can be measured by non-invasive spectroscopy. In the present study, the influence of varying fruit scattering coefficients on the fruit remittance spectrum (cw) was corrected using the effective pathlength and the refractive index of the fruit tissue, obtained from distribution of time-of-flight (DTOF) readings and total internal reflection fluorescence (TIRF) analysis, respectively. The approach was validated on fruits whose scattering coefficients lay outside the range of the calibration sample set. In the validation, non-invasive analysis of the fruits with the combined application of cw, DTOF, and TIRF measurements increased r2 by up to 22.7% compared with the cw method alone, although errors remained high in all approaches.
Choi, Young; Eom, Youngsub; Song, Jong Suk; Kim, Hyo Myung
2018-05-15
To compare the effect of posterior corneal astigmatism on the estimation of total corneal astigmatism using anterior corneal measurements (simulated keratometry [K]) between eyes with keratoconus and healthy eyes. Thirty-three eyes of 33 patients with keratoconus of grade I or II and 33 eyes of 33 age- and sex-matched healthy control subjects were enrolled. Anterior, posterior, and total corneal cylinder powers and flat meridians measured by a single Scheimpflug camera were analyzed. The difference in corneal astigmatism between the simulated K and total cornea was evaluated. The mean anterior, posterior, and total corneal cylinder powers of the keratoconus group (4.37 ± 1.73, 0.95 ± 0.39, and 4.36 ± 1.74 cylinder diopters [CD], respectively) were significantly greater than those of the control group (1.10 ± 0.68, 0.39 ± 0.18, and 0.97 ± 0.63 CD, respectively). The cylinder power difference between the simulated K and total cornea was positively correlated with the posterior corneal cylinder power and negatively correlated with the absolute flat meridian difference between the simulated K and total cornea in both groups. The mean magnitude of the vector difference between the astigmatism of the simulated K and total cornea of the keratoconus group (0.67 ± 0.67 CD) was significantly larger than that of the control group (0.28 ± 0.12 CD). Eyes with keratoconus had greater estimation errors of total corneal astigmatism based on anterior corneal measurement than did healthy eyes. Posterior corneal surface measurement should be more emphasized to determine the total corneal astigmatism in eyes with keratoconus. © 2018 The Korean Ophthalmological Society.
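One common way to compute the vector difference quoted above is in double-angle space, mapping an astigmatism of magnitude C at meridian θ to the point (C·cos 2θ, C·sin 2θ); whether the authors used exactly this convention is an assumption here, and the example values below are invented.

    import numpy as np

    def astig_vector_difference(c1, axis1_deg, c2, axis2_deg):
        """Magnitude of the difference between two astigmatisms in double-angle space."""
        a1, a2 = np.radians(axis1_deg), np.radians(axis2_deg)
        v1 = c1 * np.array([np.cos(2 * a1), np.sin(2 * a1)])
        v2 = c2 * np.array([np.cos(2 * a2), np.sin(2 * a2)])
        return float(np.linalg.norm(v1 - v2))

    # Hypothetical keratoconic eye: simulated-K vs. total corneal astigmatism.
    print("vector difference: %.2f CD" % astig_vector_difference(4.4, 75.0, 4.3, 82.0))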
NASA Astrophysics Data System (ADS)
Bao, Chuanchen; Li, Jiakun; Feng, Qibo; Zhang, Bin
2018-07-01
This paper introduces an error-compensation model for our measurement method to measure five motion errors of a rotary axis based on fibre laser collimation. The error-compensation model is established in a matrix form using the homogeneous coordinate transformation theory. The influences of the installation errors, error crosstalk, and manufacturing errors are analysed. The model is verified by both ZEMAX simulation and measurement experiments. The repeatability values of the radial and axial motion errors are significantly suppressed by more than 50% after compensation. The repeatability experiments of five degrees of freedom motion errors and the comparison experiments of two degrees of freedom motion errors of an indexing table were performed by our measuring device and a standard instrument. The results show that the repeatability values of the angular positioning error εz and tilt motion error around the Y axis εy are 1.2″ and 4.4″, and the comparison deviations of the two motion errors are 4.0″ and 4.4″, respectively. The repeatability values of the radial and axial motion errors, δy and δz, are 1.3 and 0.6 µm, respectively. The repeatability value of the tilt motion error around the X axis εx is 3.8″.
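The homogeneous-coordinate form of such an error model can be sketched generically as follows; the small-angle error matrix is a textbook form rather than the authors' specific compensation model, and the numerical error values are illustrative only.

    import numpy as np

    def error_transform(eps_x, eps_y, eps_z, dx, dy, dz):
        """Small-angle homogeneous transform for three angular errors (rad)
        and three translational errors (same length unit as dx, dy, dz)."""
        return np.array([[1.0,   -eps_z,  eps_y, dx],
                         [eps_z,  1.0,   -eps_x, dy],
                         [-eps_y, eps_x,  1.0,   dz],
                         [0.0,    0.0,    0.0,   1.0]])

    def ideal_rotation_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0, 0.0], [s, c, 0.0, 0.0],
                         [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]])

    arcsec = np.pi / (180.0 * 3600.0)
    # Illustrative motion errors at one indexing position (angles in arc-sec, lengths in mm).
    E = error_transform(3.8 * arcsec, 4.4 * arcsec, 1.2 * arcsec, 0.0, 1.3e-3, 0.6e-3)
    T_actual = ideal_rotation_z(np.radians(30.0)) @ E   # ideal rotation composed with errors
    print(T_actual)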
Impact of Measurement Error on Synchrophasor Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
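To make the sensitivity to phase-angle error concrete, here is a toy two-bus power-flow calculation showing how a small angle error propagates into an apparent power-flow error; the line parameters, angles, and error magnitude are hypothetical, and this is not the report's own model.

    import numpy as np

    # Toy two-bus flow: P = V1*V2*sin(delta)/X (per unit).
    V1 = V2 = 1.0                          # per-unit bus voltages (assumed)
    X = 0.1                                # per-unit line reactance (assumed)
    delta_true = np.radians(10.0)          # true angle difference between buses
    angle_error = np.radians(0.5)          # hypothetical PMU phase-angle error

    P_true = V1 * V2 * np.sin(delta_true) / X
    P_meas = V1 * V2 * np.sin(delta_true + angle_error) / X
    print("apparent power-flow error: %.1f%%" % (100.0 * (P_meas - P_true) / P_true))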